# Astrometric and Wavelength Calibration of the NIRSpec Instrument during Commissioning using a model-based approach

Nora Lützgendorf, European Space Agency, c/o STScI, 3700 San Martin Drive, Baltimore, MD 21218, USA
Giovanna Giardino, ATG Europe for the European Space Agency, Noordwijk, The Netherlands
Catarina Alves de Oliveira, European Space Agency, ESAC, Madrid, Spain
Peter Zeidler, AURA for the European Space Agency, STScI, Baltimore, USA
Pierre Ferruit, European Space Agency, ESAC, Madrid, Spain
Peter Jakobsen, Cosmic Dawn Center, Niels Bohr Institute, University of Copenhagen, Denmark
Nimisha Kumari, AURA for the European Space Agency, STScI, Baltimore, USA
Timothy Rawle, European Space Agency, c/o STScI, 3700 San Martin Drive, Baltimore, MD 21218, USA
Stephan M. Birkmann, European Space Agency, c/o STScI, 3700 San Martin Drive, Baltimore, MD 21218, USA
Torsten Böker, European Space Agency, c/o STScI, 3700 San Martin Drive, Baltimore, MD 21218, USA
Charles Proffitt, Space Telescope Science Institute, Baltimore, USA
Marco Sirianni, European Space Agency, c/o STScI, 3700 San Martin Drive, Baltimore, MD 21218, USA
Maurice Te Plate, European Space Agency, c/o STScI, 3700 San Martin Drive, Baltimore, MD 21218, USA
Sangmo Tony Sohn, Space Telescope Science Institute, Baltimore, USA

Further author information: E-mail<EMAIL_ADDRESS>

###### Abstract

The NIRSpec instrument for the James Webb Space Telescope (JWST) is a highly versatile near-infrared spectrograph that can be operated in various observing modes, slit apertures, and spectral resolutions. Obtaining dedicated calibration data for all possible combinations of aperture and disperser is an intractable task. We have therefore developed a procedure to derive a highly realistic model of the instrument’s optical geometry across the entire field of view, using calibration data acquired through only a subset of NIRSpec apertures, which nevertheless allows the light paths within the spectrograph to be accurately computed for all apertures and all observing modes. This parametric instrument model thus provides the basis for the extraction of wavelength-calibrated spectra from any NIRSpec exposure, regardless of observing mode or aperture used. Optimizing the NIRSpec instrument model and deriving its final wavelength and astrometric calibration was one of the most crucial elements of the NIRSpec commissioning phase. Here, we describe the process of re-fitting the NIRSpec instrument model with in-orbit commissioning data, and present its final performance in terms of wavelength accuracy and astrometric calibration.

###### keywords:

James Webb Space Telescope; JWST; Near-Infrared Spectrograph; NIRSpec; Commissioning; Wavelength Calibration; Astrometric Calibration

## 1 INTRODUCTION

The Near Infrared Spectrograph (NIRSpec) onboard the James Webb Space Telescope (JWST) has four main scientific observing modes: Fixed Slits (FS), Multi-object spectroscopy (MOS), Integral Field Unit (IFU) mode and Bright Object Time Series (BOTS). Operating in the near infrared in the wavelength range of $0.6-5.3\,{\mu{\rm m}}$, and using a collection of gratings and a prism that allow spectra to be taken in low (R=100), medium (R=1000) and high (R=2700) resolution, NIRSpec is designed to take up to 200 spectra at once, for example to study high-redshift galaxies and the early universe. A detailed description of the NIRSpec design, observing modes, and scientific use cases can be found in [1, 2, 3, 4]. 
Because of its complexity especially in the MOS mode (which has $\sim$ 250000 individual shutters), calibrating the wavelength and astrometric solutions using a static method for each individual shutter and disperser is impossible. In addition, the grating wheel assembly (GWA), which is the center piece of the internal light path, has limited angular positioning repeatability. This means that whenever the GWA returns to the same grating, it does so with a slightly different position, causing the spectra on the detector to shift by small amounts, both in the dispersion and cross-dispersion direction. Therefore, we have developed a procedure that uses calibration data acquired for a limited subset of the NIRSpec apertures to derive a highly realistic model of the instrument’s optical geometry. The NIRSpec instrument model consists of a collection of coordinate transforms between the various optical planes illustrated in Fig. 1 (OTE, FORE, COL, CAM, IFU-FORE, IFU-POST - see Fig. 2 for acronyms), each represented by a paraxial transformation (rotation, magnification and offset) and a 5th-degree 2-D polynomial representing the local distortions. The model also relies on a set of geometrical parameters that capture the physical properties of key optical elements (MSA, IFU slicer, GWA and FPA), for example the locations of the individual quadrants in the micro shutter assembly (MSA) plane, the precise orientation of the dispersers in the GWA, or the location and orientation of the detectors in the FPA. --- Figure 1: Schematic overview of the NIRSpec instrument model. The purpose of the model is to describe the light path through the instrument from sky to the detector and vice versa. It is composed of coordinate transforms and geometrical parameters and splits into three main parts: The spectrograph model describing the path between the MSA and the detector, the grating wheel sensor calibration and the astrometric calibration. The model is split up into three parts that require independent calibration, but build on each other. The first is the spectrograph or internal model. It describes the light path from the MSA plane to the detector and back. It can be fit for all dispersers by using the internal calibration lamps, and thus does not require on-sky data. The procedure of fitting this part of the model is described in section 3. The second part of the model is the grating wheel sensor calibration described in [5] and summarized in section 4. The remaining FORE optical transforms are fit in the astrometric calibration step by registering the positions of stars on the detector with an astrometric catalogue as described in 5. Note that the OTE transform as shown in Fig. 1 is fixed to a pre-launch model, as it is not possible to fit this transform individually in flight. Figure 2 lists all of the free parameters of the model, color coded by its three components. In total, the NIRSpec instrument model consists of $>900$ free parameters. --- Figure 2: Overview of all model parameters that are fit in commissioning. The different parts of the model are highlighted in the corresponding colors as in figure 1. The total number of parameters fit in commissioning is $>$ 900. ## 2 DATA The final version of the instrument model was fit using in-flight commissioning data obtained as part of the 6-month long NIRSpec commissioning campaign between December 25, 2021 and July 15, 2022 (see [6] for a detailed description). 
In addition, data taken during various ground campaigns as well as simulations were used to obtain the absolute wavelength calibration of the internal lamps, and to develop procedures to prepare for commissioning. In the next sections, those campaigns and data are briefly summarized.

### 2.1 Instrument-level test campaigns

The first two instrument test campaigns took place at the Industrieanlagen-Betriebsgesellschaft mbH (IABG) facilities in Ottobrunn, Germany, in 2011 and 2013. The instrument was enclosed in a vacuum chamber cooled by gaseous helium, and illuminated by external calibration light sources that used a cryo-mechanism to switch between flat-field illumination and a grid of pinholes. Using this set-up, an Argon lamp spectrum was used to perform the first spectral calibration of the instrument, and to assign absolute wavelengths to the internal calibration lamps (see [7] for more details). Furthermore, the pinhole mask was placed at the OTEIP plane of the instrument and used to derive a first calibration of the NIRSpec FORE optical transforms (astrometric calibration). Analysis of the data showed that the measurements were consistent with the as-designed optical transforms, which were therefore kept as the baseline FORE optical transforms.

### 2.2 Higher-level test campaigns

After the delivery to NASA in 2013, NIRSpec was installed on the Integrated Science Instrument Module (ISIM), together with the other JWST science instruments. The fully assembled ISIM underwent two cryogenic test campaigns in 2014 (CV2) and 2015 (CV3) at the Goddard Space Flight Center (GSFC). Between CV2 and CV3, the instrument needed to be refurbished by replacing the MSA and FPA subsystems with new parts. This made the CV3 test campaign a crucial step in the calibration of NIRSpec, since for the first time it was in its final flight configuration. Therefore, the spectrograph model needed to be completely redone. The final cryogenic test campaign was done after ISIM was integrated with the Optical Telescope Element (OTE), which together form the Optical Telescope Element and Integrated Science Module (OTIS). In 2017, OTIS was shipped to Johnson Space Center (JSC) and underwent extended cryogenic testing over the course of three months. For the NIRSpec instrument model, a subset of data was taken to apply a reduced model fit, mostly monitoring smaller differences in the geometric parameters (such as shifts in the MSA quadrants). This was a good test to see how the model changed after NIRSpec underwent environmental testing (vibration and acoustic).

### 2.3 Simulations

In addition to data taken during the cryogenic test campaigns, we also produced a set of simulations using the instrument performance simulator (IPS, see [8, 9]) for NIRSpec, which was developed by the Centre de Recherche Astrophysique de Lyon (CRAL) and delivered to ESA. This was particularly important for data that we could not reproduce with ground testing campaigns, such as the astrometric calibration on sky. As the simulator uses the instrument model to generate the exposures, those could not be used to obtain a preliminary model fit for the FORE optics, but they were used to develop the routines necessary to perform the fit once on-sky data became available.

### 2.4 Commissioning Data

The final dataset used to perform the model fit was taken during the NIRSpec commissioning campaign. 
All exposures were processed with the NIRSpec Commissioning Team ramps-to-slopes pipeline that applies the following corrections: bias subtraction, reference pixel subtraction, linearity correction, dark subtraction and finally count-rate estimation, including jump detection and cosmic-ray rejection [10]. The data set needed to update the spectrograph part of the model was taken in Commissioning Activity Request (CAR) NIRSpec-041 (Instrument Model Update, Proposal ID 1132). This CAR used internal lamp exposures in imaging, MOS, FS and IFS modes. For imaging, the MSA was configured to a (3x3) checkerboard and a customized pattern with crosses, used for a first manual adjustment without the full fit. In order to determine the location of spectra through individual shutters (a.k.a. ‘traces’), a combination of four different slitlet configurations for each grating/lamp combination was used, in order to cover the detector area as evenly as possible. This data set also included IFU exposures and their dedicated ‘leakage’ exposures, i.e. exposures with the IFU closed which only contain traces from failed open shutters (which can then be subtracted). Each combination of grating and MSA configuration was observed with the LINE, FLAT, and REF lamps at the same GWA position (i.e. without any intermittent GWA movement). Before changing to the next grating, the GWA was ‘spun around’ once, and a REF lamp exposure was taken at each position. These extra observations were used for the grating wheel sensor calibration (see [5] for a full list of data). The exposures used for the remaining FORE portion of the model (a.k.a. astrometric calibration) were taken in CAR NIRSpec-019 (Astrometric Calibration, PID 1120). They consist of undispersed images of the JWST astrometric reference field (located in the Large Magellanic Cloud [11, 12] through all seven NIRSpec filters, acquired with the GWA in the MIRROR position, the MSA configured to ALLOPEN, and the IFU aperture unblocked. For each filter, two images were obtained, separated by a half-shutter small angle maneuver (SAM) in order to be able to correct for obscuration by the MSA bars that separate the individual shutters. Lastly, internal lamp exposures using the TEST lamp, the GWA in MIRROR, and four MSA configurations with customized test patterns (CROSS5-C, (1x1) and (3x3) checkerboards) were also taken as part of PID 1120. The purpose of these was to acquire internal exposures at the exact same grating wheel positions as the on-sky exposures, to provide the most accurate means to derive the transformation between the sky and the MSA plane. This approach was mostly a safeguard against the possibility that the existing sensor calibration derived from the PID 1141 data was not yet accurate enough. ## 3 SPECTROGRAPH MODEL The spectrograph (or internal) portion of the NIRSpec instrument model describes the light-path from the MSA plane to the detector and back. It can be calibrated on the ground and in space by using the NIRSpec internal calibration lamps. A full description of the fitting process and the first results obtained during the IABG tests can be found in [7]. The full fit procedure of the spectroscopic model is outlined in figure 3. The red and blue boxes (manual adjustments and extraction of reference points) are described in sections 3.1 and 3.2, respectively. Green boxes represent the different steps when fitting to the extracted reference data, and the yellow box denotes the final verification step (both discussed in section 3.4). 
--- Figure 3: Schematic workflow of the spectrograph model fit. Red boxes represent manual adjustments, blue boxes reference point creation and extractions, green boxes fits and the yellow box verification. ### 3.1 Manual adjustments The process of creating reference points in order to fit the model partially relies on the extraction of traces and spectra using the model itself. It is therefore important that the initial model we start with is already relatively close to the final model, in order to avoid mis-identification of shutters or lines if the offset is too large. Therefore, we have developed routines in the form of Python GUIs that allow a rough adjustment of the model parameters in a manual procedure, without having to extract every source and perform the (time-consuming) full fit. Manual adjustments of the model are done for imaging mode by adjusting the mirror angles dependent on the positions of the slit images on the detector, and for the spectroscopic mode in dispersion and cross-dispersion direction by manually moving extraction windows on-top of traces and line positions in spectra. ### 3.2 Extraction of the reference points The next step in the fit of the spectrograph model is the extraction of the reference points (i.e. the measured points on the detector). We have developed and optimized methods to obtain highly accurate centroid positions on the detector, which are different for imaging or spectroscopic mode data. #### 3.2.1 Imaging reference points For the imaging reference points, we use a 3x3 checkerboard MSA configuration. This provides a good compromise between having as many points as possible with the best spatial coverage, and avoiding crowding of the shutters that would complicate the centroid measurements. We use Source Extractor [13] to find and extract the centroids of the shutter images on the detector. The extraction parameters were optimized for the microshutters’ shape and contrast. We then cross-reference the shutter centroids with a shutter operability file to avoid having contaminated shutters and mis-identifications in the list. This results in a cleaner cut of the extracted shutter positions. #### 3.2.2 Spectral reference points For the spectral reference points, two steps are needed: 1) tracing the spectra on the detector, and 2) measuring the center of emission lines in the spectral direction. To trace the spectra on the detector, we use exposures taken with the FLAT calibration lamps. Those calibration lamps include five continuum sources intended for spectral flat-fielding that employ filters matching the five long-pass order-separation filters in the FWA (see [1]). The spectra on the detector are captured within a so-called ‘extraction window’, and the flux-weighted centroid of light is calculated along each detector column (the y-direction), thus defining the ‘trace’ as a function of column number (the x-direction). A 4th-degree polynomial (3rd-degree for the prism due to its shorter traces) is fit to the trace for each aperture and exposure (see figure 4). These polynomials are later used to obtain the exact (x,y) position of a given reference point on the detector. --- Figure 4: Example for a polynomial fit to a spectral trace on the detector, in this case for shutter (349,135) in MSA quadrant 4. The y-values were derived by computing the weighted mean for each pixel column on the detector. The polynomial fit to the data (orange line) is compared to the trace position predicted by the current model (green line). 
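As a rough illustration of the trace measurement described above, the following sketch computes the flux-weighted centroid of each detector column inside a predicted extraction window and fits a polynomial to the result. It assumes a background-subtracted 2-D count-rate array; the function and parameter names are illustrative placeholders, not the actual commissioning pipeline code.

```python
import numpy as np

def fit_trace(rate_image, row_lo, row_hi, col_lo, col_hi, degree=4):
    """Fit a polynomial trace inside a predicted extraction window.

    rate_image      : 2-D count-rate array (y = cross-dispersion, x = dispersion)
    row_lo, row_hi  : cross-dispersion limits of the extraction window
    col_lo, col_hi  : dispersion limits of the extraction window
    degree          : 4 for the gratings, 3 for the prism (shorter traces)
    """
    rows = np.arange(row_lo, row_hi)
    cols, centroids = [], []
    for x in range(col_lo, col_hi):
        column = rate_image[row_lo:row_hi, x]
        flux = column.sum()
        if flux <= 0:                       # skip empty or bad columns
            continue
        cols.append(x)
        centroids.append(np.sum(rows * column) / flux)   # flux-weighted centroid
    coeffs = np.polyfit(cols, centroids, degree)          # trace y(x)
    return np.poly1d(coeffs)
```

The returned polynomial can then be evaluated at the dispersion coordinate of each spectral line to obtain the (x, y) position of the corresponding reference point.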
For the spectral reference points, we use the LINE calibration lamps, most of which use Fabry-Perot-type interference filters that produce 5 to 6 Lorentzian-profiled wavelength calibration lines in each of the NIRSpec bands, thus covering the full $0.6-5.3\mu$m spectral range. A separate calibration lamp (REF) uses an Erbium-doped filter that provides an absolute wavelength reference near $1.4\mu m$ with sharp absorption features. The spectra are extracted for all modes and apertures and flat-fielded using the aforementioned FLAT lamp exposures. Afterwards, they are rectified and extracted to a 1D spectrum using the NIRSpec Instrument Pipeline Software (NIPS [14]). For each spectrum, the centroid positions of the emission and absorption lines are extracted using the center of gravity approach [15] with a peak fraction of ft=0.6 and customized extraction windows for each line, based on previous measurements and knowledge of the line positions (see Fig. 5).

--- Figure 5: LINE and REF lamp spectra used for the extraction of reference points in the model fit. The derived centers of gravity are depicted by red dashed lines while the predicted line positions are shown with the yellow dotted lines.

The final set of spectral reference points is computed by determining the position of all spectral lines on the detector, using the measured wavelengths and the polynomials derived from the traces. Note that this approach uses the model under the assumption that the spatial variations of the distortions are small and smooth. The process is repeated after the first model fit, to allow for an iterative approach of the reference point creation.

### 3.3 The imaging model

For very early commissioning activities such as the creation of the MSA operability files (see [16]), a model for the imaging mode was needed in order to be able to identify individual shutters. This could be achieved using TEST lamp exposures through a (3x3) checkerboard configuration of the MSA. In this procedure, only the two mirror angles in the x and y directions (which in the full model fit are set to zero) are fit, in order to obtain the best match (least-squares solution) between the positions measured on the detector and those predicted by the model. Since the final residuals were still too large, and a clear structure was detected, we also fit the geometrical parameters of the MSA (i.e. the position and rotation of each quadrant), since they could have moved during launch. After computing the grating wheel sensor calibration for the mirror (see Section 4 and [5]), the imaging model was complete.

### 3.4 The full spectrograph model fit

The first fit, which also contains the largest number of parameters fit at once, is performed on all the data (MOS, FS) except the dispersers G140H, G235H, and PRISM, because of their rather uneven distribution of reference points on the detector. For the remaining dispersers, the grating angles $\theta_{x},\theta_{y}$ and $\theta_{z}$ are fit, with a few exceptions listed below. Furthermore, the MSA geometrical parameters, the FPA geometrical parameters, as well as the camera (CAM) and collimator (COL) forward transforms are fit. In total, this amounts to 118 free parameters that were fit to approximately 27600 reference points. In order not to bias the fit towards imaging data, only 1500 randomly selected microshutters per quadrant are used during the fit. For fitting, we use a Trust Region Reflective algorithm (trf) as provided in the `scipy` routine `least_squares`. 
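A minimal sketch of such an optimization loop is shown below. The residual function simply compares measured and model-predicted detector positions; `project_mosa_to_fpa`, `initial_params` and `reference_points` are hypothetical placeholders for the forward instrument model, the parameter vector (118 free parameters in the real fit), and the reference-point list, respectively.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, reference_points, project_mosa_to_fpa):
    """Differences between measured and model-predicted detector positions.

    params              : free model parameters (grating angles, MSA/FPA geometry,
                          COL/CAM polynomial coefficients, ...)
    reference_points    : iterable of (aperture, wavelength, x_meas, y_meas) tuples
    project_mosa_to_fpa : placeholder for the forward instrument model
    """
    res = []
    for aperture, wavelength, x_meas, y_meas in reference_points:
        x_mod, y_mod = project_mosa_to_fpa(params, aperture, wavelength)
        res.extend([x_mod - x_meas, y_mod - y_meas])
    return np.asarray(res)

# Usage, once a concrete forward model and reference-point list are available:
# result = least_squares(residuals, x0=initial_params, method="trf",
#                        args=(reference_points, project_mosa_to_fpa))
# best_fit_params = result.x
```

The Trust Region Reflective method is well suited here because it handles the large, partly bounded parameter space while still behaving like a standard least-squares solver near the optimum.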
As described in [7], not all parameters of the model are fit freely. There are some assumptions on symmetry and conventions that are applied and listed below:

* The GWA MIRROR has all alignment tilt angles at 0, and defines the GWA reference plane. All other dispersers have alignment tilt angles relative to this surface.
* The MSA quadrants are rectangular and regular in size, i.e. in an individual quadrant the shutter pitch is uniform, and the shutter axes are perpendicular.
* The physical gap between NIRSpec’s two Sensor Chip Assemblies (SCAs) is forced to be centered on the y-axis. SCA 491 has no rotation and is symmetrical to the x-axis. SCA 492 is free to move and rotate, subject to the first condition that couples the positions of 491 and 492. This does not restrict the modeling of the instrument, as the movements can be compensated by the distortion polynomials of the CAM. It does, however, produce a geometrically simple FPA description without excessive tilts and offsets.
* The COL, CAM and FORE transforms are assumed to be achromatic (all-reflective optical parts), with no wavelength dependence of the distortion coefficients.
* The alignment tilt angle in the z direction for the grating G395H ($\theta_{z}$) is fixed to +30 degrees (factory setting).

The last point is an improvement introduced in commissioning, in order to avoid unphysical values of the grating wheel rotation angles during the fit, which were then compensated by rotations in the COL and CAM transforms. By fixing the $\theta_{z}$ angle for one of the gratings, we allow for variations in the tilt angles of the remaining gratings relative to each other, while keeping them in a realistic range. Otherwise, the fit tends to converge on nonphysical values, due to the degeneracy with the rotations in the coordinate transforms.

--- Figure 6: Concept of fitting the spectrograph model, which also applies to other parts of the model fit. Reference points are extracted and compared with the positions on the detector predicted by the current model. A least-squares fit is applied to derive the optimal model parameters.

In a second step, the positions of the fixed slits were optimized individually, because their initial positions may not be optimal in combination with the new collimator distortion, which was dominated by the MOS data in the previous optimization ($\sim 20\times$ more shutter references than for fixed slits). Therefore, another fitting run was done, only changing the SLIT positions in the MSA, using the reference points of the fixed slits only. The third step to complete the forward model fit was to fit the angles of the remaining dispersers (G140H, G235H, and PRISM) individually, using the newly derived distortions and slit positions from the previous fit. After completing the forward fit for the spectrograph model, we now have a good description of the light passing from the MSA to the detector for each mode. However, the backwards transform (going from the detector to the MSA plane) cannot be analytically derived from the 5th-degree polynomial transform. Therefore, a numerical approach is used to compute the backwards transforms. Here, only the collimator and camera coordinate transforms need to be derived, since all geometrical parameters remain the same in both directions. The fit does not require any exposure data; it merely uses the forward transforms of a regular grid of points. 
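A minimal sketch of this grid-based inversion is given below (the two concrete cases, collimator and camera, are described in the next paragraph). The `forward_transform` callable is a placeholder for the already-fitted forward model; grid size and polynomial degree are illustrative, although the real model also uses 5th-degree 2-D polynomials.

```python
import numpy as np

def polynomial_design_matrix(x, y, degree=5):
    """All monomials x**i * y**j with i + j <= degree, stacked column-wise."""
    return np.column_stack([x**i * y**j
                            for i in range(degree + 1)
                            for j in range(degree + 1 - i)])

def fit_backward_transform(forward_transform, x_range, y_range, n=50, degree=5):
    """Least-squares fit of the inverse of a 2-D forward transform on a regular grid."""
    xg, yg = np.meshgrid(np.linspace(*x_range, n), np.linspace(*y_range, n))
    x_in, y_in = xg.ravel(), yg.ravel()
    x_out, y_out = forward_transform(x_in, y_in)        # e.g. MSA -> GWA-in
    A = polynomial_design_matrix(x_out, y_out, degree)
    coeffs_x, *_ = np.linalg.lstsq(A, x_in, rcond=None)  # backward polynomial coefficients
    coeffs_y, *_ = np.linalg.lstsq(A, y_in, rcond=None)
    return coeffs_x, coeffs_y
```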
The concept is the same for either transform: for the collimator transform, a regular grid is created in the MSA plane, and the forward transform of the model is used to translate this grid to the GWA-in plane. Similarly, the camera transform is derived from a regular grid of points in the GWA-out plane, and the forward transform of the model is again used to translate this grid to the FPA plane. In either case, a least-squares fit is applied to the equation $X_{out}=AX_{in}$ to obtain the polynomial coefficients of matrix $A$. Now that the forward and backward transforms for MOS and FS mode are in hand, the final step is to fit the positions of the virtual IFU slits on the detector. This is done by using a REF lamp exposure of the G395H grating (due to the even distribution of its absorption lines on the detector) in combination with an IFU mirror exposure, and fitting the $x_{out},y_{out}$ positions from the IFU-POST paraxial transform for each slice.

In the final verification step, all reference points are re-generated (re-extracted) using the new model, and the residuals (difference between measured and predicted points on the detector) are computed. Figure 7 and Table 1 show the results for the spectrograph model fit. All residuals remain within 0.1 pixel RMS, which should be compared to the maximum acceptable standard deviation (derived from the NIRSpec calibration error budget) of 1/10 of a resolution element or 0.2 pixels. The residuals for all modes meet this requirement with substantial margin (a factor of two). They are also very similar to the residuals derived from ground campaign data, which implies that no major components of the NIRSpec optical train have moved noticeably from their pre-launch positions. (In fact, the residuals for the prism improved by a factor of two, due to the use of a more accurate prescription for the index of refraction of $\rm CaF_{2}$ as a function of wavelength, compared to what was used for pre-launch model fits.)

--- Figure 7: Residuals of forward coordinate transforms from MSA to FPA, for all gratings. The vectors indicate amplitude and direction of the difference between measured and model-computed location of the data points.

Table 1: Residuals (in pixels) of the optimized model forward projection from MSA to FPA per grating, for MOS and FS mode. Here, $i$ is the coordinate in the dispersion direction, and $j$ in the spatial direction.

| Grating | $i$ mean $\pm$ RMS | $j$ mean $\pm$ RMS | median absolute |
|---|---|---|---|
| G140H | $0.011\pm 0.063$ | $0.000\pm 0.022$ | $0.054\pm 0.041$ |
| G140M | $0.001\pm 0.096$ | $0.003\pm 0.033$ | $0.066\pm 0.077$ |
| G235H | $0.002\pm 0.102$ | $0.011\pm 0.054$ | $0.096\pm 0.066$ |
| G235M | $0.000\pm 0.062$ | $0.003\pm 0.021$ | $0.054\pm 0.036$ |
| G395H | $0.001\pm 0.089$ | $0.003\pm 0.022$ | $0.071\pm 0.057$ |
| G395M | $-0.001\pm 0.072$ | $0.003\pm 0.018$ | $0.061\pm 0.042$ |
| PRISM | $-0.004\pm 0.087$ | $0.000\pm 0.022$ | $0.076\pm 0.048$ |
| MIRROR | $0.001\pm 0.012$ | $0.003\pm 0.015$ | $0.017\pm 0.009$ |

## 4 THE GRATING WHEEL SENSOR CALIBRATION

A detailed description of the grating wheel sensor calibration is provided in [5]. Here, we only provide a short summary. The finite angular positioning repeatability of the grating wheel causes small but measurable displacements of the light beam in the focal plane, thus ruling out a static solution to predict the light-path. To address this, the GWA includes two magneto-resistive position sensors that are read after every wheel move, in order to measure the precise ‘tip and tilt’ orientation of each GWA element. 
However, the precise relation between the measured sensor voltage and the actual wheel position must be carefully calibrated after every cryo-cycle of NIRSpec. In practice, this is done by fitting the relationship between the sensor voltages and the observed angular displacement of the wheel, measured relative to the selected reference point of the spectrograph instrument model. For the dispersers, a number of internal lamp exposures with intermediate movements of the grating wheel are taken. Absorption lines in the spectra, as well as shutter centroids in an image of the (3x3) checkerboard MSA configuration, are used to measure the absolute positions, and to derive the actual grating and mirror angles as described in sections 3.2 and 3.4. The calibration relations between the sensor voltage reading and the measured angular displacement have been shown to be linear, both for the MIRROR and all dispersers. For every NIRSpec exposure, they are then applied to complete the geometrical description of the state of the instrument.

## 5 ASTROMETRIC CALIBRATION

The final step to complete the full NIRSpec instrument model is the astrometric calibration, or FORE optics fit. The FORE transform can be measured from images of the sky, but it should be kept in mind that the optical path from the sky to the MSA plane includes the OTE optics (see Fig. 1, and the discussion below). Because the refractive properties of the various filters in the FWA are different, the transform needs to be fit separately for each filter. In order to obtain these fits, two imaging mode exposures of the JWST astrometric reference field in the Large Magellanic Cloud (LMC) through the ALLOPEN shutter configuration were taken for each filter.

--- Figure 8: Color image of the LMC astrometric field viewed through the open shutters on the NIRSpec detector.

Figure 8 shows an example exposure of the astrometric field viewed through the open shutters. It is a very crowded field, which makes the identification of individual stars challenging. The problem is made worse by the fact that the grid of shutter bars and the various failed-closed shutters affect the shape of the point spread function (PSF) of individual stars. In order to mitigate these effects, the two exposures per filter were ‘dithered’, i.e. offset by half the shutter pitch in both the x- and y-direction. We used a Python implementation of `DAOFIND` [17], with parameters optimized for the NIRSpec data, to detect stars and record their positions. A subset of those stars meets certain isolation and roundness criteria and is used to obtain a model PSF. This is done in an iterative algorithm as provided in the `photutils` Python package. An example of a final model PSF is shown in figure 9. Using the PSF model, the centroids of all stars are extracted over the full field of view, for each filter and for both dither positions.

--- Figure 9: Schematic outline of the astrometric calibration. In short: positions of stars are measured on the detector and matched with the astrometric catalogue. The instrument model is used to predict the star positions on the detector from their sky coordinates in the stellar catalogue. A least-squares fit of the predictions to the measurements is then applied to obtain the optimal model parameters.
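A minimal sketch of this star-detection and PSF-modelling step, using the publicly available `photutils` and `astropy` packages, is shown below. The detection threshold, FWHM, roundness cut, cutout size and oversampling are illustrative assumptions, not the values used in commissioning, and the isolation criterion is omitted for brevity.

```python
import numpy as np
from astropy.nddata import NDData
from astropy.table import Table
from photutils.detection import DAOStarFinder
from photutils.psf import EPSFBuilder, extract_stars

def detect_and_build_epsf(image, fwhm=1.5, threshold=50.0, cutout=25):
    """Detect stars with DAOFIND and build an effective PSF from round sources."""
    finder = DAOStarFinder(fwhm=fwhm, threshold=threshold)
    sources = finder(image)                       # table with xcentroid, ycentroid, ...

    # keep reasonably round sources as ePSF candidates (cut is illustrative only)
    good = np.abs(sources['roundness2']) < 0.2
    stars_tbl = Table({'x': sources['xcentroid'][good],
                       'y': sources['ycentroid'][good]})

    stars = extract_stars(NDData(data=image), stars_tbl, size=cutout)
    epsf, fitted_stars = EPSFBuilder(oversampling=4, maxiters=3)(stars)
    return sources, epsf
```

The resulting effective PSF model can then be used for PSF-fitting photometry of all detected stars to obtain the centroids that enter the FORE fit.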
The fit of the FORE transform for each filter is, in essence, a fit of the combined (OTE + FORE) transform, but since the OTE transform is fixed, only the FORE parameters are fit. This was done as follows: first, the paraxial parameters of the transform (the reference points in the input and output plane, the magnification factors and the rotation) are set to unity and only the transforms are fit. The new paraxial parameters are extracted using the transform itself and its derivatives. The distortions are then fit again while fixing the new paraxial parameters. In this way, we ensure that only meaningful paraxial parameters are used, and only the effects of distortion are captured in the transforms. On average, about 5000 stars were used for the fit of each filter. The JWST telescope team used observations from FGS1 and FGS2, obtained in parallel mode, to determine the most accurate pointing and roll angle information for each of the NIRSpec calibration exposures. This was important, as the uncertainty in pointing and roll would otherwise have been absorbed in our model fit, resulting in an incorrect model for any other pointing on the sky.

Note that we perform the fit of the FORE transforms using coordinates measured on the detector, even though the output plane of the FORE transform is the MSA. This requires fixing all the other transforms (OTE, COL and CAM) by using the newly fit internal model from section 3. We do not use the grating wheel sensor calibration in this fit, but rather take advantage of the extra internal exposures that were taken during the astrometric calibration (see section 2.4) to derive the exact grating wheel angle for the sky exposures. This removes any uncertainties in the sensor calibration. The backwards fit was performed with the same method as described for the collimator and camera transforms in Section 3.

Figure 10 shows the residuals for one filter (F110W). Table 2 shows the results for all filters. Residuals for all filters range between 0.1 and 0.25 pixel RMS. This is slightly larger than what we expected from simulations ($<0.1$ pixel). As mentioned before, the bars of the shutters and the crowding in the field make the centroiding process challenging. We therefore assume that the larger uncertainties mainly originate from the uncertainty in the centroiding, rather than from the accuracy of the model, and performed several tests to confirm this, described in the following section.

--- Figure 10: Residuals of the FORE forward coordinate transforms from sky to MSA, for the F110W filter on the detector. The vectors give amplitude and direction of the difference between measured and model-computed location of the data points.

Table 2: Residuals (in pixels) on the detectors of the optimized model forward projection for the FORE transform, per filter. Here, $i$ is the dispersion direction and $j$ the spatial direction.

| Filter | $i$ mean $\pm$ RMS | $j$ mean $\pm$ RMS | median absolute |
|---|---|---|---|
| CLEAR | $0.002\pm 0.142$ | $0.000\pm 0.206$ | $0.156\pm 0.084$ |
| F070LP | $0.002\pm 0.145$ | $0.000\pm 0.115$ | $0.163\pm 0.087$ |
| F100LP | $0.003\pm 0.155$ | $0.000\pm 0.119$ | $0.174\pm 0.090$ |
| F110W | $0.002\pm 0.095$ | $0.000\pm 0.072$ | $0.106\pm 0.054$ |
| F140X | $0.002\pm 0.116$ | $0.000\pm 0.086$ | $0.129\pm 0.066$ |
| F170LP | $0.003\pm 0.161$ | $0.000\pm 0.122$ | $0.179\pm 0.095$ |
| F290LP | $0.000\pm 0.236$ | $-0.001\pm 0.160$ | $0.250\pm 0.136$ |

### 5.1 Tests on centroiding accuracy

In order to test the accuracy of our centroiding algorithm and the smoothness of the residuals, we performed two tests. The first one was to divide the residuals into different sections on the detector, and to measure their mean and RMS within each section. This allows us to verify that there are no strong spatial variations in the residuals, which would hint at residual distortions in the model.
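This binning test can be carried out with a few lines of standard tooling; the sketch below uses `scipy.stats.binned_statistic_2d`, and the number of bins is an illustrative choice rather than the one used in commissioning.

```python
import numpy as np
from scipy.stats import binned_statistic_2d

def binned_residuals(x_det, y_det, residual, nbins=4):
    """Mean and RMS of fit residuals in nbins x nbins sections of the detector."""
    mean, x_edge, y_edge, _ = binned_statistic_2d(x_det, y_det, residual,
                                                  statistic='mean', bins=nbins)
    rms, *_ = binned_statistic_2d(x_det, y_det, residual,
                                  statistic='std', bins=nbins)
    return mean, rms, x_edge, y_edge
```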
Figure 11 shows one of those tests: the distribution appears to be symmetric around zero, smooth over the field of view, and without any spatially organised structure present.

--- Figure 11: Binning the spatial residuals of the F110W filter over the field of view in order to test for gradients or spatial dependencies. The distributions of the individual bins are symmetric around zero.

The second test made use of the fact that we have two dither positions for each filter. By comparing the measured shift for a given star with the one predicted by the model, and analyzing the distribution for all stars, we can estimate the average spread in the centroiding distribution. Figure 12 shows the results of this test. It is clear that the spread in positions is of the order of 0.15 pixel in either direction, which would explain the higher spread in the residuals compared to the simulations.

--- Figure 12: Test of centroiding accuracy, performed by locating the same stars in two dithered images. The upper panels show the distribution of shifts as calculated by the model. The width and shape of the distribution are due to the distortions in the instrument and the spatial distribution of the four quadrants. The lower panels show the measured shifts on the detector. It is clear that the wider shape of those distributions compared to the modelled ones is caused by the limited centroiding accuracy.

Given the results from these tests, we are confident that the larger residuals in the FORE model fit are due to the limited accuracy of our centroiding algorithm, and not due to the performance of the model. In the future, it might be worth exploring more advanced centroiding algorithms to mitigate this source of uncertainty.

## 6 SUMMARY/CONCLUSIONS

The NIRSpec instrument model describes the light path through the instrument from sky to the detector and back. It consists of nearly 1000 free parameters and is divided into three main parts: the spectrograph/internal model, the grating wheel sensor calibration, and the astrometric or FORE calibration. During the commissioning campaign between December 25, 2021 and July 15, 2022, we took various data sets to obtain a final fit of the instrument model, and to provide an accurate wavelength and astrometric calibration of the instrument. The data contain a large set of internal calibration lamp exposures for each mode and grating/filter combination, as well as external imaging-mode sky observations of an astrometric reference field. The residuals for the internal model and the grating wheel sensor calibration were as expected and similar to the ones obtained during ground campaigns. They were below the requirement of 1/10 of a resolution element (0.2 pixels) for all modes and gratings. For the astrometric calibration, we obtained higher residuals than expected, but concluded after performing multiple tests that they stem from the systematic uncertainty of the centroiding rather than the quality of the model fit. External tests on the wavelength calibration and successful target acquisition using the MSA have shown that the new model derived during commissioning is in excellent shape and within all requirements.

### 6.1 Maintenance and future work

In a stable orbit and without further vibrations and temperature changes, the model should remain stable over time. 
However, this has never been tested and the grating wheel sensor showed a strong dependence on temperature during ground campaigns and cool-down phase. Therefore we have planned monitoring programs in the cycle-1 calibration campaign of JWST. This program contains a subset of the internal lamp exposures in order to validate the accuracy of the model. If there is reason to assume a substantial change to the model, contingency plans need to be explored and the model likely needs to be re-fit. ## References * [1] Jakobsen, P., Ferruit, P., Alves de Oliveira, C., et al., “The Near-Infrared Spectrograph (NIRSpec) on the James Webb Space Telescope. I. Overview of the instrument and its capabilities,” A&A 661, A80 (May 2022). * [2] Ferruit, P., Jakobsen, P., Giardino, G., et al., “The Near-Infrared Spectrograph (NIRSpec) on the James Webb Space Telescope. II. Multi-object spectroscopy (MOS),” A&A 661, A81 (May 2022). * [3] Böker, T., Arribas, S., Lützgendorf, N., et al., “The Near-Infrared Spectrograph (NIRSpec) on the James Webb Space Telescope. III. Integral-field spectroscopy,” A&A 661, A82 (May 2022). * [4] Birkmann, S. M., Ferruit, P., Giardino, G., et al., “The Near-Infrared Spectrograph (NIRSpec) on the James Webb Space Telescope. IV. Capabilities and predicted performance for exoplanet characterization,” A&A 661, A83 (May 2022). * [5] Alves de Oliveira, C., Lützgendorf, N., Zeidler, P., et al., “In-flight performance and calibration of the Grating Wheel Assembly sensors (NIRSpec/JWST),” 12180-185, International Society for Optics and Photonics, SPIE (2022). * [6] Böker, T., Abul-Huda, Y., Altenburg, M., and et al., “In-orbit Commissioning of the Near-Infrared Spectrograph on the James Webb Space Telescope,” 12180-30, International Society for Optics and Photonics, SPIE (2022). * [7] Dorner, B., Giardino, G., Ferruit, P., Alves de Oliveira, C., Birkmann, S. M., Böker, T., De Marchi, G., Gnata, X., Köhler, J., Sirianni, M., and Jakobsen, P., “A model-based approach to the spatial and spectral calibration of NIRSpec onboard JWST,” A&A 592, A113 (Aug. 2016). * [8] Piqueras, L., Legay, P. J., Legros, E., Ferruit, P., Pecontal, A., Gnata, X., and Mosner, P., “The JWST/NIRSpec instrument performance simulator,” in [Modeling, Systems Engineering, and Project Management for Astronomy III ], Angeli, G. Z. and Cullum, M. J., eds., Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 7017, 70170Z (July 2008). * [9] Piquéras, L., Legros, E., Pons, A., Legay, P. J., Ferruit, P., Dorner, B., Pécontal, A., Gnata, X., and Mosner, P., “The JWST/NIRSpec instrument performance simulator software,” in [Modeling, Systems Engineering, and Project Management for Astronomy IV ], Angeli, G. Z. and Dierickx, P., eds., Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 7738, 773812 (July 2010). * [10] Birkmann, S., Giardino, G., Sirianni, M., et al., “The In-Flight Noise Performance of the JWST/NIRSpec Detector System,” 12180-101, International Society for Optics and Photonics, SPIE (2022). * [11] Sahlmann, J., “Astrometric Accuracy of the JWST Calibration Field Catalog Examined with the First Gaia Data Release.” JWST STScI Technical Report, JWST-STScI-005492 (Jan. 2017). * [12] Anderson, J., Fall, M., and the Astrometry Working Group, “The JWST Calibration Field: Absolute Astrometry and Proper Motions with GAIA and a Second HST Epoch.” JWST STScI Technical Report, JWST-STScI-007716 (Mar. 2021). * [13] Bertin, E. 
and Arnouts, S., “SExtractor: Software for source extraction.,” Astronomy and Astrophysics Supplement 117, 393–404 (June 1996). * [14] Alves de Oliveira, C., Birkmann, S. M., Böker, T., Ferruit, P., Giardino, G., Lützgendorf, N., Puga, E., Rawle, T., Sirianni, M., and te Plate, M., “Preparing the NIRSpec/JWST science data calibration: from ground testing to sky,” in [Observatory Operations: Strategies, Processes, and Systems VII ], Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 10704, 107040Q (July 2018). * [15] Cameron, D. G., Kauppinen, J. K., Moffatt, D. J., and Mantsch, H. H., “Precision in condensed phase vibrational spectroscopy,” Applied Spectroscopy 36(3), 245–250 (1982). * [16] Rawle, T., Giardino, G., Bechtold, K., et al., “In-flight performance of the NIRSpec Micro Shutter Array,” 12180-184, International Society for Optics and Photonics, SPIE (2022). * [17] Stetson, P. B., “DAOPHOT: A Computer Program for Crowded-Field Stellar Photometry,” Publications of the Astronomical Society of the Pacific 99, 191 (Mar. 1987).
# Assessment of the second-order statically screened exchange correction to the random phase approximation for correlation energies

Arno Förster<EMAIL_ADDRESS>
Theoretical Chemistry, Vrije Universiteit, De Boelelaan 1083, NL-1081 HV, Amsterdam, The Netherlands

###### Abstract

With increasing inter-electronic distance, the screening of the electron-electron interaction by the presence of other electrons becomes the dominant source of electron correlation. This effect is described by the random phase approximation (RPA), which is therefore a promising method for the calculation of weak interactions. The success of the RPA relies on the cancellation of errors, which can be traced back to the violation of the crossing symmetry of the 4-point vertex, leading to strongly overestimated total correlation energies. By addition of second-order screened exchange (SOSEX) to the correlation energy, this issue is substantially reduced. In the adiabatic connection (AC) SOSEX formalism, one of the two electron-electron interaction lines in the second-order exchange term is dynamically screened (SOSEX($W$,$v_{c}$)). A related SOSEX expression in which both electron-electron interaction lines are statically screened (SOSEX($W(0)$,$W(0)$)) is obtained from the $G3W2$ contribution to the electronic self-energy. In contrast to SOSEX($W$,$v_{c}$), the evaluation of this correlation energy expression does not require an expensive numerical frequency integration and is therefore advantageous from a computational perspective. We compare the accuracy of the statically screened variant to RPA and RPA+SOSEX($W$,$v_{c}$) for a wide range of chemical reactions. While both methods fail for barrier heights, SOSEX($W(0)$,$W(0)$) agrees very well with SOSEX($W$,$v_{c}$) for charged excitations and non-covalent interactions, where they lead to major improvements over RPA.

###### keywords:

GW, SOSEX, RPA, vertex corrections, non-covalent interactions

## 1 Introduction

The random phase approximation (RPA)1, 2 has found widespread use in quantum chemistry for the calculation of covalent and non-covalent interaction energies.3, 4, 5, 6, 7, 8, 9, 10 The direct (particle-hole) RPA can be derived in the framework of the adiabatic connection (AC) fluctuation-dissipation theorem (ACFD)11, 12, 13 or as a subset of terms in the coupled cluster (CC)14, 15, 16, 17, 18 doubles (CCD) expansion.19, 20 Within many-body perturbation theory (MBPT)21, 22, 23, 24, the RPA is obtained by evaluating the Klein25 or, alternatively, the Luttinger-Ward26 functional with the self-energy in the $GW$ approximation (GWA), using a (non-interacting) Kohn-Sham (KS)27 density functional theory (DFT)28 Green’s function.29, 30 In the GWA,31 the self-energy is approximated as the first term of its expansion in terms of a screened electron-electron interaction, where the screening is usually calculated within a pair bubble approximation. The pair bubble approximation is typically also denoted as RPA. 
To avoid potential confusion with the expression for the correlation energy, we will use the term bubble approximation when referring to the screening.24 Not only for solids but also for larger molecules it becomes decisive to consider screening, which is the main reason for the popularity of the $GW$ method in solid-state calculations.24 The RPA is generally believed to describe long-range electron correlation very accurately, since charge screening is the dominant source of electron correlation in this limit.12, 24 CC and MBPT based methods describe screening by resummation of certain classes of self-energy diagrams to infinite order.22, 33, 34 The RPA is the simplest first-principles method which accounts for these effects and can be implemented with $\mathcal{O}\left(N^{4}\right)$ scaling with system size using global density fitting (DF).35 Modern RPA (and $GW$) implementations typically use local density-fitting approaches to calculate the non-interacting polarizability,36, 37, 38, 39, 40, 41 leading to quadratic or cubic scaling in practice, and even effectively linearly scaling implementations (for sufficiently sparse and large systems) have been reported.42, 43, 44, 45 For these reasons, the RPA is considered a promising method to study weakly correlated large molecules.46, 47, 4, 10, 48

At short electron-electron distances, however, charge screening becomes less important for the description of electron correlation, and taking into account higher-order contributions to the self-energy via the 4-point vertex function becomes decisive.49 The absence of these terms in the RPA leads to Pauli exclusion principle violating contributions to the electron correlation energy.50 As a consequence, total correlation energies are much too high compared to exact reference values.51, 52 In contrast to RPA, the approximations to the correlation energy of Møller-Plesset (MP) perturbation theory are free of Pauli principle violating terms. MP2 in particular is relatively inexpensive and can be applied routinely to systems with more than 100 atoms, even close to the complete basis set limit. However, screening effects are entirely absent in MP perturbation theory, and electron correlation is instead described by HF quasiparticles (QP) interacting via the bare Coulomb interaction, neglecting the fact that the interactions between the HF QPs are generally much weaker than the ones between the undressed electrons. This issue is also present in orbital-optimized MP2, in which the HF QPs are replaced by MP2 QPs.53, 54, 55 Therefore, MP2 is a suitable method only for (typically small) systems in which screening effects are negligible. The divergence of MP perturbation theory for the uniform electron gas (see for instance chapter 10 in ref. 
22 for a thorough discussion) is known at least since early work by Macke1 and was later demonstrated for metals56 and recently also for large, non-covalently bound organic complexes.48 The divergence of the MP series for small-gap systems is directly related to this issue, since the magnitude of the screening is proportional to the width of the fundamental gap.57, 58 There have been various approaches to regularize MP2 by an approximate treatment of higher-order screening effects, either using empirical regularizers59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69 or diagrammatically motivated modifications70, 34, 71, 72 or attacking the problem from a DFT perspective.73, 74

Starting from the opposite direction, there have been many attempts to correct the RPA correlation energy expression by adding additional terms to improve the description of short-range correlation. This includes range-separation based approaches,75, 76, 77, 78, 79, 80, 81, 82, 83, 84 or augmentations by singles contributions.85, 86, 87 Via MBPT, the RPA can generally be improved upon inclusion of the 4-point vertex in the electronic self-energy, either directly, or indirectly through the kernel of the Bethe-Salpeter equation (BSE) for the generalized susceptibility. Following the latter approach, approximations often start from the ACFD and go beyond the Coulomb kernel in the BSE by adding additional terms, for instance exact exchange (exx) (often denoted as exx-RPA)88, 89, 90, 91, 92, 93 and higher order contributions,94, 95, 96, 97 or the statically screened $GW$ kernel,98, 99, 100 but also empirically tuned functions of the eigenvalues of the KS density-density response.101, 102 Notice that the BSE for the generalized susceptibility reduces to a Dyson equation for the density-density response function, which makes local kernels very attractive from a computational perspective. Instead of relying on the ACFD theorem, beyond-RPA energy expressions can also be introduced directly from approximations to the self-energy beyond the GWA. For instance, in RPAx103, 104, 105, 106 a local 4-point vertex obtained from the functional derivative of the _local_ exact exchange potential calculated within the optimized effective potential method107, 108, 109 is used in the self-energy. In Freeman’s second-order screened exchange (SOSEX) correction,110 the HF vertex (i.e. the functional derivative of the _non-local_ HF self-energy with respect to the single-particle Green’s function) is included in the self-energy directly, but not in the screened interaction.111, 112, 86, 6, 87, 113, 50 Another expression for SOSEX can be obtained by including the static $GW$ kernel in the self-energy but not in the density-density response. This possibility has not been explored until recently114 and is the main topic of this work.

In our recent work, we have assessed the accuracy of the statically screened $G3W2$ correction to the $GW$ self-energy for charged excitations.114 This correction was first applied by Grüneis _et al._115 to calculate the electronic structure of solids and is obtained by calculating the self-energy to second order in the screened Coulomb interaction (equivalent to including the full $GW$ vertex) and then taking the static limit for both terms. The resulting energy expression fulfills the crossing symmetry of the vertex to first order in the electron-electron interaction. 
Preliminary results for the correlation energies of atoms have been promising.114 This realization of SOSEX is computationally more efficient than AC-SOSEX since no expensive numerical frequency integration is required. Here, we assess the accuracy of this method for bond dissociation, atomization energies, barrier heights, charged excitations and non-covalent interactions. Our results show that the statically screened SOSEX variant is comparable in accuracy to AC-SOSEX, but we observe important differences in the dissociation of diatomic molecules and charged dimers.

The remainder of this work is organized as follows. In section 2 we give a detailed derivation of the different SOSEX energy expressions. After an outline of our computational approach and implementation in section 3, we present and analyze our numerical results in section 4. Finally, section 5 summarizes and concludes this work.

## 2 Theory

The central object of MBPT is the one-particle irreducible (1PI) electronic self-energy $\Sigma$. It is the sum of all 1PI skeleton diagrams (diagrams which do not contain any self-energy insertions) of $n$th order in the electron-electron interaction $v_{c}$. It relates the interacting single-particle Green’s function $G$ to its non-interacting counterpart $G^{(0)}$ by means of Dyson’s equation,116 $G(1,2)=G^{(0)}(1,2)+G^{(0)}(1,3)\Sigma(3,4)G(4,2)\;.$ (1) Space, spin, and imaginary time indices are collected as $1=(\bm{r}_{1},\sigma_{1},i\tau_{1})$. One can always switch between imaginary time and imaginary frequency using the Laplace transforms117 $f(i\tau)=\frac{i}{2\pi}\int d\omega f(i\omega)e^{i\omega\tau}$ (2) and $f(i\omega)=-i\int d\tau f(i\tau)e^{-i\omega\tau}\;.$ (3) In (1), $G=G_{1}$ is defined by $G_{n}(1,\dots 2n)=\left\langle\Psi^{(N)}_{0}\Big{|}\mathcal{T}\left[\hat{\psi}^{\dagger}(1)\hat{\psi}(2)\dots\hat{\psi}^{\dagger}(2n-1)\hat{\psi}(2n)\right]\Big{|}\Psi^{(N)}_{0}\right\rangle\;.$ (4) Here, $\Psi^{(N)}_{0}$ is the ground state of an $N$-electron system, $\mathcal{T}$ is the time-ordering operator and $\hat{\psi}$ is the field operator. $\Sigma$ is given by $\Sigma(1,2)=v_{H}(1)\delta(1,2)+\Sigma_{xc}(1,2)\;,$ (5) where the second term on the _r.h.s._ can be written as $\Sigma_{xc}(1,2)=iG(1,2)W(1,2)+iG(1,3)W^{(0)}(1,4)\chi(6,4,5,4^{+})\Gamma_{xc}^{(0)}(6,5,2,3)\;.$ (6) For a detailed derivation we refer to the supporting information. We note that Maggio and Kresse118 and Martin et al.24 used a similar expression. Equation 6 combines several quantities. These are the particle-hole irreducible 4-point vertex (i.e. the sum of all diagrams which cannot be cut into parts by removing a particle and a hole line),119 $\Gamma_{Hxc}^{(0)}(1,2,3,4)=\Gamma_{H}^{(0)}(1,2,3,4)+\Gamma_{xc}^{(0)}(1,2,3,4)=i\frac{\delta\Sigma_{H}(1,3)}{\delta G(4,2)}+i\frac{\delta\Sigma_{xc}(1,3)}{\delta G(4,2)}\;,$ (7) the non-interacting generalized susceptibility, $\chi^{(0)}(1,2,3,4)=-iG(1,4)G(2,3)\;,$ (8) and the screened (bare) Coulomb interaction $W$ ($W^{(0)}$). 
These quantities are related by the Dyson equation $W(1,2)=W^{(0)}(1,2)+W^{(0)}(1,3)P(3,4)W^{(0)}(4,2)\;,$ (9) with $W^{(0)}(1,2)=v_{c}(\bm{r}_{1},\bm{r}_{2})\delta_{\sigma,\sigma^{\prime}}\delta(t_{1}-t_{2})\;,$ (10) given in terms of the bare Coulomb interaction $v_{c}$ and the reducible polarizability $P(1,2)=\chi(1,2,3,4)\delta(1,4)\delta(2,3)\;,$ (11) with $\chi(1,2,3,4)=-iG_{2}(1,2,3,4)-iG(1,2)G(3,4)\;.$ (12) $\chi$ is related to its non-interacting counterpart $\chi^{(0)}$ by a Bethe-Salpeter equation (BSE),120, 119 $\chi(1,2,3,4)=\chi^{(0)}(1,2,3,4)+\chi^{(0)}(1,8,3,7)\Gamma_{Hxc}^{(0)}(7,5,8,6)\chi(6,2,5,4)\;,$ (13) which reduces to a Dyson equation for the polarizability $P$ when the xc-contribution to the 4-point vertex is set to zero. One can then also introduce the irreducible polarizability $P^{(0)}$ as $P^{(0)}(1,2)=\chi^{(0)}(1,2,3,4)\delta(1,4)\delta(2,3)\;.$ (14) Using this quantity, (9) can also be written as $W(1,2)=W^{(0)}(1,2)+W^{(0)}(1,3)P^{(0)}(3,4)W(4,2)\;.$ (15) Note that the equations above are completely equivalent to Hedin’s equations.31 Their form given here has the advantages that the BSE appears explicitly and that only 2-point or 4-point quantities occur. Therefore, the resulting equations are invariant under unitary transformations of the basis, as has for instance been pointed out by Starke and Kresse121 or in ref. 122.

The xc-contribution to the self-energy defined in (6) can also be obtained as the functional derivative $\Sigma_{xc}=\frac{\delta\Phi[G]}{\delta G}\;.$ (16) $\Phi$ is a universal functional of the interacting $G$ and is defined by26, 123, 24 $\Phi[G]=\frac{1}{2}\sum_{n}\frac{1}{n}\int d1d2G(1,2)\Sigma_{xc}^{(n)}[G](2,1)\;.$ (17) As discussed for instance in refs. 30, 123, if this expression is evaluated with a non-interacting Green’s function, one directly obtains the exchange-correlation energy from it. A suitable non-interacting Green’s function $G^{s}$ can be obtained from $G^{(0)}$ by $G^{s}(1,2)=G^{(0)}(1,2)+G^{(0)}(1,3)v_{s}(3,4)G^{s}(4,2)\;,$ (18) where $v_{s}(1,2)=v_{H}(1,2)\delta(1,2)+v_{xc}(\bm{r}_{1},\bm{r}_{2})\delta(\tau_{12})$ (19) and with $v_{xc}$ being a KS xc-potential mixed with a fraction of HF exchange and $\tau_{12}=\tau_{1}-\tau_{2}$. The correlation energy $E_{c}[G^{s}]=E_{Hxc}[G^{s}]-E_{Hx}[G^{s}]$ (20) is then given by30 $E_{c}=\frac{1}{2}\sum_{n=2}\frac{1}{n}\int d1d2G^{s}(1,2)\Sigma^{(n)}(2,1)[G^{s}]\;.$ (21) The $Hx$ contribution to the electron-electron interaction energy is obtained as $E_{Hx}[G^{s}]=\frac{1}{2}\int d1d2G^{s}(1,2)\Sigma^{(1)}(2,1)[G^{s}]\;.$ (22) In case $G^{s}$ is the Hartree-Fock (HF) Green’s function, (22) is the HF expression for the Hartree and exchange energy. In the GWA, the self-energy (6) is approximated as $\Sigma\approx\Sigma_{H}+iGW$. $W$ is typically calculated within the RPA, which consists in approximating $\Gamma^{(0)}_{Hxc}\approx\Gamma^{(0)}_{H}$ in the BSE (13). Making both approximations and using eqs. 
14 and 9, the RPA exchange-correlation energy

$\displaystyle E^{RPA}_{xc}=$ $\displaystyle i\frac{1}{2}\int d1d2G^{s}(1,2)G^{s}(2,1)W(2,1)$ (23) $\displaystyle=$ $\displaystyle-\frac{1}{2}\int d1d2P^{(0)}(1,2)\;\left\\{W^{(0)}(2,1)+\frac{1}{2}W^{(0)}(2,3)P^{(0)}(3,4)W^{(0)}(4,1)+\dots\right\\}$

is obtained.123 Isolating the exchange contribution to the Hartree-exchange energy,

$E_{x}=\int d1d2\;\delta(\tau_{12})G^{s}(1,2)W^{(0)}(2,1)G^{s}(2,1)\;,$ (24)

we obtain the RPA correlation energy

$\displaystyle E^{RPA}_{c}=$ $\displaystyle-\frac{1}{2}\sum_{n}\frac{1}{n}\int d1d2\left[\int d3P^{(0)}(1,3)W^{(0)}(3,2)\right]^{n}+\frac{1}{2}\int d1d2\int d3P^{(0)}(1,3)W^{(0)}(3,2)$ (25) $\displaystyle=$ $\displaystyle\frac{1}{2}\int d1d2\left\\{\ln\left[\delta(1,2)-\int d3P^{(0)}(1,3)W^{(0)}(3,2)\right]+\int d3P^{(0)}(1,3)W^{(0)}(3,2)\right\\}\;,$

and using (2) as well as the symmetry of the polarizability on the imaginary frequency axis, its well-known representation due to Langreth and Perdew12 is obtained,

$\displaystyle E^{RPA}_{c}=$ $\displaystyle\frac{1}{2\pi}\int d\bm{r}_{1}\int^{\infty}_{0}d\omega\left\\{\ln\left[\delta(1,2)-\int d\bm{r}^{\prime}P^{(0)}(\bm{r}_{1},\bm{r}^{\prime},i\omega)v_{c}(\bm{r}^{\prime},\bm{r}_{1})\right]\right.$ (26) $\displaystyle+\left.\int d\bm{r}^{\prime}P^{(0)}(\bm{r}_{1},\bm{r}^{\prime},i\omega)v_{c}(\bm{r}^{\prime},\bm{r}_{1})\right\\}\;.$

In this work, we are interested in approximations to the self-energy beyond the GWA. It follows from the antisymmetry of fermionic Fock space that $G_{2}$ needs to change sign when the two creation or annihilation operators in (4) are interchanged. This property is known as crossing symmetry.124 In the RPA, the crossing symmetry is violated, which leads to the well-known overestimation of absolute correlation energies. However, when the 4-point vertex is approximated by the functional derivative of the Hartree-exchange self-energy, the crossing symmetry is fulfilled. We show this explicitly in the supporting information. Approximations to the self-energy in Hedin’s equations always violate the crossing symmetry.125, 126 However, with each iteration of Hedin’s pentagon, the crossing symmetry is fulfilled up to an increasingly higher order in $v_{c}$. We can then expect to obtain improvements over the RPA energy expressions by choosing a self-energy which fulfills the crossing symmetry to first order in $v_{c}$. The simplest approximation to the self-energy of this type is obtained from the HF vertex,

$\Gamma_{xc}^{(0),HF}(1,2,3,4)=i\frac{\delta\Sigma^{HF}_{xc}(1,3)}{\delta G^{s}(4,2)}=-W^{(0)}(1,2)\delta(1,4)\delta(3,2)\;.$ (27)

Using this expression in (6) with (8) yields the AC-SOSEX contribution to the self-energy.127, 118 We first notice that within the pair bubble approximation for $W$, (6) becomes

$\Sigma^{\textrm{SOSEX}(W,v_{c})}(1,2)=-\int d3d4G^{s}(1,3)W(1,4)G^{s}(3,4)G^{s}(4,2)W^{(0)}(3,2)\;,$ (28)

where we have indicated the screening of the electron-electron interaction in the SOSEX expression in the superscript on the _l.h.s._ of (28). Here we have used the identity $W\chi^{(0)}=W^{(0)}\chi$ in (6) (see supporting information), which is only valid if $W$ is calculated within the RPA.
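To make the Langreth–Perdew form (26) concrete, the following minimal numpy sketch evaluates the RPA correlation energy from $P^{(0)}(i\omega)$ and $v_{c}$ represented as matrices in a common auxiliary basis; the function and array names are illustrative assumptions and not those of our implementation, and a quadrature grid for the frequency integral is taken as given.

```python
import numpy as np

def rpa_correlation_energy(p0_iw, v, quad_weights):
    """RPA correlation energy, eq. (26):
    E_c = 1/(2*pi) * int_0^inf dw Tr[ ln(1 - P0(iw) v) + P0(iw) v ].

    p0_iw        : array (n_freq, n_aux, n_aux), P^(0)(i w_k) in an auxiliary basis
    v            : array (n_aux, n_aux), Coulomb matrix in the same basis
    quad_weights : array (n_freq,), weights of the quadrature for int_0^inf dw
    """
    eye = np.eye(v.shape[0])
    e_c = 0.0
    for p0, w in zip(p0_iw, quad_weights):
        m = p0 @ v                              # P^(0)(i w) v_c
        _, logdet = np.linalg.slogdet(eye - m)  # Tr ln(1 - P0 v) = ln det(1 - P0 v)
        e_c += w * (logdet + np.trace(m))
    return e_c / (2.0 * np.pi)
```

The same loop structure carries over to the beyond-RPA expressions discussed below, with the trace replaced by the corresponding orbital sums.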
Using the $GW$ self-energy in (7), to first order in $W^{(0)}$ (ignoring the variation of $W$ with respect to $G$) the screened exchange kernel is obtained,

$\Gamma_{xc}^{(0),GW}(1,2,3,4)=i\frac{\delta\Sigma^{GW}_{xc}(1,3)}{\delta G^{s}(4,2)}=-W(1,2)\delta(1,4)\delta(3,2)\;.$ (29)

The resulting self-energy is the complete second-order term in the expansion of the self-energy in terms of the screened electron-electron interaction,31

$\Sigma^{\textrm{G3W2}}(1,2)=-\int d3d4G^{s}(1,3)W(1,4)G^{s}(3,4)G^{s}(4,2)W(3,2)$ (30)

and contains the AC-SOSEX self-energy.

Figure 1: Diagrammatic representation of the different contributions to the second order exchange (SOX) term. Pluses and minuses denote the different branches on the Keldysh contour. The double and single wiggly lines are screened and bare electron-electron interactions, respectively. a) Greater and lesser contributions to the full $G3W2$ self-energy term. b) Greater and lesser components of the SOSEX self-energy. c) Greater and lesser components of the MP2 self-energy. The static approximation to the $G3W2$ self-energy is the same, with the bare electron-electron interaction lines replaced by the statically screened ones. The black parts of the diagrams are the contributions to the self-energy only which, combined with the blue lines, yield the corresponding single-particle propagator.

The $G3W2$ self-energy can be decomposed into eight skeleton diagrams on the Keldysh contour,128 but the AC-SOSEX self-energy only into four.129 Diagrammatically, this is shown in figure 1a) and 1b), respectively. In practice, the evaluation of the resulting energy expression requires a double frequency integration, while the evaluation of the AC-SOSEX energy only requires a single frequency integration. Since the computation of the AC-SOSEX term is already quite cumbersome, the complete $G3W2$ energy expression is not a good candidate for an efficient beyond-RPA correction. Instead, we take the static limit of both $W$ in (30) to arrive at a self-energy expression similar to AC-SOSEX,

$\Sigma^{\textrm{SOSEX}(W(0),W(0))}(1,2)=-\int d3d4G^{s}(1,3)W(1,4)G^{s}(3,4)G^{s}(4,2)W(3,2)\delta(\tau_{32})\delta(\tau_{14})\;,$ (31)

whose diagrammatic form is shown in figure 1c). Due to the presence of the two $\delta$-functions, only two out of the eight diagrams of the $G3W2$ term remain. This expression is similar to the MP2 self-energy, the only difference being that the bare electron-electron interaction is replaced by the statically screened one. However, the resulting expression for the correlation energy will be different due to the factors $\frac{1}{n}$ in (17). Using (9), eq. (31) can be written as

$\Sigma^{\textrm{SOSEX}(W(0),W(0))}(1,2)=\Sigma^{\textrm{MP2-SOX}}(1,2)+\Sigma^{\delta\textrm{MP2-SOX}}(1,2)\;,$ (32)

with the first term being the second-order exchange (SOX) term in MP2 and with the remainder accounting for the screening of the electron-electron interaction. Defining

$\delta W(1,2)=\int d3d4W^{(0)}(1,3)P(3,4)W^{(0)}(4,2)\;,$ (33)

it can be written as

$\displaystyle\Sigma^{\delta\textrm{MP2-SOX}}(1,2)=$ $\displaystyle-\int d3d4G^{s}(1,3)G^{s}(3,4)G^{s}(4,2)\left[W^{(0)}(1,4)\delta W(3,2)\delta(\tau_{32})\right.$ (34) $\displaystyle\left.\qquad+\delta W(1,4)\delta(\tau_{14})W^{(0)}(3,2)+\delta W(1,4)\delta(\tau_{14})\delta W(3,2)\delta(\tau_{32})\right]\;.$

In the same way one can see that the statically screened $GW$ vertex contains the HF vertex.
The same is obviously true for all other flavors of SOSEX, and therefore all of them fulfill the crossing symmetry of the full 4-point vertex to first order in the electron-electron interaction. Therefore, all of these approximations compensate for the overestimation of the electron correlation energy in the RPA.

In contrast to the RPA, which is efficiently evaluated in a localized basis, beyond-RPA energies are most easily formulated in the molecular spin-orbital basis $\left\\{\phi_{i}(\bm{r},\sigma)\right\\}$ in which the time-ordered KS Green’s function is diagonal,

$\displaystyle G^{s}_{kk^{\prime}}(i\tau_{12})=$ $\displaystyle\delta_{kk^{\prime}}\Theta(\tau_{12})G^{>}_{kk^{\prime}}(i\tau_{12})-\delta_{kk^{\prime}}\Theta(\tau_{21})G^{<}_{kk^{\prime}}(i\tau_{21})$ (35) $\displaystyle G^{>}_{kk}(i\tau_{12})=$ $\displaystyle i\left(1-f(\epsilon_{k})\right)e^{-\epsilon_{k}\tau_{12}}$ $\displaystyle G^{<}_{kk}(i\tau_{12})=$ $\displaystyle if(\epsilon_{k})e^{-\epsilon_{k}\tau_{12}}\;.$

The $\epsilon_{k}$ denote KS eigenvalues which are understood to be measured relative to the chemical potential $\mu$, and $f(\epsilon_{k})$ denotes the occupation number of the $k$th orbital. One can now obtain energy expressions analogous to (26). For example, inserting the AC-SOSEX self-energy (28) into (21), we obtain

$\displaystyle E^{\textrm{SOSEX}}_{c}=$ $\displaystyle\frac{1}{2}\int d1d2d3d4\;G^{s}(1,2)G^{s}(2,3)G^{s}(3,4)G^{s}(4,1)$ (36) $\displaystyle\times\left\\{\frac{1}{2}W^{(0)}(3,1)W^{(0)}(2,4)+\frac{1}{3}\int d5d6W^{(0)}(3,1)W^{(0)}(2,5)P^{(0)}(5,6)W^{(0)}(6,4)+\dots\right\\}\;.$

In contrast to the RPA energy expression, the terms in this equation cannot be summed exactly due to the presence of the $1/n$-terms. However, defining

$\Sigma_{Hxc}^{\lambda}=\sum_{n=1}^{\infty}\lambda^{n}\Sigma_{Hxc}^{(n)}\left[G^{s},v_{c}\right]$ (37)

we can rewrite (21) as an integral over a coupling constant $\lambda$,

$E_{c}=\frac{1}{2}\sum_{n=2}\frac{1}{n}\int d1d2G^{s}(1,2)\Sigma_{Hxc}^{(n)}(2,1)[G^{s}]=\frac{1}{2}\int^{1}_{0}\frac{d\lambda}{\lambda}\int d1d2G^{s}(1,2)\Sigma_{Hxc}^{(\lambda)}(2,1)[G^{s}]\;.$ (38)

Therefore, (37) becomes

$\Sigma_{Hxc}^{\lambda}=\sum_{n=1}^{\infty}\Sigma_{Hxc}^{(n)}\left[G^{s},\lambda v_{c}\right]=\sum_{n=1}^{\infty}\Sigma_{Hxc}^{(n)}\left[G^{s},W^{(\lambda)}\right]\;,$ (39)

where $W^{(\lambda)}$ is defined as in (15), with $W^{(0)}$ replaced by $\lambda W^{(0)}$. Defining

$\overline{W}=\int^{1}_{0}d\lambda W^{(\lambda)}\;,$ (40)

and

$\overline{\Sigma}=\Sigma\left[\overline{W}\right]$ (41)

the correlation energy becomes

$E_{c}=\frac{1}{2}\int d1d2\;G^{s}(1,2)\overline{\Sigma}_{c}(2,1)\;.$ (42)

The integral in (40) needs to be computed numerically, but typically converges very fast when Gauss-Legendre grids are employed.87 In ref. 130 a trapezoidal rule for the solution of this integral has been used, and also ref. 3 suggests that this choice is often suitable for the calculation of correlation energies within the RPA and beyond. This choice is very well justified for weakly correlated systems for which the adiabatic connection is approximately a straight line.131, 132 Below, we will assess the effect of such an approximate coupling constant integration on absolute and relative correlation energies for non-covalent interactions. Notice that, using a trapezoidal rule, (42) reduces to

$E_{c}=\frac{1}{4}\int d1d2\;G^{s}(1,2)\Sigma_{c}(2,1)\;,$ (43)

and when the statically screened $G3W2$ self-energy (31) is used in this expression, the energy expression of ref. 114 is obtained.
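The prefactor $\frac{1}{4}$ in (43) can be traced to the two-point trapezoidal rule for the $\lambda$-integral in (38): the correlation part of the integrand vanishes at $\lambda=0$ and equals $\int G^{s}\Sigma_{c}[W]$ at $\lambda=1$, so the integral is approximated by half of the endpoint value. For the general Gauss-Legendre treatment of (40), a minimal numpy sketch is given below; it assumes that $P^{(0)}$ and $W^{(0)}=v_{c}$ are available as matrices in a common auxiliary basis (at a single frequency point), and all names are illustrative rather than those of our implementation.

```python
import numpy as np

def coupling_averaged_w(p0, v, n_lambda=8):
    """Coupling-constant averaged screened interaction, eq. (40).

    For each lambda, W^(lambda) solves the Dyson equation (15) with W^(0) -> lambda*v:
        W^(lambda) = lambda*v + lambda*v P^(0) W^(lambda)
                   = (1 - lambda*v P^(0))^(-1) lambda*v .
    """
    eye = np.eye(v.shape[0])
    # Gauss-Legendre nodes/weights on [-1, 1], mapped to [0, 1]
    x, w = np.polynomial.legendre.leggauss(n_lambda)
    lams, wts = 0.5 * (x + 1.0), 0.5 * w
    w_bar = np.zeros_like(v)
    for lam, wt in zip(lams, wts):
        w_lam = np.linalg.solve(eye - lam * (v @ p0), lam * v)
        w_bar += wt * w_lam
    return w_bar
```

In the calculations described in section 3, eight Gauss-Legendre points are used (twenty for the potential energy curves), or a trapezoidal rule when only a single $\lambda$-point is requested.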
When additionally both $W(0)$ are replaced by $v_{c}$, (43) gives the MP2 correlation energy (evaluated with $G^{s}$).30 Using (42), a simple expression for the AC-SOSEX energy in the basis of KS orbitals is obtained. With eqs. 42, 28 and 35 we have

$\displaystyle E^{\textrm{SOSEX}(W,v_{c})}=$ $\displaystyle\frac{i}{2}\sum_{pqrs}\int d\tau_{12}d\tau_{3}G^{s}_{p}(\tau_{13})G^{s}_{q}(\tau_{31})G^{s}_{r}(\tau_{12})G^{s}_{s}(\tau_{21})W^{(0)}_{spqr}\overline{W}_{rspq}(\tau_{23})$ (44) $\displaystyle=$ $\displaystyle-\frac{1}{4\pi}\sum_{pqrs}\int d\omega^{\prime}W^{(0)}_{spqr}\overline{W}_{rspq}(i\omega^{\prime})\int d\tau_{12}G^{s}_{r}(\tau_{12})G^{s}_{s}(\tau_{21})$ $\displaystyle\times\underbrace{\int d\tau_{3}e^{-i\omega^{\prime}\tau_{23}}G^{s}_{p}(\tau_{13})G^{s}_{q}(\tau_{31})}_{I(i\tau_{12})}\;.$

In going from the first to the second line, we have used (2) to transform $W$ to the imaginary frequency axis. The integral over $\tau_{3}$ can be evaluated by splitting it at $\tau_{1}$ and using the definition of the KS Green’s function (35),

$I(i\tau_{12})=\frac{\left[\left(1-f(\epsilon_{p})\right)f(\epsilon_{q})-\left(1-f(\epsilon_{q})\right)f(\epsilon_{p})\right]e^{i\omega^{\prime}\tau_{12}}}{\epsilon_{p}-\epsilon_{q}+i\omega^{\prime}}=-e^{i\omega^{\prime}\tau_{12}}\frac{f(\epsilon_{p})-f(\epsilon_{q})}{\epsilon_{p}-\epsilon_{q}+i\omega^{\prime}}$ (45)

The remaining integral over $\tau_{12}$ is

$I_{\tau_{12}}=-\int G^{(0)}_{r}(\tau_{12})G^{(0)}_{s}(\tau_{21})e^{i\omega^{\prime}\tau_{12}}d\tau_{12}=\frac{f(\epsilon_{r})-f(\epsilon_{s})}{\epsilon_{r}-\epsilon_{s}-i\omega^{\prime}}\;,$ (46)

so that the correlation energy becomes

$E^{\textrm{SOSEX}(W,v_{c})}=-\frac{1}{4\pi}\sum_{pqrs}\int d\omega^{\prime}W^{(0)}_{spqr}\overline{W}_{rspq}(i\omega^{\prime})\frac{f(\epsilon_{r})-f(\epsilon_{s})}{\epsilon_{r}-\epsilon_{s}-i\omega^{\prime}}\frac{f(\epsilon_{p})-f(\epsilon_{q})}{\epsilon_{p}-\epsilon_{q}+i\omega^{\prime}}$ (47)

Each of the numerators can only give a non-vanishing contribution if one of the two occupation numbers is zero. If the difference of the occupation numbers is $-1$, we simply flip the sign in the denominator. Without loss of generality we can then decide that the indices $r$ and $p$ belong to occupied and the indices $s$ and $q$ to virtual single-particle states. Equation 47 then becomes

$E^{\textrm{SOSEX}(W,v_{c})}=-\frac{1}{4\pi}\sum^{occ}_{ij}\sum^{virt}_{ab}\int^{\infty}_{0}d\omega\overline{W}_{iajb}(i\omega)W^{(0)}_{jaib}\frac{4(\epsilon_{i}-\epsilon_{a})(\epsilon_{j}-\epsilon_{b})}{\left[(\epsilon_{i}-\epsilon_{a})^{2}+\omega^{2}\right]\left[(\epsilon_{j}-\epsilon_{b})^{2}+\omega^{2}\right]}\;.$ (48)

For a closed-shell system we can also sum over spins, which gives an additional factor of 2. The resulting expression is then equivalent to the one of ref. 87. In the spin-orbital basis, the SOSEX$(W(0),W(0))$ correlation energy is obtained from (31) and (35) as

$\displaystyle E^{\textrm{SOSEX}(W(0),W(0))}=$ $\displaystyle-\frac{1}{4}\sum_{pqrs}\int d\tau_{12}G^{s}_{p}(\tau_{12})G^{s}_{q}(\tau_{21})G^{s}_{r}(\tau_{12})G^{s}_{s}(\tau_{21})$ (49) $\displaystyle\times\overline{W}_{spqr}(i\omega=0)\overline{W}_{rspq}(i\omega=0)$ $\displaystyle=$ $\displaystyle-\frac{1}{2}\sum^{occ}_{ij}\sum^{virt}_{ab}\frac{\overline{W}_{spqr}(i\omega=0)\overline{W}_{rspq}(i\omega=0)}{\epsilon_{i}+\epsilon_{j}-\epsilon_{a}-\epsilon_{b}}\;.$

This is the expression we have introduced in ref. 114.
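Evaluated with the statically screened, coupling-constant-averaged integrals, (49) has the same structure as the MP2 exchange term. The following numpy sketch illustrates this contraction for a spin-orbital problem; the array layout and names are assumptions for illustration only (the integrals $\overline{W}(i\omega=0)$ are taken as given), and closed-shell spin factors are omitted.

```python
import numpy as np

def sosex_static_energy(w_bar, eps_occ, eps_virt):
    """Statically screened SOSEX correlation energy in the spirit of eq. (49).

    w_bar   : array (n_occ, n_virt, n_occ, n_virt); w_bar[i, a, j, b] is the
              coupling-averaged, statically screened integral in a spin-orbital basis.
    eps_occ : occupied orbital energies; eps_virt : virtual orbital energies.
    """
    # d[i, a, j, b] = eps_i + eps_j - eps_a - eps_b
    d = (eps_occ[:, None, None, None] + eps_occ[None, None, :, None]
         - eps_virt[None, :, None, None] - eps_virt[None, None, None, :])
    # exchange-like contraction: w_bar[i, a, j, b] * w_bar[j, a, i, b]
    num = w_bar * w_bar.transpose(2, 1, 0, 3)
    return -0.5 * np.sum(num / d)
```

Replacing `w_bar` by the bare Coulomb integrals recovers the usual MP2 exchange contribution, which makes the "renormalized MP2" reading of the expression explicit.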
Equation (49) is completely equivalent to the exchange term in MP2 with the bare electron-electron interaction replaced by the statically screened, coupling constant averaged one. Both RPA+SOSEX variants can be understood as renormalized MP2 expressions and allow for a clear diagrammatic interpretation. In the next section, we briefly outline our implementation of these expressions, before we proceed by assessing their accuracy for correlation energies in sec. 4.

## 3 Technical and Computational Details

All expressions presented herein have been implemented in a locally modified development version of the Amsterdam density functional (ADF) engine of the Amsterdam modelling suite 2022 (AMS2022).133 The non-interacting polarizability needed to evaluate (26) and (15) is calculated in imaginary time with quadratic scaling with system size in the atomic orbital basis. The implementation is described in detail in ref. 39. In all calculations, we expand the KS Green’s functions in correlation consistent bases of Slater-type orbitals of triple- and quadruple-$\zeta$ quality (TZ3P and QZ6P, respectively).134 All 4-point correlation functions (screened and unscreened Coulomb interactions as well as polarizabilities) are expressed in auxiliary basis sets of Slater-type functions which are usually 5 to 10 times larger than the primary bases. In all calculations, we use auxiliary basis sets of _VeryGood_ quality. The transformation between primary and auxiliary basis (for the polarizability) is implemented with quadratic scaling with system size using the pair-atomic density fitting (PADF) method for products of atomic orbitals.135, 136 For an outline of the implementation of this method in ADF, we refer to ref. 137. Eq. (26) is then evaluated in the basis of auxiliary fit functions with cubic scaling with system size. Equations 48 and 49 are evaluated with quintic scaling with system size in the canonical basis of KS states. This implementation is completely equivalent to the canonical MP2 implementation outlined in ref. 137.

Eq. (40) is evaluated using Gauss-Legendre grids with 8 points, except for the potential energy curves, where 20 points have been used. At stretched bonds, the integrands become increasingly non-linear and a large number of integration points is necessary. As discussed in detail in the supporting information, for non-covalent interactions a single integration point generally suffices, and we have therefore used only a single integration point for all calculations on the S66 and S66x8 databases. In the case of a single $\lambda$-point, a trapezoidal rule is used for integration.

Imaginary time and imaginary frequency variables are discretized using non-uniform bases $\mathcal{T}=\left\\{\tau_{\alpha}\right\\}_{\alpha=1,\dots N_{\tau}}$ and $\mathcal{W}=\left\\{\omega_{\alpha}\right\\}_{\alpha=1,\dots N_{\omega}}$ of sizes $N_{\tau}$ and $N_{\omega}$, respectively, tailored to each system. More precisely, (2) and (3) are then implemented by splitting them into sine and cosine transformation parts as

$\displaystyle\overline{F}(i\omega_{\alpha})=$ $\displaystyle\sum_{\beta}\Omega^{(c)}_{\alpha\beta}\overline{F}(i\tau_{\beta})$ (50) $\displaystyle\underline{F}(i\omega_{\alpha})=$ $\displaystyle\sum_{\beta}\Omega^{(s)}_{\alpha\beta}\underline{F}(i\tau_{\beta})\;,$

where $\overline{F}$ and $\underline{F}$ denote even and odd parts of $F$, respectively.
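One way to obtain such transformation matrices on non-uniform grids is to fit them so that the analytic transform of exponential model functions is reproduced over the relevant energy range. The sketch below is only in the spirit of such a fit (up to the prefactor conventions of (2) and (3)) and is not the exact procedure used in our implementation; all names are illustrative.

```python
import numpy as np

def fit_cosine_transform(tau_grid, omega_grid, eps_samples):
    """Least-squares fit of a cosine-transform matrix Omega^(c) as in eq. (50).

    The rows of Omega^(c) are chosen such that the even model functions
        F_bar(i tau) = exp(-eps*tau)   <-->   F_bar(i omega) = 2*eps / (eps^2 + omega^2)
    are transformed as accurately as possible for the sample energies eps_samples.
    """
    a = np.exp(-np.outer(eps_samples, tau_grid))                        # (n_eps, n_tau)
    b = 2.0 * eps_samples[:, None] / (eps_samples[:, None] ** 2
                                      + omega_grid[None, :] ** 2)       # (n_eps, n_omega)
    omega_c_t, *_ = np.linalg.lstsq(a, b, rcond=None)                   # solves a @ X = b
    return omega_c_t.T                                                  # (n_omega, n_tau)
```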
The transformation from imaginary frequency to imaginary time only requires the (pseudo)inversion of $\Omega^{(c)}$ and $\Omega^{(s)}$, respectively. Our procedure to calculate $\Omega^{(c)}$ and $\Omega^{(s)}$ as well as $\mathcal{T}$ and $\mathcal{W}$ follows Kresse and coworkers.138, 139, 140 The technical specifications of our implementation have been described in the appendix of ref. 134. In all calculations we use grids of 24 points in imaginary time and imaginary frequency, which is more than sufficient for convergence.137

The final correlation energies are then extrapolated to the complete basis set limit using the relation141

$E_{CBS}=\frac{E_{QZ}\,4^{3}-E_{TZ}\,3^{3}}{4^{3}-3^{3}}\;,$ (51)

where $E_{QZ}$ ($E_{TZ}$) denotes the total energy at the QZ6P (TZ3P) level. The extrapolation scheme has been shown to be suitable for correlation consistent basis sets but cannot be used for KS or HF contributions.141, 142 Therefore, we do not extrapolate the DFT energies, but assume them to be converged at the QZ level. Since the basis set error is not completely eliminated with this approach, we also counterpoise correct all energies, taking into account 100 % of the counterpoise correction. With these settings, we assume all our calculated values to be converged well enough to be able to draw quantitative conclusions about the performance of the methods we benchmark herein. We use the _VeryGood_ numerical quality for integrals over real space and distance cut-offs. Dependency thresholds39 have been set to $5\cdot 10^{-4}$. All full configuration interaction (FCI) calculations have been performed with the code by Knowles and Handy.143, 144 The one- and two-electron integrals which are required as input have been generated with ADF.
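As a small illustration of this extrapolation step, the following snippet applies (51) to a pair of hypothetical correlation energies; the numbers are made up for demonstration and are not results from this work.

```python
def cbs_extrapolate(e_tz, e_qz, x_low=3, x_high=4):
    """Two-point inverse-cubic extrapolation of correlation energies, eq. (51)."""
    return (e_qz * x_high**3 - e_tz * x_low**3) / (x_high**3 - x_low**3)

# hypothetical correlation energies in Hartree
print(cbs_extrapolate(e_tz=-0.300, e_qz=-0.310))   # ~ -0.317
```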
## 4 Results

### Dissociation Curves

The potential energy curves of small diatomic molecules serve as an important test for electronic structure methods. We first consider molecules with different bonding types for which we were able to calculate FCI reference values: H2 is covalently bound, LiH is an ionic molecule, and He2 has a very weak, non-covalent bond.

Figure 2: Potential energy curves (in Hartree) of H2 (left) and LiH (middle), as well as the binding energy (in mHartree) of He2 as a function of the interatomic distance (right), using FCI, RPA@PBE and different variants of RPA+SOSEX@PBE. For H2 and He2, all calculations have been performed with the TZ3P basis set. For LiH, all calculations have been performed using the TZP basis set.

The dissociation curve of H2 calculated with RPA+SOSEX($W(0)$,$W(0)$)@PBE is the red line in figure 2. Our calculations are not converged with respect to the basis set size, but comparison of our dissociation curves calculated with RPA@PBE and RPA+SOSEX($W$,$v_{c}$)@PBE to the ones presented in refs. 112 and 145 clearly shows that their qualitative behavior is reproduced correctly. It is known that RPA describes the dissociation of covalently bonded molecules qualitatively correctly while RPA+SOSEX($W$,$v_{c}$) and other exchange-corrected RPA approaches fail.112, 145, 91 Here we find that RPA+SOSEX($W(0)$,$W(0)$) also dissociates the hydrogen molecule correctly and that the potential energy curve has a similar shape to the RPA one. Henderson and Scuseria have argued that the self-correlation in the RPA mimics static correlation effects,145 whose good description is necessary to dissociate H2 correctly. The fact that in RPA+SOSEX($W(0)$,$W(0)$) the self-correlation error is eliminated to a large extent (also see table 1 in the SI) but not completely therefore explains the similarity to the RPA dissociation curve. To rationalize this result further, we also calculated the dissociation curve within the static limit of RPA+SOSEX($W$,$v_{c}$), RPA+SOSEX($W(0)$,$v_{c}$) (blue curve). This shows that the screening of the second electron-electron interaction line is responsible for the qualitative differences between SOSEX($W$,$v_{c}$) and SOSEX($W(0)$,$W(0)$). It should also be noted that the RPA+SOSEX($W(0)$,$W(0)$) dissociation curve of H2 very closely resembles the one calculated by Bates and Furche using the approximate exchange kernel (AXK) correction to the RPA.94 SOSEX($W(0)$,$W(0)$) and the AXK kernel have in common that both electron-electron interaction lines are screened. For LiH, we find a similar behavior as for H2. For He2 (notice that we plotted here the binding energy and not the total energy) we see that all approaches give the correct dissociation limit.

| method | H2 | LiH | He2 | F2 | Be2 |
|---|---|---|---|---|---|
| exp. | | | | 1.413146 | 2.320147 |
| accurate | 0.741 | 1.601 | 2.626 | 1.413146 | 2.320148 |
| RPA | 0.742 | 1.597 | 2.632 | 1.437 | 2.403 |
| RPA + SOSEX($W(0),W(0)$) | 0.744 | 1.605 | 2.852 | 1.444 | 2.424 |
| RPA + SOSEX($W,v_{c}$) | 0.738 | 1.594 | 2.871 | 1.364 | – |
| RPA + SOSEX($W(0),v_{c}$) | 0.735 | 1.599 | 3.542 | 1.348 | – |

Table 1: Equilibrium bond lengths of selected molecules. All values are in $\AA$. The bond lengths for H2, He2, and LiH have been calculated using the TZ3P and TZP basis sets to make them comparable to the FCI result. The bond lengths for F2 and Be2 have been obtained using the QZ6P basis set. All RPA(+SOSEX) calculations have been performed with a PBE Green’s function.

From these potential energy curves, we also extracted the equilibrium bond lengths via cubic spline interpolation. These are shown in table 1. Around the equilibrium distances, RPA+SOSEX($W$,$v_{c}$) generally gives the best energies, but this does not necessarily translate into the best bond lengths. For the covalently bound molecules H2 and F2, as well as for LiH, RPA+SOSEX($W$,$v_{c}$) underestimates and RPA+SOSEX($W(0)$,$W(0)$) overestimates the bond lengths. Again, RPA+SOSEX($W(0)$,$W(0)$) behaves qualitatively similarly to RPA. For He2, both approaches give similar results, while RPA+SOSEX($W(0)$,$v_{c}$) fails completely. On the other hand, unlike RPA+SOSEX($W(0)$,$W(0)$), RPA+SOSEX($W$,$v_{c}$) does predict an unbound Be2 dimer.
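The bond lengths in table 1 were obtained by interpolating the computed points of each curve and locating the minimum; a minimal illustration of such a cubic-spline minimization (with made-up grid points and energies, not the data behind table 1) could look as follows:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

def equilibrium_bond_length(r, e):
    """Locate the minimum of a sampled potential energy curve via cubic splines."""
    spline = CubicSpline(r, e)
    res = minimize_scalar(spline, bounds=(r.min(), r.max()), method="bounded")
    return res.x

# illustrative data only
r = np.linspace(0.5, 1.5, 11)
e = 0.2 * (r - 0.74) ** 2 - 1.17        # crude model curve with a minimum near 0.74 Angstrom
print(equilibrium_bond_length(r, e))    # ~ 0.74
```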
### Dissociation of Charged Dimers

| | RPA | SOSEX($W$,$v_{c}$) | SOSEX($W(0)$,$v_{c}$) | SOSEX($W(0)$,$W(0)$) |
|---|---|---|---|---|
| H2+ | | | | |
| 1.0 | 5.19 | 0.76 | -2.58 | 3.09 |
| 1.25 | 7.59 | -0.26 | -5.33 | 5.19 |
| 1.50 | 11.21 | -1.31 | -8.23 | 8.89 |
| 1.75 | 16.15 | -2.30 | -11.14 | 14.27 |
| He2+ | | | | |
| 1.0 | 13.23 | 0.23 | -5.30 | 14.34 |
| 1.25 | 25.40 | -2.84 | -12.91 | 27.56 |
| 1.50 | 40.60 | -5.64 | -20.32 | 44.79 |
| 1.75 | 56.76 | -7.65 | -25.76 | 63.38 |
| (NH3)2+ | | | | |
| 1.0 | 5.89 | 15.17 | 24.91 | 16.23 |
| 1.25 | 13.00 | 20.08 | 36.23 | 33.50 |
| 1.50 | 20.61 | 21.89 | 42.78 | 50.41 |
| 1.75 | 30.88 | 15.14 | 28.73 | 61.48 |
| (H2O)2+ | | | | |
| 1.0 | 10.19 | 29.79 | 51.70 | 33.79 |
| 1.25 | 20.62 | 12.16 | 21.61 | 38.68 |
| 1.50 | 31.88 | 2.35 | 4.58 | 50.58 |
| 1.75 | 42.08 | 0.50 | 5.47 | 65.61 |
| MAD | 21.95 | 8.63 | 19.22 | 33.24 |

Table 2: Errors in kcal/mol for the charged dimers in the SIE4x4 benchmark set calculated with RPA and different variants of RPA+SOSEX. PBE orbitals have been used in all calculations.

In table 2 we investigate the dissociation of four charged dimers by means of the SIE4x4 dataset.149 Here, the self-correlation error of RPA leads to considerable underbinding,112, 145, 6 whereas RPA+SOSEX($W$,$v_{c}$) is exact,113 the remaining error for H2+ being due to basis set errors as well as the fact that PBE orbitals have been used. Furche and coworkers have observed a catastrophic failure of RPA+SOX for (NH3)2+ and (H2O)2+,150 and SOSEX($W(0)$,$W(0)$) also considerably deteriorates the RPA results for those systems. Only for H2+ does one find that the partial cancellation of the RPA self-correlation leads to small improvements over RPA.

### Thermochemistry and Kinetics

We move on to assess the performance of RPA+SOSEX($W(0)$,$W(0)$) for reaction types which are relevant for thermochemistry and kinetics. Total atomization energies, ionization potentials and electron affinities as well as barrier heights of different reactions serve as important testing grounds. For this work, we calculated the atomization energies (defined as the total energy of the molecule minus the sum of the energies of the atomic fragments) of the 144 small and medium molecules in the W4-11 dataset.151 The reference values have been calculated using the highly accurate W4 protocol.152 For barrier heights, we use the BH76 database, a compilation of the HTBH38153 and NHTBH38154 databases by Truhlar and coworkers, which is typically used in benchmarks of (beyond-)RPA methods.86, 5, 6, 87 The reference values have been calculated with the W2-F12 protocol.155, 149 To benchmark the performance for ionization potentials and electron affinities we employ the G21IP and G21EA databases by Pople and coworkers and use the original experimental reference values.156

Figure 3: Mean absolute deviations (MAD) (lower triangle in each plot) and maximum deviations (MAX) (upper triangle) with respect to the reference values as well as using different KS Green’s functions as input for BH76 (left), G21-IP (middle) and G21-EA (right) for RPA (top) and RPA+SOSEX($W(0)$,$W(0)$) (bottom). All values are in kcal/mol.

To start with, we assess the effect of the Green’s function $G^{s}$ used to calculate the correlation energies. RPA calculations can in principle be performed self-consistently using a variety of approaches88, 157, 158, 159, 160, 161, 162, 163, 164 (see ref. 165 for a review). This is rarely done in practice since self-consistent RPA calculations are computationally demanding and since the resulting energies are often worse than the ones evaluated using a Green’s function from a generalized gradient approximation (GGA) or hybrid calculation.159 GGAs like PBE or TPSS are often used to construct $G^{s}$.87, 48, 9 Using hybrid orbitals can be seen as a pragmatic way to compensate for the lack of self-consistency in the RPA calculation, and we therefore assess here whether they lead to improvements over GGA orbitals. For W4-11, the differences between different starting points are minor, but PBE tends to give the best results. For the BH76, G21IP, and G21EA datasets, we show mean absolute deviations (MAD) and maximum deviations (MAX) with respect to the reference values and with respect to the different starting points in figure 3. The RPA results generally improve with increasing amount of Fock exchange, while 25 % (PBE0) seems to work best for RPA+SOSEX($W(0)$,$W(0)$).
The differences are often substantial, for instance in case of the RPA barrier heights (fig. 3a) or the RPA+SOSEX($W(0)$,$W(0)$) electron affinities (fig. 3f). For charged excitations, this observation aligns very well with the experience from $G_{0}W_{0}$ calculations, where hybrid functionals with 25–50 % of exact exchange are typically a much better starting point than GGAs.166, 167 However, when $G3W2$ corrections are added to the $G_{0}W_{0}$ QP energies, using a hybrid functional with a smaller fraction of exact exchange might often be beneficial.168, 114 For barrier heights, hybrid functionals with a larger fraction of exact exchange are usually required to obtain qualitatively correct results,149, 169 and it is therefore not surprising that hybrid orbitals serve as a suitable starting point for RPA calculations.

Figure 4: Errors of RPA@PBE and RPA+SOSEX($W$,$v_{c}$)@PBE for the atomization energies in the W4-11 dataset. Black dots denote the individual data points and the horizontal line in each box denotes the median deviation. The box contains all data points between the first quartile ($Q1$) and the third quartile ($Q3$) and the whiskers are at $Q1\pm|Q1-Q3|$ (in case of a normal distribution, the whiskers include 99.3 % of all data points). All values are in kcal/mol.

Our atomization energies for the W4-11 dataset are shown in figure 4. It has first been observed by Furche170 that RPA underestimates atomization energies (indicated here by negative errors). This has later been confirmed by Ren et al.6 and Paier et al.86 for the 55 covalently bound molecules in the G2-I set.156 The same holds for RPA+SOSEX($W$,$v_{c}$), but compared to RPA the magnitude of the error is reduced on average.86, 6 We observe here that unlike SOSEX($W,v_{c}$), the addition of SOSEX($W(0),W(0)$) substantially overcorrects the RPA atomization energies, which are now much too high in magnitude. Notice that our non-counterpoise corrected calculations based on (T,Q) extrapolation will still include a sizable basis set incompleteness error for atomization energies. However, our qualitative conclusions will be valid. Adding bare SOX to RPA leads to underestimated correlation energies.52 This effect is expected to be more pronounced for the molecule than for the individual atoms since more electrons are correlated in the former. Therefore, RPA+SOX will substantially overestimate atomization energies, and due to the underestimated screening of the SOX term in SOSEX($W(0),W(0)$), RPA+SOSEX($W(0),W(0)$) inherits this problem.

Figure 5: Errors of RPA@PBE and different RPA+SOSEX variants for barrier heights (BH76, left), ionization potentials (G21-IP, middle) and electron affinities (G21-EA, right). For an explanation of the boxplots, see the caption of figure 4. All values are in kcal/mol.
As also shown in more detail in figure 5, the performance of RPA+SOSEX($W(0)$,$W(0)$) is in all cases comparable to RPA+SOSEX($W$,$v_{c}$), for which the trends presented here are well known:112, 172, 5, 6, 87 RPA+SOSEX($W$,$v_{c}$) fails for barrier heights, where the inclusion of renormalized singles excitations is necessary to obtain good results,86, 6, 87 and works very well for charged excitations.6, 5 We note that RPA+SOSEX($W(0)$,$W(0)$)@PBE0 performs very well for charged excitations, with an accuracy challenging modern double-hybrid functionals.149

### Non-covalent Interactions

#### S66 Interaction Energies

Figure 6: Mean absolute deviations (MAD) (lower triangle in each plot) and maximum deviations (MAX) (upper triangle) for a) RPA, b) SOSEX($W(0),W(0)$) and c) SOSEX($W(0),v_{c}$) interaction energies for the S66 database using different KS Green’s functions as well as with respect to the CCSD(T) reference values (ref.). All values are in kcal/mol.

We now turn to our benchmark results for non-covalent interactions. As for the previous datasets, we also assess the dependence of RPA and RPA+SOSEX correlation energies on the choice of the KS Green’s function $G^{s}$. In figure 6 the interaction energies in the S66 database173 obtained using different $G^{s}$ are compared to each other as well as to the CCSD(T) reference values by Hobza and coworkers.173 All values have been obtained using a single integration point for the $\lambda$-integral. As shown in the supporting information, a few outliers aside, the errors arising from this approximation are small for non-covalent interactions. RPA and RPA+SOSEX($W(0),W(0)$) are similarly insensitive to the choice of the KS Green’s function, with MADs between the different functionals of 0.20 to 0.39 kcal/mol. However, individual values can differ by almost 2 kcal/mol, which is a sizable difference, given that the largest interaction energies in the S66 database are only of the order of 20 kcal/mol. The performance of RPA compared to the CCSD(T) reference is rather insensitive to the KS Green’s function, even though the hybrid starting points lead to slightly better results. With 0.52 kcal/mol, the MAD for RPA@PBE is in excellent agreement with the 0.61 kcal/mol MAD obtained by Nguyen _et al._ in ref. 48, which has been obtained with GTO-type basis sets and 50 % counterpoise correction instead of 100 %. This shows that our interaction energies are well converged with respect to the basis set size. The RPA+SOSEX($W(0),W(0)$) results are much better using the hybrid functionals than with PBE. RPA+SOSEX($W,v_{c}$)@PBE is slightly more accurate than RPA+SOSEX($W,v_{c}$)@PBE0, but unlike for the datasets discussed before, the differences between the different starting points are negligibly small. Also, the dependence of SOSEX($W,v_{c}$) on the starting point is smaller than for SOSEX($W(0),W(0)$).

Figure 7: Deviations of RPA@PBE0 and both RPA+SOSEX@PBE0 variants for the S66 database with respect to the CCSD(T) reference. All values are in kcal/mol.

| Method | S66 [$\frac{kcal}{mol}$] | S66 [%] | hydr. bond [$\frac{kcal}{mol}$] | hydr. bond [%] | dispersion [$\frac{kcal}{mol}$] | dispersion [%] | mixed [$\frac{kcal}{mol}$] | mixed [%] |
|---|---|---|---|---|---|---|---|---|
| SOSEX($W(0),W(0)$)@PBE0 | 0.32 | 7.28 | 0.45 | 5.76 | 0.29 | 10.33 | 0.21 | 5.50 |
| SOSEX($W(0),v_{c}$)@PBE0 | 0.28 | 6.88 | 0.30 | 3.42 | 0.34 | 11.77 | 0.20 | 5.25 |
| SOSEX($W,v_{c}$)@PBE0 | 0.29 | 6.85 | 0.31 | 3.39 | 0.33 | 11.63 | 0.21 | 5.33 |
| SOSEX($W,v_{c}$)@PBE | 0.26 | 6.25 | 0.23 | 3.51 | 0.33 | 10.16 | 0.17 | 4.26 |
| RPA | 0.46 | 11.54 | 0.55 | 7.19 | 0.47 | 17.74 | 0.34 | 9.41 |
| PBE0-D3(BJ) | 0.28 | 5.09 | 0.47 | 4.80 | 0.18 | 5.09 | 0.18 | 5.42 |
| DSD-PBE-P86-D3(BJ) | 0.23 | 5.07 | 0.31 | 3.71 | 0.21 | 6.99 | 0.16 | 4.43 |

Table 3: MADs (absolute and in %) of different electronic structure methods with respect to the CCSD(T) reference values for the whole S66 database and for its subcategories.

Figure 7 shows the deviations of RPA and both RPA+SOSEX variants with respect to CCSD(T) for all data points in the S66 database. MADs and mean absolute percentage deviations (MAPD) for the whole database as well as for the individual categories are presented in table 3. The interactions of the first 22 complexes in the database are dominated by hydrogen bonds, which are predominantly of electrostatic origin.131 The next 22 complexes are mostly bound by dispersion interactions, and the remaining interactions are of mixed nature.173 It is useful to distinguish between these different interaction patterns in the following comparison. For the whole database, RPA gives a MAPD of 11.5 %, and the SOSEX corrections sizably reduce the MAPDs with respect to the CCSD(T) reference values to between 6.3 % and 7.3 %. SOSEX($W,v_{c}$) outperforms SOSEX($W(0),W(0)$) by far for the hydrogen-bonded complexes, and is even slightly more accurate than the double hybrid DSD-PBE-P86-D3(BJ),175 one of the best double-hybrid functionals for weak interactions.176 For dispersion interactions, the performance of SOSEX($W(0),W(0)$) and SOSEX($W,v_{c}$) is comparable. Here, the empirically dispersion-corrected177, 178 functionals, the hybrid PBE0-D3(BJ) and DSD-PBE-P86-D3(BJ), are much more accurate than all MBPT-based methods.

A few exceptions aside, fig. 7 shows that RPA understabilizes the complexes in the S66 database (indicated by positive errors). SOSEX corrections lower the interaction energies, i.e. the complexes are predicted to be more stable. SOSEX($W,v_{c}$) shows a tendency to overstabilize the hydrogen-bonded complexes. For these systems, the RPA+SOSEX($W(0),W(0)$) energies are almost identical to the ones from RPA. Also, the sizable differences shown in figure 7 between SOSEX($W,v_{c}$) (green points) and its static limit with only a single screened interaction line (blue points) make clear that dynamical screening effects are important for the hydrogen-bonded complexes. As can be seen from the MAPD in table 3, this does, however, not improve agreement with the CCSD(T) reference values. For the dispersion-bound complexes, there are only negligible differences between both variants, demonstrating that the dynamical variations of the screening average out. For the last 22 complexes in the database the differences are slightly larger. In all cases, dressing the second electron-electron interaction line does not alter the results decisively.
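For reference, the error statistics quoted throughout this section (MAD, MAX and MAPD) are straightforward to reproduce from per-system deviations; a small sketch with hypothetical numbers (not data from this work):

```python
import numpy as np

def error_statistics(calc, ref):
    """Mean absolute deviation, maximum absolute deviation and
    mean absolute percentage deviation of calc with respect to ref."""
    calc, ref = np.asarray(calc, float), np.asarray(ref, float)
    err = calc - ref
    mad = np.mean(np.abs(err))
    mx = np.max(np.abs(err))
    mapd = 100.0 * np.mean(np.abs(err / ref))
    return mad, mx, mapd

# hypothetical interaction energies in kcal/mol
print(error_statistics(calc=[-4.8, -9.6, -1.4], ref=[-5.0, -9.2, -1.5]))
```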
#### S66x8 Interaction Energies

The S66x8 dataset contains the complexes in the S66 database at 8 different geometries.173 The separations of the monomers in the complexes are given relative to their equilibrium distances, i.e. a relative separation of 2.0 means that the monomer separation in the complex is twice as large as the equilibrium separation. For our assessment of the SOSEX($W(0),W(0)$) correction, we divide the separations of the potential energy curves into three regions, which we denote as short (equilibrium distance scaled by a factor of 0.9–0.95), middle (1.0–1.25) and long (1.5–2.0). All RPA(+SOSEX) calculations discussed here have been performed using a PBE0 Green’s function.

Figure 8: MADs (in percent) for the S66x8 database with respect to the CCSD(T) reference values for RPA, RPA+SOSEX$(W,v_{c})$ and RPA+SOSEX$(W(0),W(0))$. MADs are shown separately for the whole database (columns on the left) and for different monomer-monomer separations.

| | short [%] | middle [%] | long [%] |
|---|---|---|---|
| SOSEX$(W,v_{c})$ | 35.2 | 42.8 | 13.5 |
| SOSEX$(W(0),W(0))$ | 31.0 | 37.9 | 19.1 |

Table 4: Relative improvements obtained with different SOSEX variants over RPA for different groups of monomer-monomer separations.

Figure 9: Upper plots: three RPA+SOSEX($W$,$v_{c}$)@PBE0 potential energy curves in the S66x8 database (green), separated into correlation contributions (yellow) and the HF energy (evaluated with PBE0 orbitals). Lower plots: decomposition of the correlation energies into RPA and SOSEX($W$,$v_{c}$) contributions. All values are in kcal/mol.

The results of our comparison are shown in figure 8, where the MAPDs with respect to CCSD(T) for the whole database as well as for the scaled monomer-monomer separations are shown. For the whole database, the average relative deviations with respect to the reference values are larger than for S66. With improvements of between 31 and 43 %, both SOSEX corrections lead to sizable improvements over the RPA in the short and middle regimes. For large monomer-monomer separations, the improvements become much smaller, with 14 % for SOSEX$(W,v_{c})$ and 19 % for SOSEX$(W(0),W(0))$. This can be rationalized by observing that for large monomer-monomer separations the correlation contributions to the interaction energies quickly decay to zero. This is shown in figure 9, where we have plotted three of the RPA+SOSEX($W$,$v_{c}$) potential energy curves in the S66x8 database (green curves in the upper plots) and separated out the correlation contributions (the green curves are the sums of the red and yellow curves). The lower plots separately show the RPA and SOSEX($W$,$v_{c}$) contributions to the correlation energy differences. In all three plots, the potential energy curves are dominated by the difference of the correlation energy of the dimer and the sum of the correlation energies of the monomers. Therefore, the approximation used for the calculation of the correlation energy plays a large role. However, this difference quickly goes to zero for larger separations. At twice the equilibrium distance, the correlation contributions to the potential energy curves are almost zero in all three considered examples. Therefore, the expression used for the correlation energy becomes less and less important with increasing monomer separation. This argument also holds if one expresses the contributions in % of the total interaction energy.
One would expect the SOSEX contribution to decay faster than the RPA one, since the former is of exchange nature and therefore fundamentally short-ranged.52 However, the plots in the lower part of figure 9 show that this is only the case for the potential energy curve on the right, but not for the two curves on the left, where the SOSEX and RPA contributions seem to decay equally fast.

## 5 Conclusions

The accuracy of the RPA can in principle be improved by including vertex corrections in the self-energy. This can be done either directly, or indirectly through the solution of the BSE. Including the first-order vertex in the self-energy, different variants of SOSEX are obtained. These are the well-known AC-SOSEX, herein termed SOSEX$(W,v_{c})$, first introduced by Jansen _et al._,111 in which only one of the Coulomb interaction lines is dynamically screened, as well as an energy expression which is obtained from the statically screened $G3W2$ correction to the $GW$ self-energy.115, 114 This energy expression has already been introduced in our earlier work,114 albeit without a rigorous derivation. In particular, we implicitly assumed that the integral over the coupling strength is evaluated using a trapezoidal rule. Here, we have derived this expression (referred to as SOSEX$(W(0),W(0))$ in this work) taking into account its $\lambda$-dependence and have highlighted the differences to SOSEX$(W,v_{c})$.

We have then assessed the accuracy of the SOSEX$(W(0),W(0))$ correction to RPA correlation energies for a wide range of chemical problems including bond dissociation, thermochemistry, kinetics, and non-covalent interactions. The main conclusion we can draw from our work is that in situations where the addition of SOSEX$(W,v_{c})$ leads to major improvements over the RPA, the addition of SOSEX$(W(0),W(0))$ does as well. This is the case for the calculation of ionization potentials and electron affinities, where RPA+SOSEX approaches challenge the accuracy of modern double-hybrid functionals.149 Also for non-covalent interactions, both SOSEX variants lead to the same substantial improvements over RPA. SOSEX$(W,v_{c})$ is most accurate for the hydrogen-bonded complexes, while SOSEX$(W(0),W(0))$ is slightly more accurate for dispersion interactions. We also showed that the frequency dependence of the screened interactions does seem to be an important factor for hydrogen bonding but not for dispersion interactions.

Differences between both SOSEX variants have been observed in the dissociation of diatomic molecules. Like RPA, and unlike RPA+SOSEX($W$,$v_{c}$),112, 145 RPA+SOSEX$(W(0),W(0))$ dissociates the hydrogen molecule correctly. RPA does so because the self-correlation error effectively describes static correlation.145 The situation seems to be similar for RPA+SOSEX$(W(0),W(0))$ since, in contrast to RPA+SOSEX($W$,$v_{c}$), it is not completely self-correlation free for one-electron systems. We have also shown that this qualitative difference is due to the screening of the second electron-electron interaction line. The incomplete cancellation of the self-correlation error does, however, negatively affect the dissociation of charged dimers, for which RPA+SOSEX($W$,$v_{c}$) fixes most of the deficiencies of RPA.112, 150 Here, RPA+SOSEX$(W(0),W(0))$ performs even worse than RPA. Furthermore, the good dissociation of diatomic molecules does not automatically carry over to accurate barrier heights,153, 154 where both SOSEX variants considerably worsen the RPA results.
In summary, our results suggest that the statically screened SOSEX is a suitable alternative to dynamically screened SOSEX. While both formally scale as $N^{5}$ with system size, the computation of the SOSEX$(W,v_{c})$ correction requires a numerical imaginary frequency integration. The calculation of the SOSEX($W(0),W(0)$) correction is therefore much cheaper, comparable to MP2. MP2 is however inadequate for large molecules since it neglects screening effects entirely.1, 48 RPA+SOSEX$(W(0),W(0))$ is in principle applicable also to large molecules. A stochastic linear scaling implementation of the SOSEX self-energy has already been developed179 and a recent RPA+SOSEX implementation by Ochsenfeld and co-workers180 allowed applications to the L7 dataset,181 albeit with small basis sets. Other low- scaling MP2 implementations182, 183, 184 could potentially be generalized to SOSEX as well. Finally, it should be mentioned that the accuracy of the dynamically screened SOSEX correction to the RPA can be improved upon by the addition of renormalized single excitations.6, 87 Other methods which have been shown to outperform SOSEX, in particular for barrier heights, are the AXK kernel method94, 150, 185 or a SOSEX variant in which the terms of RPA and SOSEX beyond second order in $v_{c}$ are scaled down.185 It remains to be investigated whether the concept of static screening can also be combined with those approaches and leads to good results. This research received funding (project number 731.017.417) from the Netherlands Organization for Scientific Research (NWO) in the framework of the Innovation Fund for Chemistry and from the Ministry of Economic Affairs in the framework of the “ _TKI/PPS-Toeslagregeling_ ”. We thank Mauricio Rodríguez- Mayorga and Timothy Daas for fruitful discussions. Derivation of (6), discussion of the crossing symmetries of the 4-point vertices, assessment of the dependence of the $\lambda$-integration on the number of Gauss-Legendre points for the S66 database, .csv files with all calculated energies at the extrapolated CBS TZ3P, and QZ6P level and explanations of those files. ## References * Macke 1950 Macke, W. Über die Wechselwirkungen im Fermi-Gas. _Zeitschrift fur Naturforsch._ 1950, _5_ , 192–208 * Bohm and Pines 1953 Bohm, D.; Pines, D. A collective description of electron interactions: III. Coulomb interactions in a degenerate electron gas. _Phys. Rev._ 1953, _92_ , 609–625 * Heßelmann and Görling 2011 Heßelmann, A.; Görling, A. Random-phase approximation correlation methods for molecules and solids. _Mol. Phys._ 2011, _109_ , 2473–2500 * Eshuis and Furche 2011 Eshuis, H.; Furche, F. A parameter-free density functional that works for noncovalent interactions. _J. Phys. Chem. Lett._ 2011, _2_ , 983–989 * Eshuis et al. 2012 Eshuis, H.; Bates, J. E.; Furche, F. Electron correlation methods based on the random phase approximation. _Theor. Chem. Acc._ 2012, _131_ , 1084 * Ren et al. 2012 Ren, X.; Rinke, P.; Joas, C.; Scheffler, M. Random-phase approximation and its applications in computational chemistry and materials science. _J. Mater. Sci._ 2012, _47_ , 7447–7471 * Chen et al. 2017 Chen, G. P.; Voora, V. K.; Agee, M. M.; Balasubramani, S. G.; Furche, F. Random-Phase Approximation Methods. _Annu. Rev. Phys. Chem._ 2017, _68_ , 421–445 * Chedid et al. 2018 Chedid, J.; Ferrara, N. M.; Eshuis, H. Describing transition metal homogeneous catalysis using the random phase approximation. _Theor. Chem. Acc._ 2018, _137:158_ , 1–11 * Kreppel et al. 
2020 Kreppel, A.; Graf, D.; Laqua, H.; Ochsenfeld, C. Range-Separated Density-Functional Theory in Combination with the Random Phase Approximation: An Accuracy Benchmark. _J. Chem. Theory Comput._ 2020, _16_ , 2985–2994 * Modrzejewski et al. 2020 Modrzejewski, M.; Yourdkhani, S.; Klimeš, J. Random Phase Approximation Applied to Many-Body Noncovalent Systems. _J. Chem. Theory Comput._ 2020, _16_ , 427–442 * Langreth and Perdew 1975 Langreth, D. C.; Perdew, J. P. The Exhange-Correlation Energy of a metalic surface. _Solid State Commun._ 1975, _17_ , 1425–1429 * Langreth and Perdew 1977 Langreth, D. C.; Perdew, J. P. Exchange-correlation energy of a metallic surface: Wave-vector analysis. _Phys. Rev. B_ 1977, _15_ , 2884–2901 * Harris and Griffin 1975 Harris, J.; Griffin, A. Correlation energy and van der Waals interaction of coupled metal films. _Phys. Rev. B_ 1975, _11_ , 3669–3677 * Coester 1958 Coester, F. Bound states of a many-particle system. _Nucl. Phys._ 1958, _7_ , 421–424 * Coester and Kümmel 1960 Coester, F.; Kümmel, H. Short-range correlations in nuclear wave functions. _Nucl. Phys._ 1960, _17_ , 477–485 * Čížek 1966 Čížek, J. On the Correlation Problem in Atomic and Molecular Systems. Calculation of Wavefunction Components in Ursell-Type Expansion Using Quantum-Field Theoretical Methods. _J. Chem. Phys._ 1966, _45_ , 4256–4266 * Čížek 1969 Čížek, J. On the Use of the Cluster Expansion and the Technique of Diagrams in Calculations of Correlation Effects in Atoms and Molecules. _Adv. Chem. Physics, Vol. XIV_ 1969, _XIV_ , 35–89 * Paldus et al. 1972 Paldus, J.; Čížek, J.; Shavitt, I. Correlation Problems in Atomic and Molecular Systems. IV. Extended Coupled-Pair Many-Electron Theory and Its Application to the BH3 Molecule. _Phys. Rev. A_ 1972, _5_ , 50–67 * Scuseria et al. 2008 Scuseria, G. E.; Henderson, T. M.; Sorensen, D. C. The ground state correlation energy of the random phase approximation from a ring coupled cluster doubles approach. _J. Chem. Phys._ 2008, _129_ , 231101 * Scuseria et al. 2013 Scuseria, G. E.; Henderson, T. M.; Bulik, I. W. Particle-particle and quasiparticle random phase approximations: Connections to coupled cluster theory. _J. Chem. Phys._ 2013, _139_ , 104113 * Abrikosov et al. 1975 Abrikosov, A. A.; Gorkov, P. L.; Dzyaloshinski, I. E. _Methods of quantum field theory in statistical physics_ ; Dover Publications INC. New York, 1975 * Mattuck 1992 Mattuck, R. D. _A Guide to Feynman Diagrams in the Many-body Problem_ , 2nd ed.; Dover Publications INC. New York, 1992 * Bruus and Flensberg 2004 Bruus, H.; Flensberg, K. _Many-Body Quantum Theory in Condensed Matter Physics: An Introduction_ ; OUP Oxford: Oxford, 2004 * Martin et al. 2016 Martin, R. M.; Reining, L.; Ceperley, D. M. _Interacting electrons_ ; Cambridge University Press, 2016 * Klein 1961 Klein, A. Perturbation theory for an infinite medium of fermions. _Phys. Rev._ 1961, _121_ , 950–956 * Luttinger and Ward 1960 Luttinger, J. M.; Ward, J. C. Ground-state energy of a many-fermion system. II. _Phys. Rev._ 1960, _118_ , 1417–1427 * Kohn and Sham. L. J. 1965 Kohn, W.; Sham. L. J., Self-Consistent Equations Including Exchange and Correlation Effects. _Phys. Rev._ 1965, _140_ , A1133 * Hohenberg and Kohn 1964 Hohenberg, P.; Kohn, W. Inhomogeneous Electron Gas. _Phys. Rev._ 1964, _136_ , 864–871 * Casida 1995 Casida, M. E. 
# Decentralized Signal Temporal Logic Control for Perturbed Interconnected Systems via Assume-Guarantee Contract Optimization

Kasra Ghasemi, Sadra Sadraddini, and Calin Belta

This work was partially supported by the NSF under grants IIS-2024606 and IIS-1723995. K. Ghasemi and C. Belta are with the Division of System Engineering, Boston University, Boston, MA 02215, USA <EMAIL_ADDRESS> <EMAIL_ADDRESS> S. Sadraddini is with the Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA <EMAIL_ADDRESS>

###### Abstract

We develop a novel decentralized control method for a network of perturbed linear systems with dynamical couplings subject to Signal Temporal Logic (STL) specifications. We first transform the STL requirements into set containment problems, and then we develop controllers to solve these problems. Our approach is based on treating the couplings between subsystems as disturbances, which are bounded sets that the subsystems negotiate in the form of parametric assume-guarantee contracts. The set containment requirements and parameterized contracts are added to the subsystems’ constraints. We introduce a centralized optimization problem to derive the contracts, reachability tubes, and decentralized closed-loop control laws. We show that, when the STL formula is separable with respect to the subsystems, the centralized optimization problem can be solved in a distributed way, which scales to large systems. We present formal theoretical guarantees on the robustness of STL satisfaction. The effectiveness of the proposed method is demonstrated via a power network case study.

## I INTRODUCTION

Multi-agent systems benefit from decentralized control laws that require only local information. The online computational burden is reduced because each subsystem implements its own control law. Furthermore, communication issues are mitigated because agents do not need to constantly share information. Multi-agent control has drawn considerable interest in control theory, particularly over the last two decades [22, 6, 9]. For multi-agent systems with hard constraints and uncertainties, rigorous mathematical tools are required to reason about the closed-loop performance of the aggregate system. Formal methods provide mathematical guarantees for the behavior of control systems. Formal languages, such as temporal logics [1], can be used to describe system specifications. With particular relevance to this work, Signal Temporal Logic (STL) [18] can describe a broad range of temporally bounded constraints. For example, the STL formula $\psi={\textbf{F}}_{[0,10]}(x>10)\vee{\textbf{G}}_{[5,20]}(x<0)$ reads in plain English “ _eventually_ between time 0 and 10 the value of $x$ exceeds 10, or _always_ between 5 and 20 the value of $x$ remains below 0”. The use of formal methods in multi-agent systems has also been investigated [23, 16, 15]. However, with only one exception [15], these works studied only dynamically decoupled agents, and none of them took into account the presence of additive disturbances. A related approach in formal methods is based on set-valued dynamics. Analyzing such systems enables characterizing all possible responses in the presence of bounded uncertainties. Reachability analysis and correct-by-design control synthesis, which guarantee correctness without the need for system testing, have received considerable attention in recent years [11, 17, 8]. Formal methods come with a high computational cost, which makes it challenging to apply them to multi-agent systems.
That is especially true when we consider systems with disturbances and want to guarantee the satisfaction of a temporal logic specification under all allowed disturbances. Divide-and-conquer techniques are a natural way to break the problem into smaller pieces. They can be applied to interconnected systems, where the dynamics of the agents are coupled. Assume-guarantee contracts [4] formalize the promises that subsystems make to one another over their dynamical couplings. For instance, assume-guarantee contracts were used to describe vehicular flow between neighborhoods of a traffic network [13], aircraft power distributions [21], and the dynamics of an aerial robot tethered to a ground one [19]. In this paper, we study the problem of decentralized control design for interconnected perturbed linear systems subject to STL constraints. Unlike approaches that assume that feasible assume-guarantee contracts are given a priori [20, 5], we parameterize the contracts and search for feasibility. Unlike the search methods in [13, 14], our parameterization, which is based on our prior work [10], has a special convexity property that leads to a tractable solution. The approach in [19] also parameterized contracts and found them using convex optimization, but was limited to polytopic invariant sets. Here we include complex, non-convex STL constraints, and retain the parameterization from [10]. The main contributions of this paper are as follows:

1. By fixing the “logical behavior” through solving a mixed-integer program, we are able to convert the STL specifications into set containment problems. Then, a linear program is proposed to jointly optimize assume-guarantee contracts, set-valued trajectories, and decentralized closed-loop control laws. This allows steering the aggregate system so that the global STL formula is satisfied, while disturbances are rejected in a decentralized manner. The resulting bounds are computed using assume-guarantee contracts and are connected to the STL robustness score, a signed distance to satisfaction.
2. When the given STL formula is separable with respect to the subsystems, we provide a method to make the contribution above computationally more tractable for large networks by making it compositional. We use the convexity properties in [10] to optimize contracts, reachability sets, and controllers in a distributed way.

The rest of the paper is organized as follows. We first provide the notation and the necessary background in Section II. We state the problem in Section III. The solution is provided in Sections IV and V. Finally, an illustrative example is shown in Section VI.

## II Notations and Preliminaries

### II-A Notation

$\mathbb{R}$, $\mathbb{R}_{+}$, and $\mathbb{N}$ stand for the sets of real, non-negative real, and non-negative integer numbers, respectively; $\mathbb{N}_{h}$ represents the set of non-negative integers up to $h\in\mathbb{N}$. An $h$-dimensional box is defined as $\mathbb{B}_{h}:=\\{b\in\mathbb{R}^{h}|||b||_{\infty}\leq 1\\}$. $\mathbb{S}_{1}\oplus\mathbb{S}_{2}:=\\{s_{1}+s_{2}|s_{1}\in\mathbb{S}_{1},s_{2}\in\mathbb{S}_{2}\\}$ is the Minkowski sum of two sets $\mathbb{S}_{1}$ and $\mathbb{S}_{2}$.
The directed Hausdorff distance $d_{DH}(\mathbb{S}_{1},\mathbb{S}_{2})$ is a quantitative measure of how far $\mathbb{S}_{2}$ is from being a subset of $\mathbb{S}_{1}$, and it can be computed as: $d_{DH}(\mathbb{S}_{1},\mathbb{S}_{2}):=\sup_{s_{2}\in\mathbb{S}_{2}}\inf_{s_{1}\in\mathbb{S}_{1}}d(s_{1},s_{2}),$ (1) where $d:\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}_{+}$ is a metric. For compact sets, $d_{DH}(\mathbb{S}_{1},\mathbb{S}_{2})=0$ if and only if $\mathbb{S}_{2}\subseteq\mathbb{S}_{1}$. The Cartesian product of sets $\mathbb{S}_{1}$ and $\mathbb{S}_{2}$ is denoted by $\mathbb{S}_{1}\times\mathbb{S}_{2}$, and the Cartesian product of $\mathbb{S}_{1},\cdots,\mathbb{S}_{N}$ by $\prod_{i=1}^{N}\mathbb{S}_{i}$. $I_{n}$, $0_{n}$, and $[A_{1},A_{2}]$ represent the $n\times n$ identity matrix, the $n$-dimensional zero vector, and the horizontal concatenation of matrices $A_{1}$, $A_{2}$ with the same number of rows, respectively.

### II-B Zonotopes

A zonotope is a centrally symmetric set representation defined as $\mathcal{Z}(c,G):=\\{c+Gb|\forall b\in\mathbb{B}_{q}\\}$, where $c\in\mathbb{R}^{n}$ and $G\in\mathbb{R}^{n\times q}$ $(n,q\in\mathbb{N})$ denote the zonotope’s center and generator matrix, respectively. The order of the zonotope is equal to $\dfrac{q}{n}$. Zonotopes are convenient for set calculations, such as Minkowski sums and linear transformations. Given two sets $\mathbb{S}_{1}=\mathcal{Z}(c_{1},G_{1})$ and $\mathbb{S}_{2}=\mathcal{Z}(c_{2},G_{2})$, a matrix $A\in\mathbb{R}^{m\times n}$, and a vector $b\in\mathbb{R}^{m}$, where $c_{1},c_{2}\in\mathbb{R}^{n}$, $G_{1}\in\mathbb{R}^{n\times q_{1}}$, and $G_{2}\in\mathbb{R}^{n\times q_{2}}$, we have $\mathbb{S}_{1}\oplus\mathbb{S}_{2}=\mathcal{Z}(c_{1}+c_{2},[G_{1},G_{2}])$ and $A\mathbb{S}_{1}+b=\mathcal{Z}(Ac_{1}+b,AG_{1})$.

### II-C Specifications

Signal Temporal Logic (STL) was introduced in [18] to specify Boolean and temporal properties of real-valued time signals. A discrete-time signal is a function $s:\mathbb{N}\rightarrow\mathbb{R}^{q}$. We use $(s,[t_{1},t_{2}])$ to denote the sequence $s(t_{1}),...,s(t_{2})$ and $(s,t)$ for $(s,[t,\infty])$. An STL formula is defined with the following recursive grammar: $\varphi::=\pi|\neg\varphi|\varphi\wedge\psi|\varphi\vee\psi|\textbf{F}_{[t_{1},t_{2}]}\varphi|\textbf{G}_{[t_{1},t_{2}]}\varphi|\varphi\textbf{U}_{[t_{1},t_{2}]}\psi$ (2) where $\pi$ is a predicate. All predicates are assumed to be linear, in the form $p(s)\leq c$ or $p(s)\geq c$, with $c$ being a scalar and $p:\mathbb{R}^{q}\rightarrow\mathbb{R}$ being a linear function. The symbols $\neg$, $\wedge$, and $\vee$ denote Boolean negation, conjunction, and disjunction, respectively; $\textbf{F}_{[t_{1},t_{2}]}$, $\textbf{G}_{[t_{1},t_{2}]}$, and $\textbf{U}_{[t_{1},t_{2}]}$ are temporal operators for “eventually”, “always”, and “until”, respectively. Also, $(s,t)\models\varphi$ denotes that signal $s$ satisfies formula $\varphi$ at time $t$, and $(s,t)\nvDash\varphi$ if this is not the case.
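Before turning to the formal STL semantics below, the zonotope operations of Section II-B are straightforward to implement. The following is a minimal numpy sketch for illustration only; the class name and fields are our assumptions, not part of the paper:

```python
import numpy as np

class Zonotope:
    """Z(c, G) = { c + G b : ||b||_inf <= 1 }."""
    def __init__(self, c, G):
        self.c = np.asarray(c, dtype=float)   # center, shape (n,)
        self.G = np.asarray(G, dtype=float)   # generator matrix, shape (n, q)

    def minkowski_sum(self, other):
        # Z(c1, G1) + Z(c2, G2) = Z(c1 + c2, [G1, G2])
        return Zonotope(self.c + other.c, np.hstack([self.G, other.G]))

    def affine_map(self, A, b=None):
        # A Z(c, G) + b = Z(A c + b, A G)
        b = np.zeros(A.shape[0]) if b is None else np.asarray(b, dtype=float)
        return Zonotope(A @ self.c + b, A @ self.G)

    def interval_hull(self):
        # componentwise bounds c - sum_i |g_i| <= z <= c + sum_i |g_i|
        r = np.sum(np.abs(self.G), axis=1)
        return self.c - r, self.c + r
```

The `interval_hull` bounds are the same componentwise bounds used later in Section IV-B to turn active predicates into linear constraints.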
###### Definition 1

The satisfaction of a formula by a signal $s$ at time $t$ is defined as follows:

  * • $(s,t)\models(p(s)\geq c)\Leftrightarrow p(s(t))\geq c$,
  * • $(s,t)\models(p(s)\leq c)\Leftrightarrow p(s(t))\leq c$,
  * • $(s,t)\models\neg\varphi\Leftrightarrow(s,t)\nvDash\varphi$,
  * • $(s,t)\models\varphi_{1}\wedge\varphi_{2}\Leftrightarrow(s,t)\models\varphi_{1}\wedge(s,t)\models\varphi_{2}$,
  * • $(s,t)\models\varphi_{1}\vee\varphi_{2}\Leftrightarrow(s,t)\models\varphi_{1}\vee(s,t)\models\varphi_{2}$,
  * • $(s,t)\models\textbf{G}_{[t_{1},t_{2}]}\varphi\Leftrightarrow\forall t^{\prime}\in[t_{1},t_{2}],(s,t^{\prime})\models\varphi$,
  * • $(s,t)\models\textbf{F}_{[t_{1},t_{2}]}\varphi\Leftrightarrow\exists t^{\prime}\in[t_{1},t_{2}],(s,t^{\prime})\models\varphi$,
  * • $(s,t)\models\varphi_{1}\textbf{U}_{[t_{1},t_{2}]}\varphi_{2}\Leftrightarrow\exists t^{\prime}\in[t_{1},t_{2}],(s,t^{\prime})\models\varphi_{2}\wedge\forall t^{\prime\prime}\in[t_{1},t^{\prime}],(s,t^{\prime\prime})\models\varphi_{1}$.

For simplicity, $(s,0)\models\varphi$ is denoted by $s\models\varphi$. The horizon of a formula $\varphi$ is the shortest amount of time required to determine whether the formula is satisfied, and it is denoted by $hrz(\varphi)$ [2]. The robustness [7] of an STL formula with respect to a signal determines how strongly the signal satisfies or violates the formula. Robustness is a real-valued function that produces a score, where larger scores mean stronger satisfaction. The robustness of the formula $\varphi$ with respect to the signal $s$ at time $t$ is denoted by $\rho(s,\varphi,t)$ and can be computed recursively. The procedure begins with the predicates, where each predicate’s robustness is defined as $\rho(s,(p(s)\geq c),t)=p(s(t))-c$. Without loss of generality, we only consider negation-free formulas in this paper. This is not restrictive, as any STL formula can be made negation-free. It is also worth noting that, while predicates with inequalities are used in the semantics definition, strict inequalities and equalities can be formed using the Boolean operators.

## III Problem Definition and Approach

Consider the following network of coupled time-variant linear subsystems: $x_{i}(t+1)=A_{ii}(t)x_{i}(t)+B_{ii}(t)u_{i}(t)+\sum_{j\neq i}A_{ij}(t)x_{j}(t)\\\ +\sum_{j\neq i}B_{ij}(t)u_{j}(t)+w_{i}(t),\;i\in\mathcal{I},$ (3) where $\mathcal{I}$ is an index set for the subsystems; $A_{ii}(t)\in\mathbb{R}^{n_{i}\times n_{i}}$, $A_{ij}(t)\in\mathbb{R}^{n_{i}\times n_{j}}$, $B_{ii}(t)\in\mathbb{R}^{n_{i}\times m_{i}}$, and $B_{ij}(t)\in\mathbb{R}^{n_{i}\times m_{j}}$ are given, time-variant matrices for subsystem $i$. Let $\eta=|\mathcal{I}|$ denote the number of subsystems in the network. The state, control input, and disturbance for subsystem $i$ at time step $t$ are represented by $x_{i}(t)\in\mathbb{R}^{n_{i}}$, $u_{i}(t)\in\mathbb{R}^{m_{i}}$, and $w_{i}(t)\in\mathbb{R}^{n_{i}}$, which are bounded by given polytopic sets $x_{i}(t)\in X_{i}(t)\subseteq\mathbb{R}^{n_{i}}$, $u_{i}(t)\in U_{i}(t)\subseteq\mathbb{R}^{m_{i}}$, and $w_{i}(t)\in W_{i}(t)\subset\mathbb{R}^{n_{i}}$, respectively. A decentralized controller $\mu_{i}(.,t):X_{i}(t)\rightarrow U_{i}(t)$ is a function that maps the current state of subsystem $i$ into a control input in the control space of the same subsystem. System (3) with no disturbances is called a nominal system.
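To make the structure of (3) concrete, the aggregate nominal system $x(t+1)=A(t)x(t)+Bu(t)$ used later in Section IV-B can be assembled from the blocks $A_{ij}(t)$ and $B_{ij}(t)$. The numpy sketch below shows one possible way to do this; the block-dictionary layout is an assumption made for illustration, not the paper's implementation:

```python
import numpy as np

def assemble_aggregate(A_blocks, B_blocks, n_dims, m_dims):
    """Stack the subsystem blocks of (3) into aggregate matrices A and B.
    A_blocks[(i, j)] is A_ij, B_blocks[(i, j)] is B_ij; n_dims[i], m_dims[i] are n_i, m_i."""
    eta = len(n_dims)
    n, m = sum(n_dims), sum(m_dims)
    xoff = np.cumsum([0] + list(n_dims))   # row/column offsets for the states
    uoff = np.cumsum([0] + list(m_dims))   # column offsets for the inputs
    A, B = np.zeros((n, n)), np.zeros((n, m))
    for i in range(eta):
        for j in range(eta):
            if (i, j) in A_blocks:
                A[xoff[i]:xoff[i + 1], xoff[j]:xoff[j + 1]] = A_blocks[(i, j)]
            if (i, j) in B_blocks:
                B[xoff[i]:xoff[i + 1], uoff[j]:uoff[j + 1]] = B_blocks[(i, j)]
    return A, B
```

With the disturbance terms dropped, the resulting pair $(A,B)$ is exactly the nominal system over which the mixed-integer program of Section IV operates.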
###### Definition 2 (Decentralized Finite-Time Viable Sets)

Given $h\in\mathbb{N}$, the sequences of sets $\Omega_{i}(0),\Omega_{i}(1),...,\Omega_{i}(h)$, $i\in\mathcal{I}$, for the interconnected system in (3) are called decentralized _viable_ sets if, for all $t\in\mathbb{N}_{h}$ and all $i\in\mathcal{I}$, $\Omega_{i}(t)\subseteq X_{i}(t)$ and there exists a set of policies $\mu_{i}(.,t)$ such that $\Theta_{i}(t)\subseteq U_{i}(t)$ and $\forall t\in\mathbb{N}_{h-1},\forall x_{i}(t)\in\Omega_{i}(t),\forall w_{i}(t)\in W_{i}(t)\Rightarrow x_{i}(t+1)\in\Omega_{i}(t+1)$, where $\Theta_{i}(t):=\mu_{i}(\Omega_{i}(t),t)$ is called the action set.

A signal $s:\mathbb{N}\rightarrow X\times U\subset\mathbb{R}^{n+m}$ is a trajectory where $s(t)$ is a vector stacking the state and control of the aggregate system at time step $t$, i.e., $s(t)=(x(t),u(t))$, where $x(t)=[x^{T}_{1}(t),\cdots,x^{T}_{\eta}(t)]^{T}\in\mathbb{R}^{n}$, $u(t)=[u^{T}_{1}(t),\cdots,u^{T}_{\eta}(t)]^{T}\in\mathbb{R}^{m}$, $n=\sum_{i\in\mathcal{I}}n_{i}$, and $m=\sum_{i\in\mathcal{I}}m_{i}$. In this paper, we consider the following problem:

###### Problem 1

Given a network of perturbed linear systems in the form (3), the initial states $x_{i}^{initial}(0)\in X_{i}(0),\forall i\in\mathcal{I}$, a bounded STL formula $\varphi$ with linear predicates in the states and/or controls, and a quadratic cost $J:\mathbb{S}\rightarrow\mathbb{R}_{+}$, find the optimal decentralized controllers $\mu_{i}(x_{i}(t),t),\forall i\in\mathcal{I}$ and their corresponding sequences of viable sets $\Omega_{i}(t)$ such that $J$ is minimized, $x_{i}(t)\in X_{i}(t)$, $u_{i}(t)\in U_{i}(t)$, and $s\models\varphi$. If such a signal does not exist, find $\Omega_{i}(t)$ corresponding to the maximum possible value of the robustness, i.e., find a signal with the least amount of violation.

To solve Problem 1, a two-step optimization-based approach is proposed. We begin by solving a mixed-integer program for the aggregated nominal system, which is constrained by the STL formula $\varphi$ [2]. It allows us to determine the active predicates at each time and to convert the satisfaction of the STL formula into set containment problems, which can be encoded as convex programs [25]. In the second step, we take into account the additive disturbance, along with the set containment constraints, and we find a set of decentralized closed-loop controllers and viable sets. The technical details are explained in the next sections.

## IV Converting STL formulas into Set Containment Problems

In this section, the method from [2] is used to encode an STL formula into a mixed-integer linear program. Then, the set of predicates whose satisfaction corresponds to the maximum robustness for the nominal system is identified and transformed into a set containment problem.

### IV-A Encoding the STL Formulas

Following the predicate-based encoding from [2], a binary variable $z_{t}^{\pi}\in\\{0,1\\}$ is dedicated to each predicate $\pi=(y\geq 0)$, which must be assigned to $1$ if the predicate is true, and to $0$ otherwise. The relation between $z_{t}^{\pi}$, the robustness $\rho$, and $y_{t}$ is encoded as $y_{t}+M(1-z_{t}^{\pi})\geq\rho\quad,\quad y_{t}-Mz_{t}^{\pi}<\rho,$ (4) where $M$ is a sufficiently large number such that, for all time steps, $M\geq\max y_{i},i\in\mathbb{N}_{n_{y}}$. The equations in (4) enforce the binary variable $z_{t}^{\pi}$ to be equal to $1$ when $y_{t}\geq\rho$ and equal to $0$ when $y_{t}<\rho$.
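As an illustration of (4), the big-M constraints for a single predicate can be written with a MILP modeling layer. The sketch below uses gurobipy (the solver reported in Section VI); the horizon, the big-M value, and the small tolerance used in place of the strict inequality are illustrative choices, not values from the paper, and in the actual encoding the $y_{t}$ would be linear expressions of the state and input variables rather than free variables:

```python
import gurobipy as gp
from gurobipy import GRB

T, M, eps = 10, 1e3, 1e-6          # horizon, big-M constant, tolerance replacing the strict "<"
m = gp.Model("stl_predicate_encoding")

rho = m.addVar(lb=-GRB.INFINITY, name="rho")          # robustness decision variable
y = m.addVars(T, lb=-GRB.INFINITY, name="y")          # y_t stands in for p(s(t)) - c
z = m.addVars(T, vtype=GRB.BINARY, name="z")          # z_t^pi = 1 iff the predicate holds at t

for t in range(T):
    m.addConstr(y[t] + M * (1 - z[t]) >= rho)         # first constraint of (4)
    m.addConstr(y[t] - M * z[t] <= rho - eps)         # second constraint of (4), strict "<" relaxed

# Example of the "eventually"-style disjunction from (6b)/(7): z_F = OR of z_0 .. z_{T-1}
z_F = m.addVar(lb=0.0, ub=1.0, name="z_F")
m.addConstrs(z_F >= z[t] for t in range(T))
m.addConstr(z_F <= gp.quicksum(z[t] for t in range(T)))
```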
Disjunctions and conjunctions are captured by the following constraints: $z=\bigwedge_{i=0}^{n_{z}}z_{i}\Rightarrow z\leq z_{i},i\in\mathbb{N}_{n_{z}},z=\bigvee_{i=0}^{n_{z}}z_{i}\Rightarrow z\leq\sum_{i=0}^{n_{z}}z_{i},$ (5) where $n_{z}\in\mathbb{N}$ and $z\in[0,1]$ is declared as a continuous variable. However, as the above equation shows, it can only take binary values. In [12, 24], upper-bounding constraints are added to create a necessary and sufficient condition: $z=\bigwedge_{i=0}^{n_{z}}z_{i}\Leftrightarrow z\geq\sum_{i=0}^{n_{z}}z_{i}-n_{z}+1,z\leq z_{i},i\in\mathbb{N}_{n_{z}}$ (6a) $z=\bigvee_{i=0}^{n_{z}}z_{i}\Leftrightarrow z\geq z_{i},i\in\mathbb{N}_{n_{z}},z\leq\sum_{i=0}^{n_{z}}z_{i}.$ (6b) The upper-bound constraints are necessary when the specification does not include negation. $z_{t}^{\varphi}\in[0,1]$ is the variable that indicates whether $(s,t)\models\varphi$. A recursive translation of an STL formula is as follows: $\varphi=\bigwedge_{i=1}^{n_{\varphi}}\varphi_{i}\Rightarrow z_{t}^{\varphi}=\bigwedge_{i=1}^{n_{\varphi}}z_{t}^{\varphi_{i}};\varphi=\bigvee_{i=1}^{n_{\varphi}}\varphi_{i}\Rightarrow z_{t}^{\varphi}=\bigvee_{i=1}^{n_{\varphi}}z_{t}^{\varphi_{i}};\\\ \varphi=G_{I}\psi\Rightarrow z_{t}^{\varphi}=\bigwedge_{t^{\prime}\in I}z_{t^{\prime}}^{\psi};\varphi=F_{I}\psi\Rightarrow z_{t}^{\varphi}=\bigvee_{t^{\prime}\in I}z_{t^{\prime}}^{\psi};\\\ \varphi=\psi_{1}U_{I}\psi_{2}\Rightarrow z_{t}^{\varphi}=\bigvee_{t^{\prime}\in I}(z_{t^{\prime}}^{\psi_{2}}\wedge\bigwedge_{t^{\prime\prime}\in[t,t^{\prime}]}z_{t^{\prime\prime}}^{\psi_{1}}),$ (7) where $n_{\varphi}\in\mathbb{N}$. Given a formula $\varphi$, the set of constraints recursively constructed by equations (4), (6), and (7) is denoted by $\mathcal{C}_{\varphi}$.

###### Theorem 1 (Adapted from [2])

The following properties hold for the above mixed-integer linear program encoding: (i) $(s,t)\models\varphi$ if adding $z_{t}^{\varphi}=1$ and $\rho\geq 0$ to the constraints makes $\mathcal{C}_{\varphi}$ feasible, (ii) $(s,t)\nvDash\varphi$ if adding $z_{t}^{\varphi}=1$ and $\rho\geq 0$ makes $\mathcal{C}_{\varphi}$ infeasible, (iii) the largest $\rho$ such that $z_{t}^{\varphi}=1$ and $\mathcal{C}_{\varphi}$ is feasible is equal to the robustness.

It is shown in [2] that when the STL formulas are negation-free, $\rho$ equals the robustness. As a result, it can be used as an objective function to maximize robustness.

### IV-B Set Containment for STL Formula Satisfaction

The objective of this subsection is to identify, for the nominal system, the set of predicates whose binary variables $z_{t}^{\pi}$ equal $1$ at maximum robustness; these are called active predicates. If the disturbance bound is small enough, it can be assumed that the perturbed and nominal systems have the same set of active predicates, and a closed-loop controller can be found to ensure that the system’s reachability set still satisfies those predicates. We can perform the synthesis for the aggregate nominal system, rewritten from (3) as $x(t+1)=A(t)x(t)+Bu(t)$, by using the STL satisfaction constraints introduced above [2]: $\displaystyle\max_{x(t),u(t),z_{t}^{\pi},\rho}\quad-J(s[0,hrz(\varphi)])-\mathcal{M}(|\rho|-\rho)$ (8) s.t.
$\displaystyle x(t+1)=A(t)x(t)+Bu(t),t\in\mathbb{N}_{hrz(\varphi)-1}$ $\displaystyle x(0)=[x^{initial}_{1},...,x^{initial}_{\eta}],$ $\displaystyle\mathcal{C}_{\varphi},z_{0}^{\varphi}=1.$ As long as the robustness is positive, the proposed objective function minimizes the user-defined cost function $J(.)$, which can be a standard quadratic function of the form $\sum_{t=0}^{hrz(\varphi)}x(t)^{T}Qx(t)+\sum_{t=0}^{hrz(\varphi)}u(t)^{T}Ru(t)$. Otherwise, it maximizes robustness due to the effect of the large scalar $\mathcal{M}$ and finds the nominal trajectory with the least violation.

Each active predicate actually imposes a set constraint: $y_{t}\geq\rho$ must hold for all possible signals at time $t$. By definition, we have $s(t)\in\prod_{i}\Omega_{i}(t)\times\prod_{i}\Theta_{i}(t)$. Assuming the set $\prod_{i}\Omega_{i}(t)\times\prod_{i}\Theta_{i}(t)$ is represented by a zonotope $\mathcal{Z}(c,G)$ (the time index $t$ is dropped for readability), any possible signal must satisfy $\mathrm{e}\geq\rho,\quad\forall\mathrm{e}\in\mathcal{Z}(p(c),p(G))$. Also, by definition, the zonotope $\mathcal{Z}(c,G)$ has the following upper and lower bounds: $c-\sum_{i}|g_{i}|\leq\mathcal{Z}(c,G)\leq c+\sum_{i}|g_{i}|$, where $g_{i}$ is the $i$th column of $G$. Using these bounds, the satisfaction constraint for an active predicate becomes: $-p(c)+\sum_{i}|p(G)_{i}|\leq-\rho$ (9) where $p(G)_{i}$ is the $i$th element of $p(G)$.

###### Theorem 2

The constraint in (9) can be written as a set of linear constraints as follows: $-p(c)+\sum_{i}p^{\prime}_{i}\leq-\rho,p^{\prime}_{i}\geq p(G)_{i},p^{\prime}_{i}\geq-p(G)_{i}.$ (10)

###### Proof:

It can easily be seen that if such $p^{\prime}_{i}$ exist, the following relation holds: $-p(c)+\sum_{i}|p(G)_{i}|\leq-p(c)+\sum_{i}p^{\prime}_{i}\leq-\rho,$ (11) which also satisfies the original constraint (9). Conversely, any solution of (9) yields a solution of (10) by setting $p^{\prime}_{i}=|p(G)_{i}|$; hence, if no such $p^{\prime}_{i}$ exist, the original problem is also infeasible. ∎

Finally, the set of linear constraints guaranteeing that any possible trajectory within the viable and action sets satisfies the STL formula $\varphi$ is denoted by $\mathcal{G}_{\varphi}$.

## V Computation of Viable Sets under Additive Disturbance

The original problem has been transformed into a decentralized control synthesis problem with zonotopic set containment constraints. The latter problem was considered in [10], where a compositional approach using assume-guarantee contracts is proposed. In this section, we give a brief overview of [10] and incorporate the linear constraints $\mathcal{G}_{\varphi}$ into its formulation.

### V-A Decentralized Synthesis

First, the subsystems are decoupled from each other by considering the effects of the other subsystems as disturbances and by making some assumptions on the operational sets of each subsystem, as follows: $x_{i}(t+1)=A_{ii}(t)x_{i}(t)+B_{ii}(t)u_{i}(t)+w_{i}^{aug}(t),$ (12) where $w_{i}^{aug}(t)$ is the augmented disturbance on subsystem $i$, which belongs to: $w_{i}^{aug}(t)\in\bigoplus_{j\neq i}A_{ij}(t)\mathcal{X}_{j}(t)\oplus\bigoplus_{j\neq i}B_{ij}(t)\mathcal{U}_{j}(t)\oplus W_{i}(t),$ (13) where $\mathcal{X}_{j}(t)$ and $\mathcal{U}_{j}(t)$ are assumed operational sets for the state and the control input of subsystem $j\in\mathcal{I}$. It can be seen that the performance of each subsystem affects the assumptions of the other subsystems. These give-and-take agreements are called assume-guarantee contracts.
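For a concrete picture of constraint (9), suppose the linear predicate is $p(s)=a^{T}s$ for some vector $a$ (an assumption made only for this illustration). Checking that an active predicate holds robustly over a zonotopic viable/action set then reduces to the one-line test below:

```python
import numpy as np

def active_predicate_holds(a, c, G, rho):
    """Check (9): a^T s >= rho for every s in Z(c, G),
    i.e. a^T c - sum_i |(a^T G)_i| >= rho."""
    pG = a @ G                                  # p applied to each generator column
    return a @ c - np.sum(np.abs(pG)) >= rho
```

In the linear program, the absolute values are removed exactly as in (10), by introducing the auxiliary variables $p^{\prime}_{i}$.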
###### Definition 3 (Assume-Guarantee Contracts)

An assume-guarantee contract for subsystem $i\in\mathcal{I}$ is a pair $\mathcal{C}_{i}=(\mathcal{A}_{i},\mathcal{G}_{i})$, where:

  * • The assumption $\mathcal{A}_{i}$ is the assumption set over the disturbance, $w_{i}^{aug}(t)\in\mathcal{W}_{i}(t)$.
  * • The guarantee $\mathcal{G}_{i}$ is the promise of subsystem $i$ over its state and control input, $x_{i}(t)\in\mathcal{X}_{i}(t),u_{i}(t)\in\mathcal{U}_{i}(t)$.

As seen in (13), the following relation holds between the guarantees of the other subsystems, $\mathcal{X}_{j},\mathcal{U}_{j},j\neq i$, and the assumption of subsystem $i$, $\mathcal{A}_{i}$: $\mathcal{W}_{i}(t)=\bigoplus_{j\neq i}A_{ij}(t)\mathcal{X}_{j}(t)\oplus\bigoplus_{j\neq i}B_{ij}(t)\mathcal{U}_{j}(t)\oplus W_{i}(t)$ (14) The above zonotopic set is represented by $\mathcal{W}_{i}(t)=\mathcal{Z}(d^{w}_{i}(t),G^{w}_{i}(t))$, where $d^{w}_{i}(t)\in\mathbb{R}^{n_{i}}$ and $G^{w}_{i}(t)\in\mathbb{R}^{n_{i}\times l(t)}$. Next, we define a parametric assume-guarantee contract, which is similar to the regular contract except that the sets $\mathcal{X}_{i}(t)$, $\mathcal{U}_{i}(t)$ are replaced with the parametric sets below: $\mathcal{X}_{i}(t,\alpha_{i}^{x}(t)):=\mathcal{Z}(c_{i}^{x}(t),G_{i}^{x}\text{Diag}(\alpha_{i}^{x}(t))),$ (15a) $\mathcal{U}_{i}(t,\alpha_{i}^{u}(t)):=\mathcal{Z}(c_{i}^{u}(t),G_{i}^{u}\text{Diag}(\alpha_{i}^{u}(t))),$ (15b) where $G_{i}^{x}\in\mathbb{R}^{n_{i}\times f_{i}}$ and $G_{i}^{u}\in\mathbb{R}^{m_{i}\times g_{i}}$ ($f_{i},g_{i}\in\mathbb{N}$) are given matrices defined by the user, and the vectors $c_{i}^{x}(t)\in\mathbb{R}^{n_{i}}$, $\alpha_{i}^{x}(t)\in\mathbb{R}^{f_{i}}$, $c_{i}^{u}(t)\in\mathbb{R}^{m_{i}}$, and $\alpha_{i}^{u}(t)\in\mathbb{R}^{g_{i}}$ are parameters. Also, the parametric assumption set $\mathcal{W}_{i}(t,\alpha^{ext})$ is derived by substituting the above parametric sets into equation (14), where $\alpha^{ext}$ denotes the set of all parameters. To deal with the mismatch between the assumed and real operational disturbance sets, we introduce the notion of correctness:

###### Definition 4 (Correctness)

A set of parametric contracts $\mathcal{C}_{i}$ is correct if $\bigoplus_{j\neq i}A_{ij}(t)\Omega_{j}(t)\oplus B_{ij}(t)\Theta_{j}(t)\oplus W_{i}(t)\subseteq\mathcal{W}_{i}(t),\forall i,t.$ (16)

The preceding definition is required to resolve the circularity problem of assume-guarantee contracts. It was shown in [10] that the following sufficient constraints imply (16): $\Omega_{i}(t)\subseteq\mathcal{X}_{i}(t,\alpha_{i}^{x}(t)),\Theta_{i}(t)\subseteq\mathcal{U}_{i}(t,\alpha_{i}^{u}(t)).$ (17)

The next step is to design a robust controller for each subsystem. The following decentralized controller structure is proposed for each subsystem: $x_{i}(t)=\bar{x}^{i}_{t}+T^{i}_{t}\zeta,u_{i}(t)=\bar{u}^{i}_{t}+M^{i}_{t}\zeta,\zeta\in\mathbb{B}_{k},$ (18) where $\bar{x}^{i}_{t}\in\mathbb{R}^{n_{i}}$, $\bar{u}^{i}_{t}\in\mathbb{R}^{m_{i}}$, $T^{i}_{t}\in\mathbb{R}^{n_{i}\times q_{i}(t)}$, and $M^{i}_{t}\in\mathbb{R}^{m_{i}\times q_{i}(t)}$ are unknowns that need to be tuned and $q(t)=k+\sum_{i=0}^{i=t}{l(i)}$, where $k\in\mathbb{N}$ is a hyper-parameter.
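The parametric sets in (15) are fixed zonotope templates whose generators are scaled by the decision parameters $\alpha$. A small numpy sketch of constructing and sampling such a set is given below; the sampling is only for visualization and is not part of the synthesis procedure:

```python
import numpy as np

def parametric_set(c, G_template, alpha):
    """Return the center and generator of the parametric zonotope Z(c, G Diag(alpha)) of (15)."""
    return np.asarray(c, float), np.asarray(G_template, float) * np.asarray(alpha, float)

def sample_points(c, G, n_samples=200, seed=0):
    """Sample points c + G b with b drawn from the unit box (for plotting only)."""
    rng = np.random.default_rng(seed)
    b = rng.uniform(-1.0, 1.0, size=(n_samples, G.shape[1]))
    return c + b @ G.T
```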
Then, for subsystem $i$, it can be shown that the following linear constraints are sufficient for tuning the control parameters: $[A_{ii}(t)T^{i}_{t}+B_{ii}(t)M^{i}_{t},G^{aug}_{i}(t)]=[T^{i}_{t+1}],t\in\mathbb{N}_{h-1}$ (19a) $A_{ii}(t)\bar{x}^{i}_{t}+B_{ii}(t)\bar{u}^{i}_{t}+d^{aug}_{i}(t)=\bar{x}^{i}_{t+1},t\in\mathbb{N}_{h-1}.$ (19b) If such parameters exist, $\Omega_{i}(t)=\mathcal{Z}(\bar{x}^{i}_{t},T^{i}_{t})$ is the viable set, $\Theta_{i}(t)=\mathcal{Z}(\bar{u}^{i}_{t},M^{i}_{t})$ is the action set, and (18) is the controller. Intuitively, constraints (19a) and (19b) are set containment constraints; (19b) adjusts the centers of the viable sets and (19a) takes care of the set expansion at each step, such that all possible trajectories are contained within the tube $\Omega_{i}(0),\Omega_{i}(1),\cdots,\Omega_{i}(h)$, where $h$ is the horizon, which is set to $hrz(\varphi)$ in our problem. Additionally, the following constraints are proposed to impose the hard constraints: $\mathcal{Z}(\bar{x}^{i}_{t},T^{i}_{t})\subseteq X_{i}(t),\mathcal{Z}(\bar{u}^{i}_{t},M^{i}_{t})\subseteq U_{i}(t),t\in\mathbb{N}_{h}.$ (20) It was demonstrated in [25] that zonotope and polytope containment problems can be encoded into linear constraints. Thus, all of the suggested constraints (16), (19), (20), and $\mathcal{G}_{\varphi}$ for all subsystems and time steps may be merged to build a centralized linear program that solves Problem 1. The objective function is ad hoc, but we recommend the mean squared error between the centers of the viable/action sets and the nominal trajectory/controls generated by (8).

### V-B Compositional Computation of Decentralized Viable Sets

Despite the fact that the centralized solution presented at the end of the preceding subsection is a linear program, it still suffers from the curse of dimensionality in high dimensions. Nevertheless, it is demonstrated in [10] that the suggested parameterization (15) allows for compositional computation of viable sets in a time-efficient manner by transforming a single, large linear program into a group of smaller linear programs. We show that, if the STL formula in Problem 1 is separable by subsystems, we can also use the parameterization to solve the same centralized problem of the previous section in a compositional manner. Additionally, convergence is ensured due to the convexity of the problem.

###### Assumption 1

The STL formula in Problem 1 is separable by the subsystems, meaning it can take the form $\varphi=\varphi_{1}\wedge...\wedge\varphi_{\eta}$, where $\varphi_{i}$ involves only the state and control of subsystem $i$.

In [10], we proposed a parametric potential function that quantifies how far a set of contracts is from correctness. This comes in contrast to the correctness property introduced above, which is either true or false. The larger the parametric potential function, the farther the set of contracts is from correctness, so the goal is to minimize the proposed potential function. Here, the parametric potential function is modified by including the containment constraints coming from the STL formulas, and by adding to it the sum of the directed Hausdorff distances between the hard-constraint sets in (20) and the viable/action sets.
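The containment constraints such as (20) rely on the fact, attributed above to [25], that zonotope containment can be encoded with linear constraints. One standard sufficient condition from the zonotope containment literature is: $\mathcal{Z}(\bar{x},X)\subseteq\mathcal{Z}(\bar{y},Y)$ holds if there exist $\Gamma$ and $\beta$ with $X=Y\Gamma$, $\bar{y}-\bar{x}=Y\beta$, and every row of $[\Gamma,\beta]$ having absolute sum at most one. The cvxpy sketch below is our own hedged illustration of that condition, not the paper's implementation:

```python
import cvxpy as cp
import numpy as np

def containment_constraints(x_bar, X, y_bar, Y):
    """Sufficient linear condition for Z(x_bar, X) inside Z(y_bar, Y):
    exist Gamma, beta with X = Y Gamma, y_bar - x_bar = Y beta,
    and each row of [Gamma, beta] having absolute sum <= 1."""
    qx, qy = X.shape[1], Y.shape[1]
    Gamma = cp.Variable((qy, qx))
    beta = cp.Variable((qy, 1))
    stacked = cp.hstack([Gamma, beta])
    return [X == Y @ Gamma,
            np.reshape(y_bar - x_bar, (-1, 1)) == Y @ beta,
            cp.sum(cp.abs(stacked), axis=1) <= 1]
```

In constraints of the form (22c)-(22f) below, the right-hand sides additionally carry the slack zonotopes $\mathcal{Z}(0,d\,I)$, and those slacks are exactly what the potential function minimizes.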
###### Definition 5 (Parametric potential function)

The parametric potential function $\mathcal{V}(\alpha^{ext})$ is defined as $\mathcal{V}(\alpha^{ext})=\sum_{i\in\mathcal{I}}\mathcal{V}_{i}(\alpha^{ext})$, where $\mathcal{V}_{i}(\alpha^{ext}):=\\\ \sum_{t\in\mathbb{N}_{hrz(\varphi)}}[d_{DH}(\mathcal{X}_{i}(t),\Omega_{i}(t))+d_{DH}(\mathcal{U}_{i}(t),\Theta_{i}(t))\\\ +d_{DH}(X_{i}(t),\Omega_{i}(t))+d_{DH}(U_{i}(t),\Theta_{i}(t))].$ (21)

Using the technique explained in Subsection IV-B, the satisfaction of the STL formula $\varphi_{i}$ for subsystem $i$ can be encoded as a set of linear constraints denoted by $\mathcal{G}_{\varphi_{i}}$. Each component of the parametric potential function $\mathcal{V}_{i}(\alpha^{ext})$ can be computed using these constraints and (19) by solving the following linear program: $\displaystyle\mathcal{V}_{i}(\alpha^{ext})=\min_{\underset{d_{t}^{x},d_{t}^{u},\bar{d}_{t}^{x},\bar{d}_{t}^{u}}{\mathrm{x}^{i},T^{i},\mathrm{u}^{i},M^{i}}}\begin{aligned} &\sum_{t\in\mathbb{N}_{hrz(\varphi)}}[d^{x}_{t}+\bar{d}^{x}_{t}]+\sum_{t\in\mathbb{N}_{hrz(\varphi)-1}}[d^{u}_{t}+\bar{d}^{u}_{t}]\end{aligned}$ subject to $\displaystyle[A_{ii}(t)T^{i}_{t}+B_{ii}(t)M^{i}_{t},G_{i}^{w}(t)]=[T^{i}_{t+1}],\forall t\in\mathbb{N}_{hrz(\varphi)-1}$ (22a) $\displaystyle A_{ii}(t)\bar{\mathrm{x}}^{i}_{t}+B_{ii}(t)\bar{\mathrm{u}}^{i}_{t}+d_{i}^{w}(t)=\bar{\mathrm{x}}^{i}_{t+1},\forall t\in\mathbb{N}_{hrz(\varphi)-1}$ (22b) $\displaystyle\mathcal{Z}(\bar{\mathrm{x}}^{i}_{t},T^{i}_{t})\subseteq\ \mathcal{X}_{i}(t,\alpha^{x}_{i}(t))\oplus\mathcal{Z}(0,d^{x}_{t}I_{n_{i}}),\forall t\in\mathbb{N}_{hrz(\varphi)}$ (22c) $\displaystyle\mathcal{Z}(\bar{\mathrm{u}}^{i}_{t},M^{i}_{t})\subseteq\ \mathcal{U}_{i}(t,\alpha^{u}_{i}(t))\oplus\mathcal{Z}(0,d^{u}_{t}I_{m_{i}}),\forall t\in\mathbb{N}_{hrz(\varphi)-1}$ (22d) $\displaystyle\mathcal{Z}(\bar{\mathrm{x}}^{i}_{t},T^{i}_{t})\subseteq X_{i}(t)\oplus\mathcal{Z}(0,\bar{d}^{x}_{t}I_{n_{i}}),\forall t\in\mathbb{N}_{hrz(\varphi)}$ (22e) $\displaystyle\mathcal{Z}(\bar{\mathrm{u}}^{i}_{t},M^{i}_{t})\subseteq U_{i}(t)\oplus\mathcal{Z}(0,\bar{d}^{u}_{t}I_{m_{i}}),\forall t\in\mathbb{N}_{hrz(\varphi)-1}$ (22f) $\displaystyle\mathcal{G}_{\varphi_{i}},\bar{\mathrm{x}}^{i}_{0}=x_{i}^{initial}(0)$ (22g) $\displaystyle d_{t}^{x},\bar{d}_{t}^{x}\geq 0,\;\forall t\in\mathbb{N}_{hrz(\varphi)},$ (22h) $\displaystyle d_{t}^{u},\bar{d}_{t}^{u}\geq 0,\;\forall t\in\mathbb{N}_{hrz(\varphi)-1}.$ (22i) Constraints (22a) and (22b) originate from (19). The STL satisfaction constraints and the initial state constraint are added in (22g). The remaining constraints, along with the objective function, compute an over-approximation of the sum over all time steps of the directed Hausdorff distances between $\Omega_{i}(t)$ and $\mathcal{X}_{i}(t)$/$X_{i}(t)$, and between $\Theta_{i}(t)$ and $\mathcal{U}_{i}(t)$/$U_{i}(t)$. This approach to computing the directed Hausdorff distance is inspired by [25].

###### Theorem 3 (Convexity of the potential function)

The potential function proposed above is convex with respect to the parameters. The set of acceptable parameters (correct and valid) is also a convex set.

###### Proof:

As seen in (22), each component of the potential function is the optimal value of a linear program in which the parameters $\alpha^{ext}$ enter affinely through the constraints, which makes $\mathcal{V}_{i}(\alpha^{ext})$ a convex and piecewise affine function; the overall potential, being a sum of convex functions, is convex.
Also, it is a well-known fact that the sublevel sets of a convex function are convex; thus, the set of acceptable parameters, which equals the zero sublevel set of the (nonnegative) potential function, is also a convex set. ∎

The idea is to minimize the potential function using gradient descent and iteratively update the parameters: $\alpha^{ext}\leftarrow\alpha^{ext}-\sum_{i\in\mathcal{I}}\nabla_{\alpha^{ext}}\mathcal{V}_{i}(\alpha^{ext}).$ (23) Convergence to the global minimum is guaranteed because the proposed potential function is convex. Each subsystem can find the direction that is best for it ($\nabla_{\alpha^{ext}}\mathcal{V}_{i}(\alpha^{ext})$) using its own local information and the common-knowledge parameters, breaking the problem down into many smaller linear programs. If the minimum of the potential function is zero (by definition, the potential function is always nonnegative), it indicates both that the derived set of parametric contracts is correct and that the viable and action sets satisfy the hard constraints. Thus, the desired control policies and viable sets are determined. Also, the nominal trajectories and controllers derived from (8) can be used as initial values for the center parameters in our parameterized sets to give the gradient descent a warm start.

## VI Case Study

We apply the method developed in this paper to address the load-frequency control problem in power networks [3]. A network is made up of several areas, each with its own power generator and demands; some areas can be connected to each other to interchange power as needed, depending on the network architecture. Each area's state is represented by a 2-dimensional vector $[\delta_{i}(t),f_{i}(t)]^{T}$, where $\delta_{i}(t)\in\mathbb{R}$ is the deviation of the phase angle and $f_{i}(t)\in\mathbb{R}$ is the deviation of the frequency at time $t$ for area $i$. Also, $u_{i}(t)\in\mathbb{R}$ is the control input, which is the change from its nominal value of the power generated in area $i$ at time $t$. The dynamics for each area is given by: $\dot{\delta}_{i}(t)=2\pi f_{i}(t),\quad\dot{f}_{i}(t)=-\dfrac{f_{i}(t)}{T_{p_{i}}}+\dfrac{K_{p_{i}}u_{i}(t)}{T_{p_{i}}}-\dfrac{K_{p_{i}}}{2\pi T_{p_{i}}}\Big(\sum_{j\in\mathcal{N}_{i}}K_{s_{ij}}[\delta_{i}(t)-\delta_{j}(t)]\Big)-\dfrac{K_{p_{i}}\omega_{i}(t)}{T_{p_{i}}},$ (24) where $K_{p_{i}},K_{s_{ij}},T_{p_{i}}$ are the system gain, the synchronizing coefficient between areas $i$ and $j$, and the system model time constant, respectively. In this case study, they are set to 110, 0.5, and 25, respectively, for all areas. Also, $\omega_{i}(t)$ is the load disturbance for area $i$ at time $t$, which is bounded by $|\omega_{i}(t)|\leq 0.001$. In addition, $\mathcal{N}_{i}$ denotes the neighbours of area $i$. Here, we consider a ring network architecture consisting of $20$ areas. Also, the control input is bounded by $|u_{i}(t)|\leq 0.1$. We use the Euler method to discretize the dynamics with a time step of $0.1$ time units. For all areas, the initial state is $[0.1,0.1]^{T}$ and the STL specification is $\varphi_{i}=\textbf{F}_{[0,6]}\textbf{G}_{[0,2]}\psi_{1}\wedge\textbf{F}_{[0,8]}\psi_{2}$, where $\psi_{1}=[\delta_{i}\leq 0.26]\wedge[\delta_{i}\geq 0.14]\wedge[f_{i}\leq-0.04]\wedge[f_{i}\geq-0.16]$ and $\psi_{2}=[\delta_{i}\leq 0.01]\wedge[\delta_{i}\geq-0.01]\wedge[f_{i}\leq 0.01]\wedge[f_{i}\geq-0.01]$. The goal is to synthesize decentralized controllers for each area subject to these specifications.
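For concreteness, the following minimal Python sketch shows the Euler discretization of the area dynamics (24) used in this case study, with the constants stated above; the array layout, the helper name, and the zero/random inputs are our own illustrative assumptions rather than part of the original implementation.

```python
import numpy as np

# Constants from the case study (assumed identical for all areas).
K_p, K_s, T_p = 110.0, 0.5, 25.0
dt = 0.1          # Euler discretization step
n_areas = 20      # ring network

def euler_step(delta, f, u, w):
    """One Euler step of the load-frequency dynamics (24) for all areas.

    delta, f : phase-angle and frequency deviations, shape (n_areas,)
    u        : control inputs, |u_i| <= 0.1
    w        : load disturbances, |w_i| <= 0.001
    """
    # Ring coupling: each area i is connected to areas i-1 and i+1.
    left, right = np.roll(delta, 1), np.roll(delta, -1)
    coupling = K_s * ((delta - left) + (delta - right))

    delta_dot = 2.0 * np.pi * f
    f_dot = (-f + K_p * u - K_p * coupling / (2.0 * np.pi) - K_p * w) / T_p
    return delta + dt * delta_dot, f + dt * f_dot

# Example: start from the stated initial condition [0.1, 0.1] for every area.
delta = np.full(n_areas, 0.1)
f = np.full(n_areas, 0.1)
u = np.zeros(n_areas)                               # placeholder controller output
w = np.random.uniform(-0.001, 0.001, n_areas)       # bounded disturbance sample
delta, f = euler_step(delta, f, u, w)
```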
We set the horizon to nine and synthesize the controllers using our approach. The baseline parametric sets are selected to be the viable and action sets generated from (19) when couplings to other areas are ignored. The initial value of all parameters in the distributed algorithm is one. We used Gurobi on a MacBook Pro with a 2.6 GHz 6-core Intel Core i7 and 16 GB of memory to run the algorithm. The results are shown in Figs. 1(a) and 1(b). It can be seen that any possible trajectory that passes through the viable sets satisfies the STL specification and, at the same time, all the implemented controllers satisfy the hard constraint on the control input. To demonstrate the approach's scalability, we experimented with a varying number of areas in the ring network and report the running time in Fig. 1(c). The reported time includes only the time spent on the second step, not on solving the MILP, because both the distributed and centralized approaches share the first step. Additionally, to ensure that a solution exists for high-dimensional state spaces, we consider a large bound on the controller (i.e. $|u_{i}(t)|\leq 10$). As predicted, the distributed approach has a slower growth rate, making it more appropriate for state spaces of dimension larger than $40$. Moreover, one of the primary benefits of the distributed technique is that it can be run in parallel. While we executed everything sequentially here, if multiprocessing is employed the reported time could be reduced further, depending on the number of cores used.

Figure 1: (a) The green sets illustrate the viable sets for one of the areas in the case study. The blue and orange sets are the sets of states satisfying $\psi_{1}$ and $\psi_{2}$, respectively. The red sets show the parameterized sets defined on the state space at different time steps for this specific area (some of them are so close to the viable sets that they are not visible). The black line represents the trajectory traveled by this area. (b) Controllers for each area as time series. (c) Reported time in seconds for the distributed and centralized approaches for different state-space dimensions.

## VII CONCLUSIONS

Control synthesis subject to both an STL formula and a bounded disturbance is a computationally challenging problem. To overcome this challenge, we propose a solution which consists of two steps. First, we convert satisfaction of the STL formula into a set containment problem. To handle it, we consider the nominal system and use a centralized MILP. We claim that for small enough disturbances, both systems have the same set of active predicates, which are seen as bounds. Second, we synthesize controllers subject to these bounds. Since the second step needs a set-based calculation, it has a relatively higher computational cost and thus creates a bottleneck for large-scale systems. We show that this step can be carried out in a compositional fashion when the STL formula is separable by subsystems. In the future, we will investigate the possibility of replacing the MILP in the first step, which remains a barrier due to its computational cost; sampling methods are a promising direction. Additionally, we are considering employing a distributed control architecture instead of the decentralized architecture proposed here.

## References

* [1] Christel Baier and Joost-Pieter Katoen. Principles of model checking. MIT Press, 2008. * [2] Calin Belta and Sadra Sadraddini. Formal Methods for Control Synthesis: An Optimization Perspective.
Annual Review of Control, Robotics, and Autonomous Systems, 28:12–13, 2018. * [3] E. Camponogara, D. Jia, B.H. Krogh, and S. Talukdar. Distributed model predictive control. IEEE Control Systems Magazine, 22(1):44–52, 2002. * [4] Krishnendu Chatterjee and Thomas A Henzinger. Assume-guarantee synthesis. In International Conference on Tools and Algorithms for the Construction and Analysis of Systems, pages 261–275. Springer, 2007. * [5] Yuxiao Chen, James Anderson, Karan Kalsi, Steven H Low, and Aaron D Ames. Compositional set invariance in network systems with assume-guarantee contracts. In 2019 American Control Conference (ACC), pages 1027–1034. IEEE, 2019. * [6] Jorge Cortes, Sonia Martinez, Timur Karatas, and Francesco Bullo. Coverage control for mobile sensing networks. IEEE Transactions on Robotics and Automation, 20(2):243–255, 2004. * [7] Alexandre Donzé and Oded Maler. Robust satisfaction of temporal logic over real-valued signals. In International Conference on Formal Modeling and Analysis of Timed Systems, pages 92–106. Springer, 2010. * [8] Souradeep Dutta, Xin Chen, and Sriram Sankaranarayanan. Reachability analysis for neural feedback systems using regressive polynomial rule inference. In Proceedings of the 22nd ACM International Conference on Hybrid Systems: Computation and Control, pages 157–168, 2019. * [9] Magnus Egerstedt and Xiaoming Hu. Formation constrained multi-agent control. IEEE Transactions on Robotics and Automation, 17(6):947–951, 2001. * [10] Kasra Ghasemi, Sadra Sadraddini, and Calin Belta. Compositional synthesis via a convex parameterization of assume-guarantee contracts. In Proceedings of the 23rd International Conference on Hybrid Systems: Computation and Control, pages 1–10, 2020. * [11] Antoine Girard. Reachability of uncertain linear systems using zonotopes. In International Workshop on Hybrid Systems: Computation and Control, pages 291–305. Springer, 2005. * [12] Sertac Karaman, Ricardo G Sanfelice, and Emilio Frazzoli. Optimal control of mixed logical dynamical systems with linear temporal logic specifications. In 2008 47th IEEE Conference on Decision and Control, pages 2117–2122. IEEE, 2008. * [13] Eric S Kim, Murat Arcak, and Sanjit A Seshia. Compositional controller synthesis for vehicular traffic networks. In 2015 54th IEEE Conference on Decision and Control (CDC), pages 6165–6171. IEEE, 2015. * [14] Weixuan Lin and Eilyan Bitar. Decentralized control of constrained linear systems via assume-guarantee contracts. In 2020 American Control Conference (ACC), pages 917–924. IEEE, 2020. * [15] Lars Lindemann and Dimos V Dimarogonas. Control barrier functions for multi-agent systems under conflicting local signal temporal logic tasks. IEEE Control Systems Letters, 3(3):757–762, 2019. * [16] Zhiyu Liu, Bo Wu, Jin Dai, and Hai Lin. Distributed communication-aware motion planning for multi-agent systems from STL and SpaTeL specifications. In 2017 IEEE 56th Annual Conference on Decision and Control (CDC), pages 4452–4457. IEEE, 2017. * [17] Anirudha Majumdar and Russ Tedrake. Funnel libraries for real-time robust feedback motion planning. The International Journal of Robotics Research, 36(8):947–982, 2017. * [18] Oded Maler and Dejan Nickovic. Monitoring temporal properties of continuous signals. In Formal Techniques, Modelling and Analysis of Timed and Fault-Tolerant Systems, pages 152–166. Springer, 2004. * [19] Petter Nilsson and Necmiye Ozay. Synthesis of separable controlled invariant sets for modular local control design.
In 2016 American Control Conference (ACC), pages 5656–5663. IEEE, 2016. * [20] Pierluigi Nuzzo, Alberto L Sangiovanni-Vincentelli, Davide Bresolin, Luca Geretti, and Tiziano Villa. A platform-based design methodology with contracts and related tools for the design of cyber-physical systems. Proceedings of the IEEE, 103(11):2104–2132, 2015. * [21] Chanwook Oh, Eunsuk Kang, Shinichi Shiraishi, and Pierluigi Nuzzo. Optimizing assume-guarantee contracts for cyber-physical system design. In 2019 Design, Automation & Test in Europe Conference & Exhibition (DATE), pages 246–251. IEEE, 2019. * [22] Reza Olfati-Saber, J Alex Fax, and Richard M Murray. Consensus and cooperation in networked multi-agent systems. Proceedings of the IEEE, 95(1):215–233, 2007. * [23] Yash Vardhan Pant, Houssam Abbas, Rhudii A Quaye, and Rahul Mangharam. Fly-by-logic: Control of multi-drone fleets with temporal logic objectives. In 2018 ACM/IEEE 9th International Conference on Cyber-Physical Systems (ICCPS), pages 186–197. IEEE, 2018. * [24] Vasumathi Raman, Alexandre Donzé, Mehdi Maasoumy, Richard M Murray, Alberto Sangiovanni-Vincentelli, and Sanjit A Seshia. Model predictive control with signal temporal logic specifications. In 53rd IEEE Conference on Decision and Control, pages 81–87. IEEE, 2014. * [25] Sadra Sadraddini and Russ Tedrake. Linear encodings for polytope containment problems. In 2019 IEEE 58th Conference on Decision and Control (CDC), pages 4367–4372. IEEE, 2019.
# On the synthesis and characterization of layered metal phosphorus triselenides proposed for electrochemical sensing and energy applications

(Manuscript was initially submitted to ACS Catalysis as a Comment on Ref. 22; however, it was rejected after appeal)

Yuriy Dedkov <EMAIL_ADDRESS>, Mouhui Yan, Elena Voloshina <EMAIL_ADDRESS> Department of Physics, Shanghai University, 200444 Shanghai, China; Institute of Physical and Organic Chemistry, Southern Federal University, 344090 Rostov-on-Don, Russia

###### Abstract Recent studies reported on the synthesis and characterization of several bulk crystals of layered metal triselenophosphites MPSe3 (M = transition metals). In these works, the characterization was performed via a combination of different bulk- and surface-sensitive experimental methods accompanied by DFT calculations. However, a critical examination of the available experimental and theoretical data demonstrates that these results do not support the conclusions on the electrochemical sensing and energy applications of the studied triselenophosphites. These conclusions are independent of the age of the discussed data and of any recent progress in experimental and theoretical approaches.

###### keywords: trichalcogenides, DFT, XRD, EDS, XPS

## 1 Introduction

The discovery of the unique transport properties of graphene in 2004 [1, 2] led to increased attention to graphene-based systems [3, 4, 5, 6] and also to other classes of 2D materials. Among them are h-BN [7], two-dimensional dichalcogenides [8, 9], and many others, as well as heterosystems based on these materials [10]. These studies led to the discovery of many exciting properties, which can be used in different applications, such as touch screens [11, 12], gas sensors [13, 14, 15], and highly efficient thermal conductors [16, 17]. Recently, a new class of low-dimensional materials, namely transition metal phosphorus trichalcogenides, has received a lot of attention, because these compounds are considered a new class of 2D materials with potential applications in electronics, sensing, catalysis, or energy conversion. Many experimental and theoretical works, as well as several respective review articles, on transition metal phosphorus trichalcogenides have appeared in the literature [18, 19, 20, 21, 22, 23, 24]. However, the experimental and theoretical works available in the literature on, e.g., the electrocatalytic properties of these materials sometimes contain misleading information on the synthesis and characterization. Such inaccuracies, introduced during sample preparation, experiments, and the treatment of experimental data, mean that the discussed effects and main conclusions are not supported by the presented experimental data. Recently, Gusmão et al. [22] published a work devoted to the electrochemical performance (hydrogen and oxygen evolution reactions) of the transition metal phosphorus trichalcogenides MPSe3 (M = Cd, Cr, Fe, Mn, Sn, Zn). It was found that, among all synthesised samples, FePSe3 and MnPSe3 have the highest efficiency for the hydrogen evolution reaction with good stability. At the same time, it was found that MnPSe3 has the lowest oxidation potential, although this was assigned to the presence of MnO2 in the sample. In the same work, the MPSe3 samples were synthesised using the standard chemical vapour transport method. Structural characterization was performed by means of x-ray diffraction (XRD) in the $\theta-2\theta$ geometry using the Cu K$\alpha$ line.
Morphology and chemical composition were studied using scanning electron microscopy (SEM) combined with energy-dispersive x-ray spectroscopy (EDS), applying a gentle electron beam with an energy of $2$ keV. X-ray photoelectron spectroscopy (XPS) was used for the surface analysis of the studied samples. These sample-characterization data were used to support the observed effects in the further experiments on the electrochemical performance of these materials (hydrogen and oxygen evolution reactions). However, as shown below, the claims presented in the discussed manuscript have to be reconsidered, because the experimental as well as theoretical data on the sample characterization contain several flaws and errors and these results have to be fully reanalyzed. Consequently, the main conclusions on the electrochemical performance of the transition metal phosphorus trichalcogenides also have to be reconsidered.

## 2 Experimental and computational details

Sample synthesis. Manganese ($99.9$%), phosphorus ($99.999$%), and selenium ($99.999$%) from Shanghai Macklin Biochemical Co., Ltd. and Alfa Aesar were used during synthesis. A stoichiometric amount of high-purity elements (mole ratio $\mathrm{Mn}:\mathrm{P}:\mathrm{Se}=1:1:3$, $1$ g in total) and iodine (about $20$ mg) as a transport agent were sealed into a quartz ampule (length $17$ cm, external diameter approximately $15$ mm) and kept in a two-zone furnace ($650-600^{\circ}$ C). The pressure inside the ampule was pumped down to $1\times 10^{-3}$ Torr. After 10 days of heating, the ampule was cooled down to room temperature with bulk crystals at the colder end.

Characterization. XRD patterns were collected with a Bruker D2 Phaser diffractometer using Cu K$\alpha$ ($1.54178$ Å) radiation at room temperature. Optical images were collected with an optical microscope at different magnifications. The presented reference spectra of WSe2 were collected using a SPECS PHOIBOS 150 energy analyzer and a monochromatized Al K$\alpha$ x-ray source ($h\nu=1486.6$ eV).

DFT calculations. Spin-polarised DFT calculations based on plane-wave basis sets with a $500$ eV cutoff energy were performed with the Vienna ab initio simulation package (VASP) [25, 26]. The Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional [27] was employed. The electron-ion interaction was described within the projector augmented wave (PAW) method [28] with Mn ($3p$, $3d$, $4s$), Cr ($3p$, $3d$, $4s$), Fe ($3p$, $3d$, $4s$), P ($3s$, $3p$), and Se ($4s$, $4p$) states treated as valence states. The Brillouin-zone integration was performed on $\Gamma$-centred symmetry-reduced Monkhorst-Pack meshes using a Gaussian smearing with $\sigma=0.1$ eV, except for the calculation of the density of states (DOS). For these calculations, the tetrahedron method with Blöchl corrections [29] was employed. A $12\times 12\times 4$ $k$-mesh was used for the ionic relaxations and a $24\times 24\times 8$ mesh for the DOS calculations. The DFT+$\,U$ scheme [30, 31] was adopted for the treatment of the Mn-, Cr- and Fe-$3d$ orbitals. Dispersion interactions were taken into account by adding a $1/r^{6}$ atom-atom term as parameterised by Grimme (“D2” parameterisation) [32]. During structure optimisation, the convergence criteria for energy and force were set to $10^{-5}$ eV and $1\times 10^{-2}$ eV/Å, respectively.

## 3 Results and discussions

The studied 3D MPSe3 crystals have either $C2/m$ (M = Cr) or $R\bar{3}$ (M = Mn, Zn, Cd, Fe) space group symmetry.
Both cases can be viewed as layered structures, where each stacked MPSe3 layer has $D_{3d}$ symmetry (see Fig. 1). Therefore, one might expect a hexagonal-like shape for the grown crystals, with angles between crystallite edges of either $120^{\circ}$ or $60^{\circ}$ [20]. Analysis of the SEM images of the grown crystals presented in Ref. 22 demonstrates the absence of this representative feature, indicating the low quality of the studied crystals (see Fig. 2(a) for MnPSe3; see also Figs. 2 and S1 in Ref. 22 for other MPSe3 crystals). At the same time, we present in Fig. 2(b,c) optical images of a MnPSe3 crystal grown recently in our laboratory. One can clearly see that this sample is of very high quality, with well-ordered planes and steps oriented with respect to each other by either $120^{\circ}$ or $60^{\circ}$. The measured XRD patterns of our crystals demonstrate extraordinary quality without additional phases (Fig. 2(d)) [20], whereas the data presented in Ref. 22 (Fig. S3) demonstrate the existence of other, undesired crystal phases in the studied samples. The SEM/EDS combination was used in Ref. 22 to study the morphology as well as the composition of the studied MPSe3 samples. Despite the low energy of the electron beam used during this analysis, the EDS method cannot be considered surface sensitive since, for example, at a beam energy of $5$ keV the probing depth can reach $0.5\,\mu\mathrm{m}$. Therefore, the results presented in Fig. S2 and Tab. S1 of Ref. 22 demonstrate the extremely poor quality of the studied samples in the bulk as well as at the surface: some samples show no phosphorus at all, and the level of C and O contamination is very high (it varies between $\approx 43$ at.% and $\approx 65$ at.% in total). The electronic structure of the studied MPSe3 crystals was investigated in Ref. 22 using density functional theory (DFT) within the PBE+$U$ approach with $U=3$ eV for the proper treatment of electron correlations in the valence band. Here we would like to point out that, from the data presented in Ref. 22, it is absolutely unclear whether bulk 3D or monolayer 2D phases were considered in the DFT calculations. This is crucial for the description of the band gap around the Fermi level, as its value strongly depends on the system dimensionality, as shown in Ref. 33. Thus, only a comparison of experimental results with theoretical data obtained for the same phase (here, bulk 3D) is meaningful. The calculated values of the band gap are summarized in Table 1, where we compare data from Ref. 22 with our own data obtained in the framework of the present study. Fig. 3 shows density of states (DOS) plots for a series of MPSe3 compounds calculated in the present study for the 3D phase and different values of $U$. As can be seen, the band gap values for MPSe3 in Ref. 22 are approximately a factor of $2$ smaller than the experimental values that appeared in earlier publications [20] and than the values obtained with DFT in the present study. These values are also not consistent with the recently published values for MnPSe3 [33], which are in good agreement with the available experimental data. It is also very confusing that in Ref. 22 a band gap of $0.5$ eV is claimed for FePSe3, which contradicts the metallic state of this compound seen in Fig. 3 of that reference. Moreover, a high density of states at the Fermi level indicates an unstable situation, which is not energetically favourable. When looking at the DFT results, i.e.
the DOS calculated for MPSe3, one can note a significant variation of the band gap depending on the nature of M. Comparison of the calculated band gap with, e.g., the free energy of water splitting ($1.23$ eV) may be used as an indication that MnPSe3 (with $\Delta E_{g}=1.8$ eV) is a good candidate for a water-splitting catalyst. Such a consideration, however, is oversimplified, because for a straightforward conclusion one has to take into account some important factors. First of all, any catalytic process with MPX3 will take place at the surface, and it is known that the electronic properties of these compounds strongly depend on the system dimensionality [33]. Secondly, one has to consider the presence of defects, which on the one hand are expected to enhance the catalytic activity [34] and on the other hand will influence (reduce) the band gap [33]. In the case of electrochemical water splitting, the band edges of the catalyst must straddle the redox potentials of water, which in turn depend on the $pH$ value [21]. Thus, in order to discuss the activity of MPSe3 for electrochemical applications, one has to perform a very detailed, accurate, and systematic study, for which the calculations on 3D MPSe3 are just the initial step. At the same time, it is necessary to note that the scientific weight of the drawn conclusions will depend on the reliability of this initial step. The most confusing part of Ref. 22 concerns the XPS characterization of the studied MPSe3 samples and its interpretation. Because these XPS data are used to support the respective electrochemical performance results for the studied materials, we believe that the main conclusions of the discussed manuscript have to be reconsidered and the respective data have to be fully reanalyzed. As a reminder, here we present the basic considerations for the analysis of XPS emission lines. Core levels in XPS use the notation $nl_{j}$, where $n$ is the principal quantum number, $l$ is the angular momentum quantum number and $j=l+s$ (where $s=\pm 1/2$ is the spin angular momentum number). All orbital levels, except the $s$ levels ($l=0$), give rise to a doublet with the two possible states having different binding energies. This is known as spin-orbit splitting. The peaks will also have specific area ratios based on the degeneracy of each spin state, i.e. the number of different spin combinations that can give rise to the total $j$. For example, for the $2p$ spectra, where $n=2$ and $l=1$, $j$ will be $1/2$ and $3/2$. The area ratio for the two spin-orbit peaks ($2p_{1/2}:2p_{3/2}$) will be $1:2$ (corresponding to $2$ electrons in the $2p_{1/2}$ level and $4$ electrons in the $2p_{3/2}$ level). A similar consideration is valid for the $d$ and $f$ levels, where the area ratios for the respective spin-orbit split components are $2:3$ and $3:4$, respectively. These ratios must be taken into account when analyzing spectra of the $p$, $d$ and $f$ core levels. Spin-orbit splitting values (in eV) can be found in different databases (e.g., https://xpssimplified.com/periodictable.php). Here, we present two representative examples demonstrating the application of the fit procedure to Se $3d$ spectra, where the above-described peak ratios are used. The Se $3d$ spectrum for a WSe2 single crystal, measured using a monochromatized Al K$\alpha$ x-ray source and shown in Fig.
4(a), demonstrates a clear spin-orbit split doublet with components having equal full width at half maximum (FWHM), a peak separation of $0.85$ eV and an intensity ratio of $I(3d_{3/2})/I(3d_{5/2})=0.67$. The Se $3d$ spectrum for $\alpha$-P4Se3, measured using a non-monochromatized Mg K$\alpha$ x-ray source, shows that all spin-orbit doublets must be fit in order to properly identify the species present in the sample. The $3d_{3/2}$ and $3d_{5/2}$ doublet for each chemical species is constrained to have a peak area ratio of $\approx 2:3$, equal FWHM, and a peak separation of $\approx 1$ eV (Fig. 4(b)) [35]. These examples demonstrate the universality of the discussed approach, independent of the specific sample. Coming back to the XPS data presented in Ref. 22, we have to mention the high level of C and O contamination, which does not allow a careful study of the chemical states of the elements in the studied compounds (see Fig. S5 of the discussed manuscript). Therefore, it is very difficult to draw clear conclusions that might support the further data obtained in the studies of the electrochemical performance of these materials. Moreover, the analysis and interpretation of the respective core levels presented in this manuscript do not correspond to the high standards accepted in scientific publications. As an example, we present here several XPS spectra for MnPSe3, FePSe3, and ZnPSe3 extracted from Fig. 5 of Ref. 22. The following list marks the serious deficiencies in the spectra interpretation (Fig. 4(c-e)): * 1. The fit of the Se $3d$ XPS lines is incorrect for all the data presented in the discussed manuscript. As discussed before, the ratio of integral intensities of the $3d_{3/2}$ and $3d_{5/2}$ lines has to be $\approx 0.67$ and cannot be more than $1$, as presented for the case of ZnPSe3. If necessary, several spin-orbit split lines corresponding to different species have to be introduced for a proper fit of the XPS spectra. * 2. The fit of the P $2p$ XPS line is incorrect for several data sets presented in the discussed manuscript (see the respective spectra for FePSe3 and ZnPSe3). As discussed before, the ratio of integral intensities of the $2p_{1/2}$ and $2p_{3/2}$ lines has to be $\approx 0.5$ and cannot be more than $1$, as presented for the case of ZnPSe3. If necessary, several spin-orbit split lines corresponding to different species have to be introduced. It also has to be noted that the same broad peak in the P $2p$ spectra of different MPSe3 compounds, located at a binding energy of $\approx 134$ eV, is assigned to different emission lines, P4O10 or P $2p_{1/2}$, although it is obvious that this component corresponds to the same chemical state of the P atoms, namely phosphorus oxide (PxOy). Ironically, on the basis of this inaccurate fit, the authors of Ref. 22 concluded that the concentration of PxOy is low for FePSe3 and that there is no oxidized P for ZnPSe3, which is, of course, wrong. * 3. For the Mn $2p_{3/2}$ XPS line, two components in the spectra were assigned to the emission from Mn atoms either in MnPSe3 or in MnO2 (Fig. 4(c)). However, this assignment is not supported by any additional reference XPS measurements. The quality of the XPS spectra presented in Ref. 22 is very poor, not allowing a clear identification of the components, and several Mn-oxide phases can be assigned to this “MnO2” peak: MnO, Mn2O3, or MnO2 [36]. * 4.
The presented fit of the Fe $2p$ XPS spectra is not correct. One can clearly see a well-resolved shoulder at lower binding energies for Fe $2p_{3/2}$ (marked by the arrow in Fig. 4(d)). This component is missing in the analysis; it might be assigned to another chemical state of the Fe atoms in FePSe3, or the main fitted peak may be due to FexOy in the studied sample [37, 38] while the small unassigned peak is due to the FePSe3 phase. * 5. The previous consideration is also valid for the analysis of the Zn $2p$ XPS spectra. One can clearly see a well-resolved shoulder at lower binding energies for Zn $2p_{3/2}$ (marked by the arrow in Fig. 4(e)), which is also not included in the respective analysis [39, 40]. * 6. Analysis of Table S2 of Ref. 22 shows that the binding energies of the spin-orbit split components of the Se $3d$ XPS line change order for different compounds, which is, of course, wrong: the binding energy of the $3d_{5/2}$ component always has to be smaller. However, this mistake may simply be a typing error.

## 4 Conclusions

In conclusion, we have demonstrated that the recent work of Gusmão et al. [22] contains several serious flaws and errors related to the characterization of the synthesised MPSe3 samples (M = Cd, Cr, Fe, Mn, Sn, Zn). In that work, the structural characterization was performed using XRD, the sample morphology and chemical composition were studied using the SEM/EDS combination, and XPS was used for the surface analysis of the studied samples. Because these data are considered the main results of the MPSe3 sample characterization and are used to support the respective results of the further studies of the electrochemical performance of the studied materials (hydrogen and oxygen evolution reactions), we believe that the main conclusions of the discussed manuscript are not valid and that the respective experimental data, which are criticised here, have to be carefully reconsidered and reanalyzed. Here we would like to point out that there is no “limitation period” for published data, because all the experimental and theoretical approaches used here have been available for at least the last 10 years, as demonstrated by the respective examples.

## Acknowledgement

This contribution was supported by the National Natural Science Foundation of China (Grant No. 21973059). Y.D. and E.V. acknowledge the support of the Ministry of Science and Higher Education of the Russian Federation (State assignment in the field of scientific activity, Southern Federal University, 2020).

## References

* [1] K. Novoselov, A. Geim, S. Morozov, D. Jiang, M. Katsnelson, I. Grigorieva, S. Dubonos, A. Firsov, Two-dimensional gas of massless Dirac fermions in graphene, Nature 438 (2005) 197–200. * [2] Y. Zhang, Y. Tan, H. Stormer, P. Kim, Experimental observation of the quantum Hall effect and Berry’s phase in graphene, Nature 438 (2005) 201–204. * [3] Y. Dedkov, E. Voloshina, Graphene growth and properties on metal substrates, J. Phys.: Condens. Matter 27 (2015) 303002. * [4] P. Janthon, F. Viñes, S. M. Kozlov, J. Limtrakul, F. Illas, Theoretical assessment of graphene-metal contacts, J. Chem. Phys. 138 (2013) 244701. * [5] R. Roy, R. Thapa, S. Chakrabarty, A. Jha, P. R. Midya, E. M. Kumar, K. K. Chattopadhyay, Role of oxygen functionality on the band structure evolution and conductance of reduced graphene oxide, Chem. Phys. Lett. 677 (2017) 80–86. * [6] Y. Dedkov, E.
Voloshina, Epitaxial graphene/Ge interfaces: a minireview, Nanoscale, accepted (2020), doi: 10.1039/D0NR00185F. * [7] C. Oshima, A. Nagashima, Ultra-thin epitaxial films of graphite and hexagonal boron nitride on solid surfaces, J. Phys.: Condens. Matter 9 (1997) 1–20. * [8] S. Manzeli, D. Ovchinnikov, D. Pasquier, O. V. Yazyev, A. Kis, 2D transition metal dichalcogenides, Nature Rev. Mater. 2 (2017) 147. * [9] L. Meng, S. Hu, W. Yan, J. Feng, H. Li, X. Yan, Controlled synthesis of large scale continuous monolayer WS2 film by atmospheric pressure chemical vapor deposition, Chem. Phys. Lett. 739 (2020) 136945. * [10] A. K. Geim, I. V. Grigorieva, Van der Waals heterostructures, Nature 499 (2014) 419–425. * [11] S. Bae, H. Kim, Y. Lee, X. Xu, J.-S. Park, Y. Zheng, J. Balakrishnan, T. Lei, H. R. Kim, Y. I. Song, Y.-J. Kim, K. S. Kim, B. Ozyilmaz, J.-H. Ahn, B. H. Hong, S. Iijima, Roll-to-roll production of 30-inch graphene films for transparent electrodes, Nat. Nanotech. 5 (2010) 574–578. * [12] J. Ryu, Y. Kim, D. Won, N. Kim, J. S. Park, E.-K. Lee, D. Cho, S.-P. Cho, S. J. Kim, G. H. Ryu, H.-A.-S. Shin, Z. Lee, B. H. Hong, S. Cho, Fast synthesis of high-performance graphene films by hydrogen-free rapid thermal chemical vapor deposition, ACS Nano 8 (2014) 950–956. * [13] F. Schedin, A. K. Geim, S. V. Morozov, E. W. Hill, P. Blake, M. I. Katsnelson, K. S. Novoselov, Detection of individual gas molecules adsorbed on graphene, Nature Mater. 6 (2007) 652–655. * [14] T. O. Wehling, M. I. Katsnelson, A. I. Lichtenstein, Adsorbates on graphene: Impurity states and electron scattering, Chem. Phys. Lett. 476 (2009) 125–134. * [15] V. Nagarajan, R. Chandiramouli, Adsorption behavior of NH3 and NO2 molecules on stanene and stanane nanosheets - A density functional theory study, Chem. Phys. Lett. 695 (2018) 162–169. * [16] N. Wang, M. K. Samani, H. Li, L. Dong, Z. Zhang, P. Su, S. Chen, J. Chen, S. Huang, G. Yuan, X. Xu, B. Li, K. Leifer, L. Ye, J. Liu, Tailoring the thermal and mechanical properties of graphene film by structural engineering, Small 14 (2018) 1801346. * [17] X. Wu, V. Varshney, J. Lee, Y. Pang, A. K. Roy, T. Luo, How to characterize thermal transport capability of 2D materials fairly? - Sheet thermal conductance and the choice of thickness, Chem. Phys. Lett. 669 (2017) 233–237. * [18] X. Li, T. Cao, Q. Niu, J. Shi, J. Feng, Coupling the valley degree of freedom to antiferromagnetic order, Proc. Natl. Acad. Sci. USA 110 (2013) 3738–3742. * [19] X. Li, X. Wu, J. Yang, Half-Metallicity in MnPSe3 exfoliated nanosheet with carrier doping, J. Am. Chem. Soc. 136 (2014) 11065–11069. * [20] K.-z. Du, X.-z. Wang, Y. Liu, P. Hu, M. I. B. Utama, C. K. Gan, Q. Xiong, C. Kloc, Weak Van der Waals stacking, wide-range band gap, and Raman study on ultrathin layers of metal phosphorus trichalcogenides, ACS Nano 10 (2016) 1738–1743. * [21] X. Zhang, X. Zhao, D. Wu, Y. Jing, Z. Zhou, MnPSe3 monolayer: A promising 2D visible-light photohydrolytic catalyst with high carrier mobility, Adv. Sci. 3 (2016) 1600062. * [22] R. Gusmão, Z. Sofer, D. Sedmidubský, Š. Huber, M. Pumera, The role of the metal element in layered metal phosphorus triselenides upon their electrochemical sensing and energy applications, ACS Catal. 7 (2017) 8159–8170. * [23] M. A. Susner, M. Chyasnavichyus, M. A. McGuire, P. Ganesh, P. Maksymovych, Metal thio- and selenophosphates as multifunctional van der Waals layered materials, Adv. Mater. 29 (2017) 1602852. * [24] F. Wang, T. A. Shifa, P. Yu, P. He, Y. Liu, F. Wang, Z. Wang, X. Zhan, X.
Lou, F. Xia, J. He, New frontiers on van der Waals layered metal phosphorous trichalcogenides, Adv. Funct. Mater. 28 (2018) 1802151. * [25] G. Kresse, J. Furthmuller, Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set, Phys. Rev. B 54 (1996) 11169–11186. * [26] G. Kresse, J. Hafner, Norm-conserving and ultrasoft pseudopotentials for first-row and transition elements, J. Phys.: Condens. Matter 6 (1994) 8245–8257. * [27] J. Perdew, K. Burke, M. Ernzerhof, Generalized gradient approximation made simple, Phys. Rev. Lett. 77 (1996) 3865–3868. * [28] P. E. Blöchl, Projector augmented-wave method, Phys. Rev. B 50 (1994) 17953–17979. * [29] P. E. Blöchl, O. Jepsen, O. Andersen, Improved tetrahedron method for Brillouin-zone integrations, Phys. Rev. B 49 (1994) 16223–16233. * [30] V. I. Anisimov, A. I. Poteryaev, M. A. Korotin, A. O. Anokhin, G. Kotliar, First-principles calculations of the electronic structure and spectra of strongly correlated systems: dynamical mean-field theory, J. Phys.: Condens. Matter 9 (1997) 7359–7367. * [31] S. L. Dudarev, G. A. Botton, S. Y. Savrasov, C. J. Humphreys, A. P. Sutton, Electron-energy-loss spectra and the structural stability of nickel oxide: An LSDA+U study, Phys. Rev. B 57 (1998) 1505–1509. * [32] S. Grimme, Semiempirical GGA-type density functional constructed with a long-range dispersion correction, J. Comput. Chem. 27 (2006) 1787–1799. * [33] J. Yang, Y. Zhou, Q. Guo, Y. Dedkov, E. Voloshina, Electronic, magnetic and optical properties of MnPX3 (X = S, Se) monolayers with and without chalcogen defects: a first-principles study, RSC Adv. 10 (2020) 851–864. * [34] X. Li, Y. Fang, J. Wang, B. Wei, K. Qi, H. Y. Hoh, Q. Hao, T. Sun, Z. Wang, Z. Yin, Y. Zhang, J. Lu, Q. Bao, C. Su, High-yield electrochemical production of large-sized and thinly layered NiPS3 flakes for overall water splitting, Small 15 (2019) 1902427. * [35] J. R. Rollo, G. R. Burns, W. T. Robinson, R. J. H. Clark, H. M. Dawes, M. B. Hursthouse, A new polymorph of tetraphosphorus triselenide, $\alpha^{\prime}$-P4Se3: an x-ray, Raman, and XPS study of the normal crystalline phases and a DSC study of the crystalline and the orientationally disordered phases of P4Se3, Inorg. Chem. 29 (1990) 2889–2894. * [36] M. C. Biesinger, B. P. Payne, A. P. Grosvenor, L. W. M. Lau, A. R. Gerson, R. S. C. Smart, Resolving surface chemical states in XPS analysis of first row transition metals, oxides and hydroxides: Cr, Mn, Fe, Co and Ni, Appl. Surf. Sci. 257 (2011) 2717–2730. * [37] Y. S. Dedkov, M. Fonin, U. Ruediger, C. Laubschat, Graphene-protected iron layer on Ni(111), Appl. Phys. Lett. 93 (2008) 022509. * [38] Y. S. Dedkov, A. Generalov, E. N. Voloshina, M. Fonin, Structural and electronic properties of Fe3O4/graphene/Ni(111) junctions, Phys. Status Solidi RRL 5 (2011) 226–228. * [39] M. Wang, L. Jiang, E. J. Kim, S. H. Hahn, Electronic structure and optical properties of Zn(OH)2: LDA+U calculations and intense yellow luminescence, RSC Adv. 5 (2015) 87496–87503. * [40] D. Kundu, B. D. Adams, V. Duffort, S. H. Vajargah, L. F. Nazar, A high-capacity and long-life aqueous rechargeable zinc battery using a metal oxide intercalation cathode, Nature Energy 1 (2016) 928.

Table 1: Comparison of band gaps obtained in different works using theoretical (DFT) and experimental methods.

Compound | PBE+$U$, $U$=3 eV (Ref. 22) | PBE+$U$+D2, $U$=3 eV | PBE+$U$+D2, $U$=4 eV | PBE+$U$+D2, $U$=5 eV | HSE06+D2 (Ref. 33) | Experiment (Ref. 20)
---|---|---|---|---|---|---
MnPSe3 | $1.40$ eV | $1.60$ eV | $1.72$ eV | $1.80$ eV | $2.50$ eV | $2.25$ eV
SnPSe3 | $1.20$ eV | | | | |
CrPSe3 | $0.25$ eV | $0.39$ eV | $0.70$ eV | $0.96$ eV | |
FePSe3 | $0.50$ eV | $0.95$ eV | $1.04$ eV | $1.08$ eV | | $1.20$ eV

Figure 1: Top (a) and side (b) views of a single layer of MPSe3. Spheres of different size/colour represent ions of different type.

Figure 2: (a) SEM image of MnPSe3 extracted from Ref. 22 (white bar corresponds to $10\,\mu\mathrm{m}$). The MnPSe3 crystal obtained in our experiments: (b) general view of the crystal (grid size is $1\,\mathrm{mm}\times 1\,\mathrm{mm}$), (c) optical microscopy image, (d) respective XRD patterns.

Figure 3: DOS of bulk MPSe3 (M = Mn, Fe, Cr) in the antiferromagnetic state calculated in the present work for $U=3$ eV and $U=5$ eV.

Figure 4: (a) The Se $3d$ XPS spectrum measured for WSe2 using the monochromatized Al K$\alpha$ line. Respective spin-orbit split $3d_{3/2}$ and $3d_{5/2}$ emission lines are marked. (b) The Se $3d$ XPS spectrum for $\alpha$-P4Se3 measured using a non-monochromatized Mg K$\alpha$ x-ray source [35]. Respective spin-orbit split emission lines for two kinds of P species are marked. (c-e) Examples of XPS spectra for MnPSe3, FePSe3, and ZnPSe3 extracted from Ref. 22 and discussed in the present commentary.
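As a purely numerical illustration of the doublet-fitting constraints discussed in the text (equal FWHM, a fixed spin-orbit splitting, and the $\approx 2:3$ ratio of the $3d_{3/2}$ and $3d_{5/2}$ areas), the following minimal Python sketch fits a single constrained Se $3d$ doublet. The Gaussian peak shape, the splitting value, and the synthetic spectrum are illustrative assumptions only and do not correspond to any measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

SPLIT = 0.85    # assumed Se 3d spin-orbit splitting in eV (WSe2-like example)
RATIO = 2.0 / 3  # I(3d_3/2) / I(3d_5/2), fixed by the level degeneracies (4:6)

def gaussian(x, center, area, fwhm):
    """Area-normalized Gaussian peak."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return area / (sigma * np.sqrt(2.0 * np.pi)) * np.exp(-(x - center) ** 2 / (2.0 * sigma ** 2))

def se3d_doublet(x, e_52, area_52, fwhm):
    """One Se 3d doublet: the 3d_3/2 partner is fully determined by the 3d_5/2 peak."""
    return (gaussian(x, e_52, area_52, fwhm)
            + gaussian(x, e_52 + SPLIT, RATIO * area_52, fwhm))

# Synthetic spectrum for demonstration only.
energy = np.linspace(52.0, 58.0, 400)
spectrum = se3d_doublet(energy, 54.6, 1.0, 0.6) + np.random.normal(0.0, 0.01, energy.size)

popt, _ = curve_fit(se3d_doublet, energy, spectrum, p0=[54.5, 1.0, 0.7])
print("3d_5/2 position, area, FWHM:", popt)
```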
# Discriminative Semantic Transitive Consistency for Cross-Modal Learning

Kranti Kumar Parida, Dept. of Computer Science and Engineering, Indian Institute of Technology Kanpur, India <EMAIL_ADDRESS> and Gaurav Sharma, Dept. of Computer Science and Engineering, Indian Institute of Technology Kanpur, India <EMAIL_ADDRESS>

###### Abstract Cross-modal retrieval is generally performed by projecting and aligning the data from two different modalities onto a shared representation space. This shared space often also acts as a bridge for translating the modalities. We address the problem of learning such a representation space by proposing and exploiting the property of Discriminative Semantic Transitive Consistency, i.e. ensuring that data points are correctly classified even after being transferred to the other modality. Along with semantic transitive consistency, we also enforce the traditional distance-minimizing constraint, which brings the projections of corresponding data points from the two modalities closer in the representation space. We analyze and compare the contributions of the two loss terms and their interaction for the task. In addition, we incorporate semantic cycle consistency for each of the modalities. We empirically demonstrate the better performance owing to the different components with clear ablation studies. We also provide qualitative results to support the proposals.

## 1 Introduction

With the rapid growth of digital devices and content, there is a plethora of data available today in many different modalities, e.g. audio, video, text, NIR, 3D and so on. The immediate challenge is to enable search and retrieval of appropriate content in all modalities given a query in any one modality. The methods for search and retrieval, when the query and the gallery both come from the same modality, have been studied extensively and are now widely used in day-to-day life. However, the task of cross-modal retrieval, i.e. when the query and gallery are from different modalities, is more challenging and farther from widespread adoption. Cross-modal retrieval involves a critical step of aligning the different modalities in an intermediate space, after respective projections. It has been studied as an interesting task for a very long time, and many different approaches have been proposed, e.g. canonical correlation analysis (CCA) [1] maximizes the correlation between the modalities in the common space [2], and autoencoders [3] align the two modalities by enforcing reconstruction of the data in one modality given the other modality as input. Deep learning has been very successful in learning representations from raw input data for different modalities alike, e.g. videos [4], text [5] and audio [6], all of which are processed with deep neural networks to give state-of-the-art results in many uni-modal tasks. Approaches based on deep learning have also been proposed for the alignment of different modalities. The most common approach for the alignment is to use initial independent layers for each of the modalities followed by common layers [7, 8]. The independent layers act as non-linear projections of each modality into a common representation space, and the following common layers act as a shared classifier working in the common representation space for both modalities. The classifier, learned in the common representation space with annotated data from both modalities, aligns the modalities to enable cross-modal retrieval in this space.
To further strengthen the alignment, along with the classification loss, losses minimizing the $\ell_{2}$ distance or the negative cosine similarity between paired data from the different modalities are also used [9]. Such losses work on the projections in the common space and enforce that data points which correspond in the two modalities, e.g. the audio and video from the same clip, are closer to each other either in absolute terms ($\ell_{2}$, cosine loss) or in relative terms (triplet loss) wrt. points which do not correspond, e.g. a video from a different clip. We present a simple novel idea of discriminative semantic transitive consistency (DSTC), which is inspired by works on cyclic consistency [10, 11] and is adapted for the task of cross-modal (audio-visual or image-text) retrieval. We argue that the loss functions used in earlier works, which enforce the data points to lie close together in the common representation space, might be too strict, given the final goal of semantic category-based retrieval. The case is similar for loss functions enforcing cyclic consistency, i.e. enforcing the representations to return to the exact same point when translated back to the originating modality. Instead, we propose to enforce a weaker form of correspondence using the individual representation spaces, i.e. we deem it sufficient if the projected points from one modality belong to the same class in the representation space of the other modality, as well as when translated back to the representation space of the originating modality. In effect, the DSTC loss and its cyclic sibling are satisfied even if the audio and video data from the same clip do not coincide with each other, as long as they belong to the same class in both representation spaces and maintain their class membership when translated back.

Figure 1: Block diagram showing the difference between existing (left) [8, 7] and proposed (right) approaches. In the existing approaches (left), there is a common representation space for both modalities, but in the proposed approach (right) each modality has an individual representation space and the respective translators are used for aligning with the representation space of the other modality.

The architecture we propose also differs from the popular existing architectures [8, 7]. While in the existing architectures a shared classifier is learned in the common representation space, we learn individual classifiers for the modalities. Fig. 1 compares the architecture we use to the traditional architectures. Instead of learning a common representation space for both modalities, we first learn a discriminative feature space individually for each of the modalities and then use translators to separately align one modality to the other. These translators create the bridges which enable cross-modal retrieval. They give the proposed semantic consistency its discriminative and transitive nature, i.e. if an input $x:$‘dog’ in the audio modality, and $x:y$ via translation to the video modality, then $y:$‘dog’ by transitivity, the property preserved being that of semantic class discrimination. While the usual distance-based losses act on the $x:y$ step, DSTC acts on the $y:$‘dog’ step. Even within our framework, both the (cyclic) DSTC losses and the pointwise correspondence based ones, after translation to the respective modality spaces, can be used together. We investigate such combinations and show the complementary strengths of both.
We also give extensive empirical evaluations, quantitative and qualitative, to support our proposals.

## 2 Related Work

Our work is closely related to the topics of multimodal learning, cross-modal retrieval, and data/style transfer and translation.

Multi-modal Learning. Multi-modal learning approaches can be broadly divided into two categories: (i) using an already learnt model in one modality to learn or perform a task in the other modality, and (ii) using both modalities to improve task performance compared to using a single modality. In the first kind of approaches, many different tasks have been studied recently, such as learning an audio representation from images [6], learning an image representation from audio [12], recognizing emotions in audio by transferring knowledge from video [13], pre-training an action classification network for video by getting the labels from audio [14], and using a video pre-trained network to track vehicles from audio [15]. Cross-modal data is also used in the framework of self-supervised learning in [16, 17] to learn better representations in both modalities by exploiting their correspondence in the data. In the second kind, a variety of different approaches have been proposed, such as domain adaptation [18], sound source separation [19, 20], depth estimation and visual navigation [21, 9], zero-shot learning [9, 22] and person identification [23], with the aim of improving performance using multiple modalities together. In another line of work, cross-modal generation is performed [24], where the goal is to reconstruct one modality given the other modality as input.

Cross-modal Retrieval. Cross-modal retrieval approaches map data from both modalities onto a common representation to perform retrieval. Such approaches can be broadly divided into three types on the basis of the training strategy: (i) learning with full supervision [25, 26, 27, 8, 28, 29, 30, 31], (ii) zero-shot retrieval (learning with limited supervision) [9, 22, 11, 32, 33], and (iii) self-supervised learning [17, 34, 35]. The proposed method is of the first kind, and requires full supervision for training to perform audio-to-video and image-to-text cross-modal retrieval. Closely related approaches for the problem use a shared classifier along with pointwise correspondence [7, 8] in the common representation space, whereas the proposed method uses modality-specific classifiers with translator networks to bridge from one modality to the other for cross-modal retrieval. Our approach lends more flexibility to the projections from each individual modality into the corresponding representation space compared to the shared-classifier approaches. The proposed loss then enforces that, even after translation to the other modality, a point retains its class membership wrt. the classifier of the other modality. The main motivation here is that enforcing pointwise correspondence is too strong a condition for alignment for the task of semantic cross-modal retrieval. For example, consider a ‘dog’ barking audio/video sample: with pointwise correspondence the network will force the audio to be translated to that particular video, whereas it should suffice that the audio is translated to a video which belongs to the class ‘dog’ as well. In one of the existing approaches for cross-modal retrieval using hash codes [36], the authors have used a similar idea of cycle consistency with pointwise correspondence to generate data from a common hash code.

Data Translation.
Image translation has been a popular topic recently, where a particular type of image is converted into a different type, e.g. day to night [37], gray to RGB [38] and summer to winter [39]. Along similar lines, video-to-video synthesis approaches [40, 41, 42] have also been proposed, where photorealistic videos are generated from a sequence of semantic segmentation masks. All these methods are trained in an adversarial fashion; however, most of them exploit the pointwise correspondence between the two types of data. A variety of different approaches [10, 43, 44] are also used for translating data points without any explicit correspondence-annotated training data. These approaches use the concept of cycle consistency to transfer the data from one type to another, i.e. the original input and the double-translated output (from the original space to the other space and back) should coincide with each other. Cycle consistency has been used in many tasks such as image-to-image translation [10], canonical surface mapping [44], cross-modal retrieval [45], zero-shot learning [46], etc. It enforces pointwise correspondence after double translation, i.e. to the other modality and back. In contrast, here we enforce semantic class consistency after completing the cycle, i.e. the point need not coincide with the originating point; it is sufficient if the class membership is preserved after double translation.

## 3 Approach

#### Notations and Problem
We work with paired data for both modalities, $\mathcal{D}=\\{(\mathbf{x}_{i},\mathbf{y}_{i})\\}_{i=1}^{N}$, $\mathbf{x}_{i}\in\mathbb{R}^{d_{1}},\mathbf{y}_{i}\in\mathbb{R}^{d_{2}}$, $N$ being the total number of data points. Further, each pair of data has a class $\mathbf{z}_{i}=(z_{i1},\ldots,z_{iC})\in\\{0,1\\}^{C}$ associated with it, encoded as a one-hot vector, with $C$ being the total number of classes in the dataset. The problem of semantic cross-modal retrieval is, given a query from one modality, to retrieve data from the other modality. A retrieval result is valid when it has the same class label as the query, i.e. for the query $\mathbf{x}_{i}$, $\mathbf{y}_{j}$ is a valid retrieval if $\mathbf{z}_{i}=\mathbf{z}_{j}$.

Figure 2: Architecture, information flow, and losses. The architecture consists of two each of encoders $E$, which project the modalities onto a feature space, translators $T$, which translate between the modalities in this space, and classifiers, which predict the classes for each modality. Cross-modal retrieval is done in the representation space of the query modality using the translators, which are used to project the gallery examples onto the representation space of the query modality. We show all the losses with their information flows, originating from modality 1. All of the losses shown (CE, DSTC, cDSTC, PT and cPT) have a symmetrical part originating from modality 2, and both parts are added to get the corresponding full losses, as detailed in the section below.

### 3.1 Network Architecture and Losses

The proposed network, Fig. 2, contains six different sub-networks: two encoders $\mathbf{E_{x}}$ and $\mathbf{E_{y}}$ for encoding the individual modalities, two classification networks $\mathbf{C_{x}}$ and $\mathbf{C_{y}}$, one for each of the modalities, and two translation networks $\mathbf{T_{xy}}$ and $\mathbf{T_{yx}}$ for translating each modality to the other, respectively. Each of the sub-networks is itself an MLP; the details of the number of layers and their sizes are given in the Experiments section.
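A minimal PyTorch sketch of these six sub-networks is given below. It is our own illustration rather than released code; the layer widths, feature dimension, and class count are placeholder assumptions (the 1024-dimensional inputs follow the feature sizes mentioned in the Experiments section).

```python
import torch
import torch.nn as nn

class CrossModalNet(nn.Module):
    """Six sub-networks: per-modality encoders E, classifiers C, and translators T."""

    def __init__(self, d1=1024, d2=1024, d_feat=256, n_classes=23):
        super().__init__()
        # Encoders E_x, E_y: project each modality into its own representation space.
        self.E_x = nn.Sequential(nn.Linear(d1, 512), nn.ReLU(), nn.Linear(512, d_feat))
        self.E_y = nn.Sequential(nn.Linear(d2, 512), nn.ReLU(), nn.Linear(512, d_feat))
        # Classifiers C_x, C_y: one per modality, trained in step 1 and then frozen.
        self.C_x = nn.Linear(d_feat, n_classes)
        self.C_y = nn.Linear(d_feat, n_classes)
        # Translators T_xy, T_yx: bridge one representation space to the other.
        self.T_xy = nn.Sequential(nn.Linear(d_feat, 128), nn.ReLU(), nn.Linear(128, d_feat))
        self.T_yx = nn.Sequential(nn.Linear(d_feat, 128), nn.ReLU(), nn.Linear(128, d_feat))

    def forward(self, x, y):
        fx, fy = self.E_x(x), self.E_y(y)
        return {
            "logits_x": self.C_x(fx),               # used by the cross-entropy losses
            "logits_y": self.C_y(fy),
            "logits_x2y": self.C_y(self.T_xy(fx)),  # used by the DSTC losses
            "logits_y2x": self.C_x(self.T_yx(fy)),
            "feat_x": fx, "feat_y": fy,
            "x2y": self.T_xy(fx), "y2x": self.T_yx(fy),  # used by the pointwise losses
        }
```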
The network takes a pair of data points $\\{(\mathbf{x}_{i},\mathbf{y}_{i})\\}$ as input while training, and uses the following losses.

#### Cross entropy losses. We use the standard cross-entropy losses for training the modality classifiers, $\begin{split}\mathcal{L}_{CE}=-\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C}z_{ic}\big{[}\log\left(\mathbf{C_{x}}(\mathbf{E_{x}}(\mathbf{x}_{i}))\right)+\log\left(\mathbf{C_{y}}(\mathbf{E_{y}}(\mathbf{y}_{i}))\right)\big{]}\end{split}$ (1)

#### Discriminative Semantic Transitive Consistency (DSTC) losses. The DSTC losses enforce that once an input from one modality is translated into the other modality, it maintains the same class, i.e. if $\mathbf{x}:$‘dog’ and $\mathbf{x}:\mathbf{y}$ by translation, then $\mathbf{y}:$‘dog’ as well, by transitivity of the property of discriminative class membership. Formally, the loss is given by $\begin{split}\mathcal{L}_{DSTC}=-\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C}z_{ic}\big{[}\log\left(\mathbf{C_{y}}(\mathbf{T_{xy}}(\mathbf{E_{x}}(\mathbf{x}_{i})))\right)+\log\left(\mathbf{C_{x}}(\mathbf{T_{yx}}(\mathbf{E_{y}}(\mathbf{y}_{i})))\right)\big{]}.\end{split}$ (2)

#### Cyclic DSTC (cDSTC) losses. The cyclic version of the DSTC loss ensures that when an input is double translated, i.e. translated to the other modality and then back to the original modality, it maintains its class, $\begin{split}\mathcal{L}_{cDSTC}=-\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C}z_{ic}\big{[}\log\left(\mathbf{C_{x}}(\mathbf{T_{yx}}(\mathbf{T_{xy}}(\mathbf{E_{x}}(\mathbf{x}_{i}))))\right)+\log\left(\mathbf{C_{y}}(\mathbf{T_{xy}}(\mathbf{T_{yx}}(\mathbf{E_{y}}(\mathbf{y}_{i}))))\right)\big{]}.\end{split}$ (3)

#### Pointwise consistency losses. Apart from the DSTC losses, we also use the paired data from the two modalities to enforce that the projection from one modality lies close to that from the other, after translation, in the respective representation spaces, i.e. $\begin{split}\mathcal{L}_{PC}=\frac{1}{N}\sum_{i=1}^{N}\bigg{[}\lVert\mathbf{E_{x}}(\mathbf{x}_{i})-\mathbf{T_{yx}}(\mathbf{E_{y}}(\mathbf{y}_{i}))\rVert_{2}^{2}+\lVert\mathbf{E_{y}}(\mathbf{y}_{i})-\mathbf{T_{xy}}(\mathbf{E_{x}}(\mathbf{x}_{i}))\rVert_{2}^{2}\bigg{]}\end{split}$ (4)

#### Cyclic Pointwise consistency losses. Similar to the cyclic DSTC loss, we enforce pointwise consistency after double translation of a data point from one modality to the other and then back to the original modality, i.e. $\begin{split}\mathcal{L}_{cPC}=\frac{1}{N}\sum_{i=1}^{N}\bigg{[}\lVert\mathbf{E_{x}}(\mathbf{x}_{i})-\mathbf{T_{yx}}(\mathbf{T_{xy}}(\mathbf{E_{x}}(\mathbf{x}_{i})))\rVert_{2}^{2}+\lVert\mathbf{E_{y}}(\mathbf{y}_{i})-\mathbf{T_{xy}}(\mathbf{T_{yx}}(\mathbf{E_{y}}(\mathbf{y}_{i})))\rVert_{2}^{2}\bigg{]}\end{split}$ (5)

We also experiment with the cosine distance instead of the Euclidean distance for both losses $\mathcal{L}_{PC}$ and $\mathcal{L}_{cPC}$ in eq. 4 and eq. 5, respectively. We do this by simply $\ell_{2}$-normalizing the vectors before the Euclidean distance computation.

### 3.2 Information Flow and Training and Inference

The final loss for training is given by a weighted sum of the above losses, i.e. $\mathcal{L}=\mathcal{L}_{CE}+\alpha\mathcal{L}_{PC}+\beta\mathcal{L}_{DSTC}+\gamma\mathcal{L}_{cPC}+\delta\mathcal{L}_{cDSTC}$ (6) where $\alpha$, $\beta$, $\gamma$, $\delta$ are hyperparameters used to control the relative weights of the individual losses. To train the network, we follow a 2-step approach, with the information flow for the different losses shown with different colors in Fig.
2. In the first step, we individually train both modalities for the task of classification, by turning off the transfer modules (i.e. setting $\alpha$, $\beta$, $\gamma$, $\delta$ $=0$). In the second step, we learn to translate the modalities by jointly training the encoder and translator networks. The motivation for the architecture and for the two-step training procedure are closely connected to each other. While previous works, e.g. [7, 8], force both alignment of the two modalities in the common representation space and good classification with a shared classifier _simultaneously_, we factorize training into two steps in the hope of making learning easier. Learning and freezing the classifiers in the first step gives us a good individual representation space which is discriminative for each of the two modalities. In this step there is no alignment between the modalities; we achieve the alignment in the subsequent step by freezing the classifiers and training the translators and encoders. We utilize the translator networks’ capacity to do the alignment, such that the classification boundaries defined by the frozen classifier networks are respected. The architecture and the training procedure thus allow us to separate the two aspects of learning: aligning the representations and keeping them discriminative. At test time, cross-modal retrieval from one modality to the other is done using distance-based scoring and sorting, for query $\mathbf{x}$ and gallery $\{\mathbf{y}_{j}\}$, as $\begin{split}s_{j}=\mathrm{score}(\mathbf{E_{x}}(\mathbf{x}),\mathbf{T_{yx}}(\mathbf{E_{y}}(\mathbf{y}_{j}))),\;\textrm{output}=\textrm{argsort}(\{s_{j}\})\end{split}$ (7) where the scoring function can be $\mathrm{score}(a,b)=-\|a-b\|^{2}$ or $\cos(a,b)$.

## 4 Experiments

Sl. No. | CE | PT | DSTC | cPT | cDSTC | Cos./Cos. A2V | Cos./Cos. V2A | Cos./Cos. Both | Cos./Euc. A2V | Cos./Euc. V2A | Cos./Euc. Both | Euc./Euc. A2V | Euc./Euc. V2A | Euc./Euc. Both | Euc./Cos. A2V | Euc./Cos. V2A | Euc./Cos. Both | Class Avg. (Cos.) A2V | Class Avg. (Cos.) V2A | Class Avg. (Cos.) Both
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1 | ✗ | ✓ | ✗ | ✗ | ✗ | 27.92 | 27.57 | 27.75 | 20.29 | 23.90 | 22.09 | 30.24 | 32.81 | 31.53 | 32.07 | 34.38 | 33.23 | 25.14 | 25.73 | 25.43
2 | ✗ | ✗ | ✓ | ✗ | ✗ | 50.13 | 51.84 | 50.98 | 29.82 | 46.79 | 38.31 | 28.67 | 47.39 | 38.03 | 49.65 | 51.67 | 50.66 | 35.43 | 37.56 | 36.49
3 | ✓ | ✓ | ✗ | ✗ | ✗ | 27.22 | 26.15 | 26.68 | 23.83 | 20.37 | 22.10 | 49.30 | 47.87 | 48.59 | 50.65 | 49.33 | 49.99 | 41.00 | 41.21 | 41.10
4 | ✓ | ✗ | ✓ | ✗ | ✗ | 51.71 | 52.07 | 51.89 | 49.38 | 43.61 | 46.49 | 48.71 | 44.40 | 46.55 | 51.50 | 52.12 | 51.81 | 36.17 | 36.98 | 36.57
5 | ✓ | ✓ | ✓ | ✗ | ✗ | 51.73 | 51.97 | 51.85 | 50.07 | 44.11 | 47.09 | 54.59 | 50.04 | 52.31 | 55.30 | 54.12 | 54.71 | 43.51 | 42.53 | 43.02
6 | ✓ | ✓ | ✗ | ✓ | ✗ | 28.21 | 26.85 | 27.53 | 25.73 | 24.11 | 24.92 | 48.23 | 45.50 | 46.86 | 49.36 | 46.96 | 48.16 | 39.77 | 39.83 | 39.80
7 | ✓ | ✗ | ✓ | ✗ | ✓ | 52.93 | 51.38 | 52.16 | 50.23 | 42.33 | 46.28 | 49.35 | 43.38 | 46.36 | 52.56 | 51.61 | 52.09 | 37.62 | 36.31 | 36.96
8 | ✓ | ✓ | ✓ | ✓ | ✗ | 51.66 | 51.90 | 51.78 | 50.74 | 44.82 | 47.78 | 54.15 | 50.15 | 52.15 | 55.33 | 53.86 | 54.59 | 43.27 | 42.63 | 42.95
9 | ✓ | ✓ | ✓ | ✗ | ✓ | 53.13 | 51.31 | 52.22 | 50.70 | 42.97 | 46.83 | 55.10 | 50.43 | 52.76 | 56.72 | 54.30 | 55.51 | 44.55 | 42.68 | 43.61
10 | ✓ | ✓ | ✓ | ✓ | ✓ | 53.28 | 51.27 | 52.27 | 51.10 | 43.67 | 47.38 | 55.48 | 51.50 | 53.49 | 56.88 | 54.75 | 55.82 | 44.33 | 43.03 | 43.68

Table 1: Contribution of different loss terms on retrieval performance (mAP) for the ‘val’ set of AudioSetZSL using various distance methods at training and testing time. E.g., the column group Euc./Cos. means that Euclidean distance was used during training and Cosine distance was used during testing.

In our experiments we use two kinds of cross-modal datasets. The first kind contains the audio and video modalities, whereas the second kind contains the image and text modalities. For audio-video, we use a recently proposed dataset, namely AudiosetZSL [9], for the task of multi-modal zero-shot learning involving both the audio and video modalities. It is a multiclass extension of the AudioSet dataset [47] and is also large scale, with around $130$k samples. We consider the $23$ ‘seen’ classes out of the total $33$ classes available in the dataset, as the ‘unseen’ classes are not available during the training or the pre-training of the network, which might affect the quality of the features and hence the performance of the network. For image-text, we use the two most popular datasets, Wikipedia [48] and Pascal Sentence [49], having $10$ and $20$ classes respectively.

### 4.1 Datasets and Implementation Details

Audio-Video Dataset. AudiosetZSL [9] has both audio and video modalities and is provided with train, val and test splits. We use the same split for the ‘seen’ classes’ images. We use the features provided by the authors in [9]. The features for both audio and video are $1024$ dimensional each and are extracted using pre-trained networks. We also perform weighted random sampling for training as the dataset is highly imbalanced and follows a long-tailed distribution. We use 2-layer MLPs for both encoders, single-layer MLPs for the classifiers, and symmetric hour-glass type networks with 3 hidden layers for the transfer modules (see supplementary for details). We set all the losses to have equal weights, i.e. $\alpha,\beta,\gamma,\delta=1.0$, chosen by validation.
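For concreteness, the combined objective of eq. (6) can be sketched in a few lines of PyTorch. The module and function names below (`enc_x`, `trans_xy`, `dstc_objective`, etc.) are our own placeholders for illustration, not the released implementation, and the Euclidean variant of the pointwise terms is shown.

```python
import torch
import torch.nn.functional as F

def dstc_objective(enc_x, enc_y, trans_xy, trans_yx, cls_x, cls_y,
                   x, y, labels, alpha=1.0, beta=1.0, gamma=1.0, delta=1.0):
    """Combined loss of eq. (6): L = L_CE + a*L_PC + b*L_DSTC + g*L_cPC + d*L_cDSTC."""
    ex, ey = enc_x(x), enc_y(y)              # embeddings E_x(x_i), E_y(y_i)
    txy, tyx = trans_xy(ex), trans_yx(ey)    # translations x -> y and y -> x
    cx, cy = trans_yx(txy), trans_xy(tyx)    # double translations back to the source modality

    # Cross-entropy terms: eq. (1) on the original embeddings, eq. (2) after one
    # translation, eq. (3) after the cyclic (double) translation.
    l_ce    = F.cross_entropy(cls_x(ex), labels) + F.cross_entropy(cls_y(ey), labels)
    l_dstc  = F.cross_entropy(cls_y(txy), labels) + F.cross_entropy(cls_x(tyx), labels)
    l_cdstc = F.cross_entropy(cls_x(cx), labels) + F.cross_entropy(cls_y(cy), labels)

    # Pointwise terms of eq. (4) and eq. (5): squared L2 distances, averaged over the batch.
    l_pc  = ((ex - tyx).pow(2).sum(1) + (ey - txy).pow(2).sum(1)).mean()
    l_cpc = ((ex - cx).pow(2).sum(1) + (ey - cy).pow(2).sum(1)).mean()

    return l_ce + alpha * l_pc + beta * l_dstc + gamma * l_cpc + delta * l_cdstc
```

For the Cosine variant, `ex`, `ey`, the translated and the double-translated vectors would simply be $\ell_{2}$-normalized (e.g. with `F.normalize`) before the distance computation. In the two-step schedule of Sec. 3.2, the first step optimizes only `l_ce` for each modality; `cls_x` and `cls_y` are then frozen and the full objective above is used to train the encoders and translators.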
We train the network with the Adam optimizer and an initial learning rate of $10^{-4}$, which is subsequently changed to $10^{-10}$ after the classifier training. Image-Text Datasets. We use the Wikipedia [48] and Pascal Sentence [49] datasets. The former has $2866$ image-text pairs from $10$ classes while the latter has $1000$ pairs from $20$ classes. We extract the features (described in the supplementary) following [8]. We also fix the encoder and classifier architectures for both modalities following [8] for a fair comparison. The encoders and classifiers are both single-hidden-layer MLPs, and the translators are hour-glass type networks with a single hidden layer as well. We set the hyperparameters $\alpha=10^{1}$, $\beta=1.0$, $\gamma=10^{3}$, $\delta=10^{2}$ for Wikipedia and $\alpha=10^{1}$, $\beta=1.0$, $\gamma=10^{-2}$, $\delta=1.0$ for the Pascal Sentence dataset. We use a learning rate of $10^{-4}$ for both datasets.

### 4.2 Ablation Experiments

We show the contribution of the individual losses in the training of the network for the task of cross-modal retrieval on AudiosetZSL in Tab. 1. We report the mean average precision (mAP) score used to evaluate the retrieval performance using two distance functions, Euclidean and Cosine, at train and test time. Each column group in Tab. 1 refers to one of the combinations used for distance calculation at train and test time respectively, e.g. Euclidean, Cosine means that Euclidean distance was used during training and Cosine distance was used during evaluation/testing. Since the AudiosetZSL dataset is highly imbalanced, we also report the class-averaged mAP (AP is averaged for each query in the class to get the mAP per class, which is then averaged over all classes to get the class-averaged mAP). We observe that the retrieval performance is better when using Cosine distance at test time even if the training was done using Euclidean distance (eq. 4, eq. 5). Similar observations have been reported in earlier works [11, 33] but potential explanations are missing. We analyze this behaviour at the end of this section. DSTC vs. PT (rows 1 and 2): We observe that the DSTC loss consistently outperforms the PT loss in all five metrics ($27.75$ vs. $50.98$, $22.09$ vs. $38.31$, $31.53$ vs. $38.03$, $33.23$ vs. $50.66$, $25.43$ vs. $36.49$, rows 1 and 2). This shows that the discriminative loss is more suitable than the pointwise loss for this task; the observation is also intuitive, as there is no semantic information in the pointwise loss, whereas the discriminative loss enforces the semantic relationship between the two modalities. DSTC vs. PT with CE loss (rows 3, 4 and 5): We now add the CE loss individually to PT (row 3) and DSTC (row 4) respectively, and also combine all three together (row 5). We observe that CE+DSTC (row 4) consistently outperforms CE+PT (row 3) when using Cosine distance either in training or testing (col. 1, col. 2, col. 4) ($51.89$ vs. $26.68$, $46.49$ vs. $22.10$, $51.81$ vs. $49.99$), except when using Euclidean distance both for training and testing (col. 3) ($46.55$ vs. $48.59$). This discrepancy is similar to the Euclidean/Cosine difference mentioned earlier and discussed at the end of this section. We further observe that CE+DSTC (row 4) performs better than CE+PT (row 3) in individual cosine distance mAP ($51.81$ vs. $49.99$) but not in class average mAP ($36.57$ vs. $41.10$).
The observed performance can be attributed to the fact that some examples are affected relatively more by the pointwise loss whereas other examples are affected more by the discriminative loss. This observation is further reinforced by the fact that adding all three losses (row 5) improves the performance significantly in almost all cases ($51.85$ vs. $51.89$, $46.49$ vs. $47.09$, $46.55$ vs. $52.31$, $51.81$ vs. $54.71$, $36.57$ vs. $43.02$). The difference in performance between the global average and the class average case for the losses in rows 3 and 4 can be explained as follows: as the dataset is highly imbalanced, some larger classes are possibly dominated more by the pointwise loss than by the discriminative loss. CE + PT + cPT vs. CE + DSTC + cDSTC (rows 6 and 7): The pointwise and the discriminative losses with their cycle terms show a similar trend as the corresponding loss combinations without the cycle losses (rows 3 and 4); the discriminative loss performs better in the individual case whereas the pointwise loss performs better in the class average case. CE + PT + DSTC + cPT vs. CE + PT + DSTC + cDSTC (rows 8, 9 and 10): We now show the impact of the two cycle loss terms on the overall performance. We observe that the addition of cPT decreases the performance or is at par with the baseline of the previous three losses (row 8 vs. 5) for all the distance metrics ($51.78$ vs. $51.85$, $47.78$ vs. $47.09$, $52.15$ vs. $52.31$, $54.59$ vs. $54.71$, $42.95$ vs. $43.02$). The decrease in performance can be explained by the fact that the pointwise loss becomes too strict in matching the data point by point, i.e. it tries to match a particular ’dog’ barking sound back to exactly the same sound when double translated to video and back, as explained in the related work section. We finally observe that adding all the losses improves the performance only marginally in all the cases and in the class average metric ($43.68$ vs. $43.61$). This marginal improvement shows that the cyclic pointwise loss does not have much impact on the performance of the system. Since the method with Euclidean distance for training and Cosine distance for evaluation outperforms all other methods, we report all the following results with this setting.

### 4.3 Euclidean vs. Cosine Loss

Figure 3: (left) Training loss (right) Retrieval mAP. Similar to [11, 33], we observe that Cosine distance performs better even if the training was done using Euclidean distance.

We observe from Fig. 3 that the training loss using Cosine distance is lower than that using Euclidean distance. However, the validation mAP using Cosine distance is better than using Euclidean distance with the same model, irrespective of the training distance used. This indicates that the Cosine loss is inherently better, but that training with the Cosine loss overfits easily and degrades the performance. In practice, using a model trained with Euclidean distance together with Cosine distance during testing achieves a favorable balance.

### 4.4 Comparison with State-of-the-art Methods

We now compare the proposed method with the existing state-of-the-art methods. Audio-Video Dataset:

Method | Aud2Vid | Vid2Aud | Both
---|---|---|---
pre-trained [9] | 3.61 | 4.22 | 3.91
GCCA [9, 50] | 22.12 | 26.68 | 24.4
CCA [51] | 33.55 | 32.60 | 33.07
CJME [9] | 26.87 | 29.83 | 27.95
AVGZSLNet [22] | 26.63 | 29.56 | 28.10
DSCMR+ [8] | 54.95 | 52.41 | 53.68
DSCMR+ (w/ class avg.) [8] | 40.21 | 40.10 | 40.15
Ours | 57.81 | 55.09 | 56.45
Ours (w/ class avg.) | 41.21 | 40.26 | 40.73

Figure 4: Retrieval performance (mAP) comparison on AudiosetZSL with existing methods

Fig. 4 shows the results for the AudioSetZSL dataset. We compare our approach to two baseline methods, Canonical Correlation Analysis (CCA) and Generalized Canonical Correlation Analysis (GCCA). CCA learns a projection which maximizes the correlation of the two modalities in the common space. GCCA is the multi-set extension of CCA where the correlation is maximized between all the sets. We report the numbers for these baselines from [9]. We also report the performance of two recently proposed zero-shot learning approaches [9, 22] that use variants of the triplet loss to align different modalities. All the results are reported on a comparable experimental setup (i.e. on ‘seen’ classes in the dataset). We also show the result using one of the best performing text-to-image cross-modal retrieval methods, DSCMR [8]. As the original DSCMR has a different network structure, we modify it to match ours, which is tailored to the dataset (details in supplementary), to have a fair evaluation. We observe that the proposed method with cosine distance for evaluation outperforms the pre-trained baseline, CCA, GCCA and the other methods by a convincing margin ($56.45$ vs $28.10$, $27.95$, $24.4$, $3.91$). We also observe that the proposed method outperforms DSCMR ($56.45$ vs $53.68$), which is the state of the art in text-image cross-modal retrieval. Finally, we also report the class average performance, where the proposed method marginally outperforms DSCMR ($40.73$ vs. $40.15$). Image-text Datasets: We report the comparison results for both the Pascal Sentence and Wikipedia datasets with prior approaches in Tab. 2 and Tab. 3 respectively. We report the mAP scores for prior methods as provided by the authors in [8]. Since there is no fixed split provided on the datasets, we perform the experiment with $10$ random train/test splits, and report the mean and standard deviation. We did the same for the state-of-the-art DSCMR [8] method with the same random splits as well.

Method | Img2Txt | Txt2Img | Both
---|---|---|---
CCA [51] | 22.5 | 22.7 | 22.6
JRL [52] | 52.7 | 53.4 | 53.1
CMDN [53] | 54.4 | 52.6 | 53.5
CCL [54] | 57.6 | 56.1 | 56.9
MvDA-VC [55] | 64.8 | 67.3 | 66.1
ACMR [27] | 67.1 | 67.6 | 67.3
DCCAE [56] | 68.0 | 67.1 | 67.5
DCCA [57] | 67.8 | 67.7 | 67.8
DSCMR [8] | 71.0 | 72.2 | 71.6
DSCMR + | 69.77$\pm$0.43 | 70.63$\pm$0.64 | 70.22$\pm$0.41
Ours + | 70.54$\pm$0.26 | 69.21$\pm$0.28 | 69.88$\pm$0.21
DSCMR | 60.82$\pm$3.19 | 60.25$\pm$3.50 | 60.54$\pm$3.09
Ours | 60.12$\pm$2.90 | 60.62$\pm$2.99 | 60.87$\pm$2.90

Table 2: Comparison of retrieval performance (mAP) for the Pascal Sentence dataset with existing methods. + denotes the method using features provided by the authors of [8].

Method | Img2Txt | Txt2Img | Both
---|---|---|---
CCA [51] | 13.4 | 13.3 | 13.4
MCCA [58] | 34.1 | 30.7 | 32.4
MvDA [55] | 33.7 | 30.8 | 32.3
MvDA-VC [55] | 38.8 | 35.8 | 37.3
JRL [52] | 44.9 | 41.8 | 43.4
CMDN [53] | 48.7 | 42.7 | 45.7
DCCA [57] | 44.4 | 39.6 | 42.0
DCCAE [56] | 43.5 | 38.5 | 41.0
ACMR [27] | 47.7 | 43.4 | 45.6
CCL [54] | 50.4 | 45.7 | 48.1
DSCMR [8] | 52.1 | 47.8 | 49.9
DSCMR | 44.68$\pm$1.57 | 45.30$\pm$1.38 | 45.00$\pm$1.42
Ours | 47.74$\pm$0.94 | 44.41$\pm$1.05 | 46.08$\pm$0.95

Table 3: Comparison of retrieval performance (mAP) for the Wikipedia dataset with existing methods.

Figure 5: Top-5 retrieval results for the AudioSetZSL dataset.
The first two rows are the results for audio-to-video retrieval and the next two are the results for video-to-audio retrieval; the modality of each example is indicated by the icon in the top left of the image. The correct retrieval examples are marked by a green border whereas the wrong ones are marked with red. We note that the proposed method is able to perform retrieval with a large amount of diversity in the data. See supplementary material for more results.

The authors of DSCMR [8] have also provided the features for a fixed train and test split of the Pascal Sentence dataset. We also report the mean and standard deviation of the mAP score for these features (marked with + in Tab. 2) for both methods (ours and DSCMR). While we do not finetune our feature extraction networks on the target datasets, DSCMR [8] seems to do that before training the main method. Hence we also compare with the finetuned features provided directly by them. We observe here that our approach performs marginally better than the best performing previous approach using extracted features both on the Pascal Sentence dataset ($60.87$ vs. $60.54$) and the Wikipedia dataset ($46.08$ vs. $45.00$). The numbers published by other methods on the Wikipedia dataset are higher, e.g. DSCMR reports $49.9$ (while it obtains $45.00$ in our implementation). We believe that this is due to stronger features used by the previous approaches, which are unfortunately not publicly available for us to compare on.

### 4.5 Qualitative Results

In Fig. 5, we show some qualitative results for the AudioSetZSL dataset. We use a representative frame from the video to show the results for both audio and video. We observe that our model makes understandable mistakes in a few cases, e.g. in the second audio-to-video retrieval example, for the dog audio query a cat video is retrieved which looks similar in shape to a dog. In the video-to-audio retrieval, we find an interesting incorrect retrieval: the second query example of a train video retrieves an audio example from the class car which is actually a train audio and is incorrectly labeled in the dataset. The same train video query also has an incorrect retrieval of truck, which is wrong but very similar in the audio modality. We encourage the readers to look at the result videos available at https://krantiparida.github.io/projects/dstc.html for a better understanding of the qualitative results.

## 5 Conclusion

We proposed a novel framework for the task of cross-modal retrieval by aligning data from two different modalities. We proposed a Discriminative Semantic Transitive Consistency (DSTC) loss which ensures that the class label of the data remains the same even after transferring it to the other modality, and after a second successive translation bringing it back to the original modality. The method projects the modalities onto a representation space with individual modality classifiers, and has modality translator networks to enable cross-modal retrieval. We provided extensive ablation experiments to understand the contributions of the different components. We also compared quantitatively on three challenging public benchmarks with existing methods, and showed qualitatively that the method is capable of achieving diverse retrievals. We will release code and trained models upon acceptance.

## References

* [1] Bruce Thompson. Canonical correlation analysis. Encyclopedia of statistics in behavioral science, 2005.
* [2] Yashaswi Verma and CV Jawahar.
A support vector approach for cross-modal search of images and texts. Computer Vision and Image Understanding, 154:48–63, 2017. * [3] Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y Ng. Multimodal deep learning. 2011\. * [4] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6299–6308, 2017. * [5] Yoon Kim. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882, 2014. * [6] Yusuf Aytar, Carl Vondrick, and Antonio Torralba. Soundnet: Learning sound representations from unlabeled video. In Advances in neural information processing systems, pages 892–900, 2016. * [7] Yusuf Aytar, Lluis Castrejon, Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Cross-modal scene networks. IEEE transactions on pattern analysis and machine intelligence, 40(10):2303–2314, 2017. * [8] Liangli Zhen, Peng Hu, Xu Wang, and Dezhong Peng. Deep supervised cross-modal retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10394–10403, 2019. * [9] Kranti Parida, Neeraj Matiyali, Tanaya Guha, and Gaurav Sharma. Coordinated joint multimodal embeddings for generalized audio-visual zero-shot classification and retrieval of videos. In The IEEE Winter Conference on Applications of Computer Vision, pages 3251–3260, 2020. * [10] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision, pages 2223–2232, 2017. * [11] Anjan Dutta and Zeynep Akata. Semantically tied paired cycle consistency for zero-shot sketch-based image retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5089–5098, 2019. * [12] Andrew Owens, Jiajun Wu, Josh H McDermott, William T Freeman, and Antonio Torralba. Ambient sound provides supervision for visual learning. In European conference on computer vision, pages 801–816. Springer, 2016. * [13] Samuel Albanie, Arsha Nagrani, Andrea Vedaldi, and Andrew Zisserman. Emotion recognition in speech using cross-modal transfer in the wild. In Proceedings of the 26th ACM international conference on Multimedia, pages 292–301, 2018. * [14] Arsha Nagrani, Chen Sun, David Ross, Rahul Sukthankar, Cordelia Schmid, and Andrew Zisserman. Speech2action: Cross-modal supervision for action recognition. arXiv preprint arXiv:2003.13594, 2020. * [15] Chuang Gan, Hang Zhao, Peihao Chen, David Cox, and Antonio Torralba. Self-supervised moving vehicle tracking with stereo sound. In Proceedings of the IEEE International Conference on Computer Vision, pages 7053–7062, 2019. * [16] Andrew Owens and Alexei A Efros. Audio-visual scene analysis with self-supervised multisensory features. In Proceedings of the European Conference on Computer Vision (ECCV), pages 631–648, 2018. * [17] Relja Arandjelovic and Andrew Zisserman. Look, listen and learn. In Proceedings of the IEEE International Conference on Computer Vision, pages 609–617, 2017. * [18] Jonathan Munro and Dima Damen. Multi-modal domain adaptation for fine-grained action recognition. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pages 3723–3726. IEEE, 2019. * [19] Ruohan Gao, Rogerio Feris, and Kristen Grauman. Learning to separate object sounds by watching unlabeled video. 
In Proceedings of the European Conference on Computer Vision (ECCV), pages 35–53, 2018. * [20] Hang Zhao, Chuang Gan, Andrew Rouditchenko, Carl Vondrick, Josh McDermott, and Antonio Torralba. The sound of pixels. In Proceedings of the European Conference on Computer Vision (ECCV), pages 570–586, 2018. * [21] Ruohan Gao, Changan Chen, Ziad Al-Halah, Carl Schissler, and Kristen Grauman. Visualechoes: Spatial image representation learning through echolocation. arXiv preprint arXiv:2005.01616, 2020. * [22] Pratik Mazumder, Pravendra Singh, Kranti Kumar Parida, and Vinay P Namboodiri. Avgzslnet: Audio-visual generalized zero-shot learning by reconstructing label features from multi-modal embeddings. arXiv preprint arXiv:2005.13402, 2020. * [23] Suwon Shon, Tae-Hyun Oh, and James Glass. Noise-tolerant audio-visual online person verification using an attention-based neural network fusion. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3995–3999. IEEE, 2019. * [24] Wangli Hao, Zhaoxiang Zhang, and He Guan. Cmcgan: A uniform framework for cross-modal visual-audio mutual generation. In AAAI 2018, 2018. * [25] Qing-Yuan Jiang and Wu-Jun Li. Deep cross-modal hashing. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3232–3240, 2017. * [26] Feng Zheng, Yi Tang, and Ling Shao. Hetero-manifold regularisation for cross-modal hashing. IEEE transactions on pattern analysis and machine intelligence, 40(5):1059–1071, 2016. * [27] Bokun Wang, Yang Yang, Xing Xu, Alan Hanjalic, and Heng Tao Shen. Adversarial cross-modal retrieval. In Proceedings of the 25th ACM international conference on Multimedia, pages 154–162, 2017. * [28] Yandong Wen, Mahmoud Al Ismail, Weiyang Liu, Bhiksha Raj, and Rita Singh. Disjoint mapping network for cross-modal matching of voices and faces. arXiv preprint arXiv:1807.04836, 2018. * [29] Yue Cao, Mingsheng Long, Jianmin Wang, and Shichen Liu. Collective deep quantization for efficient cross-modal retrieval. In AAAI 2017, volume 1, page 5, 2017. * [30] Minnan Luo, Xiaojun Chang, Zhihui Li, Liqiang Nie, Alexander G Hauptmann, and Qinghua Zheng. Simple to complex cross-modal learning to rank. Computer Vision and Image Understanding, 163:67–77, 2017. * [31] Jose Costa Pereira and Nuno Vasconcelos. Cross-modal domain adaptation for text-based regularization of image semantics in image retrieval systems. Computer Vision and Image Understanding, 124:123–135, 2014. * [32] Vinay Kumar Verma, Aakansha Mishra, Ashish Mishra, and Piyush Rai. Generative model for zero-shot sketch-based image retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 0–0, 2019. * [33] Sasi Kiran Yelamarthi, Shiva Krishna Reddy, Ashish Mishra, and Anurag Mittal. A zero-shot framework for sketch based image retrieval. In European Conference on Computer Vision, pages 316–333. Springer, 2018. * [34] Arsha Nagrani, Samuel Albanie, and Andrew Zisserman. Learnable pins: Cross-modal embeddings for person identity. In Proceedings of the European Conference on Computer Vision (ECCV), pages 71–88, 2018. * [35] Chao Li, Cheng Deng, Lei Wang, De Xie, and Xianglong Liu. Coupled cyclegan: Unsupervised hashing network for cross-modal retrieval. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 176–183, 2019. * [36] Lin Wu, Yang Wang, and Ling Shao. Cycle-consistent deep generative hashing for cross-modal retrieval. 
IEEE Transactions on Image Processing, 28(4):1602–1612, 2018. * [37] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1125–1134, 2017. * [38] Richard Zhang, Jun-Yan Zhu, Phillip Isola, Xinyang Geng, Angela S Lin, Tianhe Yu, and Alexei A Efros. Real-time user-guided image colorization with learned deep priors. arXiv preprint arXiv:1705.02999, 2017. * [39] Hsin-Ying Lee, Hung-Yu Tseng, Jia-Bin Huang, Maneesh Singh, and Ming-Hsuan Yang. Diverse image-to-image translation via disentangled representations. In Proceedings of the European conference on computer vision (ECCV), pages 35–51, 2018. * [40] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Guilin Liu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. Video-to-video synthesis. 2018\. * [41] Caroline Chan, Shiry Ginosar, Tinghui Zhou, and Alexei A Efros. Everybody dance now. In Proceedings of the IEEE International Conference on Computer Vision, pages 5933–5942, 2019. * [42] Yipin Zhou, Zhaowen Wang, Chen Fang, Trung Bui, and Tamara Berg. Dance dance generation: Motion transfer for internet videos. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 0–0, 2019. * [43] Tinghui Zhou, Philipp Krahenbuhl, Mathieu Aubry, Qixing Huang, and Alexei A Efros. Learning dense correspondence via 3d-guided cycle consistency. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 117–126, 2016. * [44] Nilesh Kulkarni, Abhinav Gupta, and Shubham Tulsiani. Canonical surface mapping via geometric cycle consistency. In Proceedings of the IEEE International Conference on Computer Vision, pages 2202–2211, 2019. * [45] Marcella Cornia, Lorenzo Baraldi, Hamed R Tavakoli, and Rita Cucchiara. Towards cycle-consistent models for text and image retrieval. In Proceedings of the European Conference on Computer Vision (ECCV), pages 0–0, 2018. * [46] Rafael Felix, Vijay BG Kumar, Ian Reid, and Gustavo Carneiro. Multi-modal cycle-consistent generalized zero-shot learning. In Proceedings of the European Conference on Computer Vision (ECCV), pages 21–37, 2018. * [47] Jort F Gemmeke, Daniel PW Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R Channing Moore, Manoj Plakal, and Marvin Ritter. Audio set: An ontology and human-labeled dataset for audio events. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 776–780. IEEE, 2017. * [48] Jose Costa Pereira, Emanuele Coviello, Gabriel Doyle, Nikhil Rasiwasia, Gert RG Lanckriet, Roger Levy, and Nuno Vasconcelos. On the role of correlation and abstraction in cross-modal multimedia retrieval. IEEE transactions on pattern analysis and machine intelligence, 36(3):521–535, 2013. * [49] Cyrus Rashtchian, Peter Young, Micah Hodosh, and Julia Hockenmaier. Collecting image annotations using amazon’s mechanical turk. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk, pages 139–147. Association for Computational Linguistics, 2010. * [50] Jon R Kettenring. Canonical analysis of several sets of variables. Biometrika, 58(3):433–451, 1971. * [51] Harold Hotelling. Relations between two sets of variates. In Breakthroughs in statistics, pages 162–190. Springer, 1992. * [52] Xiaohua Zhai, Yuxin Peng, and Jianguo Xiao. Learning cross-media joint representation with sparse and semisupervised regularization. 
IEEE Transactions on Circuits and Systems for Video Technology, 24(6):965–978, 2013. * [53] Yuxin Peng, Xin Huang, and Jinwei Qi. Cross-media shared representation by hierarchical learning with multiple deep networks. In IJCAI, pages 3846–3853, 2016. * [54] Yuxin Peng, Jinwei Qi, Xin Huang, and Yuxin Yuan. Ccl: Cross-modal correlation learning with multigrained fusion by hierarchical network. IEEE Transactions on Multimedia, 20(2):405–420, 2017. * [55] Meina Kan, Shiguang Shan, Haihong Zhang, Shihong Lao, and Xilin Chen. Multi-view discriminant analysis. IEEE transactions on pattern analysis and machine intelligence, 38(1):188–194, 2016. * [56] Weiran Wang, Raman Arora, Karen Livescu, and Jeff Bilmes. On deep multi-view representation learning. In International Conference on Machine Learning, pages 1083–1092, 2015. * [57] Galen Andrew, Raman Arora, Jeff Bilmes, and Karen Livescu. Deep canonical correlation analysis. In International conference on machine learning, pages 1247–1255, 2013. * [58] Jan Rupnik and John Shawe-Taylor. Multi-view canonical correlation analysis. In Conference on Data Mining and Data Warehouses (SiKDD 2010), pages 1–4, 2010. * [59] Anurag Kumar, Maksim Khadkevich, and Christian Fügen. Knowledge transfer from weakly labeled audio using convolutional neural network for sound events and scenes. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 326–330. IEEE, 2018. * [60] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. * [61] Fangxiang Feng, Xiaojie Wang, and Ruifan Li. Cross-modal retrieval with correspondence autoencoder. In Proceedings of the 22nd ACM international conference on Multimedia, pages 7–16, 2014. ## Appendix A Datasets and Features In our experiments we used $2$ different kinds of cross-modal datasets. The first kind contains audio and video modality where as the second kind contains image and text modality. Audio-video Dataset: We use one of the recently proposed dataset, namely AudiosetZSL [9] involving both the audio and video modality for the task of multi-modal zero-shot learning. The features provided by the authors [9], were extracted using a neural network with the I3D architecture for the videos pre- trained on the kinetics dataset [4], and a recently proposed audio classification network [59] for audio. The audio network is not pre-trained with an auxiliary large dataset and is directly trained on the train set of AudiosetZSL. We use the $23$ ‘seen‘ classes out of total $33$ classes available in the dataset, which was split into ‘seen‘ and ‘unseen‘ classes for the zero-shot task. We use the same train, validation and test split, within the ‘seen‘ classes’ images as proposed in [9] and also perform weighted random sampling for training as the dataset is highly imbalanced and follows a long- tailed distribution. The portion of the dataset used contains $79795$, $26587$, $26593$ audio-video pairs in the train, val and test split respectively. Image-Text Datasets: We use two popular datasets, Wikipedia [48] and Pascal Sentence [49] involving image and text modalities. We obtain the image features from the fc7 layer of VGG19 [60] and the text features from the Sentence CNN [5] following [8]. We average the sentence features over all the sentences as there were multiple sentence for a single input image example. The extracted features for image and text are of $4096$ and $300$ dimensions respectively. 
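As an illustration of this feature preparation, the sketch below extracts fc7 activations from a torchvision VGG19 and averages precomputed per-sentence text features. It is only a plausible reconstruction under stated assumptions: the Sentence CNN is treated as a given black box, and the helper names are ours, not the pipeline actually used to produce the released features.

```python
import torch
import torchvision

# fc7 (4096-d) image features: VGG19 up to the second fully-connected layer + ReLU.
vgg = torchvision.models.vgg19(weights="IMAGENET1K_V1").eval()
fc7_extractor = torch.nn.Sequential(
    vgg.features, vgg.avgpool, torch.nn.Flatten(),
    *list(vgg.classifier.children())[:5],   # Linear-ReLU-Dropout-Linear-ReLU -> fc7
)

@torch.no_grad()
def image_feature(img_batch: torch.Tensor) -> torch.Tensor:
    """img_batch: (B, 3, 224, 224), ImageNet-normalized. Returns (B, 4096) fc7 features."""
    return fc7_extractor(img_batch)

def text_feature(sentence_feats: torch.Tensor) -> torch.Tensor:
    """sentence_feats: (num_sentences, 300) Sentence-CNN outputs for one image.
    The per-sentence vectors are averaged into a single 300-d text feature."""
    return sentence_feats.mean(dim=0)
```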
The Pascal Sentence dataset contains $1000$ image-text pairs from $20$ different classes with $50$ examples per class. All the prior works using the dataset have randomly split the data into $800$, $100$ and $100$ pairs (with an equal number of data points from each class) for the train, val and test sets respectively, following [61]. As there is no unique split for the three sets, the numbers reported by different methods can vary depending upon the random split of the dataset. In order to have a fair comparison we perform the random split $k(=10)$ times and report the mean and standard deviations over all the runs for the test set. Apart from this, we also used the features for a fixed train and test split provided by the authors of [8] (see Sec. 4.3 in the main paper). The Wikipedia dataset has a total of $2866$ images from $10$ different classes, of which $2173$ and $693$ image-text pairs belong to the train and test sets respectively. In this case also there is no validation data; all the prior works follow [61] to split the original test data further randomly into test and validation sets consisting of $462$ and $231$ data points respectively. Similar to the previous dataset, as there is no fixed val set, we randomly split the original test set $k(=10)$ times into test and validation sets and report the mean and standard deviations over all the runs for the obtained test set.

## Appendix B Implementation Details

Audio-Video Network. We use a two-layered network each for both the encoder (audio, video) networks ($\mathbf{E_{x}}$, $\mathbf{E_{y}}$), with input and output nodes of size $1024$ and $256$ respectively. The hidden unit sizes are fixed to be $256$ and $512$ for the audio and video network respectively. We use a single-layer neural network as the classifier for each modality ($\mathbf{C_{x}}$, $\mathbf{C_{y}}$), with input of size $256$ and output of size $23$. We use a symmetric hour-glass type network for the transfer modules ($\mathbf{T_{xy}}$, $\mathbf{T_{yx}}$). We use the same network structure for the transfer module of both modalities, i.e. an MLP with the same input and output dimension of $256$ and $3$ hidden layers of sizes $128$, $64$ and $128$ respectively. We use Batchnorm and a ReLU non-linearity after each hidden layer for all the modules of the network. Image-Text Network. Similar to the audio-video network, the encoders and classifiers are both single-hidden-layer MLPs, and the translators are hour-glass type networks with a single hidden layer as well. We fix the encoder and classifier architectures for both modalities following [8]. The encoder architectures ($\mathbf{E_{x}}$, $\mathbf{E_{y}}$) are single-layered neural networks with $2048$ hidden units and $1024$ output units. The classifiers ($\mathbf{C_{x}}$, $\mathbf{C_{y}}$) are single-layered neural networks with $1024$ input units and $10$ output units. We use an hour-glass network for both the transfer modules ($\mathbf{T_{xy}}$, $\mathbf{T_{yx}}$) with input and output units of size $1024$ and a single hidden layer of size $512$, and use BatchNorm and ReLU after each hidden layer.

## Appendix C Qualitative Results

Figure 6: Top-5 retrieval results for the AudioSetZSL dataset. The first three rows are the results for audio-to-video retrieval and the next three are the results for video-to-audio retrieval; the modality of each example is indicated by the icon in the top left of the image. The correct retrieval examples are marked by a green border whereas the wrong ones with red.
We note that the proposed method is able to perform retrieval with large amount of diversity in the data. We provide here some additional qualitative results in Fig. 6. In addition, we request the readers to look at the videos available at https://krantiparida.github.io/projects/dstc.html for a better understanding of the retrieval results. The discussion below refers to the videos available in the above link. We provide four example video results each for audio-to-video and video-to- audio retrieval in separate files. As our method performs cross-modal retrieval, we have switched off the other modality which was not used respectively in the query and the retrieved examples, i.e. audio is muted for all the retrieved examples in case of audio-to-video retrieval and for all the query examples in case of video-to-audio retrieval. Similarly, video modality is turned off for the other case, where for illustration we show a random frame from the video as a representative image for the entire duration (the selected frame appearance is not used for retrieval and is shown in the result just for illustration). We mention some interesting observations from retrieval results below. Audio-to-Video Retrieval: * $-$ In audio to video (example-2), the wrong retrieval results are from the class truck but looking at the video result, it can be seen that in one case there is actually a car along with the truck in the scene and in the other case the retrieval result is a ‘pickup truck‘ which is annotated as truck class in the dataset. * $-$ In audio to video (example-3), the final retrieval result is from the class car instead of camera. But looking at the retrieval result we find out that the video contains only a car logo and it appears to be quite similar to that of a camera lens. * $-$ Similarly in audio to video (example-4), the wrong retrieval example is from the class sawing where as the query is from hammer class. Again looking at the results we find out that the act of performing sawing in the video is quite similar to that of hammering. Video-to-Audio Retrieval: * $-$ In video to audio (example-2), the incorrect retrieval is from the class truck whereas the query is from Ambulance. But listening to the retrieval result we find out that it contains the ambulance siren as well—the retrieval result is semantically correct and the annotation is incorrect for that example. * $-$ Similarly for video to audio (example-3), the query video is from the class sewing and the incorrect retrieval is from car. Again listening to the audio we find that it contains the sound of car engine, which in this particular case sounds similar to a sewing machine. * $-$ In video to audio (example-4), all the retrieval results are correct and listening to them we find that they contain variety of train sounds, i.e. sound of horn blowing, sound of speeding train etc. This demonstrates that the method is able to generalize well enough to capture the variation in data within a class. With these qualitative results we conclude that the method makes semantically sensible mistakes which are either due to the examples being very similar in the modality in question, or in rare case they have mistakes in annotations.
# PISTIS: An Event-Triggered Real-Time Byzantine-Resilient Protocol Suite (Extended Version) David Kozhaya1, Jérémie Decouchant2,∗, Vincent Rahli3,†, and Paulo Esteves- Verissimo4,∗ 1ABB Research Switzerland; 2TU Delft; 3University of Birmingham; 4KAUST - RC3 ∗Work partly performed while these authors were with the University of Luxembourg.†Rahli was partially supported by the National Cyber Security Centre (NCSC) project: Aion: Verification of Critical Components’ Timely Behavior in Probabilistic Environments. ###### Abstract The accelerated digitalisation of society along with technological evolution have extended the geographical span of cyber-physical systems. Two main threats have made the reliable and real-time control of these systems challenging: (i) uncertainty in the communication infrastructure induced by scale, and heterogeneity of the environment and devices; and (ii) targeted attacks maliciously worsening the impact of the above-mentioned communication uncertainties, disrupting the correctness of real-time applications. This paper addresses those challenges by showing how to build distributed protocols that provide both real-time with practical performance, and scalability in the presence of network faults and attacks, in probabilistic synchronous environments. We provide a suite of real-time Byzantine protocols, which we prove correct, starting from a reliable broadcast protocol, called _PISTIS_ , up to atomic broadcast and consensus. This suite simplifies the construction of powerful distributed and decentralized monitoring and control applications, including state-machine replication. Extensive empirical simulations showcase PISTIS’s robustness, latency, and scalability. For example, PISTIS can withstand message loss (and delay) rates up to 50$\%$ in systems with 49 nodes and provides bounded delivery latencies in the order of a few milliseconds. ###### Index Terms: real-time distributed systems, probabilistic losses, consensus, atomic broadcast, Byzantine resilience, intrusion tolerance. ## I Introduction The accelerated digitalisation of society has significantly shifted the way that physical infrastructures—including large continuous process plants, manufacturing shop-floors, power grid installations, and even ecosystems of connected cars—are operated nowadays. Technological evolution has made it possible to orchestrate a higher and finer degree of automation, through the proliferation of multiple sensing, computing, and communication devices that monitor and control such infrastructures. These monitoring and control devices are distributed by nature of the geographical separation of the physical processes they are concerned with. The overall systems, i.e., the physical infrastructures with their monitoring and control apparatus, are generally known as cyber-physical systems (CPS) [1]. However, transposing the monitoring and control functionality normally available in classical, real-time (i.e., adhering to given time bounds) and embedded systems, to the distributed CPS scenarios mentioned above, is a very challenging task, due to two main reasons. First, the scale of the systems as well as the heterogeneity of devices (sensors, actuators and gateways), induce uncertainty in the communication infrastructure interconnecting them, itself often diverse too, e.g., Bluetooth, Wireless IEEE 802.11, or Fiber [2, 3, 4, 5]. 
These communication uncertainties become evident [3, 4, 5], namely in the form of link faults and message delays, which hamper the reliability and synchronism needed to realize real-time operations, be it when fetching monitoring data or when pushing decisions to controllers. Second, security vulnerabilities of many integrated devices, as well as the criticality of the managed physical structures, increase the likelihood of targeted attacks [6, 7]. Such attacks can aim to inflict inconsistencies across system components or to disrupt the timeliness and correctness of real-time applications. The consequences of such attacks can range from loss of availability to severe physical damage [8].

This paper addresses the challenges above, which render traditional approaches for building real-time communications ineffective in wide-scale, uncertain, and vulnerable settings. We investigate, in particular, how to build large-scale distributed protocols that can provide real-time communication guarantees and can tolerate network faults and attacks, in probabilistic synchronous environments. These protocols simplify the construction of powerful distributed monitoring and control applications, including state-machine replication for fault tolerance. To our knowledge, the literature, with the exception of [9, 10], has targeted achieving either real-time guarantees or Byzantine resilience under network uncertainties, but not both. To bridge this gap, we present a protocol suite of real-time Byzantine protocols, providing several message delivery semantics, from reliable broadcast (_PISTIS_, named after the Greek goddess who represented the personified spirit (daimona) of trust, honesty and good faith), through consensus (_PISTIS-CS_), to atomic broadcast (_PISTIS-AT_). PISTIS is capable of: (i) delivering real-time practical performance (i.e., correct nodes provide guarantees within given time bounds) in the presence of aggressive faults and attacks (i.e., one third of the nodes being Byzantine, and high message loss rates); and (ii) scaling with increasing system size.

The main idea underlying PISTIS is an event-triggered, signature-based approach to constantly monitor the network connectivity among processes. Connectivity is measured thanks to the broadcast messages: processes embed signed monitoring information within the messages of the broadcast protocol and exclude themselves from the protocol when they are a threat to timeliness. Hence, PISTIS does not modularly build on membership/failure detector oracles (like in traditional distributed computing) but rather directly incorporates such functionalities within. In fact, modularity in this sense was proven to be impossible for algorithms implementing PISTIS-like guarantees [10]. In order to mask network uncertainties in a scalable manner, PISTIS uses a temporal and spatial gossip-style message diffusion with fast signature verification schemes. We empirically show that PISTIS is robust. For example, PISTIS can tolerate message loss rates of up to 40$\%$, 50$\%$, 60$\%$, and 70$\%$ in systems with 25, 49, 73, and 300 nodes respectively: PISTIS has a negligible probability of being unavailable under such losses.
We also show that PISTIS can meet the strict timing constraints of a large class of typical CPS applications, mainly in Supervisory Control And Data Acquisition (SCADA) and Internet of Things (IoT) areas, e.g., (1) fast automatic interactions ($\leq{20}\mbox{ms}$) for systems with up to 200 nodes, (2) power systems and substation automation applications ($\leq{100}\mbox{ms}$) for systems with up to 1000 nodes, and (3) slow speed auto-control functions ($\leq{500}\mbox{ms}$), continuous control applications ($\leq{1}\mbox{s}$) as well as operator commands of SCADA applications ($\leq{2}\mbox{s}$) for systems with 1000 nodes or more. Such SCADA and IoT applications could include up to hundreds of devices where reliable and timely communication is required. By using PISTIS as the baseline real-time Byzantine reliable broadcast protocol, we prove that (and show how) higher-level real-time Byzantine resilient abstractions can be modularly implemented, namely, consensus and atomic broadcast. Interestingly, we prove that this can be realized with negligible effort: (1) we exhibit classes of algorithms which are amenable to real-time operations by re-using existing synchronous algorithms from the literature; and (2) we rely on PISTIS, which addresses and tolerates the most relevant problems posed by the communication environment, including the impossibility of modularly handling membership/failure detection [10]. In short, our contributions are: * • The PISTIS protocol suite, which is to the best of our knowledge the first generic and modular protocol suite that provides message delivery guarantees for protocols ranging from Byzantine reliable broadcast to Byzantine atomic broadcast. PISTIS itself is an event-triggered real-time Byzantine reliable broadcast algorithm that has higher scalability and faster message delivery than conventional time-triggered real-time algorithms, in the presence of randomized and unbounded network disruptions. Building on top of PISTIS, we present classes of algorithms, PISTIS-CS and PISTIS-AT, that implement real- time Byzantine consensus and atomic broadcast, respectively. * • Correctness proofs of the PISTIS protocol suite. We provide the main proof results in this paper (exhaustive proofs are deferred to Appx. B). * • Extensive empirical simulations using Omnet++ [11] that showcase PISTIS’s robustness, latency, and scalability. Roadmap. The rest of the paper is organized as follows. Sec. II discusses related work. Sec. III details our system model. Sec. IV recalls the properties of a real-time Byzantine reliable broadcast, and presents our algorithm, PISTIS, in details. Sec. V shows and proves how real-time Byzantine atomic broadcast and consensus can be realized on top of PISTIS’s guarantees using classes of existing algorithms. Sec. VI evaluates the performance and reliability of PISTIS. Finally, Sec. VII concludes the paper. For space limitations, proofs and additional material are deferred to Appendices. ## II Related Work Reliable broadcast is a standard abstraction to ensure that the (correct) nodes of a distributed system agree on the delivery of messages even in the presence of faulty nodes. Byzantine reliable broadcast in particular guarantees that (correct) nodes agree even in the presence of arbitrary faults. It is a key building block of reliable distributed systems such as Byzantine Fault-Tolerant State Machine Replication protocols, which are nowadays primarily used in blockchain systems. 
Pioneered by the work of Dolev [12] and Bracha [13], many protocols have since been proposed that are intended to work in various environments. The focus of our paper is on novel Byzantine broadcast primitives and protocols that achieve timeliness guarantees. This paper has evolved from, and improved over, a research line paved by [9, 14, 10] on the timing aspects of reliable broadcast and Byzantine algorithms. Besides these works, the literature on broadcast primitives, to the best of our knowledge, either does not take into account timeliness and maliciousness or addresses them separately. Cristian et al. [9] assumed that all correct processes remain synchronously connected, regardless of process and network failures. This strong network assumption is too optimistic, both in terms of scale and timing behaviour, which in practice leads to poor performance (latency of approximately $2.4$ seconds with 25 processes—see Table I in Sec. VI-E for more details). Moreover, Cristian et al.’s system model does not allow processes that malfunction (e.g., by violating timing assumptions) to know that they are treated as faulty by the model. Our algorithm, in comparison, provides latencies in the range of a few milliseconds, and our model makes processes aware of their untimeliness. Verissimo et al. [14] addressed the timeliness problem by _weak-fail-silence_: despite the capability of the transmission medium to deliver messages reliably and in real-time, the protocol should not be agnostic of potential timing or omission faults (even if sporadic). The bounded omissions assumption (pre-defined maximum number of omissions) of [14] could not be taken as is if we were to tolerate higher and more uncertain faults (as we consider in this paper): it could easily lead to system unavailability in faulty periods. Hence we operate with much higher uncertainty levels (faults and attacks). Kozhaya et al. [10] devised a Byzantine-resilient algorithm that provides an upper bound on the delivery latency of messages. This algorithm is time-triggered and relies on an all-to-all communication that limits the algorithm’s scalability. Our work improves over [10] on several points: (i) we reduce the delivery latency (a few milliseconds as shown in Fig. 7 and Fig. 8, compared to a few hundred as shown in [10, Fig. 8]—see also Table I for a comparison of worst case latencies) by adopting an event-triggered approach instead of a round-based one; (ii) we improve the system’s scalability (at least 5 times less bandwidth consumption) by adopting a gossip-based dissemination instead of an all-to-all communication; and (iii) we show how real-time broadcast primitives can be modularly used to build real-time Byzantine-resilient high-level abstractions like consensus and atomic broadcast. Guerraoui et al. [15] designed a scalable reliable broadcast abstraction that can also be used in a probabilistic setting where each of its properties can be violated with low probability. They achieve a scalable solution by relying on stochastic samples instead of quorums, where samples can be much smaller than quorums. As opposed to this work, our goal is to design a deterministic abstraction where the properties are never violated: the real-time Byzantine-resilient reliable broadcast primitive discussed in Sec. IV is deterministic because late processes become passive, and therefore count as being faulty.
In [16, 17], the authors present a Byzantine fault-tolerant SCADA system that relies on the Prime [18, 19] Byzantine Fault-Tolerant State Machine Replication [20, 21] (BFT-SMR) protocol to ensure both safety and latency guarantees. As opposed to PISTIS, Prime relies on an asynchronous primary-based BFT-SMR. As opposed to Prime, the PISTIS-CS and PISTIS-AT algorithms are designed modularly from a timely reliable broadcast primitive; and PISTIS allows slow connections between any processes in a probabilistic synchronous environment, while Prime relies on the existence of a “stable” timely set of processes.

## III System and Threat Model

### III-A System Model

Processes. We consider a distributed system consisting of a set $\mathit{\Pi}=\{p_{0},p_{1},...,p_{N-1}\}$ of $N>1$ processes. We assume that processes are uniquely identifiable and can use digital signatures to verify the authenticity of messages and enforce their integrity. We denote by $\sigma_{i}(v)$ the signature of value $v$ by process $p_{i}$. We often write $\sigma_{i}$ when the payload is clear from the context. Processes are synchronous, i.e., the delay for performing a local step has a fixed known bound (note that this does not apply to faulty processes—see below).

Clocks. Processes have access to local clocks with a bounded and negligible rate drift to real time. These clocks do not need to be synchronized.

Communication. Every pair of processes is connected by two logical uni-directional links, e.g., $p_{i}$ and $p_{j}$ are connected by links $l_{ij}$ and $l_{ji}$. Links can abstract a physical bus or a dedicated network link. We assume a _probabilistic synchronous communication model_. This means that for any transmission attempt to send a message over link $l_{ij}$ (with $i\neq j$) at some time $t$, there is a probability $P_{ij}(t)$ that the message reaches its destination within a maximum delay $d$ (known to the processes). $d$ is the upper time bound on non-lossy message delivery and $\epsilon_{1}<1-P_{ij}(t)<\epsilon_{2}\ll 1$, where $\epsilon_{1}$ and $\epsilon_{2}$ are small strictly positive values. Such violations exist in networks, as arguably all communication is prone to unpredictable disturbances, e.g., bandwidth limitation, bad channel quality, interference, collisions, and stack overflows [4]. Our probabilistic synchronous communication model has been shown to be weaker, in some sense [39], than partial synchrony [38]. We further discuss and compare our model to existing traditional ones in Appx. A. We do not model correlated losses explicitly, as previous works like [10] have shown that such bursts can be mitigated, and we leave it up to the applications to define how to deal with late messages (i.e., violating the $d$ delay assumption).

### III-B Threat Model

Processes. We assume that some processes can exhibit arbitrary, a.k.a. Byzantine, behavior. Byzantine nodes can abstract processes that have been compromised by attackers, or that are executing the algorithm incorrectly, e.g., as a result of some fault (software or hardware). A Byzantine process can behave arbitrarily, e.g., it may crash, fail to send or receive messages, delay messages, send arbitrary messages, etc. We assume that at most $f=\lfloor\frac{N-1}{3}\rfloor$ processes can be Byzantine. This bound was proved to hold for solving many forms of agreement in a variety of models, such as non-synchronous models [24, 25]. We allow nodes to become _passive_ in case they fail to execute in a timely fashion. As explained in Sec.
As explained in Sec. IV-C, passive nodes stop executing key events to guarantee timeliness. A process that exhibits a Byzantine behavior or that enters the passive mode (see Sec. IV-C) is termed faulty. Otherwise, the process is said to be correct. Note that passive nodes are considered faulty (at least) during the time they are passive, but are not counted against the $f$ Byzantine faults. Therefore, more than $f$ nodes could be faulty over the full lifespan of a system (up to $f$ nodes could be Byzantine, and up to $N$ processes could be momentarily passive).

Clocks. The bounded and negligible rate drift assumption in Sec. III-A has to hold only on a per-protocol-execution basis, which is easily met by current technology (such as techniques relying on GPS [26] or trusted components [27]). Hence the clock of a correct process always behaves as described in Sec. III-A.

Communication. We assume that Byzantine processes or network adversaries cannot modify the content of messages sent on a link connecting correct processes (implemented by authentication through unforgeable signatures [28]).

## IV Real-Time Byzantine Reliable Broadcast

We now present our solution to guarantee that correct nodes reliably deliver broadcast messages in a timely fashion, despite Byzantine nodes and communication disruptions. Sec. IV-A recalls the properties of the real-time Byzantine-resilient reliable broadcast (RTBRB) primitive [10]. Then, Sec. IV-B presents a high-level overview of the PISTIS event-triggered algorithm, which implements the RTBRB primitive, while Sec. IV-C provides a detailed presentation of PISTIS. Finally, Sec. IV-E explains how passive nodes can recover and become active again to ensure the liveness of the system.

### IV-A Real-time Byzantine Reliable Broadcast Abstraction

###### Definition 1 (RTBRB).

The real-time Byzantine reliable broadcast (RTBRB) primitive guarantees the following properties [10], assuming every message is uniquely identified (e.g., using the pair of a sequence number and a process id—the broadcaster's id). (RTBRB's properties are equivalent to those of the Byzantine reliable broadcast abstraction defined in [29, Module 3.12, p.117], excluding _Timeliness_.) In this abstraction, a process broadcasts a message by invoking $\mbox{{RTBRB-broadcast}}()$. Similarly, a process delivers a message by invoking $\mbox{{RTBRB-deliver}}()$.

* RTBRB-Validity: If a correct process $p$ broadcasts $m$, then some correct process eventually delivers $m$.
* RTBRB-No duplication: No correct process delivers message $m$ more than once.
* RTBRB-Integrity: If some correct process delivers a message $m$ with sender $p_{i}$ and process $p_{i}$ is correct, then $m$ was previously broadcast by $p_{i}$.
* RTBRB-Agreement: If some correct process delivers $m$, then every correct process eventually delivers $m$.
* RTBRB-Timeliness: There exists a known $\Delta_{\mathtt{R}}$ such that if a correct process broadcasts $m$ at real-time $t$, no correct process delivers $m$ after real time $t+\Delta_{\mathtt{R}}$.

It is important to note that the above abstraction does not enforce ordering on the delivery of messages sent; we elaborate on that, and on how to achieve order, in Sec. V. Note also that in a system consisting of correct and faulty nodes, these properties ensure that correct nodes deliver broadcast messages within a bounded delay, while no such guarantee is (or can be) provided for faulty nodes.

### IV-B Overview of PISTIS

This section presents a high-level description of _PISTIS_.
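To make the RTBRB interface of Def. 1 concrete, the following minimal sketch (Python, with hypothetical class and method names; it is not the paper's implementation, which is written in C++ on OMNeT++, cf. Sec. VI-C) shows the two operations an application sees, together with the no-duplication bookkeeping any implementation must perform.

```python
# Minimal sketch of the RTBRB interface of Def. 1 (hypothetical names only).
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class MessageId:
    sender: int   # broadcaster's process id
    seq: int      # per-sender sequence number (identifies the broadcast instance)

class RTBRB:
    def __init__(self, pid: int,
                 deliver_cb: Callable[[MessageId, bytes], None],
                 delta_r: float):
        self.pid = pid                # local process id
        self.deliver_cb = deliver_cb  # upcall corresponding to RTBRB-deliver
        self.delta_r = delta_r        # known timeliness bound (3*T for PISTIS)
        self._delivered: set[MessageId] = set()  # enforces RTBRB-No duplication

    def broadcast(self, seq: int, value: bytes) -> None:
        """RTBRB-broadcast: hand `value` to the broadcast layer (e.g., PISTIS)."""
        raise NotImplementedError  # provided by the concrete protocol

    def _deliver(self, mid: MessageId, value: bytes) -> None:
        # Called internally once the protocol has gathered a quorum of 2f+1 signatures.
        if mid in self._delivered:
            return                    # never deliver the same message twice
        self._delivered.add(mid)
        self.deliver_cb(mid, value)   # RTBRB-deliver upcall to the application
```

Whatever protocol fills in `broadcast()` must ensure that `_deliver` fires at every correct process within `delta_r` of a correct broadcast (RTBRB-Timeliness); PISTIS, described next, is one such protocol.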
For simplicity, we assume the total number of processes to be $N=3f+1$, in which case a Byzantine quorum has size $2f+1$. PISTIS guarantees the RTBRB properties deterministically despite the probabilistic lossy network. However, this comes at the price of PISTIS triggering an entire system fail-safe (shutdown) and a reinitialization of the system state when a violation of RTBRB-Timeliness is inevitable. We show later in Sec. VI that the probability of PISTIS causing such a system fail-safe (and hence violating an RTBRB property if the fail-safe were not triggered) is negligible.

System Awareness. Given that broadcasts can be invoked at unknown times, there might exist a correct process in $\mathit{\Pi}\setminus\{p_{i}\}$ that is unaware of $p_{i}$'s broadcast for an unbounded amount of time after it was issued, since all links can lose an unbounded number of messages. The occurrence of such scenarios may hinder the system's ability to deliver real-time guarantees. To this end, we require that every process $p_{j}$ constantly exchanges messages with the rest of the system. This regular message exchange aims at capturing how well $p_{j}$ is connected to other processes, and hence to what extent $p_{j}$ is up-to-date with what is going on in the system (and to what extent the system knows about $p_{j}$'s state). We achieve this constant periodic message exchange via a function which we call proof-of-connectivity. (Periodic message exchange (heartbeats) has been used to discover the network state in many monitoring algorithms [30, 31].) It requires each process to diffuse heartbeats to the rest of the system in overlapping rounds: a new round is started every $d$ time units, and each round is of a fixed duration $\mathbb{T}$, where $d<\mathbb{T}$. (Sec. VI shows that $\mathbb{T}=8d$ is a reasonably good value, while Sec. IV-D highlights the need for overlapping rounds.) A round consists of repeatedly (every $d$ units of time) diffusing a signed heartbeat message to $X$ other processes. $X$ stands for the number of processes to which a process sends a message in a communication step. The value of $X$ is fixed at deployment time (i.e., it does not change over the execution of a system) and can range between $0$ and $N-1$. It is used to avoid network congestion by enforcing that processes selectively send their messages to an arbitrary subset of the system. Each round thus consists of repeatedly sending a message $\lceil\frac{\mathbb{T}}{d}\rceil$ times, each time to $X$ other nodes. Note that even though the value of $X$ is fixed, the set of $X$ processes targeted can change from repetition to repetition within a round, such that the union of the processes targeted by all $\lceil\frac{\mathbb{T}}{d}\rceil$ repetitions of that round covers all processes in the system. This is possible when $N\leq X\times\lceil\frac{\mathbb{T}}{d}\rceil$, which we always guarantee in practice. Heartbeat messages are uniquely identified by sequence numbers, which are incremented prior to each round. On receipt of a heartbeat message, a correct process appends its own signature to it, as well as all other seen signatures relative to that heartbeat, and sends it to $X$ other processes. At the end of each round, if a process has not received at least $2f+1$ signatures (including its own) on its own heartbeat, it enters the passive mode.

Figure 1: Example of a proof-of-connectivity run, where $X=2f+1$, and where 2 repetitions allow covering all nodes
Fig. 1 provides an example of a run of the proof-of-connectivity protocol, depicted as a message sequence diagram, in a system composed of 4 processes. This figure depicts part of the first three rounds of proof-of-connectivity initiated by $p_{0}$ (we only show the messages sent by $p_{0}$ to avoid cluttering the picture), namely ${\mathit{PoC}}_{0}$ in blue, ${\mathit{PoC}}_{1}$ in orange, and ${\mathit{PoC}}_{2}$ in purple. In this example, each proof-of-connectivity round has length $\mathbb{T}=6d$. Therefore, the blue ${\mathit{PoC}}_{0}$ heartbeats are sent 6 times between $d_{0}$ and $d_{5}$, the orange ${\mathit{PoC}}_{1}$ heartbeats are sent 6 times between $d_{1}$ and $d_{6}$, and the purple ${\mathit{PoC}}_{2}$ heartbeats are sent 6 times between $d_{2}$ and $d_{7}$. If, by the end of ${\mathit{PoC}}_{0}$, $p_{0}$ has not received $2f$ replies to its heartbeats, it will become passive.

Diffusing Broadcasts. PISTIS relies on two types of messages (Echo and Deliver messages) to ensure that broadcast values are delivered in a timely fashion. Processes exchange Echo messages either to start broadcasting new values, or in response to received Echo messages. Echo messages help processes gather a valid quorum (a Byzantine write quorum [32] of size $2f+1$) of signatures on a single value $v$ relative to a broadcast instance. A broadcast instance is identified by the id of the process broadcasting $v$ and a sequence number. Echo messages help prevent system inconsistencies when malicious nodes send different values with the same sequence number (same broadcast instance) to different recipients. However, additional messages, namely Deliver messages, are needed to achieve delivery within a bounded time after the broadcast. When a process $p_{i}$ receives a value $v$ through an Echo message, it appends its signature to the message, as well as all other signatures it has received relative to $v$, and sends it to $X$ other processes. In addition, when $p_{i}$ receives a value for the first time, it triggers a local timer of duration $\mathbb{T}$. Upon receiving a value signed by more than $2f$ processes, a process delivers that value. However, a process that does not receive more than $2f$ signatures on time (i.e., before the timer expires) enters the passive mode. If multiple values are heard relative to a single process and sequence number (equivocation), then the first value heard is the one that is echoed. Note that processes continue executing the proof-of-connectivity function during the _echo_ and _deliver_ phases, albeit by piggybacking heartbeats onto echo/deliver messages. As opposed to Echo messages, which are diffused (i.e., re-transmitted temporally and sporadically) for a duration $\mathbb{T}$, Deliver messages are diffused for $2\mathbb{T}$. This is needed to ensure that if some correct processes start diffusing a message between some time $t$ and $t+\mathbb{T}$, possibly at different times, then there must be a $\mathbb{T}$-long period of time where all of them are diffusing the message (see Lemma 4 in Appx. B for more details). Given a large enough collection of such processes ($f+1$ correct processes), this allows other processes to learn about delivered values in a timely fashion.

Figure 2: Example of a PISTIS run where $X=2f+1$, and where 2 repetitions allow covering all nodes

Fig. 2 provides an example of a run of PISTIS, depicted as a message sequence diagram, in a system composed of 4 processes.
This figure depicts part of the echo (in blue) and deliver (in orange) phases of one broadcast initiated by $p_{0}$ (for the purpose of this illustration, only the messages sent by $p_{0}$ are shown). The purple "broadcast" and "deliver" tags indicate the times at which $p_{0}$ initiated its broadcast and delivered it. In this example, the echo phase is initially meant to last for a duration of $\mathbb{T}=6d$. However, it happens here that $p_{0}$ received $2f$ echo messages for its broadcast by $3d+k$, where $0<k<d$, which is why $d_{3}$ is shorter than the other intervals. Therefore, $p_{0}$ stops its echo phase and starts its deliver phase at $3d+k$. As mentioned above, the deliver phase lasts for $2\mathbb{T}$. If $p_{0}$ has not received $2f$ deliver messages in return by the end of that deliver phase, then it becomes passive.

Algorithm 1 proof-of-connectivity($\mathbb{T}$) @ process $p_{i}$
1: $seq=[0]^{n}$; // stores smallest valid sequence number per process.
2: $sq=0$; // local sequence number.
3: $\mathcal{R}_{\mathit{HB}}=[\emptyset]^{n}$; // stores signatures on last $\lceil\frac{\mathbb{T}}{d}\rceil$ heartbeats of processes.
4:
5: upon event $\texttt{initialization}()\vee\texttt{check-connectivity}()$ do
6:  trigger $\texttt{Timeout}({\mathit{msg}},\mathbb{T})$;
7:  Execute h-diffuse$\left(\langle p_{i},sq\rangle,\{\sigma_{i}\}\right)$;
8:  $\mathcal{R}_{\mathit{HB}}[p_{i}].add(\langle p_{i},sq\rangle;\{\sigma_{i}\})$; $sleep(d)$; $sq$++;
9:  if $sq-seq[p_{i}]>\lceil\frac{\mathbb{T}}{d}\rceil$ then $seq[p_{i}]$++;
10:  end if
11:  trigger $\texttt{check-connectivity}()$;
12:
13: upon event $\texttt{Expired-Timer}(\langle p_{i},sq^{\prime}\rangle,{\mathit{timeout}})$ do
14:  if $|\mathcal{R}_{\mathit{HB}}[p_{i}].getsig(sq^{\prime})|\leq 2f$ then
15:   // gets signatures on message with sequence number $sq^{\prime}$
16:   Initiate passive mode;
17:  else $\mathcal{R}_{\mathit{HB}}[p_{i}].remove(sq^{\prime})$; // remove entry with seq. num. $sq^{\prime}$
18:  end if
19:
20: upon event receive $\mbox{{HB}}\left(\langle p_{j},sq^{\prime}\rangle,\Sigma\right)$ do
21:  if $(sq^{\prime}\geq{seq[p_{j}]})$ then
22:   $\mathcal{R}_{\mathit{HB}}[p_{j}].setsig(sq^{\prime},\mathcal{R}_{\mathit{HB}}[p_{j}].getsig(sq^{\prime})\cup\Sigma\cup\{\sigma_{i}\})$;
23:   if $j\neq i\land sq^{\prime}\neq seq[p_{j}]$ then
24:    Execute h-diffuse$\left(\langle p_{j},sq^{\prime}\rangle,\mathcal{R}_{\mathit{HB}}[p_{j}].getsig(sq^{\prime})\right)$;
25:   end if
26:  end if
27:  if $sq^{\prime}>(seq[p_{j}]+\lceil\frac{\mathbb{T}}{d}\rceil)\land j\neq{i}$ then
28:   $seq[p_{j}]=sq^{\prime}-\lceil\frac{\mathbb{T}}{d}\rceil$;
29:   $\mathcal{R}_{\mathit{HB}}[p_{j}].remove(sq^{\prime\prime})$, $\forall sq^{\prime\prime}<seq[p_{j}]$;
30:  end if
31:
32: Function h-diffuse$\left({\mathit{msg}},{\mathit{\Sigma}}\right)$
33:  for $(\texttt{int}\ i=0$; $i\leq\lceil\frac{\mathbb{T}}{d}\rceil$; $i$++) do
34:   send $\mbox{{HB}}\left({\mathit{msg}},\Sigma\right)$ to $X$ other processes;
35:   $sleep(d);$
36:  end for

### IV-C Detailed Presentation of PISTIS

Algorithm 2 PISTIS @ process $p_{i}$
1: Execute proof-of-connectivity($\mathbb{T}$);
2:
3: upon event $\mbox{{RTBRB-broadcast}}(p_{i},{\mathit{sq}},v)$ do
4:  Execute proof-of-connectivity in piggyback mode;
5:  Initialize $\mathcal{R}_{\mathit{echo}}(p_{i},{\mathit{sq}},v)=\{\sigma_{i}\}$;
6:  Execute $\texttt{b-diffuse}(\langle p_{i},{\mathit{sq}},v\rangle,\mathbb{T},\texttt{echo})$;
7:
8: upon event receive $\mbox{{Echo}}\left(\langle p_{j},{\mathit{sq}},v\rangle,\Sigma\right)$ do
9:  if $\nexists\mathcal{R}_{\mathit{echo}}(p_{j},{\mathit{sq}},...)$ then
10:   Initialize $\mathcal{R}_{\mathit{echo}}(p_{j},{\mathit{sq}},v)=\{\sigma_{i}\}\cup\Sigma$;
11:   Execute proof-of-connectivity in piggyback mode;
12:   if $|\mathcal{R}_{\mathit{echo}}(p_{j},{\mathit{sq}},v)|\leq 2f$ then
13:    Execute $\texttt{b-diffuse}(\langle p_{j},{\mathit{sq}},v\rangle,\mathbb{T},\texttt{echo})$;
14:   else Execute $\texttt{deliver-msg}(p_{j},{\mathit{sq}},v,\mathcal{R}_{\mathit{echo}}(p_{j},{\mathit{sq}},v))$;
15:   end if
16:  else if $\exists\mathcal{R}_{\mathit{echo}}(p_{j},{\mathit{sq}},v)$ then
17:   $\mathcal{R}_{\mathit{echo}}(p_{j},{\mathit{sq}},v)=\mathcal{R}_{\mathit{echo}}(p_{j},{\mathit{sq}},v)\cup\Sigma$;
18:   if $|\mathcal{R}_{\mathit{echo}}(p_{j},{\mathit{sq}},v)|>2f$ (for the first time) then
19:    Execute $\texttt{deliver-msg}(p_{j},{\mathit{sq}},v,\mathcal{R}_{\mathit{echo}}(p_{j},{\mathit{sq}},v))$;
20:   end if
21:  else if $\exists\mathcal{R}_{\mathit{echo}}(p_{j},{\mathit{sq}},v^{\prime}\neq{v})$ then
22:   // $p_{j}$ has lied about message with ${\mathit{sq}}$
23:   if $|\Sigma|>2f$ then
24:    remove $\mathcal{R}_{\mathit{echo}}(p_{j},{\mathit{sq}},v^{\prime})$;
25:    $\mathcal{R}_{\mathit{echo}}(p_{j},{\mathit{sq}},v)=\Sigma$;
26:    Execute $\texttt{deliver-msg}(p_{j},{\mathit{sq}},v,\Sigma)$;
27:   end if
28:  end if
29:
30: upon event receive $\mbox{{Deliver}}\left(\langle p_{j},{\mathit{sq}},v,\Sigma\rangle,\Sigma^{\prime}\right)$ do
31:  if $\nexists\mathcal{R}_{\mathit{deliver}}(p_{j},{\mathit{sq}},v)$ then
32:   $\mathcal{R}_{\mathit{echo}}(p_{j},{\mathit{sq}},v)=\mathcal{R}_{\mathit{echo}}(p_{j},{\mathit{sq}},v)\cup\Sigma$;
33:   Execute $\texttt{deliver-msg}(p_{j},{\mathit{sq}},v,\Sigma)$;
34:  end if
35:  $\mathcal{R}_{\mathit{deliver}}(p_{j},{\mathit{sq}},v)=\mathcal{R}_{\mathit{deliver}}(p_{j},{\mathit{sq}},v)\cup\Sigma^{\prime}$;
36:
37: upon event $\texttt{Expired-Timer}({\mathit{msg}},{\mathit{timeout}},{\mathit{mode}})$ do
38:  if $\exists\mathcal{R}_{{\mathit{mode}}}({\mathit{msg}})\wedge|\mathcal{R}_{{\mathit{mode}}}({\mathit{msg}})|\leq 2f$ then
39:   switch ${\mathit{mode}}$ do
40:    case echo
41:     if no lie is discovered on ${\mathit{msg}}$ then
42:      Initiate passive mode;
43:     end if
44:    case deliver
45:     Initiate passive mode;
46:  end if
47:
48: Function $\texttt{b-diffuse}({\mathit{msg}},{\mathit{timeout}},{\mathit{mode}})$
49:  trigger $\texttt{Timeout}({\mathit{msg}},{\mathit{timeout}},{\mathit{mode}})$;
50:  for $(\texttt{int}\ i=0$; $i\leq\lceil\frac{{\mathit{timeout}}}{d}\rceil$; $i$++) do
51:   $\Sigma=\mathcal{R}_{{\mathit{mode}}}({\mathit{msg}})$;
52:   switch ${\mathit{mode}}$ do
53:    case echo
54:     send $\mbox{{Echo}}\left({\mathit{msg}},\Sigma\right)$ to $X$ random processes;
55:    case deliver
56:     send $\mbox{{Deliver}}\left({\mathit{msg}},\Sigma\right)$ to $X$ random processes;
57:   sleep($d$);
58:  end for
59:
60: Function $\texttt{deliver-msg}_{p_{i}}(p_{j},{\mathit{sq}},v,\Sigma)$
61:  if $\nexists\mathcal{R}_{\mathit{deliver}}(p_{j},{\mathit{sq}},v)$ then
62:   Execute proof-of-connectivity in piggyback mode;
63:   trigger $\mbox{{RTBRB-deliver}}(p_{j},{\mathit{sq}},v)$;
64:   Initialize $\mathcal{R}_{\mathit{deliver}}(p_{j},{\mathit{sq}},v)=\{\sigma_{i}\}$;
65:   Stop sending any $\mbox{{Echo}}\left(\right)$;
66:  end if
67:  Execute $\texttt{b-diffuse}(\langle p_{j},{\mathit{sq}},v,\Sigma\rangle,2\mathbb{T},\texttt{deliver})$;

We now discuss PISTIS (Algorithm 2) in more detail. Note that all functions presented in Algorithms 1 and 2 are non-blocking. PISTIS's proof of correctness can be found in Appx. B.

Process states. Processes can become passive under certain scenarios by calling "Initiate passive mode". A passive node stops broadcasting and delivering messages to guarantee timeliness, but otherwise keeps on replying to messages to help other processes. Processes that were behaving correctly thus far are considered faulty when they initiate passive mode and can notify the application above of this fact. Later in this section, we show how processes in the passive mode can come back to normal operation by calling "Initiate active mode".

Ensuring sufficient connectivity. In PISTIS every process executes the proof-of-connectivity procedure (Algorithm 1). Namely, a process $p_{i}$ forms a heartbeat $\mbox{{HB}}\left(\langle p_{i},sq\rangle,\{\sigma_{i}\}\right)$, where $sq$ is $p_{i}$'s current heartbeat sequence number and $\sigma_{i}$ is $p_{i}$'s signature on $\langle p_{i},sq\rangle$. Process $p_{i}$ also stores (in array $\mathcal{R}_{\mathit{HB}}$), for every process (including itself), all signatures it receives on heartbeats with a valid sequence number. A valid heartbeat sequence number for some process $p_{j}$ is a sequence number $\geq{seq[p_{j}]}$; heartbeats with lower sequence numbers are simply ignored. To avoid receiving heartbeats from older rounds, we update $seq[p_{j}]$ every time a heartbeat with a sequence number exceeding $seq[p_{j}]+\lceil\frac{\mathbb{T}}{d}\rceil$ is received (lines 27–28). After forming its heartbeat, $p_{i}$ sets a timeout of duration $\mathbb{T}$, and sends this heartbeat to $X>f$ random processes $\lceil\frac{\mathbb{T}}{d}\rceil$ times (lines 32–36). Process $p_{i}$ increments its heartbeat sequence number and repeats this whole procedure every $d<\mathbb{T}$ time units. Upon incrementing its heartbeat sequence number, $p_{i}$ updates its own valid heartbeat sequence numbers (lines 9–10).
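As a quick sanity check on these parameters, the following short snippet (Python; purely illustrative and not part of the paper's artifacts) computes the quorum threshold $2f+1$, the number of repetitions per round $\lceil\mathbb{T}/d\rceil$, and the coverage condition $N\leq X\times\lceil\mathbb{T}/d\rceil$ from Sec. IV-B, for the values used later in the evaluation ($N=49$, $\mathbb{T}=8d$, $X=f+1$).

```python
def check_poc_parameters(N: int, X: int, T_over_d: int) -> dict:
    """Sanity-check PISTIS proof-of-connectivity parameters (illustrative only)."""
    f = (N - 1) // 3              # max Byzantine processes: f = floor((N-1)/3)
    quorum = 2 * f + 1            # signatures needed on a heartbeat per round
    reps = T_over_d               # sends per round: ceil(T/d), with T a multiple of d here
    covers_all = N <= X * reps    # union of the X-sized samples can reach every process
    return {"f": f, "quorum": quorum, "repetitions": reps, "covers_all": covers_all}

# Values used in the evaluation: N = 49, T = 8d, X = f + 1 = 17.
print(check_poc_parameters(N=49, X=17, T_over_d=8))
# -> {'f': 16, 'quorum': 33, 'repetitions': 8, 'covers_all': True}  (17*8 = 136 >= 49)
```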
A process $p_{i}$ receiving $\mbox{{HB}}\left(\langle p_{j},sq^{\prime}\rangle,\Sigma\right)$ ignores this heartbeat if $sq^{\prime}$ is smaller than the smallest valid heartbeat sequence number known for $p_{j}$. Otherwise, $p_{i}$ updates $p_{j}$'s valid heartbeat sequence numbers (lines 27–30) and the list of all seen signatures on these valid heartbeats (line 22). Then, $p_{i}$ diffuses the heartbeat with the updated list of seen signatures to $X$ random processes (line 24). When a timer expires, $p_{i}$ checks $\mathcal{R}_{\mathit{HB}}[p_{i}]$ for the number of accumulated signatures on its corresponding heartbeat. If that number is $\leq 2f$, $p_{i}$ enters the passive mode; otherwise it removes the corresponding entry from $\mathcal{R}_{\mathit{HB}}[p_{i}]$ (lines 13–19).

Broadcasting a message. A process $p_{i}$ that wishes to broadcast a value $v$ calls $\mbox{{RTBRB-broadcast}}(p_{i},{\mathit{sq}},v)$ from Algorithm 2 (lines 3–7), where ${\mathit{sq}}$ is a sequence number that uniquely identifies this broadcast instance. Given such an event, $p_{i}$ produces a signature $\sigma_{i}$ for the payload $\langle p_{i},{\mathit{sq}},v\rangle$. It then triggers a timeout of duration $\mathbb{T}$ and sends an $\mbox{{Echo}}\left(\langle p_{i},{\mathit{sq}},v\rangle,\{\sigma_{i}\}\right)$ message $\lceil\frac{\mathbb{T}}{d}\rceil$ times to $X$ other random processes. Proof-of-connectivity information from $p_{i}$ is now piggybacked on these messages, as on all other Echo and Deliver messages.

Sending and Receiving Echoes. When $p_{i}$ receives an $\mbox{{Echo}}\left(\langle p_{j},{\mathit{sq}},v\rangle,\Sigma\right)$, $p_{i}$ reacts differently depending on whether it is not already echoing for this instance (lines 8–15), already echoing $v$ (lines 16–20), or already echoing a different value (lines 21–27). In all three cases, $p_{i}$ starts delivering a message (and stops sending echoes) as soon as at least $2f+1$ distinct signatures have been collected for that message.

Sending and Receiving Deliver Messages. When $p_{i}$ receives $\mbox{{Deliver}}\left(\langle p_{j},{\mathit{sq}},v,\Sigma\rangle,\Sigma^{\prime}\right)$ for the first time (lines 60–67), it delivers $\langle p_{j},{\mathit{sq}},v,\Sigma\rangle$, and sends $\mbox{{Deliver}}\left(\langle p_{j},{\mathit{sq}},v,\Sigma\rangle,\mathcal{R}_{\mathit{deliver}}(p_{j},{\mathit{sq}},v)\right)$ using $\texttt{b-diffuse}()$. In case that _deliver_ message is not the first one received (lines 30–35), $p_{i}$ aggregates all seen signatures for $\langle p_{j},{\mathit{sq}},v\rangle$ in $\mathcal{R}_{\mathit{deliver}}(p_{j},{\mathit{sq}},v)$ (all functions that use $\mathcal{R}_{\mathit{deliver}}(p_{j},{\mathit{sq}},v)$ now use the updated value).

Process Passive Mode. When a timeout set by process $p_{i}$ with parameters $({\mathit{msg}},{\mathit{timeout}},{\mathit{mode}})$ expires, $p_{i}$ enters the passive mode if the set $\mathcal{R}_{{\mathit{mode}}}$ has fewer than $2f+1$ distinct signatures, for ${\mathit{mode}}=\texttt{deliver}$. For ${\mathit{mode}}=\texttt{echo}$, $p_{i}$ enters passive mode if, in addition to $\mathcal{R}_{{\mathit{mode}}}$ not having $2f+1$ signatures, $p_{i}$ did not discover a lie for that broadcast instance.

###### Remark 1.
Any message of the form $\mbox{{Echo}}\left(\langle p_{j},{\mathit{sq}},v\rangle,\Sigma_{1}\right)$ or $\mbox{{Deliver}}\left(\langle p_{j},{\mathit{sq}},v,\Sigma_{2}\rangle,\Sigma_{3}\right)$ is termed invalid if: (1) $\Sigma_{1}$ contains an incorrect signature, and similarly for $\Sigma_{2}$ and $\Sigma_{3}$; or (2) $\Sigma_{1}$ does not contain a signature from $p_{j}$, and similarly for $\Sigma_{2}$; or (3) $\Sigma_{2}$ has fewer than $2f+1$ signatures. Invalid messages are simply discarded.

###### Remark 2.

We assume that processes sign payloads of the form $(p_{i},{\mathit{sq}},v,\mathsf{E})$ for echo messages and of the form $(p_{i},{\mathit{sq}},v,\mathsf{D})$ for deliver messages. We use the $\mathsf{E}$ and $\mathsf{D}$ tags to distinguish echo and deliver payloads, thereby ensuring that an attacker cannot use echo signatures as deliver signatures. Note that echo signatures are sent as part of deliver messages as a proof that a quorum of processes echoed a certain value.

### IV-D PISTIS's Properties

As mentioned at the beginning of this section, PISTIS is correct in the sense that it satisfies all five properties of the RTBRB primitive presented in Sec. IV-A:

###### Theorem 1 (Correctness of PISTIS).

Under the model presented in Sec. III, the PISTIS algorithm presented in Algorithm 2 implements the RTBRB primitive.

A proof of this theorem can be found in Appx. B. Let us point out here that the $\Delta_{\mathtt{R}}$ bound of the RTBRB-Timeliness property turns out to be $3\mathbb{T}$. Let us also highlight the crux of this proof. As illustrated above, a correct node $p_{i}$ that broadcasts a message $m$ at time $t$ is guaranteed to start delivering $m$ by $t_{d}=t+\mathbb{T}$. In addition, thanks to the $2\mathbb{T}$ delivery period, we are also guaranteed that a collection, called $B$, of $2f+1$ nodes will deliver $m$ during a $\mathbb{T}$-long period that starts before $t_{d}+\mathbb{T}$. PISTIS's proof-of-connectivity (PoC) mechanism then ensures that any other correct node $p_{j}$ will execute a PoC round during which a correct node $r\in{B}$ delivers $m$ to $p_{j}$, piggybacked on a heartbeat, thereby guaranteeing that $p_{j}$ delivers $m$ in a timely fashion. In particular, overlapping PoC rounds allow all correct nodes to have a PoC round that coincides with that $\mathbb{T}$-long period (called $D$ here), during which the correct nodes in $B$ deliver $m$, thereby allowing all correct nodes to deliver $m$. If PoC rounds were consecutive and not overlapping, a correct node could miss the deliver message (piggybacked with PoC messages) sent during $D$ if it were to receive PoC messages for a round (i.e., sequence number) $s$ sent before $D$, and for round $s+1$ sent after $D$, thereby staying active while not delivering.

### IV-E Byzantine-Resilient Recovery

If process $p_{i}$ detects that it is executing under bad network conditions, it enters the passive mode and signals the upper application. As a result, $p_{i}$ stops broadcasting and delivering broadcast messages (by not executing line 3 and line 63) to avoid violating RTBRB-Timeliness. However, $p_{i}$ continues participating in the dissemination of broadcast and proof-of-connectivity messages, to avoid having too many nodes fail to collect enough messages and hence become passive. Once the network conditions are acceptable again, $p_{i}$ can recover and resume delivering broadcast messages.
More precisely, a process $p_{i}$ that enters passive mode at time $t$ can operate normally again if the interval $[t,t+\Delta_{\mathtt{R}}]$ is free of any passive mode initiations. This $\Delta_{\mathtt{R}}$ duration ensures that the messages delivered by a recovered process $p_{i}$ do not violate any RTBRB properties. After a delay $\Delta_{\mathtt{R}}$, nodes resume their full participation in the protocol, and either deliver messages or stay on hold. Note that in the case of multiple broadcast instances, passive nodes that become active again should learn the latest broadcast sequence numbers of the other nodes. Otherwise, Byzantine nodes can exploit this to hinder the liveness of the system.

###### Remark 3.

Given that processes can now shift between passive and active modes, we specify our notion of correct processes as follows. A system run is modeled by a trace of events happening during that run. An event has a timestamp and a node associated with it. Moreover, an event can either be a correct event or a Byzantine event. Given an algorithm $A$, a process $p$ is deemed correct w.r.t. $A$ and a trace $\tau$, if: (1) it follows its specification from $e_{1}$, the first correct $A$-related event (i.e., an event of algorithm $A$) happening in $\tau$, to $e_{2}$, the last correct $A$-related event happening in $\tau$; (2) $p$'s events between $e_{1}$ and $e_{2}$ must all be correct; (3) $p$ must also have followed its specification since it last started; and (4) $p$ must never have lost its keys (so that no other node can impersonate $p$ when $p$ follows its specification). The results presented below also hold for this definition of correctness, because correct processes are required to be active through the entire broadcast instance.

This recovery mechanism improves the overall resilience of the system. Indeed, all processes can end up in passive mode only if $2f+1$ nodes are passive, which is now harder to reach if nodes can recover sufficiently fast.

## V Beyond a Reliable Broadcast

Unlike liveness in asynchronous reliable broadcast, the RTBRB-Timeliness property (a safety property) introduces a notion of physical ordering. This ordering is due to the fact that timeliness stipulates, for each execution, that a termination event occur "at or before" some $\Delta_{\mathtt{R}}$ on the timeline. This said, the reader may wonder to what extent the real-time Byzantine-resilient reliable broadcast (of Sec. IV-A) helps in establishing total order. The answer to this question lies in examining what happens to multiple broadcasts issued by the same or by different nodes. When multiple broadcasts interleave, e.g., when they are issued within a period shorter than $\Delta_{\mathtt{R}}$ (the upper time bound on delivering a message), messages might be delivered to different processes in different orders. The timeliness property of the real-time Byzantine-resilient reliable broadcast only ensures that a message $m$ that is broadcast at time $t$ is delivered at some time in $[t,t+\Delta_{\mathtt{R}}]$. Thus, to ensure total order on all system events, e.g., for implementing State Machine Replication, additional abstractions need to be built on top of the real-time Byzantine-resilient reliable broadcast primitive that we have developed so far. In this section, we investigate how to modularly obtain such an order on system events while still preserving real-time guarantees and Byzantine resilience.
We define two building blocks on top of RTBRB, namely the RTBC real-time Byzantine consensus abstraction (Def. 2)—a fundamental building block for state machine replication, atomic broadcast and leader election [29]—and the RTBAB real-time atomic broadcast abstraction (Def. 4), used to establish total order on system events. We then provide characterizations of classes of algorithms that implement these abstractions: Thm. 2 provides a characterization of the PISTIS-CS class of algorithms that implement RTBC, while Thm. 3 provides a characterization of the PISTIS-AT class of algorithms that implement RTBAB. Finally, we provide examples of algorithms that belong to these classes (see Examples 1 and 2). We start with the following assumption, which constrains the ways processes can communicate.

###### Assumption 1.

Correct processes access the network only via the RTBRB primitive, namely using the two operations $\mbox{{RTBRB-broadcast}}()$ and $\mbox{{RTBRB-deliver}}()$.

From Assumption 1, a correct process $p_{i}$ that receives a message through an operation other than $\mbox{{RTBRB-deliver}}()$ simply ignores that message by dropping it.

### V-A Real-Time Byzantine Consensus

Roughly speaking, solving the Byzantine consensus problem requires the agreement of distributed processes on a given value, even though some of the processes may fail arbitrarily. Byzantine consensus was first identified by Pease et al. [42], and formalized as the interactive consistency problem. An algorithm achieves interactive consistency if it allows the non-faulty processes to come to a consistent view of the initial values of all the processes, including the faulty ones. Once interactive consistency has been reached, the non-faulty processes can reach consensus by applying a deterministic averaging or filtering function on the values of their view. We apply the following assumption to reach consensus.

###### Assumption 2.

Once interactive consistency terminates, every correct process scans the obtained vector and decides on the value that appears at least $2f+1$ times. If no such value exists, then the process decides $\bot$, a distinguished element that indicates that no value has been decided.

###### Definition 2 (RTBC).

The real-time Byzantine consensus (RTBC) abstraction is expressed by the following properties. (The properties of RTBC are the same as those of the traditional (strong) Byzantine consensus defined in [38] (see also [29, Module 5.11, p.246]), excluding the _Timeliness_ property.)

* RTBC-Validity: If all correct processes propose the same value $v$, then any correct process that decides, decides $v$. Otherwise, a correct process may only decide a value that was proposed by some correct process, or $\bot$.
* RTBC-Agreement: No two correct processes decide differently.
* RTBC-Termination: Correct processes eventually decide.
* RTBC-Timeliness: If a correct process $p_{i}$ proposes a value to consensus at time $t$, then no correct process decides after $t+\Delta_{\mathtt{C}}$.

In RTBC, a process $p_{i}$ can propose a value $v$ to consensus by invoking $\mbox{{RTBC-propose}}(p_{i},{\mathit{inst}},v)$, where ${\mathit{inst}}$ is a sequence number that uniquely identifies an RTBC instance. Similarly, a process $p_{i}$ decides on a value $v$ by invoking $\mbox{{RTBC-decide}}(p_{i},{\mathit{inst}},v)$. In addition, $\mbox{{RTBC-init}}({\mathit{inst}})$ instantiates a new instance of RTBC with id ${\mathit{inst}}$, i.e., for sequence number ${\mathit{inst}}$.

###### Definition 3.
An algorithm is said to be _bounded_ if it only uses a known bounded number of communication rounds.

###### Theorem 2 (Characterization of the PISTIS-CS class).

Let PISTIS-CS be the class of _bounded_ (Def. 3) algorithms that implement interactive consistency under Assumptions 1 and 2. Then, PISTIS-CS algorithms also implement RTBC in our model (described in Sec. III).

See Appx. C for a proof of this result.

###### Example 1 (Examples of PISTIS-CS algorithms).

Because the interactive consistency problem has been solved using different algorithms that satisfy Def. 3, our result applies to various existing algorithms, such as [42, 34, 35, 36].

### V-B Real-Time Byzantine-Resilient Atomic Broadcast

###### Definition 4 (RTBAB).

A real-time Byzantine-resilient atomic broadcast (RTBAB) has the same properties as RTBRB (with a different timeliness bound) plus an additional ordering property (therefore, we only present the properties that differ from RTBRB's):

* RTBAB-Timeliness: There exists a known $\Delta_{\mathtt{A}}$ such that if a correct process broadcasts $m$ at time $t$, no correct process delivers $m$ after real time $t+\Delta_{\mathtt{A}}$.
* RTBAB-Total order: Let $m_{1}$ and $m_{2}$ be any two messages and suppose that $p_{i}$ and $p_{j}$ are any two correct processes that deliver $m_{1}$ and $m_{2}$. If $p_{i}$ delivers $m_{1}$ before $m_{2}$, then $p_{j}$ delivers $m_{1}$ before $m_{2}$.

We now define, through the properties listed below, the class of algorithms (called ${\mathcal{R}}\mathit{ound}{\mathcal{B}}\mathit{ased}$) that modularly implement the RTBAB properties. ${\mathcal{R}}\mathit{ound}{\mathcal{B}}\mathit{ased}$ algorithms make use of a single RTBRB instance and multiple instances of RTBC. We first constrain a ${\mathcal{R}}\mathit{ound}{\mathcal{B}}\mathit{ased}$ algorithm to start an RTBRB instance within a bounded amount of time for any broadcast call.

###### Property 1.

If a correct process $p_{i}$ RTBAB-broadcasts a message $m$ at time $t$, then it also RTBRB-broadcasts $m$ by time $t+\Delta_{\mathtt{B}}$, for some bounded $\Delta_{\mathtt{B}}$.

We then require a ${\mathcal{R}}\mathit{ound}{\mathcal{B}}\mathit{ased}$ algorithm to start (or end, in case this has already been done before) an RTBC instance, within a bounded amount of time, every time the RTBRB instance delivers.

###### Property 2.

If a correct process RTBRB-delivers a message $m$ at time $t$, such that $m$'s broadcaster is also correct, then it either RTBC-proposes or RTBC-decides $m$ by $t+\Delta_{\mathtt{P}}$, for some bounded $\Delta_{\mathtt{P}}$.

In addition, the next property constrains the values that can be proposed at each RTBC instance, namely that at most one non-$\bot$ value can be proposed at each instance.

###### Property 3.

Given an RTBC instance ${\mathit{inst}}$, there exists a value $v$ such that each correct process RTBC-proposes either $v$ or $\bot$ at ${\mathit{inst}}$.

Next, we require a ${\mathcal{R}}\mathit{ound}{\mathcal{B}}\mathit{ased}$ algorithm to deliver an RTBC-decided value within a bounded amount of time (Property 4) and to ensure that non-RTBC-decided values are re-proposed in later RTBC rounds (Property 5).

###### Property 4.

If a correct process RTBC-decides a message $m$ at time $t$, then it also RTBAB-delivers $m$ by time $t+\Delta_{\mathtt{D}}$, for some bounded $\Delta_{\mathtt{D}}$.

###### Property 5.
A correct process $p_{i}$ that proposes a value $v$ at a given time $t$, using a given RTBC instance ${\mathit{inst}}$, and such that this instance does not decide $v$, also RTBC-proposes $v$ at some instance ${\mathit{inst}}+k$, where $0<k$. Moreover, $p_{i}$ RTBC-proposes $v$ at the smallest instance between ${\mathit{inst}}+1$ and ${\mathit{inst}}+k$ where $v$ is proposed by some process.

Finally, we require that nodes participate in all successive RTBC instances in a monotonic fashion.

###### Property 6.

Correct processes RTBC-propose exactly one value per RTBC instance; propose values in all RTBC instances (i.e., for all instances ${\mathit{inst}}\in\mathbb{N}$); propose in increasing order w.r.t. the instance numbers of the RTBC instances (i.e., if $p_{i}$ proposes values at times $t_{1}$ and $t_{2}$ using the RTBC instances ${\mathit{inst}}_{1}$ and ${\mathit{inst}}_{2}$, respectively, and $t_{1}<t_{2}$, then ${\mathit{inst}}_{1}<{\mathit{inst}}_{2}$); and do not propose in parallel (i.e., if $p_{i}$ proposes a value at time $t$ using an RTBC instance ${\mathit{inst}}$, and this RTBC instance has not decided by time $t^{\prime}>t$, then $p_{i}$ does not propose any other value between $t$ and $t^{\prime}$).

###### Definition 5.

Let ${\mathcal{R}}\mathit{ound}{\mathcal{B}}\mathit{ased}$ be the class of _round-based_ algorithms that satisfy Properties 1, 2, 3, 4, 5, and 6.

###### Theorem 3 (Characterization of the PISTIS-AT class).

Let PISTIS-AT be the class of ${\mathcal{R}}\mathit{ound}{\mathcal{B}}\mathit{ased}$ algorithms that implement the traditional Byzantine total-order broadcast under Assumption 1. Then, PISTIS-AT algorithms also implement RTBAB in our system (described in Sec. III).

To prove Theorem 3, it is sufficient to prove that an RTBAB-broadcast value $m$ is always RTBAB-delivered within a bounded amount of time. Because of the round-based properties, $m$ must be RTBRB-broadcast and RTBRB-delivered within a bounded amount of time. Consequently, there is (within a bounded amount of time) an RTBC instance where "enough" correct nodes RTBC-propose $m$, so that $m$ gets RTBC-decided upon and RTBAB-delivered within a bounded amount of time. The proof of Theorem 3 is detailed in Appx. D.

We have introduced bounds for each of the operations executing in bounded time, namely $\Delta_{\mathtt{R}}$ (Def. 1), $\Delta_{\mathtt{C}}$ (Def. 2), $\Delta_{\mathtt{W}}$ (Alg. 3), $\Delta_{\mathtt{B}}$ (Prop. 1), $\Delta_{\mathtt{P}}$ (Prop. 2), $\Delta_{\mathtt{D}}$ (Prop. 4), and $\Delta_{\mathtt{A}}$ (Def. 4). Those bounds are not assumed to be related to each other. However, the bound for $\Delta_{\mathtt{A}}$ that we exhibit in Theorem 3's proof is a combination of all the other bounds discussed above.

###### Example 2 (Example of a PISTIS-AT algorithm).

Finally, Algorithm 3 provides an example of a PISTIS-AT algorithm that implements RTBAB modularly, which we adapted from [29, Alg. 6.2, p.290] to guarantee timeliness.
Algorithm 3 Example of a PISTIS-AT algorithm @ process $p_{i}$
1: upon event $\mbox{{RTBAB-init}}(\mbox{rtbab})$ do
2:  ${\mathit{unordered}}=[]^{n}$; ${\mathit{next}}=[0]^{n}$; ${\mathit{seq}}=0$;
3:  ${\mathit{delivered}}=\emptyset$; ${\mathit{busy}}=\texttt{False}$; ${\mathit{inst}}=0$;
4:
5: upon event $\mbox{{RTBAB-broadcast}}(p_{i},m)$ do
6:  trigger $\mbox{{RTBRB-broadcast}}(p_{i},{\mathit{seq}},m)$;
7:  ${\mathit{seq}}$++;
8:
9: upon event $\mbox{{RTBRB-deliver}}(p_{j},{\mathit{num}},m)$ do
10:  if ${\mathit{num}}=next[p_{j}]$ then
11:   ${\mathit{next}}[p_{j}]={\mathit{next}}[p_{j}]+1$;
12:   if $m\notin{\mathit{delivered}}$ then
13:    ${\mathit{unordered}}[p_{j}]={\mathit{unordered}}[p_{j}]\textbf{.append}(\langle p_{j},m\rangle)$;
14:   end if
15:  else $\{\texttt{wait}(\Delta_{\mathtt{W}});\ \text{trigger}\ \mbox{{RTBRB-deliver}}(p_{j},{\mathit{num}},m);\}$
16:  end if
17:
18: upon event $\exists p_{j}:{\mathit{unordered}}[p_{j}]\neq[]\wedge{\mathit{busy}}=\texttt{False}$ do
19:  ${\mathit{busy}}=\texttt{True}$;
20:  trigger $\mbox{{RTBC-init}}({\mathit{inst}})$;
21:  // initiate a new real-time Byzantine consensus instance
22:  if ${\mathit{unordered}}[\texttt{leader}({\mathit{inst}})]\neq[]$ then
23:   $m={\mathit{unordered}}[\texttt{leader}({\mathit{inst}})]\textbf{.head}()$;
24:  else $\{m=\bot;\}$
25:  end if
26:  trigger $\mbox{{RTBC-propose}}(p_{i},{\mathit{inst}},m)$;
27:
28: upon event $\mbox{{RTBC-decide}}(p_{i},{\mathit{inst}}^{\prime},{\mathit{decided}})$ do
29:  if ${\mathit{inst}}^{\prime}={\mathit{inst}}$ then
30:   if ${\mathit{decided}}\notin{\mathit{delivered}}\wedge{\mathit{decided}}\neq\bot$ then
31:    ${\mathit{delivered}}={\mathit{delivered}}\cup\{{\mathit{decided}}\}$;
32:    trigger $\mbox{{RTBAB-deliver}}(\texttt{leader}({\mathit{inst}}),{\mathit{decided}})$;
33:   end if
34:   ${\mathit{unordered}}[\texttt{leader}({\mathit{inst}})]\textbf{.remove}({\mathit{decided}})$;
35:   ${\mathit{inst}}$++; ${\mathit{busy}}=\texttt{False}$;
36:  else $\{\texttt{wait}(\Delta_{\mathtt{W}});\ \text{trigger}\ \mbox{{RTBC-decide}}(p_{i},{\mathit{inst}}^{\prime},{\mathit{decided}});\}$
37:  end if
38:
39: Function $\texttt{leader}({\mathit{instance}})$ $\{\textbf{return}({\mathit{instance}}\bmod n);\}$

## VI Evaluation and Comparison

In this section, we evaluate PISTIS's reliability, latency, and incurred overhead on network bandwidth.

### VI-A PISTIS's latency vs. related systems' latency

We begin with a latency comparison between PISTIS and other related works, based on the worst case incurred delay. We compute worst case delays from the bounds established for each algorithm (a direct experimental evaluation would not be fair, since not all previous works, e.g., [9], consider probabilistic synchronous networks). Later sections provide an experimental comparison with RT-ByzCast [10], the system most related to ours. We elaborate in what follows on the computation of the worst case delays.

First, we refine the definition of $d$ introduced in Sec. III-A. Let $d_{n}$ be the maximum network delay, and $d_{p}$ be the maximum local processing time, which includes the cryptographic operations overhead, such that $d$ can be decomposed as $d_{p}+d_{n}$. Cristian et al. [9] compute the worst case delay as $10*(f+2)*(n-1)*d_{n}$, where $f$ is the maximum number of faulty processes, $n$ the total number of processes, and $d_{n}$ the network delay. In that work, $d_{p}$ is equal to $10$.
Kozhaya et al. [10] compute the worst-case delay as $3*R*d$, where $R$ is the number of consecutive synchronous communication rounds during which the same message gets disseminated (time-triggered re-transmissions). PISTIS's worst case delay is proved to be $3*\mathbb{T}$. To ensure fairness and consistency with the latency experiments presented below, we set $R=8$ and $\mathbb{T}=8d$. However, due to PISTIS's signature management (see, for example, the optimizations described in Sec. VI-B), PISTIS's worst case delay can alternatively be computed as $(3*8*d_{n})+(2*N*d_{p})$. This is in part due to the fact that in PISTIS nodes avoid re-verifying already verified signatures. Our results, shown in Table I, indicate that PISTIS has the best worst case latencies of all algorithms for $d_{n}=1$ ms (as mentioned above, in the first column $d_{p}=10$, while in the last two columns $d_{p}$ is such that $1<d_{p}<10$, and can be derived from the numbers provided in the table).

| | [9] | [10] | PISTIS |
|---|---|---|---|
| $N=25$, $f=8$ | 2,400 ms | 26 ms | 25.6 ms |
| $N=50$, $f=16$ | 8,640 ms | 70 ms | 27 ms |
| $N=100$, $f=33$ | 34,650 ms | 150 ms | 30 ms |

Table I: Worst case latencies

Two main observations can be made: (1) compared to the other protocols, PISTIS has superior performance due to the fact that PISTIS is event-triggered, utilizes fast signature schemes, reduces the number of signatures created and verified, sends fewer messages (sending more messages increases the number of individual message failures), and allows processes to quickly detect their own tardiness; and (2) PISTIS's expected performance in practice (see Fig. 7) is significantly better than the worst case delay bound reported in the table.

### VI-B Implementation Optimizations

We implemented three optimizations to improve the performance of PISTIS (as described in Sec. IV-C). (1) If a process $p_{i}$ knows that some process $p_{j}$ has already received $2f+1$ _echo_ signatures for some message $m$, $p_{i}$ stops sending echoes related to $m$ to $p_{j}$. Every process implements this optimization by maintaining a list, say $\mathcal{L}$, that contains all the processes from which it has heard $2f+1$ signatures for a given message. During a broadcast, a process diffuses a message to $X$ processes chosen at random among $\mathit{\Pi}\setminus\mathcal{L}$. Processes do the same for _deliver_ messages. (2) Processes do not verify signatures that they have already received. (3) Processes skip messages that only contain signatures that were already received.

### VI-C Implementation Configuration and Settings

We implemented PISTIS in C++ on the OMNeT++ 5.4.1 network simulator [11]. In order to accurately measure PISTIS's communication overhead, we configure network links to have a non-limiting 1 Gbps throughput, and a communication latency of either 1 ms or 5 ms. We evaluated PISTIS's performance using two signature schemes with similar security guarantees, both available in the OpenSSL library [37]: RSA-2048 (i.e., 256-byte signatures) and ECDSA with the prime256v1 curve (i.e., 71-byte signatures). We use broadcast messages of sizes 1 B and 1 KB. We run our simulations for systems with $N\in\{25,49,73,300\}$ processes in fully connected networks, and for several values of $X$, which is the number of processes each process forwards a message $m$ to during diffusion. We consider the probability of losing/omitting a message sent at any point in time to be $i/10$, where $0\leq{i}\leq{9}$.
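To illustrate the loss model behind this sweep, the following minimal sketch (Python; the actual experiments run in C++ on OMNeT++, and the uniform delay distribution below is an arbitrary illustrative choice) models one transmission attempt over a probabilistic synchronous link that drops the message with probability $p=i/10$ and otherwise delivers it within the bound $d$, as in Sec. III-A.

```python
import random

def transmit(payload: bytes, p_loss: float, d: float):
    """One transmission attempt over a probabilistic synchronous link (sketch).

    With probability p_loss the message is dropped (omission); otherwise it
    arrives after a delay drawn in (0, d], i.e., within the known bound d."""
    if random.random() < p_loss:
        return None                               # message lost
    return (payload, random.uniform(0.0, d))      # (payload, delay <= d)

# Sweep the loss rates used in the evaluation: p = i/10 for i = 0..9, with d = 1 ms.
for i in range(10):
    attempts = [transmit(b"HB", p_loss=i / 10, d=0.001) for _ in range(10_000)]
    delivered = sum(a is not None for a in attempts)
    print(f"p_loss={i / 10:.1f}: delivered {delivered}/10000")
```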
### VI-D PISTIS's Reliability

To assess PISTIS's reliability, we evaluate the probability that a correct process enters the passive mode. Such a probability is a crucial measure: a process becoming passive may lead the system to shut down and hence to stop delivering messages. Namely, when $N=3f+1$, a single correct process staying passive for long enough can, in the worst case (when $f$ Byzantine processes are not sending messages), leave $2f$ correct processes, which would not be enough to gather quorums of size $2f+1$, leading those $2f$ processes to also become passive. For a given value of $N$ and $p$, we invoke a broadcast at one of the processes and record any non-Byzantine process that crashed itself during the broadcast. We obtain our results by repeating each experiment $10^{5}$ times, and we report the probability that a process crashes itself as $(\text{num. of experiments with self-crashed processes})/{10^{5}}$. We study the impact of several parameters, including $\mathbb{T}$, $N$, $X$, $f$, and $p$, on PISTIS's reliability, and determine which values should be used to enforce an intended system reliability.

Fig. 3 shows that the system's reliability increases with its size and with $\mathbb{T}$'s value, for large enough values of $\mathbb{T}/d$. For example, when $\mathbb{T}=8d$, a system with 25 (resp. 49) processes operates with high reliability (i.e., there is a negligible probability that a process becomes passive) under message loss rates reaching up to $40\%$ (resp. $50\%$). Fig. 4 shows that the actual number of Byzantine processes, which varies between $0$ and $f$ (the maximum number of tolerable Byzantine nodes), influences the system's resiliency. As expected, with fewer processes being Byzantine, higher message loss rates are tolerated without any process shutdown.

Impact of the diffusion fanout. In the results presented so far, processes forward each message to $X=f+1$ other random processes. We now study the effect of $X$ by measuring PISTIS's reliability when it varies. Fig. 5 shows that increasing $X$ helps increase the overall system reliability. As expected, increasing the fanout (the value of $X$) reduces the probability of having a non-Byzantine node become passive.

Figure 3: Probability of a correct process becoming passive when $\mathbb{T}=6d$ or $\mathbb{T}=8d$, and $X=f+1$ (without recovery)

Figure 4: Probability of a correct process becoming passive in a system of $49$ processes (i.e., $f=16$) using $\mathbb{T}=8d$ and $X=17$, when 0, 4, 8, 12 or 16 processes are faulty (without recovery)

Figure 5: Probability of a correct process becoming passive in a system of $49$ processes using $\mathbb{T}=8d$, and where $X$ varies (without recovery)

Recovery. Fig. 6 details the probability that no Byzantine quorum remains active after a broadcast instance when the message loss probability increases. First, one can observe that the recovery mechanisms improve the resiliency of the system. For example, with $N=49$, PISTIS can tolerate a 70% message loss rate without system-wide crashes thanks to the recovery mechanisms, improving over the value of 50% obtained without recovery. Second, we show that one can further improve the system's tolerance to message losses by overprovisioning the system. By using three more nodes, i.e., 52 in total, the system can still tolerate $f=16$ Byzantine nodes and now tolerates up to 80% message losses.
Figure 6: Probability that no Byzantine quorum remains active in systems of 49 or 52 processes, when $\mathbb{T}=8d$, $X=17$, and $f=16$ processes are Byzantine.

### VI-E PISTIS latency and bandwidth consumption

Next, we evaluate PISTIS's incurred bandwidth and latency. For these experiments, we average results over 1,000 runs. We use $\mathbb{T}=8d$, since our reliability results show it allows a very large number of message losses to be tolerated. However, we now run our experiments without any message losses, to measure the worst case bandwidth consumption. We measure both the protocol latency and the bandwidth consumption depending on the value of $X$ that the processes use. We also compare the average latency and bandwidth consumption of PISTIS with those of RT-ByzCast [10]. Note that RT-ByzCast [10] uses ECDSA signatures and all-to-all communication ($X=N$).

Latency. Fig. 7 and 8 detail the latency for a broadcast message to be delivered by all correct processes in systems of size 25, 49, and 73 (i.e., where $f\in\{8,16,24\}$): PISTIS delivers with latencies within $[3\text{ ms},60\text{ ms}]$, depending on the network delay $d$ and the signature scheme used (RSA vs. ECDSA). The latency increases when $N$ increases, and decreases when $X$ increases. We draw the following conclusions: (1) PISTIS is slower than RT-ByzCast for $X<f$. For $X\geq f$, PISTIS is on a par with RT-ByzCast until some $X\leq 3f$ ($X\leq 2f$ for systems with up to 400 nodes, see Table II), after which PISTIS is faster; (2) PISTIS's absolute improvement over RT-ByzCast becomes more significant with increased link delay; (3) when delivering latencies on par with or better than RT-ByzCast, PISTIS can do so with a lower network overhead, as presented next (see Fig. 9 and 10).

Figure 7: Average latency with a 1 ms link latency with $\mathbb{T}=8d$ and without message losses. The dotted lines indicate RT-ByzCast's values [10].

Figure 8: Average latency with a 5 ms link latency. The dotted lines indicate RT-ByzCast's values [10].

Network bandwidth consumption. We now measure PISTIS's bandwidth overhead per broadcast invocation, using RSA and ECDSA signatures. Fig. 9 and 10 present the bandwidth consumption for 1 B payloads with 1 ms and 5 ms link delay, respectively. One can observe that with $X=f+1$ and when using ECDSA signatures, PISTIS's bandwidth consumption is 3.2 times lower than that of RT-ByzCast. We also observe that when using ECDSA signatures there is a fanout threshold between $2f+1$ and $3f+1$: below this fanout, PISTIS's average bandwidth consumption is lower than RT-ByzCast's, while past that threshold, PISTIS's average bandwidth consumption becomes greater than RT-ByzCast's. This is partly due to the fact that PISTIS, being event-based, sometimes consumes more bandwidth. However, we see in those figures that PISTIS provides a useful trade-off between latency and bandwidth consumption. Fig. 11 shows as well that the bandwidth consumption increases reasonably when the message payload is increased to 1 KB. Besides bandwidth, Fig. 12 (Appx. E) shows that PISTIS also sends fewer messages than RT-ByzCast.

Figure 9: Average bandwidth consumption per node and per communication link with a 1 ms link latency without message losses. The dotted lines indicate RT-ByzCast's values [10].

Figure 10: Average bandwidth consumption per node and per communication link with a 5 ms link latency without message losses. The dotted lines indicate RT-ByzCast's values [10].
Figure 11: Average bandwidth consumption per node and per communication link with a 1 ms link latency using either 1 B or 1 KB messages, without message losses

| N | Bdw, $X_{\mathit{min}}$ | Bdw, $X_{\mathit{mid}}$ | Bdw, $X_{\mathit{max}}$ | Bdw [10] | Lat, $X_{\mathit{min}}$ | Lat, $X_{\mathit{mid}}$ | Lat, $X_{\mathit{max}}$ | Lat [10] |
|---|---|---|---|---|---|---|---|---|
| 25 | 0.6 | 1.2 | 1.7 | 1.4 | 21.1 | 11.0 | 11.1 | 20.9 |
| 49 | 1.0 | 2.2 | 3.1 | 2.6 | 22.3 | 12.4 | 12.0 | 22.0 |
| 73 | 1.5 | 3.2 | 4.6 | 3.9 | 23.6 | 13.1 | 13.2 | 23.1 |
| 200 | 3.8 | 8.4 | 12.5 | 10.4 | 31.5 | 20.7 | 19.7 | 29.3 |
| 300 | 5.7 | 12.5 | 18.6 | 15.6 | 41.2 | 31.2 | 27.4 | 38.0 |
| 400 | 7.6 | 16.7 | 25.0 | 20.9 | 59.7 | 43.0 | 32.0 | 41.2 |
| 500 | 9.4 | 20.8 | 31.1 | 26.0 | 85.1 | 63.0 | 40.0 | 51.6 |
| 1000 | 18.7 | 41.4 | 62.2 | 52 | 296.3 | 213.1 | 98.5 | 116.2 |

Table II: PISTIS bandwidth consumption (Mbps) and broadcast duration (ms) with larger systems ($f=\lfloor N/3\rfloor$), where $X_{\mathit{min}}=f+1$, $X_{\mathit{mid}}=2f+1$, and $X_{\mathit{max}}=N$.

Scalability with the system size. We also evaluated how PISTIS's latency and bandwidth consumption evolve with larger system sizes, namely up to 1000 nodes, for $X\geq f+1$ and a 5 ms link latency. Table II summarizes the results obtained for $X=f+1$, $X=2f+1$, and $X=N$. Our results show that PISTIS outperforms RT-ByzCast and provides latencies suitable for (1) fast automatic interactions ($\leq{20}$ ms) for systems with up to 200 nodes, (2) power systems and substation automation applications ($\leq{100}$ ms) for systems with up to 1000 nodes, and (3) slow-speed auto-control functions ($\leq{500}$ ms), continuous control applications ($\leq{1}$ s), and operator commands of SCADA applications ($\leq{2}$ s) for systems with 1000 nodes or more.

## VII Conclusion

In this paper, we studied how to build large-scale distributed protocols that tolerate network faults and attacks while providing real-time communication. We introduced a suite of proven-correct algorithms, starting from a baseline real-time Byzantine reliable broadcast algorithm, called PISTIS, all the way up to real-time Byzantine atomic broadcast and consensus algorithms. PISTIS is empirically shown to be robust, scalable, and capable of meeting the timing deadlines of real CPS applications. PISTIS withstands message loss (and delay) rates up to 50$\%$ in systems with 49 nodes and provides bounded delivery latencies in the order of a few milliseconds. PISTIS improves over the state-of-the-art in scalability and latency through its event-triggered nature, gossip-based communications, and fast signature verifications. Our work simplifies the construction of powerful distributed and decentralized monitoring and control applications of various CPS domains, including state-machine replication for fault and intrusion tolerance.

## References

* [1] J. Moyne and D. Tilbury "The Emergence of Industrial Control Networks for Manufacturing Control, Diagnostics, and Safety Data" In _Proc. of the IEEE_ 95.1, 2007, pp. 29–47
* [2] Romain Jacob et al. "End-to-end Real-time Guarantees in Wireless Cyber-physical Systems" In _RTSS_, 2016
* [3] L. Schenato et al. "Foundations of Control and Estimation Over Lossy Networks" In _Proceedings of the IEEE_ 95.1, 2007, pp. 163–187
* [4] Dacfey Dzung, Rachid Guerraoui, David Kozhaya and Yvonne-Anne Pignolet "To Transmit Now Or Not To Transmit Now" In _SRDS_, 2015
* [5] DLC+VIT4IP "D1.1 Scenarios and Requirements Specification", 2010 URL: http://www.dlc-vit4ip.org/wb/media/Downloads/D1.1-V0.5-20100910-team.pdf
* [6] M. Patel and A. Aggarwal "Security attacks in wireless sensor networks: A survey" In _ISSP_, 2013
* [7] F. Januário, C. Carvalho, A. Cardoso and P. Gil "Security challenges in SCADA systems over Wireless Sensor and Actuator Networks" In _ICUMT_, 2016
* [8] Pavel Polityuk, Oleg Vukmanovic and Stephen Jewkes "Ukraine's power outage was a cyber attack: Ukrenergo", 2017 URL: https://www.reuters.com/article/us-ukraine-cyber-attack-energy/ukraines-power-outage-was-a-cyber-attack-ukrenergo-idUSKBN1521BA
* [9] Flaviu Cristian, Houtan Aghili, H. Strong and Danny Dolev "Atomic Broadcast: From Simple Message Diffusion to Byzantine Agreement" In _Inf. Comput._ 118.1, 1995, pp. 158–179
* [10] D. Kozhaya, J. Decouchant and P. Esteves-Verissimo "RT-ByzCast: Byzantine-Resilient Real-Time Reliable Broadcast" In _IEEE Trans. Comput._ 68.3, 2019, pp. 440–454
* [11] "OMNeT++", Last accessed: Feb 24, 2020 URL: https://omnetpp.org
* [12] Danny Dolev "Unanimity in an Unknown and Unreliable Environment" In _FOCS_ IEEE Computer Society, 1981, pp. 159–168 DOI: 10.1109/SFCS.1981.53
* [13] Gabriel Bracha "Asynchronous Byzantine Agreement Protocols" In _Inf. Comput._ 75.2, 1987, pp. 130–143
* [14] P. Verissimo, L. Rodrigues and M. Baptista "AMp: A Highly Parallel Atomic Multicast Protocol" In _ACM SIGCOMM_, 1989
* [15] Rachid Guerraoui et al. "Scalable Byzantine Reliable Broadcast" DISC, 2019 DOI: 10.4230/LIPIcs.DISC.2019.22
* [16] Amy Babay et al. "Deploying Intrusion-Tolerant SCADA for the Power Grid" IEEE/IFIP DSN, 2019, pp. 328–335 DOI: 10.1109/DSN.2019.00043
* [17] Amy Babay et al. "Network-Attack-Resilient Intrusion-Tolerant SCADA for the Power Grid" IEEE/IFIP DSN, 2018 DOI: 10.1109/DSN.2018.00036
* [18] Yair Amir, Brian A. Coan, Jonathan Kirsch and John Lane "Byzantine replication under attack" IEEE/IFIP DSN, 2008 DOI: 10.1109/DSN.2008.4630088
* [19] Yair Amir, Brian A. Coan, Jonathan Kirsch and John Lane "Prime: Byzantine Replication under Attack" In _IEEE Trans. Dependable Sec. Comput._ 8.4, 2011, pp. 564–577 DOI: 10.1109/TDSC.2010.70
* [20] Fred B. Schneider "Implementing Fault-Tolerant Services Using the State Machine Approach: A Tutorial" In _ACM Comput. Surv._ 22.4, 1990, pp. 299–319 DOI: 10.1145/98163.98167
* [21] Miguel Castro and Barbara Liskov "Practical Byzantine Fault Tolerance" OSDI, 1999 URL: https://dl.acm.org/citation.cfm?id=296824
* [22] Dacfey Dzung, Rachid Guerraoui, David Kozhaya and Yvonne-Anne Pignolet "Never Say Never - Probabilistic and Temporal Failure Detectors" In _IPDPS_, 2016
* [23] Danny Dolev, Cynthia Dwork and Larry Stockmeyer "On the Minimal Synchronism Needed for Distributed Consensus" In _JACM_ 34.1, 1987
* [24] Danny Dolev "The Byzantine Generals Strike Again" Stanford University, CA, USA: Stanford University, 1981
* [25] Michael J. Fischer, Nancy A. Lynch and Michael Merritt "Easy Impossibility Proofs for Distributed Consensus Problems" In _Distributed Computing_ 1.1, 1986, pp. 26–39 DOI: 10.1007/BF01843568
* [26] S. Viswanathan, R. Tan and D. Yau "Exploiting Power Grid for Accurate and Secure Clock Synchronization in Industrial IoT" In _RTSS_, 2016
* [27] Paulo Verissimo and António Casimiro "The Timely Computing Base Model and Architecture" In _IEEE Trans. Comput._ 51.8, 2002, pp. 916–930
Comput._ 51.8, 2002, pp. 916–930 * [28] Jee Hea An, Yevgeniy Dodis and Tal Rabin “On the Security of Joint Signature and Encryption”, EUROCRYPT, 2002 * [29] Christian Cachin, Rachid Guerraoui and Luís Rodrigues “Introduction to Reliable and Secure Distributed Programming” Springer-Verlag, 2011 * [30] Marcos Kawazoe Aguilera, Carole Delporte-Gallet, Hugues Fauconnier and Sam Toueg “On implementing omega in systems with weak reliability and synchrony assumptions” In _Distributed Computing_ 21.4, 2008, pp. 285–314 DOI: 10.1007/s00446-008-0068-y * [31] R. Guerraoui, D. Kozhaya and Y.. Pignolet “Right on Time Distributed Shared Memory” In _RTSS_ , 2016 * [32] Dahlia Malkhi and Michael K. Reiter “Byzantine Quorum Systems” ACM STOC, 1997 DOI: 10.1145/258533.258650 * [33] M. Pease, R. Shostak and L. Lamport “Reaching Agreement in the Presence of Faults” In _JACM_ 27.2, 1980, pp. 228–234 * [34] D. Dolev and H.. Strong “Authenticated Algorithms for Byzantine Agreement” In _SIAM J. Comput._ 12.4, 1983, pp. 656–666 * [35] Danny Dolev and Rüdiger Reischuk “Bounds on Information Exchange for Byzantine Agreement” In _JACM_ 32.1, 1985, pp. 191–204 * [36] Leslie Lamport, Robert Shostak and Marshall Pease “The Byzantine Generals Problem” In _ACM Transactions on Programming Languages and Systems_ 4/3, 1982, pp. 382–401 * [37] “OpenSSL”, Last accessed: Feb 25, 2020 URL: https://www.openssl.org/ ## References * [38] Danny Dolev, Cynthia Dwork and Larry Stockmeyer “On the Minimal Synchronism Needed for Distributed Consensus” In _JACM_ 34.1, 1987 * [39] Dacfey Dzung, Rachid Guerraoui, David Kozhaya and Yvonne-Anne Pignolet “Never Say Never - Probabilistic and Temporal Failure Detectors” In _IPDPS_ , 2016 * [40] Tushar Deepak Chandra and Sam Toueg “Unreliable Failure Detectors for Reliable Distributed Systems” In _JACM_ 43.2, 1996, pp. 225–267 * [41] Paulo Verissimo and Carlos Almeida “Quasi-Synchronism: a step away from the traditional fault-tolerant real-time system models” In _Bulletin of the Technical Committee on Operating Systems and Application Environments (TCOS)_ 7.4, 1995, pp. 35–39 * [42] M. Pease, R. Shostak and L. Lamport “Reaching Agreement in the Presence of Faults” In _JACM_ 27.2, 1980, pp. 228–234 | David Kozhaya is a Senior Scientist at ABB Research, Switzerland. He received his PhD degree in Computer Science in 2016, from EPFL, Switzerland, where he was granted a fellowship from the doctoral school. His primary research interests include reliable distributed computing, real-time distributed systems, and fault- and intrusion-tolerant distributed algorithms. ---|--- | Jérémie Decouchant is an Assistant Professor at TU Delft, the Netherlands. He received his Ph.D in Computer Science in 2015 from the Grenoble-Alpes University, France. His research interests include resilient distributed computing, privacy-preserving systems, and their application to Blockchain, genomics, and machine learning. ---|--- | Vincent Rahli is a Senior Lecturer at the University of Birmingham. He received his Ph.D in Computer Science from Heriot-Watt University, UK. His research focuses on designing, formalizing, and using type theories and on the verification of distributed systems using proof assistants. ---|--- | Paulo Esteves-Veríssimo is a professor at the KAUST University (KSA), and Director of the Resilient Computing and Cybersecurity Center (RC3 - https://rc3.kaust.edu.sa/). He was a member of the Sci&Tech. Comm. of ECSO EU Cyber Security Organisation, Chair of IFIP WG 10.4 on Dependable Comp. 
and F/T, and vice-Chair of the Steer. Comm. of the DSN conference. He is Fellow of IEEE and of ACM, and associate editor of the IEEE TETC journal, author of over 200 peer-refereed publications and co-author of 5 books. He is currently interested in resilient computing, and its potential to improve classic cybersecurity techniques, in: SDN-based infrastructures; autonomous vehicles from earth to space; distributed control systems; digital health and genomics; or blockchain and cryptocurrencies. ---|--- ## Appendix A Differences Between Probabilistic Synchrony and Other Standard Models #### Comparison with fully asynchronous models Our model is more informative than traditional fully asynchronous models. More precisely, asynchronous models do not make any assumptions regarding message transmission and processing delays, while we assume that messages are delivered within a maximum transmission delay $d$ with high probability. #### Comparison with synchronous models Our communication model is a probabilistic synchronous one. We recall that in every transmission attempt a link may (with some probability) violate reliability and timeliness by dropping the message or delivering it within a delay $>d$. In case of message loss (omission) a sender that needs to re- transmit that message again faces yet another risk of transmission failure. Due to omissions (losses in consecutive transmission attempts) and the required follow-up re-transmissions, the time it takes to send a message reliably from one process to another (measured from the time of the first transmission attempt) may be unbounded. So, despite links being reliable and timely with high probability, our communication system is no longer synchronous. #### Comparison with partially synchronous models In comparison with partial synchrony [38], which assumes that communication becomes forever synchronous after some unknown point in time, our probabilistic synchronous model guarantees only finite synchronous periods (with variable durations) that may occur randomly during the lifetime of the system. In fact such probabilistic synchronous communication has been shown to be weaker, in some sense [39], than partial synchrony. For example, while the celebrated failure detectors of [40] can be implemented in partially synchronous systems they are impossible to implement in the systems with probabilistic synchronous communication [39]. #### The need for probabilistic synchrony models Probabilistic synchronous models (such as the one presented here or [41, 39]) are more “realistic” than synchronous models in the sense that timing assumptions cannot always be ensured in distributed systems because, for example, of the difficulty of guaranteeing reliable communication between the nodes of a system. Making the probability of timing failures (e.g., that messages might be delivered after $d$) transparent to the model and protocols makes them more robust. For example, it allows designing protocols where messages might not always arrive within a specified maximum transmission delay. Systems that require processes to operate in a timely fashion, such as mission critical systems, can therefore dynamically adapt to such untimely situations to ensure that timing guarantees are fulfilled. #### Comparison with quasi-synchronous models Quasi-synchronous models [41] address the timing issues mentioned above. 
In [41] synchronism is characterized by the following properties: P1—processing speeds are bounded and known; P2—message delivery delays are bounded and known; P3—local clock rate drifts are bounded and known; P4—load patterns are bounded and known; and P5—differences among local clocks are bounded and known. A system is quasi-synchronous if it satisfies properties P1–P5, and at least one of those does not hold with some known non-zero probability. As in a quasi-synchronous model, in our probabilistic model P2 only holds with high probability. Note, however, that in our probabilistic model we do not assume that differences among local clocks are bounded and known. ## Appendix B Correctness of PISTIS (Algorithm 2)—Proof of Theorem 1 ###### Lemma 1 (Validity). If a correct process $p_{i}$ broadcasts $m$ then $p_{i}$ eventually delivers $m$. ###### Proof outline. Because $p_{i}$ is correct, it will hear echoes of $m$ from $2f+1$ processes (including $p_{i}$) by $t+\mathbb{T}$, where $t$ is the time $p_{i}$ broadcasted $m$. This is true as otherwise, i.e., if less than $2f+1$ echoes for $m$ are heard, $p_{i}$ would kill itself (hence is no longer correct). Indeed, $p_{i}$ triggered a timer (see line 49 of Algorithm 2) when it started broadcasting $m$ (see line 6). Because $p_{i}$ received $2f+1$ echoes for $m$, it must have delivered $m$ too (see lines 14, 19, and 26 of Algorithm 2). ∎ ###### Lemma 2 (No duplication). No correct process delivers message $m$ more than once. ###### Proof outline. According to line 60 of Algorithm 2 a process only delivers a message if the corresponding $\mathcal{R}_{\mathit{deliver}}$ does not exist, and creates one right after delivering, thereby preventing from delivering a message twice. ∎ ###### Lemma 3 (Integrity). If some correct process $p_{j}$ delivers a message $m$ with correct sender $p_{i}$, then $m$ was previously broadcasted by $p_{i}$. ###### Proof outline. Because $p_{j}$ delivered $m$, it must have received $2f+1$ signed echoes for $m$ (see lines 14, 19, 26, and 33 of Algorithm 2). As mentioned in Remark 1, an echo message is not handled unless it is signed by the claimed sender. More precisely, upon receipt of a message of the form $\mbox{{Echo}}\left(\langle p_{i},{\mathit{sq}},v\rangle,\Sigma\right)$ or $\mbox{{Deliver}}\left(\langle p_{i},{\mathit{sq}},v,\Sigma\rangle,\Sigma^{\prime}\right)$, $p_{j}$ only handle the message if $\Sigma$ contains a signature from $p_{i}$. Now, because the sender $p_{i}$ is correct, it must have indeed sent an echo message for $\langle p_{i},v\rangle$. Finally, we prove by induction on the chain of local events happening at $p_{i}$ (a correct process) that led to this message being sent, that $p_{i}$ must have broadcasted it. ∎ ###### Lemma 4 (Intersecting delivery). Let $p$ be a correct process that starts delivering some message $m$ at some time $t_{d}$. Then, there exists a collection $B$ of $2f+1$ processes such that all correct processes in $B$ only deliver $m$ for a full $\mathbb{T}$ duration starting some time prior to $t_{d}+\mathbb{T}$. ###### Proof outline. Let us first point out that because $p$ starts delivering at $t_{d}$, and because it is correct, $2f+1$ processes must have received this deliver message by $t_{d}+\mathbb{T}$ (otherwise $p$ would kill itself because it wouldn’t be connected—the proof-of-connectivity is executed in piggyback mode). Let $A$ be this collection of $2f+1$ processes (note that $p\in{A}$). 
For each correct process $q\in{A}$, $q$ must have started delivering some time prior to $t_{d}+\mathbb{T}$. Let us now prove this lemma by induction on $t_{d}$. Either a correct process within $A$ started delivering prior to $t_{d}$ or not. If one did, in which case $t_{d}>0$, then we conclude by our induction hypothesis. Otherwise all correct nodes in $A$ (at least $f+1$) are only delivering starting from $t_{d}$. Because they start delivering prior to $t_{d}+\mathbb{T}$, and because they deliver for $2\mathbb{T}$, it must be that all correct processes within that collection only deliver $m$ for a full $\mathbb{T}$ duration starting at most by $t_{d}+\mathbb{T}$ (until at most $t_{d}+2\mathbb{T}$). ∎ ###### Lemma 5 (Timely agreement). If a correct process $p_{i}$ broadcasts $m$ at real time $t$, then all correct processes deliver $m$ by $t+3\mathbb{T}$. ###### Proof outline. Since $p_{i}$ is correct during this broadcast, then it must have received $2f+1$ echoes for $m$ and must then have started delivering $m$ at $t_{d}\in[t,t+\mathbb{T}]$. By Lemma 4, there exists a collection $B$ of $2f+1$ processes such that all correct processes in $B$ only deliver $m$ for a full $\mathbb{T}$ duration starting some time prior to $t_{d}+\mathbb{T}$. Now, every other correct process $p_{j}$ must be connected to $2f+1$ processes in any proof-of-connectivity period $\mathit{pc}=[t_{0},t_{0}+\mathbb{T}]$—let $C(\mathit{pc})$ denote those $2f+1$ processes. Therefore, because there are $3f+1$ processes, there must be a correct process, say $r$, and a proof-of- connectivity period $\mathit{pc}=[t_{j},t_{j}+\mathbb{T}]$ at $p_{j}$ such that: (1) $r$ is in the intersection of $B$ and $C(\mathit{pc})$ (there must be at least one correct process in that intersection because it is of size $f+1$); and such that (2) $p_{j}$ received $m$ during $\mathit{pc}$ from $r$, which sent it at most by $t_{d}+2\mathbb{T}$. Therefore, $p_{j}$ must have delivered by $t+3\mathbb{T}$. ∎ ###### Lemma 6 (Agreement). If some correct process $p_{i}$ delivers $m$, then all correct processes eventually deliver $m$. ###### Proof outline. This is a straightforward consequence of Lemma 5. ∎ ###### Lemma 7 (Timeliness). If a correct process $p_{i}$ broadcasts $m$ at real time $t$, then no correct process delivers $m$ after $t+3\mathbb{T}$. ###### Proof outline. This is a straightforward consequence of Lemma 5. ∎ ## Appendix C Correctness of PISTIC-CS—Proof of Theorem 2 Recall that since Algorithm $\mathcal{A}$ implements interactive consistency, then when $\mathcal{A}$ eventually terminates all correct processes will have the same vector of proposals where the values relative to correct processes are indeed what these correct processes have proposed. In fact interactive consistency [42] guarantees the two following properties: * IC.1 The non-faulty processors compute exactly the same vector. * IC.2 The element of this vector corresponding to a given non-faulty processor is the private value of that processor. We now prove Thm. 2, i.e., that assuming algorithm $\mathcal{A}$ implements interactive consistency in a known bounded number of communication rounds (this is used to prove Lemma 11), as well as Assumptions 1 and 2, then $\mathcal{A}$ implements RTBC (see Sec. V-A) in our system model (see Sec. III). ###### Lemma 8 (RTBC-Termination). Every correct process eventually decides. ###### Proof outline. By Assumption 1, a correct process $p_{i}$ accesses the network only through the RTBRB primitive. 
Therefore, because $p_{i}$ is correct and therefore does not enter passive mode while executing RTBRB, it must terminate. By IC.1 $p_{i}$ must compute a vector. Finally, $p_{i}$ will apply the deterministic function described in Assumption 2 to that vector to obtain a value $v$, which is the value $p_{i}$ decides upon. ∎ ###### Lemma 9 (RTBC-Agreement). No two correct processes decide differently. ###### Proof outline. Let $p_{i}$ be a correct process that decides upon a value $v_{i}$, and $p_{j}$ be a correct process that decides upon a value $v_{j}$. Again, by Assumption 1, $p_{i}$ and $p_{j}$ must not enter passive mode while using the RTBRB primitive. By IC.1, $p_{i}$ and $p_{j}$ must compute the same vector $V$. Both $p_{i}$ and $p_{j}$ apply the deterministic function described in Assumption 2 to this vector $V$. Therefore, $v_{i}$ must be equal to $v_{j}$. ∎ ###### Lemma 10 (RTBC-Validity). If all correct processes propose the same value $v$, then any correct process that decides, decides $v$. Otherwise, a correct process may only decide a value that was proposed by some correct process or the special value $\bot$. ###### Proof outline. First, note that by Assumption 1, correct processes must not enter passive mode while using the RTBRB primitive. Now, if all correct processes propose the same value $v$, then by IC.2 the obtained interactive consistency vector computed by a correct process should contain $v$ a number of times equal to the number of correct processes, i.e., at least $2f+1$ times. Finally, since all correct processes apply the deterministic function described in Assumption 2 to their vectors, they must all decide on $v$. Let us now assume that not all correct processes propose the same value $v$. If a correct process $p$ decides upon a value $v^{\prime}$ then by Assumption 2, it must be that either (1) its interactive consistency vector contains at least $2f+1$ times this value $v^{\prime}$; or (2) that $v^{\prime}$ is the special value $\bot$. In case $v^{\prime}$ appears $2f+1$ times in $p$’s interactive consistency vector, then by IC.2, it must be that $v^{\prime}$ was proposed by a correct process. This concludes the proof. ∎ ###### Lemma 11 (RTBC-Timeliness). If a correct process $p_{i}$ proposes a value to consensus at time $t$, then no correct process decides after $t+\Delta_{\mathtt{C}}$. ###### Proof outline. The way we implement consensus is first by reaching interactive consistency and applying a deterministic function after. The deterministic function is a computational load that requires scanning the consistency vector and hence has a known bounded duration since we assume that correct processes are synchronous. Therefore, it is sufficient to prove that the interactive consistency protocol finishes in a bounded duration (in the sense that correct processes compute their interactive consistency vectors in a bounded amount of time). Recall that we assume that Algorithm $\mathcal{A}$ requires a bounded number of communication rounds to terminate, say $k$. By Assumption 1 processes send and receive messages over the network only via the RTBRB primitive. Hence any communication round has a bounded duration, that being a multiple, say $m$, of $\Delta_{\mathtt{R}}$, the duration needed by the RTBRB primitive to complete (which is at most $3\mathbb{T}$). 
Therefore, because by Assumption 1, correct processes must not enter passive mode while using the RTBRB primitive, it must be that correct processes will decide before $t+(k\times{m}\times{3\mathbb{T}})$, which concludes our proof. ∎ ## Appendix D Correctness of PISTIC-AT—Proof of Theorem 3 ### D-A Reduction to RTBAB To prove Thm. 3, we only have to prove that Algorithm $\mathcal{A}$ satisfies the RTBAB-Timeliness property. ###### Proof outline. Let us assume that the correct process $p_{i}$ RTBAB-broadcasts $m$ at time $t$. We have to prove that no correct process RTBAB-delivers $m$ after real time $t+\Delta_{\mathtt{A}}$, for some $\Delta_{\mathtt{A}}$. We prove this by proving the stronger result that there exists a $\Delta_{\mathtt{A}}$ such that all correct processes RTBAB-deliver $m$ by $t+\Delta_{\mathtt{A}}$. By Property 1, $p_{i}$ RTBRB-broadcasts $m$ with some sequence number ${\mathit{seq}}_{t}$ by time $t+\Delta_{\mathtt{B}}$. By RTBRB-Validity, RTBRB-Timeliness and RTBRB-Agreement, all correct processes RTBRB-deliver $m$ by some time $t+\Delta_{\mathtt{B}}+\Delta_{\mathtt{R}}$. By Property 2, all correct processes will RTBC-propose or RTBC-decide $m$ by $t+\Delta_{\mathtt{B}}+\Delta_{\mathtt{R}}+\Delta_{\mathtt{P}}$. If one correct process RTBC-decides $m$ by $t+\Delta_{\mathtt{B}}+\Delta_{\mathtt{R}}+\Delta_{\mathtt{P}}$, then by the RTBC properties, all correct processes will RTBC-decide by $t+\Delta_{\mathtt{B}}+\Delta_{\mathtt{R}}+\Delta_{\mathtt{P}}+\Delta_{\mathtt{C}}$, and by Property 4, they will RTBAB-deliver by $t+\Delta_{\mathtt{B}}+\Delta_{\mathtt{R}}+\Delta_{\mathtt{P}}+\Delta_{\mathtt{C}}+\Delta_{\mathtt{D}}$, which concludes our proof. Therefore, let us now consider the case where they all RTBC-propose $m$ by $t+\Delta_{\mathtt{B}}+\Delta_{\mathtt{R}}+\Delta_{\mathtt{P}}$. However, it might be that they RTBC-propose $m$ in different RTBC instances. We want to prove that there will be an RTBC instance ${\mathit{inst}}_{m}$ where “enough” correct nodes RTBC-propose $m$ at that instance, by time $t+\Delta_{m}$ (for some fixed $\Delta_{m}$), so that it results in ${\mathit{inst}}_{m}$ deciding $m$. Then, by RTBC-Termination, RTBC-Agreement, RTBC-Timeliness, and Property 4, we can conclude that all correct processes RTBAB-deliver $m$ by time $t+\Delta_{m}+\Delta_{\mathtt{C}}+\Delta_{\mathtt{D}}$. Let us now prove that such an instance ${\mathit{inst}}_{m}$ indeed exists. Because all correct processes RTBC-propose $m$ by $t+\Delta_{\mathtt{B}}+\Delta_{\mathtt{R}}+\Delta_{\mathtt{P}}$, there must be a greatest instance ${\mathit{inst}}_{g}$ such that a correct process $p_{g}$ RTBC-proposes $m$ at some time $t_{k}\leq{t+\Delta_{\mathtt{B}}+\Delta_{\mathtt{R}}+\Delta_{\mathtt{P}}}$. Now, either (1) $m$ was RTBC-decided at a prior instance ${\mathit{inst}}_{p}$ (by all correct processes, by the RTBC properties), or (2) not. In case it was (i.e., case (1)), all correct processes must have RTBC-decided $m$ by time $t+\Delta_{\mathtt{B}}+\Delta_{\mathtt{R}}+\Delta_{\mathtt{P}}+\Delta_{\mathtt{C}}$ by the RTBC properties and because ${\mathit{inst}}_{p}$ must have been dealt with by $p_{g}$ before ${\mathit{inst}}_{g}$ by Property 6. Now, by Property 4, it must be that all correct processes must have RTBAB-delivered $m$ by time $t+\Delta_{\mathtt{B}}+\Delta_{\mathtt{R}}+\Delta_{\mathtt{P}}+\Delta_{\mathtt{C}}$. Let us now focus on case (2), i.e., $m$ was not RTBC-decided at a prior instance. 
By Property 3, correct processes must be RTBC-proposing either $m$ or $\bot$ at instance ${\mathit{inst}}_{g}$. Let us prove that they cannot propose $\bot$, in which case we conclude using RTBC-Validity and Property 4, and $\Delta_{\mathtt{A}}$ is again $t+\Delta_{\mathtt{B}}+\Delta_{\mathtt{R}}+\Delta_{\mathtt{P}}+\Delta_{\mathtt{C}}$. We prove that correct processes cannot propose $\bot$ at instance ${\mathit{inst}}_{g}$ by contradiction. Let us assume that some correct process $p_{j}$ votes for $\bot$ at instance ${\mathit{inst}}_{g}$ (therefore, $p_{j}$ cannot be $p_{g}$). By definition of ${\mathit{inst}}_{g}$, it must be that $p_{j}$ votes for $m$ at a prior instance ${\mathit{inst}}_{p}$. Because it is an instance prior to ${\mathit{inst}}_{p}$, as mentioned above, $m$ was not RTBC-decided at that instance. Therefore, by Property 3, and RTBC- Validity, it must be that this instance ended up in $\bot$ being decided. Finally, we obtain a contradiction from the fact that $p_{j}$ must also RTBC- propose $m$ at instance ${\mathit{inst}}_{g}$, which we prove by induction on the list of instances between ${\mathit{inst}}_{p}$ and ${\mathit{inst}}_{g}$ and using Property 5. ∎ ### D-B PISTIS-AT: a Class of Algorithms Implementing RTBAB Algorithm 3 provides an example of a PISTIS-AT algorithm, which implements the RTBAB primitive presented in Sec. V-B. We assume here that a process broadcasts a message by invoking $\mbox{{RTBAB-broadcast}}()$, and delivers a message invoking $\mbox{{RTBAB-deliver}}()$. In addition, $\mbox{{RTBAB- init}}(\mbox{rtbab})$ instantiates a new instance of RTBAB with id rtbab. To guarantee total order, each process maintains a monotonically increasing sequence number ${\mathit{seq}}$, which is incremented every time $\mbox{{RTBAB-broadcast}}()$ is called. ###### Lemma 12. Given an RTBAB instance ${\mathit{inst}}$, such that $p_{i}$ is the leader of ${\mathit{inst}}$, all correct processes will either RTBC-propose a value received from $p_{i}$ or $\bot$ (in case they have not received any new message from $p_{i}$ since the last one they processed). Moreover, given two correct processes that RTBC-propose such values at instance ${\mathit{inst}}$, it must be that either those values are equal (to the $k^{th}$ new value broadcasted by $p_{i}$, for some $k$) or one of them is $\bot$ (in case the corresponding process has not received $p_{i}$’s $k^{th}$ broadcasted new value yet, and has already processed all previous broadcasted value from $p_{i}$). ###### Proof outline. This can be proved by induction on causal time. The first time those correct processes RTBC-propose a value at an instance such that $p_{i}$ is the leader, it must be that either this value is the first value RTBAB-broadcasted by $p_{i}$, or $\bot$. The inductive case goes as follows: we assume that our property is true at a given instance ${\mathit{inst}}$ such that $p_{i}$ is the leader, and where correct processes RTBC-propose either $v$ (the $(k-1)^{th}$ new value proposed by $p_{i}$) or $\bot$, and we prove that the property is still true at the next such instance ${\mathit{inst}}^{\prime}$. By RTBC-Validity, it must be that correct processes either RTBC-decide $v$ or $\bot$, and by RTBC- Agreement, they must not decide differently. Therefore, if they decide $v$ at instance ${\mathit{inst}}$, then $v$ will be added to the ${\mathit{delivered}}$ set, and therefore never added to ${\mathit{unordered}}$ again; and in addition, it will be removed from ${\mathit{unordered}}$. 
At the next instance ${\mathit{inst}}^{\prime}$, these processes will vote either for the $k^{th}$ new value proposed by $p_{i}$ or for $\bot$ if they have not received that $k^{th}$ new value. In particular, if one of those correct processes RTBC-proposed $\bot$ because it had not received $v$ yet, then at instance ${\mathit{inst}}^{\prime}$ it will either propose the $k^{th}$ new value proposed by $p_{i}$ (since $v$ is skipped because already delivered), or $\bot$ in case it has not received this $k^{th}$ new value yet. Otherwise if they decide $\bot$, then the correct processes that voted for $v$ will still vote for $v$ at ${\mathit{inst}}^{\prime}$, and those that voted for $\bot$ will either keep on voting for $\bot$ if they still have not received $v$, or finally receive $v$ and start voting for $v$. Note that by RTBAB-Agreement, all correct processes must eventually receive $v$. ∎ In order to obtain time bounds that do not depend on Algorithm 3’s variable, we make the following assumption: ###### Assumption 3. Correct processes wait for $\Delta_{\mathtt{R}}+\Delta_{\mathtt{W}}+({n}\times(\Delta_{\mathtt{C}}+\Delta_{\mathtt{W}}))$ between two different broadcasts. As we will see below, this is the time it takes to guarantee that all correct processes RTBAB-deliver an RTBRB-broadcasted value. ###### Lemma 13. Algorithm 3 satisfies Property 1. ###### Proof outline. Property 1 holds because Algorithm 3 RTBTB-broadcasts messages on each call to RTBAB-broadcast (see lines 5 and 6 of Algorithm 3). ∎ ###### Lemma 14. Algorithm 3 satisfies Property 4. ###### Proof outline. If a value $v$ (different from $\bot)$ is RTBC-decided at time $t$, and the RTBC instance is the current instance, and $v$ is not in ${\mathit{delivered}}$, then it is RTBAB-delivered. If $v$ is in ${\mathit{delivered}}$, then it must be that it was added to that set in the past, in which case it was delivered at that time. Now, if the RTBC instance is not the current instance, Algorithm 3 retries handling the messages after a while. The number of times a process will retry handling deliver messages is bounded because instances are handled in a monotonic order and are bounded in time according to RTBC-Timeliness. ∎ ###### Lemma 15. Under Assumption 3, Algorithm 3 satisfies Property 2. ###### Proof outline. First of all, let us point out that RTBRB-deliver messages are treated in monotonic order. Let us now consider three cases. In the following, we first provide variable-dependent bounds, and we then explain how to get independent bounds using Assumption 3. Case (1): Whenever a process $p_{i}$ receives a RTBRB-deliver message $m$ at time $t$ with sequence number ${\mathit{num}}$, broadcasted by $p_{j}$, which is the next one to receive (i.e., ${\mathit{num}}={\mathit{next}}[p_{j}]$), and if $m$ is not already in ${\mathit{delivered}}$, then $p_{i}$ will append $m$ to its ${\mathit{unordered}}[p_{j}]$ list. We now have to prove that $m$ will then be RTBC-proposed or RTBC-decided by some time $t+\Delta_{\mathtt{P}}$, for some bounded $\Delta_{\mathtt{P}}$. Because $m$ is now in $p_{i}$’s ${\mathit{unordered}}[p_{j}]$ list, the event line 18 will be triggered at least until $m$ is removed from the list. Because Algorithm 3 uses the rotating coordinator paradigm, then a value broadcasted by some process $p_{k}$ is voted upon using an RTBC instance only every $n$ (the total number of processes) instances (i.e., whenever $p_{k}$ is the leader). 
However, there might be other values before $m$ in the ${\mathit{unordered}}[p_{j}]$ lists maintained by the processes. The processes have to RTBC-decide these previous values to start RTBC-proposing $m$ if $m$ has not been RTBAB-delivered in the meantime (otherwise we can conclude because RTBAB-delivered messages are RTBC-decided upon). Because of the rotating coordinator scheme, and by the RTBC properties and Lemma 12, we get the guarantee that $m$ will be RTBC-proposed by $t+(n\times(\Delta_{\mathtt{C}}+\Delta_{\mathtt{W}})\times({\mathit{num}}+1))$, where $\Delta_{\mathtt{C}}+\Delta_{\mathtt{W}}$ is the time it takes to complete an RTBC instance, and $n\times(\Delta_{\mathtt{C}}+\Delta_{\mathtt{W}})$ is the time it takes to rotate through the leaders ($\Delta_{\mathtt{W}}$ is the time processes wait for before re-trying to handle a message—see line 15 and line 36). Now, thanks to Assumption 3, we can derive that all previous values stored in ${\mathit{unordered}}[p_{j}]$ have already been decided upon when correct processes deliver $m$. Therefore, we get that $m$ will be RTBC-proposed by $t+(n\times(\Delta_{\mathtt{C}}+\Delta_{\mathtt{W}}))$. Case (2): If $m$ is already in ${\mathit{delivered}}$, then $p_{i}$ must have already RTBC-decided $m$ according to lines 28–31. Case (3): If $m$ is not the next value that $p_{i}$ is supposed to receive, it will re-try RTBRB-delivering $m$ after $\Delta_{\mathtt{W}}$ until it has received all the previous values. The RTBRB properties guarantee that if some correct process $p_{j}$ broadcasts a value $v$ at time $t$, then correct processes will deliver $v$ by $t+\Delta_{\mathtt{R}}+\Delta_{\mathtt{W}}$. Therefore, it must be that correct processes will have stored $m$ (and all previous values) in their ${\mathit{unordered}}[p_{j}]$ list by $t+(\Delta_{\mathtt{R}}+\Delta_{\mathtt{W}})\times({\mathit{num}}+1)$. Finally, following the same argument as above, we get that $m$ will be RTBC- proposed by $t+((\Delta_{\mathtt{R}}+\Delta_{\mathtt{W}})\times({\mathit{num}}+1))+(n\times(\Delta_{\mathtt{C}}+\Delta_{\mathtt{W}})\times({\mathit{num}}+1))$. As mentioned above, thanks to Assumption 3, we can derive that all previous values stored in ${\mathit{unordered}}[p_{j}]$ have already been RTBRB- delivered and RTBC-decided upon when correct processes deliver $m$. Therefore, we get that $m$ will be RTBC-proposed by $t+((\Delta_{\mathtt{R}}+\Delta_{\mathtt{W}}))+(n\times(\Delta_{\mathtt{C}}+\Delta_{\mathtt{W}}))$. ∎ ###### Lemma 16. Algorithm 3 satisfies Property 3. ###### Proof outline. This is a straightforward consequence of Lemma 12. ∎ ###### Lemma 17. Algorithm 3 satisfies Property 5. ###### Proof outline. Let $p_{i}$ be a correct process that proposes a value $v$, with broadcaster $p_{j}$, at a given time $t$, using a given RTBC instance ${\mathit{inst}}$, and such that this instance does not decide $v$. By Lemma 12, all correct processes propose $v$ or $\bot$ at that instance. By the RTBC properties, because ${\mathit{inst}}$ does not decide $v$, it must decide $\bot$. Therefore, $p_{i}$ will increment its RTBC instance number but will keep $m$ at the head of its ${\mathit{unordered}}[p_{j}]$ list. After a full rotation through the leaders, it will RTBC-propose $v$ again at the later instance ${\mathit{inst}}+n$, where $0<n$. 
Moreover, no correct process will propose $v$ between ${\mathit{inst}}$ and ${\mathit{inst}}+n$ because $p_{j}$ ($v$'s broadcaster) is the leader of ${\mathit{inst}}$ and ${\mathit{inst}}+n$ but not of the instances in between, and $v$ can only be in the ${\mathit{unordered}}[p_{j}]$ lists. ∎

###### Lemma 18.
Algorithm 3 satisfies Property 6.

###### Proof outline.
By design, correct processes RTBC-propose exactly one value per RTBC instance because they only start proposing a value in a new instance if ${\mathit{busy}}$ is False; in which case they set ${\mathit{busy}}$ to True; wait for this instance to complete; and finally increment the RTBC instance number and set ${\mathit{busy}}$ back to False. Correct processes propose values in all RTBC instances and monotonically because they increment the RTBC instance number by one every time an RTBC instance completes. Finally, correct processes do not run RTBC instances in parallel thanks to the ${\mathit{busy}}$ flag. ∎

### D-C Direct Proof of Algorithm 3's Correctness

###### Lemma 19 (RTBAB-Validity).
If a correct process $p_{i}$ broadcasts $m$, then $p_{i}$ eventually delivers $m$.

###### Proof outline.
By RTBRB-Validity, $p_{i}$ eventually delivers $m$ with sequence number ${\mathit{num}}$. If $p_{i}$ has already delivered $m$, i.e., $m\in{\mathit{delivered}}$, then we are done. Otherwise, because $p_{i}$ broadcasts messages monotonically (and without gaps), it will append $m$ to its list of unordered messages (line 13 of Algorithm 3). Therefore, line 18 will be triggered until $m$ is removed from the list, as long as $p_{i}$ eventually resets ${\mathit{busy}}$ to False once it has set it to True, which is true by RTBC-Termination. When finally $p_{i}$ is the leader of its current instance, say ${\mathit{inst}}_{1}$, and $m$ is at the head of $p_{i}$'s unordered list, $p_{i}$ will RTBC-propose $m$. By RTBC-Validity, either all the correct processes RTBC-propose $m$, in which case $p_{i}$ delivers $m$; or some correct processes RTBC-propose values different from $m$. As mentioned above, such proposed values must then be $\bot$, in which case $p_{i}$ might RTBC-decide $m$ or $\bot$. Again as mentioned above, if $p_{i}$ does not deliver $m$, it will again either decide $m$ or $\bot$ at the next instance where it is the leader. Because, by RTBAB-Agreement, all correct processes eventually receive $m$, it must be that eventually $p_{i}$ RTBC-decides $m$ for an instance where it is the leader, and in turn RTBAB-delivers $m$. ∎

###### Lemma 20 (RTBAB-No duplication).
No message is delivered more than once.

###### Proof outline.
This property follows directly from the fact that delivered values are added to the ${\mathit{delivered}}$ set at line 31, and from the fact that a process always checks whether it has delivered a message $m$ before delivering $m$ (see line 30). ∎

###### Lemma 21 (RTBAB-Integrity).
If some correct process delivers a message $m$ with initial sender $p_{i}$ and process $p_{i}$ is correct, then $m$ was previously broadcast by $p_{i}$.

###### Proof outline.
First of all, the RTBAB-delivered value $m$ (which must be different from $\bot$) with sender $p_{i}$ (i.e., such that $p_{i}$ is the leader of the current instance) must have been RTBC-decided upon. By RTBC-Agreement and RTBC-Termination, it must be that the correct sender $p_{i}$ has also RTBC-decided upon $m$. It must be that $m$ was in $p_{i}$'s own ${\mathit{unordered}}$ list. Therefore, it must be that $p_{i}$ RTBRB-delivered $m$.
Finally, by RTBRB-Integrity, it must be that $p_{i}$ previously broadcasted $m$. ∎ ###### Lemma 22 (RTBAB-Agreement). If some message $m$ is delivered by any correct process, then every correct process eventually delivers $m$. ###### Proof outline. Let $p_{i}$ be the process that RTBAB-delivered $m$ at instance ${\mathit{inst}}$, such that $p_{l}$ is the leader of that instance. This delivered value must be different from $\bot$, and must have been RTBC-decided upon. By RTBC-Agreement and RTBC-Termination, it must be that all correct processes eventually RTBC-decide $m$ as well. Let $p_{j}$ be one such correct process. We have to prove that $p_{j}$ RTBAB-delivers $m$ also at instance ${\mathit{inst}}$. By RTBRB-Agreement, it must be that $p_{j}$ eventually receives the same broadcasts as $p_{i}$, among other things, those for which $p_{l}$ is the leader. From RTBC-Agreement and RTBC-Termination, it must be that all correct processes eventually decide the same values for each RTBC instance. Therefore, $p_{j}$ will eventually reach instance ${\mathit{inst}}$, and will therefore also RTBAB-deliver $m$. ∎ ###### Lemma 23 (variable-dependent RTBAB-Timeliness). There exists a known $\Delta_{\mathtt{A}}$ such that if a correct process $p_{i}$ broadcasts $m$ at time $t$, no correct process delivers $m$ after real time $t+\Delta_{\mathtt{A}}$, where $\Delta_{\mathtt{A}}$ depends on ${\mathit{seq}}_{t}$, the current sequence number at the time $m$ is broadcasted. ###### Proof outline. Timeliness follows from RTBRB-Timeliness and RTBC-Timeliness, as well as of the fact that Algorithm 3 rotates through the processes (processes might have to wait a full rotation before they get a chance to decide on a messages that was RTBAB-broadcasted). Let $\Delta_{\mathtt{C}}$ be the time it takes for all correct processes to decide on a value using RTBC (see RTBC-Timeliness). Let $\Delta_{\mathtt{R}}$ be the time it takes for all correct processes to deliver a message using RTBRB (which exists by RTBRB-Timeliness). Assume that $p_{i}$ assigns the sequence number ${\mathit{seq}}_{t}$ with the message $m$. As mentioned above, we assume that $p_{i}$ RTBAB-broadcasts $m$ at time $t$. Because correct processes might still be RTBRB-delivering messages when they gets the RTBRB-deliver message for $m$, they might not be able to RTBRB- deliver $m$ right away (it might be that ${\mathit{seq}}_{t}>{\mathit{next}}[p_{i}]$). However, we are guaranteed that all correct processes will have delivered $m$ by time $T_{1}=t+((\Delta_{\mathtt{R}}+\Delta_{\mathtt{W}})\times({\mathit{seq}}_{t}+1))$ (where $\Delta_{\mathtt{W}}$ is the time processes wait for before re-trying to handle a message—see line 15 and line 36). Note that at that time, processes might be RTBAB-delivering other messages broadcasted by other processes than $p_{i}$. Also, there might already be some messages from $p_{i}$ to RTBAB-deliver before $m$ (all those with sequence numbers less than ${\mathit{seq}}_{t}$). In case $p_{i}$ is currently not the leader, it might have to wait a full rotation through the processes to get a chance to be the leader again. Given the fact that all correct processes have $m$ in their ${\mathit{unordered}}$ list by time $T_{1}$, a full rotation will take at most ${n}\times(\Delta_{\mathtt{C}}+\Delta_{\mathtt{W}})$. 
Because processes might have to process ${\mathit{seq}}_{t}$ messages from $p_{i}$ before they get a chance to process $m$, it follows that $m$ will be RTBAB-delivered by $t+((\Delta_{\mathtt{R}}+\Delta_{\mathtt{W}})\times({\mathit{seq}}_{t}+1))+({n}\times(\Delta_{\mathtt{C}}+\Delta_{\mathtt{W}})\times({\mathit{seq}}_{t}+1))$. ∎

As mentioned in Def. 4, the RTBAB timeliness bound is different from the RTBRB one. $\Delta_{\mathtt{A}}$ is the RTBAB bound, while $\Delta_{\mathtt{R}}$ is the RTBRB bound.

###### Lemma 24 (RTBAB-Timeliness).
Under Assumption 3, there exists a known $\Delta_{\mathtt{A}}$ such that if a correct process $p_{i}$ broadcasts $m$ at time $t$, no correct process delivers $m$ after real time $t+\Delta_{\mathtt{A}}$.

###### Proof outline.
Using Assumption 3 and a proof similar to that of Lemma 23, we derive that messages RTBAB-broadcasted at time $t$ are RTBAB-delivered by $t+(\Delta_{\mathtt{R}}+\Delta_{\mathtt{W}}+({n}\times(\Delta_{\mathtt{C}}+\Delta_{\mathtt{W}})))$. ∎

As mentioned in Def. 4, in addition to the RTBRB properties, RTBAB also includes a total order property.

###### Lemma 25 (RTBAB-Total order).
Let $m_{1}$ and $m_{2}$ be any two messages and suppose that $p_{i}$ and $p_{j}$ are any two correct processes that deliver $m_{1}$ and $m_{2}$. If $p_{i}$ delivers $m_{1}$ before $m_{2}$, then $p_{j}$ delivers $m_{1}$ before $m_{2}$.

###### Proof outline.
Because $p_{i}$ RTBAB-delivers $m_{1}$ before $m_{2}$, it must have RTBC-decided $m_{1}$ at an instance ${\mathit{inst}}_{1}$ and $m_{2}$ at an instance ${\mathit{inst}}_{2}$ such that ${\mathit{inst}}_{1}<{\mathit{inst}}_{2}$. By RTBC-Agreement and RTBC-Termination, $p_{j}$ must also have RTBC-decided $m_{1}$ at ${\mathit{inst}}_{1}$ and $m_{2}$ at ${\mathit{inst}}_{2}$. Using an argument similar to the one in the proof of RTBAB-Agreement, we derive that $p_{j}$ must then also have RTBAB-delivered $m_{1}$ at instance ${\mathit{inst}}_{1}$ and $m_{2}$ at instance ${\mathit{inst}}_{2}$. ∎

## Appendix E Evaluation Using Number of Messages Sent

Figure 12: Average number of messages transmitted per node with a 1ms link latency, with system sizes equal to 25, 49 and 73 for Pistis and RT-ByzCast.

To complement the bandwidth consumption evaluation that was previously reported, Fig. 12 presents the number of messages transmitted using either Pistis or RT-ByzCast. We considered systems containing 25, 49 and 73 nodes (i.e., $3f+1$ for $f$ equal to 8, 16 and 24). We used a 1ms network latency and 1B messages. RT-ByzCast's values are reported with dashed horizontal lines. One can see that Pistis sends fewer messages when the value of $X$ decreases. In addition, PISTIS always sends fewer messages than RT-ByzCast. In particular, PISTIS and RT-ByzCast send approximately the same number of messages when $X=3f+1$. These results are consistent with the bandwidth consumption results reported in Sec. VI-E, and therefore indicate that the main reason behind Pistis' lower bandwidth consumption is a smaller number of messages exchanged.
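As a worked illustration of the delivery-latency bounds derived in Appendix D, the short sketch below evaluates the RTBAB bounds of Lemma 23 (sequence-number dependent) and Lemma 24 (under Assumption 3). It is purely illustrative: the timing parameters are hypothetical example values rather than measurements from the evaluation, and $\Delta_{\mathtt{C}}$ is simply assumed to be a small multiple of $\Delta_{\mathtt{R}}$ instead of being derived from the protocol.

```python
# Illustrative (hypothetical) timing parameters, in milliseconds.
T_echo = 30.0                     # T: PISTIS echo/kill timeout, so Delta_R <= 3*T (Lemmas 5 and 7)
delta_R = 3 * T_echo              # RTBRB delivery bound
delta_C = 2 * delta_R             # assumed RTBC bound; the text only requires it to be bounded
delta_W = 5.0                     # retry period of Algorithm 3 (lines 15 and 36)
n = 49                            # system size

def rtbab_bound_lemma23(seq_t: int) -> float:
    """Offset from broadcast time t in the variable-dependent bound of Lemma 23."""
    return (delta_R + delta_W) * (seq_t + 1) + n * (delta_C + delta_W) * (seq_t + 1)

def rtbab_bound_lemma24() -> float:
    """Sequence-number-independent offset of Lemma 24, valid under Assumption 3."""
    return delta_R + delta_W + n * (delta_C + delta_W)

if __name__ == "__main__":
    for seq in (0, 10, 100):
        print(f"Lemma 23 bound, seq_t={seq:3d}: {rtbab_bound_lemma23(seq):12.1f} ms")
    print(f"Lemma 24 bound (Assumption 3): {rtbab_bound_lemma24():12.1f} ms")
```

The comparison makes explicit why Assumption 3 matters in practice: the Lemma 23 offset grows linearly with the sequence number, whereas the Lemma 24 offset depends only on the system size and the per-instance durations.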
# Multimodal Integration of Olfactory and Visual Processing through DCM analysis: Contextual Modulation of Facial Perception Gianluca Rho1,2,∗, Alejandro Luis Callara1,2, Francesco Bossi1, Dimitri Ognibene3,4, Cinzia Cecchetto5, Tommaso Lomonaco6, Enzo Pasquale Scilingo1,2,†, and Alberto Greco1,2,† 1Dipartimento di Ingegneria dell’Informazione, University of Pisa, Pisa, Italy 2Research Center “E. Piaggio”, School of Engineering, University of Pisa, Pisa, Italy 3Università Milano-Bicocca, Milan, Italy 4University of Essex, Colchester, UK 5Department of General Psychology, University of Padova, Italy 6Department of Chemistry and Industrial Chemistry, University of Pisa, Italy †These authors contributed equally to the work<EMAIL_ADDRESS> ###### Abstract This study examines the modulatory effect of contextual hedonic olfactory stimuli on the visual processing of neutral faces using event-related potentials (ERPs) and effective connectivity analysis. The aim is to investigate how odors’ valence influences the cortical connectivity underlying face processing, and the role arousal enhanced by faces plays on such visual- odor multimodal integration. To this goal, a novel methodological approach combining electrodermal activity (EDA) and dynamic causal modeling (DCM) was proposed to examine cortico-cortical interactions changes. The results revealed that EDA sympathetic responses were associated with an increase of the N170 amplitude, which may be suggested as a marker of heightened arousal to faces. Hedonic odors had an impact on early visual ERP components, with increased N1 amplitude during the administration of unpleasant odor and decreased vertex positive potential (VPP) amplitude during the administration of both unpleasant and neutral odors. On the connectivity side, unpleasant odors strengthened the forward connection from the inferior temporal gyrus (ITG) to the middle temporal gyrus (MTG), involved in processing changeable facial features. Conversely, the occurrence of sympathetic responses was correlated with an inhibition of the same connection, and with an enhancement of the backward connection from ITG to the fusiform face gyrus. These findings suggest that negative odors may enhance the interpretation of emotional expressions and mental states, while faces capable of enhancing sympathetic arousal prioritize the processing of identity. The proposed methodology provides insights into the neural mechanisms underlying the integration of visual and olfactory stimuli in face processing. Keywords: face processing, olfactory stimuli, dynamic causal modeling, sympathetic responses, brain connectivity, ERP ## 1 Introduction The processing of facial expressions is a fundamental mechanism for perceiving others’ intentions and emotions [1]. In real-life situations, this process is not limited to the visual system alone but is influenced by contextual information from other sensory channels [2]. Olfactory stimuli have been found to play a crucial role in modulating the hedonic perception of faces [1, 3]. Previous studies have shown that the emotional valence conveyed by contextual odors can affect the recognition of facial expressions and subjective ratings of faces [4, 5, 6, 7, 8, 9]. These behavioral responses are accompanied by physiological changes, as evidenced by electroencephalographic (EEG) event-related potential (ERP) studies. 
These studies have reported an effect of the valence of the odors at both early sensory stages (P1/N1, N170, and Vertex Positive Potential (VPP)) [8, 10, 11, 12] and later cognitive stages (late positive potential) of face processing [13, 14, 15]. However, ERP waveforms reflect the overall activation of the face-perception system, an ensemble of interconnected regions that categorize visual stimuli as faces by analyzing various factors such as expression, gender, and identity [16]. The close association between olfactory and visual areas has been established in previous research [17, 18]. Therefore, investigating the effect of emotional odors on the interactions among the areas of the face-perception system could provide valuable insights into the integration mechanism between olfaction and vision. Such investigations can enhance our understanding of how sensory cues interact to shape our perception of others' facial expressions and emotional states.

The standard ERP analysis based on components' amplitude and latency cannot provide a sufficient level of detail to investigate brain dynamics at the network level. Nevertheless, ERPs can be combined with more sophisticated techniques to provide a window on the cortical sources' dynamics underlying face processing. In this context, EEG connectivity analysis makes it possible to investigate the interactions among neuronal assemblies [19]. Particularly, effective connectivity estimates the direct and directional (i.e., causal) influence that a source exerts over another [19]. Among the various effective connectivity approaches [20, 19], Dynamic Causal Modeling (DCM) is a powerful model-based technique designed to test for the effects of experimental factors on the interactions among regions of a specified brain network, starting from observed electrophysiological or functional imaging responses [21, 22]. Several studies have applied DCM to reveal the dynamics among sources of the face-perception network, either through fMRI [23, 24, 25, 26, 27], MEG [28, 29], or intracranial EEG [30] recordings. In this light, DCM can be combined with the information provided by visual ERPs to investigate the modulatory effect of hedonic contextual odors on the cortico-cortical interactions among face-processing areas.

A potential factor of interest in the analysis of the face-odor multimodal integration concerns the arousal elicited by faces. Specifically, within an event-related paradigm, several repetitions of the stimulus are presented over time, and it could be hypothesized that not all the faces are able to elicit the same affective response, due to the subjective saliency of facial features [31, 32, 33]. This may affect the signal-to-noise ratio (SNR) of the observed ERPs, and potentially reduce or even obscure the modulatory effect of contextual odors on the connectivity. However, although arousal has already been shown to influence face-evoked ERPs [34, 35], its effects when faces are presented with concomitant olfactory stimuli are still poorly investigated. Arousal has a crucial effect on the processing of external inputs. Motivationally relevant stimuli, such as the perception of a face with specific intrinsic features, are able to generate a transitory and automatic enhancement of arousal [36, 37], which entails prioritized processing of the stimulus in the visual stream [35].
This mechanism is associated with a series of physiological changes, including a top-down (i.e., endogenous) affective influence on sensory gain control [35], an increase in the amplitude of frequency-specific brain oscillations [38, 32] and in the magnitude of ERP components [34, 35]. In this light, testing for the actual occurrence of enhanced arousal is crucial to ensure that a face has been successfully perceived, thus maximizing the SNR of ERP responses and the consequent effect size of odors’ modulation. Modeling the effects of arousal at both the ERP and DCM connectivity levels requires knowing in advance whether a stimulus has the property of eliciting it. However, such a prior knowledge is far from being easily identified, since particular features that could characterize a perceived stimulus as relevant (e.g., facial identity, eye gaze, expression) may vary on a subjective basis. Besides brain activity, arousal is known to influence autonomic nervous system (ANS) dynamics. Particularly, states of high arousal are associated with an increase of peripheral sympathetic activity [39]. In this light, electrodermal activity (EDA) can be exploited as an objective means to identify the occurrence of enhanced arousal to a given stimulus, possibly related to intrinsic facial features or to emotional expressions [40, 41, 42, 43]. EDA is comprised of a slow-varying tonic component overimposed to a fast- varying phasic component. Particularly, the latter is represented by a series of stimulus-evoked skin conductance responses (SCRs), whose elicitation is driven by peripheral sympathetic neural bursts: i.e., the sudomotor nerve activity (SMNA) [39]. Accordingly, sympathetic responses to faces observed from SMNA can be adopted as a reliable marker of enhanced arousal. In this work, we investigate the effects of hedonic olfactory stimuli and arousal enhancement on face perception, as measured by ERP components and source effective connectivity. Specifically, we hypothesize that odors’ valence modulates the strength of specific pathways within the face-perception network, and that arousal evoked by the saliency of faces could play a role on such a multimodal sensory integration. To this aim, we acquired the EDA and EEG signals from 22 healthy volunteers performing a passive visual stimulation task with neutral faces and background pleasant, neutral and unpleasant odors. We propose a novel methodological approach based on the convex-optimization- based EDA (cvxEDA) framework [44] to identify face-evoked peripheral sympathetic responses and characterize the stimuli according to their property of eliciting arousal. The outcome of this classification procedure is then adopted to model the effects of odors’ valence and arousal as between-trial factors on the visual ERP components evoked by faces. We then exploit such ERPs to carry out a DCM and Parametric Empirical Bayes (PEB) analysis of odors and arousal on the connectivity among cortical sources found to be activated by the task. Particularly, PEB allows to build a hierarchical analysis of effective connectivity, where single-subject estimates about neuronal parameters of interest (e.g., connections’ strength) are treated as stochastic effects at the group level [45]. To validate our aforementioned hypothesis on the role of enhanced arousal to faces, we aim to assess: (i) whether such mechanism is effectively reflected by the observed ERPs, and (ii) the modulation operated by odors on cortico-cortical interactions at different levels of arousal. 
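Although the actual analysis relies on the DCM/PEB machinery implemented in SPM, the hierarchical logic described above can be illustrated schematically: subject-level connectivity estimates are entered into a group-level linear model whose regressors encode the experimental factors. The toy sketch below uses ordinary least squares merely as a stand-in for PEB's Bayesian second-level estimation (which additionally propagates each subject's posterior uncertainty); all variable names and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects = 20

# Hypothetical subject-level DCM estimates for one connection of interest
# (e.g., the modulation of an ITG -> MTG forward connection), one value per subject.
subject_estimates = rng.normal(0.15, 0.10, n_subjects)

# Group-level design matrix: a constant (commonalities across subjects) plus a
# mean-centred covariate, e.g. the number of trials with a sympathetic response.
n_symp = rng.integers(5, 25, n_subjects).astype(float)
X = np.column_stack([np.ones(n_subjects), n_symp - n_symp.mean()])

# Ordinary least squares stands in here for PEB's Bayesian second-level estimation.
beta, *_ = np.linalg.lstsq(X, subject_estimates, rcond=None)
print(f"group mean effect: {beta[0]:.3f}, covariate effect: {beta[1]:.3f}")
```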
## 2 Material and Methods

### 2.1 Participants

Twenty healthy volunteers (age 27 $\pm$ 3, 5 females) were enrolled in the study. Volunteers did not report any history of neurological or cardiovascular disease, anosmia, or a positive COVID-19 test over the past 6 months. Volunteers were not allowed to have any food or drink in the 30 minutes preceding the experiment. The study was conducted in accordance with the guidelines of the Declaration of Helsinki, and approved by the Bioethics Committee of the University of Pisa (Review No. 14/2019, May 3rd, 2019).

### 2.2 Olfactory stimuli

We selected three different odorants: banana (isoamyl acetate; $CH_{3}COOCH_{2}CH_{2}CH(CH_{3})_{2}$), isovaleric acid ($(CH_{3})_{2}CHCH_{2}COOH$), and n-butanol ($CH_{3}CH_{2}CH_{2}CH_{2}OH$). We chose these odorants to convey positive (banana), negative (isovaleric acid), and neutral (n-butanol) valence, according to previous literature results [46, 47]. For each subject, we prepared 1 ml isointense solutions by diluting the pure odorant substances in distilled water according to ratios of 1/20 (banana, n-butanol) and 1/10 (isovaleric acid), respectively. Odorants were delivered through an in-house built computer-controlled olfactometer at a flow rate of 60 ml/min.

### 2.3 Olfactometer device

The four-channel computer-controlled olfactometer used herein is composed of i) a pressure regulator to set the air pressure at 7 bar, ii) four stainless-steel containers (50 mL) equipped with o-rings, stainless-steel caps and clamping rings to ensure a gas-tight closure, iii) 10 low dead-volume three-way solenoid valves (Parker Hannifin, Italy), iv) a digital flow meter (Honeywell, Italy), and v) a disposable nasal cannula. In-house Arduino code controlled the solenoid valves, allowing them to be opened and closed through a well-defined sequence of actions. The software allows the delivery of pure clean air (i.e., all valves are opened) or of odors kept at room temperature within the containers. Components were connected to each other using polytetrafluoroethylene (PTFE) fittings and tubings (internal diameter of 0.3 mm) to limit the olfactometer dead volume to 1 mL. The nasal cannula is connected to the olfactometer through a 3-meter-long PTFE line. The olfactometer showed a negligible memory effect and low background emission of chemicals in the air/odor mainstream, as determined by a thermal desorption gas chromatography-mass spectrometry protocol [48], an overall air flow delivery variability of less than 1$\%$, and a rise time (10-90$\%$ of the final value) close to 300 ms.

### 2.4 Visual stimuli

For the visual stimuli of neutral faces, we used the Chicago Face Database [49]. We chose 128 different actors showing a neutral expression. Particularly, we selected a balanced number of actors across gender, age, and ethnicity, to mitigate potential confounding effects related to the intrinsic characteristics of the actors. The pictures were presented in a completely randomized order with respect to the olfactory condition. The visual stimuli were shown on a 15” laptop screen with a refresh rate of 60 Hz and a resolution of 1920x1080. The pictures were 15 cm wide and 10 cm high, and were displayed at about 50 cm from the eyes of the participants, resulting in a visual angle of about 17°.

### 2.5 Experimental protocol

The experimental protocol was divided into two parts.
First, we presented the three odorants in a randomized order for a duration of about 10s, and we asked the participants to evaluate the perceived hedonic content in terms of valence (from -2 to +2) and arousal (from 1 to 5) according to the Self-Assessment Manikin (SAM) test [50]. Second, we designed an experimental protocol comprised of 128 trials. As schematically reported in Fig.1, each trial consisted of: (1) 3s of dark gray background; (2) 3s of a dark gray background with a white fixation cross; (3) 1.5s of neutral face image presentation; (4) 6s of inter-trial rest. Within each inter-trial rest, subjects were asked to evaluate the facial expression in terms of valence and arousal according with the SAM test, through an interactive interface. The facial images were presented in combination with clean air or one of the three different odorants scored in the first part of the protocol, for a total of 128/4=32 trials for each olfactory condition. Each odorant was delivered starting from the onset of the dark gray background to the end of the visual stimulus. A wash-out with clean air was performed during the inter-trial rest. We presented both visual and olfactory stimuli in a randomized order through the PsychoPy software [51] Figure 1: Schematic illustration of the experimental protocol. Each trial consisted of: (1) 3s of dark gray background; (2) 3s of fixation cross; (3) 1.5s of neutral face image presentation; (4) 6s of inter-trial rest where subjects rated the valence and arousal of neutral faces according to the self- assessment manikin (SAM) test. A random odor among banana, n-butanol, isovaleric acid and clean air was delivered starting from (1) to the end of (3). A wash-out with clean air was performed during (4). ### 2.6 EEG and EDA acquisition EEG signal was acquired using a high-density 128-channel geodesic EEG System 300 from Electrical Geodesic, Inc. (EGI). Electrodes were grounded through two additional channels placed between Cz and Pz and referenced through Cz. We always kept electrode impedances below 20$k\Omega$ during the acquisitions. EEG was acquired at the sampling frequency of 500Hz. EDA was acquired using a Shimmer3 GSR+ unit (Shimmer, USA) at the sampling frequency of 250Hz. We recorded EDA through a pair of Ag/AgCl electrodes placed on the proximal phalanx of the first and second fingers of the non- dominant hand, respectively. ### 2.7 EDA-driven sympathetic activity estimation We implemented a procedure based on the analysis of EDA signal to estimate the occurrence of enhanced sympathetic responses associated with the visual presentation of faces. A well-known problem in EDA analysis concerns the temporal overlapping of consecutive SCRs, which may hamper the association between a given response and its potential triggering stimulus [39, 44, 52]. To address this issue, we adopted the cvxEDA [44] model to recover an estimate of the SMNA from the observed EDA responses. Specifically, cvxEDA considers that each SCR is preceded in time by sparse and discrete bursts of SMNA. These bursts are characterized by a higher temporal resolution compared to phasic activity, and can thus be exploited to identify the time instants at which peripheral sympathetic responses evoked by faces occur [44]. Accordingly, we assumed that visual stimuli eliciting an enhanced peripheral sympathetic response could be followed by the occurrence of an SCR and, thus, a non-zero SMNA neural burst. 
Operationally, for each subject and for each odor condition (i.e., pleasant, unpleasant, neutral, air), we downsampled the EDA to a sampling frequency of 50 Hz and z-scored the data [44]. Then, we applied cvxEDA to obtain an estimate of the SMNA. We identified discrete events of enhanced sympathetic arousal associated with the presentation of visual stimuli as those epochs having non-zero SMNA bursts in the 1-5 s window after stimulus onset. This choice is supported by several studies indicating that a stimulus-evoked SCR occurs within that latency range after stimulus onset [39, 53]. A schematic illustration of the procedure is reported in Fig. 2. For each odor condition, we then extracted: (1) the number of epochs with/without a stimulus-related sympathetic response (nSymp), (2) the average latency of the sympathetic responses, (3) the average amplitude of the SMNA, computed first across epochs and then over time, and (4) the subject-average SCR generated by the SMNA bursts.

Figure 2: Schematic illustration of the proposed procedure to assess stimulus-related peripheral sympathetic responses from the EDA signal. The raw EDA was preprocessed by downsampling to 50 Hz and applying a z-score transformation before being given as input to the cvxEDA algorithm. SMNA estimates are then epoched in the 0-8 s interval with respect to the onset of each stimulus (i.e., 0 s). Significant sympathetic responses to the stimuli are identified as non-zero SMNA bursts occurring in the 1-5 s interval after stimulus onset.

### 2.8 EEG preprocessing

We preprocessed the EEG signal using EEGLAB [54]. First, we filtered the data with a zero-phase low-pass antialiasing filter and then downsampled it to a sampling frequency of 100 Hz. Afterwards, we applied a zero-phase high-pass filter with a cutoff frequency of 0.1 Hz to improve data stationarity. We removed flat and poorly correlated channels using the method presented in [55]. Specifically, each channel was compared with its reconstructed version obtained from the spherical interpolation of its neighbors, and was removed if the correlation coefficient was less than a user-defined threshold; here, we used a correlation threshold of 0.8. After visual inspection, we recovered the removed channels through spherical interpolation, and we re-referenced the data to the average. For each subject, we epoched the EEG signal from -200 ms to 1000 ms with respect to the onset of the visual stimuli (i.e., 0 ms), and we visually inspected the data to remove epochs contaminated by artifactual activity. On average, 125 $\pm$ 7 (mean $\pm$ standard deviation) epochs were retained per subject. Finally, we decomposed the EEG data through independent component analysis (ICA) [56], and we removed independent components (ICs) resembling artifactual activity (e.g., muscular, ocular and other sources of noise) through visual inspection of their associated time course, scalp map, and power spectrum. Clean EEG epochs were baseline-corrected (i.e., from -200 ms to 0 ms) and low-pass filtered at a cutoff frequency of 30 Hz using default settings in EEGLAB. We grouped the epochs according to the odor condition, and we further distinguished them based on the presence or absence of a peripheral sympathetic activation (hereinafter referred to as the Symp condition). Accordingly, we obtained subject-average ERPs for a total of 8 conditions: clean air, n-butanol, banana, and isovaleric acid, with/without the presence of a sympathetic response.
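For readers who prefer Python, a rough analogue of this EEGLAB pipeline can be written with MNE-Python. The sketch below is purely indicative (the study itself used EEGLAB): the antialiasing cutoff, bad-channel list, number of ICA components, excluded IC indices and event labels are all placeholders chosen here for illustration.

```python
import mne


def preprocess_subject(raw, events, event_id):
    """Rough MNE-Python analogue of the EEGLAB pipeline described above."""
    raw = raw.copy().load_data()
    raw.filter(l_freq=None, h_freq=40.0)       # zero-phase low-pass (antialias cutoff assumed)
    raw.resample(100.0)                        # downsample to 100 Hz
    raw.filter(l_freq=0.1, h_freq=None)        # zero-phase high-pass at 0.1 Hz
    raw.info['bads'] = ['E17']                 # flat / poorly correlated channels (placeholder)
    raw.interpolate_bads()                     # spherical interpolation of removed channels
    raw.set_eeg_reference('average')           # re-reference to the average
    epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=1.0,
                        baseline=(None, 0.0), preload=True)
    ica = mne.preprocessing.ICA(n_components=20, random_state=0)
    ica.fit(epochs)
    ica.exclude = [0, 1]                       # artifactual ICs, chosen by visual inspection
    ica.apply(epochs)
    epochs.filter(l_freq=None, h_freq=30.0)    # final low-pass at 30 Hz
    return epochs

# Subject-average ERP for one of the 8 conditions, e.g.:
# evoked = preprocess_subject(raw, events, event_id)['isovaleric/symp'].average()
```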
We focused on relevant ERP components associated with the visual processing of faces, as well as their typical regions of interest (ROIs) on the scalp, based on the previous literature [8, 11, 57, 58, 16, 15, 10] (see Table 1 for details).

Table 1: Channel regions of interest (ROIs) and time intervals (ms) for each of the investigated ERP components. Channel names are reported according to the geodesic EGI 128-channel cap.

| Component | Channels | Time interval |
|---|---|---|
| P1 | E59 E65 E66 E70 E71 E76 E83 E84 E90 E91 | 120-160 ms |
| N1 | E6 E7 E13 E30 E31 E37 E54 E55 E79 E80 E87 E105 E106 E112 E129 | 120-160 ms |
| rN170 | E96 E97 E101 E102 E108 | 160-200 ms |
| lN170 | E45 E46 E50 E51 E58 | 160-200 ms |
| VPP | E7 E13 E30 E31 E37 E54 E55 E79 E80 E87 E105 E106 E112 E129 | 160-200 ms |
| P2 | E59 E65 E66 E70 E71 E76 E83 E84 E90 E91 | 220-300 ms |
| LPP | E52 E53 E60 E61 E62 E67 E77 E78 E85 E86 E92 | 300-500 ms |

Specifically, we extracted: (1) the P1 component in the 120-160 ms interval after stimulus onset in the occipital region around O1 and O2, (2) the N1 component at the same latency as P1, in the central region around Cz, (3) the right/left N170 components in the 160-200 ms interval after stimulus onset in the parieto-temporal regions near P8 and P7, respectively, and (4) the Vertex Positive Potential (VPP) in the 160-200 ms interval after stimulus onset in the same ROI as the N1 component. Moreover, for comparison with the previous literature, we also identified: (5) the P2 component at 220-300 ms after stimulus onset in the same ROI as P1, and (6) the Late Positive Potential (LPP), a sustained positivity at 300-500 ms after stimulus onset around Pz. For each subject and for each of these components, we extracted the mean amplitude across the respective time interval and ROI.

### 2.9 Statistical analysis on self-report, EDA and ERP data

We investigated possible differences in the valence and arousal subjective ratings of the olfactory stimuli through multiple Wilcoxon signed-rank tests, with a significance level of $\alpha=0.05$. P-values were adjusted for multiple comparisons with the Bonferroni correction. Analogously, we tested for an effect of the olfactory stimuli on the valence and arousal subjective ratings of faces through pairwise Wilcoxon tests ($\alpha=0.05$). Concerning the physiological data, we tested for significant differences in the amplitude and latency of the SMNA responses across odor conditions through separate 1x4 within-subject ANOVAs ($\alpha=0.05$). Multiple comparisons were controlled with the false-discovery rate (FDR) procedure for multiple testing under dependency [59]. We further tested for an interaction between the occurrence of sympathetic responses and the olfactory stimuli through a within-subject two-way ANOVA on the number of EDA epochs grouped by odor condition (i.e., clean air, n-butanol, banana, isovaleric acid) and by the presence/absence of an SMNA response ($\alpha=0.05$). Finally, we tested for a significant effect of contextual odors and sympathetic responses on the amplitude of each ERP component described in Section 2.8 through separate two-way ANOVAs, with the odor condition (i.e., clean air, n-butanol, banana, isovaleric acid) and the presence/absence of an EDA-driven sympathetic response as within-subject factors (p-values corrected with FDR, $\alpha=0.05$). Post-hoc comparisons were conducted with t-tests, and the resulting p-values were adjusted with the Bonferroni correction ($\alpha=0.05$).
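As an illustration of the component extraction and of the two-way within-subject ANOVA with FDR correction, a minimal Python sketch is given below. The paper does not state which software was used for this step; the ROI indices, time window, data layout and function names are placeholder assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multitest import multipletests

# Placeholder ROI (0-based channel indices) and window for the N1 component (cf. Table 1).
N1_ROI = [5, 6, 12, 29, 30, 36, 53, 54, 78, 79, 86, 104, 105, 111, 127]
N1_WIN = (0.120, 0.160)


def mean_amplitude(erp, times, roi, window):
    """Mean ERP amplitude over a channel ROI and time window (erp: channels x times)."""
    mask = (times >= window[0]) & (times <= window[1])
    return erp[np.ix_(roi, np.where(mask)[0])].mean()


def two_way_rm_anova(table):
    """Two-way within-subject ANOVA (Odor x Symp) on component amplitudes.

    `table` is a long-format DataFrame with columns: subject, odor, symp, amplitude
    (one mean amplitude per subject and condition cell).
    """
    return AnovaRM(table, depvar='amplitude', subject='subject',
                   within=['odor', 'symp']).fit()

# FDR correction under dependency (Benjamini-Yekutieli, [59]) across the p-values
# collected from the separate per-component ANOVAs:
# reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method='fdr_by')
```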
### 2.10 Effective connectivity analysis with DCM

The analysis of effective connectivity was carried out using SPM12 [60]. We used DCM for ERPs [61, 22] to investigate the modulatory effect of odors and sympathetic responses on the effective connectivity. Specifically, DCM models the observed ERPs by combining a physiologically plausible neuronal model of interacting cortical regions with a spatial forward model that maps the cortical activity to the observed EEG data. Each region is described by three interconnected neuronal subpopulations: interneurons, spiny stellate cells, and pyramidal neurons. Regions are coupled to each other through extrinsic connections, which are classified as forward, backward, or lateral according to the hierarchical organization of the cortex [62]. The effect of the administered sensory stimuli on the neuronal dynamics is accounted for through specific input connections modeling the afferent activity relayed by subcortical structures to the spiny stellate layer of the target cortical regions [61, 22]. Notably, such inputs are the same for each experimental condition. Accordingly, differences among ERPs due to either contextual or stimulus attributes are explained by modulatory gains on the connection strengths [61, 22]. The activity of the pyramidal neurons of each region is then projected to the EEG channels through an electromagnetic forward model which accounts for field-spread effects in the head.

#### 2.10.1 Network specification

The DCM for ERP framework explains ERP dynamics as arising from a small number of brain regions [63]. The selection of which brain regions to include in the network for DCM analysis can be made using either prior knowledge from the literature or source reconstruction techniques. Here, we adopted a group source reconstruction approach based on Multiple Sparse Priors (MSP) implemented in SPM12 [64]. In particular, while MSP has been shown to potentially reduce the localization error with high-density EEG systems [64], group inversion yields better results compared to individual inversions by introducing additional constraints on the set of sources explaining the subjects' data [65]. Operationally, we used the channel positions co-registered to the MNI standard head template as provided by SPM. Then, we inverted the ERP activity on a cortical mesh comprising 8196 dipoles in the time range from -200 to 400 ms. For each subject, we then created contrasts of log power differences in the 0-400 ms time range, collapsed over the experimental conditions, against the prestimulus window (i.e., -200 to 0 ms). These contrast images were smoothed with an 8 mm Gaussian kernel to create a 3D volumetric NIfTI image. The images were entered into a one-sample t-test design in SPM12, and we tested for significant changes in power with respect to the prestimulus window through an F-contrast. Significant regions (p$<$0.05; family-wise error rate corrected at the cluster level) were labeled according to the Automated Anatomical Labeling (AAL) atlas [66].

#### 2.10.2 DCM subject-level connectivity analysis

We performed the DCM analysis on the subject-average ERPs relative to the odor conditions, with and without a sympathetic response. Accordingly, for each subject, we specified a factorial design on the single-subject connectivity, with Odor and Symp as between-trial factors. We reduced the data to the first four principal components (PCs), or modes, of the EEG channel mixture.
Such a choice is a trade-off between the computational cost of DCM model inversion and the percentage of data variance explained [67, 68, 69]. Notably, reducing ERP data to their first four PCs has been indicated as sufficient to capture the components of interest [61]. We focused the model inversion on the 0-400 ms time window with respect to the stimulus onset, through a Hanning window. Finally, we adopted the ERP neural mass model (NMM) to model the temporal dynamics within and between the network sources. Concerning the forward model, we modeled the spatial activity of the brain sources as equivalent current dipoles (ECD option in SPM12) on the cortex. The passive volume conduction effects on the dipoles' electric field were modeled through a three-layer BEM model of the head (scalp, skull and brain), whose conductivities were set to 0.33, 0.0042 and 0.33 S/m, respectively. We allowed the effect of both Odor and Symp to modulate all the connections of the network, including the self-inhibitory effects on each node.

#### 2.10.3 PEB group-level connectivity analysis

We inferred the significant modulatory effects of Odor and Symp on the group-level connectivity through the PEB framework [45, 70]. This framework allows building hierarchical statistical models in which single-subject DCM parameters of interest are treated as random effects on the group-level connectivity:

$$Y_{i}=\Gamma(\theta_{i}^{(1)})+\epsilon_{i}^{(1)}\qquad(1)$$

$$\theta^{(1)}=(X_{b}\otimes X_{w})\,\theta^{(2)}+\epsilon^{(2)}\qquad(2)$$

More specifically, the single-subject ERPs $Y_{i}$ are explained by a DCM $\Gamma(\cdot)$ with unknown parameters $\theta_{i}^{(1)}$, plus a zero-mean white Gaussian noise residual $\epsilon_{i}^{(1)}$. The parameter estimates of interest are then grouped together across subjects (i.e., the $\theta_{i}^{(1)}$) and modeled at the group level (2) through a General Linear Model (GLM) with design matrix $X=(X_{b}\otimes X_{w})$ and group parameters $\theta^{(2)}$. The $X_{b}$ matrix models the between-subject effects, whereas the $X_{w}$ matrix models which single-subject parameters are influenced by such effects. The $\otimes$ symbol denotes the Kronecker product. Any unexplained between-subject difference (e.g., non-modeled sources of variation, random effects) is modeled by the zero-mean white Gaussian residuals $\epsilon^{(2)}$. We grouped together the single-subject estimates associated with the modulatory effects of Odor and Symp, such that $\theta^{(1)}=[\theta_{Odor}^{(1)},\,\theta_{Symp}^{(1)}]^{T}$, and we fitted a PEB model with between-subject design matrix $X_{b}=1^{T}$ and within-subject design matrix $X_{w}=I$ to estimate the average effect of such conditions on each connection of the group connectivity. We then applied a greedy search to infer the best combination of group-level parameters $\theta^{(2)}$ describing the average effects of Odor and Symp. Specifically, we iteratively applied Bayesian Model Reduction (BMR) to obtain estimates of nested PEB models with/without a particular set of connections, as well as their posterior probability (Pp) of being the best model describing the observed data [71]. We then computed a Bayesian Model Average (BMA) over the models resulting from the last iteration of the greedy-search procedure. BMA averages the parameter posterior densities across models, weighted by their Pp.
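Purely as an illustration of the GLM structure in equation (2) (the actual PEB estimation is carried out within SPM12), the following numpy snippet builds the group-level design matrix $X=X_{b}\otimes X_{w}$ for a mean-effect model; the numbers of subjects and of modulatory parameters per subject are placeholders.

```python
import numpy as np

N = 20   # number of subjects (placeholder)
P = 12   # number of DCM modulatory parameters per subject (placeholder)

X_b = np.ones((N, 1))   # between-subject design: single column -> group mean effect
X_w = np.eye(P)         # within-subject design: identity -> one group mean per parameter
X = np.kron(X_b, X_w)   # group-level GLM design matrix of equation (2)

# Stacking the subject-level vectors theta_i^(1) into one column vector of length N*P,
# equation (2) reads theta1 = X @ theta2 + eps2, so each group parameter theta2[j]
# is the random-effects mean of the j-th connection's modulatory gain across subjects.
print(X.shape)  # (240, 12)
```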
Accordingly, we obtained a set of group-level parameters whose values are no longer conditional on the particular model assumed. Finally, we thresholded the BMA results by pruning away those parameters whose Pp of being modulated by either Odor or Symp was lower than 0.95. This thresholding is based on the free energy of the models estimated during the BMR procedure (see Appendix 3 of [45]).

## 3 Results

### 3.1 SAM statistical analysis results

In Fig. 3 we report the statistical analysis results of the SAM ratings of the administered odorants (mean $\pm$ standard error (SE)). Subjects rated isovaleric acid as significantly more unpleasant (valence = $-0.62\pm 0.18$; Fig. 3a) and more arousing (arousal = $2.81\pm 0.22$; Fig. 3b) compared to banana and n-butanol. We did not find any significant differences between the valence and arousal ratings of banana (valence = $0.38\pm 0.20$; arousal = $2.33\pm 0.21$) and n-butanol (valence = $0.10\pm 0.18$; arousal = $1.86\pm 0.17$).

Figure 3: SAM statistical analysis results for the a) valence and b) arousal ratings (mean $\pm$ standard error (SE)) of the administered odorants (i.e., banana, n-butanol, isovaleric acid). Subjects perceived isovaleric acid as significantly more unpleasant ($-0.62\pm 0.18$) and more arousing ($2.81\pm 0.22$) than the other odorants.

On the other hand, we did not find any significant differences in either the valence (clean air = $-0.05\pm 0.05$; banana = $-0.14\pm 0.06$; n-butanol = $-0.03\pm 0.05$; isovaleric acid = $-0.09\pm 0.07$) or the arousal (clean air = $2.17\pm 0.16$; banana = $2.23\pm 0.17$; n-butanol = $2.18\pm 0.16$; isovaleric acid = $2.15\pm 0.17$) ratings of faces across the different odor conditions.

### 3.2 EDA statistical analysis results

We did not find any significant differences among odor conditions for either the average amplitude or the latency of the SMNA responses. Conversely, we found a significant effect of nSymp following the 2x4 ANOVA ($F_{6,64}=5.76,p<10^{-5}$; FDR-corrected). More specifically, we found a lower number of stimuli associated with a sympathetic response (11.76 $\pm$ 3.06), compared to stimuli without the manifestation of a sympathetic response (19.61 $\pm$ 2.63), irrespective of the odor condition.

### 3.3 ERP statistical analysis results

We found a significant effect of the odor condition on the N1 ($F_{6,64}=6.65,p<10^{-4}$; FDR-corrected) and the VPP ($F_{6,64}=3.63,p<10^{-4}$; FDR-corrected) components at the central ROI around the Cz channel. In Fig. 4, we report the grand-average ERP for each of the odor conditions, together with the N1 and VPP significant intervals highlighted in blue and green, respectively, and a schematic representation of the ROI on the scalp map. In particular, post-hoc analysis highlighted a greater N1 amplitude during the administration of isovaleric acid with respect to clean air ($p<10^{-3}$). Moreover, we observed a lower VPP amplitude during the administration of both isovaleric acid ($p<10^{-3}$) and n-butanol ($p<10^{-3}$), with respect to clean air.

Figure 4: ERP grand averages for the Clean Air (blue), Banana (red), N-Butanol (yellow), and Isovaleric Acid (purple) conditions evaluated at the central ROI, in the time range from -200 ms to 800 ms with respect to the stimulus onset (i.e., 0 ms). ERPs were averaged across 15 electrodes around Cz, schematically represented by the red dots on the scalp map at the top left of the figure.
The blue area highlights a significant effect of the odors on the average N1 amplitude, computed over the 120-160 ms interval ($p<0.05$, FDR-corrected). The green area highlights a significant effect of the odors on the average VPP amplitude, computed over the 160-200 ms interval ($p<0.05$, FDR-corrected). Post-hoc analysis showed a greater N1 amplitude during Isovaleric Acid, with respect to Clean Air ($p<0.05$, Bonferroni-corrected). Moreover, the VPP was significantly lower for both the Isovaleric Acid and the N-Butanol conditions, with respect to Clean Air ($p<0.05$, Bonferroni-corrected).

Moreover, we observed a significant effect of the Symp condition on the amplitude of the left N170 component ($F_{1,17}=5.10,p=0.03$). In particular, as reported in Fig. 5, the presence of a sympathetic response resulted in an increase of the N170 amplitude. We did not find a significant interaction between Odor and Symp for any of the ERP components investigated.

Figure 5: ERP grand averages for the Clean Air Symp-absent (blue), Clean Air Symp-present (red), Banana Symp-absent (yellow), Banana Symp-present (purple), N-Butanol Symp-absent (green), N-Butanol Symp-present (light blue), Isovaleric Acid Symp-absent (brown), and Isovaleric Acid Symp-present (dark blue) conditions, evaluated at the left ROI around P7, in the time range from -200 ms to 800 ms with respect to the stimulus onset (i.e., 0 ms). ERPs were averaged across 5 electrodes around P7, schematically represented by the red dots on the scalp map at the top left of the figure. The gray area highlights a significant effect of the Symp condition on the average N170 amplitude, computed over the 160-200 ms interval ($p<0.05$, Bonferroni-corrected). Specifically, the N170 deflection was greater in the presence of sympathetic responses (i.e., Symp-present), irrespective of the odor condition.

### 3.4 Network identification results

We found activation of the right Inferior Occipital Gyrus (rIOG; MNI coordinates: 48 -76 -2), right Fusiform Gyrus (rFFG; MNI coordinates: 38 -18 -32), right and left Inferior Temporal Gyrus (rITG; MNI coordinates: 52 -38 -20; lITG; MNI coordinates: -50 -38 -22), right and left Medial Temporal Gyrus (rMTG; MNI coordinates: 52 -62 14; lMTG; MNI coordinates: -54 -64 10), and right Secondary Visual Cortex (rVII; MNI coordinates: -10 -98 -10) (p$<$0.05; family-wise error rate corrected at the cluster level). We specified the network for the DCM analysis focusing on the IOG, FFG, ITG, and MTG, given their central role in the processing of faces and in the integration of visual and olfactory stimuli [25, 26, 72, 73, 74, 75]. Furthermore, in line with the previous literature, we focused on the right visual stream of face processing [25, 76, 72, 24, 77]. We specified rIOG$\rightarrow$rFFG, rFFG$\rightarrow$rITG and rITG$\rightarrow$rMTG as forward connections, and rFFG$\rightarrow$rIOG, rITG$\rightarrow$rFFG and rMTG$\rightarrow$rITG as backward connections, according to previously reported evidence on the nodes' hierarchical structure [25, 78, 79, 80, 24] (Fig. 6). We hypothesized the IOG to be the first stage responsible for face processing in our network [81, 26, 82, 24]. Accordingly, we modeled the effect of the thalamic sensory relay of the "face" input as directly entering the IOG (i.e., the network input).

Figure 6: Axial (a), coronal (b) and sagittal (c) views of the network to be modeled with DCM.
The network includes the Inferior Occipital Gyrus (IOG), Fusiform Gyrus (FFG), Inferior Temporal Gyrus (ITG), and Medial Temporal Gyrus (MTG). These nodes were found through a group inversion based on the MSP approach in SPM12. Nodes were labeled according to the Automated Anatomical Labeling (AAL) atlas. We specified rIOG$\rightarrow$rFFG, rFFG$\rightarrow$rITG and rITG$\rightarrow$rMTG as forward connections, and rFFG$\rightarrow$rIOG, rITG$\rightarrow$rFFG and rMTG$\rightarrow$rITG as backward connections.

### 3.5 DCM subject-level connectivity analysis

Following the results of the SAM and ERP analyses, we decided to focus our connectivity study on the effects of isovaleric acid and arousal enhancement on the visual processing of faces. Accordingly, we built a 2x2 factorial DCM analysis with factors Odor (i.e., clean air vs. isovaleric acid) and Symp (i.e., sympathetic vs. no sympathetic response). For each subject, the DCM model was successfully fitted to the observed data without any problem of early convergence. The models provided a good representation of the data, with an average explained variance of $90.77\%$. In Fig. 7 we report the result of the ERP reduction to the first four principal modes of the EEG channel mixture, together with the modes predicted by the model, for an exemplary subject. As depicted, the PCA channel reduction yielded a parsimonious yet efficient representation of the data. Specifically, the first three EEG modes represented the data variance associated with the P1, N170, and P2 components, whereas the fourth mode represented the data variance mainly associated with the VPP. Such a procedure allowed us to focus the parameter estimation on the main features of the observed ERPs, while dramatically reducing noise and computational complexity.

Figure 7: First four principal modes of the channel ERPs for the Clean Air (blue), Clean Air Symp (red), Isovaleric Acid (green), Isovaleric Acid Symp (pink) conditions for an exemplary subject. Dashed lines indicate the observed ERPs, whereas solid lines indicate the ERPs generated by the DCM model. The model was inverted in the 0-400 ms interval with respect to the stimulus onset.

### 3.6 PEB group-level connectivity analysis

In Fig. 8, we report the results of the greedy-search and BMA procedure on the PEB parameters associated with the average modulatory effects of the Odor (i.e., clean air vs. isovaleric acid) and Symp (i.e., absence vs. presence of a sympathetic response) conditions on the group connectivity. These BMA parameters quantify the effect size of the average group-level modulation of the connectivity associated with the experimental variables of interest. In particular, we report only those parameters having a Pp$>$0.95 of being different from zero. The parameters surviving this thresholding are represented by a black bar showing the estimated posterior effect size, and a pink error bar representing the corresponding 90$\%$ credibility interval. Concerning the Odor condition, we observed an increase of the forward connection strength from the ITG to the MTG (effect size: $0.24\pm 0.09$). This indicates that the administration of isovaleric acid induced a significant (i.e., Pp$>$0.95) excitatory effect on the strength of the ITG$\rightarrow$MTG connection at the group level, with respect to the administration of clean air. On the other hand, we observed a decrease of the ITG$\rightarrow$MTG connection strength (effect size: $-0.19\pm 0.07$) associated with the Symp condition.
This, instead, indicates that the presence of a peripheral sympathetic response induced an inhibitory effect on the strength of this connection at the group level. Finally, we also observed a strengthening effect of the Symp condition on the backward connection from the ITG to the FFG (effect size: $0.15\pm 0.07$), indicating that peripheral sympathetic responses were associated with an excitatory effect on this connection.

Figure 8: Results of the BMA on the group effective connectivity among the IOG, FFG, ITG, and MTG. For each experimental condition (i.e., Odor, Symp), we report the effect sizes associated with their average modulatory effects on the connection strengths. In particular, we report only those parameters having a posterior probability (Pp) $>$ 0.95 of being different from zero. For each significant parameter, the height of the black bar indicates the estimated average posterior effect size, whereas the pink error bar indicates the corresponding 90$\%$ credibility interval. The brain figures depict the rendering of the network on the axial plane. For each condition, connections with a significant positive modulation are depicted in purple, whereas connections with a significant negative modulation are depicted in red.

## 4 Discussion

In this study, we investigated the modulatory effect of contextual hedonic olfactory stimuli on the visual processing of neutral faces through the analysis of ERP components and effective connectivity. In particular, we assumed that enhanced arousal to the perception of faces could play a role in their integration with olfactory stimuli, improving the SNR of the observed responses and the consequent effect size of the odors [40, 83]. To this aim, we applied a novel methodological approach exploiting the EDA-driven SMNA to classify visual stimuli into two cases, i.e., "eliciting" or "not eliciting" a stimulus-evoked peripheral sympathetic response. We included the outcome of this procedure as an additional experimental factor, together with the valence of contextual odors, in a standard analysis of ERP components. We further modeled the observed ERPs through DCM and PEB to test for the presence of specific cortical connections being concomitantly modulated by enhanced arousal and hedonic olfactory stimuli. Our results highlight the role arousal plays in the visual processing of human faces and in its multimodal integration with olfactory stimuli.

The ERP analysis revealed an effect of EDA-driven arousal on the left N170 component, irrespective of the odor condition. More specifically, the occurrence of a sympathetic response was associated with an enhanced N170 amplitude. Previous studies have found a significant correlation between the N170 amplitude and the perceived level of arousal associated with faces, irrespective of their valence, such that arousing stimuli were associated with increased ERP responses [34]. The N170 is considered to be the earliest marker of higher-order visual processing, marking the structural encoding of a stimulus as a face. In particular, during the N170 time window, it is hypothesized that multiple cortical processes underlie the fine-grained categorization of faces based on the configural processing of their features, e.g., eyes, nose, mouth, and the spatial relationships among them [16, 84]. In this light, we suggest that the higher N170 amplitude observed during sympathetic responses could represent a marker of enhanced arousal elicited by salient features of faces.
In addition to the effect of arousal, we observed a greater N1 amplitude during the administration of isovaleric acid with respect to clean air. Furthermore, we observed a lower VPP amplitude during the administration of both isovaleric acid and n-butanol, again with respect to clean air. These findings are in line with previous studies reporting an effect of hedonic odors on early visual ERP components [11, 12, 8]. The VPP, together with its polarity-reversed counterpart, the N170, is known to be particularly sensitive to faces [85, 3, 58], and it has also been implicated in contextual odor effects [11]. Hence, our findings on the VPP modulation by both the unpleasant (i.e., isovaleric acid) and neutral (i.e., n-butanol) odors are in agreement with previous studies reporting an overall effect of olfactory stimuli irrespective of their valence [11]. On the other hand, greater amplitudes of the N1/P1 component have been associated with an enhanced allocation of attention towards faces [10, 3]. Overall, given the ERP results, we can state that hedonic odors generate a contextual effect, and thus a top-down influence on face processing, at different stages. In this view, we may suggest that the administration of isovaleric acid had a greater arousing effect on the early processing of visual stimuli compared to the other odorants. This is further corroborated by our findings on the subjective ratings of odors, showing that isovaleric acid was perceived as significantly more arousing and unpleasant compared to banana and n-butanol. Accordingly, we decided to carry out the remainder of our study focusing on the contrast between isovaleric acid and clean air.

Using DCM for ERPs and PEB, we investigated the concomitant effect of Odor and Symp on the brain connectivity at the group level. This analysis revealed two main findings. On the one hand, we found that isovaleric acid, compared to clean air, strengthened the forward connection from the ITG to the MTG. In classical models of face processing [78, 25, 79], the MTG and the superior temporal sulcus (STS) are cortical areas specialized in processing changeable facial features (e.g., emotional expressions, gaze, lip movements). The strengthening of the connection towards these areas due to isovaleric acid may represent a crucial aspect of our study. Variable characteristics of the face are fundamental features for intra-species communication and social interaction [86]. For this reason, one interpretation of this result is that a negative odor influences face processing mostly for interaction purposes: a repulsive smell would enhance the ability to interpret the intentions, expressions and mental states of another person (or threatening agent) more efficiently, therefore conferring an evolutionary advantage. On the other hand, the same connection was inhibited when a simultaneous peripheral sympathetic response occurred, irrespective of the odor condition. Moreover, the occurrence of such sympathetic responses induced an excitatory effect on the backward connection from the ITG to the FFG. Models of the face processing network [78, 79] highlight the functional dissociation between the fusiform face area (FFA, in the FFG) and the temporal areas. As stated above, the MTG and STS are thought to process changeable facial aspects, while the FFA is crucial for processing invariant aspects of the face and accessing identity-related information.
According to our results, the processing of facial information appears to stop at the FFG stage for faces that generate a sympathetic response, since forward connections towards the MTG are inhibited while backward connections to the FFG are enhanced. Therefore, we can speculate that the processing of faces that evoke a sympathetic response may be focused on identity-related information, to the disadvantage of emotional expressions and other changeable features. In other words, some intrinsic facial features create a sympathetic response which, in turn, may result in a top-down enhanced processing of identity. Speculatively, if a face generates a sympathetic response, it is likely that the observer will feel in jeopardy, or more generally anxious. In this situation, fast processing of the face identity may represent an advantage in order to understand whether he/she is in actual danger and, in that case, adopt the appropriate fight-or-flight strategies connected to higher arousal. Moreover, these dynamics seem to suggest a crucial role of the ITG in processing the sympathetic arousal that can be associated with odor perception. The ITG has been described as an associative area integrating information on facial features from the FFG and IOG to process the identity of faces [87]. Furthermore, the ITG sends bidirectional projections to limbic areas involved in emotion and memory processing, such as the amygdala, hippocampus, and entorhinal cortex, and to the frontal cortices [87, 88]. In this context, previous studies have reported a role for the ITG in visual short-term memory [89, 90, 91], as well as in hedonic olfactory tasks and in the perception of faces with background odor cues [73, 92]. In this light, the ITG appears to act as a hub in which the dissociation into different pathways for the affective processing of faces is handled.

The methodology proposed in this study could represent an effective means to quantitatively study the effect of sympathetic arousal on both the ERPs and the effective connectivity. Indeed, in previous studies, we adopted subjective ratings as a means to quantify arousal and investigate its central correlates at the connectivity level [93, 69]. However, affective ratings may deviate from physiological responses [69], which are instead supposed to represent an objective window onto the processing of emotional stimuli (see e.g. [94]). Other studies investigated the coupling between EDA and EEG dynamics through correlation [95, 96, 97, 98], phase and amplitude coupling [99], and coherence measures [100]. Yet, to the best of our knowledge, a methodology to investigate the effect of sympathetic arousal, as estimated through EDA, on the EEG effective connectivity is missing. In this light, we proposed an approach based on the identification of enhanced sympathetic arousal through the analysis of EDA, combined with the connectivity framework of DCM for ERPs. In particular, we adopted cvxEDA as an efficient approach to recover an estimate of the hidden SMNA, and we identified the time instants at which stimulus-evoked sympathetic responses occurred. The DCM framework then allowed us to investigate such effects on the effective connectivity with a high level of physiological plausibility. It is worth highlighting that the observed peripheral sympathetic responses could be attributed either to the effect of the background olfactory stimuli, to the presentation of the visual faces, or to a combination of both.
In this regard, our EDA analysis did not show any significant effects of the administered background olfactory stimuli on either the amplitude, the latency, or the frequency of the sympathetic responses following the presentation of human faces. Accordingly, it is reasonable to assume that the observed peripheral sympathetic responses time-locked to the presentation of faces could be associated with the intrinsic arousing properties of the faces themselves. Thus, the proposed methodology allows accounting for two distinct, yet concomitant, effects on the EEG cortical connectivity, i.e., the modulation of contextual hedonic odors on the perception of human faces, and the modulation of enhanced sympathetic arousal elicited by the visual presentation of faces. Nevertheless, to the best of our knowledge, this is the first investigation of EDA dynamics in the scenario of the multimodal integration between visual faces and olfactory stimuli. Hence, further investigations will be necessary to corroborate this hypothesis.

The choice of focusing our DCM analysis only on the right hemisphere was supported by the literature [25, 76, 72, 24, 77, 101] and allowed us to limit the number of parameters (i.e., nodes, connections), to the advantage of a lower uncertainty in their estimates. However, this may seem in contrast with the ERP analysis, which revealed an effect of sympathetic arousal on the left N170 component, while no effects were observed in the right hemisphere. Nevertheless, it is worth noting that multiple sources underlie the processing of faces in the N170 (160-200 ms) interval. Hence, ERP components may reflect the global activation of the face perception system, potentially leading to small or null effects [16]. In this view, DCM offers a powerful means to investigate the effect of sympathetic arousal to a deeper extent, through a physiologically plausible description of how the observed dynamics are associated with their underlying cortical underpinnings. Accordingly, future work should increase the statistical power by including more subjects, and extend our network to also include the left-hemisphere ITG and MTG found during the source inversion of the ERPs.

The proposed framework does not include the amplitude of the SMNA in the model. This is a limitation, since we cannot exclude that a relationship between the central responses and the magnitude of the sympathetic neural bursts could exist. Yet, modeling such an effect through DCM would require specifying a single-trial, within-subject modulatory input where each administered stimulus is associated with its respective level of sympathetic arousal (as quantified through the amplitude of the SMNA). To the best of our knowledge, the DCM for ERP framework is limited to modeling between-trial effects (i.e., differences among grand-average ERPs), rather than single-trial effects. Accordingly, we limited our methodology to the evaluation of connectivity differences in the binary case of the presence/absence of an evoked peripheral sympathetic response, without taking their magnitude into account. Moreover, SMNA responses may be triggered by factors other than the perception of the administered stimulus (in this context, human faces). In particular, spontaneous fluctuations, i.e., sudomotor nerve responses stemming from uncontrolled cognitive and emotional processes [39], may lead to the erroneous detection of face-evoked peripheral sympathetic responses.
While we attempted to mitigate such effects by constraining stimulus-evoked SMNA bursts to the 1-5 s time window after stimulus onset [39], we cannot exclude that errors in the binary classification of the stimuli may still occur. Another potential limitation of our methodology concerns the relationship between the EDA-driven peripheral sympathetic responses and the EEG central responses. Specifically, we could not explicitly include the EDA dynamics in the analysis of effective connectivity, as the DCM framework only allows modeling modulatory effects on the observed EEG data due to either contextual factors or properties of the administered stimuli [22]. Accordingly, nothing can be concluded about the causal relationship between the EEG and EDA responses. To overcome these limitations, future studies will aim to develop a framework including the EDA dynamics as a representative node in the connectivity network.

## 5 Conclusion

In this work, we proposed a novel methodological approach based on the analysis of EDA and EEG to investigate the effect of contextual hedonic odors and sympathetic arousal on the visual processing of neutral faces. Our main findings highlighted a higher N1 component during the unpleasant odor condition, whereas enhanced arousal to faces increased the N170 component. Moreover, both factors exerted a significant modulatory effect on the effective connectivity among cortical areas involved in face processing. In particular, the unpleasant odor strengthened the forward connection from the ITG to the MTG, whereas the same connection was inhibited by the simultaneous occurrence of face-evoked sympathetic responses. These results may suggest that, on the one hand, face processing in the context of an unpleasant odor is more focused on changeable facial aspects related to social interaction, while, on the other hand, increased arousal appears to enhance identity processing in the FFG. Future work will include the SMNA dynamics as a representative node in the connectivity network, to investigate EDA-EEG causal interactions and deepen the understanding of the role of sympathetic arousal in visual-olfactory multimodal integration.

## Conflict of interest statement

The authors declare no conflict of interest.

The research leading to these results has received partial funding from the Italian Ministry of Education and Research (MIUR) in the framework of the CrossLab project (Departments of Excellence). This research has received partial funding from the European Union Horizon 2020 Programme under grant agreement no. 824153 of the project "POTION: Promoting Social Interaction through Emotional Body Odours". Research partly funded by PNRR - M4C2 - Investimento 1.3, Partenariato Esteso PE00000013 - "FAIR - Future Artificial Intelligence Research" - Spoke 1 "Human-centered AI", funded by the European Commission under the NextGeneration EU programme. This publication was produced with the co-funding of the European Union - Next Generation EU, in the context of The National Recovery and Resilience Plan, Investment 1.5 Ecosystems of Innovation, Project Tuscany Health Ecosystem (THE), Spoke 3 "Advanced technologies, methods, materials and health analytics", CUP: I53C22000780001.

## References

* [1] Syrjänen E, Fischer H, Liuzza MT, Lindholm T, Olofsson JK. A review of the effects of valenced odors on face perception and evaluation. i-Perception. 2021;12(2):20416695211009552.
* [2] Aviezer H, Ensenberg N, Hassin RR. The inherently contextualized nature of facial emotion perception. Current opinion in psychology. 2017;17:47-54.
* [3] Damon F, Mezrai N, Magnier L, Leleu A, Durand K, Schaal B. Olfaction in the multisensory processing of faces: A narrative review of the influence of human body odors. Frontiers in Psychology. 2021;12. * [4] Li D, Jia J, Wang X. Unpleasant food odors modulate the processing of facial expressions: An event-related potential study. Frontiers in neuroscience. 2020;14:686. * [5] Cook S, Fallon N, Wright H, Thomas A, Giesbrecht T, Field M, et al. Pleasant and unpleasant odors influence hedonic evaluations of human faces: An event-related potential study. Frontiers in human neuroscience. 2015;9:661. * [6] Cook S, Kokmotou K, Soto V, Fallon N, Tyson-Carr J, Thomas A, et al. Pleasant and unpleasant odour-face combinations influence face and odour perception: An event-related potential study. Behavioural Brain Research. 2017;333:304-13. * [7] Cook S, Kokmotou K, Soto V, Wright H, Fallon N, Thomas A, et al. Simultaneous odour-face presentation strengthens hedonic evaluations and event-related potential responses influenced by unpleasant odour. Neuroscience Letters. 2018;672:22-7. * [8] Syrjänen E, Wiens S, Fischer H, Zakrzewska M, Wartel A, Larsson M, et al. Background odors modulate N170 ERP component and perception of emotional facial stimuli. Frontiers in Psychology. 2018;9:1000. * [9] Syrjänen E, Fischer H, Olofsson JK. Background odors affect behavior in a dot-probe task with emotionally expressive faces. Physiology & behavior. 2019;210:112540. * [10] Adolph D, Meister L, Pause BM. Context counts! Social anxiety modulates the processing of fearful faces in the context of chemosensory anxiety signals. Frontiers in human neuroscience. 2013;7:283. * [11] Leleu A, Godard O, Dollion N, Durand K, Schaal B, Baudouin JY. Contextual odors modulate the visual processing of emotional facial expressions: An ERP study. Neuropsychologia. 2015;77:366-79. * [12] Steinberg C, Dobel C, Schupp HT, Kissler J, Elling L, Pantev C, et al. Rapid and highly resolving: affective evaluation of olfactorily conditioned faces. Journal of Cognitive Neuroscience. 2012;24(1):17-27. * [13] Callara AL, Cecchetto C, Dal Bò E, Citi L, Gentili C, Vanello N, et al. Human body odors of happiness and fear modulate the late positive potential component during neutral face processing: a preliminary ERP study on healthy subjects. In: 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE; 2022. p. 4093-6. * [14] Hartigan A, Richards A. Disgust exposure and explicit emotional appraisal enhance the LPP in response to disgusted facial expressions. Social Neuroscience. 2017;12(4):458-67. * [15] Rubin D, Botanov Y, Hajcak G, Mujica-Parodi LR. Second-hand stress: inhalation of stress sweat enhances neural response to neutral faces. Social cognitive and affective neuroscience. 2012;7(2):208-12. * [16] Rossion B. Understanding face perception by means of human electrophysiology. Trends in cognitive sciences. 2014;18(6):310-8. * [17] Zatorre RJ, Jones-Gotman M, Rouby C. Neural mechanisms involved in odor pleasantness and intensity judgments. Neuroreport. 2000;11(12):2711-6. * [18] Jadauji JB, Djordjevic J, Lundström JN, Pack CC. Modulation of olfactory perception by visual cortex stimulation. Journal of Neuroscience. 2012;32(9):3095-100. * [19] Friston KJ. Functional and effective connectivity: a review. Brain connectivity. 2011;1(1):13-36. * [20] Schoffelen JM, Gross J. Source connectivity analysis with MEG and EEG. Human brain mapping. 2009;30(6):1857-65. 
* [21] Friston KJ, Harrison L, Penny W. Dynamic causal modelling. Neuroimage. 2003;19(4):1273-302. * [22] Kiebel SJ, Garrido MI, Moran RJ, Friston KJ. Dynamic causal modelling for EEG and MEG. Cognitive neurodynamics. 2008;2(2):121-36. * [23] Frässle S, Krach S, Paulus FM, Jansen A. Handedness is related to neural mechanisms underlying hemispheric lateralization of face processing. Scientific reports. 2016;6(1):1-17. * [24] Nguyen VT, Breakspear M, Cunnington R. Fusing concurrent EEG–fMRI with dynamic causal modeling: Application to effective connectivity during face perception. Neuroimage. 2014;102:60-70. * [25] Kessler R, Rusch KM, Wende KC, Schuster V, Jansen A. Revisiting the effective connectivity within the distributed cortical network for face perception. NeuroImage: Reports. 2021;1(4):100045. * [26] Li J, Liu J, Liang J, Zhang H, Zhao J, Rieth CA, et al. Effective connectivities of cortical regions for top-down face processing: a dynamic causal modeling study. Brain Research. 2010;1340:40-51. * [27] Fairhall SL, Ishai A. Effective connectivity within the distributed cortical network for face perception. Cerebral cortex. 2007;17(10):2400-6. * [28] Chen CC, Henson R, Stephan KE, Kilner JM, Friston KJ. Forward and backward connections in the brain: a DCM study of functional asymmetries. Neuroimage. 2009;45(2):453-62. * [29] Garvert MM, Friston KJ, Dolan RJ, Garrido MI. Subcortical amygdala pathways enable rapid face processing. NeuroImage. 2014;102:309-16. * [30] Sato W, Kochiyama T, Uono S, Matsuda K, Usui K, Usui N, et al. Bidirectional electric communication between the inferior occipital gyrus and the amygdala during face processing. Human Brain Mapping. 2017;38(9):4511-24. * [31] Lang PJ, Bradley MM, Fitzsimmons JR, Cuthbert BN, Scott JD, Moulder B, et al. Emotional arousal and activation of the visual cortex: an fMRI analysis. Psychophysiology. 1998;35(2):199-210. * [32] Balconi M, Pozzoli U. Arousal effect on emotional face comprehension: frequency band changes in different time intervals. Physiology & behavior. 2009;97(3-4):455-62. * [33] Eimer M. The face-specific N170 component reflects late stages in the structural encoding of faces. Neuroreport. 2000;11(10):2319-24. * [34] Almeida PR, Ferreira-Santos F, Chaves PL, Paiva TO, Barbosa F, Marques-Teixeira J. Perceived arousal of facial expressions of emotion modulates the N170, regardless of emotional category: Time domain and time–frequency dynamics. International Journal of Psychophysiology. 2016;99:48-56. * [35] Hietanen JK, Kirjavainen I, Nummenmaa L. Additive effects of affective arousal and top-down attention on the event-related brain responses to human bodies. Biological psychology. 2014;103:167-75. * [36] Lang PJ. The emotion probe: Studies of motivation and attention. American psychologist. 1995;50(5):372. * [37] Lang PJ, Bradley MM, Cuthbert BN. Emotion and attention: Stop, look, and listen. Cahiers de psychologie cognitive. 1998;17(4-5):997-1020. * [38] Balconi M, Lucchiari C. Consciousness and arousal effects on emotional face processing as revealed by brain oscillations. A gamma band analysis. International Journal of Psychophysiology. 2008;67(1):41-6. * [39] Boucsein W. Electrodermal activity. Springer Science & Business Media; 2012. * [40] Nieuwenhuis S, De Geus EJ, Aston-Jones G. The anatomical and functional relationship between the P3 and autonomic components of the orienting response. Psychophysiology. 2011;48(2):162-75. * [41] Frith CD, Allen HA. 
The skin conductance orienting response as an index of attention. Biological psychology. 1983;17(1):27-39. * [42] Barry RJ, Furedy JJ. Stimulus intensity and novelty interact in elicitation of the phasic electrodermal orienting response. International Journal of Psychophysiology. 1993;14(3):249-54. * [43] Spinks JA, Blowers GH, Shek DT. The role of the orienting response in the anticipation of information: A skin conductance response study. Psychophysiology. 1985;22(4):385-94. * [44] Greco A, Valenza G, Lanata A, Scilingo EP, Citi L. cvxEDA: A convex optimization approach to electrodermal activity processing. IEEE Transactions on Biomedical Engineering. 2015;63(4):797-804. * [45] Zeidman P, Jafarian A, Seghier ML, Litvak V, Cagnan H, Price CJ, et al. A guide to group effective connectivity analysis, part 2: Second level analysis with PEB. NeuroImage. 2019 Oct;200:12-25. * [46] Hummel T, Sekinger B, Wolf SR, Pauli E, Kobal G. ‘Sniffin’sticks’: olfactory performance assessed by the combined testing of odor identification, odor discrimination and olfactory threshold. Chemical senses. 1997;22(1):39-52. * [47] Naudin M, El-Hage W, Gomes M, Gaillard P, Belzung C, Atanasova B. State and trait olfactory markers of major depression. 2012. * [48] Lomonaco T, Salvo P, Ghimenti S, Biagini D, Vivaldi F, Bonini A, et al. Stability of volatile organic compounds in sorbent tubes following SARS-CoV-2 inactivation procedures. Journal of Breath Research. 2021;15(3):037102. * [49] Ma DS, Correll J, Wittenbrink B. The Chicago face database: A free stimulus set of faces and norming data. Behavior research methods. 2015;47(4):1122-35. * [50] Bradley MM, Lang PJ. Measuring emotion: the self-assessment manikin and the semantic differential. Journal of behavior therapy and experimental psychiatry. 1994;25(1):49-59. * [51] Peirce JW. PsychoPy—psychophysics software in Python. Journal of neuroscience methods. 2007;162(1-2):8-13. * [52] Greco A, Valenza G, Scilingo EP. Emotions and Mood States: Modeling, Elicitation, and Recognition. In: Advances in Electrodermal Activity Processing with Applications for Mental Health. Springer; 2016. p. 45-54. * [53] Sjouwerman R, Lonsdorf T. Latency of skin conductance responses across stimulus modalities. Psychophysiology. 2019;56(4):e13307. * [54] Delorme A, Makeig S. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of neuroscience methods. 2004;134(1):9-21. * [55] Mullen TR, Kothe CA, Chi YM, Ojeda A, Kerth T, Makeig S, et al. Real-time neuroimaging and cognitive monitoring using wearable dry EEG. IEEE Transactions on Biomedical Engineering. 2015;62(11):2553-67. * [56] Makeig S, Bell A, Jung TP, Sejnowski TJ. Independent component analysis of electroencephalographic data. Advances in neural information processing systems. 1995;8. * [57] Zhang D, Liu Y, Zhou C, Chen Y, Luo Y. Spatial attention effects of disgusted and fearful faces. PLoS One. 2014;9(7):e101608. * [58] Trautmann-Lengsfeld SA, Dominguez-Borras J, Escera C, Herrmann M, Fehr T. The perception of dynamic and static facial expressions of happiness and disgust investigated by ERPs and fMRI constrained source analysis. PLoS One. 2013;8(6):e66997. * [59] Benjamini Y, Yekutieli D. The control of the false discovery rate in multiple testing under dependency. Annals of statistics. 2001:1165-88. * [60] Ashburner J, Barnes G, Chen CC, Daunizeau J, Flandin G, Friston K, et al. SPM12 manual. Wellcome Trust Centre for Neuroimaging, London, UK. 2014;2464:4. 
# Walk These Ways: Tuning Robot Control for Generalization with Multiplicity of Behavior

Gabriel B. Margolis Pulkit Agrawal Improbable AI Lab, Massachusetts Institute of Technology {gmargo<EMAIL_ADDRESS>

###### Abstract

Learned locomotion policies can rapidly adapt to diverse environments similar to those experienced during training but lack a mechanism for fast tuning when they fail in an out-of-distribution test environment. This necessitates a slow and iterative cycle of reward and environment redesign to achieve good performance on a new task. As an alternative, we propose learning a single policy that encodes a structured family of locomotion strategies that solve training tasks in different ways, resulting in Multiplicity of Behavior (MoB). Different strategies generalize differently and can be chosen in real-time for new tasks or environments, bypassing the need for time-consuming retraining. We release a fast, robust open-source MoB locomotion controller, Walk These Ways, that can execute diverse gaits with variable footswing, posture, and speed, unlocking diverse downstream tasks: crouching, hopping, high-speed running, stair traversal, bracing against shoves, rhythmic dance, and more. Video and code release: https://gmargo11.github.io/walk-these-ways/

Figure 1: Multiplicity of Behavior (MoB) enables a human to tune a single quadruped policy trained on flat ground to diverse unseen environments. Top row: A low-frequency gait fails to sprint on slippery terrain (Gait 2; inset), but tuning it to high frequency results in success (Gait 1). However, a low frequency and high footswing height are necessary for stair traversal (Gait 2; middle image). A low footswing and wide stance (Gait 3) make the robot robust to leg shoves, but Gait 1, which succeeded at sprinting, fails. Tuning the gait thus aids in generalizing to different tasks. Bottom row: Examples of other behaviors enabled by our controller.

> Keywords: Locomotion, Reinforcement Learning, Task Specification

## 1 Introduction

Recent works have established that quadruped locomotion controllers trained with reinforcement learning in simulation can successfully be transferred to traverse challenging natural terrains [1, 2, 3]. Adaptation to diverse terrains is accomplished by estimating terrain properties from sensory observations that are then used by the controller (i.e., online system identification). The success of this paradigm relies on two assumptions: a priori modeling of environment parameters that can vary during deployment and the ability to estimate these parameters from sensory observations. To bypass the first assumption, one possibility is to widely vary a large set of environment parameters during training. However, this creates a hard learning problem due to the creation of challenging or infeasible locomotion scenarios. To simplify learning, the designer typically chooses a subset of parameters that are randomized in a carefully restricted range. Even in this setup, additional measures such as a learning curriculum and reward shaping are necessary for successful learning in simulation. As a result of these practical restrictions on the expressiveness of the simulation, the robot quite often encounters scenarios during deployment that were not modeled during training. For instance, if the robot is only presented with flat ground and terrain geometry is not varied during training, it may fail to traverse non-flat terrains such as stairs.
In such a case, it is common to tweak the training environments or the reward functions and re-train the policy. This iterative loop of re-training and real-world testing is tedious. To make things worse, in some scenarios such iteration is insufficient because it is not possible to accurately model or sense important environment properties. For example, thick bushes are both hard to simulate due to compliance and hard to sense because depth sensors do not distinguish them from walls. Thus, the robot may attempt to climb over thick bushes like a rock or move through them with an overly conservative gait that leaves the robot stuck. The examples above illustrate that even for the most advanced sim-to-real systems, the real world offers new challenges. We broadly refer to scenarios that can be simulated but are not anticipated during training, as well as situations that cannot be simulated or identified from sensory observations, as out-of-distribution cases. We present a framework for policy learning that enables improved performance in out-of-distribution scenarios under some assumptions detailed below.

Our key insight is that given a task, there are multiple equally good solutions (i.e., under-specification [4]) that have equivalent training performance but can generalize in different ways. For instance, the task of walking on flat ground only imposes a constraint on the velocity of the robot’s body, but not on how the legs should move or how high the torso should be above the ground. Consider two different walking behaviors: crouch, where the robot keeps its torso close to the ground, and stomp, where the torso is high and the legs have a high foot swing. While both crouch and stomp succeed at walking on flat ground, their generalization to out-of-distribution scenarios is different: with crouch the robot can traverse under obstacles but not stairs, whereas with stomp it can climb over curbs/stairs but not move under obstacles (Figure 1). Out of the many possible locomotion behaviors that succeed in the training environment, typical reinforcement learning formulations result in a policy that only embodies one solution and therefore expresses a single generalizing bias. To facilitate generalization to diverse scenarios, we propose a technique, Multiplicity of Behavior (MoB), that, given the same observation history and a small set of behavior parameters, outputs different walking behaviors. When faced with an unseen scenario, one can test different behaviors by varying these parameters, which affords much quicker iteration than re-training a new policy and facilitates collection of online demonstrations by a human pilot. The utility of MoB depends on the assumption that some subset of behaviors successful in the restricted training environment will also succeed in the out-of-distribution target environment. To demonstrate that this is true in a meaningful sense, we chose an extreme case: train a single policy for quadrupedal walking only on flat ground and evaluate it on non-flat terrains and new tasks. We show that a human operator can tune behaviors in real-time to enable successful locomotion in the presence of diverse unseen terrains and dynamics including uneven ground, stairs, external shoves, and constrained spaces (Figure 1). The same tuning mechanisms can be used to compose behaviors to perform new tasks such as payload manipulation and rhythmic dance (Figure 1).
This work contributes a robust low-level quadruped controller that can execute diverse structured behaviors, which we hope will be a useful building block for future locomotion applications. It can serve as a platform for collecting quadruped demonstrations for diverse tasks, enabled by an interpretable high-level control interface. Furthermore, our controller showcases MoB as a practical tool for out-of-distribution generalization. While our implementation of MoB leverages expert knowledge in the domain of locomotion, in general, the technique of learning multiple methods of achieving goals to facilitate generalization is a promising approach with potential for broad application in sim-to-real reinforcement learning for robotics.

Figure 2: A learned controller transitions between classical structured gaits: trotting, pronking, pacing, and bounding in place at alternating frequencies of 2 Hz and 4 Hz. Images show the robot achieving contact phases of each gait, with stance feet highlighted in red. Black shading in the bottom plot reflects the timing reference variables $\textbf{t}_{t}$ for each foot; colored bars report the contact states measured by foot sensors. As we show later, diverse behaviors can facilitate novel downstream capabilities.

## 2 Background

Auxiliary Rewards. Locomotion gaits learned using only the task reward (e.g., velocity tracking) have not been shown to successfully transfer to the real world. It is necessary to train using auxiliary rewards that bias the robot to maintain a particular contact schedule, action smoothness, energy consumption, or foot clearance [1, 2, 3, 5, 6, 7] to compensate for the sim-to-real gap. Such auxiliary rewards can be interpreted as biases for generalization. For example, foot clearance enables the robot to be robust when the terrain in the real world is more uneven, or the robot’s body sinks more in the real world than in simulation [7]. If an agent fails on a real-world task, it is common practice to manually tune auxiliary rewards to encourage the emergence of successful real-world behavior. However, such tuning is tedious because it requires repeated iterations of training and deployment. Furthermore, such tuning is task-specific and must be repeated for new tasks commanded to the agent. The difficulty of designing a single set of auxiliary rewards that promote generalization in a diverse set of downstream tasks is illustrated in the top row insets of Figure 1: each shows an instance where a fixed auxiliary reward would lead to failure for one task, despite working well for another.

Learning Locomotion with Parameterized Style. Closely related recent work on bipeds has explicitly included gait parameters in the task specification through parameter-augmented auxiliary reward terms, including tracking rewards for contact patterns [8] and target foot placements [9]. [8] used gait-dependent reward terms in bipedal locomotion to control the timing offset between the two feet and the duration of the swing phase. The reward structure of [8] inspired ours, but due to its small space of command parameters, did not propose and would not support application to compositional tasks or out-of-distribution environments. To similar effect, other works have imposed parameterized style through libraries of reference gaits. [10, 11] generated a library of reference trajectories and trained a goal-conditioned policy to imitate them.
[12] demonstrated that a small discrete set of motion styles can be learned simultaneously from a reference trajectory library.

Learning with Diversity Objectives. Several prior methods aim to automatically learn a collection of high-performing and diverse behaviors. Quality diversity (QD) methods [13, 14, 15] learn a library of diverse policies by enforcing a novelty objective defined among trajectories. They typically perform this optimization using evolutionary strategies. QD has demonstrated benefits including improved optimization performance and reuse of skills for online adaptation [13, 14]. Another line of work uses unsupervised objectives for skill discovery in RL, towards improving optimization [16] or out-of-distribution generalization [17, 18]. Unsupervised diversity approaches hold promise, but they have not yet scaled to real robotic platforms and do not produce a grounded interface of parameters for guiding behavior.

Hierarchical Control Using Gait Parameters. Several works have learned a high-level policy to accomplish downstream tasks by modulating the gait parameters of a low-level model-based controller. This approach has been previously applied to energy minimization [19] and vision-guided foot placement [20, 21]. These works relied on a model-predictive low-level controller to execute the different gaits in the absence of a gait-conditioned learned controller. The tradeoffs between model-based control and reinforcement learning are well discussed in prior literature [3, 22]. Our work enables revisiting hierarchical approaches with learning at both the high and low levels.

## 3 Method

To obtain MoB, we train a conditional policy $\pi(\cdot|\textbf{c}_{t},\textbf{b}_{t})$ that achieves tasks specified by the command ($\textbf{c}_{t}$) in multiple ways that result from different choices of behavior parameters, $\textbf{b}_{t}$. The question arises of how to define $\textbf{b}_{t}$. We could learn behaviors using an unsupervised diversity metric, but these behaviors might not be useful [4] and are not human-tunable. To overcome these issues, we leverage human intuition about useful behavior parameters ($\textbf{b}_{t}$) corresponding to gait properties like foot swing motion, body posture, and contact schedule [6, 7, 8, 23]. During training, the agent receives a combination of task rewards (for velocity tracking), fixed auxiliary rewards (to promote sim-to-real transfer and stable motion), and finally augmented auxiliary rewards (that encourage locomotion in the desired style). During deployment in a novel environment, a human operator can tune the behavior of the policy by changing its input $\textbf{b}_{t}$.
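For concreteness, the following is a minimal sketch (not our released implementation) of how the command $\textbf{c}_{t}$ and behavior $\textbf{b}_{t}$ could be assembled and passed to a gait-conditioned policy. The eight behavior fields follow the parameterization defined in Section 3.1 below; the class, field, and function names, as well as the numeric values in the example, are illustrative assumptions.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Command:
    """Task command c_t: target body velocities (Sec. 3.1)."""
    vx: float  # forward velocity (m/s)
    vy: float  # lateral velocity (m/s)
    wz: float  # yaw rate (rad/s)

@dataclass
class Behavior:
    """Behavior parameters b_t: gait style 'knobs' (Sec. 3.1)."""
    theta1: float          # timing offsets between pairs of feet
    theta2: float
    theta3: float
    freq_hz: float         # stepping frequency
    body_height: float     # body height command (m)
    pitch: float           # body pitch command (rad)
    stance_width: float    # foot stance width command (m)
    footswing_height: float

def condition_policy(policy, obs_history, cmd: Command, beh: Behavior):
    """Concatenate c_t and b_t with the observation history and query
    the gait-conditioned policy pi(. | c_t, b_t)."""
    c_t = np.array([cmd.vx, cmd.vy, cmd.wz], dtype=np.float32)
    b_t = np.array([beh.theta1, beh.theta2, beh.theta3, beh.freq_hz,
                    beh.body_height, beh.pitch, beh.stance_width,
                    beh.footswing_height], dtype=np.float32)
    return policy(np.concatenate([obs_history, c_t, b_t]))

# Same task command, two different behaviors (crouch-like vs. stomp-like):
cmd = Command(vx=1.0, vy=0.0, wz=0.0)
crouch = Behavior(0.5, 0.0, 0.0, 3.0, body_height=0.20, pitch=0.0,
                  stance_width=0.30, footswing_height=0.03)
stomp = Behavior(0.5, 0.0, 0.0, 3.0, body_height=0.34, pitch=0.0,
                 stance_width=0.30, footswing_height=0.12)
```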
| Term | Equation | Weight |
|---|---|---|
| $r_{v^{{\textrm{cmd}}}_{x,y}}$: xy velocity tracking | $\exp\{-\lvert\textbf{v}_{xy}-\textbf{v}^{\text{cmd}}_{xy}\rvert^{2}/\sigma_{vxy}\}$ | $0.02$ |
| $r_{\omega^{{\textrm{cmd}}}_{z}}$: yaw velocity tracking | $\exp\{-(\boldsymbol{\omega}_{z}-\boldsymbol{\omega}_{z}^{{\textrm{cmd}}})^{2}/\sigma_{\omega z}\}$ | $0.01$ |
| $r_{c^{{\textrm{cmd}}}_{f}}$: swing phase tracking (force) | $\sum_{\textrm{foot}}[1-C^{{\textrm{cmd}}}_{\textrm{foot}}(\boldsymbol{\theta}^{{\textrm{cmd}}},t)]\exp\{-\lvert\textbf{f}^{\textrm{foot}}\rvert^{2}/\sigma_{cf}\}$ | $-0.08$ |
| $r_{c^{{\textrm{cmd}}}_{v}}$: stance phase tracking (velocity) | $\sum_{\textrm{foot}}[C^{{\textrm{cmd}}}_{\textrm{foot}}(\boldsymbol{\theta}^{{\textrm{cmd}}},t)]\exp\{-\lvert\textbf{v}^{\textrm{foot}}_{xy}\rvert^{2}/\sigma_{cv}\}$ | $-0.08$ |
| $r_{\boldsymbol{h}_{z}^{{\textrm{cmd}}}}$: body height tracking | $(\boldsymbol{h}_{z}-\boldsymbol{h}_{z}^{{\textrm{cmd}}})^{2}$ | $-0.2$ |
| $r_{\boldsymbol{\phi}^{{\textrm{cmd}}}}$: body pitch tracking | $(\boldsymbol{\phi}-\boldsymbol{\phi}^{{\textrm{cmd}}})^{2}$ | $-0.1$ |
| $r_{\boldsymbol{s}_{y}^{{\textrm{cmd}}}}$: Raibert heuristic footswing tracking | $(\textbf{p}_{x,y,\textrm{foot}}^{f}-\textbf{p}_{x,y,\textrm{foot}}^{f,\text{cmd}}(\boldsymbol{s}_{y}^{{\textrm{cmd}}}))^{2}$ | $-0.2$ |
| $r_{\boldsymbol{h}_{z}^{f,{\textrm{cmd}}}}$: footswing height tracking | $\sum_{\textrm{foot}}(\boldsymbol{h}_{z,\textrm{foot}}^{f}-\boldsymbol{h}_{z}^{f,{\textrm{cmd}}})^{2}\,C^{{\textrm{cmd}}}_{\textrm{foot}}(\boldsymbol{\theta}^{{\textrm{cmd}}},t)$ | $-0.6$ |
| z velocity | $\textbf{v}_{z}^{2}$ | $-4\times 10^{-4}$ |
| roll-pitch velocity | $\lvert\boldsymbol{\omega}_{xy}\rvert^{2}$ | $-2\times 10^{-5}$ |
| foot slip | $\lvert\textbf{v}^{\textrm{foot}}_{xy}\rvert^{2}$ | $-8\times 10^{-4}$ |
| thigh/calf collision | $\mathbbm{1}_{\text{collision}}$ | $-0.02$ |
| joint limit violation | $\mathbbm{1}_{q_{i}>q_{\max}\,\lor\,q_{i}<q_{\min}}$ | $-0.2$ |
| joint torques | $\lvert\boldsymbol{\tau}\rvert^{2}$ | $-2\times 10^{-5}$ |
| joint velocities | $\lvert\dot{\textbf{q}}\rvert^{2}$ | $-2\times 10^{-5}$ |
| joint accelerations | $\lvert\ddot{\textbf{q}}\rvert^{2}$ | $-5\times 10^{-9}$ |
| action smoothing | $\lvert\textbf{a}_{t-1}-\textbf{a}_{t}\rvert^{2}$ | $-2\times 10^{-3}$ |
| action smoothing, 2nd order | $\lvert\textbf{a}_{t-2}-2\textbf{a}_{t-1}+\textbf{a}_{t}\rvert^{2}$ | $-2\times 10^{-3}$ |

Table 1: Reward structure. The first two rows are task rewards, the next six rows are augmented auxiliary rewards (functions of the behavior vector $\textbf{b}_{t}$), and the remaining rows are fixed auxiliary rewards.

### 3.1 Task Structure for MoB

Task Specification. We consider the task of omnidirectional velocity tracking. This task is specified by a $3$-dimensional command vector $\textbf{c}_{t}=[\textbf{v}_{x}^{{\textrm{cmd}}},\textbf{v}_{y}^{{\textrm{cmd}}},\boldsymbol{\omega}_{z}^{{\textrm{cmd}}}]$, where $\textbf{v}_{x}^{{\textrm{cmd}}},\textbf{v}_{y}^{{\textrm{cmd}}}$ are the desired linear velocities in the body-frame x- and y-axes, and $\boldsymbol{\omega}_{z}^{{\textrm{cmd}}}$ is the desired angular velocity in the yaw axis.
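As a concrete illustration of the task rewards in Table 1, the sketch below evaluates the xy and yaw velocity-tracking terms. The tracking scales $\sigma_{vxy}$ and $\sigma_{\omega z}$ appear in Table 1 without values, so the defaults used here are placeholders rather than the values used in training.

```python
import numpy as np

def velocity_tracking_rewards(v_xy, v_xy_cmd, w_z, w_z_cmd,
                              sigma_vxy=0.25, sigma_wz=0.25):
    """Task rewards from Table 1; the table's weights (0.02 and 0.01)
    are applied by the caller.

    v_xy, v_xy_cmd : measured / commanded body-frame xy velocity (m/s)
    w_z, w_z_cmd   : measured / commanded yaw rate (rad/s)
    sigma_*        : tracking scales (illustrative placeholder values)
    """
    r_v = np.exp(-np.sum((np.asarray(v_xy) - np.asarray(v_xy_cmd)) ** 2) / sigma_vxy)
    r_w = np.exp(-(w_z - w_z_cmd) ** 2 / sigma_wz)
    return r_v, r_w
```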
Behavior Specification. We parameterize the style of task completion by an $8$-dimensional vector of behavior parameters, $\textbf{b}_{t}$: $\textbf{b}_{t}=[\boldsymbol{\theta}_{1}^{{\textrm{cmd}}},\boldsymbol{\theta}_{2}^{{\textrm{cmd}}},\boldsymbol{\theta}_{3}^{{\textrm{cmd}}},\boldsymbol{f}^{{\textrm{cmd}}},\boldsymbol{h}_{z}^{{\textrm{cmd}}},\boldsymbol{\phi}^{{\textrm{cmd}}},\boldsymbol{s}_{y}^{{\textrm{cmd}}},\boldsymbol{h}_{z}^{f,{\textrm{cmd}}}].$ $\boldsymbol{\theta}^{{\textrm{cmd}}}=(\boldsymbol{\theta}_{1}^{{\textrm{cmd}}},\boldsymbol{\theta}_{2}^{{\textrm{cmd}}},\boldsymbol{\theta}_{3}^{{\textrm{cmd}}})$ are the timing offsets between pairs of feet. These express gaits including pronking ($\boldsymbol{\theta}^{{\textrm{cmd}}}=(0.0,0,0)$), trotting ($\boldsymbol{\theta}^{{\textrm{cmd}}}=(0.5,0,0)$), bounding ($\boldsymbol{\theta}^{{\textrm{cmd}}}=(0,0.5,0)$), and pacing ($\boldsymbol{\theta}^{{\textrm{cmd}}}=(0,0,0.5)$), as well as their continuous interpolations such as galloping ($\boldsymbol{\theta}^{{\textrm{cmd}}}=(0.25,0,0)$). Taken together, the parameters $\boldsymbol{\theta}^{{\textrm{cmd}}}$ can express all two-beat quadrupedal contact patterns; Figure 2 provides a visual illustration. $\boldsymbol{f}^{{\textrm{cmd}}}$ is the stepping frequency expressed in Hz. As an example, commanding $\boldsymbol{f}^{{\textrm{cmd}}}=3$ Hz will result in each foot making contact three times per second. $\boldsymbol{h}_{z}^{{\textrm{cmd}}}$ is the body height command; $\boldsymbol{\phi}^{{\textrm{cmd}}}$ is the body pitch command. $\boldsymbol{s}_{y}^{{\textrm{cmd}}}$ is the foot stance width command; $\boldsymbol{h}_{z}^{f,{\textrm{cmd}}}$ is the footswing height command.

Reward function. All reward terms are listed in Table 1. Task rewards for body velocity tracking are defined as functions of the command vector $\textbf{c}_{t}$. Auxiliary rewards are used to constrain the quadruped’s motion for various reasons. “Fixed” auxiliary rewards are independent of the behavior parameters ($\textbf{b}_{t}$) and encourage stability and smoothness across all gaits for better sim-to-real transfer. During training, one concern is that the robot might abandon its task or choose an early termination when the task reward is overwhelmed by penalties from the auxiliary objectives. To resolve this, as in [7], we force the total reward to be a positive linear function of the task reward by computing it as $r_{\textrm{task}}\exp{(c_{\textrm{aux}}r_{\textrm{aux}})}$, where $r_{\textrm{task}}$ is the sum of (positive) task reward terms and $r_{\textrm{aux}}$ is the sum of (negative) auxiliary reward terms (we use $c_{\textrm{aux}}=0.02$). This way, the agent is always rewarded for progress towards the task, more when the auxiliary objectives are satisfied and less when they are not.

For MoB, we define augmented auxiliary rewards as functions of the behavior vector $\textbf{b}_{t}$. We designed these rewards to increase when the realized behavior matches $\textbf{b}_{t}$, and to not conflict with the task reward. This required some careful design of the reward structure. For example, when implementing stance width as a behavior parameter, a naive approach would be to simply reward a constant desired distance between the left and right feet. However, this penalizes the robot during fast turning tasks requiring relative lateral motion of the feet.
To avoid this, we implement the Raibert Heuristic, which suggests the necessary kinematic motion of the feet to achieve a particular body velocity and contact schedule [23, 24]. The Raibert Heuristic computes the desired foot position in the ground plane, $\boldsymbol{p}_{x,y,\textrm{foot}}^{f,\text{cmd}}(\boldsymbol{s}_{y}^{{\textrm{cmd}}})$, as an adjustment to the baseline stance width to make it consistent with the desired contact schedule and body velocity. To define the desired contact schedule, the function $C^{{\textrm{cmd}}}_{\textrm{foot}}(\boldsymbol{\theta}^{{\textrm{cmd}}},t)$ computes the desired contact state of each foot from the phase and timing variable, as described in [8], with details given in the appendix.

### 3.2 Learning Diversified Locomotion

Task and Behavior Sampling. In order to learn graceful online transitions between behaviors, we resample the desired task and behavior within each training episode. To enable the robot to both run and spin fast, we sample the task $\textbf{c}_{t}=(\textbf{v}_{x}^{{\textrm{cmd}}},\textbf{v}_{y}^{{\textrm{cmd}}},\boldsymbol{\omega}_{z}^{{\textrm{cmd}}})$ using the grid adaptive curriculum strategy from [3]. Then, we need to sample a target behavior $\textbf{b}_{t}$. First, we sample $(\boldsymbol{\theta}_{1}^{{\textrm{cmd}}},\boldsymbol{\theta}_{2}^{{\textrm{cmd}}},\boldsymbol{\theta}_{3}^{{\textrm{cmd}}})$ as one of the symmetric quadrupedal contact patterns (pronking, trotting, bounding, or pacing), which are known to be more stable and which we found to be a sufficient basis for diverse useful gaits. Then, the remaining command parameters $(\textbf{v}_{y}^{{\textrm{cmd}}},\boldsymbol{f}^{{\textrm{cmd}}},\boldsymbol{h}_{z}^{{\textrm{cmd}}},\boldsymbol{\phi}^{{\textrm{cmd}}},\boldsymbol{h}_{z}^{f,{\textrm{cmd}}},\boldsymbol{s}_{y}^{{\textrm{cmd}}})$ are sampled independently and uniformly. Their ranges are given in Table 5.

Policy Input. The input to the policy is a $30$-step history of observations $\textbf{o}_{t-H...t}$, commands $\textbf{c}_{t-H...t}$, behaviors $\textbf{b}_{t-H...t}$, previous actions $\textbf{a}_{t-H-1...t-1}$, and timing reference variables $\textbf{t}_{t-H...t}$. The observation space $\textbf{o}_{t}$ consists of joint positions and velocities $\textbf{q}_{t},\dot{\textbf{q}}_{t}$ (measured by joint encoders) and the gravity vector in the body frame $\textbf{g}_{t}$ (measured by accelerometer). The timing reference variables $\textbf{t}_{t}=[\sin(2\pi t^{\textrm{FR}}),\sin(2\pi t^{\textrm{FL}}),\sin(2\pi t^{\textrm{RR}}),\sin(2\pi t^{\textrm{RL}})]$ are computed from the offset timings of each foot: $[t^{\textrm{FR}},t^{\textrm{FL}},t^{\textrm{RR}},t^{\textrm{RL}}]=[t+\boldsymbol{\theta}_{2}^{{\textrm{cmd}}}+\boldsymbol{\theta}_{3}^{{\textrm{cmd}}},t+\boldsymbol{\theta}_{1}^{{\textrm{cmd}}}+\boldsymbol{\theta}_{3}^{{\textrm{cmd}}},t+\boldsymbol{\theta}_{1}^{{\textrm{cmd}}},t+\boldsymbol{\theta}_{2}^{{\textrm{cmd}}}]$, where $t$ is a counter variable that advances from $0$ to $1$ during each gait cycle and ${}^{\textrm{FR}}$, ${}^{\textrm{FL}}$, ${}^{\textrm{RR}}$, ${}^{\textrm{RL}}$ are the four feet. This form is adapted from [8] to express quadrupedal gaits.
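To make the timing machinery concrete, here is a minimal sketch of how the per-foot phases and timing reference variables $\textbf{t}_{t}$ above could be computed from the commanded offsets; the duty-cycle threshold in `desired_contact` is an illustrative stand-in for the smooth contact schedule of [8] (whose exact form is given in the appendix), and all function names are ours.

```python
import numpy as np

def timing_reference(t, theta1, theta2, theta3):
    """Per-foot phases and timing reference variables t_t (Sec. 3.2).

    t is a counter advancing from 0 to 1 over each gait cycle;
    theta1..theta3 are the commanded offsets between pairs of feet.
    Foot order: FR, FL, RR, RL. Phases are wrapped to [0, 1).
    """
    phases = np.array([t + theta2 + theta3,   # front-right
                       t + theta1 + theta3,   # front-left
                       t + theta1,            # rear-right
                       t + theta2]) % 1.0     # rear-left
    t_ref = np.sin(2.0 * np.pi * phases)      # timing reference variables t_t
    return phases, t_ref

def desired_contact(phases, duty=0.5):
    """Illustrative desired contact indicator C^cmd_foot: feet whose phase
    falls in the first `duty` fraction of the cycle are treated as stance
    feet. The paper instead uses a smooth schedule following [8]."""
    return (phases < duty).astype(np.float32)

# Example: a trot (theta = (0.5, 0, 0)) at f_cmd Hz advances t by f_cmd * dt
# each control step before calling timing_reference(t, 0.5, 0.0, 0.0).
```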
Policy Architecture. Our policy body is an MLP with hidden layer sizes $[512,256,128]$ and ELU activations. Besides the above, the policy input also includes estimated domain parameters: the velocity of the robot body and the ground friction, which are predicted from the observation history using supervised learning in the manner of [7]. The estimator module is an MLP with hidden layer sizes $[256,128]$ and ELU activations. We did not analyze the impact of this estimation on performance but found it useful for visualizing deployments.

Action Space. The action $\textbf{a}_{t}$ consists of position targets for each of the twelve joints. A zero action corresponds to the nominal joint position, $\hat{\textbf{q}}$. The position targets are tracked using a proportional-derivative controller with $k_{p}=20,k_{d}=0.5$.

### 3.3 Design Choices for Sim-to-Real Transfer

Domain Randomization. For better sim-to-real transfer, we train a policy that is robust to a range of values for the robot’s body mass, motor strength, joint position calibration, ground friction and restitution, and the orientation and magnitude of gravity. As we are interested in studying out-of-distribution generalization, we only train on flat ground without any randomization of terrain geometry. This choice also simplified training. The randomization ranges of all parameters are given in Appendix A.

Latency and Actuator Modeling. Directly identifying invariant properties avoids overly conservative behavior from unnecessary domain randomization. We perform system identification to reduce the sim-to-real gap in the robot dynamics. Following [22], we train an actuator network to capture the non-ideal relationship between PD error and realized torque. Separately, we identify a latency of around 20 ms in our system and model this as a constant action delay during simulation.

| Gait | 0.0 m/s | 1.0 m/s | 2.0 m/s | 3.0 m/s |
|---|---|---|---|---|
| Trotting | $9_{\pm 1}$ | $24_{\pm 1}$ | $53_{\pm 5}$ | $98_{\pm 9}$ |
| Pronking | $32_{\pm 1}$ | $43_{\pm 2}$ | $68_{\pm 5}$ | $112_{\pm 5}$ |
| Pacing | $13_{\pm 3}$ | $25_{\pm 2}$ | $55_{\pm 3}$ | $99_{\pm 6}$ |
| Bounding | $22_{\pm 2}$ | $39_{\pm 4}$ | $78_{\pm 5}$ | $127_{\pm 35}$ |
| Gait-free Baseline | $17_{\pm 5}$ | $35_{\pm 5}$ | $64_{\pm 10}$ | $102_{\pm 14}$ |
| Trotting ($\boldsymbol{f}^{{\textrm{cmd}}}=2$ Hz) | $11_{\pm 2}$ | $25_{\pm 1}$ | $55_{\pm 4}$ | $104_{\pm 8}$ |
| Trotting ($\boldsymbol{f}^{{\textrm{cmd}}}=3$ Hz) | $9_{\pm 1}$ | $24_{\pm 1}$ | $53_{\pm 5}$ | $98_{\pm 9}$ |
| Trotting ($\boldsymbol{f}^{{\textrm{cmd}}}=4$ Hz) | $9_{\pm 1}$ | $26_{\pm 0}$ | $60_{\pm 4}$ | $114_{\pm 12}$ |
| Trotting ($\boldsymbol{h}_{z}^{{\textrm{cmd}}}=20$ cm) | $9_{\pm 1}$ | $26_{\pm 1}$ | $56_{\pm 3}$ | $102_{\pm 8}$ |
| Trotting ($\boldsymbol{h}_{z}^{{\textrm{cmd}}}=30$ cm) | $9_{\pm 1}$ | $24_{\pm 1}$ | $53_{\pm 5}$ | $98_{\pm 9}$ |
| Trotting ($\boldsymbol{h}_{z}^{{\textrm{cmd}}}=40$ cm) | $10_{\pm 1}$ | $23_{\pm 1}$ | $52_{\pm 4}$ | $95_{\pm 9}$ |

Table 2: Behavior tuning enables interventional studies on the relationship between gait properties and performance criteria within a single policy. Here, we illustrate how power consumption varies across speeds for common quadrupedal gaits and for a baseline policy without gait constraint. Several structured gaits surpass the efficiency of the unconstrained gait across all speeds.

### 3.4 Materials

Simulator and Learning Algorithm. We define our training environment in the Isaac Gym simulator [6, 25]. We train policies using Proximal Policy Optimization [26]; details are given in Appendix A.

Hardware.
We deploy our controller in the real world on the Unitree Go1 Edu robot [27]. An onboard Jetson TX2 NX computer runs our trained policy. We implement an interface based on Lightweight Communications and Marshalling (LCM) [28] to pass sensor data, motor commands, and joystick state between our code and the low-level control SDK provided by Unitree. For both training and deployment, the control frequency is 50 Hz.

Gait-free Baseline. To understand the impact of MoB on performance, we compare our controller to a baseline velocity-tracking controller (the “gait-free baseline”). The gait-free baseline is trained by the method above, but excludes all augmented auxiliary rewards (Table 1). Therefore, it only learns one solution to the training environment, and its actions are independent of the behavior parameters $\textbf{b}_{t}$.

## 4 Experimental Results

### 4.1 Sim-to-Real Transfer and Gait Switching

We deploy the controller learned in simulation in the real world and first evaluate its performance on flat ground similar to the training environment. To start, we demonstrate generating and switching between structured gaits that are well known in the locomotion community. Figure 2 shows torques and contact states during transitions between trotting, pronking, bounding, and pacing while alternating $\boldsymbol{f}^{{\textrm{cmd}}}$ between 2 Hz and 4 Hz. We find that all gait parameters are consistently tracked after sim-to-real transfer. Videos (i)-(iv) on the project website visualize the different gaits obtained by modulating each parameter in $\textbf{b}_{t}$ individually.

| Gait | $r_{v^{{\textrm{cmd}}}_{x,y}}$ | $r_{\omega^{{\textrm{cmd}}}_{z}}$ | $r_{c^{{\textrm{cmd}}}_{f}}$ | $r_{c^{{\textrm{cmd}}}_{v}}$ | Survival |
|---|---|---|---|---|---|
| Trotting | $0.80_{\pm 0.01}^{(0.95)}$ | $0.76_{\pm 0.00}^{(0.89)}$ | $0.95_{\pm 0.00}^{(0.97)}$ | $0.98_{\pm 0.00}^{(0.98)}$ | $0.88_{\pm 0.01}^{(1.00)}$ |
| Pronking | $0.84_{\pm 0.01}^{(0.94)}$ | $0.77_{\pm 0.01}^{(0.85)}$ | $0.96_{\pm 0.00}^{(0.96)}$ | $0.97_{\pm 0.00}^{(0.98)}$ | $0.82_{\pm 0.02}^{(1.00)}$ |
| Pacing | $0.76_{\pm 0.01}^{(0.91)}$ | $0.76_{\pm 0.01}^{(0.81)}$ | $0.94_{\pm 0.00}^{(0.96)}$ | $0.98_{\pm 0.00}^{(0.98)}$ | $0.87_{\pm 0.02}^{(1.00)}$ |
| Bounding | $0.80_{\pm 0.01}^{(0.88)}$ | $0.73_{\pm 0.01}^{(0.86)}$ | $0.94_{\pm 0.00}^{(0.96)}$ | $0.98_{\pm 0.00}^{(0.98)}$ | $0.82_{\pm 0.01}^{(1.00)}$ |
| Gait-free | $0.81_{\pm 0.03}^{(0.96)}$ | $0.74_{\pm 0.06}^{(0.92)}$ | – | – | $0.83_{\pm 0.01}^{(1.00)}$ |

Table 3: Zero-shot generalization to platform terrain (visualized right). Pacing and trotting yield the best survival time in out-of-distribution deployment, outperforming the gait-free baseline. Pronking attains the best velocity tracking performance, with similar survival time to the baseline. We report the fraction of maximum episodic reward. The superscript reports performance in the flat training environment with no platforms. The subscript reports the standard deviation across three random seeds.

### 4.2 Leveraging MoB for Generalization

Tuning for New Tasks. After training using a generic locomotion objective, one might wish to tune a controller’s behavior to optimize a new metric in the original environment. MoB facilitates this if some subset of learned behaviors outperforms the gait-free policy by the new task metric.
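When the new metric can be evaluated in simulation, this kind of tuning can also be scripted offline. The following is a minimal sketch of a brute-force sweep over a few behavior parameters; the sweep routine, its parameter grid, and the `evaluate` callback are illustrative placeholders, not the procedure used in our experiments, where behavior tuning was performed by a human operator.

```python
import itertools

def tune_behavior(evaluate, base_behavior, grids):
    """Sweep a few behavior parameters of a trained MoB policy and keep
    the setting that scores best on a new metric.

    evaluate(behavior) -> float : rolls out the policy with the given
                                  behavior and returns the metric
                                  (higher is better); placeholder hook.
    base_behavior : dict of default behavior parameters b_t
    grids : dict mapping parameter name -> list of candidate values
    """
    names = list(grids)
    best_score, best_behavior = float("-inf"), dict(base_behavior)
    for values in itertools.product(*(grids[n] for n in names)):
        candidate = dict(base_behavior, **dict(zip(names, values)))
        score = evaluate(candidate)
        if score > best_score:
            best_score, best_behavior = score, candidate
    return best_behavior, best_score

# Example grid (illustrative values, in line with the gaits compared below):
# grids = {"freq_hz": [2.0, 3.0, 4.0], "body_height": [0.20, 0.30, 0.40]}
```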
Energy efficiency (simulated): We consider the task of minimizing the mechanical power consumption (J/s), measured by summing the product of joint velocity and torque at each of the 12 motors: $\sum_{i}\max(\boldsymbol{\tau}_{i}\dot{\textbf{q}}_{i},0)$. As shown in Table 2, several choices of the contact schedule $\boldsymbol{\theta}^{{\textrm{cmd}}}$, height $\boldsymbol{h}_{z}^{{\textrm{cmd}}}$, and frequency $\boldsymbol{f}^{{\textrm{cmd}}}$ outperform the gait-free policy in this unseen metric, consuming less energy across all speeds. Therefore, tuning the behavior parameters can facilitate tuning performance on a new objective (energy efficiency) in the original training environment.

Payload manipulation: We experiment with another task where the robot is required to transport a ball from one place to another, then bend its body so the ball is deposited on the ground. A gait-free policy cannot do this; while many possible body posture profiles are valid solutions to the training environment, the gait-free policy will simply converge to one at random. In a real-world experiment, we demonstrate how MoB-enabled body posture control can be repurposed for teleoperated payload manipulation. The operator pilots the robot to the delivery location with the body level, then pitches the body backward, modulating $\boldsymbol{\phi}^{{\textrm{cmd}}}$, to dump the payload (Figure 1, bottom row, second from left).

Tuning for New Environments. In the previous section, we showed that MoB can support repurposing to novel tasks in the training environment. In the real world, there is also always a long tail of environments that are not modeled in training. We demonstrate the behavior of our controller trained only on flat ground in the presence of challenging non-flat terrains and disturbances such as curbs, bushes, hanging obstacles, and shoves. Successes suggest that the space of policies learned by MoB contains some behaviors that transfer better than the baseline to unseen environments.

Platform terrain (simulated): We evaluate the performance of our robot in traversing randomly positioned platforms with heights up to 16 cm (Table 3). We report two metrics: mean reward and mean survival time as a fraction of the maximum episode length (10 s). For each metric, we find that by modulating the contact schedule $\boldsymbol{\theta}^{{\textrm{cmd}}}$ or footswing height $\boldsymbol{h}_{z}^{f,{\textrm{cmd}}}$, we can outperform the gait-free policy (Figure 3, Appendix Figure 10). Therefore, it is possible to improve performance in an out-of-distribution terrain by modulating the parameters of the MoB policy.

Climbing over curbs: Previous works demonstrating blind obstacle traversal on a quadruped [1, 2] learned a foot-trapping reflex where the robot first trips, then raises its foot over the obstacle. In contrast, with the help of a human pilot, our gait-conditioned policy with a high footswing command enables fast and smooth obstacle traversal without tripping, despite training on a simpler flat-ground terrain. In the real world, we demonstrate that modulating the footswing height $\boldsymbol{h}_{z}^{f,{\textrm{cmd}}}$ enables our controller to climb smoothly across stairs and curbs (Figure 1, bottom left).

Hacking through thick bushes: Extremely thick bushes pose a methodological challenge for state-of-the-art perceptive locomotion controllers.
They are hard to simulate, because they comply against the whole body of the robot, and they are hard to sense, as they are indistinguishable from solid obstacles to depth sensors. Therefore, prior works would either attempt to climb over bushes as obstacles or fall back on a robust proprioceptive controller that is unaware of the semantic context. In contrast, by modulating the gait parameters for gait frequency $\boldsymbol{f}^{{\textrm{cmd}}}$ and footswing height $\boldsymbol{h}_{z}^{f,{\textrm{cmd}}}$ in situ, a human operator can guide our system quickly through challenging brush.

Navigating confined spaces: Consider the scenario where the robot needs to go under a bar. The gait-free baseline cannot accomplish this; in the absence of such constraints during training, it will converge to a fixed body height profile. Our system with MoB can navigate confined spaces through modulation of the body height $\boldsymbol{h}_{z}^{{\textrm{cmd}}}$. In a real-world example, the robot was able to crawl under a 22 cm bar; the robot body thickness is 13 cm, leaving 9 cm of clearance beneath the robot.

Anticipative bracing against shoves: Widening the stance can make a quadruped more robust to hard shoves, but absent a perception module to anticipate a human kick, the robot would always need to walk in a widened stance to be prepared for a shove. This interferes with performance in other tasks like running efficiently, so learned locomotion controllers without MoB are often given an incentive to keep the feet nominally below the hips [2, 7]. With MoB, widening the stance width $\boldsymbol{s}_{y}^{{\textrm{cmd}}}$ in the pilot’s anticipation of a shove enables the robot to remain upright.

Exploring the Speed and Range of Behavior Adaptation. If we want to use our low-level controller for high-level tasks, one thing we might desire is to switch between gaits even at high speeds, which might be useful for applications such as parkour. Another is to transition between diverse gaits with precise timing for a synchronous task like dancing.

Agile forward leap: As a demonstration of gait transitions at high speed, we modulate the contact schedule, velocity, and gait frequency to encode an agile forward leap (Figure 3). The robot first accelerates to a target speed of 3 m/s at a trot while increasing its step frequency from 2 Hz to 4 Hz, then switches to pronking at 2 Hz for one second, then decelerates to a standstill while trotting. During the leap phase, the distance from the location of the robot’s front feet at takeoff to the location of the hind feet upon landing is 60 cm.

Choreographed dance: To demonstrate precisely timed transitions between diverse gaits, we program a sequence of gait parameters to generate a dance routine synchronized to a jazz song with a tempo of 90 bpm. At this tempo, combinations of phases $0$, $0.25$, and $0.5$ with frequencies of 1.5 Hz and 3 Hz yield eighth-, quarter-, half-, and full-beat gaps between consecutive footsteps. We also modulate the body height and velocity in time with the music. An assistant script procedurally generates the gait parameters, which are fed into the controller in open-loop fashion.

Figure 3: We demonstrate that behavior transitions can be performed even in quick sequence at high speed for the synthesis of agile maneuvers.
Emulating gap crossing on flat ground, we show that this can facilitate crossing a gap wider than the robot’s body length in a single leap.

## 5 Discussion and Limitations

Our experiments show that the benefits of adding MoB can come at a cost to in-distribution task performance, specifically limiting the robot’s flat-ground sprinting performance (Table 4, appendix). Heat maps reveal that our behavior parameterization is restrictive for combinations of high linear and angular velocity. Quantifying and controlling the tradeoff between task performance and reward shaping is an interesting future direction, for which some prior methods have been proposed [29].

MoB confers on a single learned policy a structured and controllable space of diverse locomotion behaviors for each state and task in the training distribution. This yields a set of ‘knobs’ to tune the performance of motor skills in unseen test environments. The system currently requires a human pilot to manually tune its behavior. In the future, the autonomy of our system could be extended by automating behavior selection using imitation from real-world human demonstrations, or by using a hierarchical learning approach to automatically self-tune the controller during deployment.

#### Acknowledgments

We thank the members of the Improbable AI lab for the helpful discussions and feedback on the paper. We are grateful to MIT Supercloud and the Lincoln Laboratory Supercomputing Center for providing HPC resources. This research was supported by the DARPA Machine Common Sense Program, the MIT-IBM Watson AI Lab, and the National Science Foundation under Cooperative Agreement PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, http://iaifi.org/). This research was also sponsored by the United States Air Force Research Laboratory and the United States Air Force Artificial Intelligence Accelerator and was accomplished under Cooperative Agreement Number FA8750-19-2-1000. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the United States Air Force or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes, notwithstanding any copyright notation herein.

Author Contributions

* • Gabriel B. Margolis conceived, designed, and implemented the controller, ran all experiments, and played the primary role in paper writing.
* • Pulkit Agrawal advised the project and contributed to its conceptual development, experimental design, positioning, and writing.

## References

* Lee et al. [2020] J. Lee, J. Hwangbo, L. Wellhausen, V. Koltun, and M. Hutter. Learning quadrupedal locomotion over challenging terrain. _Sci. Robot._ , 5(47):eabc5986, Oct. 2020. doi:10.1126/scirobotics.abc5986. * Kumar et al. [2021] A. Kumar, Z. Fu, D. Pathak, and J. Malik. RMA: Rapid motor adaptation for legged robots. In _Proc. Robot.: Sci. and Syst. (RSS)_ , Virtual, July 2021. doi:10.48550/arXiv.2107.04034. * Margolis et al. [2022] G. B. Margolis, G. Yang, K. Paigwar, T. Chen, and P. Agrawal. Rapid locomotion via reinforcement learning. _Proc. Robot.: Sci. and Syst. (RSS)_ , June 2022. doi:10.48550/arXiv.2205.02824. * Agrawal [2022] P. Agrawal. The task specification problem. In _Conference on Robot Learning_ , pages 1745–1751. PMLR, 2022. * Fu et al. [2021] Z. Fu, A. Kumar, J. Malik, and D. Pathak.
Minimizing energy consumption leads to the emergence of gaits in legged robots. In _Proc. Conf. Robot Learn. (CoRL)_ , pages 928–937, London, UK, Nov. 2021. doi:10.48550/arXiv.2111.01674. * Rudin et al. [2021] N. Rudin, D. Hoeller, P. Reist, and M. Hutter. Learning to walk in minutes using massively parallel deep reinforcement learning. In _Proc. Conf. Robot Learn. (CoRL)_ , pages 91–100, London, UK, Nov. 2021. doi:10.48550/arXiv.2109.11978. * Ji et al. [2022] G. Ji, J. Mun, H. Kim, and J. Hwangbo. Concurrent training of a control policy and a state estimator for dynamic and robust legged locomotion. _IEEE Robot. Automat. Lett. (RA-L)_ , 7(2):4630 – 4637, Apr. 2022. doi:10.1109/LRA.2022.3151396. * Siekmann et al. [2021] J. Siekmann, Y. Godse, A. Fern, and J. Hurst. Sim-to-real learning of all common bipedal gaits via periodic reward composition. In _Proc. IEEE Int. Conf. Robot. Automat. (ICRA)_ , pages 7309–7315, Xi’an, China, June 2021. doi:10.1109/ICRA48506.2021.9561814. * Duan et al. [2022] H. Duan, A. Malik, J. Dao, A. Saxena, K. Green, J. Siekmann, A. Fern, and J. Hurst. Sim-to-real learning of footstep-constrained bipedal dynamic walking. In _Proc. IEEE Int. Conf. Robot. Automat. (ICRA)_ , Philadelphia, USA, May 2022. doi:10.1109/ICRA46639.2022.9812015. * Li et al. [2021] Z. Li, X. Cheng, X. B. Peng, P. Abbeel, S. Levine, G. Berseth, and K. Sreenath. Reinforcement learning for robust parameterized locomotion control of bipedal robots. In _2021 IEEE International Conference on Robotics and Automation (ICRA)_ , pages 2811–2817. IEEE, 2021. * Shao et al. [2021] Y. Shao, Y. Jin, X. Liu, W. He, H. Wang, and W. Yang. Learning free gait transition for quadruped robots via phase-guided controller. _IEEE Robot. Automat. Lett. (RA-L)_ , 7(2):1230–1237, 2021. * Vollenweider et al. [2022] E. Vollenweider, M. Bjelonic, V. Klemm, N. Rudin, J. Lee, and M. Hutter. Advanced skills through multiple adversarial motion priors in reinforcement learning. _arXiv preprint arXiv:2203.14912_ , 2022. * Mouret and Clune [2015] J.-B. Mouret and J. Clune. Illuminating search spaces by mapping elites. _arXiv preprint arXiv:1504.04909_ , 2015. * Cully et al. [2015] A. Cully, J. Clune, D. Tarapore, and J.-B. Mouret. Robots that can adapt like animals. _Nature_ , 521(7553):503–507, 2015. * Lim et al. [2022] B. Lim, L. Grillotti, L. Bernasconi, and A. Cully. Dynamics-aware quality-diversity for efficient learning of skill repertoires. In _2022 International Conference on Robotics and Automation (ICRA)_ , pages 5360–5366. IEEE, 2022. * Eysenbach et al. [2018] B. Eysenbach, A. Gupta, J. Ibarz, and S. Levine. Diversity is all you need: Learning skills without a reward function. _arXiv preprint arXiv:1802.06070_ , 2018. * Kumar et al. [2020] S. Kumar, A. Kumar, S. Levine, and C. Finn. One solution is not all you need: Few-shot extrapolation via structured maxent rl. _Advances in Neural Information Processing Systems_ , 33:8198–8210, 2020. * Gaya et al. [2021] J.-B. Gaya, L. Soulier, and L. Denoyer. Learning a subspace of policies for online adaptation in reinforcement learning. _arXiv preprint arXiv:2110.05169_ , 2021. * Yang et al. [2022] Y. Yang, T. Zhang, E. Coumans, J. Tan, and B. Boots. Fast and efficient locomotion via learned gait transitions. In _Proc. Conf. Robot Learn. (CoRL)_ , pages 773–783, Nov. 2022. doi:10.48550/arXiv.2104.04644. * Margolis et al. [2021] G. B. Margolis, T. Chen, K. Paigwar, X. Fu, D. Kim, S. Kim, and P. Agrawal. Learning to jump from pixels. In _Proc. Conf. Robot Learn. 
(CoRL)_ , pages 1025–1034, London, UK, Nov. 2021. doi:10.48550/arXiv.2110.15344.
* Yu et al. [2021] W. Yu, D. Jain, A. Escontrela, A. Iscen, P. Xu, E. Coumans, S. Ha, J. Tan, and T. Zhang. Visual-locomotion: Learning to walk on complex terrains with vision. In _Proc. Conf. Robot Learn. (CoRL)_, Oct. 2021.
* Hwangbo et al. [2019] J. Hwangbo, J. Lee, A. Dosovitskiy, D. Bellicoso, V. Tsounis, V. Koltun, and M. Hutter. Learning agile and dynamic motor skills for legged robots. _Sci. Robot._, 4(26):aau5872, Jan. 2019. doi:10.1126/scirobotics.aau5872.
* Kim et al. [2019] D. Kim, J. Di Carlo, B. Katz, G. Bledt, and S. Kim. Highly dynamic quadruped locomotion via whole-body impulse control and model predictive control. _arXiv preprint_, 2019. doi:10.48550/arXiv.1909.06586.
* Raibert [1986] M. H. Raibert. _Legged Robots That Balance_. MIT Press, 1986.
* Makoviychuk et al. [2021] V. Makoviychuk, L. Wawrzyniak, Y. Guo, M. Lu, K. Storey, M. Macklin, D. Hoeller, N. Rudin, A. Allshire, A. Handa, et al. Isaac Gym: High performance GPU-based physics simulation for robot learning. _arXiv preprint_, 2021. doi:10.48550/arXiv.2108.10470.
* Schulman et al. [2017] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. _arXiv preprint_, 2017. doi:10.48550/arXiv.1707.06347.
* [27] Unitree Robotics, Go1, 2022, https://www.unitree.com/products/go1, [Online; accessed Jun. 2022].
* Huang et al. [2010] A. S. Huang, E. Olson, and D. C. Moore. LCM: Lightweight communications and marshalling. In _2010 IEEE/RSJ International Conference on Intelligent Robots and Systems_, pages 4057–4062. IEEE, 2010.
* Chen et al. [2022] E. Chen, H. Zhang-Wei, J. Pajarinen, and P. Agrawal. Redeeming intrinsic rewards via constrained optimization. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2022.

Ablation | $r_{v^{{\textrm{cmd}}}_{x,y}}$ | $r_{\omega^{{\textrm{cmd}}}_{z}}$ | $r_{c^{{\textrm{cmd}}}_{f}}$ | $r_{c^{{\textrm{cmd}}}_{v}}$
---|---|---|---|---
Trotting | $0.92_{\pm 0.01}$ | $0.70_{\pm 0.04}$ | $0.98_{\pm 0.00}$ | $0.95_{\pm 0.00}$
Pronking | $0.85_{\pm 0.01}$ | $0.64_{\pm 0.05}$ | $0.98_{\pm 0.00}$ | $0.94_{\pm 0.00}$
Pacing | $0.83_{\pm 0.02}$ | $0.66_{\pm 0.04}$ | $0.96_{\pm 0.01}$ | $0.94_{\pm 0.01}$
Bounding | $0.88_{\pm 0.01}$ | $0.63_{\pm 0.05}$ | $0.96_{\pm 0.01}$ | $0.95_{\pm 0.00}$
Gait-free Baseline | $0.94_{\pm 0.02}$ | $0.76_{\pm 0.01}$ | – | –

Table 4: Removing gait constraints results in improved velocity tracking task performance on flat ground. Heat maps (right) break down the mean task reward for each velocity command, revealing that the gait-free approach is most beneficial for combinations of high linear and angular velocity.

## Appendix A Training Details

The ranges used for domain and command randomization are provided in Table 5. The hyperparameters used for PPO are provided in Table 6. The hyperparameters used in the curriculum are provided in Table 7. Figure 6 illustrates data flow during training. The curriculum engine first samples from a Gaussian distribution centered at one of the four main gaits (trotting, pronking, bounding, pacing); then it samples velocity commands from a grid distribution according to the method of [3]; then finally it samples the stepping frequency and body height uniformly. The policy and simulator are rolled out, and the reward is computed as a function of the gait parameters. Then the curriculum is updated if the episodic reward meets the thresholds given in Table 7.
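The command-sampling step described above can be sketched as follows. The Gaussian width, the exact gait phase centers, and the grid handling are assumptions for illustration; the ranges follow Tables 5 and 7.

```python
# Illustrative sketch of curriculum command sampling (Appendix A).
# phase_std and the gait centers are assumed values, not the trained ones.
import numpy as np

GAIT_CENTERS = {           # phase offsets (theta_1, theta_2, theta_3)
    "trot":  (0.5, 0.0, 0.0),
    "pronk": (0.0, 0.0, 0.0),
    "pace":  (0.0, 0.5, 0.0),
    "bound": (0.0, 0.0, 0.5),
}

def sample_commands(rng, vx_range=(-1.0, 1.0), wz_range=(-1.0, 1.0),
                    bin_size=0.5, phase_std=0.05):
    """Sample one command vector: gait phases, velocities, frequency, height."""
    # 1) Gaussian around one of the four nominal gaits.
    center = np.array(GAIT_CENTERS[rng.choice(list(GAIT_CENTERS))])
    phases = np.clip(rng.normal(center, phase_std), 0.0, 1.0)
    # 2) Velocity commands drawn from a grid over the current curriculum range.
    vx_bins = np.arange(vx_range[0], vx_range[1] + 1e-9, bin_size)
    wz_bins = np.arange(wz_range[0], wz_range[1] + 1e-9, bin_size)
    vx, wz = rng.choice(vx_bins), rng.choice(wz_bins)
    # 3) Remaining parameters sampled uniformly from the Table 5 ranges.
    freq = rng.uniform(1.5, 4.0)           # Hz
    body_height = rng.uniform(0.10, 0.45)  # m
    return dict(phases=phases, vx=vx, wz=wz, freq=freq, body_height=body_height)

rng = np.random.default_rng(0)
print(sample_commands(rng))
```

The velocity ranges would then be widened toward their maxima in Table 7 whenever the episodic rewards exceed the listed thresholds.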
Term | Minimum | Maximum | Units
---|---|---|---
Payload Mass | $-1.0$ | $3.0$ | kg
Motor Strength | $90$ | $110$ | %
Joint Calibration | $-0.02$ | $0.02$ | rad
Ground Friction | $0.40$ | $1.00$ | –
Ground Restitution | $0.00$ | $1.00$ | –
Gravity Offset | $-1.0$ | $1.0$ | m/s²
$\textbf{v}_{x}^{{\textrm{cmd}}}$ | – | – | m/s
$\textbf{v}_{y}^{{\textrm{cmd}}}$ | $-0.6$ | $0.6$ | m/s
$\boldsymbol{\omega}_{z}^{{\textrm{cmd}}}$ | – | – | rad/s
$\boldsymbol{f}^{{\textrm{cmd}}}$ | $1.5$ | $4.0$ | Hz
$\boldsymbol{\theta}_{1}^{{\textrm{cmd}}},\boldsymbol{\theta}_{2}^{{\textrm{cmd}}},\boldsymbol{\theta}_{3}^{{\textrm{cmd}}}$ | $0.0$ | $1.0$ | –
$\boldsymbol{h}_{z}^{{\textrm{cmd}}}$ | $0.10$ | $0.45$ | m
$\boldsymbol{\phi}^{{\textrm{cmd}}}$ | $-0.4$ | $0.4$ | rad
$\boldsymbol{s}_{y}^{{\textrm{cmd}}}$ | $0.05$ | $0.45$ | m
$\boldsymbol{h}_{z}^{f,{\textrm{cmd}}}$ | $0.03$ | $0.25$ | m

Table 5: Randomization ranges for dynamics parameters (top) and commands (bottom) during training. $\textbf{v}_{x}^{{\textrm{cmd}}},\boldsymbol{\omega}_{z}^{{\textrm{cmd}}}$ are adapted according to a curriculum.

Figure 4: Pronking and trotting gaits are easier to learn and tend to dominate pacing and bounding early in training. However, when discovered, pacing and bounding gaits can yield good performance and later become preferred for some downstream tasks (Section 4.2).

## Appendix B Teleoperation Interface

Figure 5 illustrates the mapping from remote control inputs to gait parameters used during teleoperation. The front bumpers on the top toggle between control modes to accommodate our large number of gait parameters. Preprogrammed sequences such as dancing and leaping can be assigned to the rear bumpers.

Figure 5: Controller mapping. Mapping of remote control inputs to gait parameters during robot teleoperation. The user can change between gaits at any time. Continuous interpolation between contact patterns is supported by our policy, but not mapped here. Lateral velocity is also supported by the controller but excluded from the mapping.

Figure 6: Training architecture. The policy computes the action as a function of the gait parameters and state. The simulator computes the reward and state as a function of the gait parameters and actions. The curriculum engine periodically resamples gait parameters based on the reward.

## Appendix C Extended Performance Analysis

Impact of Gait Frequency on High-speed Running. We evaluate the impact of gait frequency on the performance of the robot at high speeds. Figure 10 shows that a higher gait frequency is necessary to yield good tracking performance during higher-speed running.

Impact of Footswing Height on Platform Terrain Performance. We evaluate the impact of footswing height on the performance of the robot on the out-of-distribution platform terrain. Figure 9 shows that higher swing heights yield improved platform traversal, outperforming the gait-free policy.

Flat Ground Velocity Heatmaps for More Gaits. We provide velocity heatmaps in Figure 11 for pronking, pacing, and bounding gaits to supplement the trotting and gait-free heatmaps provided in Table 4.

Forward and Backward Locomotion. During evaluation in the random platforms environment, we found that walking backward leads to fewer failures than walking forward.
Figure 8 illustrates this phenomenon by plotting the mean failure rate of each gait at each test velocity. Possible explanations include (i) recovery strategies that are dependent on knee orientation and (ii) the weight distribution of the robot.

Real-world Robustness Demonstrations. We conducted several hours of real-world testing of different gaits across a variety of laboratory and outdoor environments. A selection of this footage is available on the project website. The robot was able to traverse down stairs, up and down granular and slippery terrain, and to respond to external perturbations. To provide insight into the robot’s disturbance response, we plot the joint torques, contact states, and learned state estimate of the robot in Figure 7. The robot trots at two different frequencies and is shoved twice by the operator, once from each side. The state estimator correctly predicts the direction of the lateral velocity increase (orange), and adapts the joint torques and contact schedule to correct.

## Appendix D Contact Schedule Parameterization

At each control timestep, we compute the desired contact states from the gait parameters $\boldsymbol{\theta}^{{\textrm{cmd}}}$ as follows. First, we increment the global timing variable $t$ by $\frac{\boldsymbol{f}^{{\textrm{cmd}}}}{f_{\pi}}$ where $\boldsymbol{f}^{{\textrm{cmd}}}$ is the commanded stepping frequency and $f_{\pi}$ is the control frequency. Then, we compute separate timing variables for each foot, clipped between $0$ and $1$:

$[t^{\textrm{FR}},t^{\textrm{FL}},t^{\textrm{RR}},t^{\textrm{RL}}]=\text{clip}([t+\boldsymbol{\theta}_{2}^{{\textrm{cmd}}}+\boldsymbol{\theta}_{3}^{{\textrm{cmd}}},\,t+\boldsymbol{\theta}_{1}^{{\textrm{cmd}}}+\boldsymbol{\theta}_{3}^{{\textrm{cmd}}},\,t+\boldsymbol{\theta}_{1}^{{\textrm{cmd}}},\,t+\boldsymbol{\theta}_{2}^{{\textrm{cmd}}}],0,1)$

From these, we can directly compute the desired contact states:

$C^{{\textrm{cmd}}}_{\textrm{foot}}(t^{\textrm{foot}}(\boldsymbol{\theta}^{{\textrm{cmd}}},t))=\Phi(t^{\textrm{foot}},\sigma)\cdot(1-\Phi(t^{\textrm{foot}}-0.5,\sigma))+\Phi(t^{\textrm{foot}}-1,\sigma)\cdot(1-\Phi(t^{\textrm{foot}}-1.5,\sigma))$

where $\Phi(x;\sigma)$ is the cumulative distribution function of the zero-mean normal distribution with standard deviation $\sigma$:

$\Phi(x;\sigma)=\frac{1}{2}\left[1+\operatorname{erf}\left(\frac{x}{\sigma\sqrt{2}}\right)\right]$

which approximates the Von Mises distribution used in [8] to form a smooth transition between stance and swing. A minimal code sketch of this computation is given below, after the figure captions.

Figure 7: Shove Robustness Test. Joint torques (top), contact states (middle), and velocity estimate (bottom) during trotting in the laboratory setting. Dashed boxes indicate shove events. The robot was first shoved to the right during trotting at low frequency, and then shoved to the left during trotting at high frequency. The learned state estimator correctly infers the change in lateral velocity and adjusts the joint torques and foot contacts to stabilize the robot.

Figure 8: Forward vs Backward Walking on Platforms. Time to failure for different gaits and velocities in the random platforms environment (zero-shot test). The color bar indicates the mean fraction of a 20 s episode elapsed before failure, between zero and one. The gait frequency is 3 Hz. The pacing gait achieves the longest mean survival time, possibly due to higher footswings or more practice with recovery strategies during training. Interestingly, almost all failures occur during forward locomotion, and the robot is much more robust when moving backward.
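The following is a minimal sketch of the contact-schedule computation above. The foot ordering (FR, FL, RR, RL), the smoothing width $\sigma$ = 0.1, and wrapping the global phase modulo 1 are illustrative choices rather than values taken from our implementation.

```python
# Minimal sketch of the contact-schedule computation in Appendix D.
import numpy as np
from scipy.stats import norm


def desired_contacts(t, theta_cmd, sigma=0.1):
    """Return desired contact states in [0, 1] for (FR, FL, RR, RL) at phase t."""
    th1, th2, th3 = theta_cmd
    # Per-foot timing variables, clipped to [0, 1] as in the equations above.
    t_feet = np.clip(np.array([t + th2 + th3,   # front-right
                               t + th1 + th3,   # front-left
                               t + th1,         # rear-right
                               t + th2]),       # rear-left
                     0.0, 1.0)
    phi = lambda x: norm.cdf(x, scale=sigma)    # smooth step via the normal CDF
    return (phi(t_feet) * (1.0 - phi(t_feet - 0.5))
            + phi(t_feet - 1.0) * (1.0 - phi(t_feet - 1.5)))


# Example: advance the global phase at f_cmd = 3 Hz with a 50 Hz control loop.
t, f_cmd, f_pi = 0.0, 3.0, 50.0
for _ in range(5):
    t = (t + f_cmd / f_pi) % 1.0   # wrapping for illustration only
    print(np.round(desired_contacts(t, theta_cmd=(0.5, 0.0, 0.0)), 2))
```

With the trot offsets (0.5, 0, 0), diagonal foot pairs alternate between stance (values near 1) and swing (values near 0) over each stride, which is the behavior the smoothed square wave is designed to produce.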
Hyperparameter | Value
---|---
discount factor | 0.99
GAE parameter | 0.95
# timesteps per rollout | 21
# epochs per rollout | 5
# minibatches per epoch | 4
entropy bonus ($\alpha_{2}$) | 0.01
value loss coefficient ($\alpha_{1}$) | 1.0
clip range | 0.2
reward normalization | yes
learning rate | 1e-3
# environments | 4096
# total timesteps | 2.58B
optimizer | Adam

Table 6: PPO hyperparameters.

Parameter | Value | Units
---|---|---
$\textbf{v}_{x}^{{\textrm{cmd}}}$ initial | [-1.0, 1.0] | m/s
$\boldsymbol{\omega}_{z}^{{\textrm{cmd}}}$ initial | [-1.0, 1.0] | rad/s
$\textbf{v}_{x}^{{\textrm{cmd}}}$ max | [-3.0, 3.0] | m/s
$\boldsymbol{\omega}_{z}^{{\textrm{cmd}}}$ max | [-5.0, 5.0] | rad/s
$\textbf{v}_{x}^{{\textrm{cmd}}}$ bin size | 0.5 | m/s
$\boldsymbol{\omega}_{z}^{{\textrm{cmd}}}$ bin size | 0.5 | rad/s
$r_{v^{{\textrm{cmd}}}_{x,y}}$ threshold | 0.8 | –
$r_{\omega^{{\textrm{cmd}}}_{z}}$ threshold | 0.7 | –
$r_{c^{{\textrm{cmd}}}_{f}}$ threshold | 0.95 | –
$r_{c^{{\textrm{cmd}}}_{v}}$ threshold | 0.95 | –

Table 7: Curriculum parameters.

Figure 9: Footswing Height vs Robustness: Impact of footswing height on time to failure on the platform terrain (Section 4.2). Increased footswing height yields better generalization from flat terrain to uneven terrain compared to the gait-free policy.

Figure 10: Frequency vs Speed: Impact of trotting frequency on flat-ground velocity tracking reward across speeds (Section 4.2). Enforcing a low frequency (2 Hz) makes high speeds less attainable. The gait-free policy offers slightly better performance at the lowest and highest speeds.

Figure 11: Flat Ground Velocity Tracking Heatmaps: We provide heatmaps as in Table 4 for the other major gaits: pronking, pacing, and bounding. In all cases, the policy forgoes performance on the training task of flat-ground velocity tracking to achieve diversity that will help accomplish new tasks. However, in-distribution task performance is maintained well for the lower range of speeds.
# Do Performance Aspirations Matter for Guiding Software Configuration Tuning?

Tao Chen, Loughborough University, Loughborough, United Kingdom, and Miqing Li, University of Birmingham, Birmingham, United Kingdom

###### Abstract.

Configurable software systems can be tuned for better performance. Leveraging Pareto optimizers, recent work has shifted from tuning for a single, time-related performance objective to two intrinsically different objectives that assess distinct performance aspects of the system, each with varying aspirations to be satisfied, e.g., “the latency is less than 10s” while “the memory usage is no more than 1GB”. Before we design better optimizers, a crucial engineering decision to make therein is how to handle the performance requirements with clear aspirations in the tuning process. For this, the community takes two alternative optimization models: either quantifying and incorporating the aspirations into the search objectives that guide the tuning, or not considering the aspirations during the search but purely using them in the later decision-making process only. However, despite being a crucial decision that determines how an optimizer can be designed and tailored, there is a rather limited understanding of which optimization model should be chosen under what particular circumstance, and why. In this paper, we seek to close this gap. Firstly, we do that through a review of 426 papers in the literature and 14 real-world requirements datasets, from which we summarize four performance requirement patterns that quantify the aspirations in the configuration tuning. Drawing on these, we then conduct a comprehensive empirical study that covers 15 combinations of the state-of-the-art performance requirement patterns, four types of aspiration space, three Pareto optimizers, and eight real-world systems/environments, leading to 1,296 cases of investigation. Our findings reveal that (1) the realism of aspirations is the key factor that determines whether they should be used to guide the tuning; (2) the given patterns and the position of the realistic aspirations in the objective landscape are less important for the choice, but they do matter to the extents of improvement; (3) the available tuning budget can also influence the choice for unrealistic aspirations but it is insignificant under realistic ones. To promote open science practice, we make our code and dataset publicly available at: https://github.com/ideas-labo/aspiration-study.

Keywords: Search-based software engineering, software configuration tuning, performance requirement, performance aspiration, multi-objective optimization.

ACM TOSEM, 2023. doi: 10.1145/3571853

## 1\. Introduction

Many software systems are highly configurable, such that there is a daunting number of configuration options (e.g., the max_spout in Apache Storm), which the software engineers can tune to meet the requirements of some performance objectives, e.g., improving latency, throughput, and resource consumption (Xu et al., 2015; Chen and Bahsoon, 2015; Gong and Chen, 2022; Chen et al., 2018a; Li et al., 2020a).
Configuration tuning for software systems plays an integral role in Software Engineering as a recent interview reveals that industrial practitioners have recognized it as a key to the success of software products (Sayagh et al., 2020). Indeed, it has been reported that globally 59% of the software performance issues—wherein the performance requirements were severely violated—are related to ill-suited configuration rather than code (Han and Yu, 2016), leading to serious consequences. For example, in 2017-2018, configuration-related performance issues cost at least 400,000 USD per hour for more than 50% of the software companies worldwide111https://www.evolven.com/blog/downtime-outages-and-failures- understanding-their-true-costs.html. Finding good configurations (i.e., the possible combinational settings of the configuration options) is challenging, because: * • The default configuration is often far from ideal. Jamshidi and Casale (Jamshidi and Casale, 2016) show that the defaults for Apache Storm can lead to 480 times worse performance than some others. * • The configuration space can be large and the measurement is often expensive (Nair et al., 2020), rendering greedy search unrealistic. * • While traditionally software configuration tuning has been focusing on a single performance objective (Ye and Kalyanaraman, 2003; Oh et al., 2017; Xi et al., 2004; Li et al., 2014b; Behzad et al., 2013; Ramirez et al., 2009; Sinha et al., 2020; Guo et al., 2010; Chen, 2022), recent work raises the necessity of simultaneously tuning for multiple performance objectives. Our review (see Section 3) found that considering two performance objectives is the most common case (Gerasimou et al., 2018; Chen et al., 2018b; Nair et al., 2020; Hort et al., 2021). For example, naturally, improving the image quality while reducing the energy consumption are both critical for video encoders like x264; higher accuracy with shorter training time are two inherent performance objectives for deep learning models, e.g., the deep neural network supported by frameworks such as Keras. This further complicates the tuning process as the performance objectives may be conflicting and the extents to such a conflict are often unknown a priori (Chen et al., 2018b; Nair et al., 2020). To automatically tune software configuration for better performance, different approaches have been proposed, such as rule-based (Gias et al., 2019; Garlan et al., 2004), learning-based (Bao et al., 2019; Jamshidi et al., 2018), and search-based (Chen et al., 2018b; Nair et al., 2020; Singh et al., 2016; Calinescu et al., 2017, 2018; Kumar et al., 2020). Among these, search-based approach, primarily relying on the Pareto optimizers widely used in the Search-Based Software Engineering (SBSE) paradigm (Harman et al., 2012), has been a promising way to handle all the aforementioned challenges in software configuration tuning, especially in the presence of more than one performance objective (Singh et al., 2016; Calinescu et al., 2017, 2018; Chen et al., 2018b). In a nutshell, a Pareto optimizer most commonly maintains a population (or at least an archive) of configurations, which can be repeatedly reproduced and evaluated by directly profiling the software, aiming to find the Pareto optimal ones. The output is a set of configurations that are nondominated to each other, which approximates the Pareto front of the software system. ### 1.1. 
The Problem and Significance An important factor in software configuration tuning is the possible requirements with clear aspirations for the performance objectives (Ramirez and Cheng, 2011; Calinescu et al., 2017; Esfahani et al., 2011; Calinescu et al., 2018), for which we distinguish two important notions in this work: * • Aspiration: The information that allows us to quantify the extent to which the performance is considered satisfactory (or unsatisfactory). * • Performance requirement: The context under which the preference of the performance is defined. For example, according to the research in the Requirement Engineering community (Whittle et al., 2009; Baresi et al., 2010), it is not uncommon to have performance requirements from the requirement documents, such as “the latency shall be less than $x$” while “the memory usage shall be no more than $y$”, where “less than $x$” and “no more than $y$” are the clear aspirations therein. It is worth noting that not all performance requirements would contain aspiration, e.g., “the latency shall be low” is a requirement with no aspiration since nothing can be quantified with respect to the level of satisfaction. Indeed, given a scenario with clear aspirations in the performance requirements, it has been well-acknowledged that the information provided serves as useful metrics for the software engineers to conduct a posterior cherry-picking after the tuning completes, extracting the satisficing configuration(s) from the set produced by a Pareto optimizer (Li et al., 2022). The natural motivations behind this are: * • Given a fixed tuning budget, finding the optimal performance is not always feasible or even desirable to the stakeholders. * • The clear aspiration levels allow an implicit trade-off/preferences between the conflicting performance objectives according to the stakeholders. Regardless of the Pareto optimizer used, in the tuning process, existing work takes one of two intrinsically different optimization models to handle aspirations when tuning for two performance objectives, namely: Pareto search with aspirations (denoted as PS-w) (Calinescu et al., 2017; Martens et al., 2010; Gerasimou et al., 2016; Calinescu et al., 2018) and Pareto search without aspirations (denoted as PS-w/o) (Chen et al., 2018b; Singh et al., 2016; Nair et al., 2020; Koziolek et al., 2011). In PS-w, the performance requirements with aspirations are quantified in certain forms (we will elaborate on this in Section 3), which then serve as new search objectives in the tuning. The motivation is simple: since the aspirations provide information on the degree of satisficing, one can exploit this advantage to guide the tuning process. PS-w/o, in contrast, is more classic and simply ignores the aspirations in the tuning. The assumption here is that, since the search in whatever a Pareto optimizer is essentially an optimization process that seeks to find the Pareto optimal configurations, the tuning always aims to achieve the best possible performance, which preserves the tendency towards satisficing whatever aspirations222This assumes the most common case that the best possible performance is at least equally preferred than some other values.. For example, finding the Pareto optimal configuration latency=10s and memory usage=1GB will certainly meet the requirement and aspiration of “latency shall be less than 20s” while “memory usage shall be no more than 2GB”. 
This matches Odhnoff’s argument that “optimizing” and “satisficing” are merely stylistically different but fundamentally the same (Odhnoff, 1965). Although each of the two optimization models has been used by its corresponding research groups, the choice was mostly ad-hoc, and there is often an implied belief that “they do not differ much hence can be used arbitrarily.” As such, there remains a rather limited understanding of which optimization model should be chosen under what particular circumstance, and why. This has been well-echoed by some researchers. Ghanbari et al. (Ghanbari et al., 2012) have stated that it is important to consider the choice, as the shape of the function that guides the tuning, especially after passing the aspirations, may impact the behavior of the optimizer; but they did not discuss what the implications would be. Yet another example is a recent work by Fekry et al. (Fekry et al., 2019), which recommends that studying whether to leverage aspirations for guiding the optimizers, and measuring the effectiveness of doing so, is an important future challenge for software configuration tuning.

Indeed, understanding in this regard is non-trivial as it will help practitioners make more informed decisions, especially since, given the expensive measurements of configurable software systems, it is unrealistic to always empirically compare the two models in a case-by-case manner. Furthermore, the insights can hint at future research directions for software configuration tuning: if PS-w/o is more promising, then we can largely simplify the research to the design of an effective optimizer without considering the given requirements, since the human inputs (i.e., the requirements/aspirations) are less important in the overall tuning process. On the other hand, if PS-w is overall more effective, then the problem becomes more complicated but also provides more opportunities, e.g., future research can largely focus on how to better quantify those performance requirements and aspirations, together with how to better embed them into more specialized optimizers.

To understand this, we have also turned to the literature on general multi-objective optimization, with a particular focus on preference-driven multi-objective optimization (Wang et al., 2017; Bechikh et al., 2015; Li et al., 2020; Yu et al., 2019). However, we did not find answers that are directly relevant to our case, for two reasons: (1) the representation of the preferences (e.g., weights and ranks) in preference-driven multi-objective optimization is different from the requirement patterns we summarized from the work on software configuration tuning; (2) they mainly develop algorithms/optimizers that are tailored to a specific preference representation, while software configuration tuning often relies on a vanilla optimizer (Calinescu et al., 2017, 2018). Our work is, therefore, motivated by the community’s desire to understand the following:

> Should we incorporate requirements and aspirations to guide the software configuration tuning process? If so, in what context and why?

### 1.2. Research Questions

In this paper, we seek to fill the above gap via an empirical study that systematically compares PS-w and PS-w/o for tuning software configuration under two performance objectives.
Suppose that there are some realistic aspirations (i.e., all the aspirations are achievable by tuning the configuration of the software system); the first research question (RQ) we wish to answer is:

RQ1: Given performance requirements with realistic aspirations, of PS-w and PS-w/o, which can find a better set of configurations?

RQ1 seeks to provide a global picture of the comparison between the two optimization models. However, the diverse possible requirement scenarios imply that the specific aspirations can be radically different in the objective landscape. For example, one may have higher expectations on latency while lower needs on throughput, or vice versa. Therefore, what we would like to understand in more detail is:

RQ2: How do different realistic aspirations influence the result?

RQ1 and RQ2 investigate the normal context where the given aspirations are reasonable and achievable. However, since the actual aspirations are negotiated by the software engineers and stakeholders a priori, they could turn out to be unrealistic and may require attention beyond configurations, i.e., no configuration in the search landscape can reach the required aspiration levels for all performance objectives simultaneously, even though that may be possible for a single objective. This brings us to our next RQ, in which we ask:

RQ3: What if the given aspirations are unrealistic?

While we are interested in cases where the tuning budget is reasonably sufficient to achieve good convergence, it is possible that, in real-world scenarios, there are limited resources for tuning software configuration due to, e.g., pressure for a quick release or task prioritization. Therefore, our last question aims to explore:

RQ4: Is the given tuning resource (tuning budget) important to the choice between PS-w and PS-w/o?

### 1.3. Contributions

To address these RQs, we conducted an extensive empirical study on 15 combinations of patterns to quantify aspirations, four types of aspiration space in the objective landscape, three Pareto optimizers, and eight real-world systems/environments with diverse performance objectives, leading to 1,296 cases of investigation. Briefly, the first contribution in this paper is a set of performance requirement patterns (for individual performance objectives) summarized from 426 papers in the literature from the Software Engineering community and 14 widely-used real-world requirements datasets. These patterns are:

1. (1) No aspiration is given, but it is assumed that the optimal possible performance is preferred, e.g., “the lower latency is preferred”, meaning that one prefers the best possible latency.
2. (2) The performance in the aspiration space is equally good or otherwise there is a certain degree of tolerance, e.g., “the minimum latency shall ideally be 500ms”, implying that anything better than 500ms is equally good while a performance worse than that is acceptable but not ideal.
3. (3) The performance in the aspiration space is equally good while anything outside the space is unacceptable, e.g., “the latency shall be 500ms”. This suggests that a latency better than 500ms is equally good and no tolerance is allowed for performance worse than that.
4. (4) Preferring the optimal performance while anything outside the aspiration space is unacceptable, e.g., “the latency shall be at most 500ms”, reflecting that no tolerance is allowed for worse than 500ms while the lower the latency, the better.
Our second contribution is the pragmatic findings that answer the aforementioned RQs over the 1,296 cases as follows: * • To RQ1: PS-w performs considerably better or similar to PS-w/o on 84% of the cases, out of which over 60% show statistically significant improvement. * • To RQ2: The improvement of PS-w over PS-w/o is often largely biased to a certain position of the aspiration space in the objective landscape, e.g., centered or left-shifted. * • To RQ3: PS-w/o is no worse than PS-w for 70% cases, wherein the difference is considerable with statistical significance for more than 85%. * • To RQ4: Under realistic aspirations, PS-w obtains consistently better outcomes than PS-w/o throughout the trajectory and with a speedup up to $10\times$. When the aspirations are unrealistic, in contrast, the two optimization models are competitive in the early stage of tuning but soon PS-w/o would lead to better results with considerably high speedup. Hence, we conjecture that the performance aspirations do matter for guiding bi-objective software configuration tuning in general. Yet, depending on the context, it can either be helpful or harmful. We provide, as part of the third contribution, some in-depth analysis and discussions on the reasons behind the above observations. More importantly, these findings allow us to derive our fourth contribution: the key lessons learned on the choice between PS-w and PS-w/o for bi-objective software configuration tuning, which are: * • Lesson 1: The choice on whether to exploit aspirations for guiding the tuning is primarily dependent on their realism. * • Lesson 2: It is unlikely that the combinations of patterns can change the decision on whether to incorporate aspiration in the tuning, but it can influence the benefit/detriment of aspiration-guided tuning. * • Lesson 3: The positions of realistic aspiration space in the objective space can largely affect the benefits brought by considering aspirations within tuning, but it is less likely to influence the choice. * • Lesson 4: The given tuning budget has a marginal impact on the choice when the aspirations are realistic. However, it can be an important factor to consider under unrealistic aspirations. Drawing on those lessons, our fifth contribution outlines three future opportunities for this field of research, namely: * • Landscape Analysis for Configurable Software Systems. * • Requirement-Robust Optimizer for Configuration Tuning. * • Study on the Relative Impact between Requirement Patterns to the Tuning. To promote open science practice, all the code, dataset, and necessary supplementary documents for this work can be publicly accessed at: https://github.com/ideas-labo/aspiration-study. The rest of this paper is organized as follows: Section 2 formalizes the problem and presents the motivating example. Section 3 discusses the patterns that quantify performance requirements with aspirations and how they were identified. Section 4 elaborates the design of our empirical study. Section 5 presents and analyzes the experiment results. Thereafter, Section 6 discusses the lessons learned and future opportunities, followed by threats to validity in Section 7. Finally, Sections 8 and 9 review the related work and conclude the paper, respectively. ## 2\. Theory In this section, we present the theoretical knowledge for understanding the purpose of this work. ### 2.1. Formal Definition #### 2.1.1. 
Background and Problem Formalization

In the DevOps era, software configuration tuning involves two fundamental roles that interact frequently (Sayagh et al., 2020) — the stakeholders (whose benefit is directly affected by the software performance) negotiate their performance requirements with the software engineers, who then act as the operators to tune the configurations for satisfying these requirements. Beyond a single performance concern, recently there has been an increasing demand for considering multiple performance objectives (Hort et al., 2021; Chen et al., 2020). Among those, our literature review from Section 3 shows that 90% of the recent work has considered two performance objectives (Gerasimou et al., 2018; Chen et al., 2018b; Nair et al., 2020), such as the latency versus throughput for Storm, or image quality versus energy usage for x264. This makes software configuration tuning with requirements in mind even more complex.

Without loss of generality, we assume that a configurable software system comes with a set of configuration options, whereby the $i$th option is denoted as $c_{i}$, which can be a binary, integer, or enumerated variable. A particular configuration is denoted as $\boldsymbol{\overline{c}}$. The search space, $\mathbfcal{C}$, is the Cartesian product of the possible values for all the $c_{i}$. Formally, given a scenario of requirements with clear aspirations for two performance objectives, the goal of PS-w for software configuration tuning is to find the configuration(s) that achieve:

(1) $\displaystyle maximize~{}p_{x}(f_{1}(\boldsymbol{\overline{c}})),p_{y}(f_{2}(\boldsymbol{\overline{c}})),~{}~{}\boldsymbol{\overline{c}}\in\mathbfcal{C}$

whereby $f$ is the raw measurement of the performance value achieved by $\boldsymbol{\overline{c}}$, and $p$ is the corresponding requirement pattern, which quantifies the degree of satisficing given $f(\boldsymbol{\overline{c}})$ (see Section 3). In this work, we consider cases where at least one $p$ contains a clear aspiration level (we use $p_{x}$ and $p_{y}$ to distinguish the two performance requirement patterns). In contrast, the goal of PS-w/o is to:

(2) $\displaystyle minimize~{}f_{1}(\boldsymbol{\overline{c}}),f_{2}(\boldsymbol{\overline{c}}),~{}~{}\boldsymbol{\overline{c}}\in\mathbfcal{C}$

As can be seen, PS-w explicitly leverages information about the given requirements with clear aspirations to guide the search and tuning, while PS-w/o assumes basic Pareto optimality (we assume that all performance objectives are to be minimized; maximizing ones can easily be converted).

Figure 1. A performance requirement snippet from the requirement document of a real-world project in the PURE dataset (Ferrari et al., 2017).

Figure 2. The aspiration space (highlighted by color) and aspiration levels within the bi-objective space (latency and throughput) for Storm under the Rolling Sort benchmark.

#### 2.1.2. Aspiration Space

Following the normal software engineering practice of requirement negotiation, it is likely that a single performance requirement comes with a clear aspiration, in which case we define the aspiration space as the portion of performance points that are not inferior to the given aspiration level. A real-world example is shown in Figure 1. Here, “the system shall support at least 1,000 concurrent users” contains a clear aspiration level of 1,000 users, meaning that the aspiration space covers throughput between 1,000 (inclusive) and the true optimum (which is case-dependent).
Beyond such a one-dimensional case, it is easy to see that the aspiration space can be generalized to a two-dimensional case when the aspiration levels of two performance objectives are involved. For example, Figure 2 shows the aspiration space for the requirements “the system shall perform with 39800 users at a time” while “the latency shall be no worse than 160 seconds” for Storm (with log-transformed values ($\log_{10}$), and all performance objectives are to be minimized as we consider the reciprocal of Throughput). This forms the foundation of our analysis in what follows.

Input: Configuration space $\mathcal{V}$; the system $\mathcal{F}$; a matrix of fitness quantified by the requirements $\Gamma$
Output: A set of nondominated configurations $\mathcal{S^{\prime}}$
1 Randomly initialize a population of $n$ configurations $\mathcal{P}$
2 /* measuring on the actual configurable system */
3 measure($\mathcal{P},\mathcal{F}$)
4 /* for PS-w, the fitness that guides the search is computed according to Equation (1) */
5 if _PS-w_ then $\Gamma\leftarrow$ getFitnessBasedonRequirements($\mathcal{P}$)
6 while _the search budget is not exhausted_ do
7   $\mathcal{P^{\prime}}=\emptyset$
8   while _$|\mathcal{P^{\prime}}|<n$_ do
9     /* for PS-w, selecting parents with respect to their compliance with the requirements */
10    if _PS-w_ then $\{s_{x},s_{y}\}\leftarrow$ mating($\mathcal{P},\Gamma$)
11    else $\{s_{x},s_{y}\}\leftarrow$ mating($\mathcal{P}$)
12    $\{o_{x},o_{y}\}\leftarrow$ doCrossoverAndMutation($\mathcal{V},s_{x},s_{y}$)
13    measure($o_{x},o_{y},\mathcal{F}$)
14    if _PS-w_ then $\Gamma\leftarrow$ getFitnessBasedonRequirements($o_{x},o_{y}$)
15    $\mathcal{P^{\prime}}\leftarrow\mathcal{P^{\prime}}\bigcup\{o_{x},o_{y}\}$
16  /* for PS-w, the configurations are preserved according to the fitness computed with respect to the requirements */
17  if _PS-w_ then $\mathcal{U}\leftarrow$ nondominatedSorting($\mathcal{P}\bigcup\mathcal{P^{\prime}},\Gamma$)
18  else $\mathcal{U}\leftarrow$ nondominatedSorting($\mathcal{P}\bigcup\mathcal{P^{\prime}}$)
19  $\mathcal{P}\leftarrow$ top $n$ configurations from $\mathcal{U}$
20 if _PS-w_ then return $\mathcal{S^{\prime}}\leftarrow$ nondominatedConfigurations($\mathcal{P},\Gamma$)
21 else return $\mathcal{S^{\prime}}\leftarrow$ nondominatedConfigurations($\mathcal{P}$)

Algorithm 1: Unified code for PS-w and PS-w/o with NSGA-II.

#### 2.1.3. Pareto Search with and without Aspirations for Tuning Software

To illustrate the difference between PS-w and PS-w/o, pseudo-code using NSGA-II as the underlying optimizer is shown in Algorithm 1. As can be seen, PS-w and PS-w/o mainly differ in that the former is guided by the information extracted from the given requirements and aspirations (denoted as $\Gamma$) while the latter runs without it, i.e., it uses the raw values of the measured performance objectives. This means that the fitness of all configurations evaluated in PS-w makes use of $\Gamma$ while that of the configurations in PS-w/o does not. For example, under the raw performance, a latency of 500ms is certainly preferred over one of 700ms. However, under the requirement and aspiration that any latency less than 900ms is equally preferred, they are actually equivalent, and PS-w reflects precisely that. As a result, the above leads to two differences between PS-w and PS-w/o.
Firstly, the process of deciding which two configurations are selected as parents for generating new configurations is guided differently (i.e., lines 10–11). Secondly, the environmental selection that determines which configurations are preserved in the next iteration is also guided by a different fitness (i.e., lines 17–18). As we will show, even with such a simple deviation, the resulting outcomes can be radically different depending on the circumstances.

### 2.2. Motivating Scenario

Taking x264 — a configurable video encoder — as a concrete example, a possible requirement scenario could involve the performance requirements (denoted as $\mathbfcal{P}_{1}$) “the PSNR shall be at least 40dB” and “the energy usage shall be at most 80 watts”, where PSNR stands for peak signal-to-noise ratio, which measures the reconstruction quality of images (the larger the PSNR, the better). Here, there is a clear aspiration level of 40dB and 80 watts for the performance attributes PSNR and energy usage, respectively. Indeed, depending on the requirement scenario, the preference for performance deviating from the aspiration level could vary even with a clear aspiration level (as we will discuss in Section 3). For instance, the above example may imply that one would not accept any performance worse than 40dB or 80 watts but prefers any configurations with better PSNR and energy usage. This means that, supposing there are three configurations $\boldsymbol{A}=\{65dB, 30watts\}$, $\boldsymbol{B}=\{80dB, 25watts\}$, and $\boldsymbol{C}=\{35dB, 10watts\}$, configuration $\boldsymbol{C}$, although it has the best energy usage, would be ruled out as it fails to meet the aspiration for PSNR; $\boldsymbol{B}$ would certainly be more ideal under such a requirement scenario since it has better results on both performance objectives than $\boldsymbol{A}$.

In a different requirement scenario, the requirements (denoted as $\mathbfcal{P}_{2}$) may become “the PSNR shall be no worse than 40dB” while “the energy usage shall be no worse than 80 watts”, which implies that one would not accept any performance worse than 40dB or 80 watts, but equally prefers anything that goes beyond 40dB and 80 watts. Here, $\boldsymbol{C}$ is ruled out again, but $\boldsymbol{A}$ and $\boldsymbol{B}$ would become equally preferred as their PSNR and energy usages are better than 40dB and 80 watts, respectively. Of course, the given 40dB and/or 80 watts may well be unrealistic aspirations, i.e., none of the configurations would reach them (or at least none can be found under the possible tuning budget).

Figure 3. The preferred configurations for x264 by PS-w/o and PS-w given different requirement scenarios.

To make the meaning of the above clear for PS-w and PS-w/o, Figure 3 illustrates what configurations are preferred when using PS-w and PS-w/o in the tuning under $\mathbfcal{P}_{1}$ or $\mathbfcal{P}_{2}$. Here, the quality of the configurations produced would need to be evaluated with respect to the requirements, and PS-w prefers precisely what is needed therein. PS-w/o, in contrast, naturally prefers all configurations on the Pareto front. Intuitively, we note that PS-w/o would also prefer some configurations that are preferred by its PS-w counterpart. For example, when comparing Figure 3a and 3c, all the points preferred by PS-w are also preferred by PS-w/o (but not vice versa), hence they should converge to the same degree of satisfaction under $\mathbfcal{P}_{1}$.
In Figure 3b and 3c, although PS-w/o prefers different points to that of PS-w in the aspiration space, they should be able to reach the same degree of satisfaction with respect to the requirements because all configurations within the aspiration space are deemed equivalent when being evaluated by $\mathbfcal{P}_{2}$. Indeed, if both PS-w and PS-w/o can find all their preferred points in the space, then the engineers can simply cherry-pick the fully satisfied ones according to the given performance requirements from the final set of configurations returned. Yet, the unanswered question would be: is the above assumption true and hence there would be no difference regarding whether PS-w or PS-w/o is chosen? The rest of this paper provides an empirical understanding of the above confusion. ## 3\. How Requirements are Handled Here we describe the process of mining, classifying, and analyzing the real- world performance requirements with aspirations. We use Cohen’s Kappa coefficient ($\kappa$) (McHugh, 2012) to mitigate bias between authors — the classification is often regarded as unbiased and sustainable when $\kappa>0.7$ . In a nutshell, Cohen’s Kappa coefficient is generally thought to be a more robust measure than a simple percent agreement calculation between the raters, as it takes into account the possibility of the agreement occurring by chance. In this work, we use the coefficient in two aspects: * • Measure the agreement on which implication category a requirement belongs to (we have $\kappa=0.85$). * • Measure the agreement on which patterns that a paper assumes (we have $\kappa=0.76$). ### 3.1. Real-world Requirements with Aspirations To understand what are the common real-world performance requirements with aspirations and their implications in the industry, in Jan 2021, we mined the publicly available requirement dataset from Zenodo (under the Empirical Software Engineering label), GitHub, and the Google Dataset Search, using a keyword “requirement dataset”, as shown in Figure 4. The results led to 386 items, including duplication and many irrelevant ones which can be easily identified from their titles. As such, we filtered the candidates down to 14, within which we followed the criteria below to extract the most relevant ones for this study: * • The dataset has clearly documented requirement statements for the software systems to be built. * • The dataset contains labeled requirements for performance objectives or there is readily available code to do so. * • To ensure external validity, the dataset contains performance requirements for systems from different domains. Figure 4. Overview of dataset analysis and literature review. The process has resulted in nine shortlisted datasets, based on which we attempted to identify the statements of performance requirements according to the following rules: * • The performance requirement should contain a quantifiable aspiration level, such as “the system shall perform with 1500 users at a time”. In contrast, “the system shall be fast” is too vague to be quantified. * • To ensure fairness when comparing with the PS-w/o, we eliminate the performance requirements that do not prefer one extreme of the objective, such as “the display shall be refreshed every 60 seconds”. This is because such requirements prefer the performance to reach a clear aspiration (e.g., 60 seconds) instead of a maximum/minimum of the performance objective. 
Therefore, in such a case, PS-w should always be preferred, since there is no point in using PS-w/o, which naturally maximizes/minimizes the objectives without taking the aspiration into account. (Note that, indeed, in some cases, the preference of this kind of requirement can be derived by inferring from the context. Using the same example, if the display could not be refreshed because some long-running analyses could not be terminated within 60 seconds, then the preference would be to guarantee the ability to refresh every 60 seconds or less. However, in our cases, most of those requirements come from the PROMISE dataset, which has no extra information other than some sentences describing the requirement. This makes it difficult for us to correctly infer the preferences implied. Hence, in the above example, we stick with the literal meaning that one would prefer, and only prefer, a refresh rate of 60 seconds; no more and no less.)

The above has led us to rule out four datasets that contain no appropriate requirements. Table 1 shows details of the final five datasets used in our study (removing duplication).

Table 1. Performance requirements with aspirations.

Dataset | # Requirements | Link
---|---|---
Do et al. (Do et al., 2019) | 52 | https://github.com/aqd14/ICSR-2019
PROMISE (Menzies et al., 2012) | 48 | https://zenodo.org/record/268542
PURE (Ferrari et al., 2017) | 28 | https://zenodo.org/record/1414117
Shaukat et al. (Shaukat et al., 2018) | 13 | https://zenodo.org/record/1209601
Dalpiaz et al. (Dalpiaz et al., 2019) | 10 | https://zenodo.org/record/3309669

### 3.2. Literature Search of Patterns

As shown in Figure 4, we also conducted a literature search according to the best practice of a systematic literature review in software engineering (Kitchenham et al., 2009), containing a search protocol and inclusion and exclusion criteria. Our goal is to understand a single question: how have the implications of real-world performance requirements with aspirations, which are generic to software systems as identified in Section 3.1, been specifically quantified in current bi-objective software configuration tuning work? Note that we do not intend to be comprehensive, but rather to gather representatives. In Feb 2021, we conducted a full-text search over Google Scholar for papers published since 2010 from the software engineering community (we exclude the system-related papers for better representation in the community), using the focused search string below:

> “requirement” AND (“multi objective” OR “multi goal” OR “multi criteria”) AND (“performance” OR “non-functional”) AND (“configurable software” OR “adaptive software”) AND (“tuning” OR “optimization”)

This gives us 426 papers. We then filtered out patents, inaccessible papers, and any non-English documents, leading to 393 papers. Next, we further extracted the papers by using the following inclusion criteria on the title and abstract:

* • The paper is relevant to tuning the configuration of the software system.
* • The paper seeks to improve or evaluate the performance objectives of the software system.
* • The paper considers performance requirements.
* • The paper is peer-reviewed and is not a survey or tutorial.

A paper was ruled out if it did not meet all the above criteria, which resulted in 107 papers. Then, we removed papers based on the following exclusion criteria by reviewing the content:

* • The considered performance requirements do not have a clear aspiration level.
* • The paper tackles only a single performance objective.
* • The paper does not have quantitative experimental results with clear instructions on how the results were obtained.

A paper was ruled out if it met any of the above criteria. Finally, we obtained 29 papers, as shown in Table 2.

Table 2. Identified papers with aspiration quantification.

Venue | # Papers | Venue | # Papers | Venue | # Papers
---|---|---|---|---|---
TSE (journal) | 3 | JSS (journal) | 6 | TAAS (journal) | 3
ASE (journal) | 2 | ESE (journal) | 1 | ICPE (conference) | 1
ICSE (conference) | 1 | FSE (conference) | 3 | ASE (conference) | 1
SEAMS (symposium) | 6 | ICSA (conference) | 1 | MODELS (conference) | 1

### 3.3. Results Analysis

#### 3.3.1. Number of Performance Objectives

From the review, we found that 26 out of the 29 papers (90%) considered two performance objectives in their tuning process. The remaining three papers take into account three or more. This is a clear sign that two performance objectives remain a state-of-the-art setting for tuning software configuration, which is consistent with the finding from a recent survey of a related field (Chen et al., 2020). Therefore, in this work, we focus on bi-objective software configuration tuning.

#### 3.3.2. Implications

We analyzed all 151 performance requirements with aspirations from Section 3.1, and found three possible implications for the aspiration space of a given performance objective:

* • $\mathbf{\mathcal{I}_{1}}$: Anything in the aspiration space is equally preferred. This gives a clear upper aspiration bound without other information, e.g., “the server will support a maximum of 1,000 simultaneous users”; or there is a lower aspiration bound but clear information has been given for the cases when the performance reaches the aspiration space, e.g., “results shall be returned in under 15 seconds”.
* • $\mathbf{\mathcal{I}_{2}}$: Anything not in the aspiration space is equally non-preferred. For example, “the system shall allow for a minimum of 6 users at the same time”, in which case there is only information for a clear lower aspiration bound.
* • $\mathbf{\mathcal{I}_{3}}$: No information is available with respect to the aspiration space. This often refers to the requirements where there is a clear aspiration level, but no indication about whether it is an upper or lower aspiration bound, while any other information is unavailable. For example, “the system shall cater to 10 simultaneous users”.

The distribution of the implications can be found in Figure 5(a), and we achieve a Kappa coefficient of $\kappa=0.85$ for this.

(a) # requirements per implication. (b) # papers per pattern. (c) Mappings:

Pattern | Implication
---|---
$\boldsymbol{p}_{1}$ | $\mathcal{I}_{1}$, $\mathcal{I}_{3}$
$\boldsymbol{p}_{2}$ | $\mathcal{I}_{1}$, $\mathcal{I}_{2}$, $\mathcal{I}_{3}$
$\boldsymbol{p}_{3}$ | $\mathcal{I}_{2}$, $\mathcal{I}_{3}$

Figure 5. Distribution of implications, patterns and their mappings (six papers consider more than one pattern).

Figure 6. Requirement patterns with (and without) aspiration from the literature. $\alpha$ and $\beta$ denote the lower and upper bound of the performance objective, respectively. $d$ is the aspiration level and the aspiration space has been shaded.

#### 3.3.3. Patterns

Next, with the above implications in mind, we seek to understand how they are quantified within the 29 papers identified. This led to three state-of-the-art patterns for the functions that quantify requirements with an aspiration level (assuming the lower bound is the optimum).
Suppose that $\alpha$ and $\beta$ denote the lower and upper bound of the performance objective, respectively, and $d$ is the aspiration level. The patterns and their quantifications are shown in Figure 6 and explained below:

* • $\boldsymbol{p_{1}}$: The performance in the aspiration space is equally good; otherwise, there is a certain degree of tolerance (Figure 6b). The function can be formulated as:

(3) $\displaystyle\boldsymbol{p_{1}}(x)=\begin{cases}\frac{\beta-x}{\beta-d}&x>d\\ 1&x\leq d\end{cases}$

* • $\boldsymbol{p_{2}}$: The performance in the aspiration space is equally good, while anything outside the space is unacceptable (Figure 6c), such that:

(4) $\displaystyle\boldsymbol{p_{2}}(x)=\begin{cases}0&x>d\\ 1&x\leq d\end{cases}$

* • $\boldsymbol{p_{3}}$: The optimal performance is preferred, while anything outside the aspiration space is unacceptable (Figure 6d), which is defined as:

(5) $\displaystyle\boldsymbol{p_{3}}(x)=\begin{cases}0&x>d\\ \frac{d-x}{d-\alpha}&x\leq d\end{cases}$

Similarly, we can also formalize a requirement with no clear aspiration level involved (e.g., "the latency shall be small"), denoted as $\boldsymbol{p_{0}}$, which is illustrated in Figure 6a and can be formulated as follows:

(6) $\displaystyle\boldsymbol{p_{0}}(x)=\frac{\beta-x}{\beta-\alpha}$

The distribution of the patterns is shown in Figure 5(b), where we have $\kappa=0.76$, which is substantial (McHugh, 2012). Through normalization in those patterns, the raw measurement of a performance objective is transformed into a satisficing degree with respect to a given aspiration space (if any), ranging between 0 and 1, where the latter means fully satisfied. As such, the transformation depends on the assumption of satisficing over measurements included or excluded by the aspiration space, which is what distinguishes the patterns. At this point, we can immediately see the mappings between the patterns and the implications extracted from the real-world dataset. These mappings are illustrated in Figure 5(c), from which we see that each pattern, except $\boldsymbol{p_{0}}$, can fit with at least two implications from the real-world requirements. For example, $\boldsymbol{p_{1}}$ can fit with $\mathcal{I}_{1}$ and $\mathcal{I}_{3}$: the former prefers anything within the aspiration space and specifies nothing about the other extreme, while the latter carries no information at all, so one needs to rely on an assumption when quantifying $\mathcal{I}_{3}$ to guide the search, meaning that it can fit with all three patterns. From the above, it is confirmed that there exist patterns in current work which can reflect the implications of real-world performance requirements and their aspirations. We therefore seek to examine all of them in our empirical study.

Figure 7. Example of using the patterns for evaluating the goodness of configurations and guiding PS-w in the transformed space.
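To make the four quantification functions and the transformed-space evaluation of Figure 7 concrete, the following minimal Python sketch implements Equations (3)-(6) and compares two hypothetical configurations by Pareto dominance on their satisficing degrees. It is our own illustration under made-up bounds and aspiration levels (the names p0-p3, dominates, and the example objectives are ours), not code from the study; both objectives are assumed to be minimized, matching the "lower bound is optimum" convention above.

```python
def p0(x, lower, upper):
    # No aspiration level (Equation 6): plain normalization, lower bound is optimum.
    return (upper - x) / (upper - lower)

def p1(x, upper, d):
    # Equally good inside the aspiration space, linear tolerance outside (Equation 3).
    return 1.0 if x <= d else (upper - x) / (upper - d)

def p2(x, d):
    # Equally good inside the aspiration space, unacceptable outside (Equation 4).
    return 1.0 if x <= d else 0.0

def p3(x, lower, d):
    # Prefer the optimum inside the aspiration space, unacceptable outside (Equation 5).
    return (d - x) / (d - lower) if x <= d else 0.0

def dominates(a, b):
    # Pareto dominance on satisficing degrees (larger is better).
    return all(ai >= bi for ai, bi in zip(a, b)) and any(ai > bi for ai, bi in zip(a, b))

# Two hypothetical minimized objectives: latency in [10, 200] ms with pattern p1
# (aspiration 50 ms) and memory in [20, 80] MB with pattern p3 (aspiration 60 MB).
patterns = (lambda x: p1(x, upper=200, d=50), lambda x: p3(x, lower=20, d=60))
configs = {"A": (30, 40), "B": (45, 55), "C": (120, 35)}  # raw (latency, memory)

scores = {k: tuple(p(x) for p, x in zip(patterns, v)) for k, v in configs.items()}
print(scores)                               # satisficing degrees in [0, 1]
print(dominates(scores["A"], scores["B"]))  # True: A and B tie on p1, A is better on p3
```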
### 3.4. Respecting Requirements and Aspiration in Software Configuration Tuning

While the above requirement patterns are the key to evaluating what is "better" or "worse" in the set of configurations produced by any Pareto optimizer and optimization model, they directly influence the behavior of PS-w (they correspond to the $p_{n}$ in Equation (1)) but not that of PS-w/o. Most importantly, those patterns allow us to precisely quantify which configuration(s) are the best amongst those produced by the two optimization models, given a set of requirements and aspirations. Figure 7 shows an example of evaluating the configurations (and guiding PS-w) in a transformed space when taking the requirements and aspirations into account, i.e., energy usage with $\boldsymbol{p_{1}}$ and an aspiration of 80 watts, and PSNR with $\boldsymbol{p_{3}}$ and an aspiration of 40 dB. Here, we certainly prefer the points within the aspiration space over those outside it. However, among the points within the aspiration space, we only prefer those with better PSNR, while their energy usage is deemed equivalent (due to the implications of $\boldsymbol{p_{1}}$ and $\boldsymbol{p_{3}}$). This is difficult to assess and quantify in the original space (Figure 7, left), since points that are non-dominated by each other (in the sense of the original objective values) are naturally considered equivalent when the requirements and aspirations are not involved. Therefore, the actually most preferred point (highlighted by the arrow) is not considered the best. In contrast, it becomes immediately obvious which point is the best in the transformed space, where the energy and PSNR values are converted by the equations for $\boldsymbol{p_{1}}$ and $\boldsymbol{p_{3}}$, respectively. Now, clearly, the most preferred point is the only non-dominated point therein (Figure 7, right).

## 4\. Empirical Study Design

As shown in Figure 8, our methodology consists of the following steps:

1. Step 1: Assuming that a requirement scenario has been negotiated by the software engineers and stakeholders, we quantify the requirements such that they are ready for the Pareto optimizers, i.e., in the form of a combination of patterns from Section 3 and their aspiration space. In particular, to form a requirement scenario, the given combination of patterns is denoted as a two-dimensional vector $\mathbfcal{P}$, such that at least one element comes with a clear aspiration level, e.g., $\mathbfcal{P}=\{\boldsymbol{p}_{0},\boldsymbol{p}_{3}\}$. In this work, we examine all possible combinations of the patterns (including $\boldsymbol{p_{0}}$). Under each combination, we also consider different aspiration spaces for our RQs; this will be further elaborated in Section 4.2.

Figure 8. Overview of the empirical study.

2. Step 2: Run both PS-w/o and PS-w on different software systems. In particular, when formulating the performance objectives, PS-w/o is steered by the raw measurements only (this is effectively identical to using $\boldsymbol{p_{0}}$ for all performance objectives), while PS-w is designed to be guided by the given vector of patterns $\mathbfcal{P}$ as the new objectives. To ensure fairness, both optimization models are examined under the same optimizer, and we consider three representative optimizers in this work, i.e., NSGA-II (Deb et al., 2002), IBEA (Zitzler and Künzli, 2004), and MOEA/D (Zhang and Li, 2007).

3. Step 3: Measure the system as the search proceeds until the tuning budget has been exhausted; repeat 100 times.

4. Step 4: Evaluate the resulting set of configurations using $\mathbfcal{P}$ as part of the Quality Evaluation phase.

5. Step 5: Go back to Step 1 if there are more combinations of patterns and aspirations to examine.

Table 3.
The considered requirement scenarios (in terms of the combination of the patterns identified from Section 3) and their example interpretations. The interpretations are based on the assumption that the performance objectives are $\\{latency,throughput\\}$ with possible aspiration levels $d_{1}$ and $d_{2}$, respectively. Possible $\mathbfcal{P}$ | Example Interpretation ---|--- $\\{\boldsymbol{p_{0}},\boldsymbol{p_{1}}\\}$ | Prefer better latency and throughput better than $d_{2}$, but any configurations better than $d_{2}$ are equally preferred; willing to accept throughput worse than $d_{2}$. $\\{\boldsymbol{p_{1}},\boldsymbol{p_{0}}\\}$ | Prefer better throughput and latency better than $d_{1}$, but any configurations better than $d_{1}$ are equally preferred; willing to accept latency worse than $d_{1}$. $\\{\boldsymbol{p_{0}},\boldsymbol{p_{2}}\\}$ | Prefer better latency and throughput better than $d_{2}$, but any configurations better than $d_{2}$ are equally preferred; do not accept throughput worse than $d_{2}$. $\\{\boldsymbol{p_{2}},\boldsymbol{p_{0}}\\}$ | Prefer better throughput and latency better than $d_{1}$, but any configurations better than $d_{1}$ are equally preferred; do not accept latency worse than $d_{1}$. $\\{\boldsymbol{p_{0}},\boldsymbol{p_{3}}\\}$ | Prefer better latency and throughput; do not accept throughput worse than $d_{2}$. $\\{\boldsymbol{p_{3}},\boldsymbol{p_{0}}\\}$ | Prefer better latency and throughput; do not accept latency worse than $d_{1}$. $\\{\boldsymbol{p_{1}},\boldsymbol{p_{1}}\\}$ | Prefer latency better than $d_{1}$ and throughput better than $d_{2}$, but any configurations better than $d_{1}$ and $d_{2}$ are equally preferred; willing to accept latency and throughput worse than $d_{1}$ and $d_{2}$, respectively. $\\{\boldsymbol{p_{2}},\boldsymbol{p_{2}}\\}$ | Prefer latency better than $d_{1}$ and throughput better than $d_{2}$, but any configurations better than $d_{1}$ and $d_{2}$ are equally preferred; do not accept latency and throughput worse than $d_{1}$ and $d_{2}$, respectively. $\\{\boldsymbol{p_{3}},\boldsymbol{p_{3}}\\}$ | Prefer better latency and throughput; do not accept latency and throughput worse than $d_{1}$ and $d_{2}$, respectively. $\\{\boldsymbol{p_{1}},\boldsymbol{p_{2}}\\}$ | Prefer latency better than $d_{1}$ and throughput better than $d_{2}$, but any configurations better than $d_{1}$ and $d_{2}$ are equally preferred; willing to accept latency worse than $d_{1}$ but do not accept throughput worse than $d_{2}$. $\\{\boldsymbol{p_{2}},\boldsymbol{p_{1}}\\}$ | Prefer latency better than $d_{1}$ and throughput better than $d_{2}$, but any configurations better than $d_{1}$ and $d_{2}$ are equally preferred; willing to accept throughput worse than $d_{2}$ but do not accept latency worse than $d_{1}$. $\\{\boldsymbol{p_{1}},\boldsymbol{p_{3}}\\}$ | Prefer better throughput and latency better than $d_{1}$, but any configurations better than $d_{1}$ are equally preferred; willing to accept latency worse than $d_{1}$ but do not accept throughput worse than $d_{2}$. $\\{\boldsymbol{p_{3}},\boldsymbol{p_{1}}\\}$ | Prefer better latency and throughput better than $d_{2}$, but any configurations better than $d_{2}$ are equally preferred; willing to accept throughput worse than $d_{2}$ but do not accept latency worse than $d_{1}$. 
$\\{\boldsymbol{p_{2}},\boldsymbol{p_{3}}\\}$ | Prefer better throughput and latency better than $d_{1}$, but any configurations better than $d_{1}$ are equally preferred; do not accept latency worse than $d_{1}$ nor throughput worse than $d_{2}$. $\\{\boldsymbol{p_{3}},\boldsymbol{p_{2}}\\}$ | Prefer better latency and throughput better than $d_{2}$, but any configurations better than $d_{2}$ are equally preferred; do not accept latency worse than $d_{1}$ nor throughput worse than $d_{2}$. It is worth noting that, although the patterns from Section 3 are for single performance objective, they can be arbitrarily combined for the bi-objective software configuration tuning in the Scenario Identification phase of Step 1 (Calinescu et al., 2017; Martens et al., 2010; Gerasimou et al., 2016; Calinescu et al., 2018), as illustrated in Table 3. In the Configuration Tuning phase (Step 2 and 3), the patterns require normalization using the lower and/or upper bound (except for $\boldsymbol{p_{0}}$ and $\boldsymbol{p_{2}}$). However, since these are often unknown, we adopt a dynamic method wherein the raw measurements are normalized using the maximal and minimal values found so far as the tuning proceeds, which is common in SBSE for software configuration tuning (Shahbazian et al., 2020; Bowers et al., 2018). We record the raw measurements of each configuration throughout the tuning to efficiently utilize the tuning budget (Section 4.3.2). To mitigate stochastic bias, we repeat each experiment 100 runs. The study is conducted on a cluster of machines each with Intel i5 six cores CPU at 2.9GHz and 8GB memory, running numerous experiments in parallel over the course of five months ($24\times 7$). ### 4.1. Subject Software Systems We conduct our study on a set of real-world highly configurable software systems and environments that have been widely studied in existing work (Jamshidi et al., 2018; Nair et al., 2020; Jamshidi and Casale, 2016; Chen and Li, 2021). These are selected according to the criteria below: 1. (1) To ensure that the search landscape is not too trivial to be explored, the system should contain a mix of binary and enumerative configuration options. 2. (2) A full exploration of the search space is infeasible, i.e., it cannot be done within 24 hours. 3. (3) There are clear instructions on how to set up the benchmark under which the system will be measured. 4. (4) If the same system of an environment has been used with a different set of configuration options, choose those with relatively higher complexity, i.e., larger search space and more configuration options. For example, Storm can be tuned under different workload benchmarks, and we choose WordCount and RollingSort as the two that satisfy the above criteria. We firstly eliminated LLVM from (Nair et al., 2020), as it violates Criterion (1). Similarly, sort-256 and noc-CM-log is also ruled out due to their rather small search space which can be exhaustively explored in 24 hours, i.e., Criterion (2). We cannot consider the system SaC as there is no clear instruction on under what benchmark it can be profiled, which violates Criterion (3). We also noticed that Storm and Keras (with DNN or LSTM) have been much more commonly used than others, but with different configuration options and environments. Therefore, according to Criterion (4), we use the settings that lead to a much larger search space and more options. 
### 4.1. Subject Software Systems

We conduct our study on a set of real-world, highly configurable software systems and environments that have been widely studied in existing work (Jamshidi et al., 2018; Nair et al., 2020; Jamshidi and Casale, 2016; Chen and Li, 2021). These are selected according to the criteria below:

1. (1) To ensure that the search landscape is not too trivial to explore, the system should contain a mix of binary and enumerative configuration options.
2. (2) A full exploration of the search space is infeasible, i.e., it cannot be done within 24 hours.
3. (3) There are clear instructions on how to set up the benchmark under which the system will be measured.
4. (4) If the same system or environment has been used with different sets of configuration options, choose the one with relatively higher complexity, i.e., a larger search space and more configuration options. For example, Storm can be tuned under different workload benchmarks, and we choose WordCount and RollingSort as the two that satisfy the above criteria.

We firstly eliminated LLVM from (Nair et al., 2020), as it violates Criterion (1). Similarly, sort-256 and noc-CM-log are also ruled out due to their rather small search spaces, which can be exhaustively explored within 24 hours, i.e., Criterion (2). We cannot consider the system SaC, as there is no clear instruction on under what benchmark it can be profiled, which violates Criterion (3). We also noticed that Storm and Keras (with DNN or LSTM) have been much more commonly used than others, but with different configuration options and environments. Therefore, according to Criterion (4), we use the settings that lead to a much larger search space and more options.

As shown in Table 4, the selected software systems come from diverse domains, e.g., video encoding, stream processing, and deep/machine learning, while having different performance objectives, scales, and search spaces. Their measurements are also expensive (each measurement consists of 5 repeated samples, of which the median value is used); e.g., XGBoost needs 2,807 hours to explore less than 1% of its search space. We keep the same performance objectives, configuration options, and option ranges as studied in the prior work that made use of these systems, e.g., (Jamshidi et al., 2018; Nair et al., 2020; Jamshidi and Casale, 2016; Chen and Li, 2021), since those have been shown to be the key ones for the software systems under the related environment. As a result, although some software systems are the same, the actual search spaces differ, as with Storm/WC and Storm/RS. In particular, following what has been used in previous work, the environments/workloads we consider are:

Table 4. Configurable software systems studied. We run all software systems under their standard benchmarks. Storm and Keras (with DNN) use two benchmarks and three datasets, respectively.

Software | Domain | Performance Objectives | # Options | Search Space | Used By
---|---|---|---|---|---
Trimesh | Mesh solver | Latency and # Iteration | 13 | 239,260 | (Nair et al., 2020; Chen and Li, 2021)
x264 | Video encoding | PSNR and Energy Usage | 17 | 53,662 | (Nair et al., 2020; Chen and Li, 2021)
Storm/WC | Stream processing | Latency and Throughput | 6 | 2,880 | (Nair et al., 2020; Chen and Li, 2021; Jamshidi and Casale, 2016; Jamshidi et al., 2018)
Storm/RS | Stream processing | Latency and Throughput | 6 | 3,839 | (Nair et al., 2020; Chen and Li, 2021; Jamshidi and Casale, 2016; Jamshidi et al., 2018)
Keras/Adiac | Deep learning | AUC and Inference Time | 13 | 3.99$\times 10^{13}$ | (Jamshidi et al., 2018)
Keras/DSR | Deep learning | AUC and Inference Time | 13 | 3.32$\times 10^{13}$ | (Jamshidi et al., 2018; Chen and Li, 2021)
Keras/SA | Deep learning | AUC and Inference Time | 13 | 2.66$\times 10^{13}$ | (Jamshidi et al., 2018)
XGBoost | Machine learning | Accuracy and Training Time | 13 | 2.88$\times 10^{10}$ | (Jamshidi et al., 2018)

* • Trimesh: we use the ShapeNet dataset, which contains 51,300 unique 3D models. In this work, we randomly sample 100 models as the standard benchmark.
* • x264: the benchmark used is a standard video of 1GB in size, chosen randomly.
* • Storm/WC: we use WordCount as the benchmark. This is a typical, simple streaming example where Storm is used to keep track of the words streaming in and their counts. WordCount generates a CPU-intensive workload.
* • Storm/RS: similar to Storm/WC, here we use RollingSort as the benchmark. Unlike WordCount, RollingSort generates a memory-intensive workload.
* • Keras/Adiac: we use the Deep Neural Network (DNN) from the Keras software and run it on the Adiac dataset. The dataset concerns the automatic identification of diatoms (unicellular algae) among 31 classes, with a training and testing size of 390 and 391, respectively.
* • Keras/DSR: we use the DNN from the Keras software and run it on the DiatomSizeReduction dataset. The dataset concerns the prediction of four types of diatoms, with a training and testing size of 16 and 306, respectively.
* • Keras/SA: we use the DNN from the Keras software and run it on the ShapesAll dataset.
The dataset aims to test contour/image and skeleton-based descriptors; there are 60 classes, with a training and testing size of 600 each.
* • XGBoost: we use the Covertype dataset, which concerns predicting forest cover types from 54 cartographic variables only. The size of the dataset is 581,012, and we follow a 70%-30% training and testing split.

Indeed, the analyzed dataset and literature in Section 3 may not specifically target the software systems considered in this work. However, the extracted implications and patterns are rather generic, such that they can be applied to different cases. Further, some widely studied performance objectives (from both the dataset and the literature) are overwhelmingly applicable. For example, latency- and throughput-related requirements (with different aspiration levels) are prevalent for a wide range of software (Nair et al., 2020).

Table 5. Aspiration levels and spaces for the configurable software systems studied (used for all combinations of patterns). $l$, $r$, $c$, and $u$ denote left-shifted, right-shifted, centered, and unrealistic aspirations, respectively.

Software | Performance Objectives | $l$ | $r$ | $c$ | $u$
---|---|---|---|---|---
Trimesh | $\{$Latency (s), # Iterations$\}$ | $\{81,4\}$ | $\{461,15\}$ | $\{135,7\}$ | $\{37,501\}$
x264 | $\{$PSNR (dB), Energy Usage (W)$\}$ | $\{50,3680\}$ | $\{37,462\}$ | $\{46,1260\}$ | $\{100,34\}$
Storm/WC | $\{$Throughput (msgs/m), Latency (ms)$\}$ | $\{16473,15677\}$ | $\{994,5\}$ | $\{8982,101\}$ | $\{34740,3\}$
Storm/RS | $\{$Throughput (msgs/m), Latency (ms)$\}$ | $\{1.3\times 10^{5},7819\}$ | $\{3006,5\}$ | $\{3.7\times 10^{4},126\}$ | $\{2.3\times 10^{5},1.9\}$
Keras/Adiac | $\{$AUC, Inference Time (ms)$\}$ | $\{0.030,44\}$ | $\{0.017,0.05\}$ | $\{0.028,3\}$ | $\{0.292,0.03\}$
Keras/DSR | $\{$AUC, Inference Time (ms)$\}$ | $\{0.307,123\}$ | $\{0.107,0.12\}$ | $\{0.300,25\}$ | $\{0.581,0.031\}$
Keras/SA | $\{$AUC, Inference Time (ms)$\}$ | $\{0.167,21\}$ | $\{0.157,0.07\}$ | $\{0.160,6\}$ | $\{0.325,0.04\}$
XGBoost | $\{$Accuracy (%), Training Time (s)$\}$ | $\{80,42\}$ | $\{54,3\}$ | $\{72,8\}$ | $\{92,1\}$

### 4.2. Aspiration Space

To improve external validity, we consider aspiration levels that draw two types of aspiration space under two performance objectives: realistic and unrealistic ones. To that end, for each software system, we run all the Pareto optimizers for three hours each to obtain a landscape that contains an approximated Pareto front. We do so by ensuring that the obtained front is reasonably converged, i.e., increasing the budget only marginally changes the results. We then set the aspiration spaces based on such a front, as summarized in Table 5.

#### 4.2.1. Realistic Aspiration Space

For software configuration tuning with two performance objectives, we say an aspiration space is realistic if there is at least one configuration that can reach the aspiration levels of both performance objectives. Using Storm/RS as an example in Figure 9, for the realistic spaces under each combination of patterns, we set three aspiration spaces based on their positions in the objective space: left-shifted ($l$), right-shifted ($r$), and centered ($c$).
In particular, $l$ is defined by using the value of the 20th percentile for throughput and the value of the 80th percentile for latency as the corresponding aspiration levels; similarly, $r$ uses the value of the 20th percentile for latency and the value of the 80th percentile for throughput; finally, $c$ uses the value of the 50th percentile for both performance objectives. Clearly, despite covering diverse regions in the overall space of performance objectives, all those spaces contain at least one point (configuration). Note that the aspiration space is applicable to any combination of patterns with and without $\boldsymbol{p_{0}}$ (Figure 9a and Figure 9b, respectively), as long as there is a clear aspiration level for at least one performance objective.

#### 4.2.2. Unrealistic Aspiration Space

Since the aspiration level/space is negotiated beforehand, it may be unrealistic. In this work, we refer to an unrealistic aspiration space as the situation wherein the aspiration levels of the two performance objectives can be reached at most one at a time, but not both simultaneously. For example, in the case of the two performance objectives from Figure 9a, $u$ is an unrealistic aspiration space: the level is achievable for either of the two objectives individually (as indicated by the dashed lines), but not for both, as there is no point (configuration) residing in the space. As a result, it is not applicable when only one performance objective has a clear aspiration, e.g., in Figure 9b. To define such a space, we set the value of the 5th percentile of both performance objectives as the corresponding aspiration levels, which we have found to be sufficient to create an unrealistic aspiration space for each system.

Figure 9. Distinct aspiration spaces (shaded in different colors) under different combinations of patterns for Storm/RS.

### 4.3. Tuning Settings

#### 4.3.1. Pareto Optimizer

We consider three Pareto optimizers, i.e., NSGA-II (Deb et al., 2002), IBEA (Zitzler and Künzli, 2004), and MOEA/D (Zhang and Li, 2007), because:

* • They have been widely used for software configuration tuning in prior work (Singh et al., 2016; Calinescu et al., 2017; Martens et al., 2010; Gerasimou et al., 2016; Calinescu et al., 2018; Chen et al., 2018b).
* • They are representatives of three fundamentally different frameworks for Pareto search (Emmerich and Deutz, 2018).
* • They can, but do not have to, rely on surrogate model(s) (Chen et al., 2018b), which greatly reduces the noise in our empirical study.

All the above optimizers are adopted for both PS-w and PS-w/o based on the implementations in jMetal (Durillo and Nebro, 2011).

#### 4.3.2. Tuning Budget

In this work, we set a budget of one hour for each run, as commonly used for expensive SBSE problems (Li et al., 2020b). However, relying directly on time as the termination criterion can suffer from severe interference during the tuning, as numerous experiments need to be run in parallel. To prevent this, for each software system, we did the following to convert the one-hour tuning budget into a number of unique measurements (a sketch follows the list):

1. (1) incrementally (100 each step) measure distinct configurations on a dedicated machine using random sampling until the one-hour time budget is exhausted;
2. (2) repeat the above 5 times and collect the numbers of measurements;
3. (3) the median of the 5 repeats serves as the key termination criterion of the tuning thereafter (in Table 6).
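A minimal sketch of this conversion is shown below; it is our own illustration (the helpers sample_config and measure are hypothetical placeholders for the system-specific sampling and benchmarking steps), not the scripts used in the study.

```python
import time
import statistics

def measurements_in_one_hour(sample_config, measure, budget_seconds=3600, step=100):
    """Count how many distinct configurations can be measured within the time
    budget, measuring incrementally in batches of `step` via random sampling."""
    start, seen = time.time(), set()
    while time.time() - start < budget_seconds:
        for _ in range(step):
            cfg = sample_config()   # returns a hashable configuration
            if cfg not in seen:
                seen.add(cfg)
                measure(cfg)        # expensive: run the benchmark once
    return len(seen)

def budget_as_measurements(sample_config, measure, repeats=5):
    """Median over repeated trials, used as the termination criterion (cf. Table 6)."""
    counts = [measurements_in_one_hour(sample_config, measure) for _ in range(repeats)]
    return statistics.median(counts)
```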
Note that in each run of the tuning, we cached the measurement of every distinct configuration for direct reuse. Hence, only distinct configurations consume the budget.

Table 6. Population size and measurement tuning budget.

Software | Population Size | # Measurements | Software | Population Size | # Measurements
---|---|---|---|---|---
Trimesh | 10 | 500 | x264 | 50 | 1,500
Storm/WC | 50 | 500 | Storm/RS | 30 | 700
Keras/Adiac | 50 | 700 | Keras/DSR | 60 | 500
Keras/SA | 60 | 500 | XGBoost | 30 | 300

#### 4.3.3. Parameters

For the three optimizers in all cases, we apply binary tournament selection, boundary mutation, and uniform crossover, as used in prior work (Chen et al., 2018b; Chen et al., 2019). The mutation and crossover rates are set to 0.1 and 0.9, respectively, which also follows the most common setting for software configuration tuning (Chen et al., 2018b; Chen et al., 2019). Other specific settings for IBEA and MOEA/D are kept at their default values, which have been shown to be effective (Chen et al., 2018b). For each system, we pragmatically set the population size by:

1. (1) examining different sizes in pilot runs under the budget in Table 6, i.e., $\{10,20,\ldots,100\}$, over all optimizers and all combinations of patterns and aspirations (on both PS-w and PS-w/o);
2. (2) recording the average change rate of the population over the last 10% of generations using $g={1\over k}\times\sum^{k}_{i=0}{c_{i}\over s}$, where $k$ is the number of generations in the last 10%, $c_{i}$ denotes the number of configurations in the $i$th generation that differ from those in the $(i-1)$th generation, and $s$ is the population size (a small sketch of this computation is given at the end of this subsection);
3. (3) using the largest population size for which $g\leq 0.1$ across all conditions (or 10, if no size satisfies this constraint).

The results are also shown in Table 6. In this way, we seek to reach a balance between convergence (smaller population change) and diversity (larger population size) under the given tuning budget; that is, increasing the budget is unlikely to change the result. This has been practiced in (Gerasimou et al., 2018; Chen and Li, 2021).
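The change-rate computation in step (2) can be sketched as follows; this is our own rendering (the helper name change_rate is ours), assuming the sum runs over the $k$ generation-to-generation comparisons within the last 10% of generations and that a "different" configuration is one not present in the previous generation.

```python
def change_rate(generations, population_size):
    """Average population change rate g over consecutive generations, where each
    generation is given as a list of hashable configurations."""
    k = len(generations) - 1              # generation-to-generation comparisons
    total = 0.0
    for prev, curr in zip(generations, generations[1:]):
        c_i = len(set(curr) - set(prev))  # configurations not seen in the previous generation
        total += c_i / population_size
    return total / k

# Example with population size s = 4 over the last three generations.
gens = [
    [("opt_a", 1), ("opt_a", 2), ("opt_b", 0), ("opt_b", 3)],
    [("opt_a", 1), ("opt_a", 2), ("opt_b", 0), ("opt_b", 7)],  # one new configuration
    [("opt_a", 1), ("opt_a", 2), ("opt_b", 0), ("opt_b", 7)],  # unchanged
]
print(change_rate(gens, population_size=4))  # (1/4 + 0/4) / 2 = 0.125
```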
### 4.4. Analysis and Comparison

#### 4.4.1. Metric

To compare the two optimization models and determine which is better in this work, we need to measure the "best" with two conditions in mind:

* • Condition 1: The metric needs to be able to comprehensively compare the different sets of configurations produced by the Pareto optimizers, covering diverse quality aspects such as convergence and diversity.
* • Condition 2: The metric should be able to reflect the given requirement scenarios, i.e., take the patterns identified in Section 3 into account when conducting the comparisons and evaluations.

To that end, we use Hypervolume (HV) (Zitzler and Thiele, 1998; Zitzler et al., 2007) as the basic metric to assess the quality of the configuration set produced in each run. In a nutshell, HV measures the volume between all points of a configuration set and a reference point (usually a nadir point); the larger the volume, the better the convergence and diversity that the set achieves. HV is chosen in this work because:

* • HV is a comprehensive metric that covers all quality aspects of a configuration set, i.e., convergence, uniformity, spread, and cardinality (Li and Yao, 2019; Li et al., 2022), which meets Condition 1.
* • HV also does not require a reference Pareto front and is Pareto compliant, which fits our case, as the true Pareto front is unknown. (Generally speaking, a quality indicator being Pareto compliant means that its evaluation result does not conflict with the Pareto dominance relation between two solution sets. More strictly, if a solution set $A$ is better (Zitzler et al., 2003) than $B$, i.e., for any solution in $B$ there exists one solution in $A$ that covers (dominates or is equivalent to) it, and there exists at least one solution in $A$ that is not covered by any solution in $B$, then $A$ is always evaluated as better than $B$ by the indicator.)
* • By following the guidelines proposed by Li et al. (Li et al., 2018; Li et al., 2022), we landed on HV as the appropriate metric for our SBSE problem.

(a) Evaluating with HV in the original space (b) Evaluating with HV in the transformed space
Figure 10. Evaluation of HV with and without requirements/aspirations.

Since we are interested in a requirement scenario that has a specific combination of patterns and aspiration space ($\mathbfcal{P}$) in the objective space, the original HV, which always favors the configurations that are close to the entire Pareto front, is no longer suitable. Therefore, we need to transfer these preferences into the HV, following the guidance by Li et al. (Li et al., 2022) and leveraging the patterns and quantification from Section 3 (thus satisfying Condition 2). Using the same example from Section 2.2, as shown in Figure 10, the requirement scenario is that the stakeholders prefer better PSNR and energy usage better than 80 watts, but any configurations better than 80 watts are equally preferred; they are willing to accept energy usage worse than 80 watts but do not accept PSNR worse than 40 dB. This means that, among the points in the aspiration space, the ones with better PSNR are preferred. Therefore, point $\boldsymbol{A}$ is the best under the requirements and should contribute the most to the chosen metric. However, directly applying HV would let some configurations that are less preferred under the requirements contribute significantly to the HV value (Figure 10a). This would misleadingly evaluate some sets that contain many non-preferred points as having a very good HV value. In contrast, when the information of the patterns is transferred before using HV (i.e., in the transformed space), the above requirements and aspirations are better complied with, as $\boldsymbol{A}$ is certainly the point that contributes the most, and the other, non-preferred points tend to make little or no contribution (Figure 10b). To that end, we extend the HV in this work. Suppose that there are $m$ performance objectives (we have $m=2$ in this work) and $\mathbfcal{A}$ is a produced configuration set wherein the vector of a configuration's raw measurements is $\boldsymbol{\overline{x}_{i}}=\{x_{1},x_{2},\ldots,x_{m}\}$; we calculate HV based on the converted satisficing values of $\boldsymbol{\overline{x}_{i}}$ according to the given $\mathbfcal{P}$.
We call it the aspiration-aware HV (dubbed A-HV), which is formulated as:

(7) $\text{A-HV}(\mathbfcal{A})=\lambda\Big(\bigcup_{\boldsymbol{\overline{x}_{i}}\in\mathbfcal{A}}\{\boldsymbol{v}\mid\mathbfcal{P}(\boldsymbol{\overline{x}_{i}})\prec\boldsymbol{v}\prec\boldsymbol{r}\}\Big)$

where $\lambda$ is the Lebesgue measure that quantifies the volume (Zitzler and Thiele, 1998), as used in the original HV, and $\boldsymbol{r}$ is the reference nadir point, which is often taken as 1.1 times the range of the nondominated set (Li et al., 2022); in our case, this is $\{-0.1,-0.1\}$, as $\mathbfcal{P}(\boldsymbol{\overline{x}_{i}})$ converts the outputs to $[0,1]$. Like HV, a higher A-HV value is better. To ensure a fair comparison with A-HV, we use the minimum and/or maximum values (of each performance objective) from all experiments for the posterior normalization in the patterns. To enable a more intuitive exposition, we report the % gain of the A-HV when considering requirements and aspirations in the tuning, i.e., PS-w, over that of PS-w/o on each run, which is defined as:

(8) $\text{\% Gain}={{x_{i}-y_{i}}\over{y_{i}}}\times 100$

whereby $x_{i}$ and $y_{i}$ are the A-HV values at the $i$th run for PS-w and PS-w/o, respectively, in their sorted lists. Clearly, a positive % gain indicates that the aspirations are helpful (PS-w is better), while a negative value implies they are harmful (PS-w/o is better); zero gain means an identical result.

#### 4.4.2. Statistical Validation

We use standard methods to interpret the significance of the results over the 100 runs in each case (Arcuri and Briand, 2011; Kampenes et al., 2007):

* • Wilcoxon test: We apply the Wilcoxon test (Wilcoxon, 1945) with $\alpha=0.05$ (Arcuri and Briand, 2011) to investigate the statistical significance of the A-HV comparisons over all 100 runs, as it is a non-parametric statistical test that makes few assumptions about the data distribution and has been recommended in software engineering research for pair-wise comparisons (Arcuri and Briand, 2011).
* • $\mathbf{\hat{A}_{12}}$ effect size: To ensure that a $p<0.05$ is not caused by a trivial number of samples, we apply $\hat{A}_{12}$ (Vargha and Delaney, 2000) to measure the effect size. In this work, $\hat{A}_{12}>0.5$ denotes that PS-w wins, wherein it has better A-HV for more than 50% of the runs. $\hat{A}_{12}\geq 0.6$ or $\hat{A}_{12}\leq 0.4$ indicates a non-trivial effect size. Since there are 100 runs (instead of the commonly used 30), we use a stricter interpretation by which $0.6\leq\hat{A}_{12}<0.7$ ($0.3<\hat{A}_{12}\leq 0.4$), $0.7\leq\hat{A}_{12}<0.8$ ($0.2<\hat{A}_{12}\leq 0.3$), and $\hat{A}_{12}\geq 0.8$ ($\hat{A}_{12}\leq 0.2$) indicate small, medium, and large effects, respectively.
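Putting Equations (7) and (8) and the $\hat{A}_{12}$ statistic together, the sketch below illustrates the evaluation pipeline for the two-objective case: raw measurements are mapped through the pattern vector, the dominated area relative to the reference point $\{-0.1,-0.1\}$ is computed, and the % gain and effect size are derived. It is our own simplified rendering (a direct 2-D sweep rather than a general Lebesgue-measure implementation, with helper names and example values of our choosing), not the evaluation code used in the study.

```python
def hv_2d(points, ref=(-0.1, -0.1)):
    """Area dominated by a set of 2-D points (to be maximized) relative to ref."""
    pts = [p for p in points if p[0] > ref[0] and p[1] > ref[1]]
    pts.sort(key=lambda p: p[0], reverse=True)   # sweep from the largest first objective
    area, best_y = 0.0, ref[1]
    for x, y in pts:
        if y > best_y:
            area += (x - ref[0]) * (y - best_y)
            best_y = y
    return area

def a_hv(configs, patterns, ref=(-0.1, -0.1)):
    """A-HV (Equation 7): hypervolume over pattern-transformed satisficing values."""
    transformed = [tuple(p(x) for p, x in zip(patterns, c)) for c in configs]
    return hv_2d(transformed, ref)

def gain(ahv_ps_w, ahv_ps_wo):
    """% gain of PS-w over PS-w/o for one run (Equation 8)."""
    return (ahv_ps_w - ahv_ps_wo) / ahv_ps_wo * 100

def a12(xs, ys):
    """Vargha-Delaney A12: probability that a value drawn from xs beats one from ys."""
    wins = sum(1.0 if x > y else 0.5 if x == y else 0.0 for x in xs for y in ys)
    return wins / (len(xs) * len(ys))

# Example: two satisficing functions (cf. Section 3.3.3) applied to made-up measurements.
p1 = lambda x: 1.0 if x <= 50 else max(0.0, (200 - x) / 150)
p3 = lambda x: 0.0 if x > 60 else (60 - x) / 40
print(a_hv([(30, 40), (45, 55), (120, 35)], (p1, p3)))
```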
## 5\. Results and Findings

In this section, we present the results of our empirical study and address the research questions posed in Section 1.

### 5.1. RQ1: Which is Better under Requirements with Realistic Aspirations?

#### 5.1.1. Method

To answer RQ1, we compare PS-w and PS-w/o across 15 combinations of patterns, three realistic aspiration spaces ($l$, $r$, and $c$), three optimizers, and eight subject systems, leading to $15\times 3\times 3\times 8=1,080$ cases. Since we are interested in a pair-wise comparison of the A-HV under each case, the Wilcoxon test and $\hat{A}_{12}$ are used to verify the statistical significance over 100 runs.

#### 5.1.2. Results

As an overview, Figure 11 shows a summary of the $\hat{A}_{12}$ outcomes across the cases. Clearly, we see that PS-w performs overwhelmingly better than its PS-w/o counterpart. In particular, PS-w wins for 61% (657/1080) of the cases and loses for 16% (175/1080), while there is a 23% (248/1080) tie. In other words, PS-w is better than or similar to PS-w/o for 84% (905/1080) of the cases, in contrast to 39% (423/1080) for PS-w/o. Statistically, PS-w wins 572 cases with $\hat{A}_{12}\geq 0.6$ and $p<0.05$, while there are only 127 significant cases when it loses.

Figure 11. Summary of the wins by PS-w and PS-w/o together with their detailed statistical validation results.

Table 7. Comparing PS-w and PS-w/o under realistic requirements and aspirations over 100 runs. The last column reports the average (Avg) and standard error (SE) of the % gain in A-HV. The columns "PS-w" and "PS-w/o" show the number of cases that the corresponding optimization model wins; 9 (6) means a model wins on 9 cases, within which 6 show statistical significance, i.e., $\hat{A}_{12}\geq 0.6$ (or $\hat{A}_{12}\leq 0.4$) and $p<0.05$ (each combination of requirement patterns has 9 cases in total, as there are 3 aspiration spaces and 3 optimizers).

(a) Trimesh

$\mathbfcal{P}$ | PS-w | PS-w/o | Tie | Avg (SE) of A-HV Gain
---|---|---|---|---
$\{\boldsymbol{p_{0}},\boldsymbol{p_{1}}\}$ | 8 (7) | 1 (1) | 0 | 1.1% (0.3%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{0}}\}$ | 7 (7) | 2 (2) | 0 | -4.6% (1.4%)
$\{\boldsymbol{p_{0}},\boldsymbol{p_{2}}\}$ | 5 (5) | 4 (4) | 0 | 36.4% (10.4%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{0}}\}$ | 7 (7) | 2 (2) | 0 | -3.5% (2.4%)
$\{\boldsymbol{p_{0}},\boldsymbol{p_{3}}\}$ | 6 (5) | 3 (2) | 0 | 29.0% (7.2%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{0}}\}$ | 8 (8) | 1 (1) | 0 | 2.9% (0.6%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{1}}\}$ | 3 (0) | 2 (1) | 4 | -4.6% (1.4%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{2}}\}$ | 1 (0) | 5 (1) | 3 | -7.6% (5.1%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{3}}\}$ | 6 (5) | 3 (3) | 0 | 17.8% (7.0%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{2}}\}$ | 3 (0) | 3 (1) | 3 | -3.5% (5.2%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{1}}\}$ | 2 (0) | 4 (1) | 3 | -6.2% (1.6%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{3}}\}$ | 6 (6) | 3 (3) | 0 | 6.0% (5.2%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{1}}\}$ | 9 (9) | 0 (0) | 0 | 3.1% (1.3%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{3}}\}$ | 6 (6) | 3 (3) | 0 | -1.4% (5.9%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{2}}\}$ | 4 (4) | 5 (3) | 0 | 28.1% (8.2%)

(b) x264

$\mathbfcal{P}$ | PS-w | PS-w/o | Tie | Avg (SE) of A-HV Gain
---|---|---|---|---
$\{\boldsymbol{p_{0}},\boldsymbol{p_{1}}\}$ | 9 (9) | 0 (0) | 0 | 0.2% (0.0%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{0}}\}$ | 6 (6) | 2 (2) | 1 | 0.1% (0.0%)
$\{\boldsymbol{p_{0}},\boldsymbol{p_{2}}\}$ | 8 (6) | 1 (1) | 0 | 4.9% (4.2%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{0}}\}$ | 6 (5) | 3 (3) | 0 | 0.1% (0.0%)
$\{\boldsymbol{p_{0}},\boldsymbol{p_{3}}\}$ | 3 (2) | 6 (4) | 0 | 7.2% (2.1%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{0}}\}$ | 6 (6) | 3 (3) | 0 | 2.1% (0.3%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{1}}\}$ | 1 (0) | 0 (0) | 8 | 0.1% (0.0%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{2}}\}$ | 1 (0) | 0 (0) | 8 | 5.0% (4.5%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{3}}\}$ | 3 (3) | 6 (6) | 0 | 3.6% (1.1%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{2}}\}$ | 1 (0) | 0 (0) | 8 | 5.0% (4.5%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{1}}\}$ | 1 (0) | 0 (0) | 8 | 0.1% (0.0%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{3}}\}$ | 5 (5) | 4 (3) | 0 | 10.0% (2.8%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{1}}\}$ | 9 (9) | 0 (0) | 0 | 2.8% (0.3%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{3}}\}$ | 6 (6) | 3 (3) | 0 | 9.3% (2.8%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{2}}\}$ | 8 (6) | 1 (1) | 0 | 6.3% (3.5%)

(c) Storm/WC

$\mathbfcal{P}$ | PS-w | PS-w/o | Tie | Avg (SE) of A-HV Gain
---|---|---|---|---
$\{\boldsymbol{p_{0}},\boldsymbol{p_{1}}\}$ | 8 (8) | 1 (1) | 0 | 0.1% (0.0%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{0}}\}$ | 8 (8) | 1 (1) | 0 | 0.1% (0.0%)
$\{\boldsymbol{p_{0}},\boldsymbol{p_{2}}\}$ | 8 (8) | 0 (0) | 1 | 82.6% (17.2%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{0}}\}$ | 6 (6) | 3 (3) | 0 | -0.1% (0.0%)
$\{\boldsymbol{p_{0}},\boldsymbol{p_{3}}\}$ | 7 (6) | 2 (1) | 0 | 90.4% (17.4%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{0}}\}$ | 9 (9) | 0 (0) | 0 | 67.4% (13.4%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{1}}\}$ | 1 (1) | 0 (0) | 8 | 0.1% (0.0%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{2}}\}$ | 1 (1) | 0 (0) | 8 | 83.0% (17.4%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{3}}\}$ | 9 (8) | 0 (0) | 0 | 142.1% (18.8%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{2}}\}$ | 1 (1) | 0 (0) | 8 | 83.0% (17.4%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{1}}\}$ | 1 (1) | 0 (0) | 8 | 0.1% (0.0%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{3}}\}$ | 7 (7) | 2 (2) | 0 | 91.5% (17.6%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{1}}\}$ | 9 (9) | 0 (0) | 0 | 67.9% (13.4%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{3}}\}$ | 6 (6) | 3 (3) | 0 | 91.5% (17.6%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{2}}\}$ | 8 (8) | 1 (0) | 0 | 142.9% (19.6%)

(d) Storm/RS

$\mathbfcal{P}$ | PS-w | PS-w/o | Tie | Avg (SE) of A-HV Gain
---|---|---|---|---
$\{\boldsymbol{p_{0}},\boldsymbol{p_{1}}\}$ | 8 (8) | 1 (1) | 0 | 0.1% (0.0%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{0}}\}$ | 6 (5) | 3 (2) | 0 | -0.1% (0.0%)
$\{\boldsymbol{p_{0}},\boldsymbol{p_{2}}\}$ | 8 (7) | 1 (1) | 0 | 1.0% (2.0%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{0}}\}$ | 8 (7) | 1 (0) | 0 | 114.8% (19.9%)
$\{\boldsymbol{p_{0}},\boldsymbol{p_{3}}\}$ | 5 (5) | 4 (4) | 0 | 1.5% (1.9%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{0}}\}$ | 9 (9) | 0 (0) | 0 | 145.6% (18.5%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{1}}\}$ | 2 (1) | 0 (0) | 7 | 0.1% (0.0%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{2}}\}$ | 2 (1) | 0 (0) | 7 | 117.0% (20.2%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{3}}\}$ | 8 (8) | 1 (1) | 0 | 95.5% (14.3%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{2}}\}$ | 2 (1) | 0 (0) | 7 | 1.0% (2.0%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{1}}\}$ | 2 (1) | 0 (0) | 7 | 116.0% (20.1%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{3}}\}$ | 5 (4) | 4 (3) | 0 | 1.5% (1.9%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{1}}\}$ | 9 (9) | 0 (0) | 0 | 151.4% (19.4%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{3}}\}$ | 8 (7) | 1 (0) | 0 | 112.5% (19.3%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{2}}\}$ | 8 (7) | 1 (1) | 0 | 139.5% (19.2%)

(e) Keras/Adiac

$\mathbfcal{P}$ | PS-w | PS-w/o | Tie | Avg (SE) of A-HV Gain
---|---|---|---|---
$\{\boldsymbol{p_{0}},\boldsymbol{p_{1}}\}$ | 6 (5) | 3 (2) | 0 | -0.1% (0.0%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{0}}\}$ | 9 (9) | 0 (0) | 0 | 0.1% (0.0%)
$\{\boldsymbol{p_{0}},\boldsymbol{p_{2}}\}$ | 7 (7) | 2 (1) | 0 | 5.0% (4.5%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{0}}\}$ | 9 (9) | 0 (0) | 0 | 0.1% (0.0%)
$\{\boldsymbol{p_{0}},\boldsymbol{p_{3}}\}$ | 6 (4) | 3 (2) | 0 | 8.3% (4.5%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{0}}\}$ | 2 (1) | 7 (6) | 0 | -1.2% (0.6%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{1}}\}$ | 1 (0) | 0 (0) | 8 | 0.1% (0.0%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{2}}\}$ | 1 (0) | 0 (0) | 8 | 5.0% (4.5%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{3}}\}$ | 6 (5) | 2 (1) | 1 | 10.6% (5.0%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{2}}\}$ | 1 (0) | 0 (0) | 8 | 5.0% (4.5%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{1}}\}$ | 1 (0) | 0 (0) | 8 | 0.1% (0.0%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{3}}\}$ | 9 (9) | 0 (0) | 0 | 8.4% (4.5%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{1}}\}$ | 6 (4) | 3 (2) | 0 | -1.4% (0.6%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{3}}\}$ | 9 (9) | 0 (0) | 0 | 9.6% (4.5%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{2}}\}$ | 6 (5) | 3 (1) | 0 | 5.5% (5.0%)

(f) Keras/DSR

$\mathbfcal{P}$ | PS-w | PS-w/o | Tie | Avg (SE) of A-HV Gain
---|---|---|---|---
$\{\boldsymbol{p_{0}},\boldsymbol{p_{1}}\}$ | 6 (3) | 3 (2) | 0 | 0.2% (0.1%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{0}}\}$ | 8 (8) | 1 (0) | 0 | 0.1% (0.0%)
$\{\boldsymbol{p_{0}},\boldsymbol{p_{2}}\}$ | 6 (5) | 2 (1) | 1 | 1.1% (1.9%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{0}}\}$ | 8 (7) | 1 (0) | 0 | 9.1% (6.0%)
$\{\boldsymbol{p_{0}},\boldsymbol{p_{3}}\}$ | 6 (4) | 3 (2) | 0 | 1.3% (1.9%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{0}}\}$ | 6 (5) | 3 (1) | 0 | 22.6% (5.3%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{1}}\}$ | 2 (1) | 0 (0) | 7 | 0.1% (0.0%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{2}}\}$ | 2 (1) | 0 (0) | 7 | 18.1% (7.3%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{3}}\}$ | 8 (7) | 1 (1) | 0 | 23.9% (3.6%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{2}}\}$ | 2 (1) | 0 (0) | 7 | 1.0% (2.0%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{1}}\}$ | 2 (1) | 0 (0) | 7 | 9.1% (6.0%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{3}}\}$ | 9 (8) | 0 (0) | 0 | 1.5% (2.0%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{1}}\}$ | 8 (8) | 1 (1) | 0 | 49.1% (6.4%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{3}}\}$ | 8 (8) | 1 (1) | 0 | 22.6% (7.3%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{2}}\}$ | 9 (8) | 0 (0) | 0 | 44.6% (6.9%)

(g) Keras/SA

$\mathbfcal{P}$ | PS-w | PS-w/o | Tie | Avg (SE) of A-HV Gain
---|---|---|---|---
$\{\boldsymbol{p_{0}},\boldsymbol{p_{1}}\}$ | 5 (5) | 3 (2) | 1 | 0.1% (0.0%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{0}}\}$ | 9 (9) | 0 (0) | 0 | 0.1% (0.0%)
$\{\boldsymbol{p_{0}},\boldsymbol{p_{2}}\}$ | 4 (3) | 5 (4) | 0 | 0.1% (0.0%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{0}}\}$ | 9 (9) | 0 (0) | 0 | 0.1% (0.0%)
$\{\boldsymbol{p_{0}},\boldsymbol{p_{3}}\}$ | 6 (5) | 3 (3) | 0 | 0.5% (0.1%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{0}}\}$ | 6 (3) | 3 (0) | 0 | -0.5% (0.3%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{1}}\}$ | 0 (0) | 0 (0) | 9 | 0.0% (0.0%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{2}}\}$ | 0 (0) | 0 (0) | 9 | 0.0% (0.0%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{3}}\}$ | 6 (6) | 2 (1) | 1 | 0.4% (0.1%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{2}}\}$ | 0 (0) | 0 (0) | 9 | 0.0% (0.0%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{1}}\}$ | 0 (0) | 0 (0) | 9 | 0.0% (0.0%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{3}}\}$ | 9 (9) | 0 (0) | 0 | 0.7% (0.1%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{1}}\}$ | 4 (3) | 3 (1) | 2 | -0.3% (0.3%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{3}}\}$ | 9 (9) | 0 (0) | 0 | 0.7% (0.1%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{2}}\}$ | 4 (3) | 5 (4) | 0 | -0.2% (0.3%)

(h) XGBoost

$\mathbfcal{P}$ | PS-w | PS-w/o | Tie | Avg (SE) of A-HV Gain
---|---|---|---|---
$\{\boldsymbol{p_{0}},\boldsymbol{p_{1}}\}$ | 8 (5) | 0 (0) | 1 | 0.1% (0.0%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{0}}\}$ | 9 (9) | 0 (0) | 0 | 0.1% (0.0%)
$\{\boldsymbol{p_{0}},\boldsymbol{p_{2}}\}$ | 8 (3) | 0 (0) | 1 | 0.1% (0.0%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{0}}\}$ | 9 (9) | 0 (0) | 0 | 0.1% (0.0%)
$\{\boldsymbol{p_{0}},\boldsymbol{p_{3}}\}$ | 4 (4) | 5 (5) | 0 | -0.1% (0.0%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{0}}\}$ | 6 (5) | 3 (1) | 0 | 1.7% (0.2%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{1}}\}$ | 0 (0) | 0 (0) | 9 | 0.0% (0.0%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{2}}\}$ | 0 (0) | 0 (0) | 9 | 0.0% (0.0%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{3}}\}$ | 5 (3) | 4 (4) | 0 | 0.3% (0.1%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{2}}\}$ | 0 (0) | 0 (0) | 9 | 0.0% (0.0%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{1}}\}$ | 0 (0) | 0 (0) | 9 | 0.0% (0.0%)
$\{\boldsymbol{p_{1}},\boldsymbol{p_{3}}\}$ | 9 (9) | 0 (0) | 0 | 0.1% (0.0%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{1}}\}$ | 9 (5) | 0 (0) | 0 | 1.9% (0.2%)
$\{\boldsymbol{p_{2}},\boldsymbol{p_{3}}\}$ | 9 (9) | 0 (0) | 0 | 0.1% (0.0%)
$\{\boldsymbol{p_{3}},\boldsymbol{p_{2}}\}$ | 8 (4) | 0 (0) | 1 | 1.9% (0.2%)

To provide a more comprehensive view of the different systems and requirement scenarios, in Table 7 we see that PS-w performs considerably better in general, as it achieves reasonably good positive gains for the majority of the cases (up to a 145% improvement in A-HV on average), with generally more statistically significant wins. It is worth noting that we observed particularly high gains for PS-w under Storm (Table 7c and Table 7d). This is attributed to the highly diverse performance between configurations for that system, as has been reported in prior work (Chen and Li, 2021; Jamshidi and Casale, 2016; Nair et al., 2020). It is exciting to see that the superiority of PS-w is consistent across the given requirement patterns, a clear sign confirming that the requirements can offer important guidance to steer the tuning. However, as expected, when the scenario involves only $\boldsymbol{p_{1}}$ or $\boldsymbol{p_{0}}$, PS-w and PS-w/o perform mostly identically (or very similarly). As for the very few cases where PS-w is inferior to PS-w/o, the results can be caused by occasionally encountered local optima, which we will discuss in greater detail in what follows. Therefore, we say:

RQ1: Given realistic aspiration spaces, PS-w is 84% of the time similar to or better than PS-w/o, with considerable improvements, suggesting that the requirements and aspirations are beneficial for guiding the tuning in such a situation. Yet, the benefits can vary depending on the particular combination of patterns, i.e., they tend to be blurred when only $\boldsymbol{p_{1}}$ and/or $\boldsymbol{p_{0}}$ is given.

(a) x264, PS-w wins (b) x264, PS-w/o wins
Figure 12. Example runs of the final configuration sets (with NSGA-II) under realistic aspiration spaces indicated by the shaded areas.

#### 5.1.3. Discussion
To understand what causes the results under realistic aspiration spaces, in Figure 12 we show a common example from x264, where all PSNR values better than its aspiration are equally preferred and no worse results are acceptable ($\boldsymbol{p_{2}}$), while the energy usage is desired to be as low as possible, even if its aspiration has already been exceeded ($\boldsymbol{p_{3}}$). Figure 12a is a case where PS-w is superior: we see that the aspirations drive the tuning to focus more on local regions within the objective space, hence the points of PS-w are much less spread than those of PS-w/o (as seen in the original space). Such a "focused pressure" is mostly sufficient to help find regions more preferred by the scenario under a fixed tuning budget; hence, PS-w has a better A-HV than PS-w/o (larger volume, as seen in the A-HV area). However, PS-w is not always beneficial. As reported by Chen and Li (Chen and Li, 2021), Nair et al. (Nair et al., 2020), and others (Jamshidi and Casale, 2016; Ha and Zhang, 2019), configurable software systems are known to exhibit a high degree of sparsity, i.e., close configurations can have radically different performance; thus, only a small number of them may achieve a certain performance range, causing rather sparse objective points (e.g., Figure 9). For example, switching the wait_strategy in Storm can have a dramatic impact on performance, despite being merely a single change to one option. This is because the wait_strategy conserves CPU usage depending on whether the wait is a fixed interval or is progressively determined based on the length of the queue at runtime; therefore, it has a large impact on latency and throughput. However, in the tuning, it is represented as a single configuration option with a value chosen from $\{0,1,2,3\}$, where each value represents a distinct wait strategy. The presence of high sparsity exacerbates the problem of local optima traps, i.e., undesired regions that are difficult for an optimizer to escape from. Occasionally, searching focally under high sparsity does cause PS-w to overemphasize less desired local optima, which harms the results. This is why there are some cases where PS-w shows no advantage, as illustrated in Figure 12b, where the points of PS-w are too densely populated compared with those of PS-w/o (as seen in the original space), causing the volume covered by PS-w to be smaller than that of PS-w/o and hence a smaller A-HV (as seen in the A-HV area). It is interesting to observe that under certain combinations of patterns, i.e., with $\boldsymbol{p_{0}}$ and/or $\boldsymbol{p_{1}}$ only, both optimization models perform similarly. This makes sense, as in those cases the requirements create a discriminative power between configurations similar to that of PS-w/o (which is essentially guided by $\{\boldsymbol{p_{0}},\boldsymbol{p_{0}}\}$), generating configurations that are equally preferred under the given needs.

### 5.2. RQ2: How do Different Aspirations Influence the Comparisons?

#### 5.2.1. Method

To understand RQ2, we follow the procedure used for RQ1, but with a particular focus on the results with respect to the three aspiration spaces used (i.e., $l$, $c$, and $r$).

Figure 13. Sensitivity of the % gain of PS-w over PS-w/o to different positions of the realistic aspiration space. Each point is the average and standard error over all combinations of patterns and optimizers.
$l$, $c$, and $r$ denote the left-shifted, centered, and right-shifted positions in the performance landscape, respectively.

#### 5.2.2. Results

Figure 13 plots the sensitivity of the A-HV to the different aspiration spaces. While the overall conclusion is consistent with that for RQ1 over different patterns and systems, we see that there is often a strong bias in the gains towards a certain position of the aspiration space. For example, on x264 and Keras/Adiac, the improvement of PS-w is particularly high for an aspiration space located at the centered area of the objective space. In contrast, the gain is particularly high for the left-shifted aspiration space under Storm/RS and the centered space for Storm/WC, which is possible depending on the landscape of a system (as we will discuss next). Indeed, some aspiration spaces can easily cause PS-w to be trapped at local optima, blurring its improvements over PS-w/o. For example, on Storm/WC with the right-shifted aspiration space, this effect is largely detrimental and hence severely influences the benefits of PS-w. In summary, we have:

RQ2: The improvement of PS-w over PS-w/o is often largely biased towards certain positions of the aspiration space in the performance landscape, e.g., centered or left-shifted. Yet, PS-w still performs more advantageously in general.

Figure 14. A projected landscape of the performance objective Latency with respect to the configuration options Splitters and Counters for Storm/WC. $c$ and $r$ denote the centered and right-shifted aspiration spaces, respectively. Note that the aspiration spaces are bounded because the throughput objective is also considered; it is, however, not shown here for simpler exposition.

#### 5.2.3. Discussion

As discussed for RQ1, the main reason that PS-w can perform better than PS-w/o is the "focused search pressure". However, this may not always be helpful if the tuning encounters complex local optima that are difficult for the optimizer to escape from. The high sensitivity of the gains to the position of the aspiration space suggests that the local optima can be distributed unevenly across the landscape. If the aspiration space covers many local optima regions, then the gains of PS-w would certainly be marginal. For example, in Figure 14, the aspiration space $c$ (which covers the requirements for latency and throughput) is clearly bounded on regions of the landscape with a much smoother surface for the latency. However, for $r$, the region becomes highly rugged and steep, involving some very difficult local optima. Unfortunately, we did not see consistent patterns of such sensitivity across the configurable software systems, which makes sense, as the performance landscapes of those systems can be very different too.

Figure 15. Summary of the wins by PS-w and PS-w/o together with their detailed statistical validation results.

### 5.3. RQ3: What if the Aspirations are Unrealistic?

Table 8. Comparing PS-w and PS-w/o under unrealistic requirements and aspirations over 100 runs. Formats are the same as Table 7.
(a) Trimesh

| Pattern | PS-w | PS-w/o | Tie | Avg (SE) of A-HV Gain |
|---|---|---|---|---|
| $\{p_1,p_1\}$ | 2 (2) | 1 (1) | 0 | -1.1% (0.7%) |
| $\{p_2,p_2\}$ | 0 (0) | 3 (3) | 0 | -1.3% (22.8%) |
| $\{p_3,p_3\}$ | 0 (0) | 3 (2) | 0 | -30.7% (5.3%) |
| $\{p_1,p_2\}$ | 1 (1) | 2 (1) | 0 | -18.1% (8.0%) |
| $\{p_2,p_1\}$ | 1 (1) | 2 (2) | 0 | 15.7% (17.2%) |
| $\{p_1,p_3\}$ | 1 (1) | 2 (1) | 0 | -24.6% (4.5%) |
| $\{p_3,p_1\}$ | 2 (2) | 1 (1) | 0 | 0.1% (0.1%) |
| $\{p_2,p_3\}$ | 0 (0) | 3 (2) | 0 | 12.3% (26.1%) |
| $\{p_3,p_2\}$ | 0 (0) | 3 (2) | 0 | -36.0% (5.5%) |

(b) x264

| Pattern | PS-w | PS-w/o | Tie | Avg (SE) of A-HV Gain |
|---|---|---|---|---|
| $\{p_1,p_1\}$ | 1 (1) | 2 (2) | 0 | -3.8% (0.4%) |
| $\{p_2,p_2\}$ | 0 (0) | 0 (0) | 3 | 0.0% (0.0%) |
| $\{p_3,p_3\}$ | 0 (0) | 0 (0) | 3 | 0.0% (0.0%) |
| $\{p_1,p_2\}$ | 1 (1) | 2 (2) | 0 | -2.7% (0.3%) |
| $\{p_2,p_1\}$ | 1 (1) | 2 (2) | 0 | -0.8% (0.1%) |
| $\{p_1,p_3\}$ | 1 (1) | 2 (2) | 0 | -2.7% (0.3%) |
| $\{p_3,p_1\}$ | 1 (1) | 2 (2) | 0 | -0.7% (0.1%) |
| $\{p_2,p_3\}$ | 0 (0) | 0 (0) | 3 | 0.0% (0.0%) |
| $\{p_3,p_2\}$ | 0 (0) | 0 (0) | 3 | 0.0% (0.0%) |

(c) Storm/WC

| Pattern | PS-w | PS-w/o | Tie | Avg (SE) of A-HV Gain |
|---|---|---|---|---|
| $\{p_1,p_1\}$ | 2 (1) | 1 (1) | 0 | -0.0% (0.0%) |
| $\{p_2,p_2\}$ | 2 (2) | 1 (1) | 0 | 70.0% (29.8%) |
| $\{p_3,p_3\}$ | 1 (1) | 2 (1) | 0 | -95.5% (224.5%) |
| $\{p_1,p_2\}$ | 2 (0) | 1 (1) | 0 | 53.0% (30.4%) |
| $\{p_2,p_1\}$ | 1 (1) | 2 (2) | 0 | -90.4% (218.8%) |
| $\{p_1,p_3\}$ | 2 (2) | 1 (1) | 0 | 119.1% (37.0%) |
| $\{p_3,p_1\}$ | 1 (1) | 2 (2) | 0 | -52.7% (136.9%) |
| $\{p_2,p_3\}$ | 2 (1) | 1 (1) | 0 | 74.2% (30.0%) |
| $\{p_3,p_2\}$ | 2 (0) | 1 (1) | 0 | 21.2% (21.7%) |

(d) Storm/RS

| Pattern | PS-w | PS-w/o | Tie | Avg (SE) of A-HV Gain |
|---|---|---|---|---|
| $\{p_1,p_1\}$ | 2 (2) | 1 (1) | 0 | 0.0% (0.0%) |
| $\{p_2,p_2\}$ | 1 (1) | 2 (2) | 0 | -243.4% (541.0%) |
| $\{p_3,p_3\}$ | 1 (1) | 2 (2) | 0 | -147.7% (336.0%) |
| $\{p_1,p_2\}$ | 1 (1) | 2 (2) | 0 | 6.2% (23.3%) |
| $\{p_2,p_1\}$ | 2 (2) | 1 (1) | 0 | 195.3% (44.2%) |
| $\{p_1,p_3\}$ | 1 (1) | 2 (2) | 0 | -10.3% (19.1%) |
| $\{p_3,p_1\}$ | 2 (1) | 1 (1) | 0 | 81.9% (28.0%) |
| $\{p_2,p_3\}$ | 1 (1) | 2 (2) | 0 | -214.2% (478.1%) |
| $\{p_3,p_2\}$ | 1 (1) | 2 (2) | 0 | -139.8% (320.2%) |

(e) Keras/Adiac

| Pattern | PS-w | PS-w/o | Tie | Avg (SE) of A-HV Gain |
|---|---|---|---|---|
| $\{p_1,p_1\}$ | 1 (1) | 2 (2) | 0 | -0.0% (0.0%) |
| $\{p_2,p_2\}$ | 0 (0) | 3 (3) | 0 | -56.6% (5.0%) |
| $\{p_3,p_3\}$ | 0 (0) | 3 (3) | 0 | -42.4% (5.1%) |
| $\{p_1,p_2\}$ | 1 (1) | 2 (2) | 0 | 1.4% (20.4%) |
| $\{p_2,p_1\}$ | 2 (2) | 1 (1) | 0 | 55.8% (28.3%) |
| $\{p_1,p_3\}$ | 0 (0) | 3 (2) | 0 | -9.7% (10.8%) |
| $\{p_3,p_1\}$ | 1 (0) | 2 (2) | 0 | -26.4% (4.6%) |
| $\{p_2,p_3\}$ | 0 (0) | 3 (3) | 0 | -45.4% (5.1%) |
| $\{p_3,p_2\}$ | 0 (0) | 3 (3) | 0 | -57.9% (4.8%) |

(f) Keras/DSR

| Pattern | PS-w | PS-w/o | Tie | Avg (SE) of A-HV Gain |
|---|---|---|---|---|
| $\{p_1,p_1\}$ | 2 (1) | 1 (1) | 0 | 0.1% (0.1%) |
| $\{p_2,p_2\}$ | 1 (1) | 2 (2) | 0 | -47.3% (122.7%) |
| $\{p_3,p_3\}$ | 1 (1) | 2 (2) | 0 | -33.6% (89.6%) |
| $\{p_1,p_2\}$ | 0 (0) | 3 (3) | 0 | -1.7% (26.9%) |
| $\{p_2,p_1\}$ | 2 (1) | 1 (0) | 0 | 53.3% (24.2%) |
| $\{p_1,p_3\}$ | 1 (1) | 2 (2) | 0 | -2.4% (18.5%) |
| $\{p_3,p_1\}$ | 2 (1) | 1 (0) | 0 | 69.8% (25.8%) |
| $\{p_2,p_3\}$ | 2 (1) | 1 (1) | 0 | 5.9% (11.0%) |
| $\{p_3,p_2\}$ | 1 (1) | 2 (2) | 0 | -33.3% (93.2%) |

(g) Keras/SA

| Pattern | PS-w | PS-w/o | Tie | Avg (SE) of A-HV Gain |
|---|---|---|---|---|
| $\{p_1,p_1\}$ | 3 (2) | 0 (0) | 0 | -1.0% (0.0%) |
| $\{p_2,p_2\}$ | 0 (0) | 3 (3) | 0 | -64.6% (4.6%) |
| $\{p_3,p_3\}$ | 0 (0) | 3 (3) | 0 | -39.1% (4.7%) |
| $\{p_1,p_2\}$ | 0 (0) | 3 (3) | 0 | 6.7% (17.0%) |
| $\{p_2,p_1\}$ | 0 (0) | 3 (3) | 0 | -46.2% (5.2%) |
| $\{p_1,p_3\}$ | 0 (0) | 3 (3) | 0 | 4.9% (15.3%) |
| $\{p_3,p_1\}$ | 1 (0) | 2 (2) | 0 | -26.6% (4.3%) |
| $\{p_2,p_3\}$ | 0 (0) | 3 (3) | 0 | -67.7% (4.3%) |
| $\{p_3,p_2\}$ | 0 (0) | 3 (3) | 0 | -38.3% (5.1%) |

(h) XGBoost

| Pattern | PS-w | PS-w/o | Tie | Avg (SE) of A-HV Gain |
|---|---|---|---|---|
| $\{p_1,p_1\}$ | 2 (1) | 1 (1) | 0 | -1.0% (0.0%) |
| $\{p_2,p_2\}$ | 0 (0) | 3 (3) | 0 | -58.3% (4.4%) |
| $\{p_3,p_3\}$ | 0 (0) | 3 (3) | 0 | -68.4% (3.8%) |
| $\{p_1,p_2\}$ | 0 (0) | 3 (3) | 0 | -51.0% (5.1%) |
| $\{p_2,p_1\}$ | 1 (0) | 2 (1) | 0 | 106.3% (36.4%) |
| $\{p_1,p_3\}$ | 0 (0) | 3 (3) | 0 | -60.3% (3.9%) |
| $\{p_3,p_1\}$ | 2 (0) | 1 (1) | 0 | -0.0% (0.0%) |
| $\{p_2,p_3\}$ | 0 (0) | 3 (3) | 0 | -63.2% (3.7%) |
| $\{p_3,p_2\}$ | 0 (0) | 3 (3) | 0 | -64.8% (4.7%) |

#### 5.3.1. Method

To investigate RQ3, we omit the scenarios with $\boldsymbol{p_{0}}$ as they cannot create an unrealistic aspiration space. This leaves us with nine combinations of patterns, which, together with three Pareto optimizers and eight subjects, provide $9\times 3\times 8=216$ cases. All other settings are identical to those for RQ1.

#### 5.3.2. Results

As summarized in Figure 15, PS-w/o is generally better across all the cases: it wins on 64% (139/216) while losing on 30% (65/216), with a 6% (12/216) tie. This means that PS-w/o is better or similar in 70% (151/216) of the cases, against 36% (77/216) for PS-w. Among these, PS-w/o wins 129 cases with $\hat{A}_{12}\leq 0.4$ and $p<0.05$, compared with 41 such cases when it loses. Similar results can be confirmed in Table 8 when inspecting specific system and requirement scenarios. Although there is a limited number of cases where PS-w is still advantageous, it is more common for it to show no improvement at all or even to cause fairly negative gains, down to an average of $-243$%. It also has overall far fewer statistically significant wins across the cases. In particular, we found that under $\{\boldsymbol{p_{1}},\boldsymbol{p_{1}}\}$ on all systems, the two optimization models perform similarly, but PS-w tends to obtain more wins. This is because such a combination pattern is the only case where the unrealism of the aspirations does not lead to too many incomparable configurations.
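To illustrate how each of the 216 cases above can be labelled, the sketch below pairs the Wilcoxon signed-rank test with the Vargha–Delaney $\hat{A}_{12}$ effect size over the repeated runs, as referenced in the text. This is only a minimal sketch, not the authors' analysis script: the A-HV values are placeholders, the helper names are ours, and the tie criterion (effect size between 0.4 and 0.6, or a non-significant $p$-value) is our assumption of a common convention.

```python
# A minimal sketch (not the authors' exact script) of labelling one tuning case
# as a win/loss/tie from repeated runs, using the Wilcoxon signed-rank test and
# the Vargha-Delaney A12 effect size mentioned in the text.
from scipy.stats import wilcoxon

def a12(xs, ys):
    """Vargha-Delaney A12: probability that a value from xs exceeds one from ys."""
    greater = sum(1 for x in xs for y in ys if x > y)
    equal = sum(1 for x in xs for y in ys if x == y)
    return (greater + 0.5 * equal) / (len(xs) * len(ys))

def label_case(ahv_w, ahv_wo, alpha=0.05):
    """ahv_w / ahv_wo: A-HV of the final sets from PS-w and PS-w/o over repeated runs."""
    if ahv_w == ahv_wo:                  # identical samples: wilcoxon() is undefined
        return "tie"
    _, p = wilcoxon(ahv_w, ahv_wo)
    effect = a12(ahv_w, ahv_wo)          # >0.5 favours PS-w, <0.5 favours PS-w/o
    if p >= alpha or 0.4 < effect < 0.6: # assumed tie criterion
        return "tie"
    return "PS-w wins" if effect > 0.6 else "PS-w/o wins"

# Placeholder A-HV values for illustration only (the study uses 100 runs per case).
ps_w  = [0.62, 0.60, 0.65, 0.61, 0.63, 0.64]
ps_wo = [0.55, 0.58, 0.52, 0.57, 0.54, 0.50]
print(label_case(ps_w, ps_wo))           # PS-w wins
```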
Overall, we conclude that:

RQ3: When the aspiration space is unrealistic, PS-w/o is safer, as it is similar to or reasonably better than PS-w 70% of the time, meaning that the requirements and aspirations are more harmful than helpful for guiding the tuning in this case. The only exception applies to $\{\boldsymbol{p_{1}},\boldsymbol{p_{1}}\}$.

#### 5.3.3. Discussion

Given an unrealistic aspiration space, the most common cases are similar to the Storm/RS example in Figure 16, where PS-w is commonly inferior to PS-w/o when the diversity tends to be high (Figure 16a), but sometimes superior to PS-w/o under limited diversity (Figure 16b). This is because, in most of the cases, after being transformed using the requirements with unrealistic aspirations, PS-w tends to find too many incomparable configurations from the beginning (in the cases other than $\{\boldsymbol{p_{1}},\boldsymbol{p_{1}}\}$, most configurations are fully unsatisfied on at least one performance objective), implying that the guidance provided by an unrealistic aspiration space is dramatically weakened. Such incomparability, although it may prompt slightly better diversity for escaping from local regions, can often severely harm the tendency towards more preferred configurations that reach/exceed the aspirations, leading to worse A-HV (the smaller volume) than PS-w/o in Figure 16a. This is because no selection pressure (i.e., discriminative power) can be generated in such a case. It is also the reason why PS-w is not deteriorated by the unrealistic aspirations under $\{\boldsymbol{p_{1}},\boldsymbol{p_{1}}\}$, which still ensures that the configurations are comparable. Yet sometimes (Figure 16b), such high incomparability does help PS-w to find a good configuration by chance (e.g., one better than the latency aspiration), which is more desirable than those of PS-w/o, leading to better A-HV (the larger volume). Hence PS-w remains better in certain cases, even though the tuning can easily become trapped at that configuration due to the high sparsity.

(a) Storm/RS, PS-w/o wins. (b) Storm/RS, PS-w wins.

Figure 16. Example runs of the final configuration sets (with NSGA-II) under an unrealistic aspiration space, indicated by the shaded areas.

### 5.4. RQ4: Does the Given Tuning Resource Matter?

#### 5.4.1. Method

To understand the resource efficiency of both optimization models in RQ4, for each system, we use the following procedure:

1. Plot the trajectories of A-HV against the number of measurements for both PS-w and PS-w/o, where each point is the average over all requirement patterns, aspiration spaces, and optimizers.
2. Identify a baseline, $b$, taken as the smallest number of measurements that the baseline model consumes to achieve its best A-HV (say $T$).
3. For the other model, find the smallest number of measurements, denoted as $m$, at which the average A-HV is equivalent to or better than $T$.
4. Calculate the speedup over the baseline model, i.e., $s=\frac{b}{m}$, according to the metric used by Gao et al. (Gao et al., 2021).

Since we found that the generally better optimization model differs depending on the realism of the aspiration space, we use PS-w/o and PS-w as the baseline for the realistic and unrealistic aspiration situations, respectively.
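The speedup computation above can be made concrete with a short sketch. The trajectories below are placeholders, and the function name is ours; the logic follows the four steps just listed.

```python
# A minimal sketch (assumed helper name, placeholder data) of the speedup metric
# s = b / m described above: b is the budget at which the baseline reaches its best
# average A-HV T, and m is the budget at which the other model first matches T.

def speedup(baseline_traj, other_traj):
    """Each trajectory is a list of (num_measurements, avg_ahv) pairs,
    sorted by increasing number of measurements."""
    T = max(ahv for _, ahv in baseline_traj)            # best A-HV of the baseline
    b = min(n for n, ahv in baseline_traj if ahv >= T)  # smallest budget reaching T
    m = min((n for n, ahv in other_traj if ahv >= T), default=None)
    return None if m is None else b / m                 # None: other model never reaches T

# Placeholder trajectories (budget, average A-HV) for illustration only.
ps_wo = [(100, 0.40), (200, 0.55), (300, 0.58), (400, 0.58)]
ps_w  = [(100, 0.45), (200, 0.59), (300, 0.61), (400, 0.62)]
print(speedup(baseline_traj=ps_wo, other_traj=ps_w))    # 300 / 200 = 1.5x
```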
Figure 17. Speedup of PS-w over PS-w/o under realistic aspirations (each point is the average and standard error over all combinations of patterns, aspiration spaces, optimizers, and their runs).

Figure 18. Speedup of PS-w/o over PS-w under unrealistic aspirations (each point is the average and standard error over all combinations of patterns, optimizers, and their runs).

#### 5.4.2. Results

From the results under realistic aspirations shown in Figure 17, we see that PS-w outperforms PS-w/o throughout the trajectories over the different configurable systems, which further strengthens our findings for RQ1. The improvement in resource efficiency is remarkable: there is a speedup between $1.05\times$ and $10\times$. In contrast, when the given aspirations are unrealistic (Figure 18), PS-w/o is much more resource-efficient, as it enables a speedup from $1.18\times$ to $10\times$. This again complies with the findings for RQ3. However, under unrealistic aspirations, the advantages of PS-w/o may not be obvious at the early stage of the tuning; on some systems (e.g., Figure 18c and Figure 18f), it is even inferior to PS-w until around 250 configurations have been measured. In summary, we found that:

RQ4: Under realistic aspirations, PS-w often obtains consistently better A-HV than PS-w/o throughout the trajectory, with a speedup of up to $10\times$. When the aspirations are unrealistic, in contrast, the two optimization models are competitive in the early stage of tuning, but PS-w/o soon leads to better results with a considerably high speedup.

#### 5.4.3. Discussion

Under realistic aspirations, the reasons that PS-w has better A-HV throughout the trajectory, with a remarkably high speedup, are twofold. Firstly, as already discussed for RQ1, the guidance provided by the requirements and aspirations is often helpful in making the tuning more focus-driven, hence better utilizing the resources to explore the more promising area. Secondly, PS-w/o wastes valuable tuning budget exploring configurations that it favors but that would never be preferred under the given requirements, since it is naturally interested in the whole Pareto front. Therefore, the above difference makes PS-w a particularly attractive model for some systems, such as Storm, where the performance of diverse configurations can be radically different.

The situation is completely different when the given aspirations are unrealistic, mainly due to the high incomparability in PS-w mentioned for RQ3 — many of the configurations are incomparable when transformed using the requirements with unrealistic aspirations. It has been shown that this situation can cause severe issues for any Pareto optimizer (Li et al., 2014a), as the resources would be spent mainly on exploration. However, such incomparability can occasionally be helpful for exploring some preferred configurations by chance, especially at the early stage of the tuning, where PS-w/o has not yet explored enough space to pursue the Pareto front. As such, we see that at the beginning PS-w performs similarly to PS-w/o and, sometimes, even better.

## 6\. Lessons Learned

In this section, we discuss how our findings can be useful to practitioners in the field, in light of the lessons learned and the future opportunities discovered.

Lesson 1: The choice of whether to exploit aspirations for guiding the tuning depends primarily on their realism.
It is interesting that we cannot arbitrarily choose between PS-w and PS-w/o for software configuration tuning with two performance objectives, as opposed to what has been overwhelmingly assumed in existing work. Instead, from RQ1, RQ3, and RQ4, we discovered that the realism of the given aspirations is crucial to the choice: PS-w is more beneficial under realistic aspirations, while PS-w/o is safer when the aspirations are unrealistic (given that the tuning budget is also sufficient). This raises the importance of understanding whether the given requirements and aspirations are realistic, or what assumptions they make, prior to choosing the right optimization model for tuning software configurations with two performance objectives.

Lesson 2: The combination of patterns rarely changes the decision on whether to incorporate aspirations into the tuning, but it can influence the benefit/detriment of aspiration-guided tuning.

Although from RQ1 we noticed that the benefits of PS-w are blurred when the given combination of patterns contains only $\boldsymbol{p_{1}}$ and/or $\boldsymbol{p_{0}}$, this does not change the decision, as PS-w still outperforms its PS-w/o counterpart. The only definitive case is when the aspirations are unrealistic, where PS-w should be chosen only under the pattern $\{\boldsymbol{p_{1}},\boldsymbol{p_{1}}\}$. Therefore, we envisage that the sensitivity of the choice between PS-w and PS-w/o to the given patterns is marginal, and we have discovered other, more important factors. However, we do see that the extent of improvement/degradation from PS-w can be sensitive to the given patterns.

Lesson 3: The position of a realistic aspiration space in the objective space can largely affect the benefits brought by considering aspirations within the tuning, but it is less likely to influence the choice.

An unexpected discovery from RQ2 is that, when given realistic aspirations, the position of the aspiration space can largely influence the benefits of PS-w. While this is unlikely to affect the choice between PS-w and PS-w/o, it does raise the need to systematically analyze the correlation between the aspiration space and the configuration landscape of the system, particularly the likelihood of covering some difficult local optima and its implications.

Lesson 4: The given tuning budget has a marginal impact on the choice when the aspirations are realistic. However, it can be an important factor to consider under unrealistic aspirations.

According to RQ4, we have also revealed that, given realistic aspirations, the choice between PS-w and PS-w/o is only marginally sensitive to the tuning budget, but it can be influenced by the budget when the aspirations are unrealistic. This adds an extra layer of consideration for unrealistic aspirations. In this case, what we observed, in general, is that for a small tuning budget the benefit of PS-w/o is much less justified, hence using either of the two optimization models may not lead to significantly different results. However, given a sufficient budget, PS-w/o is likely to dominate its PS-w counterpart. Unfortunately, with the current evidence, it remains very difficult to precisely quantify how "small" or "large" a tuning budget needs to be to make such a distinction.

The above lessons not only reveal the important factors for practitioners to consider when choosing between PS-w and PS-w/o for bi-objective software configuration tuning, but also hint at a few future research opportunities in this regard.
These are:

* Landscape analysis for configurable software systems: We have found that the realism of the aspiration space, its position in the objective landscape, and the tuning budget can be the key factors to consider when choosing between PS-w and PS-w/o. All of these are relevant to the landscape analysis of the configurable system itself. Indeed, by systematically analyzing any collected data, we can obtain more knowledge about the above factors and hence make more informed decisions on whether to incorporate requirements into the tuning.
* Requirement-robust optimizer for configuration tuning: The realism of the aspirations is certainly the key factor in the choice between PS-w and PS-w/o. However, it may not always be possible to obtain such knowledge in advance, leaving uncertainty in the decision. In this regard, it would be desirable to combine the strengths of PS-w and PS-w/o to design an optimizer that is robust to such uncertainty in the requirements. Again, the landscape analysis from the previous opportunity can provide insights into the designs.
* Rigorous analysis of requirement patterns and their relationships to the tuning: Although we see little implication of the requirement patterns for the choice between PS-w and PS-w/o, it is important to better understand why the results vary more on some of the patterns and how exactly the patterns can affect the performance of PS-w. In fact, on the theoretical side, the quantification from Section 3 provides a foundation for theoretical reasoning about switching between patterns, which is important in the topic of requirement relaxation. For example, this can be achieved in two aspects:
  - With the quantification of the patterns, one can formally show the relations between them. For example, since all points in $\boldsymbol{p_{1}}$ have a higher satisficing value than those of $\boldsymbol{p_{3}}$, we can say that $\boldsymbol{p_{1}}$ is a "relaxed" form of $\boldsymbol{p_{3}}$.
  - Similarly, we can quantify the relationship between a pattern under two different aspiration levels.

  With the above understanding, software engineers can achieve more explainability with respect to the given requirements during the tuning. For example, once the tuning completes, one would know how to relax or tighten the requirements such that the most preferred configuration can be found under them. This can be a unified process that combines both requirement negotiation and the tuning itself.
* Interactive configuration tuning: On the empirical side, our findings provide a few insights on what to do under different circumstances during interactive tuning. For example, if the software engineers find that the tuning never (or rarely) produces configurations that satisfy the requirements/aspirations under PS-w, then one can immediately switch to PS-w/o rather than questioning the suitability of the underlying optimizer. Similarly, one can influence the results produced by PS-w (or PS-w/o) by changing the position of the aspiration space.

## 7\. Threats to Validity

As with other empirical studies in software engineering, our work may contain threats to construct validity in the following aspects:

* Metric: Pareto search produces a set of configurations, and thus the comparisons need to work on a set rather than a single configuration. We used HV, which is a comprehensive metric for evaluating solution sets, following the methodology proposed by Li et al. (Li et al., 2022).
Since there can be different given sets of requirements with aspirations, the configuration sets ought to be compared under such a scenario. To that end, we extend the HV to explicitly consider the patterns of requirements, as discussed in Section 4.4.1.
* Statistics: The stochastic nature of the Pareto optimizers can raise threats to the stability of the results. To mitigate this, we repeat the experiments for 100 runs and use the Wilcoxon test along with $\hat{A}_{12}$ to verify all pairwise comparisons. All of the above methods have been recommended and widely used in Software Engineering research (Arcuri and Briand, 2011).

Two factors may form threats to internal validity in our study:

* Tuning budget: Given the size of our study, we set a one-hour budget for each case, which is a common setting for expensive problems in SBSE (Li et al., 2020b). To minimize measurement interference across our experiments, this is then converted into a number of unique measurements following systematic steps (Section 4.3.2). We have also analyzed the trajectories of A-HV in Section 5.4, showing what would happen if a smaller budget were used. Admittedly, investigating a larger tuning budget may affect some of the results, but confirming this would need even more computational resources and time (due to the expensive tuning), which we plan as part of future work.
* Optimizer setting: In this work, we follow what has been shown to be effective for SBSE problems in the literature, as our aim is to compare the most common practices. The only setting we could not determine for sure is the population size, which is highly problem-dependent. To tackle this, we have followed carefully designed criteria (Section 4.3.3) that strike a balance between reasonable convergence and the time required under the tuning budget. However, we do agree that exploring alternative parameter settings can be a threat that requires further exploration, which we leave as part of future work.

Threats to external validity can come from various sources, including:

* Software systems: In this work, we select the eight most representative systems/environments from existing work on software configuration tuning based on carefully codified rules (Section 4.1). Those subject systems come from diverse domains, with different scales, performance objectives, and search spaces. A point worth noting is that the requirements extracted include those for more complex systems, such as cyber-physical systems, while the subjects we examined are mainly software systems. This does not severely invalidate our conclusion, because the extracted implications and patterns are rather generic and can be applied to different cases, and there exist performance attributes that are relevant to a wide range of systems, e.g., latency- and throughput-related requirements (with different aspiration levels) (Nair et al., 2020). Nonetheless, we agree that this list of studied systems is not exhaustive, and we may miss some particular situations that only become clear for more complex systems. Experimenting with more systems of diverse types may prove fruitful. A relevant point is that we did not examine our results on highly complex software systems that cut across the software and hardware layers. In those cases, the interaction between cross-layered configuration options can be more complex, leading to different configuration landscapes (Iqbal et al., 2022).
Therefore, examining those highly complex systems may provide new insights and further consolidate our findings. It is worth noting that it could be particularly attractive to relate the results to the different types of software systems. Unfortunately, however, we have not yet observed consistent patterns in the results according to the domain of the systems, hence we are unable to draw a general conclusion thereupon. This can be attributed to two reasons:
  - The workload and benchmark under which each of the systems runs are rather different, creating a distinct configuration landscape.
  - Because of the above, the appropriate aspirations (levels) used are also different, even for systems from the same domain.

  Again, using even more software systems may help us achieve this, which we certainly plan to do in future work. However, this does not invalidate the
# 3D UX-Net: A Large Kernel Volumetric ConvNet Modernizing Hierarchical Transformer for Medical Image Segmentation

Ho Hin Lee, Shunxing Bao, Yuankai Huo, Bennett A. Landman
Vanderbilt University
Correspondence to<EMAIL_ADDRESS>

###### Abstract

The recent 3D medical ViTs (e.g., SwinUNETR) achieve state-of-the-art performance on several 3D volumetric data benchmarks, including 3D medical image segmentation. Hierarchical transformers (e.g., Swin Transformers) reintroduced several ConvNet priors and further enhanced the practical viability of adapting volumetric segmentation to 3D medical datasets. The effectiveness of hybrid approaches is largely credited to the large receptive field for non-local self-attention and the large number of model parameters. We hypothesize that volumetric ConvNets can simulate the large receptive field behavior of these learning approaches with fewer model parameters using depth-wise convolution. In this work, we propose a lightweight volumetric ConvNet, termed 3D UX-Net, which adapts the hierarchical transformer using ConvNet modules for robust volumetric segmentation. Specifically, we revisit volumetric depth-wise convolutions with large kernel (LK) size (e.g., starting from $7\times 7\times 7$) to enable larger global receptive fields, inspired by the Swin Transformer. We further substitute the multi-layer perceptron (MLP) in Swin Transformer blocks with pointwise depth convolutions and enhance model performance with fewer normalization and activation layers, thus reducing the number of model parameters. 3D UX-Net competes favorably with current SOTA transformers (e.g., SwinUNETR) using three challenging public datasets on volumetric brain and abdominal imaging: 1) MICCAI Challenge 2021 FLARE, 2) MICCAI Challenge 2021 FeTA, and 3) MICCAI Challenge 2022 AMOS. 3D UX-Net consistently outperforms SwinUNETR, with improvement from 0.929 to 0.938 Dice (FLARE2021) and 0.867 to 0.874 Dice (FeTA2021). We further evaluate the transfer learning capability of 3D UX-Net with AMOS2022 and demonstrate another improvement of $2.27\%$ Dice (from 0.880 to 0.900). The source code for our proposed model is available at https://github.com/MASILab/3DUX-Net.

## 1 Introduction

Significant progress has been made recently with the introduction of vision transformers (ViTs) Dosovitskiy et al. (2020) into 3D medical downstream tasks, especially for volumetric segmentation benchmarks Wang et al. (2021); Hatamizadeh et al. (2022b); Zhou et al. (2021); Xie et al. (2021); Chen et al. (2021). The characteristics of ViTs are the lack of image-specific inductive bias and the scaling behaviour, which are enhanced by large model capacities and dataset sizes. Both characteristics contribute to the significant improvement over ConvNets on medical image segmentation Tang et al. (2022); Bao et al. (2021); He et al. (2022); Atito et al. (2021). However, it is challenging to adapt 3D ViT models as generic network backbones due to the high complexity of computing global self-attention with respect to the input size, especially for high-resolution images with dense features across scales. Therefore, hierarchical transformers have been proposed to bridge these gaps with their intrinsic hybrid structure Zhang et al. (2022); Liu et al. (2021). Introducing the "sliding window" strategy into ViTs, termed the Swin Transformer, makes them behave similarly to ConvNets Liu et al. (2021).
SwinUNETR adapts Swin Transformer blocks as the generic vision encoder backbone and achieves current state-of-the-art performance on several 3D segmentation benchmarks Hatamizadeh et al. (2022a); Tang et al. (2022). Such a performance gain is largely owing to the large receptive field from 3D shifted-window multi-head self-attention (MSA). However, the computation of shifted-window MSA is computationally unscalable to achieve via traditional 3D volumetric ConvNet architectures. As advances in ViTs start to bring back the concepts of convolution, the key components behind such large performance differences are attributed to the scaling behavior and to global self-attention with large receptive fields. As such, we further ask: Can we leverage convolution modules to enable the capabilities of hierarchical transformers?

The recent advance in LK-based depthwise convolution design (e.g., Liu et al. (2022)) provides a computationally scalable mechanism for large receptive fields in 2D ConvNets. Inspired by such a design, this study revisits 3D volumetric ConvNet design to investigate the feasibility of (1) achieving SOTA performance via a pure ConvNet architecture, (2) yielding much lower network complexity compared with 3D ViTs, and (3) providing a new direction for designing 3D ConvNets for volumetric high-resolution tasks. Unlike SwinUNETR, we propose a lightweight volumetric ConvNet, 3D UX-Net, to adapt the intrinsic properties of the Swin Transformer with ConvNet modules and enhance volumetric segmentation performance with smaller model capacities. Specifically, we introduce volumetric depth-wise convolutions with LK sizes to simulate the large-receptive-field operation used to generate self-attention in the Swin Transformer. Furthermore, instead of linearly scaling the self-attention features across channels, we introduce pointwise depth-convolution scaling to distribute each channel-wise feature independently into a wider hidden dimension (e.g., $4\times$ the input channels), thus minimizing the redundancy of learned context across channels and preserving model performance without increasing model capacity. We evaluate 3D UX-Net on supervised volumetric segmentation tasks with three public volumetric datasets: 1) MICCAI Challenge 2021 FeTA (infant brain imaging), 2) MICCAI Challenge 2021 FLARE (abdominal imaging), and 3) MICCAI Challenge 2022 AMOS (abdominal imaging). Surprisingly, 3D UX-Net, a network constructed purely from ConvNet modules, demonstrates a consistent improvement across all datasets compared with the current transformer SOTA. We summarize our contributions as follows:

* We propose 3D UX-Net to adapt transformer behavior purely with ConvNet modules in a volumetric setting. To our best knowledge, this is the first large-kernel block design leveraging 3D depthwise convolutions to compete favorably with transformer SOTAs on volumetric segmentation tasks.
* We leverage depth-wise convolution with LK sizes as the generic feature extraction backbone, and introduce pointwise depth convolution to scale the extracted representations effectively with fewer parameters.
* We use three challenging public datasets to evaluate 3D UX-Net in 1) direct training and 2) finetuning scenarios with volumetric multi-organ/tissue segmentation. 3D UX-Net achieves consistent improvements in both scenarios over all ConvNet and transformer SOTAs with fewer model parameters.
## 2 Related Work

### 2.1 Transformer-based Segmentation

Significant efforts have been put into integrating ViTs for dense predictions in the medical imaging domain Hatamizadeh et al. (2022b); Chen et al. (2021); Zhou et al. (2021); Wang et al. (2021). With the advancement of the Swin Transformer, SwinUNETR equips the encoder with Swin Transformer blocks to compute self-attention for enhancing brain tumor segmentation accuracy in 3D MRI images Hatamizadeh et al. (2022a). Tang et al. extend SwinUNETR by adding a self-supervised pre-training strategy for fine-tuning segmentation tasks. Another UNet-like architecture, Swin-Unet, further adapts the Swin Transformer in both the encoder and decoder networks via skip connections to learn local and global semantic features for multi-organ abdominal CT segmentation Cao et al. (2021). Similarly, SwinBTS has an intrinsic structure similar to Swin-Unet, with an enhanced transformer module for detailed feature extraction Jiang et al. (2022). However, transformer-based volumetric segmentation frameworks still require lengthy training times and are accompanied by the high computational complexity associated with extracting features at multiple scales Xie et al. (2021); Shamshad et al. (2022). Such limitations therefore motivate us to rethink whether ConvNets can emulate transformer behavior for efficient feature extraction.

### 2.2 Depthwise convolution based Segmentation

Apart from transformer-based frameworks, previous works have begun to revisit the concept of depthwise convolution and adapt its characteristics for robust segmentation. Depthwise convolution has proven to be a powerful variation of standard convolution that helps reduce the number of parameters and aids transfer learning Guo et al. (2019). Zunair et al. introduced depthwise convolution to sharpen the features prior to fusing the decoded features in a UNet-like architecture Zunair & Hamza (2021). 3D U2-Net leveraged depthwise convolutions as domain adaptors to extract domain-specific features in each channel Huang et al. (2019). Both studies demonstrate the feasibility of using depthwise convolution to enhance volumetric tasks. However, only small kernel sizes were used for the depthwise convolutions. Several prior works have investigated the effectiveness of LK convolution in medical image segmentation. For instance, Huo et al. leveraged LK (7x7) convolutional layers as the skip connections to address anatomical variations for splenomegaly spleen segmentation Huo et al. (2018); Li et al. proposed to adapt LK and dilated depthwise convolutions in the decoder for volumetric segmentation Li et al. (2022). However, a significant increase in FLOPs comes with LKs, dramatically reducing both training and inference efficiency. To enhance model efficiency with LKs, Liu et al. proposed ConvNeXt as a 2D generic backbone that simulates the advantages of ViTs with LK depthwise convolution for downstream tasks on natural images Liu et al. (2022), while ConvUNeXt was proposed to extend this design to 2D medical image segmentation and was compared only with 2D CNN-based networks (e.g., ResUNet Shu et al. (2021), UNet++ Zhou et al. (2019)) Han et al. (2022). However, few studies have efficiently leveraged depthwise convolution with LKs in a volumetric setting while competing favorably with volumetric transformer approaches.
With the large receptive field brought by LK depthwise convolution, we hypothesize that LK depthwise convolution can potentially emulate transformers' behavior and efficiently benefit volumetric segmentation.

Figure 1: Overview of our proposed convolution blocks designed to simulate the behaviour of Swin Transformers. We leverage depthwise convolution and pointwise scaling to achieve a large receptive field and enrich the features by widening independent channels. We further compare different backbones of volumetric ConvNets and the Swin Transformer block architecture. The yellow dotted line highlights the differences in the spatial position of widening feature channels in the network bottleneck.

## 3 3D UX-Net: Intuition

Inspired by Liu et al. (2022), we introduce 3D UX-Net, a simple volumetric ConvNet that adapts the capability of hierarchical transformers and preserves the advantages of ConvNet modules such as inductive biases. The basic idea of designing the encoder block in 3D UX-Net can be divided into 1) block-wise and 2) layer-wise perspectives. First, we discuss the block-wise perspective in three views:

* Patch-wise Feature Projection: Comparing ConvNets and ViTs, there is a common block that both networks use to aggressively downscale feature representations into particular patch sizes. Here, instead of flattening image patches into a sequential input with a linear layer Dosovitskiy et al. (2020), we adopt a LK projection layer to extract patch-wise features as the encoder's inputs.
* Volumetric Depth-wise Convolution with LKs: One of the intrinsic specialties of the Swin Transformer is the sliding-window strategy for computing non-local MSA. Overall, there are two hierarchical ways to compute MSA: 1) window-based MSA (W-MSA) and 2) shifted-window MSA (SW-MSA). Both generate a global receptive field across layers and further refine the feature correspondence between non-overlapping windows. Inspired by the idea of depth-wise convolution, we find similarities between the weighted-sum approach in self-attention and per-channel convolution. We argue that using depth-wise convolution with a LK size can provide a large receptive field for extracting features, similar to the MSA blocks. Therefore, we propose compressing the window-shifting characteristics of the Swin Transformer into a volumetric depth-wise convolution with a LK size (e.g., starting from $7\times 7\times 7$). Each kernel channel is convolved with the corresponding input channel, so that the output feature has the same channel dimension as the input.
* Inverted Bottleneck with Depthwise Convolutional Scaling: Another intrinsic structure of the Swin Transformer is that the hidden dimension of the MLP block is designed to be four times wider than the input dimension, as shown in Figure 1. Such a design is interestingly correlated with the expansion ratio in the ResNet block He et al. (2016). Therefore, we leverage a similar design to the ResNet block and move the depth-wise convolution up to compute features. Furthermore, we introduce depthwise convolutional scaling (DCS) with a $1\times 1\times 1$ kernel to linearly scale each channel feature independently. We enrich the feature representations by expanding and compressing each channel independently, thus minimizing the redundancy of cross-channel context. We enhance the cross-channel feature correspondences with the downsampling block in each stage.
By using DCS, we further reduce the model complexity by 5% and demonstrate comparable results to the block architecture using an MLP.

Figure 2: Overview of the proposed 3D UX-Net with our designed convolutional block as the encoder backbone. LK convolution is used to project features into patch-wise embeddings. A downsampling block is used in each stage to mix and enrich context across all channels, while our designed blocks extract meaningful features in a depth-wise setting.

The macro-design of the convolution blocks demonstrates the possibility of adapting a large receptive field and performing feature extraction in a way similar to the Swin Transformer. We further investigate the variation between ConvNets and the Swin Transformer in layer-wise settings and refine the model architecture to better simulate ViTs at the macro level. Here, we define and adapt the layer-wise differences from another three perspectives:

* Applying Residual Connections: From Figure 1, the gold-standard 3D U-Net block demonstrates the naive approach of using small kernels to extract local representations with increased channels Çiçek et al. (2016), while the SegResNet block applies a residual connection similar to the transformer block Myronenko (2018). Here, we also apply residual connections between the input and the extracted features after the last scaling layer. However, we do not apply any normalization or activation layers before or after the residual summation, to be consistent with the Swin Transformer structure.
* Adapting Layer Normalization (LN): In ConvNets, batch normalization (BN) is a common strategy that normalizes convolved representations to enhance convergence and reduce overfitting. However, previous works have demonstrated that BN can have a detrimental effect on model generalizability Wu & Johnson (2021). Although several approaches have been proposed as alternative normalization techniques Salimans & Kingma (2016); Ulyanov et al. (2016); Wu & He (2018), BN still remains the usual choice in volumetric vision tasks. Motivated by vision transformers and Liu et al. (2022), we directly substitute BN with LN in the encoder block, mirroring the operations in ViTs Ba et al. (2016).
* Using GELU as the Activation Layer: Many previous works have used rectified linear unit (ReLU) activation layers Nair & Hinton (2010) to provide non-linearity in both ConvNets and ViTs. However, previously proposed transformer models demonstrate the Gaussian error linear unit (GELU) to be a smoother variant, which tackles the abrupt zeroing of negative inputs in ReLU Hendrycks & Gimpel (2016). Therefore, we substitute ReLU with the GELU activation function.

## 4 3D UX-Net: Complete Network Description

3D UX-Net comprises multiple re-designed volumetric convolution blocks that directly utilize 3D patches. Skip connections are further leveraged to connect the multi-resolution features to a convolution-based decoder network. Figure 2 illustrates the complete architecture of 3D UX-Net. We further describe the details of the encoder and decoder in this section.

### 4.1 Depth-wise Convolution Encoder

Given a set of 3D image volumes $V_{i}=\{X_{i},Y_{i}\}_{i=1,\dots,L}$, random sub-volumes $P_{i}\in\mathcal{R}^{H\times W\times D\times C}$ are extracted to be the inputs for the encoder network.
Instead of flattening the patches and projecting them with a linear layer Hatamizadeh et al. (2022b), we leverage a LK convolutional layer to compute a partitioned feature map of size $\frac{H}{2}\times\frac{W}{2}\times\frac{D}{2}$ that is projected into a $C=48$-dimensional space. To adapt the characteristics of computing local self-attention, we use depthwise convolution (DWC) with a kernel size starting from $7\times 7\times 7$ and a padding of 3, to act as a "shifted window" and evenly divide the feature map. As global self-attention is generally not computationally affordable with the large number of patches extracted in the Swin Transformer Liu et al. (2021), we hypothesize that performing depthwise convolution with a LK size can effectively extract features with a global receptive field. Therefore, we define the outputs of the encoder blocks in layers $l$ and $l+1$ as follows:

$$
\begin{aligned}
\hat{z}^{l} &= \text{DWC}(\text{LN}(z^{l-1})) + z^{l-1},\\
z^{l} &= \text{DCS}(\text{LN}(\hat{z}^{l})) + \hat{z}^{l},\\
\hat{z}^{l+1} &= \text{DWC}(\text{LN}(z^{l})) + z^{l},\\
z^{l+1} &= \text{DCS}(\text{LN}(\hat{z}^{l+1})) + \hat{z}^{l+1},
\end{aligned}
\qquad (1)
$$

where $\hat{z}^{l}$ and $\hat{z}^{l+1}$ are the outputs from the DWC layers at different depth levels; LN and DCS denote layer normalization and depthwise convolutional scaling, respectively (see Figure 1). Compared to the Swin Transformer, we substitute the regular and window-partitioning multi-head self-attention modules, W-MSA and SW-MSA respectively, with two DWC layers. Motivated by SwinUNETR Tang et al. (2022); Hatamizadeh et al. (2022a), the complete encoder architecture consists of 4 stages comprising 2 LK convolution blocks at each stage (i.e., L=8 total layers). Inside each block, the DCS layer follows the DWC layer, consistent with Eq. (1). The DCS layer helps scale the dimension of the feature map (4 times the input channel size) without increasing model parameters and minimizes the redundancy of the learned volumetric context across channels. To exchange information across channels, instead of using an MLP, we leverage a standard convolution block with kernel size $2\times 2\times 2$ and stride 2 to downscale the feature resolution by a factor of 2. The same procedure continues in stage 2, stage 3, and stage 4 with resolutions of $\frac{H}{4}\times\frac{W}{4}\times\frac{D}{4}$, $\frac{H}{8}\times\frac{W}{8}\times\frac{D}{8}$, and $\frac{H}{16}\times\frac{W}{16}\times\frac{D}{16}$, respectively. Such hierarchical representations are extracted at each stage in a multi-scale setting and further leveraged for learning dense volumetric segmentation.
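To make Eq. (1) concrete, below is a minimal PyTorch-style sketch of one encoder block. This is our reading of the description above, not the released 3D UX-Net implementation: the module and argument names are ours, LayerNorm is applied over the channel dimension by temporarily moving channels last, and the DCS expansion/compression is realized with grouped $1\times 1\times 1$ convolutions so that each channel is scaled independently.

```python
# A minimal sketch (our reading of Eq. (1), not the released 3D UX-Net code).
import torch
import torch.nn as nn

class ChannelLayerNorm(nn.Module):
    """LayerNorm over the channel dim of a (N, C, H, W, D) volume."""
    def __init__(self, channels):
        super().__init__()
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):
        x = x.permute(0, 2, 3, 4, 1)      # channels last
        x = self.norm(x)
        return x.permute(0, 4, 1, 2, 3)   # back to channels first

class UXBlock(nn.Module):
    """One encoder block: LN -> LK depthwise conv (DWC), then LN -> DCS, each with a residual."""
    def __init__(self, channels, kernel_size=7, expansion=4):
        super().__init__()
        self.norm1 = ChannelLayerNorm(channels)
        self.dwc = nn.Conv3d(channels, channels, kernel_size,
                             padding=kernel_size // 2, groups=channels)
        self.norm2 = ChannelLayerNorm(channels)
        # DCS: each input channel is expanded to `expansion` hidden channels and
        # compressed back independently (grouped 1x1x1 convolutions); GELU placement
        # between the two is our assumption, following the ConvNeXt-style block.
        self.dcs = nn.Sequential(
            nn.Conv3d(channels, channels * expansion, 1, groups=channels),
            nn.GELU(),
            nn.Conv3d(channels * expansion, channels, 1, groups=channels),
        )

    def forward(self, z):
        z_hat = self.dwc(self.norm1(z)) + z           # DWC(LN(z)) + z
        return self.dcs(self.norm2(z_hat)) + z_hat    # DCS(LN(z_hat)) + z_hat

# Quick shape check on a toy volume.
block = UXBlock(channels=48)
print(block(torch.randn(1, 48, 16, 16, 16)).shape)    # torch.Size([1, 48, 16, 16, 16])
```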
Table 1: Comparison of transformer and ConvNet SOTA approaches on the FeTA 2021 and FLARE 2021 testing datasets (*: $p<0.01$, Wilcoxon signed-rank test against all SOTA approaches). Columns ECF through Mean (FeTA) report FeTA 2021 Dice scores; columns Spleen through Mean (FLARE) report FLARE 2021 Dice scores.

| Methods | #Params | FLOPs | ECF | GM | WM | Vent. | Cereb. | DGM | BS | Mean (FeTA) | Spleen | Kidney | Liver | Pancreas | Mean (FLARE) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 3D U-Net Çiçek et al. (2016) | 4.81M | 135.9G | 0.867 | 0.762 | 0.925 | 0.861 | 0.910 | 0.845 | 0.827 | 0.857 | 0.911 | 0.962 | 0.905 | 0.789 | 0.892 |
| SegResNet Myronenko (2018) | 1.18M | 15.6G | 0.868 | 0.770 | 0.927 | 0.865 | 0.911 | 0.867 | 0.825 | 0.862 | 0.963 | 0.934 | 0.965 | 0.745 | 0.902 |
| RAP-Net Lee et al. (2021) | 38.2M | 101.2G | 0.880 | 0.771 | 0.927 | 0.862 | 0.907 | 0.879 | 0.832 | 0.865 | 0.946 | 0.967 | 0.940 | 0.799 | 0.913 |
| nn-UNet Isensee et al. (2021) | 31.2M | 743.3G | 0.883 | 0.775 | 0.930 | 0.868 | 0.920 | 0.880 | 0.840 | 0.870 | 0.971 | 0.966 | 0.976 | 0.792 | 0.926 |
| TransBTS Wang et al. (2021) | 31.6M | 110.4G | 0.885 | 0.778 | 0.932 | 0.861 | 0.913 | 0.876 | 0.837 | 0.868 | 0.964 | 0.959 | 0.974 | 0.711 | 0.902 |
| UNETR Hatamizadeh et al. (2022b) | 92.8M | 82.6G | 0.861 | 0.762 | 0.927 | 0.862 | 0.908 | 0.868 | 0.834 | 0.860 | 0.927 | 0.947 | 0.960 | 0.710 | 0.886 |
| nnFormer Zhou et al. (2021) | 149.3M | 240.2G | 0.880 | 0.770 | 0.930 | 0.857 | 0.903 | 0.876 | 0.828 | 0.863 | 0.973 | 0.960 | 0.975 | 0.717 | 0.906 |
| SwinUNETR Hatamizadeh et al. (2022a) | 62.2M | 328.4G | 0.873 | 0.772 | 0.929 | 0.869 | 0.914 | 0.875 | 0.842 | 0.867 | 0.979 | 0.965 | 0.980 | 0.788 | 0.929 |
| 3D UX-Net (Ours) | 53.0M | 639.4G | 0.882 | 0.780 | 0.934 | 0.872 | 0.917 | 0.886 | 0.845 | 0.874* | 0.981 | 0.969 | 0.982 | 0.801 | 0.934* |

Table 2: Comparison of finetuning performance with transformer SOTA approaches on the AMOS 2022 testing dataset (*: $p<0.01$, Wilcoxon signed-rank test against all SOTA approaches).

| Methods | Spleen | R. Kid | L. Kid | Gall. | Eso. | Liver | Stom. | Aorta | IVC | Panc. | RAG | LAG | Duo. | Blad. | Pros. | Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| nn-UNet | 0.965 | 0.959 | 0.951 | 0.889 | 0.820 | 0.980 | 0.890 | 0.948 | 0.901 | 0.821 | 0.785 | 0.739 | 0.806 | 0.869 | 0.839 | 0.878 |
| TransBTS | 0.885 | 0.931 | 0.916 | 0.817 | 0.744 | 0.969 | 0.837 | 0.914 | 0.855 | 0.724 | 0.630 | 0.566 | 0.704 | 0.741 | 0.650 | 0.792 |
| UNETR | 0.926 | 0.936 | 0.918 | 0.785 | 0.702 | 0.969 | 0.788 | 0.893 | 0.828 | 0.732 | 0.717 | 0.554 | 0.658 | 0.683 | 0.722 | 0.762 |
| nnFormer | 0.935 | 0.904 | 0.887 | 0.836 | 0.712 | 0.964 | 0.798 | 0.901 | 0.821 | 0.734 | 0.665 | 0.587 | 0.641 | 0.744 | 0.714 | 0.790 |
| SwinUNETR | 0.959 | 0.960 | 0.949 | 0.894 | 0.827 | 0.979 | 0.899 | 0.944 | 0.899 | 0.828 | 0.791 | 0.745 | 0.817 | 0.875 | 0.841 | 0.880 |
| 3D UX-Net | 0.970 | 0.967 | 0.961 | 0.923 | 0.832 | 0.984 | 0.920 | 0.951 | 0.914 | 0.856 | 0.825 | 0.739 | 0.853 | 0.906 | 0.876 | 0.900* |

Figure 3: Validation curves (Dice score) for FeTA2021 (a), FLARE2021 (b), and AMOS2022 (c). 3D UX-Net demonstrates the fastest convergence rate in the limited-sample training (FeTA2021) and transfer learning (AMOS2022) scenarios, respectively, while the convergence rate is comparable to SwinUNETR as the training sample size increases (FLARE2021).

### 4.2 Decoder

The multi-scale outputs from each stage of the encoder are connected to a ConvNet-based decoder via skip connections, forming a "U-shaped" network for the downstream segmentation task. Specifically, we extract the output feature map of each stage $i$ ($i\in\{0,1,2,3,4\}$) in the encoder and further leverage a residual block comprising two post-normalized $3\times 3\times 3$ convolutional layers with instance normalization to stabilize the extracted features. The processed features from each stage are then upsampled with a transposed convolutional layer and concatenated with the features from the preceding stage. For downstream volumetric segmentation, we also concatenate the residual features from the input patches with the upsampled features and pass them into a residual block with a $1\times 1\times 1$ convolutional layer and a softmax activation to predict the segmentation probabilities.
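As an illustration of a single decoder stage described above, a short PyTorch-style sketch follows. The module names, activation choice, and exact layer ordering are our assumptions rather than the released code.

```python
# A minimal sketch of one decoder stage (assumed names/ordering, not the released code).
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    """Two 3x3x3 convolutions with instance normalization and a residual connection."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch),
        )
        self.skip = nn.Conv3d(in_ch, out_ch, 1)  # match channels for the residual sum

    def forward(self, x):
        return self.body(x) + self.skip(x)

class DecoderStage(nn.Module):
    """Upsample the deeper features, concatenate the skip features, then refine."""
    def __init__(self, deep_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose3d(deep_ch, out_ch, kernel_size=2, stride=2)
        self.refine = ResBlock3D(out_ch + skip_ch, out_ch)

    def forward(self, deep, skip):
        return self.refine(torch.cat([self.up(deep), skip], dim=1))

# Toy shapes: deeper stage at half resolution with twice the channels of the skip features.
stage = DecoderStage(deep_ch=96, skip_ch=48, out_ch=48)
out = stage(torch.randn(1, 96, 8, 8, 8), torch.randn(1, 48, 16, 16, 16))
print(out.shape)  # torch.Size([1, 48, 16, 16, 16])
```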
## 5 Experimental Setup

Datasets. We conduct experiments on three public multi-modality datasets for volumetric segmentation, comprising 1) the MICCAI 2021 FeTA Challenge dataset (FeTA2021) Payette et al. (2021), 2) the MICCAI 2021 FLARE Challenge dataset (FLARE2021) Ma et al. (2021), and 3) the MICCAI 2022 AMOS Challenge dataset (AMOS2022) Ji et al. (2022). For the FeTA2021 dataset, we employ 80 T2-weighted infant brain MRIs from the University Children's Hospital, acquired with 1.5 T and 3 T clinical whole-body scanners, for brain tissue segmentation, with seven specific tissues well annotated. For FLARE2021 and AMOS2022, we employ 511 multi-contrast abdominal CT scans from FLARE2021 with four anatomies manually annotated and 200 multi-contrast abdominal CT scans from AMOS 2022 with sixteen anatomies manually annotated for abdominal multi-organ segmentation. More details of the three public datasets can be found in Appendix A.2.

Figure 4: Qualitative representations of tissue and multi-organ segmentation across the three public datasets. Boxed regions are further zoomed in to visualize the significant differences in segmentation quality. 3D UX-Net shows the segmentation quality closest to the ground truth.

Implementation Details. We perform evaluations in two scenarios: 1) direct supervised training and 2) transfer learning with pretrained weights. The FeTA2021 and FLARE2021 datasets are leveraged to evaluate the direct training scenario, while the AMOS dataset is used for the transfer learning scenario. We perform five-fold cross-validation on both the FeTA2021 and FLARE2021 datasets. More detailed information on the data splits is provided in Appendix A.2. For the transfer learning scenario, we leverage the pretrained weights from the best-fold model trained with FLARE2021 and finetune the model weights on AMOS2022 to evaluate the fine-tuning capability of 3D UX-Net. The complete preprocessing and training details are available in Appendix A.1. Overall, we evaluate 3D UX-Net by comparing it with current volumetric transformer and ConvNet SOTA approaches for volumetric segmentation in a fully supervised setting. We use the Dice similarity coefficient as the evaluation metric to compare the overlapping regions between predictions and ground-truth labels. Furthermore, we performed ablation studies to investigate the effect of different kernel sizes and the variability of substituting linear layers with depthwise convolution for feature extraction.
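Since the Dice similarity coefficient is the sole evaluation metric, a brief sketch of the per-class computation is given below. This is the standard formulation; the function name and toy label maps are ours.

```python
# A standard per-class Dice similarity coefficient (function name is ours).
import numpy as np

def dice(pred, gt, label, eps=1e-8):
    """Overlap between a predicted and a ground-truth label map for one class."""
    p = (pred == label)
    g = (gt == label)
    return (2.0 * np.logical_and(p, g).sum() + eps) / (p.sum() + g.sum() + eps)

# Toy 3D label maps: 1 = organ, 0 = background.
pred = np.zeros((4, 4, 4), dtype=int); pred[1:3, 1:3, 1:3] = 1
gt = np.zeros((4, 4, 4), dtype=int);   gt[1:4, 1:3, 1:3] = 1
print(round(dice(pred, gt, label=1), 3))  # 0.8
```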
Table 3: Ablation studies of different architectural choices on FeTA2021 and FLARE2021 (mean Dice).

| Methods | #Params (M) | FeTA2021 | FLARE2021 |
|---|---|---|---|
| SwinUNETR | 62.2 | 0.867 | 0.929 |
| Use standard conv. | 186.9 | 0.875 | 0.937 |
| Use depth conv. | 53.0 | 0.874 | 0.934 |
| Kernel $3\times 3\times 3$ | 52.5 | 0.867 | 0.928 |
| Kernel $5\times 5\times 5$ | 52.7 | 0.869 | 0.931 |
| Kernel $7\times 7\times 7$ | 53.0 | 0.874 | 0.934 |
| Kernel $9\times 9\times 9$ | 53.6 | 0.870 | 0.934 |
| Kernel $11\times 11\times 11$ | 54.4 | 0.871 | 0.936 |
| Kernel $13\times 13\times 13$ | 55.7 | 0.871 | 0.938 |
| No MLP | 51.1 | 0.869 | 0.915 |
| Use MLP | 56.3 | 0.872 | 0.933 |
| Use DCS $1\times 1\times 1$ | 53.0 | 0.874 | 0.934 |

## 6 Results

### 6.1 Evaluation on FeTA & FLARE

Table 1 shows the comparison with current transformer and ConvNet SOTA approaches for medical image segmentation in a volumetric setting. With our designed convolutional blocks as the encoder backbone, 3D UX-Net demonstrates the best performance across all segmentation tasks, with a significant improvement in Dice score (FeTA2021: 0.870 to 0.874, FLARE2021: 0.929 to 0.934). From Figure 3, we observe that 3D UX-Net demonstrates the quickest convergence rate when training with the FeTA2021 dataset. Interestingly, as the training sample size increases, the training convergence efficiency becomes comparable between SwinUNETR and 3D UX-Net. Apart from the quantitative results, Figure 4 further provides additional confidence by demonstrating the quality improvement in segmentation with 3D UX-Net. The morphology of organs and tissues is well preserved compared to the ground-truth label.

### 6.2 Transfer Learning with AMOS

Apart from the training-from-scratch scenario, we further investigate the transfer learning capability of 3D UX-Net compared to the transformer SOTAs on the AMOS 2022 dataset. We observe that the finetuning performance of 3D UX-Net significantly outperforms the other transformer networks with a mean Dice of 0.900 (a 2.27% enhancement), and most organ segmentations demonstrate a consistent improvement in quality. Also, from Figure 3, although the convergence curve of each transformer network is comparable to that of its FLARE2021-trained counterpart, 3D UX-Net further shows its capability of fast convergence and enhanced model robustness with finetuning. Furthermore, the qualitative representations in Figure 4 demonstrate a significant improvement in preserving boundaries between neighboring organs and in minimizing the possibility of over-segmentation into other organ regions.

### 6.3 Ablation Analysis

After evaluating the core performance of 3D UX-Net, we study how the different components in our designed architecture contribute to such a significant improvement in performance, as well as how they interact with each other. Here, both FeTA2021 and FLARE2021 are leveraged for the ablation studies of the different modules. All ablation studies are performed with the $7\times 7\times 7$ kernel, except the study evaluating the effect of kernel size.

Comparing with Standard Convolution: We investigate the effectiveness of both standard convolution and depthwise convolution for initial feature extraction. Using standard convolution demonstrates a slight improvement. However, the number of model parameters is about 3.5 times that of using depthwise convolution, while the segmentation performance with depthwise convolution remains comparable on both datasets.

Variation of Kernel Size: From Table 3, we observe that the $7\times 7\times 7$ kernel works best for the FeTA2021 dataset, while FLARE2021 segmentation performs best with a $13\times 13\times 13$ kernel. The significant improvement from the $13\times 13\times 13$ kernel on FLARE2021 may be due to the larger receptive field, which enhances the feature correspondence between multiple neighboring organs within the abdominal region. For the FeTA2021 dataset, only the small infant brains are localized as foreground, and the $7\times 7\times 7$ kernel provides the optimal receptive field for extracting the tissue correspondence.

Adapting DCS: We found a significant performance drop without using an MLP for feature scaling. With linear scaling, the performance is enhanced significantly on FLARE2021, while a slight improvement is seen on FeTA2021. Interestingly, leveraging depthwise convolution with a $1\times 1\times 1$ kernel for scaling demonstrates a further slight enhancement in performance on both the FeTA2021 and FLARE2021 datasets. Also, the model parameters further drop from 56.3M to 53.0M without trading off model performance.

## 7 Discussion

In this work, we present a block-wise design to simulate the behavior of the Swin Transformer using pure ConvNet modules. We further adapt our design as a generic encoder backbone into a "U-Net"-like architecture via skip connections for volumetric segmentation. We found that the key components for improved performance can be divided into two main perspectives: 1) the sliding-window strategy of computing MSA and 2) the inverted bottleneck architecture of widening the computed feature channels.
The W-MSA enhances learning of the feature correspondence within each window, while the SW-MSA strengthens the cross-window connections at the feature level between different non-overlapping windows. Such a strategy integrates ConvNet priors into transformer networks and enlarges the receptive fields for feature extraction. However, we found that depth convolutions can perform operations similar to the MSA computation in Swin Transformer blocks. In depth-wise convolutions, we convolve each input channel with a single convolutional filter and stack the convolved outputs together, which is comparable to the patch-merging layer for feature outputs in Swin Transformers. Furthermore, adapting the depth convolutions with LK filters shows similarities with both W-MSA and SW-MSA, learning the feature connections within a large receptive field. Our design provides capabilities similar to the Swin Transformer and additionally has the advantage of reducing the number of model parameters by using ConvNet modules.

Another interesting difference is the inverted bottleneck architecture. Figure 1 shows that both the Swin Transformer and some standard ConvNets have their specific bottleneck architectures (yellow dotted line). The distinctive component of the Swin Transformer's bottleneck is keeping the channel size four times wider than the input dimension, together with the spatial position of the MSA layer. We follow the inverted bottleneck architecture of the Swin Transformer block and move the depthwise convolution to the top, similar to the MSA layer. Instead of using linear scaling, we introduce the idea of depthwise convolution in a pointwise setting to scale the dense features with wider channels. Interestingly, we found a slight improvement in performance across datasets (FeTA2021: 0.872 to 0.874, FLARE2021: 0.933 to 0.934), but with fewer model parameters. As each encoder block only consists of two scaling layers, the limited number of scaling blocks may affect the performance to a small extent. We will further investigate the scalability of the linear scaling layer in 3D as future work.

## 8 Conclusion

We introduce 3D UX-Net, the first volumetric network adapting the capabilities of hierarchical transformers with pure ConvNet modules for medical image segmentation. We re-design the encoder blocks with depthwise convolution and projections to simulate the behavior of hierarchical transformers. Furthermore, we adjust the layer-wise design in the encoder block and enhance the segmentation performance across different training settings. 3D UX-Net outperforms current transformer SOTAs with fewer model parameters on three challenging public datasets in both supervised training and transfer learning scenarios.

#### Acknowledgments

This research is supported by NIH Common Fund and National Institute of Diabetes, Digestive and Kidney Diseases U54DK120058 (Spraggins), NSF CAREER 1452485, NIH 2R01EB006136, NIH 1R01EB017230 (Landman), and NIH R01NS09529. This study was in part using the resources of the Advanced Computing Center for Research and Education (ACCRE) at Vanderbilt University, Nashville, TN. The identified datasets used for the analysis described were obtained from the Research Derivative (RD), a database of clinical and related data. The imaging dataset(s) used for the analysis described were obtained from ImageVU, a research repository of medical imaging data and image-related metadata.
ImageVU and RD are supported by the VICTR CTSA award (ULTR000445 from NCATS/NIH) and Vanderbilt University Medical Center institutional funding. ImageVU pilot work was also funded by PCORI (contract CDRN-1306-04869). We further thank Quan Liu, a Ph.D student in Computer Science Department of Vanderbilt University, to extensively discuss the initial idea of this paper. ## References * Atito et al. (2021) Sara Atito, Muhammad Awais, and Josef Kittler. Sit: Self-supervised vision transformer. _arXiv preprint arXiv:2104.03602_ , 2021. * Ba et al. (2016) Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. _arXiv preprint arXiv:1607.06450_ , 2016. * Bao et al. (2021) Hangbo Bao, Li Dong, and Furu Wei. Beit: Bert pre-training of image transformers. _arXiv preprint arXiv:2106.08254_ , 2021. * Cao et al. (2021) Hu Cao, Yueyue Wang, Joy Chen, Dongsheng Jiang, Xiaopeng Zhang, Qi Tian, and Manning Wang. Swin-unet: Unet-like pure transformer for medical image segmentation. _arXiv preprint arXiv:2105.05537_ , 2021. * Chen et al. (2021) Jieneng Chen, Yongyi Lu, Qihang Yu, Xiangde Luo, Ehsan Adeli, Yan Wang, Le Lu, Alan L Yuille, and Yuyin Zhou. Transunet: Transformers make strong encoders for medical image segmentation. _arXiv preprint arXiv:2102.04306_ , 2021. * Çiçek et al. (2016) Özgün Çiçek, Ahmed Abdulkadir, Soeren S Lienkamp, Thomas Brox, and Olaf Ronneberger. 3d u-net: learning dense volumetric segmentation from sparse annotation. In _International conference on medical image computing and computer-assisted intervention_ , pp. 424–432. Springer, 2016. * Ding et al. (2021) Xiaohan Ding, Xiangyu Zhang, Ningning Ma, Jungong Han, Guiguang Ding, and Jian Sun. Repvgg: Making vgg-style convnets great again. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 13733–13742, 2021. * Ding et al. (2022a) Xiaohan Ding, Honghao Chen, Xiangyu Zhang, Kaiqi Huang, Jungong Han, and Guiguang Ding. Re-parameterizing your optimizers rather than architectures. _arXiv preprint arXiv:2205.15242_ , 2022a. * Ding et al. (2022b) Xiaohan Ding, Xiangyu Zhang, Jungong Han, and Guiguang Ding. Scaling up your kernels to 31x31: Revisiting large kernel design in cnns. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 11963–11975, 2022b. * Dosovitskiy et al. (2020) Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. _arXiv preprint arXiv:2010.11929_ , 2020. * Guo et al. (2019) Yunhui Guo, Yandong Li, Liqiang Wang, and Tajana Rosing. Depthwise convolution is all you need for learning multiple visual domains. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 33, pp. 8368–8375, 2019. * Han et al. (2022) Zhimeng Han, Muwei Jian, and Gai-Ge Wang. Convunext: An efficient convolution neural network for medical image segmentation. _Knowledge-Based Systems_ , pp. 109512, 2022. * Hatamizadeh et al. (2022a) Ali Hatamizadeh, Vishwesh Nath, Yucheng Tang, Dong Yang, Holger R Roth, and Daguang Xu. Swin unetr: Swin transformers for semantic segmentation of brain tumors in mri images. In _International MICCAI Brainlesion Workshop_ , pp. 272–284. Springer, 2022a. * Hatamizadeh et al. (2022b) Ali Hatamizadeh, Yucheng Tang, Vishwesh Nath, Dong Yang, Andriy Myronenko, Bennett Landman, Holger R Roth, and Daguang Xu. 
Unetr: Transformers for 3d medical image segmentation. In _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision_ , pp. 574–584, 2022b. * He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pp. 770–778, 2016. * He et al. (2022) Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 16000–16009, 2022. * Hendrycks & Gimpel (2016) Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (gelus). _arXiv preprint arXiv:1606.08415_ , 2016. * Hu et al. (2022) Mu Hu, Junyi Feng, Jiashen Hua, Baisheng Lai, Jianqiang Huang, Xiaojin Gong, and Xian-Sheng Hua. Online convolutional re-parameterization. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 568–577, 2022. * Huang et al. (2019) Chao Huang, Hu Han, Qingsong Yao, Shankuan Zhu, and S Kevin Zhou. 3d $u^{2}$-net: A 3d universal u-net for multi-domain medical image segmentation. In _International Conference on Medical Image Computing and Computer-Assisted Intervention_ , pp. 291–299. Springer, 2019. * Huo et al. (2018) Yuankai Huo, Zhoubing Xu, Shunxing Bao, Camilo Bermudez, Hyeonsoo Moon, Prasanna Parvathaneni, Tamara K Moyo, Michael R Savona, Albert Assad, Richard G Abramson, et al. Splenomegaly segmentation on multi-modal mri using deep convolutional networks. _IEEE transactions on medical imaging_ , 38(5):1185–1196, 2018. * Isensee et al. (2021) Fabian Isensee, Paul F Jaeger, Simon AA Kohl, Jens Petersen, and Klaus H Maier-Hein. nnu-net: a self-configuring method for deep learning-based biomedical image segmentation. _Nature methods_ , 18(2):203–211, 2021. * Ji et al. (2022) Yuanfeng Ji, Haotian Bai, Jie Yang, Chongjian Ge, Ye Zhu, Ruimao Zhang, Zhen Li, Lingyan Zhang, Wanling Ma, Xiang Wan, et al. Amos: A large-scale abdominal multi-organ benchmark for versatile medical image segmentation. _arXiv preprint arXiv:2206.08023_ , 2022. * Jiang et al. (2022) Yun Jiang, Yuan Zhang, Xin Lin, Jinkun Dong, Tongtong Cheng, and Jing Liang. Swinbts: a method for 3d multimodal brain tumor segmentation using swin transformer. _Brain Sciences_ , 12(6):797, 2022. * Lee et al. (2021) Ho Hin Lee, Yucheng Tang, Shunxing Bao, Richard G Abramson, Yuankai Huo, and Bennett A Landman. Rap-net: Coarse-to-fine multi-organ segmentation with single random anatomical prior. In _2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)_ , pp. 1491–1494. IEEE, 2021. * Li et al. (2022) Hao Li, Yang Nan, and Guang Yang. Lkau-net: 3d large-kernel attention-based u-net for automatic mri brain tumor segmentation. In _Annual Conference on Medical Image Understanding and Analysis_ , pp. 313–327. Springer, 2022. * Liu et al. (2021) Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , pp. 10012–10022, 2021. * Liu et al. (2022) Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A convnet for the 2020s. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 11976–11986, 2022. * Ma et al. 
(2021) Jun Ma, Yao Zhang, Song Gu, Cheng Zhu, Cheng Ge, Yichi Zhang, Xingle An, Congcong Wang, Qiyuan Wang, Xin Liu, et al. Abdomenct-1k: Is abdominal organ segmentation a solved problem. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 2021. * Myronenko (2018) Andriy Myronenko. 3d mri brain tumor segmentation using autoencoder regularization. In _International MICCAI Brainlesion Workshop_ , pp. 311–320. Springer, 2018. * Nair & Hinton (2010) Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In _Icml_ , 2010. * Payette et al. (2021) Kelly Payette, Priscille de Dumast, Hamza Kebiri, Ivan Ezhov, Johannes C Paetzold, Suprosanna Shit, Asim Iqbal, Romesa Khan, Raimund Kottke, Patrice Grehten, et al. An automatic multi-tissue human fetal brain segmentation benchmark using the fetal tissue annotation dataset. _Scientific Data_ , 8(1):1–14, 2021. * Salimans & Kingma (2016) Tim Salimans and Durk P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. _Advances in neural information processing systems_ , 29, 2016. * Shamshad et al. (2022) Fahad Shamshad, Salman Khan, Syed Waqas Zamir, Muhammad Haris Khan, Munawar Hayat, Fahad Shahbaz Khan, and Huazhu Fu. Transformers in medical imaging: A survey. _arXiv preprint arXiv:2201.09873_ , 2022. * Shu et al. (2021) Xiu Shu, Yunyun Yang, and Boying Wu. Adaptive segmentation model for liver ct images based on neural network and level set method. _Neurocomputing_ , 453:438–452, 2021. * Tang et al. (2022) Yucheng Tang, Dong Yang, Wenqi Li, Holger R Roth, Bennett Landman, Daguang Xu, Vishwesh Nath, and Ali Hatamizadeh. Self-supervised pre-training of swin transformers for 3d medical image analysis. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 20730–20740, 2022. * Ulyanov et al. (2016) Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. _arXiv preprint arXiv:1607.08022_ , 2016. * Wang et al. (2021) Wenxuan Wang, Chen Chen, Meng Ding, Hong Yu, Sen Zha, and Jiangyun Li. Transbts: Multimodal brain tumor segmentation using transformer. In _International Conference on Medical Image Computing and Computer-Assisted Intervention_ , pp. 109–119. Springer, 2021. * Wu & He (2018) Yuxin Wu and Kaiming He. Group normalization. In _Proceedings of the European conference on computer vision (ECCV)_ , pp. 3–19, 2018. * Wu & Johnson (2021) Yuxin Wu and Justin Johnson. Rethinking” batch” in batchnorm. _arXiv preprint arXiv:2105.07576_ , 2021. * Xie et al. (2021) Yutong Xie, Jianpeng Zhang, Chunhua Shen, and Yong Xia. Cotr: Efficiently bridging cnn and transformer for 3d medical image segmentation. In _International conference on medical image computing and computer-assisted intervention_ , pp. 171–180. Springer, 2021. * Zhang et al. (2022) Zizhao Zhang, Han Zhang, Long Zhao, Ting Chen, Sercan Ö Arik, and Tomas Pfister. Nested hierarchical transformer: Towards accurate, data-efficient and interpretable visual understanding. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 36, pp. 3417–3425, 2022. * Zhou et al. (2021) Hong-Yu Zhou, Jiansen Guo, Yinghao Zhang, Lequan Yu, Liansheng Wang, and Yizhou Yu. nnformer: Interleaved transformer for volumetric segmentation. _arXiv preprint arXiv:2109.03201_ , 2021. * Zhou et al. (2019) Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang. 
Unet++: Redesigning skip connections to exploit multiscale features in image segmentation. _IEEE transactions on medical imaging_ , 39(6):1856–1867, 2019. * Zunair & Hamza (2021) Hasib Zunair and A Ben Hamza. Sharp u-net: depthwise convolutional network for biomedical image segmentation. _Computers in Biology and Medicine_ , 136:104699, 2021.

## Appendix A

### A.1 Data Preprocessing & Model Training

We apply hierarchical steps for data preprocessing: 1) intensity clipping is applied to further enhance the contrast of soft tissue (FLARE2021 & AMOS2022: {min: -175, max: 250}); 2) intensity normalization is performed after clipping for each volume using min-max normalization $(X-X_{1})/(X_{99}-X_{1})$ to map the intensity values into the range 0 to 1, where $X_{p}$ denotes the $p$-th percentile of intensity in $X$. We then randomly crop sub-volumes of size $96\times 96\times 96$ in the foreground and perform data augmentations, including rotations, intensity shifting, and scaling (scaling factor: 0.1). All training of 3D UX-Net is optimized with the AdamW optimizer. We trained all models for 40000 steps with a learning rate of $0.0001$ on an NVIDIA Quadro RTX 5000 for both FeTA2021 and FLARE2021, while training for AMOS2022 was performed on an NVIDIA Quadro RTX A6000. One epoch takes approximately 1 minute for FeTA2021, 10 minutes for FLARE2021, and 7 minutes for AMOS2022. All training parameters are summarized in Table 4.

Table 4: Hyperparameters of the direct training and finetuning scenarios on the three public datasets

Hyperparameter | Direct Training | Finetuning
---|---|---
Encoder Stages | 4 | 4
Layer-wise Channels | $48,96,192,384$ | $48,96,192,384$
Hidden Dimension | $768$ | $768$
Patch Size | $96\times 96\times 96$ | $96\times 96\times 96$
No. of Sub-volumes Cropped | 2 | 1
Training Steps | 40000 | 40000
Batch Size | 2 | 1
AdamW $\epsilon$ | $1e-8$ | $1e-8$
AdamW $\beta$ | ($0.9,0.999$) | ($0.9,0.999$)
Peak Learning Rate | $1e-4$ | $1e-4$
Learning Rate Scheduler | ReduceLROnPlateau | N/A
Factor & Patience | 0.9, 10 | N/A
Dropout | ✗ | ✗
Weight Decay | 0.08 | 0.08
Data Augmentation | Intensity Shift, Rotation, Scaling | Intensity Shift, Rotation, Scaling
Cropped Foreground | ✓ | ✓
Intensity Offset | 0.1 | 0.1
Rotation Degree | $-30^{\circ}$ to $+30^{\circ}$ | $-30^{\circ}$ to $+30^{\circ}$
Scaling Factor | x: 0.1, y: 0.1, z: 0.1 | x: 0.1, y: 0.1, z: 0.1

### A.2 Public Datasets Details

Table 5: Complete overview of the three public MICCAI challenge datasets

MICCAI Challenge | FeTA 2021 | FLARE 2021 | AMOS 2022
---|---|---|---
Imaging Modality | 1.5T & 3T MRI | Multi-Contrast CT | Multi-Contrast CT
Anatomical Region | Infant Brain | Abdomen | Abdomen
Dimensions | $256\times 256\times 256$ | $512\times 512\times\{37-751\}$ | $\{512-768\}\times\{512-768\}\times\{68-353\}$
Resolution (mm) | $\{0.43-0.70\}\times\{0.43-0.70\}\times\{0.43-0.70\}$ | $\{0.61-0.98\}\times\{0.61-0.98\}\times\{0.50-7.50\}$ | $\{0.45-1.07\}\times\{0.45-1.07\}\times\{1.25-5.00\}$
Sample Size | 80 | 361 | 200
Anatomical Labels | External Cerebrospinal Fluid (ECF), Grey Matter (GM), White Matter (WM), Ventricles, Cerebellum, Deep Grey Matter (DGM), Brainstem | Spleen, Kidney, Liver, Pancreas | Spleen, Left & Right Kidney, Gall Bladder, Esophagus, Liver, Stomach, Aorta, Inferior Vena Cava (IVC), Pancreas, Left & Right Adrenal Gland (AG), Duodenum, Bladder, Prostate/Uterus
Data Splits | 5-Fold Cross-Validation (Train: 50 / Validation: 12 / Test: 18) | 5-Fold Cross-Validation (Train: 272 / Validation: 69 / Test: 20) | 1-Fold (Train: 160 / Validation: 20 / Test: 20)

### A.3 Further Discussions Comparing to nn-UNet

Table 6: Ablation studies of adapting the nn-UNet architecture on the FeTA 2021 and FLARE 2021 testing datasets.
(*: $p<0.01$, Wilcoxon signed-rank test against all SOTA approaches; D.S.: Deep Supervision)

Methods | ECF | GM | WM | Vent. | Cereb. | DGM | BS | FeTA Mean | Spleen | Kidney | Liver | Pancreas | FLARE Mean
---|---|---|---|---|---|---|---|---|---|---|---|---|---
nn-UNet Isensee et al. (2021) | 0.883 | 0.775 | 0.930 | 0.868 | 0.920 | 0.880 | 0.840 | 0.870 | 0.971 | 0.966 | 0.976 | 0.792 | 0.926
3D UX-Net (Plain) | 0.882 | 0.780 | 0.934 | 0.872 | 0.917 | 0.886 | 0.845 | 0.874 | 0.981 | 0.969 | 0.982 | 0.801 | 0.934
3D UX-Net (nn-UNet struct., w/o D.S.) | 0.885 | 0.784 | 0.937 | 0.872 | 0.921 | 0.887 | 0.849 | 0.876 | 0.983 | 0.972 | 0.983 | 0.821 | 0.940*
3D UX-Net (nn-UNet struct., D.S.) | 0.890 | 0.791 | 0.939 | 0.877 | 0.922 | 0.891 | 0.854 | 0.881* | 0.986 | 0.974 | 0.983 | 0.833 | 0.944*

In Tables 1 & 2, we compare our proposed network with multiple CNN-based SOTA networks and with the gold-standard approach nn-UNet. We observe that nn-UNet outperforms nearly all of the transformer state-of-the-art networks on both the FeTA 2021 and FLARE 2021 datasets. This advantage can be attributed mainly to its self-configuring training strategy and output ensembling as a postprocessing technique, while the network used in nn-UNet is only the plain 3D U-Net architecture. To further characterize the ability of our proposed network, we substitute the plain 3D U-Net architecture with our proposed 3D UX-Net and adopt the self-configuring hyperparameters for training. As shown in Table 6, this yields a significant performance improvement on the FeTA 2021 and FLARE 2021 datasets, with mean organ Dice increasing from 0.874 to 0.881 and from 0.934 to 0.944, respectively. To further investigate the differences in network architecture, we note that the convolution blocks in nn-UNet combine instance normalization with LeakyReLU. This design normalizes channel-wise features independently and mixes the channel context with small-kernel convolutional layers. In our design, we instead extract channel-wise features independently with depthwise convolution and mix the channel information only in the downsampling layers. Layer normalization is therefore used in our setting, with the goal of efficiently enhancing the feature correspondence across channels through a large receptive field. Furthermore, we found that the deep supervision strategy in nn-UNet, which computes an auxiliary loss on each stage's intermediate output, is also effective in further improving performance (FeTA 2021: from 0.876 to 0.881; FLARE 2021: from 0.940 to 0.944). For training, instead of the proposed initial learning rate of $0.01$, we reduce the initial learning rate to $0.002$ and train for 150 epochs (40000 steps $\approx$ 150 epochs) on FLARE 2021 and 850 epochs (40000 steps $\approx$ 850 epochs) on FeTA 2021, with a batch size of 2. For the finetuning scenario on AMOS 2022, we train the nn-UNet model for only 250 epochs (40000 steps $\approx$ 250 epochs) instead of the default setting (1000 epochs), to ensure a fair network comparison with a similar number of steps.

### A.4 Further Discussions on Training and Inference Efficiency

Table 7: Ablation studies of optimizing the 3D UX-Net architecture on the FeTA 2021 and FLARE 2021 testing datasets. (SD: Stage Depth, HDim: Hidden Dimension in the Bottleneck Layer.)
Methods | #Params (M) | FLOPs (G) | FeTA2021 Mean Dice | FLARE2021 Mean Dice
---|---|---|---|---
nn-UNet | 31.2 | 743.3 | 0.870 | 0.926
SwinUNETR | 62.2 | 328.4 | 0.867 | 0.929
3D UX-Net (SD: 2,2,2,2, HDim: 768) | 53.0 | 639.4 | 0.874 | 0.934
3D UX-Net (SD: 2,2,8,2, HDim: 384) | 32.1 | 536.8 | 0.873 | 0.932

Apart from the advantage in quantitative performance, the LK depthwise convolutions also reduce the number of model parameters from 62.2M to 53.0M compared to SwinUNETR in Table 3. However, although the training efficiency of 3D UX-Net is already better than that of nn-UNet (FLOPs: 743.3G vs. 639.4G), the FLOPs of 3D UX-Net still remain high. Inspired by the architectures of both the Swin Transformer Liu et al. (2021) and ConvNeXt Liu et al. (2022) in the natural image domain, we further remove the bottleneck layer (a ResNet block with 768 channels) and increase the block depth of stage 3 (e.g., to 8 blocks). This optimized design further reduces both the model parameters (from 53.0M to 32.1M; nn-UNet: 31.2M) and the FLOPs (from 639.4G to 536.8G; nn-UNet: 743.3G), while preserving performance (shown in Table 7). Additional validation studies are needed to investigate the effectiveness of both the MLP and pointwise DCS and to optimize the 3D UX-Net architecture, which will be the next steps of our future work. Another observation in Table 3 is the small difference in model parameters between kernel sizes of $3\times 3\times 3$ and $7\times 7\times 7$. We found that the increase in both model parameters and FLOPs is also attributable to the design of the decoder network. Our decoder block adds a 3D ResNet block after the transpose convolution to further resample and mix the channel context, instead of performing the transpose convolution directly as in nn-UNet. A more efficient decoder block design needs to be investigated further, and using depthwise convolution may be another potential way to reduce this efficiency burden. To further reduce the burden of low training and inference efficiency, re-parameterization of LK convolutional blocks may be another promising direction. Prior works have scaled up convolutional blocks to large kernel sizes ($31\times 31$) and proposed parallel branches with small kernels as residual shortcuts Ding et al. (2022b; 2021; a). The parallel branches can then be mutually converted through an equivalent transformation of parameters. For example, a branch of $1\times 1$ convolution and a branch of $7\times 7$ convolution can be converted into a single branch of $7\times 7$ convolution Ding et al. (2021). Furthermore, Hu et al. proposed online convolutional re-parameterization (OREPA), which leverages a linear scaling at each branch to diversify the optimization directions, instead of applying non-linear normalization after the convolution layer Hu et al. (2022). In addition, stacks of small kernels can be leveraged to generate a receptive field similar to that of LKs with better training and inference efficiency. The effectiveness of small-kernel stacks and multiple parallel branch designs will be further investigated as other directions of our future work.
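As a concrete illustration of this kind of branch fusion, the following is a minimal PyTorch-style sketch that merges a parallel $1\times 1\times 1$ branch into a $7\times 7\times 7$ convolution by adding its kernel at the spatial center; it assumes plain convolution branches summed without an intermediate nonlinearity (any normalization layers would first have to be folded into the weights), and it is not taken from the cited implementations.

```python
import torch
import torch.nn as nn

def fuse_branches(conv7: nn.Conv3d, conv1: nn.Conv3d) -> nn.Conv3d:
    """Merge a parallel 1x1x1 branch into a 7x7x7 conv (equivalent parameter transformation).

    Assumes both branches share in/out channels, groups, and stride, and that the two
    branch outputs are simply summed.
    """
    fused = nn.Conv3d(conv7.in_channels, conv7.out_channels, kernel_size=7,
                      padding=3, groups=conv7.groups, bias=True)
    with torch.no_grad():
        weight = conv7.weight.clone()
        # Place the 1x1x1 kernel at the spatial center of the 7x7x7 kernel.
        weight[:, :, 3, 3, 3] += conv1.weight[:, :, 0, 0, 0]
        fused.weight.copy_(weight)
        bias = torch.zeros(conv7.out_channels)
        if conv7.bias is not None:
            bias += conv7.bias
        if conv1.bias is not None:
            bias += conv1.bias
        fused.bias.copy_(bias)
    return fused

# Sanity check: the fused convolution reproduces the two-branch sum.
dim = 8
branch7 = nn.Conv3d(dim, dim, 7, padding=3, groups=dim)
branch1 = nn.Conv3d(dim, dim, 1, groups=dim)
x = torch.randn(1, dim, 16, 16, 16)
two_branch = branch7(x) + branch1(x)
merged = fuse_branches(branch7, branch1)(x)
print(torch.allclose(two_branch, merged, atol=1e-5))  # True
```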
# Understanding Heterogeneity of Automated Vehicles and Its Traffic-level Impact: A Stochastic Behavioral Perspective

Xinzhi Zhong, Yang Zhou, Soyoung Ahn, Danjue Chen

Civil and Environmental Engineering, University of Wisconsin-Madison; Zachry Department of Civil and Environmental Engineering, Texas A$\&$M University, College Station; Civil, Construction, and Environmental Engineering, NC State University

###### Abstract

This paper develops a stochastic and unifying framework to examine variability in the car-following (CF) dynamics of commercial automated vehicles (AVs) and its direct relation to traffic-level dynamics. The asymmetric behavior (AB) model by Chen et al. [4] is extended to accommodate a range of CF behaviors by AVs and to compare them with the baseline of human-driven vehicles (HDVs). The parameters of the extended AB (EAB) model are calibrated using an adaptive sequential Monte Carlo method for Approximate Bayesian Computation (ABC-ASMC) to stochastically capture various uncertainties, including model mismatch resulting from unknown AV CF logic. The estimated posterior distributions of the parameters reveal significant differences in CF behavior (1) between AVs and HDVs, and (2) across AV developers, engine modes, and speed ranges, albeit to a lesser degree. The estimated behavioral patterns and simulation experiments further reveal mixed platoon dynamics in terms of traffic throughput reduction and hysteresis.

###### keywords: Automated Driving, Extended Asymmetric Behaviour, Stochastic Calibration, Traffic Hysteresis, Uncertainty, Mixed Traffic.

_The code supporting the findings of this study is available on the Github page_.

## 1 Introduction

Vehicles on the road today have various automation features considered SAE Level 2-4 [32]. However, no uniform standards for these automation features currently exist. This can give rise to highly heterogeneous mixed traffic, consisting of automated vehicles (AVs) with widely varying behaviors by design and human-driven vehicles (HDVs), which are highly variable by nature. For instance, adaptive cruise control (ACC) is now a basic automation function that regulates vehicle car-following (CF) maneuvers. The lack of uniform standards has evidently led to varying CF behaviors, as observed by recent field studies [11, 13, 21, 19, 20]. The variation in CF manifests itself in variations of the response time [21], string instability [11, 20], and energy consumption [13] across different ACC systems. While the variation in CF partly stems from customizable settings (e.g., headway) [19, 20], it can also stem from a wide range of factors such as ACC algorithms and other elements of vehicle design (e.g., engine). Specifically, ACC algorithms are prolific in the academic literature, with many design options for the spacing policy and control logic. The spacing policy governs the steady-state CF behavior. Existing policies include the constant spacing policy [35], the constant time gap policy [23], and other, less prevalent policies [3]. The control logic influences how ACC responds to traffic disturbances (e.g., a leading vehicle's acceleration and deceleration). Various paradigms of control logic exist in the literature, largely classified into: (1) linear feedback-based controllers [26, 23, 27, 42]; (2) constrained optimization-based controllers (e.g., model predictive control) [37, 38, 41, 39]; and (3) data-driven or artificial intelligence (AI)-based controllers [28, 30, 14].
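For concreteness, the sketch below gives one minimal example of such a design: a constant time gap spacing policy combined with linear feedback control logic. The gains, time gap, and disturbance profile are illustrative assumptions and do not describe any particular commercial ACC system.

```python
import numpy as np

def ctg_linear_acc(spacing, v_follow, v_lead,
                   time_gap=1.5, standstill=2.0, k_s=0.23, k_v=0.07):
    """One step of a constant time gap (CTG) ACC law with linear feedback.

    spacing  : gap to the leader [m]
    v_follow : follower speed [m/s]
    v_lead   : leader speed [m/s]
    Returns the commanded acceleration [m/s^2]; all gains are illustrative values.
    """
    desired_spacing = standstill + time_gap * v_follow      # spacing policy
    spacing_error = spacing - desired_spacing
    speed_error = v_lead - v_follow
    return k_s * spacing_error + k_v * speed_error           # linear feedback logic

# Simulate a follower reacting to a leader that decelerates and then recovers.
dt, horizon = 0.1, 60.0
t = np.arange(0.0, horizon, dt)
v_lead = 30.0 - 8.0 * np.exp(-((t - 20.0) / 5.0) ** 2)       # smooth disturbance
x_lead = np.cumsum(v_lead) * dt

x, v = -50.0, 30.0                                           # follower starts 50 m behind
for k in range(len(t)):
    a = ctg_linear_acc(x_lead[k] - x, v, v_lead[k])
    v = max(v + a * dt, 0.0)
    x = x + v * dt
print(f"final gap: {x_lead[-1] - x:.1f} m, final speed: {v:.1f} m/s")
```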
The detailed designs of these approaches can also differ significantly, depending on the control objectives, parameters, and penalties/constraints. Besides these differences in formulation, various uncertainties (e.g., in vehicle dynamics) and stochasticity (e.g., stemming from user setting) further add complexity. All these differences present overwhelming challenges to systematically analyze to what extent the CF dynamics can vary and how the variation impacts traffic dynamics. Further, the myopic and adaptive nature of existing controllers, dictating the vehicle action (e.g., acceleration) in the next control time step based on the current system state, makes it challenging to gain insights at the traffic flow level. In light of these challenges, the objective of the present paper is two-fold: (1) develop a unifying and stochastic approach to approximate the CF behaviors of different AVs in a more tractable manner and (2) analyze the variation in ACC CF behavior and its traffic-level impact, in comparison to HDVs. To this end, we extend the Asymmetric Behavior (AB) model by Chen et al. [4] to approximate the CF behavior of various types of AVs. The AB model captures asymmetric reaction patterns, describing the evolution of driver response time with respect to the original value, while experiencing a traffic disturbance. The reaction pattern in this model gives direct insight into the disturbance evolution (e.g., amplifying, decaying, and duration). Further, these behavioral patterns are directly linked to important traffic phenomena, including the ‘capacity drop’ phenomenon (i.e., a drop in traffic throughput) [5]. While the AB model was originally developed to describe the CF behavior of HDVs, its flexible structure lends a unifying framework to analyze the CF behavior of different types of AVs [15, 34]. Notably, Kontar et al. [15] adopted the AB model to analyze the physical mechanisms of several well-known ACC algorithms. However, they observed more complex reaction patterns (e.g., concave followed by convex), suggesting the need for more general reaction patterns to effectively capture the behavior of AVs. Further, the control logic of commercially available AVs is mostly unknown due to its proprietary nature. A stochastic approach is thus needed to cope with this uncertainty. This paper addresses these shortcomings and provides an extended AB model (EAB model hereafter) to enable more flexible descriptions of the behavioral patterns. The model parameters are estimated in a stochastic fashion to capture the inherent randomness in the CF behavior and potential model mismatch. Specifically, we apply Approximate Bayesian Computation (ABC) with an adaptive sequential Monte Carlo sampler (ABC-ASMC) to calibrate the model parameters using field data of several ACC vehicles. This method is likelihood function free and relies on simulation instead to estimate a posterior joint distribution of parameters [8, 33, 43]. Thus, it provides a flexible platform for stochastic calibration when information about the parameter distribution is limited. The estimated behavioral patterns, confirmed by simulation experiments, elucidate platoon dynamics, particularly throughput reduction and traffic hysteresis. Our analysis based on the proposed stochastic and unifying framework has led to the following findings. 
(1) The estimated behavioral patterns and the posterior distributions of the parameters reveal marked differences in CF dynamics between AVs and HDVs, and across AV developers, albeit to a lesser degree. (2) Even for the same ACC maker, CF dynamics vary by speed range and engine modes. (3) With increasing penetration of ACC vehicles, the mixed platoon exhibits more complete hysteresis loops compared to HDVs, indicating lower throughput reduction. (4) The mixed platoon also exhibits smaller traffic hysteresis in the median-high speed range with higher penetration of ACC vehicles; however, the trend is opposite in low speed. While these specific findings could change as the technology develops, the proposed analysis framework is general and can lead to new insights as more data become available in the future. The rest of the paper is organized as follows. Section 2 presents the EAB CF model and its direct relation to traffic-level dynamics, along with the proposed stochastic calibration method based on ABC-ASMC. Section 3 provides the calibration results and discussion on the parameter variability that represents heterogeneity in CF behavior of commercial ACC vehicles and HDVs. Section 4 establishes a method to systematically measure traffic hysteresis and presents the quantified results of how ACC CF behavior impacts throughput and hysteresis. Section 5 further investigates the impact of heterogeneity and stochasticity on disturbance propagation in mixed traffic. Section 6 provides some discussion and concluding remarks. ## 2 Extended AB Model and Stochastic Calibration Method In this section, we first construct the EAB model to describe ACC behavioral patterns and their direct relation to the traffic-level dynamics. Then, we provide the stochastic calibration method by ABC-ASMC to calibrate the model parameters. ### 2.1 EAB Model Here we extend the AB model [4] to describe a wide range of potential CF behaviors under disturbances. We start our discussion with Newell’s simplified CF model [24] and then extend to the EAB model. Newell’s simplified CF model [24] states that follower $i$’s position at time $t$, $x_{i}(t)$, can be determined by a constant temporal-spatial shift ($\tau_{i}$ and $\delta_{i}$) of leader $i-1$’s position. The $\tau_{i}$ and $\delta_{i}$ are the response time and the minimum spacing for driver $i$, respectively, and represent the travel time and distance of a traffic disturbance. Note that the ratio of the average minimum spacing, $\delta$, and average response time, $\tau$, across vehicles corresponds to the congestion wave speed, $w=-\frac{\delta}{\tau}$, in the Kinematic Wave theory (KWT) with a triangular fundamental diagram. $\displaystyle x_{i}(t+\tau)=x_{i-1}(t)-\delta$ (1) Laval and Leclercq [18] observed that the follower would deviate from the Newell equilibrium around a disturbance. They formulated a time-dependent term, $\eta_{i}(t)=\frac{\tau_{i}(t)}{\tau}=\frac{\delta_{i}(t)}{\delta}$, to describe the dynamic deviations from the Newell trajectory, as shown in Eq. (2). Chen et al. [4] empirically verified the L-L model and further extended it to incorporate driver heterogeneity and asymmetric $\eta_{i}(t)$ evolution. They suggest five patterns for $\eta_{i}(t)$: concave, convex, nearly equilibrium, non-decreasing, and non-increasing when analyzing HDV CF. $\displaystyle x_{i}(t+\eta_{i}(t)\tau)=x_{i-1}(t)-\eta_{i}(t)\delta$ (2) The above-mentioned studies pertain to HDV traffic. For ACC controllers, Kontar et al. 
[15] observed composite patterns, concave followed by convex (concave-convex) and convex-concave patterns, which are beyond the AB model's scope. Hence, to make the AB model more flexible, we extend it using a parsimonious generalized piecewise function, as shown in Eq. (3). The parameter vector of the EAB model is $\vec{\theta}=[\eta^{0},\eta^{1},\eta^{2},\eta^{3},\epsilon^{0},\epsilon^{1},\epsilon^{2},t^{1}]^{\top}$. The differences between $\eta^{0}$ and $\eta^{1}$, $\eta^{1}$ and $\eta^{2}$, and $\eta^{2}$ and $\eta^{3}$ are denoted as $\Delta\eta^{0-1}$, $\Delta\eta^{1-2}$, and $\Delta\eta^{2-3}$, respectively. Based on the signs of $\Delta\eta^{0-1}$, $\Delta\eta^{1-2}$, and $\Delta\eta^{2-3}$, the EAB model describes seven behavioral patterns, as summarized in Table 1.

$\displaystyle\eta_{i}(t)=\begin{cases}\eta_{i}^{0}&0<t\leq t_{i}^{1}\\ \eta_{i}^{0}+\epsilon_{i}^{0}(t-t_{i}^{1})&t_{i}^{1}<t\leq t_{i}^{2}\\ \eta_{i}^{1}+\epsilon_{i}^{1}(t-t_{i}^{2})&t_{i}^{2}<t\leq t_{i}^{3}\\ \eta_{i}^{2}+\epsilon_{i}^{2}(t-t_{i}^{3})&t_{i}^{3}<t\leq t_{i}^{4}\\ \eta_{i}^{3}&t_{i}^{4}<t\end{cases}$ (3)

where $\eta_{i}^{1}=\eta_{i}^{0}+\epsilon_{i}^{0}(t_{i}^{2}-t_{i}^{1})$, $\eta_{i}^{2}=\eta_{i}^{1}+\epsilon_{i}^{1}(t_{i}^{3}-t_{i}^{2})$, and $\eta_{i}^{3}=\eta_{i}^{2}+\epsilon_{i}^{2}(t_{i}^{4}-t_{i}^{3})$; $\eta_{i}^{0}$ is the original equilibrium state, a constant value before the disturbance; $\eta_{i}^{1}$ and $\eta_{i}^{2}$ are the critical $\eta_{i}(t)$ values where the follower reaches the maximum and minimum deviations (or minimum and maximum) from the equilibrium state; $\eta_{i}^{3}$ is the new equilibrium state, a constant value after the disturbance. $\eta_{i}^{3}$ is not necessarily equal to $\eta_{i}^{0}$ due to asymmetric CF behavior. $t_{i}^{1}$ is the time point at which the follower starts to deviate from the original equilibrium state; $\epsilon_{i}^{0}$, $\epsilon_{i}^{1}$, and $\epsilon_{i}^{2}$ are the average slopes of $\eta_{i}(t)$ between $\eta_{i}^{0}$ and $\eta_{i}^{1}$, $\eta_{i}^{1}$ and $\eta_{i}^{2}$, and $\eta_{i}^{2}$ and $\eta_{i}^{3}$, respectively. When $\epsilon_{i}^{j}\rightarrow 0$ for some $j\in\{0,1,2\}$, the EAB model converges to the AB model.

Figure 1: Schematic illustration of the extended AB model. (a) Trajectories of the leader, the Newell follower, and the EAB follower with the $\eta_{i}(t)$ shown in (b). (b) Concave-convex $\eta_{i}(t)$ pattern.

Fig. 1 is a schematic illustration of the EAB model with a concave-convex $\eta_{i}(t)$ pattern as an example, when a disturbance propagates upstream at the speed $w$. In this example, $\eta_{i}^{0}>1$ as shown in Fig. 1(b), which represents conservative behavior. In Fig. 1(b), $\eta_{i}(t)$ increases at the rate $\epsilon_{i}^{0}$ from $t_{i}^{1}$ to $t_{i}^{2}$ and starts to decrease at the rate $\epsilon_{i}^{1}$ when the follower reaches its maximum spacing. $\eta_{i}(t)$ continues to decrease until the follower reaches the minimum spacing at $t_{i}^{3}$. When the follower realizes the leader is accelerating, the follower catches up to its equilibrium trajectory, and $\eta_{i}(t)$ is restored to an equilibrium at $t_{i}^{4}$. In this example, $\eta_{i}^{3}<\eta_{i}^{0}$, indicating a shorter equilibrium spacing after the disturbance. Consistent with $\eta_{i}(t)$, the disturbance initially amplifies and then partially decays.
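To make the piecewise definition concrete, the following is a minimal Python sketch of $\eta_{i}(t)$ in Eq. (3) and of shifting the leader trajectory according to Eq. (2); the parameter values are arbitrary illustrative choices (a concave-convex pattern), not calibrated ones.

```python
import numpy as np

def eta(t, eta0, eps0, eps1, eps2, t1, t2, t3, t4):
    """Piecewise-linear eta_i(t) of the EAB model, Eq. (3)."""
    eta1 = eta0 + eps0 * (t2 - t1)
    eta2 = eta1 + eps1 * (t3 - t2)
    eta3 = eta2 + eps2 * (t4 - t3)
    if t <= t1:
        return eta0
    if t <= t2:
        return eta0 + eps0 * (t - t1)
    if t <= t3:
        return eta1 + eps1 * (t - t2)
    if t <= t4:
        return eta2 + eps2 * (t - t3)
    return eta3

# Illustrative parameters (concave-convex pattern) and Newell constants.
tau, delta = 1.0, 9.0              # response time [s] and minimum spacing [m]
pars = dict(eta0=1.05, eps0=0.03, eps1=-0.05, eps2=0.02,
            t1=5.0, t2=15.0, t3=25.0, t4=33.0)

dt = 0.1
time = np.arange(0.0, 60.0, dt)
v_lead = 20.0 - 6.0 * np.exp(-((time - 25.0) / 6.0) ** 2)   # leader slows, then recovers
x_lead = np.cumsum(v_lead) * dt

# Eq. (2): x_i(t + eta_i(t) * tau) = x_{i-1}(t) - eta_i(t) * delta,
# i.e. the follower trajectory is the leader trajectory shifted in time and space.
eta_series = np.array([eta(t, **pars) for t in time])
t_follow = time + eta_series * tau
x_follow = x_lead - eta_series * delta
print(f"eta before disturbance: {eta_series[0]:.2f}, after: {eta_series[-1]:.2f}")
```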
Based on empirical observations, [5] identified a direct connection between the reaction pattern and the hysteresis in the $v_{i-1}$-$\eta_{i}$ evolution, where $v_{i-1}$ is the leader's velocity. The reaction pattern can capture three predominant hysteresis patterns: straight line (SL), counter-clockwise (CCW), and clockwise (CW). They also found that the response timing (i.e., early or late response) impacts the hysteresis orientation. We extend this microscopic relation to the more macroscopic hysteresis in the density-flow evolution, as shown in Table 1. We first define the different response timings: (1) _early response_ if the follower starts to restore to the new equilibrium during the deceleration phase of the leader, and (2) _late response_ if it does so after the acceleration phase of the leader. The composite concave-convex pattern is typically regarded as an early-response concave pattern followed by a late-response convex pattern. Likewise, the convex-concave pattern is regarded as an early-response convex pattern followed by a late-response concave pattern. Other combinations (e.g., an early-response concave followed by an early-response convex) are empirically rare. In the middle column, we use the blue line to represent the initial equilibrium state and the red line to represent the new equilibrium state after the disturbance. The seven hysteresis patterns are defined as follows: (1) SL: flow and density change along the slope of the wave speed. (2) $CW^{-}$, $CCW^{-}$: flow and density drop below the initial equilibrium and remain below the new equilibrium after the disturbance, in clockwise and counter-clockwise orientation, respectively. (3) $CW^{+}$, $CCW^{+}$: flow and density move above the line of the initial equilibrium and remain above the line of the new equilibrium. (4) $CW$: flow and density drop below the initial equilibrium and then rise above the new equilibrium. (5) $CCW$: flow and density rise above the initial equilibrium and then drop below the new equilibrium. As indicated by the respective reaction and hysteresis patterns, the evolution of the disturbance can be identified in the rightmost column of Table 1. Readers may refer to Zhong et al. [40] (in preparation) for a detailed proof.

Table 1: $\eta_{i}(t)$ pattern categorization and its direct relation to traffic-level dynamics

Category | Response | Hysteresis Orientation | Disturbance Evolution
---|---|---|---
Nearly Equilibrium (NE) | / | SL | Disturbance will not amplify or decay.
Concave | early | CCW- | Disturbance will amplify.
Concave | late | CW- | Disturbance will amplify.
Convex | early | CW+ | Disturbance will decay.
Convex | late | CCW+ | Disturbance will decay.
Concave and Convex | / | CCW | Disturbance will amplify and then partially decay.
Convex and Concave | / | CW | Disturbance will first decay and then amplify.
Non-decreasing | early | CCW- | Disturbance will decay, likely resulting in a decreased capacity.
Non-decreasing | late | CW- | Disturbance will decay, likely resulting in a decreased capacity.
Non-increasing | early | CW+ | Disturbance will decay, likely resulting in an increased capacity.
Non-increasing | late | CCW+ | Disturbance will decay, likely resulting in an increased capacity.

Note that we work with the proposed EAB model to approximate the CF behaviors of various ACC vehicles. Considering the potential model mismatch and the stochasticity of ACCs, we take a stochastic calibration approach, ABC-ASMC, to estimate the joint distribution of the model parameters, $\tilde{\pi}(\vec{\theta})$. Details follow.

### 2.2 EAB model calibration: ABC-ASMC

In this subsection, we apply ABC-ASMC [8] to approximate the joint distribution of the parameters, $\tilde{\pi}(\vec{\theta})$, building on the basic ABC-based method developed by Zhou et al. [43].
The basic ABC method (ABC- rejection sampling) is a likelihood-free Bayesian inference, where the likelihood is replaced by the simulations of parameter values (called ”particles”) sampled from prior distributions. A goodness of fit measure (GOF) is applied to evaluate the closeness between the observed and simulated data (e.g., vehicle trajectories). The simulated data close to the observed would be accepted based on a reasonable tolerance level $\gamma$ of GOF and used to approximate the posterior distributions. Thus, $\gamma$ is the main decisive variable shaping appropriate posterior distributions. Readers could refer to Zhou et al. [43] and Csilléry et al. [7] for a detailed review of the ABC- rejection sampling method. This earlier method has a simple structure of independently sampling particles from prior distributions until enough particles are accepted for convergence to estimate the posterior joint distribution. This independent structure makes it easy to implement but can bring very high computational burden due to the naive search process (i.e., trial and error). The proposed ABC-ASMC has a more strategic structure for sampling particles to reach quicker convergence by searching $\gamma$ in an automatic fashion, meaning a significant advantage for computational efficiency. Details of the method follow. Given the observed CF pair, leading vehicle $i-1$ trajectory (i.e., position and velocity), $x_{i-1},v_{i-1}$, and following vehicle $i$ trajectory (i.e., position and reaction pattern), $x_{i,obs},\eta_{i,obs}$, the ABC-ASMC aims to approximate the posterior distribution, $\tilde{\pi}(\vec{\theta})$, by Bayes’ Theorem: $\displaystyle\pi(\vec{\theta}|x_{i,obs},\eta_{i,obs},x_{i-1},v_{i-1})=\frac{f(x_{i,obs},\eta_{i,obs}|\vec{\theta},x_{i-1},v_{i-1})\pi(\vec{\theta})}{\int{f(x_{i,obs},\eta_{i,obs}|\vec{\theta}},x_{i-1},v_{i-1})d\vec{\theta}}$ (4) where $\pi(\vec{\theta})$ is the prior distribution of $\vec{\theta}$ usually set by prior knowledge and $f(x_{i,obs},\eta_{i,obs}|\vec{\theta},x_{i-1},v_{i-1})$ is the likelihood to reproduce $x_{i,obs},\eta_{i,obs}$ given the CF (control) model, $g(\vec{\theta},x_{i-1},v_{i-1})$ with parameter $\vec{\theta}$, and the leading trajectory, $x_{i-1},v_{i-1}$ (e.g., EAB model), where $\vec{\theta}=[\theta_{1},...\theta_{n},...\theta_{N}]$ and $N$ is the total number of the parameters in CF (control) model (e.g., $N=8$ in EAB model). Since $f(x_{i,obs},\eta_{i,obs}|\vec{\theta},x_{i-1},v_{i-1})$ is computationally intractable to obtain, an approximation algorithm is desired. ABC-ASMC is an effective tool to address this challenging inferential problem by replacing the likelihood function with simulations, adaptively decreasing $\gamma$ till convergence [7, 22, 33]. The ABC-ASMC mainly consists of four steps: initialization, sampling, updating, and stopping criteria checking. Details are given below: (1) Step 1 (Initialization): For iteration $l=0$, set $\gamma_{l}=\gamma_{0}$, where $\gamma_{0}$ is relatively a large number. Set the sampling weight $W_{l}^{k}=\frac{1}{K}$, $k=1\dots K$, where $K$ is the total number of particles, and the prior distribution at stage $l$, $\pi_{l}(\vec{\theta})=\pi(\vec{\theta})$. 
(2) Step 2 (Sampling): Sample $K$ particles ($\vec{\theta}$) from $\pi_{l}(\vec{\theta})$ according to $W_{l}^{k}$ to obtain a posterior distribution, $\hat{\pi}_{l}(\vec{\theta}|x_{i,obs},\eta_{i,obs},x_{i-1},v_{i-1})$, satisfying $GOF(x_{i,sim},\eta_{i,sim},x_{i,obs},\eta_{i,obs})<\gamma_{l}$, where $x_{i,sim},\eta_{i,sim}=g(\vec{\theta},x_{i-1},v_{i-1})$ under $\gamma_{l}$.

(3) Step 3 (Updating): Set $\gamma_{l+1}$ as the $\lambda$ percentile of the GOF values. We partition $\hat{\pi}_{l}(\vec{\theta}|x_{i,obs},\eta_{i,obs},x_{i-1},v_{i-1})$ into two subsets, an alive particle set, $\hat{\pi}_{l,A}(\vec{\theta}|x_{i,obs},\eta_{i,obs},x_{i-1},v_{i-1})$, and a perturbed particle set, $\hat{\pi}_{l,P}(\vec{\theta}|x_{i,obs},\eta_{i,obs},x_{i-1},v_{i-1})$. The alive particle set only keeps the particles $\vec{\theta}$ from $\hat{\pi}_{l}(\vec{\theta}|x_{i,obs},\eta_{i,obs},x_{i-1},v_{i-1})$ satisfying $GOF(x_{i,sim},\eta_{i,sim},x_{i,obs},\eta_{i,obs})<\gamma_{l+1}$, to ensure that better-fitting particles are retained during the iteration. The perturbed particle set consists of $(1-\lambda)K$ particles satisfying $GOF(x_{i,sim},\eta_{i,sim},x_{i,obs},\eta_{i,obs})<\gamma_{l+1}$, generated by a component-wise independent zero-mean normal random walk according to the kernel function $\chi_{l}$ proposed by [2], to further explore the sampling space:

$\displaystyle\chi_{l}(\theta_{n}^{l,P}|\theta_{n}^{l,A})=(2var(\theta_{n}^{l,A}))^{-1/2}\varphi((2var(\theta_{n}^{l,A}))^{-1/2}(\theta_{n}^{l,P}-\theta_{n}^{l,A}))$ (5)

where $\theta_{n}^{l,P}$ and $\theta_{n}^{l,A}$ are the marginals of $\vec{\theta}$ in the perturbed particle set and the alive particle set, respectively. Based on the component-wise independent random walk, we can calculate the particle acceptance ratio of the perturbed set, $\rho_{l+1}$. Based on the above two subsets, we update $\hat{\pi}_{l+1}(\vec{\theta})$ by concatenating $\hat{\pi}_{l,A}(\vec{\theta}|x_{i,obs},\eta_{i,obs},x_{i-1},v_{i-1})$ and $\hat{\pi}_{l,P}(\vec{\theta}|x_{i,obs},\eta_{i,obs},x_{i-1},v_{i-1})$, and then set $W_{l+1}^{k}=\frac{1}{K}$, $k=1\dots K$.

(4) Step 4 (Stopping Criterion Check): if $\rho_{l}\rightarrow 0$, stop the algorithm and output $\hat{\pi}_{l}(\vec{\theta}|x_{i,obs},\eta_{i,obs},x_{i-1},v_{i-1})$; otherwise, set $l=l+1$ and return to Step 2.

## 3 Calibration Results and Statistical Analysis

In this section, we calibrate the stochastic EAB model for ACCs and HDVs based on ABC-ASMC using open-source empirical datasets: ACC data collected by Li et al. [19], and the NGSIM [36] and highD [16] datasets for HDVs. Based on that, we take a holistic look at the distribution-wise heterogeneity in CF behavior across HDVs, different ACC vehicle models, engine modes, and speeds.

### 3.1 Empirical Data Description

Li et al. [19] designed experiments for a three-vehicle platoon consisting of an HDV followed by two ACC vehicles with a 1-second headway. To reflect real-world traffic conditions under disturbances, the HDVs are instructed to follow a designed speed profile. The HDVs initially travel at a nearly constant speed, then decelerate to a target speed, and finally accelerate to resume the initial constant speed. An illustrative example is provided in Fig. 2(a). In the dataset, there are three different controllers, denoted as ‘Car Model-X’, ‘Car Model-Y’, and ‘Car Model-Z’, respectively. The dataset of Car Model-Y is further partitioned into two subsets by different engines (Normal or Sports).
The dataset of Car Model-Z has four subsets, by different engines (Normal or Power) and different speed ranges (Low speed, or Median and High speed). To compare the CF behavior of HDVs and ACC vehicles, we extract CF data of HDVs that encountered a disturbance from two datasets (i.e., NGSIM and highD), as exemplified in Fig. 2(b). Further, we partition the HDV dataset into two subsets by speed range. Readers may refer to Li et al. [19], USDOT [36], and Krajewski et al. [16] for more details. A general description of the selected data is provided in Table 2.

Figure 2: Empirical trajectories around a disturbance (time-series position and time-series velocity). (a) ACC: Car Model-X. (b) HDV: HDV-2.

Table 2: General description of the empirical trajectories

Type | Car Model | Engine | Initial Speed Level | Number of Trajectories | Follower Initial Speed, mean (mph) | Follower Initial Speed, SD (mph)
---|---|---|---|---|---|---
HDV | HDV-1 | - | Low | 16 | 30.33 | 1.05
HDV | HDV-2 | - | Median and High | 50 | 48.22 | 7.52
ACC | X | - | Median and High | 48 | 48.41 | 12.64
ACC | Y-1 | Normal | Median and High | 48 | 48.83 | 12.73
ACC | Y-2 | Sport | Median and High | 48 | 48.45 | 12.69
ACC | Z-1 | Normal | Low | 16 | 35.78 | 1.61
ACC | Z-2 | Normal | Median and High | 32 | 55.87 | 10.44
ACC | Z-3 | Power | Low | 16 | 36.54 | 1.40
ACC | Z-4 | Power | Median and High | 32 | 56.36 | 10.11

### 3.2 Stochastic Calibration Results and Performance

For each ACC controller and HDV subset, we randomly select 75% of the empirical trajectories as a training set, $M_{1}$, and the remaining 25% as a testing set, $M_{2}$. We apply ABC-ASMC to calibrate the EAB model using the training set and validate the framework by reproducing trajectories using the estimated posterior joint distributions of the model parameters and comparing them with the trajectories in the testing set. As the EAB model is essentially an extension of Newell's simplified CF model, calibration is conducted in two stages: the basic parameters $\tau$ and $\delta$ are calibrated first, and the remaining parameters related to the $\eta(t)$ evolution are calibrated in the second stage. We do this to cope with the high dimension of the parameter space and to estimate the congestion wave speed, the parameter necessary to measure $\eta(t)$. The first-stage calibration is conducted in a deterministic fashion by minimizing the GOF measure, the normalized root mean square error (NRMSE), as below:

$\displaystyle GOF(x_{i,obs},x_{i,sim})=\sum_{i=1}^{|M_{1}|}NRMSE(x_{i,obs},x_{i,sim})=\sum_{i=1}^{|M_{1}|}\frac{\sqrt{\frac{1}{T}\sum_{t=1}^{T}(x_{i,obs}(t)-x_{i,sim}(t))^{2}}}{\sqrt{\frac{1}{T}\sum_{t=1}^{T}(x_{i,obs}(t))^{2}}}$ (6)

where $|M_{1}|$ is the cardinality of the training set $M_{1}$, $x_{i,obs}$ is the observed position, $x_{i,sim}$ is the simulated position, and $T$ is the total CF time.

Table 3: Calibrated parameters of the Newell model

Type | Car Model | Engine | Speed Level | $\tau$ (s) | $\delta$ (m) | $w$ (m/s)
---|---|---|---|---|---|---
HDV | HDV-1 | - | Low speed | 1 | 9 | -9
HDV | HDV-2 | - | Median and high speed | 1 | 6 | -6
ACC | X | - | Median and high | 1.1 | 10 | -9.09
ACC | Y | Normal | Median and high | 1.1 | 6 | -5.45
ACC | Y | Sport | Median and high | 1 | 8 | -8
ACC | Z | Normal | Low, median and high | 1 | 12 | -12
ACC | Z | Power | Low, median and high | 1 | 12 | -12

The first-stage calibration result in Table 3 is consistent with the findings from several empirical studies conducted on HDVs (Chiabaut et al. [6]) and ACCs (Gunter et al. [12]).
It shows that even the basic parameters show some variations across HDVs, different ACC car models and engines. HDV with median and high speed range (HDV-2) appears to be the most aggressive with the smallest $\tau$ and $\delta$. Car Model-Y has different settings for normal and sports engines, whereas Car Model-Z appears to share the same setting. It further shows that Car Model-Y is set to be more aggressive with a smaller $\delta$. Generally, Car Model-Z appears to have a more conservative setting than the other ACC car models. As a result, the estimated congestion wave speed varies across HDVs and ACC car models: it is fastest with Car Model-Y and slowest with Car Model-Z. For the second stage stochastic calibration, we set $K=20000$ and $\lambda=0.95$ considering the $\vec{\theta}$ dimension. Based on the maximum and minimum of the empirical reaction pattern $\eta_{obs}$, we set the prior distributions for the model parameters as independent uniform distributions whose marginal distributions are: (1) $\eta^{0},\eta^{1},\eta^{2},\eta^{3}\sim Uniform(0.5,1.5)$ for ACCs and $\eta^{0},\eta^{1},\eta^{2},\eta^{3}\sim Uniform(0.3,3)$ for HDVs; (2) $\epsilon^{0},\epsilon^{1},\epsilon^{2}\sim Uniform(-0.15,0.15)$; (3) $t_{1}\sim Uniform(0,25)$. For our calibration, we aim to reproduce the empirical position $x_{i,obs}$ as well as the reaction pattern $\eta_{i,obs}$. Hence, we modify the GOF for the second stage calibration as a weighted NRMSE between the empirical ($x_{i,obs}$,$\eta_{i,obs}$, ${\eta_{i,obs}}^{c}$) and calibrated ($x_{i,sim}$, $\eta_{i,sim}$, ${\eta_{i,sim}}^{c}$) as below: $\displaystyle GOF=c_{1}NRMSE(x_{i,obs},x_{i,sim})+c_{2}NRMSE(\eta_{i,obs},\eta_{i,sim})+c_{3}NRMSE({\eta_{i,obs}}^{c},{\eta_{i,sim}}^{c})$ (7) where $c_{1}$, $c_{2}$, and $c_{3}$ are weight coefficients, set as $c_{1}=0.4$, $c_{2}=0.4$, and $c_{3}=0.2$ in our study. $\eta^{c}$ denotes a set of critical $\eta(t)$ points, consisting of $\eta_{i,obs}^{max}$, $\eta_{i,obs}^{min}$, and the maximum of $|\eta_{i,sim}-\eta_{i,obs}|$, where $\eta_{obs}^{max}$ and $\eta_{obs}^{min}$ are the maximum and minimum points of $\eta_{i,obs}$, respectively. Fig. 3 gives the examples (HDV-2 and Car Model X) illustrating the convergence of $\gamma$ and $\rho$ in ABC-ASMC. The results of other Car Models and HDVs are shown in Appendix A. It shows that $\gamma$ for all cases decreases quickly within the first 80 iterations, demonstrating relatively quick convergence. Further, $\rho$ also decreases significantly, which means that ABC-ASMC becomes more selective in estimating the posterior joint distribution. After 100 iterations, $\gamma$ and $\rho$ both converge, suggesting that further sampling efforts cannot enhance GOF anymore, and the algorithm reaches the converged tolerance, $\gamma$. (a) (b) Figure 3: Convergence of $\gamma$ and $\rho$ (a): Car Model-X (b): HDV-2 Fig. 4 gives the overall calibration and validation results of EAB and AB model quantified by applying the Wasserstein (WS) metric [25] both to training set $M_{1}$ and testing set $M_{2}$. The WS metric gives the optimal simulated-observed reproduction distance by probabilistically matching a particle $k$ to a car following pair $m$. The objective function (Eq. (8)) aims to minimize the distribution-wise errors by determining the optimal weight distribution of particles that best fit CF pairs. 
$\displaystyle WS(\zeta)=\frac{1}{|M|}\min_{\kappa}\frac{1}{K}\sum_{m,k}^{M,K}\kappa_{m,k}\zeta,\quad\zeta\in\{\zeta_{x},\zeta_{\eta},\zeta_{\eta^{c}}\},\;\kappa_{m,k}\in\{0,1\}$ (8)

$\displaystyle s.t.\;\sum_{k=1}^{K}\kappa_{m,k}=1$ (9)

$\displaystyle\zeta_{x}=\sqrt{\frac{1}{T_{m}}\sum_{t=1}^{T_{m}}(x^{m}_{i,obs}(t)-x^{m,k}_{i,sim}(t))^{2}},\quad\zeta_{\eta}=\sqrt{\frac{1}{T_{m}}\sum_{t=1}^{T_{m}}(\eta^{m}_{i,obs}(t)-\eta^{m,k}_{i,sim}(t))^{2}},\quad\zeta_{\eta^{c}}=\sqrt{\frac{1}{T_{m}}\sum_{t=1}^{T_{m}}(\eta^{c,m}_{i,obs}(t)-\eta^{c,m,k}_{i,sim}(t))^{2}}$ (10)

where $M$ is the trajectory set, $m$ is a specific CF pair in $M$, $T_{m}$ is the total CF time of $m$, and $|M|$ is the cardinality of $M$. $\zeta_{x}$, $\zeta_{\eta}$, and $\zeta_{\eta^{c}}$ in Eq. (10) denote the root mean square errors in the position $x$, the reaction pattern $\eta$, and the critical points of $\eta$, respectively. $\kappa_{m,k}$ is the weight assigned to particle $k$ fitting the CF pair $m$. Eq. (9) requires that the sum of the weights $\kappa$ equals 1.

The calibration results with the training data (Fig. 4(a)) show that $WS(\zeta_{x})$ for all car models calibrated by the EAB and AB models is smaller than $1$ m, $WS(\zeta_{\eta})$ is below $0.03$, and $WS(\zeta_{\eta^{c}})$ is smaller than 0.003. The WS values for the ACC vehicles are generally smaller with the EAB model than with the AB model, justifying the need for the EAB model framework. Further, the validation errors using the testing data (Fig. 4(b)) are reasonably close to the calibration errors, demonstrating the good performance of the EAB model. These findings suggest that the EAB model, compared with the AB model, is overall more flexible and accurate in describing ACC CF behavior.

Figure 4: Stochastic calibration performance validation using training and testing trajectories. (a) Training results. (b) Testing results.

### 3.3 Reaction Pattern Analysis

We further categorize the reaction patterns, as shown in Table 4. Note that the categorization is determined by a predefined threshold, $\Delta\eta_{T}=0.09$ for AVs and $\Delta\eta_{T}=0.18$ for HDVs, based on the difference between the 75th-percentile and 25th-percentile $\eta_{0}$ values, to prevent the categorization from being overly sensitive to random fluctuations. The higher threshold for HDVs suggests a higher level of stochasticity compared to AVs. We caution that the higher stochasticity may be attributable more to the nature of the datasets than to the behavior itself. Notably, the HDV data are from naturalistic driving, as opposed to the controlled experiments used for the ACC vehicles. The results in Table 4 show that both single and composite reaction patterns are observed across the board, particularly for Car Model Z. However, when compared to HDVs, the results for ACCs show significant divergence. At low speeds, Car Model Z exhibits fewer NE patterns but more concave patterns than HDVs. At median and high speeds, ACC vehicles display higher proportions of NE, convex, and non-increasing patterns than HDVs. Early responses are evident in the majority of non-increasing patterns among ACCs. Notably, the non-decreasing pattern takes up a significant portion, ranging from 20% to 36%. This trend, though of lower incidence than for HDVs, is not desirable because it indicates a lower traffic throughput. However, this could be attributed to data limitations, where the recovery from a disturbance may not have been fully captured.
A further experimental study is needed to confirm this finding. Variations are also significant across ACC vehicles. For example, Car Model X exhibits the highest occurrence of NE and non-decreasing patterns, while Car Model Z displays a significant portion of concave patterns. Variations are notable even within the same ACC developer. Notably, the distribution of behavioral patterns between the Normal and Sports/Power engines is markedly different at low speed (for Model Z), though the differences are more nuanced at median and high speeds (Models Y and Z). With the same engine at different speeds, there is a higher frequency of concave patterns at low speeds, accompanied by a decrease in the occurrence of convex patterns. We also provide a Jensen-Shannon distance analysis in Appendix B to corroborate the categorization remarks. Based on the findings, we draw the following conclusions: (1) The behavioral response to a disturbance, which directly influences the propagation of disturbances, varies between HDVs and ACC vehicles, and across ACC vehicles. (2) Engine modes and speed have a substantial impact on behavioral patterns. These findings underscore the possibility of highly heterogeneous behavior in mixed traffic. The traffic-level implications of these findings are investigated through the analysis of traffic hysteresis in the following section.

Table 4: Categorization: proportion of different $\eta$ patterns under $\Delta\eta_{T}$ ($\Delta\eta_{T}=0.18$ for HDVs and $0.09$ for ACC vehicles)

Category | Response | HDV (Low) | Z Normal (Low) | Z Power (Low) | HDV (Med/High) | X (Med/High) | Y Normal (Med/High) | Y Sports (Med/High) | Z Normal (Med/High) | Z Power (Med/High)
---|---|---|---|---|---|---|---|---|---|---
NE | / | 0.01 | 0.03 | 0 | 0.01 | 0.27 | 0.17 | 0.16 | 0.05 | 0.03
Concave | early | 0.13 | 0.03 | 0.04 | 0.01 | 0.01 | 0.03 | 0.04 | 0.04 | 0.02
Concave | others | 0.14 | 0.44 | 0.60 | 0.24 | 0 | 0.10 | 0.08 | 0.21 | 0.31
Convex | early | 0.02 | 0.14 | 0.03 | 0.03 | 0.01 | 0.08 | 0.15 | 0.12 | 0.14
Convex | others | 0.01 | 0 | 0.01 | 0.04 | 0.05 | 0.05 | 0.06 | 0.12 | 0.03
Concave-convex | / | 0.05 | 0.11 | 0.08 | 0.06 | 0.00 | 0.03 | 0.02 | 0.07 | 0.10
Convex-concave | / | 0.02 | 0.03 | 0.15 | 0.02 | 0.01 | 0 | 0.03 | 0.08 | 0.06
Non-decreasing | early | 0.10 | 0.09 | 0 | 0.10 | 0.04 | 0.03 | 0.06 | 0.01 | 0.06
Non-decreasing | others | 0.51 | 0.13 | 0 | 0.48 | 0.36 | 0.31 | 0.22 | 0.22 | 0.20
Non-increasing | early | 0.01 | 0 | 0.08 | 0.01 | 0.24 | 0.15 | 0.15 | 0.05 | 0.05
Non-increasing | others | 0 | 0 | 0.01 | 0 | 0.01 | 0.05 | 0.03 | 0.03 | 0

## 4 Traffic Hysteresis Evaluation

Traffic hysteresis, an elliptical movement in the flow-density evolution under a disturbance, is an important traffic phenomenon linked to the reduction in traffic throughput and to traffic flow instability. Studies suggest that traffic hysteresis is directly associated with dynamic CF behavior, characterized by asymmetric deceleration and acceleration during a disturbance [1, 29, 5]. Thus, a good CF model should be able to reproduce the traffic hysteresis observed empirically from a macroscopic perspective. In this paper, we focus our investigation on how the different CF behaviors of ACC vehicles manifest in traffic hysteresis.

### 4.1 Traffic Hysteresis Measurement

We establish a systematic method to quantify traffic hysteresis. Such a method is lacking in the current literature because traffic hysteresis is a directed two-dimensional movement. We first measure the flow and density as vehicles go through a disturbance, based on Edie's generalized definitions [9] and the measurement method of [17, 31].
Specifically, we define parallelograms rotated by the wave speed, $w$, along the trajectories; see Fig. 5(a). Then, in each parallelogram, the flow and density are measured according to the generalized definitions:

$\displaystyle k(g)=\sum_{i}^{I}\frac{\Delta t_{i}}{|Z_{g}|},\quad q(g)=\sum_{i}^{I}\frac{\Delta x_{i}}{|Z_{g}|}$ (11)

where $k$ is the density; $q$ is the flow; $\Delta t_{i}$ and $\Delta x_{i}$ are the travel time and distance of the $i^{th}$ vehicle in Zone $g$, respectively; $|Z_{g}|$ is the area of Zone $g$; and $I$ is the total number of vehicles. Note that defining the boundary of Zone $g$ is inherently challenging, particularly when $I$ is small (i.e., 2 in the CF pair); a larger boundary leads to an underestimation of flow and density, and vice versa. Here, we first employ the trajectories of both the leader and the follower to establish the boundary, forming Zone $g$ (yellow shade); we then multiply the area of this region by an additional adjustment parameter $\frac{I}{I-1}$ to obtain an appropriate estimate of $|Z_{g}|$. Finally, to capture the hysteresis evolution over the zones, a moving average with a window of 3 zones is applied.

Figure 5: Schematic Illustration of Hysteresis Measurement (a) Density-Flow Measurement (b) Hysteresis Orientation Measurement (c) Time-varying Hysteresis Magnitude (d) Time-varying Hysteresis Magnitude Relative to Initial Equilibrium (e) Time-varying Hysteresis Magnitude Relative to New Equilibrium (f) Cross-Product (Right-hand Rule)

An example of CCW flow-density evolution is shown in Fig. 5(b). To systematically quantify the movement, we utilize four metrics: (1) the center point of the flow-density relationship, $O(\frac{\sum_{g=1}^{G}k(g)}{G},\frac{\sum_{g=1}^{G}q(g)}{G})$, where $G$ is the total number of zones; (2) the standard deviation (SD) of density and flow; (3) the time-varying magnitude of hysteresis (Fig. 5(c)); and (4) the hysteresis pattern. For (3)-(4), we utilize the right-hand rule of the cross-product to establish the hysteresis orientation (i.e., clockwise or counter-clockwise; see Fig. 5(f)) and to quantify the hysteresis magnitude. Note that the method could be extended to a vehicle platoon with more than 2 vehicles. Specifically, with $H_{g}$ defined as the vector connecting the loop center $(k_{O},q_{O})$ and the point $(k_{g},q_{g})$, the cross-product $H_{g}\times H_{g+1}$ represents the dynamic variation from Zone $g$ to Zone $g+1$; see Fig. 5(b)-(c). A negative cross product indicates the CW orientation, while a positive one indicates the CCW orientation; see Fig. 5(d). The absolute value of the cross product is graphically the area of the parallelogram formed by the sides $H_{g}$ and $H_{g+1}$, and it reflects the hysteresis magnitude associated with the transition from Zone $g$ to Zone $g+1$. Further, $H_{IE}$, the vector connecting $(k(1),q(1))$ and $(-\frac{q(1)}{w}+k(1),0)$, represents the initial equilibrium, and $H_{GE}$, connecting $(k(G),q(G))$ and $(-\frac{q(G)}{w}+k(G),0)$, represents the new equilibrium. We then apply the cross products $H_{IE}\times H_{Ig}$ (Fig. 5(d)) and $H_{GE}\times H_{Gg}$ (Fig. 5(e)) to measure the amplification or decay of the disturbance in Zone $g$ relative to the initial and the new equilibrium states in Zone 1 and Zone $G$. We further develop a systematic method to determine the hysteresis patterns. We first identify the maximum absolute average magnitude, $|{H_{max}}_{0}|$ (Fig. 5(c)).
The hysteresis orientation is tracked by the sign of ${H_{max}}_{0}$ (i.e., $-$ for the CW type, $+$ for the CCW type). Then, we determine whether the new equilibrium is higher or lower than the initial equilibrium by the sign of $H_{IE}\times H_{Ig}$ ($-$, lower; $+$, higher). $H_{min_{1}}$ (Fig. 5(d)) and $H_{max_{2}}$ (Fig. 5(e)) are identified to monitor the disturbance propagation (i.e., $-$ for amplification, $+$ for decay). By tracking the signs of $H_{IE}\times H_{Ig}$, $H_{max}$, $H_{min_{1}}$ and $H_{max_{2}}$, we can distinguish the seven hysteresis patterns detailed in Table 1. The CCW example, characterized by ${H_{max}}_{0}>0$, $H_{min_{1}}<0$ and $H_{max_{2}}>0$, implies that the disturbance initially amplifies and then partially decays, ultimately resulting in an increased throughput.

### 4.2 Hysteresis Stochasticity Analysis

We first investigate whether the stochastically calibrated EAB model can reproduce the observed traffic hysteresis. Specifically, we select three representative particles from the posterior distributions for comparison: (1) the deterministic optimal particle (baseline), (2) the best-fit particle, and (3) the 5th percentile best-fit particle. (1) represents how typical calibration would be done – by finding the set of parameter values that minimizes the overall fitting error over all training trajectories. With the stochastic approach, (2) is obtained for each testing trajectory from the estimated joint distribution, and (3) gives a distribution-wise sense of the validation performance. These particles are used to reproduce hysteresis and compare it with the empirical ground truth. The metrics for evaluation are $\overline{d_{O}}$ (Eq. (12)), the average Euclidean distance between the centers of the observed and simulated hysteresis loops; $\overline{d_{sd}}$ (Eq. (13)), the average Euclidean distance between the SDs of the simulated and observed flow and density; and $\overline{NRMSE_{H}}$ (Eq. (14)), the average NRMSE of the simulated and observed hysteresis magnitudes:

$\displaystyle\overline{d_{O}}=\frac{1}{|M_{2}|}\sum_{i=1}^{|M_{2}|}\sqrt{(\frac{\sum_{g=1}^{G}k_{i,obs}(g)}{G}-\frac{\sum_{g=1}^{G}k_{i,sim}(g)}{G})^{2}+(\frac{\sum_{g=1}^{G}q_{i,obs}(g)}{G}-\frac{\sum_{g=1}^{G}q_{i,sim}(g)}{G})^{2}}$ (12) $\displaystyle\overline{d_{sd}}=\frac{1}{|M_{2}|}\sum_{i=1}^{|M_{2}|}\sqrt{(SD(k_{i,obs})-SD(k_{i,sim}))^{2}+(SD(q_{i,obs})-SD(q_{i,sim}))^{2}}$ (13) $\displaystyle\overline{NRMSE_{H}}=\frac{1}{|M_{2}|}\sum_{i=1}^{|M_{2}|}\frac{\sqrt{\frac{1}{G-1}\sum_{g=1}^{G-1}(H_{g}^{i,obs}\times H_{g+1}^{i,obs}-H_{g}^{i,sim}\times H_{g+1}^{i,sim})^{2}}}{\sqrt{\frac{1}{G-1}\sum_{g=1}^{G-1}(H_{g}^{i,obs}\times H_{g+1}^{i,obs})^{2}}}$ (14)

where $|M_{2}|$ is the cardinality of the testing set $M_{2}$. The results in Table 5 show that the stochastic EAB model performs better than its deterministic counterpart at reproducing traffic hysteresis, judging by the smaller values of the three metrics across the board, with few exceptions.
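For concreteness, the following is a minimal sketch of how the evaluation metrics in Eqs. (12)-(14) can be computed from per-zone density and flow measurements; the function and variable names are illustrative and not part of the original implementation.

```python
import numpy as np

def loop_center(k, q):
    """Center O of a flow-density loop: the mean per-zone density and flow."""
    return np.mean(k), np.mean(q)

def cross_magnitudes(k, q):
    """Signed cross products H_g x H_{g+1} of the vectors from the loop
    center to consecutive (k_g, q_g) points (right-hand rule)."""
    kO, qO = loop_center(k, q)
    Hk, Hq = k - kO, q - qO                       # components of H_g
    return Hk[:-1] * Hq[1:] - Hq[:-1] * Hk[1:]    # z-component, length G-1

def hysteresis_metrics(obs, sim):
    """Average metrics of Eqs. (12)-(14) over a testing set M_2.

    obs, sim : lists of (k, q) pairs, one per testing CF trajectory, where
    k and q are arrays of per-zone density and flow."""
    d_O, d_sd, nrmse = [], [], []
    for (k_o, q_o), (k_s, q_s) in zip(obs, sim):
        # Eq. (12): Euclidean distance between the two loop centers
        cO, cS = loop_center(k_o, q_o), loop_center(k_s, q_s)
        d_O.append(np.hypot(cO[0] - cS[0], cO[1] - cS[1]))
        # Eq. (13): distance between the SDs of density and flow
        d_sd.append(np.hypot(np.std(k_o) - np.std(k_s),
                             np.std(q_o) - np.std(q_s)))
        # Eq. (14): NRMSE of the cross-product (hysteresis magnitude) series
        H_o = cross_magnitudes(k_o, q_o)
        H_s = cross_magnitudes(k_s, q_s)
        nrmse.append(np.sqrt(np.mean((H_o - H_s) ** 2))
                     / np.sqrt(np.mean(H_o ** 2)))
    return np.mean(d_O), np.mean(d_sd), np.mean(nrmse)
```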
Table 5: Summary Statistics: Hysteresis Stochasticity Analysis based on Testing Trajectories Speed | Car Model | Particle | $\overline{d_{O}}$ | $\overline{d_{sd}}$ | $\overline{NRMSE_{H}}$ ---|---|---|---|---|--- Low | HDV-1 | Best Fitted | 18.13 | 28.59 | 1.17 5 Percentile | 25.91 | 35.43 | 1.10 Deterministic Optimal | 162.83 | 49.03 | 1.06 Z-1 (Normal) | Best Fitted | 10.61 | 12.63 | 0.86 5 Percentile | 23.29 | 19.52 | 0.84 Deterministic Optimal | 44.69 | 22.32 | 0.69 Z-3 (Power) | Best Fitted | 14.21 | 6.70 | 0.47 5 Percentile | 26.70 | 10.49 | 0.78 Deterministic Optimal | 26.20 | 11.17 | 0.90 Median and High | HDV-2 | Best Fitted | 14.07 | 28.23 | 0.92 5 Percentile | 55.20 | 72.83 | 1.71 Deterministic Optimal | 165.63 | 74.52 | 1.68 X | Best Fitted | 14.20 | 6.24 | 1.01 5 Percentile | 17.89 | 12.05 | 1.01 Deterministic Optimal | 68.19 | 12.26 | 0.99 Y-1 (Normal) | Best Fitted | 12.51 | 15.05 | 0.70 5 Percentile | 14.37 | 26.20 | 1.27 Deterministic Optimal | 54.17 | 26.09 | 0.86 Y-2 (Sports) | Best Fitted | 10.18 | 5.16 | 0.85 5 Percentile | 13.48 | 12.90 | 0.97 Deterministic Optimal | 26.17 | 8.93 | 0.93 Z-2 (Normal) | Best Fitted | 12.30 | 6.82 | 1.58 5 Percentile | 22.32 | 20.50 | 0.99 Deterministic Optimal | 60.87 | 16.43 | 1.84 | Z-4 (Power) | Best Fitted | 23.97 | 12.25 | 1.37 | 5 Percentile | 28.04 | 21.38 | 1.72 | Deterministic Optimal | 51.74 | 17.11 | 1.23 | | | | | We then provide two typical examples: CW in Fig. 6(a)-(d) and CCW in Fig. 6(e)-(h). From Fig. 6(a), we can see that all three particles successfully reproduce the CW orientation of the empirical hysteresis (black line). However, the best-fit particle (red) and the 5th percentile best-fit particle (green) produce hysteresis loops that are much closer to the empirical one. The hysteresis loop based on the deterministic particle (blue) is much narrower, signifying underestimation of density variation. Fig. 6(e) is a more complicated CCW example. From Fig. 6(e), the deterministic optimal particle shows very poor performance at reproducing the dynamic process of the hysteresis formation. This is corroborated in Fig. 6(b)-(d) and (f)-(h), where the stochastic estimations perform significantly better than the deterministic one in terms of center (Fig. 6(b)(f)), SD of density and flow (Fig. 6(c)(g)), and cross product of the movement (Fig. 6(d)(h)). All these findings strongly indicate that the stochastically calibrated EAB model can replicate traffic hysteresis by capturing the stochastic nature of CF dynamics and linking to traffic-level dynamics. (a) (b) (c) (d) (e) (f) (g) (h) Figure 6: Hysteresis Examples CW-loop: (a) Hysteresis Loop (b) Center of Hysteresis (c) SD of Flow and Density (d) Cross-product of the Movement; CCW-loop: (e) Hysteresis Loop (f) Center of Hysteresis (g) SD of Flow and Density (h) Cross-product of the Movement We further examine how the hysteresis patterns differ across HDVs and ACCs in Table 6. Note that hysteresis $H_{g}\times H_{g+1}$ is considered significant if its magnitude is greater than a predefined threshold, $H_{T}$. ${H_{T}}_{0}$ and ${H_{T}}_{1}$ are used to measure the significance of the disturbance amplification and decay relative to the initial and new equilibrium states (i.e., $H_{IE}\times H_{Ig}$ and $H_{GE}\times H_{Gg}$). $H_{T}$, ${H_{T}}_{0}$, ${H_{T}}_{1}$ are defined based on the difference between 75th and 25th percentiles of $|H_{1}\times H_{2}|$ and $|H_{IE}\times H_{I2}|$ and $|H_{GE}\times H_{G1}|$, respectively. 
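A compact illustration of how the sign and threshold logic described above can be turned into one of the seven pattern labels is sketched below; the exact sign-to-label mapping is our reading of the verbal description in Section 4.1 and should be taken as an assumption, not as the original categorization code.

```python
import numpy as np

def categorize_hysteresis(cross_prods, H_IE_cross, H_T, H_T0):
    """Label one flow-density loop with one of the pattern classes of Table 6.

    cross_prods : array of per-zone cross products H_g x H_{g+1}
    H_IE_cross  : array of cross products H_IE x H_Ig (relative to the
                  initial equilibrium)
    H_T, H_T0   : significance thresholds (e.g. values as listed in Table 6)
    """
    # dominant orientation: sign of the largest-magnitude cross product
    H_max0 = cross_prods[np.argmax(np.abs(cross_prods))]
    if np.abs(H_max0) < H_T:
        return "NSL"                        # no significant loop
    orientation = "CW" if H_max0 < 0 else "CCW"

    # is the new equilibrium significantly lower (-) or higher (+) than
    # the initial one?  (assumed to be read off the final zone)
    shift = H_IE_cross[-1]
    if shift < -H_T0:
        return orientation + "-"
    if shift > H_T0:
        return orientation + "+"
    return orientation                      # throughput roughly recovered
```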
Similar to the results for the behavioral pattern, significant differences between HDVs and ACC vehicles are observed, and to a lesser extent, across ACC vehicles and within the same ACC developer. For HDVs vs. ACCs, differences are particularly notable at medium to high speeds. For instance, HDVs predominantly show $CW^{-}$ patterns ($>0.42$) and $CCW^{-}$ ($>0.18$). ACC vehicles display a significantly lower proportion for $CW^{-}$ and $CCW^{-}$ patterns. Instead, they display higher proportions for $CW$ patterns. Across ACC vehicles, variations are also notable. For example, Model X exhibits the highest occurrence of NSL pattern and lowest occurrence of $CW$ patterns. Within the same ACC developers with different engine modes, Model Y and Model Z at median to high speeds exhibit largely similar patterns. However, some differences with more $CW^{-}$ and fewer $CW$ patterns are notable at low speeds with Model Z. Table 6: Categorization: Proportion of Empirical Hysteresis Patterns under $H_{T}$ Speed | Low | Median and High ---|---|--- $H_{T}$ | 400 | 15 | 400 | 15 ${H_{T}}_{0}$ | 21700 | 4770 | 21700 | 4770 ${H_{T}}_{1}$ | 36700 | 8460 | 36700 | 8460 Car Model | HDV | Z | HDV | X | Y | Z Engine | Normal | Power | Normal | Sports | Normal | Power NSL | 0.19 | 0 | 0 | 0.26 | 0.27 | 0.06 | 0.04 | 0.03 | 0 CW+ | 0.13 | 0.19 | 0.13 | 0.08 | 0.10 | 0.17 | 0.19 | 0.13 | 0.09 CW- | 0.44 | 0.75 | 0.75 | 0.42 | 0.46 | 0.40 | 0.52 | 0.47 | 0.41 CW | 0 | 0 | 0.13 | 0.04 | 0.08 | 0.33 | 0.21 | 0.19 | 0.34 CCW+ | 0 | 0 | 0 | 0.02 | 0.02 | 0 | 0.04 | 0.03 | 0.09 CCW- | 0.25 | 0.06 | 0 | 0.18 | 0.06 | 0.04 | 0 | 0.16 | 0.03 CCW | 0 | 0 | 0 | 0.02 | 0 | 0 | 0 | 0 | 0.03 | | | | | | | | | The hysteresis analysis results are consistent with those of the reaction pattern analysis in section 3. From the findings, several conclusions can be drawn: (1) Variations in CF dynamics directly influence the variations in traffic dynamics. (2) ACCs impact traffic differently than HDVs stemming from significantly different CF dynamics. (3) ACC vehicles exhibit notable heterogeneity among themselves. (4) Even the same ACC vehicle model displays differences in CF and traffic dynamics by engine mode, particularly at low speeds. (5) The speed contributes to heterogeneity in CF and traffic dynamics. ## 5 Mixed Platoon Behavior In this section, we investigate how the heterogeneity in CF dynamics and hysteresis patterns scale up to the platoon behavior to obtain more direct insight into the mixed traffic dynamics. We investigate (1) the disturbance evolution through a platoon and (2) the accompanying hysteresis characteristics with respect to the ACC penetration rate. We simulate a 20-vehicle platoon according to the calibrated stochastic EAB framework with varying ACC market penetration rates ($0-100\%$). The first vehicle trajectory is taken from the real data (i.e, Car Model Z-normal engine for low speed and Car Model X for high speed). Under the same leading trajectory, 19 followers are simulated to form a mixed platoon using the posterior joint distributions for HDVs and ACC vehicles given the penetration rate. Note that we control for the speed range and engine mode as per our findings in Section 3 that they can induce different CF dynamics. Fig. 
7 and 8 provide two typical examples (Car Model Z - Normal at low speed and Car Model Y-Normal at median and high speed) to investigate the changes in the platoon-level disturbance propagation (the left column, (a1)-(e1)), hysteresis orientation (middle column, (a2)-(e2)), and hysteresis magnitude (right column, (a3)-(e3)) with the ACC penetration rate. We obtained the following observations. In low speed (Fig. 7), the new equilibrium state reaches closer to the original state (i.e., lower incidence of $CW^{-}$) with increasing ACC penetration, albeit at the increasing hysteresis magnitude. This trend can be explained by the higher frequency of non-decreasing patterns in HDVs, which leads to a reduction in traffic throughput. In contrast, ACCs exhibit a higher frequency of concave pattern, combined with a greater deviation of $\eta$. This leads to a substantial deviation from the initial equilibrium but a closer return to the initial equilibrium after a disturbance. This pattern signifies a more pronounced disturbance propagation, where a larger hysteresis magnitude indicates a notable reduction in the average speed for ACCs during the disturbance. In median and high speed, a significant reduction of disturbance is notable with higher ACC penetration; see the left column of Fig. 8. Further, the hysteresis loop becomes much smaller and complete with increasing penetration of ACC vehicles (the middle and right columns of the figure), indicating lower throughput reduction and disturbance magnitude. This is attributed to the higher incidence of convex and non-increasing patterns in ACC vehicles. Figure 7: Low Speed: Disturbance Propagation and Hysteresis Variation under different ACC penetration rates The resulting hysteresis characteristics, specifically the expectation of centers, SDs, hysteresis magnitude and hysteresis loop over 200 simulations, with respect to the ACC penetration rate are presented in Fig. 9. Again, Model Z-Normal Engine and Model Y- Normal Engine are presented as the representative examples for the low speed range and the median and high speed range, respectively, as the trends are qualitatively consistent across different ACCs within the same speed range. The hysteresis center shows a consistently increasing trend for both speeds. But the SD of flows when at the low speed shows an increasing trend in Fig. 9(c), resulting in a larger hysteresis magnitude in Fig. 9(e). Figure 8: Median and high Speed: Disturbance Propagation and Hysteresis Variation under different ACC penetration rates (a) (b) (c) (d) (e) (f) Figure 9: Hysteresis Expectation-ACC penetration rates over 200 simulations Low speed: Car Model Z - Normal Engine: (a) Centers (c) SD (e) Magnitude Median and high speed: Car Model Y - Normal Engine: (b) Centers (d) SD (f) Magnitude From the findings, several conclusions can be drawn: (1) the heterogeneity and stochasticity observed from CF dynamics and traffic dynamics will ultimately result in a reduced throughput. (2) The speed is a significant contributor to the ACC heterogeneity in the platoon-level performance. (3) With increasing penetration of ACC vehicles, the mixed platoon exhibits more complete hysteresis loops compared to HDVs, indicating lower throughput reduction. ## 6 Conclusion This paper developed a stochastic unifying behavioral CF model, EAB model, to approximate the CF behaviors of commercial ACC vehicles. The proposed approach is developed from a more interpretable, CF behavioral perspective, rather than a control theory-based approach. 
Specifically, the main advantages are that it provides (1) a common platform to compare different ACC vehicles and discern behavioral differences from HDVs and among ACC developers, engine modes, and speeds, and (2) a direct link between the CF behavior of ACC vehicles and traffic-level features such as traffic hysteresis. Further, the stochastic treatment of the EAB model enables characterization of uncertainties originating from vehicle dynamics, model mismatch, potential switch of control logic, etc. Specifically, we applied ABC-ASMC that enables efficient estimation of the joint distribution of the EAB model parameters without having to specify a likelihood function. The calibration results demonstrated that our algorithm is quickly-converging and robust, and can reproduce vehicle trajectories and reaction patterns to disturbances well. The results also suggested that the stochastic treatment is more descriptive for behavior patterns than the deterministic approach. Further, the EAB model can capture composite behavior patterns (i.e., convex-concave and convex-concave), which are not captured by the AB model. We found that there are notable distribution-wise differences in the behavioral patterns between ACC vehicles and HDVs, as well as across different ACC developers, engines, and speed ranges, albeit to a lesser degree. Connecting directly to the traffic-level dynamics, we investigated how heterogeneous CF dynamics manifests itself in throughput reduction and traffic hysteresis, important traffic phenomena linked to traffic throughput and stability. To this end, we established a systematic framework to determine the hysteresis orientation and quantify its magnitude. We found that our stochastic EAB approach is capable of reproducing empirical traffic hysteresis. The stochastic treatment overwhelmingly outperforms the deterministic one, further justifying our approach. As expected, significant heterogeneity in CF dynamics leads to heterogeneity in traffic hysteresis in terms of orientation and magnitude. In comparison to HDVs, the hysteresis loop is more complete with ACC vehicles, implying lower throughput reduction. This was corroborated in mixed traffic simulation experiments, where throughput reduction decreased with higher penetration of ACC vehicles. Likewise, the hysteresis magnitude decreased with higher penetration of ACC vehicles in the median-high speed range; however, the trend was opposite in low speed. Some future studies are desired. The current work is limited to a specific traffic scenario (e.g., one stop-and-go disturbance) and should be expanded to a wide range of scenarios (e.g., multiple disturbances) consistent with real- world traffic systems. Further, the findings related to the variations in CF dynamics and traffic hysteresis are specific to the data used in this study and can thus change in the future as the technology develops. The proposed analysis framework can lead to new insights as more data become available in the future. ## Acknowledgements This research was sponsored by the US National Science Foundation (Award CMMI 1932932 and 1932921). ## References * Ahn et al. [2013] Ahn, S., Vadlamani, S., Laval, J., 2013. A method to account for non-steady state conditions in measuring traffic hysteresis. Transportation Research Part C: Emerging Technologies 34, 138–147. * Beaumont et al. [2009] Beaumont, M.A., Cornuet, J.M., Marin, J.M., Robert, C.P., 2009\. Adaptive approximate bayesian computation. Biometrika 96, 983–990. 
* Besselink and Johansson [2017] Besselink, B., Johansson, K.H., 2017\. String stability and a delay-based spacing policy for vehicle platoons subject to disturbances. IEEE Transactions on Automatic Control 62, 4376–4391. * Chen et al. [2012a] Chen, D., Laval, J., Zheng, Z., Ahn, S., 2012a. A behavioral car-following model that captures traffic oscillations. Transportation research part B: methodological 46, 744–761. * Chen et al. [2012b] Chen, D., Laval, J.A., Ahn, S., Zheng, Z., 2012b. Microscopic traffic hysteresis in traffic oscillations: A behavioral perspective. Transportation Research Part B: Methodological 46, 1440–1453. * Chiabaut et al. [2010] Chiabaut, N., Leclercq, L., Buisson, C., 2010. From heterogeneous drivers to macroscopic patterns in congestion. Transportation Research Part B: Methodological 44, 299–308. * Csilléry et al. [2010] Csilléry, K., Blum, M.G., Gaggiotti, O.E., François, O., 2010\. Approximate bayesian computation (abc) in practice. Trends in ecology & evolution 25, 410–418. * Del Moral et al. [2012] Del Moral, P., Doucet, A., Jasra, A., 2012. An adaptive sequential monte carlo method for approximate bayesian computation. Statistics and computing 22, 1009–1020. * Edie et al. [1963] Edie, L.C., et al., 1963. Discussion of traffic stream measurements and definitions. Port of New York Authority New York. * Fuglede and Topsoe [2004] Fuglede, B., Topsoe, F., 2004\. Jensen-shannon divergence and hilbert space embedding, in: International symposium onInformation theory, 2004. ISIT 2004. Proceedings., IEEE. p. 31. * Gunter et al. [2020] Gunter, G., Gloudemans, D., Stern, R.E., McQuade, S., Bhadani, R., Bunting, M., Delle Monache, M.L., Lysecky, R., Seibold, B., Sprinkle, J., et al., 2020\. Are commercially implemented adaptive cruise control systems string stable? IEEE Transactions on Intelligent Transportation Systems 22, 6992–7003. * Gunter et al. [2019] Gunter, G., Janssen, C., Barbour, W., Stern, R.E., Work, D.B., 2019. Model-based string stability of adaptive cruise control systems using field data. IEEE Transactions on Intelligent Vehicles 5, 90–99. * He et al. [2020] He, Y., Makridis, M., Fontaras, G., Mattas, K., Xu, H., Ciuffo, B., 2020. The energy impact of adaptive cruise control in real-world highway multiple-car-following scenarios. European Transport Research Review 12, 1–11. * Jiang et al. [2022] Jiang, L., Xie, Y., Evans, N.G., Wen, X., Li, T., Chen, D., 2022. Reinforcement learning based cooperative longitudinal control for reducing traffic oscillations and improving platoon stability. Transportation Research Part C: Emerging Technologies 141, 103744. * Kontar et al. [2021] Kontar, W., Li, T., Srivastava, A., Zhou, Y., Chen, D., Ahn, S., 2021. On multi-class automated vehicles: Car-following behavior and its implications for traffic dynamics. Transportation research part C: emerging technologies 128, 103166. * Krajewski et al. [2018] Krajewski, R., Bock, J., Kloeker, L., Eckstein, L., 2018\. The highd dataset: A drone dataset of naturalistic vehicle trajectories on german highways for validation of highly automated driving systems, in: 2018 21st International Conference on Intelligent Transportation Systems (ITSC), IEEE. pp. 2118–2125. * Laval [2011] Laval, J.A., 2011. Hysteresis in traffic flow revisited: An improved measurement method. Transportation Research Part B: Methodological 45, 385–391. * Laval and Leclercq [2010] Laval, J.A., Leclercq, L., 2010\. 
A mechanism to describe the formation and propagation of stop-and-go waves in congested freeway traffic. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 368, 4519–4541. * Li et al. [2021] Li, T., Chen, D., Zhou, H., Laval, J., Xie, Y., 2021\. Car-following behavior characteristics of adaptive cruise control vehicles based on empirical experiments. Transportation research part B: methodological 147, 67–91. * Makridis et al. [2021] Makridis, M., Mattas, K., Anesiadou, A., Ciuffo, B., 2021\. Openacc. an open database of car-following experiments to study the properties of commercial acc systems. Transportation research part C: emerging technologies 125, 103047. * Makridis et al. [2020] Makridis, M., Mattas, K., Ciuffo, B., Re, F., Kriston, A., Minarini, F., Rognelund, G., 2020. Empirical study on the properties of adaptive cruise control systems and their impact on traffic flow and string stability. Transportation research record 2674, 471–484. * Marin et al. [2012] Marin, J.M., Pudlo, P., Robert, C.P., Ryder, R.J., 2012\. Approximate bayesian computational methods. Statistics and Computing 22, 1167–1180. * Milanés et al. [2013] Milanés, V., Shladover, S.E., Spring, J., Nowakowski, C., Kawazoe, H., Nakamura, M., 2013\. Cooperative adaptive cruise control in real traffic situations. IEEE Transactions on intelligent transportation systems 15, 296–305. * Newell [2002] Newell, G.F., 2002. A simplified car-following theory: a lower order model. Transportation Research Part B: Methodological 36, 195–205. * Panaretos and Zemel [2019] Panaretos, V.M., Zemel, Y., 2019\. Statistical aspects of wasserstein distances. Annual review of statistics and its application 6, 405–431. * Ploeg et al. [2011] Ploeg, J., Scheepers, B.T., Van Nunen, E., Van de Wouw, N., Nijmeijer, H., 2011. Design and experimental evaluation of cooperative adaptive cruise control, in: 2011 14th International IEEE Conference on Intelligent Transportation Systems (ITSC), IEEE. pp. 260–265. * Ploeg et al. [2013] Ploeg, J., Shukla, D.P., Van De Wouw, N., Nijmeijer, H., 2013\. Controller synthesis for string stability of vehicle platoons. IEEE Transactions on Intelligent Transportation Systems 15, 854–865. * Qu et al. [2020] Qu, X., Yu, Y., Zhou, M., Lin, C.T., Wang, X., 2020\. Jointly dampening traffic oscillations and improving energy consumption with electric, connected and automated vehicles: a reinforcement learning based approach. Applied Energy 257, 114030\. * Saifuzzaman et al. [2017] Saifuzzaman, M., Zheng, Z., Haque, M.M., Washington, S., 2017\. Understanding the mechanism of traffic hysteresis and traffic oscillations through the change in task difficulty level. Transportation Research Part B: Methodological 105, 523–538. * Shi et al. [2021] Shi, H., Zhou, Y., Wu, K., Wang, X., Lin, Y., Ran, B., 2021. Connected automated vehicle cooperative control with a deep reinforcement learning approach in a mixed traffic environment. Transportation Research Part C: Emerging Technologies 133, 103421. * Shi and Li [2021] Shi, X., Li, X., 2021. Constructing a fundamental diagram for traffic flow with automated vehicles: Methodology and demonstration. Transportation Research Part B: Methodological 150, 279–292. * Shladover et al. [2015] Shladover, S.E., Nowakowski, C., Lu, X.Y., Ferlis, R., 2015\. Cooperative adaptive cruise control: Definitions and operating concepts. Transportation Research Record 2489, 145–152. * Sisson et al. [2018] Sisson, S.A., Fan, Y., Beaumont, M., 2018. 
Handbook of approximate Bayesian computation. CRC Press. * Srivastava et al. [2021] Srivastava, A., Chen, D., Ahn, S., 2021. Modeling and control using connected and automated vehicles with chained asymmetric driver behavior under stop-and-go oscillations. Transportation research record 2675, 342–355. * Swaroop and Hedrick [1999] Swaroop, D., Hedrick, J.K., 1999\. Constant spacing strategies for platooning in automated highway systems . * USDOT [2007] USDOT, D.o.T., 2007. Next generation simulation (ngsim). http://www.ngsim.fhwa.dot.gov . * Wang et al. [2014] Wang, M., Daamen, W., Hoogendoorn, S.P., van Arem, B., 2014\. Rolling horizon control framework for driver assistance systems. part i: Mathematical formulation and non-cooperative systems. Transportation research part C: emerging technologies 40, 271–289. * Wang et al. [2018] Wang, M., Hoogendoorn, S.P., Daamen, W., van Arem, B., Shyrokau, B., Happee, R., 2018\. Delay-compensating strategy to enhance string stability of adaptive cruise controlled vehicles. Transportmetrica B: Transport Dynamics 6, 211–229. * Yu et al. [2019] Yu, C., Sun, W., Liu, H.X., Yang, X., 2019. Managing connected and automated vehicles at isolated intersections: From reservation-to optimization-based methods. Transportation research part B: methodological 122, 416–435. * Zhong et al. [2023] Zhong, X., Zhou, Y., Ahn, S., 2023. Quantifying the relation between the car-following and traffic dynamics: A refined generalized definition approach . * Zhou et al. [2017] Zhou, Y., Ahn, S., Chitturi, M., Noyce, D.A., 2017\. Rolling horizon stochastic optimal control strategy for acc and cacc under uncertainty. Transportation Research Part C: Emerging Technologies 83, 61–76. * Zhou et al. [2020] Zhou, Y., Ahn, S., Wang, M., Hoogendoorn, S., 2020. Stabilizing mixed vehicular platoons with connected automated vehicles: An h-infinity approach. Transportation Research Part B: Methodological 132, 152–170. * Zhou et al. [2022] Zhou, Y., Jafarsalehi, G., Jiang, J., Wang, X., Ahn, S., Lee, J.D., 2022. Stochastic calibration of automated vehicle car-following control: An approximate bayesian computation approach. Available at SSRN: https://ssrn.com/abstract=4084970 doi:http://dx.doi.org/10.2139/ssrn.4084970. ## Appendix ### A: EAB Calibration with ABC-ASMC Fig. 10 depicts the convergence behavior of the ABC-ASMC algorithm as it calibrates EAB models for HDVs and ACCs. The quick convergence indicates that the algorithm effectively identifies the optimal posterior joint distributions for the EAB models, enabling accurate characterization of CF behaviors of HDVs and ACCs. (a) (b) (c) (d) Figure 10: Convergence of $\gamma$ and $\rho$ HDV: (a) HDV-1 ACC: (b) Car Model-Y (c) Car Model-Z-Normal Engine (d) Car Model-Z-Power Engine ### B: Reaction Pattern Analysis based on Jensen-Shannon Distance To evaluate the variability in the reaction pattern across HDVs and ACC vehicles, we adopt the Jensen–Shannon distance (JSD) to compare their calibrated posterior joint distributions (see Fig. 11)[Fuglede and Topsoe, 2004]. The JSD serves as a symmetric metric to quantify the difference between two joint distributions. Its value increases from 0 to 1 as the dissimilarity between the distributions grows. In Fig. 11, the pair-wise JSD ranges from 0.05 to 0.40, indicating some variation in the level of dissimilarity among different HDVs and ACCs. Fig. 11(a) and Fig. 
11(f) show that JSD values exceed 0.14 between HDV-ACC pairs at low speed and at median and high speeds, indicating significant differences. Notably high levels of dissimilarity are observed for the HDV-Z pair at low speed (Fig. 11(a)) and for the HDV-X pair at median and high speeds (Fig. 11(f)). This finding suggests marked differences in the $\eta$ evolution between HDVs and ACCs in general. Further, differences are also notable among different ACC developers (Fig. 11(f)), particularly between Car Model X and the other two (Y and Z); Car Models Y and Z appear to share some similar CF characteristics. For the same ACC developer, the JSD values between engine modes at median and high speeds (Fig. 11(b)-(c)) are much lower than those at low speed (Fig. 11(d)), suggesting that the difference between engine modes is more distinct at low speed than at median and high speeds. Finally, when controlling for both the engine mode and the car model, a remarkable difference in the $\eta$ evolution is observed between the two speed categories (Fig. 11(e)).

Figure 11: JSD Metrics of Joint Distributions
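For reference, the pair-wise JSD values in Fig. 11 can be computed along the following lines, assuming the calibrated posteriors are available as particle samples; the histogram-based density estimate (and the bin count) is an assumption of this sketch, since the estimator used is not specified above.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def jsd_between_posteriors(samples_a, samples_b, bins=10):
    """Jensen-Shannon distance between two calibrated posterior sample sets.

    samples_a, samples_b : arrays of shape (n_particles, n_parameters),
    e.g. ABC-ASMC particles for two car models.  The joint distributions
    are approximated by histograms on a common grid."""
    both = np.vstack([samples_a, samples_b])
    edges = [np.linspace(both[:, d].min(), both[:, d].max(), bins + 1)
             for d in range(both.shape[1])]
    p, _ = np.histogramdd(samples_a, bins=edges)
    q, _ = np.histogramdd(samples_b, bins=edges)
    p, q = p.ravel() / p.sum(), q.ravel() / q.sum()
    return jensenshannon(p, q, base=2)   # 0 (identical) .. 1 (disjoint)
```

With base 2 the distance is bounded by 1; note that the joint histogram grows as `bins` to the power of the number of parameters, so a coarse grid (or a per-parameter comparison) may be preferable when the parameter vector is long.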
# In-ear EEG biometrics for feasible and readily collectable real-world person authentication Takashi Nakamura, Valentin Goverdovsky and Danilo P. Mandic Takashi Nakamura, Valentin Goverdovsky and Danilo P. Mandic are with Department of Electrical and Electronic Engineering, Imperial College London, London, SW7 2AZ, United Kingdom, {takashi.nakamura14, goverdovsky<EMAIL_ADDRESS> ###### Abstract The use of EEG as a biometrics modality has been investigated for about a decade, however its feasibility in real-world applications is not yet conclusively established, mainly due to the issues with collectability and reproducibility. To this end, we propose a readily deployable EEG biometrics system based on a ‘one-fits-all’ viscoelastic generic in-ear EEG sensor (collectability), which does not require skilled assistance or cumbersome preparation. Unlike most existing studies, we consider data recorded over multiple recording days and for multiple subjects (reproducibility) while, for rigour, the training and test segments are not taken from the same recording days. A robust approach is considered based on the resting state with eyes closed paradigm, the use of both parametric (autoregressive model) and non- parametric (spectral) features, and supported by simple and fast cosine distance, linear discriminant analysis and support vector machine classifiers. Both the verification and identification forensics scenarios are considered and the achieved results are on par with the studies based on impractical on- scalp recordings. Comprehensive analysis over a number of subjects, setups, and analysis features demonstrates the feasibility of the proposed ear-EEG biometrics, and its potential in resolving the critical collectability, robustness, and reproducibility issues associated with current EEG biometrics. ## I Introduction Person authentication refers to the process of confirming the claimed identity of an individual, and is already present in many aspects of life, such as electronic banking and border control. The existing authentication strategies can be categorised into: 1) knowledge-based (password, PIN), 2) token-based (passport, card), 3) biometric (fingerprints, iris) [1]. Most extensively used recognition methods are based on knowledge and tokens, however, these are also most vulnerable to fraud, such as theft and forgery, and can be straightforwardly used by imposters. In contrast, biometric recognition methods rest upon unique physiological or behavioural characteristics of a person, which then serve as ‘biomarkers’ of an individual, and thus largely overcome the above vulnerabilities. However, at present, biometric authentication systems are cumbersome to administer and require considerable computational and man-power overloads, such as special recording devices and the corresponding classification software. With the current issues in global security, we have witnessed a rapid growth in biometrics applications based on various modalities, which include palm patterns with high spectral wave [2], patterns of eye movement [3], patterns in the electrocardiogram (ECG) [4], and otoacoustic emissions [5]. Each such biometric modality has its strengths and weaknesses, and typically suits only a chosen type of application and its corresponding scenarios [6]. 
A robust biometric system in the real-world should satisfy the following requirements [1]: * • _Universality_ : each person should possess the given biometric characteristic, * • _Uniqueness_ : not any two people should share the given characteristic, * • _Permanence_ : the biometric characteristic should neither change with time nor be alterable, * • _Collectability_ : the characteristic should be readily measurable by a sensor and readily quantifiable. In addition, a practical biometric system must be harmless to the users, and should maximise the trade-off between performance, acceptability, and circumvention; in other words, it should be designed with the accuracy, speed, and resource requirements in mind [6]. One of the currently investigated biometric modalities is the electroencephalogram (EEG), an electrical potential between specific locations on the scalp which arises as a result of the electrical field generated by assemblies of cortical neurons, and reflects brain activity of an individual, such as intent [7]. From a biometrics perspective, the EEG fulfils the above requirements of _universality_ , as it can be recorded from anyone, together with the _uniqueness_. Specifically, the individual differences of EEG alpha rhythms has been examined [8] and reported to exhibit a significant power in discriminating individuals [9] in the area of clinical neurophysiology. The brain activity is neither exposed to surroundings nor possible to be captured at a distance, therefore the brain patterns of an individual are robust to forgery, unlike face, iris, and fingerprints. The EEG is therefore more robust against imposters’ attacks than other biometrics and among different technologies to monitor brain function. However, in order to utilise EEG signals in the real-world, several key properties such as _permanence_ and _collectability_ must be further addressed. The ‘proof-of-concept’ for EEG biometrics, was introduced in our own previous works in [10, 11], and most of the follow-up studies were conducted over only one day (or even over one single trial) recording with EEG channels covering the entire head, while in the classification stage, the training and validation datasets were randomly selected from the same recording day (or the same trial). Apart from its usefulness as a proof-of-concept, this setup does not satisfy the feasibility requirement for a real-world biometric application, since: * • Recording scalp-EEG with multiple electrodes is time-consuming to set-up and cumbersome to wear. Such a sensor therefore does not meet _collectablity_ requirement. * • EEG recordings from one day (or a single trial) cannot truly evaluate the performance in identifying features of an individual, as this scenario does not satisfy the _permanence_ requirement either, see details in Section II-B. * • The training and validation data within this scenario are inevitably mixed, thereby introducing a _performance bias_ in classification. The classification results from such studies are therefore unrealistically high, and we shall refer to this setting as the _biased scenario_. Therefore, for feasible EEG biometrics, the EEG sensor should be wearable, easy to administer, and fast to set-up, while in order to evaluate the performance, the recorded signal should be split in a rigorous way – the training and validation datasets in the classification stage should be created so as _not to share the same recording days_ , a setting we refer to as the _rigorous scenario_. 
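To make the distinction concrete, the following minimal sketch contrasts the two ways of splitting segments into training and validation data; the `Segment` structure and the function names are illustrative only.

```python
from collections import namedtuple
import random

Segment = namedtuple("Segment", ["subject", "day", "features"])

def biased_split(segments, train_fraction=0.9, seed=0):
    """Biased setup: segments from the same recording day can end up
    in both the training and the validation sets."""
    rng = random.Random(seed)
    pool = segments[:]
    rng.shuffle(pool)
    cut = int(train_fraction * len(pool))
    return pool[:cut], pool[cut:]

def rigorous_split(segments, train_days):
    """Rigorous setup: training and validation never share a recording day
    (e.g. train on Day 1, validate on Day 2, then swap)."""
    train = [s for s in segments if s.day in train_days]
    valid = [s for s in segments if s.day not in train_days]
    return train, valid

# example of the rigorous setup: train on Day 1, validate on Day 2
# train, valid = rigorous_split(all_segments, train_days={1})
```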
While a considerable body of research has been undertaken to explore EEG biometrics and to find the most informative subject-specific characteristics of the EEG (_uniqueness_), most studies either focused on reducing the number of electrodes (_collectability_) or on evaluating whether the traits are temporally robust (_permanence_) by using EEG data obtained over multiple recording days; for more details see Section II-C. In this paper, based on our works in [11] and [12], we bring EEG-based biometrics into the real-world by resolving the following critical issues:

1. _Collectability_. Biometrics verification is evaluated with a wearable and easy to set-up in-ear sensor, the so-called ear-EEG [12],
2. _Uniqueness_ and _permanence_. These issues are addressed through subject-dependent EEG features which are recorded over temporally distinct recording days,
3. _Reproducibility_. The recorded data are split into the training and validation data in two different setups, the _biased_ and the _rigorous_ setup,
4. _Fast response_. The classification is performed by both a fast non-parametric approach (cosine distance) and standard parametric approaches (linear discriminant analysis and support vector machine).

Through these distinctive features, we introduce a proof-of-concept for wearable in-ear EEG biometrics to the community.

Figure 1: Person recognition systems. _Top_: Verification system. _Bottom_: Identification system.

## II Overview of EEG based biometrics

### II-A Biometric systems with verification/identification

Depending on the context, the two categories of biometric systems are: 1) verification systems and 2) identification systems, summarised in Figure 1 [6]. Verification refers to validating a person’s identity based on their individual characteristics, which are stored/registered on a server. In technical terms, this type of biometric system performs a _one-to-one matching_ between the ‘claimed’ and ‘registered’ data, in order to determine whether the claim is true. In other words, the question asked in this application is ‘Is this person A?’, as illustrated in Figure 1 (top panel). In contrast, an identification system confirms the identity of an individual from cross-pattern matching of all the available information, that is, based on _one-to-many template matching_. The underlying question for this application is ‘Who is this person?’, as illustrated in Figure 1 (bottom panel).

### II-B Feasible EEG biometrics design

Traditionally, EEG-based biometrics research has been undertaken based on both publicly available datasets [11] and custom recordings made as part of research efforts [13]. However, most existing studies failed to rigorously address the key criterion of _collectability_, which is also related to repeatability. A large number of studies, especially those conducted at the dawn of EEG biometrics research, employed classification of the clients based on supervised learning with the training and validation data coming from the same recording trial. However, this experimental setup cannot truly evaluate the performance in identifying individual features, since such classification does not take into account the varying characteristics across multiple recording trials and recording days. In addition, EEG is prone to contamination by artefacts from subjects’ movements (e.g. eye blinks, chewing), while the sources of external noise include electrode noise, power line noise, and electromagnetic interference from the surroundings.
This opens the possibility to additionally incorrectly associate ‘EEG patterns’ with either trial- dependent features or so-called noise-related features – in other words, this setup is _biased_ in favour of high classification rate. Therefore, given the notorious variability of EEG patterns across days, biometrics studies based on a single recording day (even for a single subject) can only validate very limited scenarios, without any notion of repeatability and long-term feasibility [13]. Figure 2 shows the concept of a _rigorous_ EEG biometrics verification/identification system in the real-world. Individuals participate in EEG recordings and their EEG signals are registered and stored on a server or in a database (left panel). The client is granted access to their account by providing new EEG data in verification scenarios, whereby the algorithm discriminates the identify of an individual is in identification scenarios through new EEG recordings. Recall that the registered EEG must be recorded beforehand. In order to fulfil the feasibility requirement, several studies performed successful EEG-based biometrics from multiple recording trials conducted on multiple distinct days, thus satisfying the _collectability_ requirement. However, the majority of these studies were still conducted in an unrealistic scenario, whereby the training and validation data in the classification process are split into segments, with all the segments coming from multiple trials but on the same recording day being randomly assigned to the training and validation datasets. Therefore, this _biased_ setup, despite being based on the classification from multiple recording trials, mixes the training and test recordings from the same recording day and thus cannot truly evaluate the performance in the identification of individual features. In order to truly validate the robustness of a EEG biometrics application, within a _rigorous_ setup, it is therefore necessary to both: i) conduct multiple recordings over multiple days, and ii) to assign recordings on one day as the training data and use the recordings from the other days as the validation data. In other words, the training and validation datasets should be created _so as not to share the same recording days_ (as illustrated later in Figure 6, Setup-R). As emphasised by Rocca et al., the issue of the repeatability of EEG biometrics in different recording sessions is still a critical open issue [14]. Figure 2: Feasible EEG biometrics verification framework. _Left_ : EEG recording registration. The registered EEG signal must have been recorded beforehand. _Right_ : Verification and identification system. TABLE I: Recording and classification set-up, and performance comparison among existing EEG biometrics Setup | Author | Subjects | Days | Interval | Ch. 
| Task∗ | Segment | Feature Extraction | Classifier∗∗ | Performance ---|---|---|---|---|---|---|---|---|---|--- _Biased_ | [15] | 10 | 5 | 2 weeks | 4 | EC | 5 s | AR | ANN | CRR = 97.0 % | | | | | EO | | CRR = 96.0 % 2-11 | [16] | 51 | 4 | 34 $\pm$ 74 days | 2 | EC | 4 s | AR, PSD, MuI, COH, CC | FDA | EER = 3.4 % 2-11 | [17] | 40 | 2 | - | 1 | EC | 180 s | AR, PSD | KNN, LDA | CRR = 97.5 % _Rigorous_ | [18] | 9 | 3 | 3 days | 8 | MI | 1 s | PSD | MAP model | HTER = 19.3 % 2-11 | [19] | 4 | 2 | 10 days - 5 months | 1 | EC | 50 s | PSD | LDA | AC = 100 % 2-11 | [20] | 9 | 2 | 1 - 3 weeks | 3 | EC | 1 s | AR | Linear classier | CRR = 100 % | | | | | 5 | | CRR = 100 % 2-11 | [21] | 15 | 2 | 5 - 40 days | 1 | ERP | 1.1 s | Time-series | CC | CRR = 89.0 % | | 9 | 3 | 134 - 188 days | | CRR = 93.0 % | [22] | 50 | 3 | Ave. 34 days | 19 | EC | 45 s | AR, PSD, COH | L1, L2, Cos. dist. | R1IR = 90.8 % | | | | | EO | | R1IR = 85.6 % Task∗ | EC: resting state with eyes closed, EO: resting state with eyes open, MI: motor imagery, ERP: event related potential Classifier∗∗ | ANN: artificial neural networks, FDA: Fisher discriminant analysis, KNN: k-nearest neighbours, LDA: linear discriminant analysis | | MAP: maximum a posteriori, CC: cross correlation, L1 (Manhattan) distance, L2 (Euclidean) distance, cosine distance ### II-C Previous protocols Table I summarises the state-of-the-art of the existing EEG biometrics applications based on multiple data acquisition days. _Biased setup._ In the first category (Setup: _biased_) is the studies where the training and validation features were randomly selected regardless of the data acquisition days. Abdullah et al. [15] collected 4 channels of EEG data from 10 male subjects during the resting state, in the both eyes open (EO) and eyes closed (EC) scenarios, in 5 separate recording days over a course of 2 weeks. In each recording day, 5 trials of 30 s recordings were recorded, and the recorded data were split into 5 s segments with an overlap of 50 %. The autoregressive (AR) coefficients of the order $p=21$ were extracted from each segment, and the extracted features were _randomly_ divided into the training (90 %) and validation (10 %) sets, namely 10-fold cross-validation. An artificial neural network (ANN) yielded 97.0 % of correct recognition rate (CRR) for the EC task, and 96.0 % of CRR for the EO task. Riera et al. [16] recorded 2 forehead channels of EEG from 51 subjects over 4 separate recording days. The average temporal distance between the 1st and the 4th recording was 34 $\pm$ 74 days. The participants performed the EC task, and the duration of recordings was between 2 and 4 minutes. The recorded EEG was split into 4 s segments, and five types of features were calculated for each segment, namely AR coefficients of order $p=100$, power spectral density (PSD) in the $1-40$ Hz band, mutual information (MuI), coherence (COH), and cross-correlation (CC). The authors trained various classifiers and identified the 5 best classifiers. The Fisher’s discriminant analysis (FDA) was then employed and was first trained with different types of discriminant functions using the 1st to 3rd recording trials; then the 4th recording trials were used for testing. Next, the best 5 classifiers from the training process were utilised for authentication tests, using the first and the second minutes of recordings from each trial; therefore, _the training data for the classifiers and test data for the validation were not disjoint_ (biased setup). 
The discriminant analysis with the selected discriminant function achieved 3.4 % of equal error rate (EER). Su et al. [17] analysed 5 minutes of the EC task from 40 subjects, with 6 recording trials performed in 2 separate recording days for each subject. The recorded EEG data from the FP1 channel were split into segments of multiple lengths. The PSD in the $5-32$ Hz band and AR coefficients of the order $p=19$ were chosen as features. The extracted features were _randomly_ divided into the training (50 %) and validation (50 %) sets. As a result of 100 iterations, the classifier combining Fisher’s linear discriminant analysis (LDA) and k-nearest neighbours (KNN) achieved an average CRR = 97.5 %, for a segment length of 180 s. _Rigorous setup._ Multiple research groups considered EEG biometrics based on splitting the training and validation data in a _rigorous_ way, _so as not to share the data from the same recording days_ to highlight the feasibility of their system (Setup: _rigorous_). Marcel et al. [18] analysed 8 channels of EEG from 9 subjects, with 4 recording trials over 3 consecutive days. The 15 s trials consisted of two different motor imagery (MI) mental tasks, the imagination of hand movements. The recorded data were split into 1 s segments, and PSD in the $8-30$ Hz band was calculated for each segment. The Gaussian mixture model (GMM) was chosen as a classifier, and maximum a posteriori (MAP) estimation was used for adapting a model for client data. By combining recordings over two days as training data, the authors achieved 19.3 % of half total error rate (HTER), which is a performance criterion widely used in biometrics; for more detail see Section III-G. Lee et al. [19] conducted an experiment of 300 s in duration from four subjects over two days, based on single channel of EEG in the EC scenario. The data were segmented into multiple window sizes, and to extract frequency domain features, PSD was calculated only for the $\alpha$ band ($8-12$ Hz). Even though the dataset size was relatively small, with 50 s of segment length, the LDA achieved 100 % classification accuracy. Rocca et al. [20] recorded two resting state EEGs, in both the eyes open (EO) and eyes closed (EC) scenarios, from 9 subjects over 2 different recording days, which were spanned 1 to 3 weeks. The recording length was 60 s, and the recorded data were split into 1 s segments with an overlap of 50 %. The AR model (Burg algorithm) of the order $p=10$ was employed for feature extraction, and the training and validation data were split without mixing the trials from different recording days. The recognition results, with features from selected 3 or 5 channels of scalp EEG, were obtained by linear classification based on minimising the mean square error (MMSE), to achieve CRR = 100 %. Armstrong et al. [21] recorded event related potentials (ERPs) and constructed two datasets. One dataset included EEG recorded from 15 subjects in two separate days, with a 5 - 40 inter-day interval, and the other one contained EEG from 9 subjects obtained in three separate days, with the average interval between the first and third recordings of 156 days. The recorded data from the O2 channel were split into 1.1 s long segments, which contained an ERP and started from a 100 ms pre- stimulus. The cross-correlation (CC) between the training and validation data was used as a feature for classification, and CRR = 89.0 % was achieved for validating the 2nd day recordings whereas CRR = 93.0 % for classifying the 3rd day recordings. 
Maiorana et al. [22] analysed 19 channels of EEG from 50 subjects during both EC and EO tasks in three different recording days, with the average interval between the first and the third recording of 34 days. Each recording trial consisted of 240 s of data, segmented into 5 s windows, with an overlap of 40 %. Three types of features were extracted, including a channel-wise AR model (using the Burg algorithm) of the order $p=12$, channel- wise PSD, and the coherence (COH) between the EEG channels. The L1, L2, and cosine distances were calculated for the extracted features, and the rank-1 identification rate (R1IR) achieved 90.8 % accuracy in the EC task and 85.6 % in the EO task. ### II-D Biometrics based on collectable EEG systems With a perspective of _collectability_ , a biometrics application with dry EEG electrodes was recently introduced [23]. While conventional wet EEG headsets require the application of a conductive gel which is generally time-consuming, the dry headset with 16 scalp channels took on the average 2 minutes to be operational. The brain-computer interface based biometrics application with rapid serial visual presentation paradigm achieved CRR = 100 % with 27 s window size over all 29 subjects. Although the recordings were performed over a single recording day per subject, the application with a dry headset was a step forward towards establishing collectable EEG biometrics in real-world. In a recent effort to enable collectable EEG, the in-ear sensing technology [12] was introduced into the research community. The ear-EEG has been proven to provide on-par signal quality, compared to conventional scalp-EEG, in terms of steady state responses [12, 24], monitoring sleep stages [25, 26], and also for monitoring cardiac activity [27, 28]. The advantages of the in-ear EEG sensing for a potential biometrics application in the real-world are: * • _Unobtrusiveness_ : The latest ‘off-the-shelf’ generic viscoelastic EEG sensor is made from affordable/consumable standard earplugs [29], * • _Robustness_ : The viscoelastic substrate expands after the insertion, so the electrodes fit firmly inside the ear canal [27], where the position of electrodes remains the same in different recording sessions, * • _User-friendliness_ : The sensor can be applied straightforwardly by the user, without the need for a trained person. Therefore, biometrics with ear-EEG offers a high degree of _collectability_ , a critical issue in real-world applications. Previously, even based on this biased scenarios, in-ear EEG based biometrics application has been proposed in [30]. ### II-E Problem formulation We investigate the possibility of biometrics verification with a wearable in- ear sensor, which is capable of fulfilling the _collectability_ requirement. The data were recorded over temporally distinct recording days, in order to additionally highlight the _uniqueness_ and _permanence_ aspects. Although the changes in EEG rhythms may well depend on the time period of years rather than days, the alpha band features during the resting state with eyes closed were reported as the most stable EEG feature over two years [31]. Since EEG alpha rhythms predominantly originate during wakeful relaxation with eyes closed, we chose our recording task to be the resting state with eyes closed. This task was used in multiple previous studies [15, 16, 17, 19, 20, 22]. 
In order to design a feasible biometrics application in the real-world, we considered imposters in two different ways: i) registered subjects in a database, and ii) subjects not belonging to a database. Previously, Riera et al. [16] also used a single trial of EEG recording from multiple subjects as ‘intruders’, while the ‘imposters’ data were EEG recordings available from multiple other experiments. For rigour, we collected two types of data: 1) based on multiple recordings from fifteen subjects over two days, and 2) multiple recordings from five subjects, which were only used for imposters’ data. The classification was performed by both a non-parametric and parametric approach. The non-parametric classifier, minimum cosine distance, is a simplest way for evaluating the similarity between the training and validation matrix, whereas the parametric approach, the support vector machine (SVM), was tuned within the training matrix in order to find optimal hyper-parameters and weights for validation. The same hyper-parameters and weights were used for classifying the validation matrix. Besides, the linear discriminant analysis (LDA) was also employed as a classifier. Through the binary client-imposter classification, we then evaluated the feasibility of our in-ear EEG biometrics. ## III Methods ### III-A Data acquisition The recordings were conducted at Imperial College London, for two different groups of subjects and under the ethics approval, Joint Research Office at Imperial College London ICREC12_1_1. One set of data were the recordings used as both clients and imposters data, denoted by $S_{R}$, and the other subset were the recordings for only imposters’ data, denoted by $S_{N}$. Table II summarises the two recording configurations of $S_{R}$ and $S_{N}$. For the $S_{R}$ subset of recordings, fifteen healthy male subjects (aged 22-38 years) participated in two temporally separate sessions, with the interval between two recording sessions between 5 and 15 days, depending on the subject. The participants were seated in a comfortable chair during the experiment, and were asked to rest with eyes closed. The length of each recording was 190 s, and the recording was undertaken three times (trials) per one day. The interval between each recording trial was approximately 5 to 10 minutes. In total, six trials were recorded per subject. The in-ear sensor was inserted in the subject’s left ear canal after earwax was removed; it then expanded to conform to the shape of the ear canal. The reference gold-cup standard electrodes were attached behind the ipsilateral earlobe and the ground electrodes were placed on the ipsilateral helix. For simplicity, the upper electrode is denoted by Ch1, while Ch2 refers to the bottom electrode, as shown in Figure 3 (left panel). The two EEG signals from flexible electrodes were recorded using the g.tec g.USBamp amplifier with a 24-bit resolution, at a sampling frequency $fs$ = 1200 Hz. For the $S_{N}$ subset of recordings, five healthy subjects (aged 22-29 years) participated in three recording trials. Similar to the $S_{R}$ subset of recordings, the participants were seated in a comfortable chair, and were resting with eyes closed. The duration of recording was also 190 s. A generic earpiece with two flexible electrodes [29] was inserted in the subject’s left ear canal and the same reference and ground configuration was utilised for the $S_{R}$ subset of recordings. 
Similar to the setup in [22], there was no restriction on the activities that the subjects performed, and no health checks, such as monitoring of diet and sleep, were carried out before an EEG acquisition, between one acquisition and the following one, or during the recording days. This lack of restrictions allowed us to acquire data in conditions close to real life.

TABLE II: The two EEG recording configurations and the corresponding subsets

| Subset | Client/Imposter $S_{R}$ | Imposter only $S_{N}$ |
|---|---|---|
| Task | Resting state with eyes closed | Resting state with eyes closed |
| No. Subjects | 15 | 5 |
| No. Trials | 6 (2 days, 3 trials) | 3 (1 day, 3 trials) |
| Duration | 190 s | 190 s |

Figure 3: The in-ear sensor used in our study. _Left_: Wearable in-ear sensor with two flexible electrodes. _Right_: Placement of the generic viscoelastic earpiece.

Figure 4: Flowchart for the biometrics analysis framework in this study.

### III-B Ear-EEG sensor

The in-ear EEG sensor is made of a memory-foam substrate and two conductive flexible electrodes, as shown in Figure 3. The substrate material is a viscoelastic foam, therefore the 'one-fits-all' generic earpiece fits any ear regardless of its shape. The size of the earpiece was the same for all twenty subjects (both the $S_{R}$ and $S_{N}$ subjects). Further details of the construction of such a viscoelastic earpiece, and of the recordings of various brain functions obtained with it, can be found in [29, 27].

### III-C Pre-processing

The two channels of the so-obtained ear-EEG were analysed based on the framework illustrated in Figure 4. In each recording, for both the $S_{R}$ and $S_{N}$ recordings, the first 5 s of data were removed from the analyses, in order to omit noisy data arising at the beginning of the acquisition. The two recorded channels of EEG were bandpass filtered with a fourth-order Butterworth filter with the pass-band $0.5-30$ Hz. The bandpass-filtered signals were split into segments. The symbol $N$ denotes the number of segments per recording trial from both the $S_{R}$ and $S_{N}$ subsets. The lengths of the segments were chosen as $L_{seg}=10,20,30,60,90$ s. Therefore, when the segment length was $L_{seg}=$ 60 s, $N=3$ ($190-5=185$ s, $\left\lfloor 185/60\right\rfloor=3$) segments were extracted from every recording trial of the $S_{R}$ and $S_{N}$. Within each segment, the data were split into epochs of 2 s length. The epochs with amplitudes greater than 50 $\mu$V on either Ch1 or Ch2 were considered corrupted by artefacts and removed from the analyses. This step resulted in a loss of 4.3 % of the data, namely approximately 7.7 s out of 190 s per recording trial.

### III-D Feature Extraction

After the pre-processing, two types of features were extracted from each segment of the ear-EEG. For a fair comparison with the state-of-the-art, these features were selected to be the same as or similar to those used in recent studies based on the resting state with eyes closed [19, 20], and included: 1) a frequency domain feature – power spectral density (PSD), and 2) coefficients of an autoregressive (AR) model.

#### III-D1 The PSD features

Figure 5 shows the power spectral density for the in-ear EEG Ch1 (left) and for the in-ear EEG Ch2 (right) of two subjects. For this analysis, the recorded signals were conditioned with the fourth-order Butterworth filter with the pass-band $0.5-30$ Hz. The PSDs were obtained using Welch's averaged periodogram method [32], with a window length of 20 s and 50 % overlap.
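Before turning to the individual features, the pre-processing chain of Section III-C can be summarised in a short sketch. The following Python code is only an illustration of the steps described above (the study itself used Matlab/Python toolchains); the function and variable names, and the exact windowing, are our own illustrative choices.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1200          # sampling frequency (Hz), as in the recordings
EPOCH_S = 2        # epoch length (s) used for artefact rejection
AMP_THRESH = 50.0  # rejection threshold (microvolts)

def preprocess(raw, fs=FS, l_seg=60, drop_s=5):
    """raw: (n_samples, 2) ear-EEG array in microvolts.
    Returns a list of segments, each a list of artefact-free 2-s epochs."""
    # 4th-order Butterworth band-pass, 0.5-30 Hz, zero-phase
    b, a = butter(4, [0.5, 30.0], btype="bandpass", fs=fs)
    x = filtfilt(b, a, raw, axis=0)

    # discard the first 5 s (noisy start of the acquisition)
    x = x[drop_s * fs:]

    # split into non-overlapping segments of length l_seg
    n_seg = x.shape[0] // (l_seg * fs)
    segments = []
    for i in range(n_seg):
        seg = x[i * l_seg * fs:(i + 1) * l_seg * fs]
        # split the segment into 2-s epochs and reject artefacts
        n_ep = seg.shape[0] // (EPOCH_S * fs)
        epochs = [seg[j * EPOCH_S * fs:(j + 1) * EPOCH_S * fs] for j in range(n_ep)]
        clean = [ep for ep in epochs if np.max(np.abs(ep)) <= AMP_THRESH]
        segments.append(clean)
    return segments
```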
The PSDs overlap closely between the different recording days (red: Day1, blue: Day2), as well as among different recording trials within the same recording day, which is especially visible from 3 to $20$ Hz. Previously, Maiorana et al. utilised PSD features for EEG biometrics based on the resting state with eyes closed and achieved the best performance with PSD features from the theta to beta bands, classified by the minimum cosine distance approach [22]; the inclusion of the delta band decreased their identification performance. In our in-ear EEG biometrics approach, the obtained PSDs were visually examined and we found that the ratio between the total $\alpha$ band ($8-13$ Hz) power and the total $\theta-\alpha_{high}$ band ($4-16$ Hz) power is a more significant individual factor for biometrics than the total $\alpha$ band power proposed in [19]. Therefore, in each segment of length $L_{seg}$, the univariate PSD was calculated by Welch's method with a window length of 2 s and no overlap. Three features were obtained for each PSD: 1) the ratio between the total $\alpha$ band ($8-13$ Hz) power and the total $\theta-\alpha_{high}$ band ($4-16$ Hz) power, 2) the maximum power in the $\alpha$ band, and 3) the frequency corresponding to the maximum $\alpha$ band power. In total, $D=6$ (three features $\times$ two channels) frequency domain features were extracted from each segment.

Figure 5: Power spectral density for the in-ear EEG Ch1 (left) and the in-ear EEG Ch2 (right) of Subject 1 (top panels) and Subject 2 (bottom panels). The thick lines correspond to the averaged periodogram obtained from all recordings of the 1st day (red) and the 2nd day (blue), whereas the thin lines are the averaged periodograms obtained from single trials.

#### III-D2 The AR features

The Burg algorithm [32] of order $p=10$ was used to estimate the AR coefficients. For each segment, we applied univariate AR parameter estimation to its $\alpha$ band ($8-13$ Hz), with a window length of 2 s and no overlap. The AR model was chosen as a feature because it was used in a previous successful study on EEG biometrics based on the resting state with eyes closed [20]. A total of $D=20$ features (ten coefficients $\times$ two channels) were therefore extracted for each ear-EEG segment.

### III-E Validation scenarios

With the extraction of both the univariate AR and PSD features from the two channels, the dimension $D$ of the feature vector per EEG segment was twenty-six. Recall that the first 5 s of recording data were removed from the analyses. For each trial of the $S_{R}$ and $S_{N}$ recordings, the 190 s of data were split into segments of length $L_{seg}=10,20,30,60,90$ s, yielding $N=18,9,6,3,2$ segments, respectively. Each recording trial was represented by a feature matrix, $X_{R}$ for the $S_{R}$ recordings and $X_{N}$ for the $S_{N}$ recordings, each such matrix being of size $N\times D$. In this way, a set of six feature matrices was obtained from one subject for the $S_{R}$ recordings (three recording trials per day, over two different days), whilst a set of three feature matrices was obtained from one subject for the $S_{N}$ recordings (three recording trials per day, on a single recording day). We next discuss the use of the feature matrices in two different validation scenarios. As emphasised in the Introduction, we introduce a feasible EEG biometrics approach which satisfies the _collectability_ requirement, which is also related to repeatability.
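Before describing the two validation setups, we give a minimal sketch of the feature extraction just described. This is an illustration, not the authors' code: the Burg estimator below follows the textbook recursion rather than the toolbox routine of [32], and for brevity the AR coefficients are estimated over the whole segment rather than over 2 s windows of the alpha-band-filtered signal.

```python
import numpy as np
from scipy.signal import welch

def burg_ar(x, order=10):
    """Estimate AR coefficients of x with Burg's method.
    Convention: x[n] + sum_i a[i] * x[n-i] ~ white noise."""
    x = np.asarray(x, dtype=float)
    a = np.zeros(order)
    f, b = x[1:].copy(), x[:-1].copy()   # forward / backward prediction errors
    for m in range(order):
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))  # reflection coefficient
        new = np.empty(m + 1)
        new[:m] = a[:m] + k * a[:m][::-1]                        # Levinson update
        new[m] = k
        a[:m + 1] = new
        f, b = (f + k * b)[1:], (b + k * f)[:-1]
    return a

def psd_features(x, fs=1200):
    """Three alpha-band features from one channel of one segment."""
    freqs, pxx = welch(x, fs=fs, nperseg=2 * fs, noverlap=0)  # 2-s window, no overlap
    alpha = (freqs >= 8) & (freqs <= 13)
    broad = (freqs >= 4) & (freqs <= 16)                      # theta to high-alpha band
    ratio = np.sum(pxx[alpha]) / np.sum(pxx[broad])
    peak_power = np.max(pxx[alpha])
    peak_freq = freqs[alpha][np.argmax(pxx[alpha])]
    return np.array([ratio, peak_power, peak_freq])

def segment_features(seg, fs=1200):
    """seg: (n_samples, 2) segment -> 26-dimensional feature vector
    (3 PSD features + 10 AR coefficients per channel)."""
    feats = []
    for ch in range(seg.shape[1]):
        feats.append(psd_features(seg[:, ch], fs))
        feats.append(burg_ar(seg[:, ch], order=10))
    return np.concatenate(feats)
```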
Therefore, for rigour, we used all feature matrices $X_{R}$ from the 1st day of recordings as the training data and the feature matrices from the 2nd day of recordings as the validation data, and vice versa (Setup-R, the _rigorous_ setup). Our goal was to examine the robustness of the proposed approach over the two different time periods in Setup-R. For the second setup, Setup-B (the _biased_ setup), training feature matrices were also selected from the trials which were recorded on the same recording day as the validation matrices. In other words, the training and validation data are split by mixing data from the same recording days. Notice that, although used in most available EEG biometrics studies [15, 16, 17], Setup-B cannot evaluate the repeatability/reproducibility of the application, because the training and validation data both come from the same recording days. In other words, such an approach benefits from recording-day-dependent EEG characteristics in the classification. However, since the number of feasible biometric modalities with an in-ear EEG sensor is limited, and to allow comparison with other studies, we also provide the results for Setup-B.

Algorithm 1, Setup-R (rigorous): Select the training and validation data without mixing segments from the two recording days, e.g. $[i,j,k]=[1,1,1]$

1: $VC$: The matrix of the selected subject, the selected day and the selected trial, [1,1,1]: $Y_{VC}=X_{R}^{(1,1,1)}$

2: $VI$: The matrices of the selected trial and the selected day from the non-selected subjects, [(2:15),1,1]: $Y_{VI}=[X_{R}^{(2,1,1)T},X_{R}^{(3,1,1)T},...,X_{R}^{(15,1,1)T}]^{T}$

3: $TC$: The matrices of all recording trials recorded on the non-selected day from the selected subject, [1,2,(1:3)]: $Y_{TC}=[X_{R}^{(1,2,1)T},X_{R}^{(1,2,2)T},X_{R}^{(1,2,3)T}]^{T}$

4: $TI$: The matrices of all recording data recorded on the non-selected day from the non-selected subjects, [(2:15),2,(1:3)]: $Y_{TI}=[X_{R}^{(2,2,1)T},...,X_{R}^{(2,2,3)T},X_{R}^{(3,2,1)T},...,X_{R}^{(15,2,3)T}]^{T}$

5: $VI_{N}$: The matrices of the selected trial from the $S_{N}$ recording subjects, [(1:5),-,1]: $Y_{VI_{N}}=[X_{N}^{(1,-,1)T},X_{N}^{(2,-,1)T},...,X_{N}^{(5,-,1)T}]^{T}$

Algorithm 2, Setup-B (biased): Select the training and validation data, mixing segments from the two recording days, e.g.
$[i,j,k]=[2,2,2]$

1: $VC$: The matrix of the selected subject, the selected day and the selected trial, [2,2,2]: $Y_{VC}=X_{R}^{(2,2,2)}$

2: $VI$: The matrices of the selected trial and the selected day from the non-selected subjects, [(1,3:15),2,2]: $Y_{VI}=[X_{R}^{(1,2,2)T},X_{R}^{(3,2,2)T},...,X_{R}^{(15,2,2)T}]^{T}$

3: $TC$: The matrices of all recording trials recorded on the non-selected day from the selected subject, [2,1,(1:3)], and the matrices of the non-selected trials recorded on the selected day from the selected subject, [2,2,(1,3)]: $Y_{TC}=[X_{R}^{(2,1,1)T},X_{R}^{(2,1,2)T},X_{R}^{(2,1,3)T},X_{R}^{(2,2,1)T},X_{R}^{(2,2,3)T}]^{T}$

4: $TI$: The matrices of all recording data recorded on the non-selected day from the non-selected subjects, [(1,3:15),1,(1:3)], and the matrices of the non-selected trials recorded on the selected day from the non-selected subjects, [(1,3:15),2,(1,3)]: $Y_{TI}=[X_{R}^{(1,1,1)T},X_{R}^{(1,1,2)T},X_{R}^{(1,1,3)T},X_{R}^{(1,2,1)T},X_{R}^{(1,2,3)T},X_{R}^{(3,1,1)T},...,X_{R}^{(15,2,3)T}]^{T}$

5: $VI_{N}$: The matrices of the selected trial from the $S_{N}$ recording subjects, [(1:5),-,2]: $Y_{VI_{N}}=[X_{N}^{(1,-,2)T},X_{N}^{(2,-,2)T},...,X_{N}^{(5,-,2)T}]^{T}$

Figure 6 summarises the two validation scenarios, Setup-R and Setup-B. For clarity, we denote by $VC$ the validation feature matrix for the client, by $VI$ the validation feature matrix for the imposters, while $TC$ is the training feature matrix for the client and $TI$ the training feature matrix for the imposters. The feature matrix from a single trial of subject $i$, recording day $j$, and trial $k$, from the $S_{R}$ recordings is denoted by $X_{R}^{(i,j,k)}$. Then, the training feature matrix $Y_{T}$ and the validation matrix $Y_{V}$ are given as

$$Y_{T}=[Y_{TC}^{T},Y_{TI}^{T}]^{T},\qquad Y_{V}=Y_{V_{R}}=[Y_{VC}^{T},Y_{VI}^{T}]^{T}.$$

In addition, in order to evaluate feasibility in the real world, we used the $S_{N}$ recordings, which are EEG recordings used only for imposters; Riera et al. termed such imposter-only data 'intruders' [16]. For an additional scenario in both Setup-R and Setup-B (see Section IV-D), the $S_{N}$ recordings were used as the validation data for imposters, $VI_{N}$. The feature matrix from a single trial of subject $i$ and trial $k$ from the $S_{N}$ recordings is denoted by $X_{N}^{(i,-,k)}$. Therefore, the validation matrix is given by

$$Y_{V}=Y_{V_{R}}+Y_{VI_{N}}=[Y_{VC}^{T},Y_{VI}^{T},Y_{VI_{N}}^{T}]^{T}.$$

Table III summarises the dimensions of the training and validation matrices in both Setup-R and Setup-B.

Figure 6: Two validation scenarios (Setup-R and Setup-B), where $X_{R}^{(i,j,k)}\in\mathbb{R}^{N\times D}$ and $X_{N}^{(i,-,k)}\in\mathbb{R}^{N\times D}$ denote a feature matrix from a single trial of subject $i$, recording day $j$, and trial $k$, from the $S_{R}$ recordings and the $S_{N}$ recordings, respectively. The number of segments per recording trial $N$ depends on the chosen segment length $L_{seg}$. The dimension $D$ is the number of features per EEG segment.
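For illustration, the Setup-R matrices of Algorithm 1 can be assembled as in the following sketch. The dictionary-based data layout and the function name are our own illustrative choices, not the authors' code.

```python
import numpy as np

def build_setup_r(X_R, X_N, client, val_day, val_trial, use_sn=False):
    """X_R[(i, j, k)] : N x D feature matrix of subject i (1..15), day j (1..2), trial k (1..3).
    X_N[(i, k)]      : N x D feature matrix of imposter-only subject i (1..5), trial k (1..3).
    Returns training matrix Y_T, validation matrix Y_V and binary labels
    (1 = client, 0 = imposter), without mixing the two recording days."""
    train_day = 2 if val_day == 1 else 1
    subjects = range(1, 16)

    # validation: one client trial + the same trial/day of all other subjects
    Y_VC = X_R[(client, val_day, val_trial)]
    Y_VI = np.vstack([X_R[(i, val_day, val_trial)] for i in subjects if i != client])

    # training: all trials of the *other* day, for the client and the imposters
    Y_TC = np.vstack([X_R[(client, train_day, k)] for k in (1, 2, 3)])
    Y_TI = np.vstack([X_R[(i, train_day, k)] for i in subjects if i != client
                      for k in (1, 2, 3)])

    Y_T = np.vstack([Y_TC, Y_TI])
    y_T = np.r_[np.ones(len(Y_TC)), np.zeros(len(Y_TI))]

    blocks = [Y_VC, Y_VI]
    labels = [np.ones(len(Y_VC)), np.zeros(len(Y_VI))]
    if use_sn:  # optionally append the non-registered imposters from S_N
        Y_VIN = np.vstack([X_N[(i, val_trial)] for i in range(1, 6)])
        blocks.append(Y_VIN)
        labels.append(np.zeros(len(Y_VIN)))
    Y_V = np.vstack(blocks)
    y_V = np.concatenate(labels)
    return Y_T, y_T, Y_V, y_V
```

With $N$ segments per trial, this yields training matrices of size $3N\times D$ (client) and $42N\times D$ (imposters), consistent with Table III.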
TABLE III: Dimensions of the training and validation matrices in Setup-R and Setup-B

| | | Setup-R | Setup-B |
|---|---|---|---|
| Train | $Y_{TC}$ | $3N\times D$ | $5N\times D$ |
| | $Y_{TI}$ | $42N\times D$ | $70N\times D$ |
| | $Y_{T}$ | $45N\times D$ | $75N\times D$ |
| Validation | $Y_{VC}$ | $N\times D$ | $N\times D$ |
| | $Y_{VI}$ | $14N\times D$ | $14N\times D$ |
| | $Y_{V_{R}}$ | $15N\times D$ | $15N\times D$ |
| | $Y_{VI_{N}}$ | $5N\times D$ | $5N\times D$ |
| Total validation | $Y_{V}=Y_{V_{R}}$: $90\,(15N)$ elements; $Y_{V}=Y_{V_{R}}+Y_{VI_{N}}$: $90\,(15N+5N)$ elements | | |

$i=$ 1:15 (Sub), $j=$ 1:2 (Day), $k=$ 1:3 (Trial)

### III-F Classification

For both Setup-R and Setup-B, we selected every trial from every subject for the validation of client data, so that ninety validations were performed (three trials $\times$ two days $\times$ fifteen subjects). For each validation, the largest and smallest values were found for each feature (column-wise) from the training matrix, and the validation matrix was then normalised to the range $[0,1]$ based on these largest and smallest values. Three classification algorithms were employed: 1) a non-parametric approach – the minimum cosine distance [22]; 2) and 3) parametric approaches – linear discriminant analysis (LDA) [33] and the support vector machine (SVM) [34].

#### III-F1 Cosine distance

The cosine distance is the simplest way of evaluating the similarity between the rows of the validation matrix, $Y_{V_{(l,:)}}$, where $l=1,...,15N$ for $Y_{V}=Y_{V_{R}}$ and $l=1,...,20N$ for $Y_{V}=Y_{V_{R}}+Y_{VI_{N}}$, and the training matrix, $Y_{T}$, and is given by

$$d\left(Y_{V_{(l,:)}},Y_{T}\right)=\min_{n}\frac{\sum_{m=1}^{D}Y_{V_{(l,m)}}Y_{T_{(n,m)}}}{\sqrt{\sum_{m=1}^{D}(Y_{V_{(l,m)}})^{2}}\sqrt{\sum_{m=1}^{D}(Y_{T_{(n,m)}})^{2}}}.$$

In other words, the cosine distance is used for evaluating the similarity between a given test sample (e.g. the $l$th row of the validation matrix, $Y_{V_{(l,:)}}$) and a template (training) feature matrix, $Y_{T}$. The distances between the $l$th row of the validation matrix, $Y_{V_{(l,:)}}$, and each row of the training matrix $Y_{T}$ were first computed, and then the minimum among the computed distances was selected.

#### III-F2 LDA

The binary-class LDA was employed as a classifier. The LDA finds a linear combination of features that separates the given classes: it projects the data onto a new space and discriminates between the two classes by maximising the between-class variance while minimising the within-class variance.

#### III-F3 SVM

The binary-class SVM was employed as a parametric classifier [34]. For both Setup-R and Setup-B, four hyper-parameters – the type of kernel, the regularisation constant of the loss function $C$, the inverse bandwidth $\gamma$ of the kernel function, and the order of the polynomial $d$ – were tuned by 5-fold cross-validation within the training matrix. Then, the same hyper-parameters were used in order to obtain the optimal weight parameters within the training matrix. The same hyper-parameters and weight parameters as in the training were used for validation. Table IV summarises the hyper-parameters for the SVM.
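For reference, the following sketch implements the usual reading of the 'minimum cosine distance' classifier, i.e. each validation row is assigned the label of the training row with the largest cosine similarity (smallest cosine distance), together with the min-max normalisation derived from the training matrix only. Names are illustrative.

```python
import numpy as np

def minmax_normalise(Y_T, Y_V):
    """Column-wise scaling of both matrices to [0, 1] using the
    minima/maxima of the training matrix only."""
    lo, hi = Y_T.min(axis=0), Y_T.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    return (Y_T - lo) / span, (Y_V - lo) / span

def cosine_nn_predict(Y_T, y_T, Y_V):
    """Assign to each validation row the label of the training row with
    the highest cosine similarity (i.e. the smallest cosine distance)."""
    T = Y_T / np.linalg.norm(Y_T, axis=1, keepdims=True)
    V = Y_V / np.linalg.norm(Y_V, axis=1, keepdims=True)
    sim = V @ T.T                      # (n_val, n_train) cosine similarities
    nearest = np.argmax(sim, axis=1)   # index of the most similar training row
    return y_T[nearest]
```

An LDA or SVM classifier (e.g. from scikit-learn) can be trained on the same normalised training matrix in place of the nearest-neighbour rule, with hyper-parameters tuned by cross-validation inside the training matrix only, as described above.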
TABLE IV: Hyper-parameters for the SVM

| Type of kernel | $\kappa(\mathbf{x},\mathbf{x}^{\prime})$ | Hyper-parameters |
|---|---|---|
| Linear | $\mathbf{x}^{T}\mathbf{x}^{\prime}$ | - |
| Sigmoid | $\tanh(\gamma\mathbf{x}^{T}\mathbf{x}^{\prime}+r)$ | $\gamma$, $(r=0)$ |
| RBF | $\exp(-\gamma|\mathbf{x}-\mathbf{x}^{\prime}|^{2})$ | $\gamma$ |
| Polynomial | $(\gamma\mathbf{x}^{T}\mathbf{x}^{\prime}+r)^{d}$ | $\gamma$, $d$, $(r=0)$ |

### III-G Performance evaluation

Feature extraction and classification with the minimum cosine distance and with LDA were performed using Matlab 2016b, while the classification with the SVM was conducted in Python 2.7.12 (Anaconda 4.2.0, x86_64), all operated on an iMac with a 2.8 GHz Intel Core i5 and 16 GB of RAM. For the verification setup (number of classes $M=2$, client-imposter classification), the performance was evaluated through the false accept rate (FAR), false reject rate (FRR), half total error rate (HTER), accuracy (AC), and true positive rate (TPR), defined as:

$$FAR=\frac{FP}{FP+TN},\quad FRR=\frac{FN}{TP+FN},\quad HTER=\frac{FAR+FRR}{2},$$
$$AC=\frac{TP+TN}{TP+FN+FP+TN},\quad TPR=\frac{TP}{TP+FN}.$$

The parameter TP (true positive) represents the number of positive (target) segments correctly predicted, TN (true negative) is the number of negative (non-target) segments correctly predicted, FP (false positive) is the number of negative segments incorrectly predicted as the positive class, and FN (false negative) is the number of positive segments incorrectly predicted as the negative class. For the identification setup ($M=15$), the performance was evaluated by the subject-wise sensitivity (SE), identification rate (IR) and Cohen's kappa ($\kappa$) coefficient as:

$$SE_{i}=\frac{TP_{i}}{TP_{i}+FN_{i}},\quad IR=\frac{\sum_{i=1}^{15}TP_{i}}{N_{segment}},$$
$$\pi_{e}=\frac{\sum_{i=1}^{15}\left\{(TP_{i}+FP_{i})(TP_{i}+FN_{i})\right\}}{N_{segment}^{2}},\quad\kappa=\frac{IR-\pi_{e}}{1-\pi_{e}},$$

where $N_{segment}$ is the total number of segments.

## IV Results

The biometric verification results within a one-to-one client-imposter classification problem are next summarised. In terms of the verification, we considered the following scenarios:

* Client-imposter verification based on varying segment lengths $L_{seg}$ (Section IV-A),
* Verification with various combinations of features (Section IV-B),
* Verification across different classifiers, both non-parametric and parametric ones (Section IV-C),
* Verification of registered clients and imposters ($S_{R}$), and of non-registered imposters ($S_{N}$) (Section IV-D),
* Subject-wise verification (Section IV-E).

We also considered biometric identification, that is, a one-to-many subject-to-subject classification problem (Section IV-F). Table V summarises the details of the considered scenarios.

TABLE V: Summary of parameter choices in the proposed biometric application

| Section | No. Subjects | $L_{seg}$ (s) | Features | Classifier | Setup | System |
|---|---|---|---|---|---|---|
| IV-A | 15 | $10,20,30,60,90$ | PSD+AR | Cos. Dist. | R&B | Verification |
| IV-B | 15 | $60$ | PSD, AR, PSD+AR | Cos. Dist. | R | Verification |
| IV-C | 15 | $60$ | PSD+AR | Cos. Dist., LDA, SVM | R&B | Verification |
| IV-D | 15+5 | $60$ | PSD+AR | Cos. Dist., LDA, SVM | R&B | Verification |
| IV-E | 15 | $60$ | PSD+AR | Cos. Dist. | R | Verification |
| IV-F | 15 | $10,20,30,60,90$ | PSD+AR | Cos. Dist. | R&B | Identification |
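The verification metrics of Section III-G follow directly from the binary predictions; the small helper below is a sketch of that computation (client = positive class). As a cross-check, the counts of the Setup-R, $L_{seg}=60$ s row of Table VI are reproduced in the closing comment.

```python
import numpy as np

def verification_metrics(y_true, y_pred):
    """y_true, y_pred: binary arrays (1 = client, 0 = imposter).
    Returns FAR, FRR, HTER and AC in percent, per Section III-G."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    far = 100.0 * fp / (fp + tn)
    frr = 100.0 * fn / (tp + fn)
    hter = 0.5 * (far + frr)
    ac = 100.0 * (tp + tn) / (tp + tn + fp + fn)
    return far, frr, hter, ac

# Example: (TP, FN, FP, TN) = (183, 87, 87, 3693), the Setup-R, L_seg = 60 s
# entry of Table VI, gives FAR ~ 2.3 %, FRR ~ 32.2 %, HTER ~ 17.2 %, AC ~ 95.7 %.
```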
### IV-A Client-Imposter verification with different segment sizes

Table VI summarises the validation results for both Setup-R and Setup-B, over the different segment sizes $L_{seg}=10,20,30,60,90$ s. Both the PSD and AR features were used. The entries in the TP, FN, FP and TN columns denote the numbers of segments obtained at the validation stage. The chance level in this scenario is 14/15 = 93.3 %; this is because every subject is used once as the client, $Y_{VC}\in\mathbb{R}^{N\times D}$ in Table III, and imposter data are selected from the other 'non-client' subjects, $Y_{VI}\in\mathbb{R}^{14N\times D}$ in Table III. The ratio between $Y_{VC}$ and $Y_{VI}$ is therefore $1:14$, and thus the chance level is 14/15. In Setup-R, the results with $L_{seg}=60$ s achieved both the best HTER, 17.2 %, and the best accuracy (AC), 95.7 %. Notice that the number of TP (= 183) is larger than FN + FP (= 174) with $L_{seg}=60$ s; therefore, the likelihood of making a true positive verification is higher than that of making a false verification. In Setup-B, the results with $L_{seg}=90$ s achieved both the best HTER, 6.9 %, and the best accuracy (AC), 98.3 %.

TABLE VI: Client-Imposter verification over different segment sizes

| | $L_{seg}$ | TP | FN | FP | TN | FAR (%) | FRR (%) | HTER (%) | AC (%) |
|---|---|---|---|---|---|---|---|---|---|
| _Setup-R_ | 10 s | 622 | 996 | 996 | 21656 | 4.4 | 61.6 | 33.0 | 91.8 |
| | 20 s | 371 | 439 | 439 | 10901 | 3.9 | 54.2 | 29.1 | 92.8 |
| | 30 s | 271 | 269 | 269 | 7291 | 3.6 | 49.8 | 26.7 | 93.4 |
| | 60 s | 183 | 87 | 87 | 3693 | 2.3 | 32.2 | 17.2 | 95.7 |
| | 90 s | 120 | 60 | 60 | 2460 | 2.4 | 33.3 | 17.8 | 95.6 |
| _Setup-B_ | 10 s | 1097 | 523 | 523 | 22157 | 2.3 | 32.3 | 17.3 | 95.7 |
| | 20 s | 611 | 199 | 199 | 11141 | 1.8 | 24.6 | 13.2 | 96.7 |
| | 30 s | 425 | 115 | 115 | 7445 | 1.5 | 21.3 | 11.4 | 97.2 |
| | 60 s | 233 | 37 | 37 | 3743 | 1.0 | 13.7 | 7.3 | 98.2 |
| | 90 s | 157 | 23 | 23 | 2497 | 0.9 | 12.8 | 6.9 | 98.3 |

TP: True Positive, FN: False Negative, FP: False Positive, TN: True Negative, FAR: False Accept Rate, FRR: False Reject Rate, HTER: Half Total Error Rate, AC: Accuracy

### IV-B Client-Imposter verification with different features

Table VII shows the validation results in Setup-R over a range of feature selections – AR coefficients, frequency band power, and their combination – for a segment length of $L_{seg}=60$ s. The classification results using both the AR and PSD features were the best in terms of both HTER and AC, and correspond to the $L_{seg}=60$ s entry of Table VI (upper panel).

TABLE VII: Client-Imposter verification over different features in Setup-R (rigorous setup)

| $L_{seg}=60$ s | No. features $D$ | FAR (%) | FRR (%) | HTER (%) | AC (%) |
|---|---|---|---|---|---|
| AR | 20 | 4.8 | 66.7 | 35.8 | 91.1 |
| PSD | 6 | 4.4 | 61.1 | 32.8 | 91.9 |
| AR + PSD | 26 | 2.3 | 32.2 | 17.2 | 95.7 |

### IV-C Client-Imposter verification with different classifiers

Table VIII shows the imposter-client verification performance based on the minimum cosine distance, LDA, and SVM, for both Setup-R and Setup-B, with a segment size of $L_{seg}=60$ s. Both the PSD and AR features were used. In Setup-R, the results with the cosine distance were the best in terms of both the HTER, 17.2 %, and the AC, 95.7 %. In Setup-B, both the HTER and the AC were best with the SVM classifier, at 5.5 % and 99.0 %, respectively.
TABLE VIII: Client-Imposter verification with different classifiers ($L_{seg}=60$ s)

| | TP | FN | FP | TN | FAR (%) | FRR (%) | HTER (%) | AC (%) |
|---|---|---|---|---|---|---|---|---|
| _Setup-R: rigorous_ | | | | | | | | |
| Cos. Dist. | 183 | 87 | 87 | 3693 | 2.3 | 32.2 | 17.2 | 95.7 |
| LDA | 149 | 121 | 139 | 3641 | 3.7 | 44.8 | 24.2 | 93.6 |
| SVM | 135 | 135 | 54 | 3726 | 1.4 | 50.0 | 25.7 | 95.3 |
| _Setup-B: biased_ | | | | | | | | |
| Cos. Dist. | 233 | 37 | 37 | 3743 | 1.0 | 13.7 | 7.3 | 98.2 |
| LDA | 200 | 70 | 87 | 3693 | 2.3 | 25.9 | 14.1 | 96.1 |
| SVM | 241 | 29 | 11 | 3769 | 0.3 | 10.7 | 5.5 | 99.0 |

### IV-D Validation including non-registered imposters

Table IX summarises the confusion matrices of both Setup-R and Setup-B with a segment size of $L_{seg}=60$ s, classified by the minimum cosine distance, LDA, and SVM; these correspond to the Setup-R and Setup-B panels of Table VIII. The confusion matrices were categorised into:

* Client matrix $Y_{VC}$ from dataset $S_{R}$,
* Imposter matrix $Y_{VI}$ from dataset $S_{R}$,
* Imposter matrix $Y_{VI_{N}}$ from dataset $S_{N}$.

Notice that the minimum cosine distance approach assigns to a test sample the class (client or imposter) of the nearest data in the training matrix. In this study, every trial of every subject was selected as client data for validation, resulting in ninety validations; therefore, _the nearest training data (from dataset $S_{R}$) for each imposter sample from dataset $S_{N}$ also belong to the designated 'client' in a fraction of the validations._ Whenever the nearest data for an $S_{N}$ sample are the 'client' data in the training matrix, the imposter data from dataset $S_{N}$ are straightforwardly classified as 'client'. Therefore, regardless of the data, the TPR for the imposter matrix $Y_{VI_{N}}$ is 93.3 % for the minimum cosine distance approach; however, for comparison among the different classifiers, these results are also included. In Setup-R, the TPR of the client $Y_{VC}$ achieved by the minimum cosine distance was the highest, at 67.8 %. However, the TPRs obtained by the SVM for the imposters $Y_{VI}$ and $Y_{VI_{N}}$ were 98.6 % and 96.2 %, respectively, which were higher than those achieved by LDA. In Setup-B, both the TPR of the client $Y_{VC}$ and those of the imposters $Y_{VI}$ and $Y_{VI_{N}}$ were highest with the SVM, with respective values of 89.3 %, 99.7 % and 96.3 %.

TABLE IX: Confusion matrices of the Client-Imposter verification scenario from the different datasets, in both Setup-R and Setup-B ($L_{seg}=60$ s)

_Setup-R_

| Classifier | Label | Predicted client | Predicted imposter | Total | TPR (%) |
|---|---|---|---|---|---|
| Cos. Dist. | Client $Y_{VC}$ | 183 | 87 | 270 | 67.8 |
| | Imposter $Y_{VI}$ | 87 | 3693 | 3780 | 97.7 |
| | Imposter $Y_{VI_{N}}$∗ | 90∗ | 1260∗ | 1350∗ | 93.3∗ |
| LDA | Client $Y_{VC}$ | 149 | 121 | 270 | 55.2 |
| | Imposter $Y_{VI}$ | 139 | 3641 | 3780 | 96.3 |
| | Imposter $Y_{VI_{N}}$ | 136 | 1214 | 1350 | 89.9 |
| SVM | Client $Y_{VC}$ | 135 | 135 | 270 | 50.0 |
| | Imposter $Y_{VI}$ | 54 | 3726 | 3780 | 98.6 |
| | Imposter $Y_{VI_{N}}$ | 52 | 1298 | 1350 | 96.2 |

_Setup-B_
| Classifier | Label | Predicted client | Predicted imposter | Total | TPR (%) |
|---|---|---|---|---|---|
| Cos. Dist. | Client $Y_{VC}$ | 233 | 37 | 270 | 86.3 |
| | Imposter $Y_{VI}$ | 37 | 3743 | 3780 | 99.0 |
| | Imposter $Y_{VI_{N}}$∗ | 90∗ | 1260∗ | 1350∗ | 93.3∗ |
| LDA | Client $Y_{VC}$ | 200 | 70 | 270 | 74.1 |
| | Imposter $Y_{VI}$ | 87 | 3693 | 3780 | 97.7 |
| | Imposter $Y_{VI_{N}}$ | 152 | 1198 | 1350 | 88.7 |
| SVM | Client $Y_{VC}$ | 241 | 29 | 270 | 89.3 |
| | Imposter $Y_{VI}$ | 11 | 3769 | 3780 | 99.7 |
| | Imposter $Y_{VI_{N}}$ | 50 | 1300 | 1350 | 96.3 |

∗ Always the same result regardless of the data

### IV-E Client-Imposter verification results per subject

Table X (middle columns) summarises the subject- and day-wise validation results with the PSD and AR features from $L_{seg}=$ 60 s segments in Setup-R, which correspond to the $L_{seg}=$ 60 s entry of Table VI (upper panel). The first and second columns in the Verification part show, respectively, the subject-wise AC and HTER with the training matrix $Y_{T}$ selected from all the first-day recordings and the classification performed on the second-day recordings. The third and fourth columns in the Verification part show the classification results obtained with the training matrix $Y_{T}$ selected from all the second-day recordings in order to classify the first-day recordings.

### IV-F Biometrics identification scenarios

Table X (right column) summarises the subject-wise identification rate obtained by the minimum cosine distance classifier with the PSD and AR features from $L_{seg}=60$ s segments in Setup-R. Previously, we considered a binary client-imposter classification problem (i.e. $M=2$) for each subject, each day, and each trial; the classification algorithm used was the simple minimum cosine distance between the validation matrix and the training matrix. For the prediction of the $l$th row of the validation matrix, $Y_{V_{(l,:)}}$, the training-matrix row at minimum distance, say the $n$th row of $Y_{T}$, was found, and the label of that row was assigned as the prediction for the $l$th row of the validation matrix. Notice that _the minimum distance approach is equally applicable to biometrics identification problems, which are one-to-many classifications._ The number of classes was $M=15$, which corresponds to the number of subjects in the $S_{R}$ recordings; the chance level was therefore 1/15 = 6.7 %. The achieved identification rate was 67.8 % with $L_{seg}=60$ s segments in Setup-R, while the achieved Cohen's kappa coefficient was $\kappa=0.65$ (substantial agreement) [35]. Figure 7 shows the identification rates for both Setup-R and Setup-B, with the different segment sizes $L_{seg}=10,20,30,60,90$ s. In Setup-B, the identification rate with $L_{seg}=10$ s was 67.7 %, which was almost the same as the result with $L_{seg}=60$ s in Setup-R. The highest identification rate, 87.2 %, was achieved with $L_{seg}=90$ s in Setup-B, with a corresponding kappa of $\kappa=0.86$ (almost perfect agreement).

Figure 7: The identification rate with different segment sizes in Setup-R and Setup-B. The error bars indicate the standard error.
TABLE X: Recording details, accuracy and HTER in the verification problem, and sensitivity in the identification problem, for each subject with different training and validation data in Setup-R

| Sub | Interval (days) | Impedance Day1 (k$\Omega$) | Impedance Day2 (k$\Omega$) | Time∗ Day1 | Time∗ Day2 | AC (%) $Y_{T}$:Day1$\rightarrow$$Y_{V}$:Day2 | HTER (%) $Y_{T}$:Day1$\rightarrow$$Y_{V}$:Day2 | AC (%) $Y_{T}$:Day2$\rightarrow$$Y_{V}$:Day1 | HTER (%) $Y_{T}$:Day2$\rightarrow$$Y_{V}$:Day1 | SE (%) ($M=15$) |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 5 | $<9$ | $<10$ | M | A | 98.5 | 6.0 | 100 | 0.0 | 94.4 |
| 2 | 6 | $<6$ | $<6$ | A | A | 94.8 | 28.6 | 97.0 | 11.9 | 61.1 |
| 3 | 8 | $<5$ | $<8$ | A | A | 98.5 | 0.8 | 94.8 | 33.8 | 66.7 |
| 4 | 7 | $<9$ | $<10$ | A | M | 95.6 | 28.2 | 96.3 | 7.2 | 66.7 |
| 5 | 7 | $<8$ | $<4$ | A | A | 96.3 | 17.5 | 97.0 | 22.2 | 61.1 |
| 6 | 7 | $<12$ | $<14$ | A | A | 91.1 | 20.2 | 97.8 | 6.4 | 77.8 |
| 7 | 15 | $<12$ | $<13$ | M | M | 98.5 | 11.1 | 96.3 | 22.6 | 66.7 |
| 8 | 6 | $<11$ | $<11$ | M | M | 91.1 | 35.8 | 91.9 | 35.4 | 33.3 |
| 9 | 7 | $<10$ | $<13$ | A | A | 96.3 | 12.3 | 95.6 | 12.7 | 77.8 |
| 10 | 8 | $<9$ | $<9$ | M | M | 91.9 | 35.4 | 93.3 | 8.7 | 61.1 |
| 11 | 6 | $<11$ | $<13$ | A | A | 95.6 | 17.8 | 94.8 | 13.1 | 72.2 |
| 12 | 5 | $<13$ | $<11$ | M | M | 96.3 | 17.5 | 99.3 | 0.4 | 83.3 |
| 13 | 7 | $<13$ | $<9$ | M | M | 95.6 | 17.8 | 97.8 | 11.5 | 72.2 |
| 14 | 6 | $<8$ | $<9$ | M | A | 97.8 | 11.5 | 95.6 | 33.4 | 55.6 |
| 15 | 5 | $<9$ | $<13$ | A | A | 94.1 | 13.5 | 91.9 | 25.0 | 66.7 |
| Ave. | 7 | - | - | - | - | 95.5 | 18.2 | 96.0 | 16.3 | IR = 67.8 % |

_Setup-R: rigorous_, $L_{seg}=60$ s; overall AC = 95.7 %, HTER = 17.2 %; $\kappa$ = 0.65. ∗ M: Morning (9-12), A: Afternoon (12-18); bold: best result, italic: worst result, out of 15 subjects.

## V Discussion

This study aims to establish repeatable and highly collectable EEG biometrics using a wearable in-ear sensor. We considered a biometric verification problem, which was cast into a one-to-one client-imposter classification setting. Notice that, as described in Section III-F, before classification the validation matrix was normalised column-wise to the range $[0,1]$ using the corresponding maximum/minimum values of the training matrix.

### V-A Verification with different segment sizes

Firstly, the classification results were compared for different segment lengths $L_{seg}$, shown in Table VI. Within the same setup, i.e. Setup-R or Setup-B, the performance in terms of HTER and AC improved with the segment length, although the results with $L_{seg}=60$ s and $L_{seg}=90$ s are almost the same. Longer segments allowed more data epochs to be averaged over, hence the interference of EEG noise with the classification diminished and the inherent EEG characteristics could be captured by averaging. However, a longer segment length also implies a longer recording time, which is not ideal for feasible EEG biometrics. Comparing the results of Setup-R and Setup-B for the same segment length, both the HTERs and accuracies (ACs) of Setup-B were clearly better than those of Setup-R. In terms of client discrimination, the decrease in FRR from Setup-R to Setup-B was significant. In Setup-B, a larger number of client segments was correctly classified (see TP) than in Setup-R. In Setup-R, only the result with $L_{seg}=60$ s achieved TP > (FN + FP), which indicates that the likelihood of making a true positive verification is higher than that of making a false verification; in Setup-B, already the shortest segment size, $L_{seg}=10$ s, achieved TP > (FN + FP).
The difference between Setup-R and Setup-B was that the training matrices $Y_{T}$ in Setup-B included the trials which were recorded on the same day as the validation trial. In other words, the assigned validation data (trial) and part of the assigned training data (trials) were recorded within 5-10 minutes of each other, in the same environment. Therefore, in Setup-B the training matrix contains EEG recordings that are very similar to the validation matrix, and this leads to a higher classification performance than in Setup-R. With an increase in the segment size $L_{seg}$, the number of segments per recording trial, $N$, becomes smaller, down to $N=2$ for $L_{seg}=90$ s; the training matrix then contains only six ($2\times 3$ trials) examples of client data in Setup-R (cf. ten examples in Setup-B), which might not be enough for training on the client data. Hence, the performance with $L_{seg}=90$ s was slightly lower than that with $L_{seg}=60$ s in Setup-R.

### V-B Verification with different classifiers

Table VIII compares the classification performance of the minimum cosine distance method, LDA and SVM. The SVM was used as a parametric classifier; first, the optimal hyper-parameters (see Table IV) were selected by 5-fold cross-validation within the training matrix, and then the weight parameters were obtained based on these chosen hyper-parameters. Notice that we could tune the classifier in different ways, e.g. in order to minimise false acceptances or to minimise false rejections. The tuning in this study was performed so as to maximise the class sensitivities, i.e. to maximise the numbers of TP and TN elements, which resulted in minimum HTERs. In both Setup-R and Setup-B, the FARs obtained by the SVM were smaller than those achieved by both the minimum cosine distance and LDA, because the tuning maximised the number of TN elements. Since the number of imposter elements was fourteen times larger than the number of client elements in both Setup-R and Setup-B (i.e. the chance level was 14/15), the SVM parameters were effectively tuned for higher sensitivity to imposters. As a result, the FRRs obtained by the SVM, which relate to the client sensitivity given in Table IX, were higher than those achieved by both the minimum cosine distance and LDA in Setup-R.

In Setup-B, as mentioned above, the training matrix contains data from the same recording day, whose EEG patterns are more similar to the validation data than data obtained on a different recording day. Therefore, the hyper-parameters and weight parameters chosen by the SVM from the training matrix also fitted the validation data better in Setup-B, which led to a higher performance than that of both the minimum cosine distance and LDA. Notice that, as described before, the tuning of the hyper-parameters was performed within the training matrix, and the so-obtained hyper-parameters were then used for finding the optimal weight parameters within the training matrix. The same hyper-parameters and weight parameters were used for classifying the validation matrix. This setup is applicable to feasible EEG biometrics scenarios in the real world.

### V-C Validation including non-registered imposters

Table IX gives the confusion matrices for the client matrix $Y_{VC}$ and the imposter matrix $Y_{VI}$ from dataset $S_{R}$, and for the imposter matrix $Y_{VI_{N}}$ from dataset $S_{N}$, which were then used for a comparison between Setup-R and Setup-B.
Comparing the results obtained with the same classifier (minimum cosine distance, LDA, SVM) in Setup-R and Setup-B, the true positive rate (TPR) of the clients $Y_{VC}$ and the sensitivity to the imposters $Y_{VI}$ from dataset $S_{R}$ were higher in Setup-B than in Setup-R. As described before, in Setup-B the two client trials from the same recording day as the validation trial were included in the training matrix, and therefore more segments were correctly classified. In contrast, the TPRs for the imposters $Y_{VI_{N}}$ from dataset $S_{N}$ obtained by both LDA and SVM in Setup-B were almost the same as those in Setup-R: 89.9 % and 88.7 % for LDA, and 96.2 % and 96.3 % for SVM.

Comparing the TPRs of the two imposter datasets ($Y_{VI}$ and $Y_{VI_{N}}$), regardless of the classifier, the TPRs of $Y_{VI}$ were higher than those of $Y_{VI_{N}}$. Since the imposter data from $S_{N}$ were not included in the training matrix, more $S_{N}$ data than $S_{R}$ data were misclassified as 'client'. However, in real-world biometrics scenarios, imposters are not always 'registered'. The lower TPR for $Y_{VI_{N}}$ means that the application is vulnerable to attacks from non-registered subjects. One potential way to overcome this vulnerability of the minimum cosine distance classifier is to introduce a distance threshold for classification: if the nearest distance is larger than the given threshold, the segment is either excluded from the classification or classified as an imposter.

### V-D Client-Imposter verification results per subject

For subject-wise classification, Table X summarises the classification results obtained by the minimum cosine distance in Setup-R, for the different training-validation scenarios. The results varied across subjects and training-validation configurations, between 91.1 % and 100 % in AC and between 0.0 % and 35.8 % in HTER. The size of the viscoelastic earpiece was the same for all twenty subjects (both the $S_{R}$ and $S_{N}$ subjects), and all the subjects were able to wear it comfortably. The upper bounds of the electrode impedance over the three recordings per day of each participant are given in Table X (Impedance columns). The highest performance was achieved by Subject 1, with maximum impedances of 9 k$\Omega$ and 10 k$\Omega$ for the 1st and 2nd recording day, respectively. Even though the impedances for Subject 2 and Subject 5 were smaller than those for Subject 1, the corresponding performance was below the average over the fifteen subjects. Moreover, the lowest performance was exhibited by Subject 8, for whom the impedances were below 11 k$\Omega$ on both Day1 and Day2. Figure 8 shows the average PSDs for Subject 8 – observe that the PSDs for the EEG recorded on Day2 (blue) are slightly larger than those of Day1 (red).

Figure 8: Power spectral density for the in-ear EEG Ch1 (left) and the in-ear EEG Ch2 (right) of Subject 8. The thick lines correspond to the averaged periodogram obtained from all recordings of the 1st day (red) and the 2nd day (blue), whereas the thin lines are the averaged periodograms obtained from single trials.
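A sketch of the distance-threshold variant mentioned above is given below: if even the nearest training row is not similar enough, the segment is rejected as an imposter rather than inheriting the neighbour's label. The threshold value is a free parameter that would have to be tuned on the training data; nothing in this sketch is taken from the original study.

```python
import numpy as np

def cosine_nn_predict_with_threshold(Y_T, y_T, Y_V, max_dist=0.05):
    """Nearest-neighbour prediction with open-set rejection:
    segments whose minimum cosine distance exceeds `max_dist`
    are classified as imposters (label 0) regardless of the neighbour."""
    T = Y_T / np.linalg.norm(Y_T, axis=1, keepdims=True)
    V = Y_V / np.linalg.norm(Y_V, axis=1, keepdims=True)
    dist = 1.0 - V @ T.T                 # cosine distances to all training rows
    nearest = np.argmin(dist, axis=1)
    pred = y_T[nearest].copy()
    pred[dist[np.arange(len(V)), nearest] > max_dist] = 0
    return pred
```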
The identification rate increased with the segment length, although the results with $L_{seg}=60$ s and $L_{seg}=90$ s are almost the same. Notice that the performances with $L_{seg}=10$ s in Setup-B and $L_{seg}=60$ s in Setup-R were almost the same, 67.7 % and 67.8 %, respectively. The highest identification rate in Setup-B was 87.2 %, with $L_{seg}=90$ s. Indeed, the training matrix for Setup-B included the trials which were recorded on the same day as the validation trial; therefore the performance was better than in Setup-R. In a previous biometrics identification study, Maiorana et al. [22] analysed 19 channels of EEG during EC tasks on three different recording days, and achieved a rank-1 identification rate (R1IR) of 90.8 % for a segment length of 45 s. Notice that a direct comparison with our approach is difficult, because the number of channels is very different: 19 scalp EEG channels covering the entire head versus our 2 in-ear EEG channels embedded in an earplug. Therefore, although our results were lower, this proof-of-concept of in-ear biometrics emphasised the _collectability_ aspect in fully wearable scenarios.

### V-F Alpha attenuation in real-world scenarios

One limitation of using the alpha band is its sensitivity to drowsiness, a state in which the alpha band power is naturally attenuated. For illustration, Figure 9 shows the PSDs obtained from one subject, calculated by Welch's averaged periodogram method. The subject slept during one recording; the subject was then woken up and another recording was started less than 10 minutes after the first one. The PSD graphs in Figure 9 overlap except in the alpha band; the alpha power observed during the 'sleepy' recording trial was smaller than that of the 'normal' recording, thus demonstrating the alpha attenuation due to fatigue, sleepiness, and drowsiness. Alpha attenuation is well known in sleep medicine research [36, 26], where it is used in particular to monitor sleep onset.

Figure 9: Power spectral density for the in-ear EEG Ch1 (left) and the in-ear EEG Ch2 (right) of one subject. The thick lines correspond to the averaged periodogram obtained from the 'sleepy' trial (red) and the 'normal' trial (blue). Observe that the alpha power is attenuated during the 'sleepy' trial.

## VI Conclusion

We have introduced a proof-of-concept for feasible, collectable and reproducible EEG biometrics by virtue of an unobtrusive, discreet, and convenient-to-use in-ear EEG device. We have employed robust PSD and AR features to identify an individual and, unlike most existing studies, we have performed the classification rigorously, without mixing training and validation data from the same recording days. We have achieved an HTER of 17.2 % with an AC of 95.7 % for a segment size of 60 s, over the dataset from fifteen subjects.
The aspects that need to be further addressed in order to fulfil the requirements for ‘truly wearable biometrics’ in the ‘real-world’ will focus on extensions and generalisations of this proof-of-concept to cater for: * • Intra-subject variability with respect to the circadian cycle and the mental state, such as fatigue, sleepiness, and drowsiness; * • Additional feasible recording paradigms, for example, evoked response scenarios; * • Truly wearable scenarios with mobile and affordable amplifiers; * • Inter- and intra-subject variability over the period of months and years; * • Fine tuning of the variables involved in order to identify the optimal features and parameters (segment length, additional EEG bands). ## VII Acknowledgement We wish to thank the anonymous reviewers for their insightful comments. ## References * [1] A. Jain, L. Hong, and S. Pankanti, “Biometric identification,” Communications of the ACM, vol. 43, no. 2, pp. 90–98, 2000. * [2] Y. Sato, F. Akazawa, D. Muramatsu, T. Matsumoto, A. Nakamura, and T. Sota, “An authentication method by high spectral resolution palm datacube,” in Proceedings of the International Conference on Biometrics and Kansei Engineering, pp. 239–244, 2013. * [3] C. D. Holland and O. V. Komogortsev, “Complex eye movement pattern biometrics: The effects of environment and stimulus,” IEEE Transactions on Information Forensics and Security, vol. 8, no. 12, pp. 2115–2126, 2013. * [4] I. Odinaka, P. H. Lai, A. D. Kaplan, J. A. O’Sullivan, E. J. Sirevaag, S. D. Kristjansson, A. K. Sheffield, and J. W. Rohrbaugh, “ECG biometrics: A robust short-time frequency analysis,” in Proceedings of IEEE International Workshop on Information Forensics and Security (WIFS), 2010. * [5] Y. Liu and D. Hatzinakos, “Earprint: Transient evoked otoacoustic emission for biometrics,” IEEE Transactions on Information Forensics and Security, vol. 9, no. 12, pp. 2291–2301, 2014. * [6] S. Prabhakar, S. Pankanti, and A. Jain, “Biometric recognition: Security and privacy concerns,” IEEE Security & Privacy Magazine, vol. 1, no. 2, pp. 33–42, 2003. * [7] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, “Brain computer interfaces for communication and control,” Frontiers in Neuroscience, vol. 4, no. 113, pp. 767–791, 2002. * [8] L. C. Johnson and G. A. Ulett, “Quantitative study of pattern and stability of resting electroencephalographic activity in a young adult group,” Electroencephalography and Clinical Neurophysiology, vol. 11, no. 2, pp. 233–249, 1959. * [9] J. Berkhout and D. O. Walter, “Temporal stability and individual differences in the human EEG: An analysis of variance of spectral values,” IEEE Transactions on Biomedical Engineering, no. 3, pp. 165–168, 1968. * [10] R. Palaniappan and D. P. Mandic, “EEG based biometric framework for automatic identity verification,” Journal of VLSI Signal Processing, vol. 49, no. 2, pp. 243–250, 2007. * [11] R. Palaniappan and D. P. Mandic, “Biometrics from brain electrical activity: A machine learning approach,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 4, pp. 738–742, 2007. * [12] D. Looney, P. Kidmose, C. Park, M. Ungstrup, M. Rank, K. Rosenkranz, and D. P. Mandic, “The in-the-ear recording concept: User-centered and wearable brain monitoring,” IEEE Pulse, vol. 3, no. 6, pp. 32–42, 2012. * [13] P. Campisi and D. L. Rocca, “Brain waves for automatic biometric-based user recognition,” IEEE Transactions on Information Forensics and Security, vol. 9, no. 5, pp. 782–800, 2014. 
* [14] D. L. Rocca, P. Campisi, and G. Scarano, “Stable EEG features for biometric recognition in resting state conditions,” Biomedical Engineering Systems and Technologies, vol. 452, pp. 313–330, 2014. * [15] M. K. Abdullah, K. S. Subari, J. L. C. Loong, and N. N. Ahmad, “Analysis of effective channel placement for an EEG-based biometric system,” in Proceedings of IEEE EMBS Conference on Biomedical Engineering and Sciences (IECBES), pp. 303–306, 2010. * [16] A. Riera, A. Soria-Frisch, M. Caparrini, C. Grau, and G. Ruffini, “Unobtrusive biometric system based on electroencephalogram analysis,” EURASIP Journal on Advances in Signal Processing, vol. 2008, 2008. * [17] F. Su, L. Xia, A. Cai, Y. Wu, and J. Ma, “EEG-based personal identification: From proof-of-concept to a practical system,” in Proceedings of International Conference on Pattern Recognition, pp. 3728–3731, 2010. * [18] S. Marcel and J. d. R. Millan, “Person authentication using brainwaves (EEG) and maximum a posteriori model adaptation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 4, pp. 743–748, 2007\. * [19] H. J. Lee, H. S. Kim, and K. S. Park, “A study on the reproducibility of biometric authentication based on electroencephalogram (EEG),” in Proceedings of the IEEE International EMBS Conference on Neural Engineering (NER), pp. 13–16, 2013. * [20] D. L. Rocca, P. Campisi, and G. Scarano, “On the repeatability of EEG features in a biometric recognition framework using a resting state protocol,” in Proceedings of the International Conference on Bio-inspired Systems and Signal Processing (BIOSIGNALS), pp. 419–428, 2013. * [21] B. C. Armstrong, M. V. Ruiz-Blondet, N. Khalifian, K. J. Kurtz, Z. Jin, and S. Laszlo, “Brainprint: Assessing the uniqueness, collectability, and permanence of a novel method for ERP biometrics,” Neurocomputing, vol. 166, pp. 59–67, 2015. * [22] E. Maiorana, D. La Rocca, and P. Campisi, “On the permanence of EEG signals for biometric recognition,” IEEE Transactions on Information Forensics and Security, vol. 11, no. 1, pp. 163–175, 2016. * [23] Y. Chen, A. D. Atnafu, I. Schlattner, W. T. Weldtsadik, M. C. Roh, H. J. Kim, S. W. Lee, B. Blankertz, and S. Fazli, “A high-security EEG-based login system with RSVP stimuli and dry electrodes,” IEEE Transactions on Information Forensics and Security, vol. 11, no. 12, pp. 2635–2647, 2016. * [24] P. Kidmose, D. Looney, M. Ungstrup, M. L. Rank, and D. P. Mandic, “A study of evoked potentials from ear-EEG,” IEEE Transactions on Biomedical Engineering, vol. 60, no. 10, pp. 2824–2830, 2013. * [25] D. Looney, V. Goverdovsky, I. Rosenzweig, M. J. Morrell, and D. P. Mandic, “A wearable in-ear encephalography sensor for monitoring sleep: Preliminary observations from nap studies,” Annals of the American Thoracic Society, vol. 13, no. 12, pp. 2229–2233, 2016. * [26] T. Nakamura, V. Goverdovsky, M. J. Morrell, and D. P. Mandic, “Automatic sleep monitoring using ear-EEG,” IEEE Journal of Translational Engineering in Health and Medicine, vol. 5, no. 1, p. 2800108, 2017. * [27] V. Goverdovsky, W. von Rosenberg, T. Nakamura, D. Looney, D. J. Sharp, C. Papavassiliou, M. J. Morrell, and D. P. Mandic, “Hearables: Multimodal physiological in-ear sensing,” Scientific Reports, vol. 7, no. 1, p. 6948, 2017. * [28] V. Goverdovsky, D. Looney, P. Kidmose, C. Papavassiliou, and D. P. Mandic, “Co-located multimodal sensing: A next generation solution for wearable health,” IEEE Sensors Journal, vol. 15, no. 1, pp. 138–145, 2015. * [29] V. Goverdovsky, D. 
Looney, P. Kidmose, and D. P. Mandic, “In-ear EEG from viscoelastic generic earpieces: Robust and unobtrusive 24/7 monitoring,” IEEE Sensors Journal, vol. 16, no. 1, pp. 271–277, 2016. * [30] M. T. Curran, J.-k. Yang, N. Merrill, and J. Chuang, “Passthoughts authentication with low cost earEEG,” in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, pp. 1979–1982, 2016. * [31] C. Neuper, R. H. Grabner, A. Fink, and A. C. Neubauer, “Long-term stability and consistency of EEG event-related (de-)synchronization across different cognitive tasks,” Clinical Neurophysiology, vol. 116, no. 7, pp. 1681–1694, 2005. * [32] M. H. Hayes, “Statistical digital signal processing and modeling,” 1996. * [33] K. P. Murphy, Machine learning: a probabilistic perspective. MIT press, 2012. * [34] C.-C. Chang and C.-J. Lin, “LIBSVM: A library for support vector machines,” ACM Transactions on Intelligent Systems and Technology (TIST), vol. 2, no. 3, p. 27, 2011. * [35] J. R. Landis and G. G. Koch, “The measurement of observer agreement for categorical data,” Biometrics, vol. 33, no. 1, pp. 159–174, 1977. * [36] M. H. Silber, S. Ancoli-Israel, M. H. Bonnet, S. Chokroverty, M. M. Grigg-Damberger, M. Hirshkowitz, S. Kapen, S. A. Keenan, M. H. Kryger, T. Penzel, M. R. Pressman, and C. Iber, “The visual scoring of sleep in adults,” Journal of Clinical Sleep Medicine, vol. 3, no. 2, pp. 121–131, 2007.
# Hydro-, Magnetohydro-, and Dust-Gas Dynamics of Protoplanetary Disks G. Lesur1, B. Ercolano2, M. Flock3, M.-K. Lin4,5, C.-C. Yang6, J. A. Barranco7, P. Benitez-Llambay8, J. Goodman9, A. Johansen10,11, H. Klahr3, G. Laibe12,13, W. Lyra14, P. Marcus15, R.P. Nelson16, J. Squire17, J. B. Simon18, N. Turner19, O.M. Umurhan20,21, A.N. Youdin22 1 Univ. Grenoble Alpes, CNRS, IPAG, 38000 Grenoble, France 2 Universitäts-Sternwarte, Fakultät für Physik, Ludwig-Maximilians-Universität München, Scheinerstr. 1, 81679 München, Germany 3 Max-Planck-Institut für Astronomie, Königstuhl 17, 69117, Heidelberg, Germany 4 Institute of Astronomy and Astrophysics, Academia Sinica, Taipei 10617, Taiwan 5 Physics Division, National Center for Theoretical Sciences, Taipei 10617, Taiwan 6 Department of Physics and Astronomy, University of Nevada, Las Vegas, 4505 S. Maryland Parkway, Box 454002, Las Vegas, NV 89154-4002, USA 7 Dept. of Physics & Astronomy, San Francisco State University, San Francisco, CA 94132, U.S.A. 8 Niels Bohr International Academy, Niels Bohr Institute, Blegdamsvej 17, DK-2100 Copenhagen Ø, Denmark 9 Department of Astrophysical Sciences, Princeton University, Princeton NJ, U.S.A. 10 Center for Star and Planet Formation, GLOBE Institute, University of Copenhagen, Øster Voldgade 5-7, 1350 Copenhagen, Denmark 11 Lund Observatory, Department of Astronomy and Theoretical Physics, Lund University, Box 43, 221 00 Lund, Sweden 12 Univ Lyon, Univ Lyon1, Ens de Lyon, CNRS, Centre de Recherche Astrophysique de Lyon UMR5574, F-69230, Saint-Genis,-Laval, France. 13 Institut Universitaire de France 14 Department of Astronomy, New Mexico State University, PO Box 30001, MSC 4500 Las Cruces, NM 88003, USA 15 Dept. of Mechanical Engineering, University of California, Berkeley, CA, 94720, U.S.A. 16 Department of Physics & Astronomy, Queen Mary University of London, Mile End Road, London, E1 4NS, U.K. 17 Physics Department, University of Otago, Dunedin 9010, New Zealand 18 Department of Physics and Astronomy, Iowa State University, Ames, IA, 50010, USA 19 Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, California 91109, U.S.A. 20 SETI Institute, 339 Bernardo Ave., Suite 200, Mountain View, CA 94043, U.S.A. 21 Space Sciences Division, NASA Ames Research Center, Moffett Field, CA 94035, U.S.A. 22 University of Arizona, Departments of Astronomy and Planetary Sciences ###### Abstract The building of planetary systems is controlled by the gas and dust dynamics of protoplanetary disks. While the gas is simultaneously accreted onto the central star and dissipated away by winds, dust grains aggregate and collapse to form planetesimals and eventually planets. This dust and gas dynamics involves instabilities, turbulence and complex non-linear interactions which ultimately control the observational appearance and the secular evolution of these disks. This chapter is dedicated to the most recent developments in our understanding of the dynamics of gaseous and dusty disks, covering hydrodynamic and magnetohydrodynamic turbulence, gas-dust instabilities, dust clumping and disk winds. We show how these physical processes have been tested from observations and highlight standing questions that should be addressed in the future. ## 1 INTRODUCTION We focus in this chapter on the progress since PPVI in the fields of angular momentum transport, turbulence, dust growth, and winds from planet-forming disks. We begin by reviewing the basics of disk dynamics. 
The interested reader may consult the reviews and lecture notes provided in the references for a more detailed discussion of these fundamental aspects.

### 1.1 Framework and observational constraints

The ongoing improvements in observational techniques, experiments, and simulations are opening a new window on our understanding of protoplanetary disk dynamics, as it becomes possible to test and validate scenarios that were, until recently, only theoretical concepts. In parallel, dynamical models are being refined, treating processes such as radiative heating and cooling, non-ideal magnetohydrodynamics (MHD), and outflows, to name a few. These efforts have led to significant revisions in our understanding of the mechanisms responsible for mass accretion. In addition, it has been realized that the dust/gas coupling is much richer than anticipated, leading to the recognition of new multi-phase instabilities and dynamical phenomena that can enhance planet-formation rates. The goal of this chapter is to review these recent advances and to point out the open questions and some links between them.

In the following and unless stated otherwise, we consider an isolated protoplanetary disk of mass $M_{\mathrm{disk}}$, perturbed neither by a binary stellar companion nor by infalling material from a remnant envelope, orbiting a central young stellar object of mass $M_{\mathrm{star}}$ assumed to be near a solar mass. This restriction allows us to make general statements about the disk independent of environmental effects, which should be applicable to most class II disks. We note however that disks are not always isolated, as streamers are sometimes observed extending over thousands of astronomical units (e.g. _Pineda et al._ , 2020). We assume the disk is low enough in mass to be non-self-gravitating, i.e. $M_{\mathrm{disk}}/M_{\mathrm{star}}<O(10^{-1})$ (_Armitage_ , 2011). More precise threshold disk masses can be found in, e.g. _Kratter and Lodato_ (2016); _Haworth et al._ (2020). The disk is accreting onto the star, forming planets from dust grains, and ultimately dispersing on a timescale of a few million years.

In this introduction, we define the main quantities and concepts used throughout the chapter relating to accretion, angular momentum transport (§1.2), and dust dynamics (§1.3). In subsequent sections, we focus on instabilities of the gas phase leading to hydrodynamical turbulence (§2). We emphasize the conditions for the existence of these instabilities, which turn out to depend mainly on thermodynamics. We next move to the coupling with dust grains (§3), describing instabilities and dust clumping arising because of dust-gas interactions. In §4, we discuss the roles of magnetic fields, magneto-rotational turbulence, and winds launched by gas pressure and magnetic forces. Finally, we link these theoretical processes to present and future observations in §5. We conclude with a summary of the major achievements of the field since PPVI and highlight several key questions which should be addressed in the future.

### 1.2 Accretion, angular momentum transport, and turbulence

Protoplanetary disks are accretion disks, delivering material onto their central stars at rates typically between $10^{-10}$ and $10^{-7}\,M_{\odot}/\mathrm{yr}$ (_Venuti et al._ , 2014; _Hartmann et al._ , 2016; _Manara et al._ , 2016). Their formation, evolution, and dispersal are closely linked to their angular momentum content and, since angular momentum is a conserved quantity, to how the angular momentum is transported.
This question of angular momentum transport has been the subject of a great deal of work over the past 50 years, not only in protoplanetary disks, but also in the disks around stellar remnants and in active galactic nuclei. Essentially two transport mechanisms have been proposed: turbulence and magnetized disk winds (MDW). The former has attracted most attention because of the simplicity of treating it using the _Shakura and Sunyaev_ (1973) pseudo-viscosity paradigm. Winds, initially proposed by _Blandford and Payne_ (1982), have regained popularity over the last ten years as a way to transport angular momentum in otherwise “dead” disks. The links between accretion flow, angular momentum transport, and disks’ secular evolution can be seen in the equations of mass and angular momentum conservation, averaged through the disk thickness (e.g. _Lesur_ 2021a):

$\frac{\partial\Sigma}{\partial t}+\frac{1}{2\pi R}\frac{\partial\dot{M}_{\mathrm{acc}}}{\partial R}=-\zeta\Sigma\Omega_{\mathrm{K}},$ (1)

$\frac{\dot{M}_{\mathrm{acc}}\Omega_{\mathrm{K}}}{4\pi}=\frac{1}{R}\frac{\partial}{\partial R}\left(R^{2}\alpha_{\mathrm{S}}P\right)+\zeta(\lambda-1)P\left(\frac{R}{H}\right)^{2},$ (2)

where we define the disk mass accretion rate $\dot{M}_{\mathrm{acc}}$, surface density $\Sigma$ and pressure $P$, Keplerian angular velocity $\Omega_{\mathrm{K}}$ and geometrical thickness $H$. In these equations we include three unknown dimensionless coefficients: $\zeta$, the mass loss parameter due to a hypothetical outflow; $\alpha_{\mathrm{S}}$, which corresponds to the _Shakura and Sunyaev_ (1973) definition and describes radial angular momentum transport within the disk, such as by turbulence; and finally $\lambda$, the outflow lever arm (_Blandford and Payne_ , 1982), which quantifies the specific angular momentum extracted vertically by an outflow. All three of these coefficients may vary with radius and time. Once the three coefficients are known, the secular evolution of the system can be entirely predicted and accretion theory is complete.

If the disk accretion is driven solely by turbulence of hydrodynamical or magnetohydrodynamical origin, then the accretion rate can be derived from (2) and is approximately (_Hartmann et al._ , 1998)

$\dot{M}_{\mathrm{acc}}\sim\left(3\times 10^{-8}\,M_{\odot}/\mathrm{yr}\right)\left(\frac{\alpha_{\mathrm{S}}}{10^{-2}}\right)\left(\frac{\varepsilon}{0.1}\right)^{2}\left(\frac{R}{10\,\mathrm{AU}}\right)^{1/2}\left(\frac{M_{\mathrm{star}}}{M_{\odot}}\right)^{1/2}\left(\frac{\Sigma}{10\,\mathrm{g.cm}^{-2}}\right)$ (3)

where we use the disk aspect ratio $\varepsilon\equiv H/R$ and assume $\Sigma(R)$ follows a shallow power law (typically $\propto R^{-1/2}$). Hence typically $\alpha_{\mathrm{S}}\sim 10^{-2}$ is required to match the observed accretion rates by turbulence alone. It is worth noting that gas turbulence need not always lead to outward angular momentum transport. One can obtain $\alpha_{\mathrm{S}}\sim 0$ from turbulence that is vigorous in the sense of large velocity fluctuations. This is particularly important in the context of dust transport and settling: some instabilities lead to efficient stirring of dust grains yet yield little angular momentum transport (e.g. §2.1). Hence the $\alpha_{\mathrm{S}}$ parameter gives incomplete information on the presence and impact of turbulence.
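As a rough numerical illustration of the scaling in Eq. (3), the short Python sketch below evaluates the turbulence-driven accretion rate for a few values of $\alpha_{\mathrm{S}}$; the default parameter values are assumptions chosen only for this example, not values prescribed by the chapter.

```python
# Minimal sketch of the accretion-rate scaling in Eq. (3).
# All default parameter values below are illustrative assumptions.

def mdot_turbulent(alpha_S, eps=0.1, R_au=10.0, M_star=1.0, sigma=10.0):
    """Turbulence-driven accretion rate in M_sun/yr following the scaling of Eq. (3).

    alpha_S : Shakura-Sunyaev transport coefficient
    eps     : disk aspect ratio H/R
    R_au    : cylindrical radius in AU
    M_star  : stellar mass in solar masses
    sigma   : gas surface density in g/cm^2
    """
    return (3e-8 * (alpha_S / 1e-2) * (eps / 0.1) ** 2
            * (R_au / 10.0) ** 0.5 * M_star ** 0.5 * (sigma / 10.0))

if __name__ == "__main__":
    for alpha in (1e-4, 1e-3, 1e-2):
        print(f"alpha_S = {alpha:.0e}  ->  Mdot ~ {mdot_turbulent(alpha):.1e} M_sun/yr")
    # Only alpha_S ~ 1e-2 reaches the upper end (~1e-8 -- 1e-7 M_sun/yr) of the
    # observed accretion-rate range if turbulence acts alone.
```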
### 1.3 Dust transport & growth

Dust grains have porous and fractal structures (_Dominik et al._ , 2007; _Blum and Wurm_ , 2008). They are often modeled as spheres of radii $s\lesssim 10$ cm and densities up to $\rho\sim 1$ g.cm-3 for compacted material (_Love et al._ , 1994). Aggregates evolve as they undergo collisions. Sticking is favored for small ice-coated grains, while collisions between large aggregates involve redistribution of energy via elastic waves, which can trigger sliding or breaking at points of contact between substructures. Sticking, bouncing, compaction, abrasion or fragmentation, mass transfer, cratering, and erosion are all possible outcomes of collisions (_Blum_ , 2018), as we discuss in §3.1.

The drag stopping time is $t_{\rm s}\sim(\rho_{\rm m}s)/(\rho_{\rm g}c_{\rm s})$, where $\rho_{\rm m}$ is the material density of the grain and $\rho_{\rm g}$ is the density of the surrounding gas (_Epstein_ 1923; _Baines et al._ 1965; _Clair et al._ 1970). Drag damps grain eccentricities and makes grains settle and drift radially (_Adachi et al._ , 1976). The competition between drag and the stellar gravity is measured by the dimensionless Stokes number (_Safronov_ , 1969; _Whipple_ , 1972, 1973) $\mathrm{St}\equiv\tau_{\mathrm{s}}\equiv\Omega_{\mathrm{K}}t_{\rm s}.$ (4) Drag affects the dynamics of solids most strongly for $\mathrm{St}\sim 1$: solids then concentrate strongly as they decouple from the gas, creating favorable conditions for planet formation. In a vertically isothermal disk, Eq. (4) gives $\mathrm{St}=((\rho_{\mathrm{m}}s)/(\sqrt{2\pi}\Sigma_{\mathrm{g}}))\exp(z^{2}/2H^{2})$, which shows that the surface density $\Sigma_{\rm g}$ is key for setting values of $\mathrm{St}$ and thus dust dynamics throughout the disk. In the midplane of a disk with $\Sigma_{\rm g}\propto R^{-1}$ and $\Sigma_{\rm g}\sim 200$ g.cm-2 at $R=1$ AU (which matches the low end of the observed mass distribution, e.g. _Andrews et al._ 2009), millimeter-sized grains have $\mathrm{St}_{0}\sim 0.04$ at $R\sim 50$ AU. Higher above the midplane, where the gas density drops, most grains have $\mathrm{St}\gtrsim 1$. Since $\Sigma_{g}$ can not be inferred directly from observations, values of $\mathrm{St}$ carry major uncertainties.

In the vertical direction, and in the absence of turbulence, grains behave as damped harmonic oscillators of quality factors $\sim\mathrm{St}_{0}=\mathrm{St}\left(z=0\right)$ driven by the mean flow of the gas at large scale. When turbulence is present, settling is counteracted by turbulent stirring, usually characterized by two parameters: the diffusivity $\alpha_{\mathrm{D}}$ (which is not necessarily equal to the angular momentum transport coefficient $\alpha_{\mathrm{S}}$; see Section 5.2) and the correlation time at eddy scales $t_{\rm e}$ (_Youdin and Lithwick_ , 2007). A classic choice is $\alpha_{\mathrm{D}}\sim\alpha_{\mathrm{S}}$ and $t_{\rm e}\sim\Omega_{\mathrm{K}}^{-1}$, the typical time for vortex stretching by differential rotation (see however §2 and §3.2.2). In steady state, solids concentrate close to the midplane within a layer of height $H_{\rm d}\simeq H\sqrt{\frac{\alpha_{\mathrm{D}}}{\alpha_{\mathrm{D}}+\mathrm{St}}}$ (5) (see also _Yang et al._ , 2018). As a result, larger grains settle into thinner layers.

In the radial direction, drag exchanges angular momentum between gas and dust, driving a radial drift for both phases. In an inviscid, non-magnetic, planet-free Keplerian disk, the radial drift velocity of the dust is given by $v_{\mathrm{d},R}=\frac{1}{\mathrm{St}+\mathrm{St}^{-1}\left(1+\epsilon_{\mathrm{d}}\right)^{2}}\frac{1}{\Omega_{\mathrm{K}}\rho_{\mathrm{g}}}\frac{\partial P}{\partial R}=-\epsilon_{\mathrm{d}}^{-1}v_{\mathrm{g},R},$ (6) where $\epsilon_{\mathrm{d}}=\rho_{\rm d}/\rho_{g}$ is the local dust-to-gas density ratio (_Nakagawa et al._ , 1986). Dust therefore drifts towards pressure maxima – either the inner disk or local pressure bumps – with optimal efficiency when $\mathrm{St}\sim 1$.
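The dependence of the dust layer thickness (Eq. 5) and of the radial drift (Eq. 6) on $\mathrm{St}$ can be made concrete with the minimal Python sketch below; the Stokes numbers and the value $\alpha_{\mathrm{D}}=10^{-3}$ used here are illustrative assumptions, not values prescribed by the chapter.

```python
import numpy as np

# Minimal sketch of Eqs. (5)-(6): dust settling thickness and radial drift speed
# as functions of the Stokes number. Parameter choices are illustrative assumptions.

def dust_scale_height(H, alpha_D, St):
    """Dust layer thickness H_d = H * sqrt(alpha_D / (alpha_D + St)), Eq. (5)."""
    return H * np.sqrt(alpha_D / (alpha_D + St))

def drift_prefactor(St, eps_d=0.0):
    """Prefactor of Eq. (6): v_{d,R} in units of (dP/dR)/(Omega_K rho_g)."""
    return 1.0 / (St + (1.0 + eps_d) ** 2 / St)

if __name__ == "__main__":
    alpha_D = 1e-3                      # assumed turbulent diffusivity
    for St in (1e-3, 0.04, 1.0, 10.0):  # 0.04 ~ the mm-grain value quoted in the text
        hd = dust_scale_height(1.0, alpha_D, St)
        print(f"St = {St:6.3f}  H_d/H = {hd:5.2f}  "
              f"drift prefactor = {drift_prefactor(St):6.3f}")
    # Larger St -> thinner dust layers (Eq. 5); the drift prefactor peaks at St ~ 1,
    # so marginally coupled grains drift fastest towards pressure maxima (Eq. 6).
```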
Drifting grains travel through the disk in a typical time $t_{\rm drift}\sim\Omega_{\mathrm{K}}^{-1}(H/R)^{-2}(1+\mathrm{St}^{2})/\mathrm{St}$.

## 2 THERMO-HYDRODYNAMICAL INSTABILITIES

Table 1: Candidates for hydrodynamic activity in PPDs: the Vertical Shear Instability (VSI), the Convective Overstability (COS), and the Zombie Vortex Instability (ZVI). Figures adapted from _Pfeil and Klahr_ (2021), _Lyra_ (2014), and _Barranco et al._ (2018), respectively. The vertical component of the vorticity is shown for each, while the Reynolds stress is also shown for the VSI. Each instability requires a different gas cooling timescale $\tau_{\mathrm{cool}}$, here scaled by the Keplerian orbital frequency $\Omega_{\mathrm{K}}$. They also require different disk structures, here expressed for a vertically isothermal disk with a radial temperature profile $\propto R^{-q}$, midplane density profile $\propto R^{-p}$, adiabatic index $\gamma$, the height above the midplane $z$, and the gas pressure scale height $H$. $R$ is the cylindrical distance from the star. A Shakura-Sunyaev viscosity, $\alpha_{\mathrm{SS}}$, can be measured from simulations as a metric for radial angular momentum transport, but does not fully characterize the turbulence generated by these instabilities.

| | Vertical Shear Instability | Convective Overstability | Zombie Vortex Instability |
|---|---|---|---|
| Cooling time | $\tau_{\mathrm{cool}}\Omega_{\mathrm{K}}\ll 1$ | $\tau_{\mathrm{cool}}\Omega_{\mathrm{K}}\sim 1$ | $\tau_{\mathrm{cool}}\Omega_{\mathrm{K}}\gg 1$ |
| Disk structure | $q\neq 0$ | $-1<p/q<1/(\gamma-1)$ | $\lvert z\rvert\gtrsim\sqrt{\gamma/(\gamma-1)}H$ |
| Typical transport | $\alpha_{\mathrm{SS}}\sim 10^{-4}$ | $\alpha_{\mathrm{SS}}\sim 10^{-3}$ | $\alpha_{\mathrm{SS}}\sim 10^{-5}$–$10^{-4}$ $\dagger$ |
| Outcome | turbulence & vortices | turbulence & vortices | turbulence & vortices |

* $\dagger$ Based on incompressible simulations (_Barranco and Marcus_ , 2005).

PPDs are poorly ionized and weakly coupled to magnetic fields (see §4 for details). The lack of magnetically-driven turbulence in some regions has led to a surge of interest in purely hydrodynamic origins of turbulence. To this end, three new hydrodynamic instabilities have been discovered: 1) the Vertical Shear Instability (VSI); 2) the Convective Overstability (COS); and 3) the Zombie Vortex Instability (ZVI). Key features of these instabilities are summarized in Table 1. Which of these operate depends on the disk’s structure and the gas thermal timescales. However, taken together they likely pertain to large swaths of PPDs (see §2.4) and are expected to produce weak to moderate turbulence, transport, and large-scale structures such as vortices, which are all relevant to interpreting disk observations as well as planetesimal formation (§3).

To anchor the following discussion, we consider a three-dimensional (3D) PPD orbiting a central star of mass $M_{*}$. Cylindrical $(R,\phi,z)$ coordinates are centered on the star with the disk plane located at $z=0$. The equilibrium disk is steady and axisymmetric with a purely azimuthal flow $\bm{v}_{\mathrm{g}}=R\Omega(R,z)\hat{\bm{\phi}}$, where $\Omega$ is the gas’ angular velocity, which is set by centrifugal balance and is close to, but not exactly, Keplerian rotation. Note that we do not consider warped or eccentric disks. The gas density $\rho_{\mathrm{g}}$ is set by vertical hydrostatic balance and we take a power-law midplane density profile $\rho_{\mathrm{g0}}(R)\propto R^{-p}$. For an ideal gas, the temperature $T$ is defined through $P=\mathcal{R}\rho T/\mu$, where $\mathcal{R}$ is the gas constant and $\mu$ is the mean molecular weight.
The gas’ specific entropy is $S\equiv c_{P}\ln\left(P^{1/\gamma}/\rho_{\mathrm{g}}\right)$, where $c_{P}$ is the specific heat capacity at constant pressure and $\gamma$ is the constant adiabatic index. We consider vertically isothermal disks with power-law equilibrium temperature profiles $T\propto R^{-q}$, which are often used to model the disk bulk or interior in the outer regions of irradiated PPDs (_Hubeny_ , 1990; _Chiang and Goldreich_ , 1997). The isothermal sound-speed is $c_{s}\equiv\sqrt{\mathcal{R}T/\mu}$ and $H=c_{s}/\Omega_{\mathrm{K}}$. Then the gas surface density $\Sigma_{\mathrm{g}}\propto R^{-\sigma}$ with $\sigma=p+q/2-3/2$. PPDs are thin with aspect-ratios $\varepsilon\equiv H/R\ll 1$. Finally, we assume gravitationally stable disks with Toomre parameters $Q_{g}=c_{s}\Omega/\pi G\Sigma_{g}\gg 1$, see _Kratter and Lodato_ (2016) for a discussion of gravitational instabilities and the Chapters by Bae et al. and Pinte et al. for their potential to explain spiral structures observed in PPDs. Prior to PPVI, PPDs were mostly considered hydrodynamically inactive, at least to small-scale, local, infinitesimal perturbations. This notion stems from the classic Solberg-Høiland (SH) stability criteria (e.g. _Tassoul_ , 1978), which can be written as $\kappa^{2}+N_{R}^{2}+N_{z}^{2}>0$ and $-\partial_{z}P\left(\kappa^{2}\partial_{z}S-R\partial_{z}\Omega^{2}\partial_{R}S\right)>0$. Here, $\kappa^{2}=R^{-3}\partial_{R}\left(R^{4}\Omega^{2}\right)$ is the square of the radial epicycle frequency; and $N_{R,z}^{2}=-\left(c_{P}\rho_{\mathrm{g}}\right)^{-1}\partial_{R,z}P\partial_{R,z}S$ are the squares of the radial and vertical contributions to the buoyancy (or Brunt-Väisälä) frequency, respectively. For radially smooth, vertically thin disks, the SH stability criteria are generally satisfied (_Lin and Youdin_ , 2015) because usually $N_{z}^{2}\geq 0$ (so the disk is stable against vertical convection as well) and $|N_{R}^{2}|\ll\kappa^{2}$, while the second criterion reduces to $\gamma>1+O(\varepsilon^{2})$, which is satisfied for typical values of $\gamma=5/3$ or $7/5$. However, the SH stability criteria assume the gas is adiabatic, i.e. it does not experience heat gains or losses, and disturbances are axisymmetric and infinitesimal. The key to the new class of thermo-hydrodynamic instabilities in Table 1 is the violation of these assumptions (see discussion in _Fromang and Lesur_ 2019). Both the VSI and COS require finite thermal losses. The ZVI applies to adiabatic gas, but requires finite amplitude, non-axisymmetric perturbations. These conditions can be expected in PPDs and thus the SH criteria are not applicable. Furthermore, parts of PPDs may have a non- standard radial structure such that $N_{R}^{2}<0$, which is necessary for the COS. The revived interest in hydrodynamic instabilities in PPDs thus resulted from more careful treatments of the disk’s thermal structure and evolution. To this end, we write the gas energy equation as $\partial_{t}S+\bm{v}_{\mathrm{g}}\cdot\nabla S=-\Lambda.$ Here, $\Lambda$ represents all heating and cooling processes, but for simplicity we will refer to it as ‘cooling’. Physically, cooling is mediated by radiation and its form depends on the gas and dust properties (e.g. _Malygin et al._ , 2017; _Barranco et al._ , 2018; _Pfeil and Klahr_ , 2019, see also §2.4). 
However, for discussion purposes it is sufficient to consider the Newtonian cooling prescription, $\displaystyle\Lambda=\frac{c_{P}}{\gamma T}\frac{\left(T-T_{\mathrm{ref}}\right)}{\tau_{\mathrm{cool}}},$ (7) where the reference temperature $T_{\mathrm{ref}}$ is usually taken to be the initial temperature field; and $\tau_{\mathrm{cool}}$ is the cooling timescale over which the local temperature relaxes back to $T_{\mathrm{ref}}$. We define the dimensionless cooling time $\beta_{\mathrm{cool}}\equiv\tau_{\mathrm{cool}}\Omega_{\mathrm{K}}$. The beta cooling prescription encapsulates all possible gas thermodynamic responses. This enables the three hydrodynamic instabilities to be considered by varying $\beta_{\mathrm{cool}}$. The limit $\beta_{\mathrm{cool}}\to\infty$ corresponds to adiabatic evolution, where the specific entropy is materially conserved, which enables the ZVI provided $|N_{z}|$ is sufficiently large. In the isothermal limit, $\beta_{\mathrm{cool}}\to 0$, or instant cooling, $T\to T_{\mathrm{ref}}$ and the gas temperature does not evolve, which enables the VSI provided there is a radial temperature gradient to produce vertical shear (see below). In the intermediate case, $\beta_{\mathrm{cool}}\sim 1$, the disk cools on the orbital timescale, which enables the COS if $N_{R}^{2}<0$ also. The onset of these hydrodynamic instabilities thus has both structural and thermodynamical requirements (Table 1). If these are met, they can lead to hydrodynamic turbulence and vortex formation. In simulations, it is straightforward to measure the conventional Shakura-Sunyaev $\alpha_{\mathrm{S}}$ to describe radial angular momentum transport mediated by said turbulence. However, it is important to note that $\alpha_{\mathrm{S}}$ alone is insufficient to fully characterize the ensuing turbulence. The fundamental measure of turbulence is the root-mean-squared velocities or Mach numbers $\mathcal{M}$ at various scales. For planetesimal formation, one is often interested in the resulting turbulent diffusion of particles, $\alpha_{\mathrm{D}}$ (see §1.3 and §3). One often assumes $\mathcal{M}^{2}$, $\alpha_{\mathrm{S}}$, and $\alpha_{\mathrm{D}}$ are equal and isotropic, but this is not necessarily the case, especially for the anisotropic and inhomogeneous turbulence associated with the three instabilities. Observationally, this means that the turbulence parameters associated with mass accretion, dust distributions, and line broadening can all be different, even when they have the same physical origin. ### 2.1 Vertical shear instability The VSI (_Arlt and Urpin_ , 2004; _Nelson et al._ , 2013) is a linear, axisymmetric instability that is a disk-variation of the Goldreich-Schubert- Fricke instability in differentially rotating stars (GSFI, _Goldreich and Schubert_ , 1967; _Fricke_ , 1968). The VSI can be considered as the $\mathrm{Pr}\rightarrow 0$ limit of the GSFI (_Knobloch and Spruit_ , 1982), where the Prandtl number $\mathrm{Pr}$ is the ratio of viscosity to thermal diffusivity. The VSI is powered by the disk’s vertical differential rotation, i.e. $\partial_{z}\Omega\neq 0$, which is exhibited by any baroclinic equilibrium wherein surfaces of constant density and pressure are misaligned ($\nabla\rho_{\mathrm{g}}\times\nabla P\neq\bm{0}$). 
This includes vertically isothermal disks with a radial temperature gradient ($q\neq 0$), for which one finds $\displaystyle R\frac{\partial\Omega^{2}}{\partial z}\simeq q\varepsilon\frac{z}{H}\Omega_{\mathrm{K}}^{2}.$ (8) Although vertical shear is weak ($\left|R\partial_{z}\Omega\right|\sim\varepsilon\Omega_{\mathrm{K}}\ll\Omega_{\mathrm{K}}$), the associated free energy can be released through radially-short, vertically- elongated disturbances (_Umurhan et al._ , 2013) provided that the stabilizing effect of vertical buoyancy is either absent ($N_{z}^{2}=0$) or eliminated by rapid cooling when $N_{z}^{2}>0$ (_Nelson et al._ , 2013), as is also the case for the GSFI when Pr$\rightarrow 0$ (_Knobloch and Spruit_ , 1982). In the case $N_{z}^{2}>0$, _Lin and Youdin_ (2015) find the criterion $\displaystyle\tau_{\mathrm{cool}}\lesssim\frac{|q|\varepsilon}{\gamma-1}\Omega_{\mathrm{K}}^{-1},$ (9) or $\beta_{\mathrm{cool}}\lesssim\varepsilon\ll 1$ for typical disk parameters. That is, cooling timescales must be much shorter than the dynamical timescale. Linear VSI theory was initially developed in the local approximation for axisymmetric disturbances (_Urpin and Brandenburg_ , 1998; _Urpin_ , 2003). More recently, non-axisymmetric (_Volponi_ , 2014; _Latter and Papaloizou_ , 2018) and radially-local, vertically global models have been developed (_Nelson et al._ , 2013; _Barker and Latter_ , 2015; _McNally and Pessah_ , 2015; _Lin and Youdin_ , 2015; _Umurhan et al._ , 2016b). The linear instability mechanism can be explained in terms of angular momentum, vorticity, and energetic considerations (_Yellin-Bergovoy et al._ , 2021). Weakly non-linear theories have been developed by _Latter and Papaloizou_ (2018) and _Shtemler and Mond_ (2019, 2020). Semi-global linear analyses find: 1) destabilized inertial waves or ‘body modes’ that permeate the disk column, and 2) ‘surface modes’ concentrated near the vertical boundaries. Body modes tend to dominate nonlinear simulations, which are typically characterized by narrow bands of large-scale vertical motions (_Nelson et al._ , 2013), similar to that exhibited by low- order body modes with widths $\sim\varepsilon H$. Interestingly, local, linear VSI modes are also nonlinear solutions, which partly explains why they can persist with large amplitudes (_Latter and Papaloizou_ , 2018). Nevertheless, linear VSI modes are subject to secondary, Kelvin-Helmholtz-type instabilities, which are often associated with vortex formation (_Nelson et al._ , 2013; _Latter and Papaloizou_ , 2018). Indeed, vortices – non- axisymmetric flow features – are readily observed in full 3D simulations (_Richard et al._ , 2016; _Manger and Klahr_ , 2018), although there it has been attributed to the Rossby Wave Instability (RWI _Lovelace et al._ , 1999; _Li et al._ , 2000, 2001) of narrow vorticity rings directly formed by the VSI or pressure bumps that arise from radial variations in the rate of angular momentum transport by the VSI. These vortices may be subsequently amplified by the Sub-critical Baroclinic Instability (_Klahr and Bodenheimer_ , 2003; _Petersen et al._ , 2007a, b; _Lesur and Papaloizou_ , 2010, see also §2.2) for steep radial temperature profiles and moderate cooling timescales; or attacked by 3D Elliptic Instabilities (_Kerswell_ , 2002; _Lesur and Papaloizou_ , 2009), especially if their aspect-ratios $\chi$ are small ($\lesssim 4$). 
Recent simulations with beta and radiative cooling find both compact, short-lived (a few orbits) vortices and large-scale, elongated ($\chi\gtrsim 8$) vortices that survive for hundreds of orbits (_Manger et al._ , 2020; _Flock et al._ , 2020). The latter can arise from pressure bumps at the boundary of ‘VSI dead zones’ (_Flock et al._ , 2020), which develops from the strong dependence of the VSI on the cooling time that leads to a situation where the VSI does not operate at small radii ($\lesssim 10$ au) while being active at larger radii. High resolution simulations also find meridional vortices (_Flores-Rivera et al._ , 2020). VSI-active disks are turbulent with moderate outwards angular momentum transport: the radial stress-to-pressure ratio, or an alpha parameter, $\alpha_{\mathrm{S}}$ of order $10^{-4}$ is found for $\varepsilon=0.05$ and is an increasing function of $\varepsilon$ (_Manger et al._ , 2020). However, VSI-turbulence is strongly anisotropic because typically $|v_{z}/v_{R}|\gg 1$. This results in a vertical stress-to-pressure ratio of $\alpha_{z}\equiv\langle\rho_{\mathrm{g}}v_{z}\delta v_{\phi}\rangle/P\sim 10^{-2}$ (here $\delta$ denotes the deviation from equilibrium), i.e. two orders of magnitude larger than radial transport (_Stoll et al._ , 2017a). Furthermore, both $\alpha_{\mathrm{S}}$ and $\alpha_{z}$ are non-uniform. These are important distinctions from standard, constant-alpha disks as the anisotropy of the VSI implies that the gas’ and dust’s radial and vertical evolution can differ significantly. The basic properties of the VSI can be captured by locally isothermal disks. A first approximation to modeling the VSI in more realistic PPDs is to use the beta cooling prescription with an estimate of physical cooling timescales (see §2.4) in linear theory (_Lin and Youdin_ , 2015; _Fukuhara et al._ , 2021) or numerical simulations (_Pfeil and Klahr_ , 2021). Beyond this, one can explicitly account for radiative losses and stellar irradiation by solving the radiative hydrodynamic equations (_Stoll and Kley_ , 2014; _Flock et al._ , 2017b; _Flock et al._ , 2020). Such models show that the VSI can indeed be expected in the outer regions of PPDs with an $\alpha_{\mathrm{S}}\sim 10^{-4}$ and flow structures largely consistent with locally isothermal models. Turbulent velocities on the order of $10^{-2}{c_{s}}$ are found in the midplane regions ; whereas in the coronal regions ($|z|\gtrsim 0.2R$, or roughly beyond two pressure scale heights) these increase to $\sim 0.06c_{s}$ and can reach $O(10^{-1}c_{s})$ at some points (_Flock et al._ , 2017b). These are consistent with the upper limits obtained from ALMA observations of turbulent line widths (§5.2). However, identifying VSI turbulence this way appears difficult owing to the VSI’s strong anisotropy; instead, its vertical motions can be detected by spatially resolved observations of the CO line emission (_Barraza-Alfaro et al._ , 2021). Recent studies have also begun to incorporate the VSI in models of planetesimal formation and planet evolution. Axisymmetric vorticity perturbations induced by the VSI, and discrete vortices in the disk plane, have been shown to act as effective traps of mm dust (_Stoll and Kley_ , 2016; _Flock et al._ , 2017b; _Flock et al._ , 2020). The former gives rise to prominent ring-like structures, while the latter produces localized enhancements of dust. 
Large-scale vertical gas motions induced by the VSI can effectively lift up passive dust grains to the disk atmosphere (_Stoll and Kley_ , 2016; _Flock et al._ , 2017b; _Flock et al._ , 2020). When dust-on-gas feedback is accounted for, solids can settle against VSI-stirring if the overall dust abundance is sufficiently super-solar (_Lin_ , 2019). The dense midplane dust layer can further undergo the Streaming Instability (_Youdin and Goodman_ , 2005, see also §3), in which case the VSI can enhance radial dust concentrations (_Schäfer et al._ , 2020). Following planet formation, disk- planet interaction and pebble accretion with VSI turbulence have been investigated by _Stoll et al._ (2017b) and _Picogna et al._ (2018), respectively. For this particular problem they find results similar to viscous disk models with an equivalent $\alpha_{\mathrm{S}}$ as measured in VSI runs, but in the case of pebble accretion one also needs to include stochastic kicks onto the solids. Finally, while the VSI is fundamentally a hydrodynamic instability, PPDs are magnetized. In ideal MHD, a strong field with a plasma beta parameter $\gtrsim 1/\varepsilon$ can stabilize the VSI (_Latter and Papaloizou_ , 2018; _Cui and Lin_ , 2021), which may apply to the well-ionized disk atmospheres. On the other hand, in the disk bulk where non-ideal effects dominate, the VSI persists and can co-exist with magnetized disk winds (_Cui and Bai_ , 2020). The dynamics of dust grains under such conditions remains to be investigated. ### 2.2 Convective overstability and baroclinic vortex amplification The COS is a linear, axisymmetric instability that can be interpreted as radial convection mediated by cooling (_Klahr and Hubbard_ , 2014; _Lyra_ , 2014). Unstratified models show that the COS requires 1) a radial buoyancy such that $N_{R}^{2}<0$, and 2) cooling on dynamical timescales, $\beta_{\mathrm{cool}}\sim 1$. For power-law disks the first criterion becomes $\displaystyle-1<\frac{p}{q}<\frac{1}{\gamma-1}\quad\text{for $N_{R}^{2}<0$}.$ (10) This analytical criterion applies to the disk midplane where $N_{z}^{2}=0$. For $q>0$, Eq. 10 translates to $-(q+3)/2<\sigma<[q(\gamma+1)/(\gamma-1)-3]/2$, implying a shallow or even rising surface density profile. For example, in irradiated disks with $q=0.5$, one requires $-1.75<\sigma<0$ for $\gamma=7/5$. Such atypical profiles may be realized at special locations such as gap edges or the outer edge of dead zones (e.g. _Delage et al._ , 2022). However, even if $N_{R}^{2}>0$ at $z=0$, in typical disk models the fact that $\varepsilon$ increases with $R$ can drive $N_{R}^{2}<0$ away from the midplane, where $N_{z}^{2}>0$. Unlike the VSI, though, vertical buoyancy is not expected to suppress COS modes, which have little vertical motion (_Lyra_ , 2014). A similar situation applies to the symmetric instability in the Earth’s atmosphere (_Schultz and Schumacher_ , 1999). Nevertheless, the existence and saturation of the COS in vertically stratified disks off the midplane still need to be demonstrated explicitly, which is not trivial as potentially also VSI can be operative in this region. The COS corresponds to growing epicyclic oscillations. In a thin disk, rotation is strongly stabilizing. This means that, even if a gas parcel is radially buoyant ($N_{R}^{2}<0$), it will still perform radial oscillations on the epicyclic frequency. Indeed, in the absence of cooling the SH conditions imply stability as $\kappa^{2}\gg|N_{R}^{2}|$ since typically $|N_{R}|\sim\varepsilon\Omega_{\mathrm{K}}$. 
However, if cooling occurs on the timescale on the order of the orbital period, the gas parcel can exchange heat with the surrounding such that it experiences a buoyant acceleration, thereby increasing its oscillation amplitude (_Latter_ , 2016). On the other hand, if cooling is too rapid then the parcel experiences no buoyancy and epicycles are again stable. Ultimately, the COS is mainly powered by the unstable entropy gradient (cf. vertical shear for the VSI). Also unlike the VSI, the COS prefers vertically short, radially extended length scales. The linear analysis for the COS was possible as there are, just like for the VSI, axisymmetric modes which are unstable (_Klahr and Hubbard_ , 2014; _Lyra_ , 2014; _Latter_ , 2016; _Volponi_ , 2016). Weakly nonlinear theory (_Latter_ , 2016) suggested that COS modes might be limited to small amplitudes due to secondary parasitic instabilities. However, recent axisymmetric numerical simulations (_Teed and Latter_ , 2021) showed a progression in the nonlinear regime with increased Reynolds number as the system transitions from a weakly nonlinear state with relatively ordered waves, to wave turbulence, and finally to the formation of intermittent and then persistent zonal flows. This process is likely what spawned vortices in 3D, non-axisymmetric models of the COS (_Lyra_ , 2014). The axisymmetric simulations of _Teed and Latter_ (2021) also showed the existence of ”elevator modes”, in which a mode of uniform vertical velocity dominates the vertical extent of the (unstratified) box. Whether or not this leads to a large-scale meridional circulation in a stratified, global disk needs to be clarified. Yet, well before the discovery of the axisymmetric, linear COS, the amplification of vortices had already been observed in simulations of radially stratified disks with cooling (_Klahr and Bodenheimer_ , 2003; _Petersen et al._ , 2007a, b; _Lesur and Papaloizou_ , 2010; _Lyra and Klahr_ , 2011). These authors find that such amplification of the vortices also requires $N_{R}^{2}<0$ and cooling that is neither too fast nor slow. _Lesur and Papaloizou_ described this effect as a Sub-critical Baroclinic Instability (SBI), where ‘sub-critical’ refers to the fact that a vortex is not an infinitesimal perturbation in the flow, but needs a finite initial size to have the amplification be stronger than viscosity and shear. That is, SBI is a non-linear phenomenon. In addition, the SBI operates in 2D, thin disks, unlike the linear, axisymmetric COS that requires the vertical dimension. Ultimately, however, the criteria for COS and SBI are qualitatively the same, suggesting that both instabilities share the same physical driving, i.e. radial buoyancy and cooling rates on their individual radial oscillation frequencies. Whereas for the linear COS, this frequency is the epicyclic frequency ($\kappa$); vortices typically rotate much slower than the disk rotation, with a turnover frequency of order $\Omega_{\mathrm{K}}/\chi$ (recall $\chi$ is the vortex aspect ratio). Thus the larger the vortex aspect ratio, the longer the rotation time, which gives a wide range of cooling times that will lead to the amplification of a range of vortex shapes (_Raettig et al._ , 2013). Global numerical simulations of the SBI in 2D and 3D disks readily show the development of large-scale vortices that can persist for hundreds of orbits (_Klahr and Bodenheimer_ , 2003; _Petersen et al._ , 2007a, b; _Lyra and Klahr_ , 2011; _Barge et al._ , 2016). 
Simulations also find that COS/SBI leads to outwards angular momentum transport at a typical level of $\alpha_{\mathrm{SS}}\sim 10^{-3}$, although this can vary by an order of magnitude depending on buoyancy and cooling parameters (_Raettig et al._ , 2013). Local 2D models suggest that vortices can be amplified even when radial buoyancy is only weakly unstable (_Raettig et al._ , 2013). High resolution, local 3D simulations show that SBI growth is balanced by secondary, 3D Elliptic Instabilities (_Lesur and Papaloizou_ , 2009), resulting in a vortex with a turbulent core (_Lesur and Papaloizou_ , 2010). Nevertheless, the ability of COS-induced or SBI-amplified vortices to act as efficient dust traps has been demonstrated by _Raettig et al._ (2015) in 2D, and _Lyra et al._ (2018) and _Raettig et al._ (2021) in 3D. On the other hand, _Lobo Gomes et al._ (2015) find that the interaction between massive planets and radially buoyantly unstable, cooling disks renders $N_{R}^{2}>0$, in which case planet-induced gaps become smoother and lead to weaker vortices that are associated with the RWI of the gap edge, rather than COS or SBI. However, models of planetesimal formation and disk-planet interaction that explicitly incorporate COS/SBI-induced turbulence and vortices are still lacking (cf. those being developed for the VSI); these should be pursued in the future, especially given the relevance of vortices for planet formation and the predicted occurrence of the COS in the 1–10 AU region.

### 2.3 Zombie vortex instability

Unlike the VSI and the COS, the ZVI is a non-axisymmetric, non-linear instability, the latter meaning that it requires finite-amplitude perturbations. The ZVI can develop if there is sufficient vertical buoyancy. In practice, this translates to 1) $N_{z}^{2}\gtrsim\Omega_{\mathrm{K}}^{2}$, which is generally the case for $|z|\gtrsim 1.5H$; and 2) $\beta_{\mathrm{cool}}\gg 1$, otherwise buoyancy is diminished by cooling. _Barranco and Marcus_ (2005) first observed coherent, long-lived vortices naturally form in the strongly stratified regions above and below the PPD midplane in their local, shearing box simulations with a spectral method. At first, they hypothesized that the vortices were created by breaking internal gravity waves, but _Marcus et al._ (2013), _Marcus et al._ (2015), and _Marcus et al._ (2016) demonstrated the existence of a new type of purely hydrodynamic instability associated with “baroclinic critical layers”: narrow structures in a stratified shear flow where a wave’s phase speed matches the shear velocity plus/minus the vertical buoyancy frequency divided by the wavenumber. They named the instability the “Zombie Vortex Instability” (ZVI) not only because it may occur in dead zones, but also because of the way one zombie vortex “infects” neighboring baroclinic critical layers, spawning new zombie vortices, which “infect” farther critical layers, and so on, filling a region of the disk with vigorous turbulence. ZVI is not an artifact of the numerical method as it was computed with spectral codes and finite-volume codes, with Boussinesq, anelastic, and fully compressible treatments of the continuity condition, and with and without the shearing box. One may ask, if it is so robust, how was it missed in previous numerical calculations?
Prior work often lacked one or more of the crucial ingredients: ZVI requires strong vertical stratification, high resolution to resolve the narrow critical layers ($\gtrsim 256$ cells or spectral modes per $H$), a broad spectrum of perturbations (i.e., Kolmogorov, but not Gaussian-peaked, so that the vorticity peaks on the small scales), and enough simulation time to allow the critical layers to amplify perturbations. _Lesur and Latter_ (2016) confirmed the existence of the instability with their own spectral simulations, but also pointed out the sensitivity of the ZVI to dissipation processes and small scale physics: fast cooling and/or sufficient viscosity suppresses the ZVI in numerical simulations (with Reynolds number $\mathrm{Re}\lesssim 10^{6}$). However, the relevant Reynolds number in PPDs is $\mathrm{Re}\sim 10^{14}$, so viscosity may not inhibit the ZVI in PPDs, but does make it difficult to produce ZVI in a laboratory (_Wang_ , 2016). _Barranco et al._ (2018) investigated ZVI under a wider range of realistic scenarios, including with nonuniform vertical stratification and radiative cooling, and demonstrated that the off-midplane regions throughout the planet-forming regions of protoplanetary disks may be susceptible to ZVI so long as the cooling time is longer than a few orbital periods, which may be likely with modest grain settling and/or growth. _Umurhan et al._ (2016a) and _Wang and Balmforth_ (2020) have undertaken theoretical investigations into the earliest phases of the instability considering both linear growth and the nonlinear dynamics within the baroclinic critical layers. Most recently, _Wang and Balmforth_ (2021) developed a reduced model for the nonlinear dynamics within forced baroclinic critical layers and have demonstrated that perturbations grow secularly, generating jet-like defects within the shear which are then later susceptible to secondary instabilities that yield coherent vortical structures. Turbulence from ZVI has some unique characteristics that may be highly relevant to dust transport. At late times, ZVI turbulence results in significant vertical mixing that tends to homogenize the background stratification resulting in a nearly uniform $N_{z}$. Similar behavior is seen in atmospheric and oceanic flows in which the breaking of internal gravity waves creates step-like or staircase patterns in stratification (_Orlanski and Bryan_ , 1969; _Phillips_ , 1972; _Pelegrí and Sangrã_ , 1998). While the region in the immediate vicinity of the midplane lacks the requisite stratification for the excitation of baroclinic critical layers, _Barranco et al._ (2018) observed that zombie turbulence from the ZVI susceptible regions can penetrate into the midplane, albeit with a smaller magnitude. At late times, ZVI turbulence resulted in the creation of azimuthal quasi-steady-state zonal flows. The zonal flows consisted of 5-6 pairs of dipolar vortex layers within a radial extent of $8H$ (see the third figure in Table 1). The radial thickness of the cyclonic layers is approximately one-third the thickness of the anticyclonic layers, but this is compensated by the fact that the cyclonic layers are roughly three times more intense than the anticyclonic layers. Fully-developed zombie turbulence shows intermittency where the flow cycles through near-laminar phases of zonal flow punctuated by chaotic bursts of new zombie vortices. 
In some simulations, the bursting is quasi-periodic in time (with periods between 100–150 orbits), whereas in other cases it appears more stochastic. The ultimate impact of ZVI on PPD dynamics is still to be demonstrated. Future topics include: (1) how efficient is ZVI at transporting angular momentum in compressible simulations ($\alpha_{\mathrm{S}}$); (2) the properties of ZVI turbulence as a function of the Reynolds number, especially beyond $O(10^{7})$; (3) how does ZVI turbulence transport and mix dust and how does dust affect the ZVI, as well as its mutual interaction with planetesimals and planets; (4) what does the ZVI look like in global simulations; and (5) what are the observational signatures of ZVI turbulence, e.g., broadening of molecular lines, and can that be distinguished from turbulence from other purely hydrodynamic instabilities.

### 2.4 Occurrence in physical disk models

As Table 1 summarizes, the onset of the above hydrodynamic instabilities depends on both the disk structure and thermal relaxation or cooling times ($\tau_{{\rm cool}}$), which are intimately interdependent. As discussed in §2.1–2.3, the structural requirement for the VSI, $\partial_{z}\Omega\neq 0$, can be met by a non-zero radial temperature gradient ($q\neq 0$); that for the COS is a radial buoyancy with $N_{R}^{2}<0$; and that for the ZVI is a vertical buoyancy with $|N_{z}|>\Omega_{\mathrm{K}}$. As for their thermodynamic requirements, with a broad brush stroke, we can say that the VSI requires cooling times significantly shorter than an orbital period, the COS needs cooling times of order the orbital period, whereas the ZVI is operable when the cooling time is longer than a few orbital periods. In this section, we describe the ingredients for estimating cooling times and discuss recent efforts to do so in order to assess the occurrence of the above instabilities. We will see that, at the current stage, results are still model-dependent as they are subject to a number of uncertainties.

Establishing the relevant cooling times for the mechanism in question depends upon the spatial lengthscale of the fastest growing disturbance $\ell_{m}$ and how $\ell_{m}$ compares to the photon mean-free path $\ell_{{\rm ph}}=1/(\kappa_{\mathrm{op}}\rho)$, where $\kappa_{\mathrm{op}}$ is the material’s frequency-integrated opacity. This places the dynamics in either the optically thin ($\ell_{{\rm m}}\ll\ell_{{\rm ph}}$) or thick ($\ell_{{\rm m}}\gg\ell_{{\rm ph}}$) regime (e.g., see the discussion in _Barranco et al._ , 2018). Since H$_2$ and other molecules are inefficient radiators compared to dust particles (however molecular line cooling may play a role in the disk’s coldest regions, e.g., _Woitke et al._ , 2016), the primary cooling pathway involves the collisional transfer of thermal energy from gas to particles, the latter of which efficiently radiate (e.g., see discussions in _Barranco et al._ , 2018; _Pfeil and Klahr_ , 2019; _Lyra and Umurhan_ , 2019). In practice, $\tau_{{\rm cool}}$ can be thought of as the longer of the particle-gas collisional energy exchange timescale $\tau_{{\rm gas}}^{{\rm col}}$ and the radiative timescale $\tau_{{\rm cool}}^{{\rm rad}}$, i.e., $\tau_{{\rm cool}}={{\rm max}}\left(\tau_{{\rm cool}}^{{\rm rad}},\tau_{{\rm gas}}^{{\rm col}}\right)$.
Furthermore, $\tau_{{\rm cool}}^{{\rm rad}}$ depends upon whether the lengthscale of interest is in the optically thick or thin regimes embodied in the expression (_Spiegel_ , 1957) $\frac{1}{\tau_{{\rm cool}}^{{\rm rad}}}=\frac{\mu}{\tau_{{\rm cool}}^{{\rm thin}}},\qquad\mu\equiv\left[1-\frac{\tan^{-1}\left(2\pi\ell_{\mathrm{ph}}/\ell_{m}\right)}{2\pi\ell_{\mathrm{ph}}/\ell_{m}}\right];$ (11) where the frequency integrated optically thin cooling timescale is given by $\tau_{{\rm cool}}^{{\rm thin}}\equiv{\ell_{{\rm ph}}\rho_{g}C_{P}}\big{/}{16\sigma_{\mathrm{B}}T^{3}},$ and $\sigma_{\mathrm{B}}$ is the Stefan-Boltzmann constant. A further complication is how to estimate the frequency-integrated opacity $\kappa_{\mathrm{op}}$ as a function of disk location and epoch that, in turn, depends on the disk’s turbulence-mitigated spatiotemporal particle distribution $n(a)$, where $a$ is particle size (e.g. _Birnstiel et al._ , 2012; _Estrada et al._ , 2016). This reveals a causality dilemma as the key determinant driving disk hydrodynamic instabilities, i.e. $\tau_{{\rm cool}}\left(\kappa_{{\rm op}},n(a),\cdots\right)$ and associated disk structure, depends on the turbulence generated by the very same instabilities. These, in turn, are sensitive to the assumed disk mass and maximum particle size as a function of disk epoch – two quantities that are currently only weakly constrained by observations. The currently adopted strategy in confronting the problem of determining $\kappa_{{\rm op}}$, and subsequently $\tau_{{\rm cool}}$, is to assume an alpha-type viscous disk model in which vertical particle settling is balanced by upward turbulent transport (_Dubrulle et al._ , 1995). Together with evolutionary models of dust mass and size distributions – and their resulting frequency-integrated Rosseland or Planck mean opacities (c.f., _Semenov et al._ , 2003; _Woitke et al._ , 2016; _Cuzzi et al._ , 2014) – several recent studies have made preliminary forays into mapping out disk hydrodynamic instabilities using the derived thermal relaxation times based on the aforementioned opacity models (_Malygin et al._ , 2017; _Barranco et al._ , 2018; _Pfeil and Klahr_ , 2019; _Fukuhara et al._ , 2021). Each of these studies extracts various dynamical and thermodynamical profiles resulting from and/or are inputs to various 1+1 global disk evolutionary models – e.g., quantities like density, pressure, total disk mass ($M_{{\rm disk}}$), particle and gas surface densities ($\Sigma_{g},\Sigma_{d}$ respectively), particle size distribution, the turbulent $\alpha_{\mathrm{S}}$ for gas accretion, the turbulent $\alpha_{\mathrm{D}}$ for dust diffusion, etc., all as functions of position and disk age. One can then assess whether the structure and thermodynamic criteria for the onset of the instabilities discussed in §2.1–2.3 are met, at least marginally. _Pfeil and Klahr_ (2019), following the framework described in _Malygin et al._ (2017), assume a submicron-sized monodisperse population of grains and primarily analyze disks with mass $M_{\mathrm{disk}}=0.1M_{\odot}$. Fig 1a is a corresponding map where their results predict that the VSI is widespread across the vast majority of the disk (1AU$<R<$100AU), with some coincidental potential for active COS for 1AU$<R<$10AU in regions containing the midplane and for regions extending away from the midplane for $R>$10AU. 
This study implies that $\tau_{{\rm cool}}$ is limited by $\tau_{{\rm cool}}^{{\rm rad}}$ as the grain number density and disk mass is sufficiently high that the collision time is considerably short, i.e., $\tau_{{\rm cool}}^{{\rm rad}}\gg\tau_{{\rm gas}}^{{\rm col}}$. Figure 1: Occurrence of hydrodynamic instabilities in different PPD models. (a) From _Pfeil and Klahr_ (2019). This monodisperse ($a\approx 1\mu$m) grain population model considers $M_{{\rm disk}}=0.1M_{\odot}$ and $\alpha_{\mathrm{S}}=10^{-3}$. Black lines delineate successive pressure scale heights. In their models, a Vertical Convective Instability (VCI) can develop because $N_{z}^{2}<0$, but this is not considered in the main text. (b) From _Fukuhara et al._ (2021). Panels (i)–(iii) show VSI activity for $M_{\mathrm{disk}}/M_{\sun}=10^{-2},\,10^{-3}$, $\Sigma_{d}/\Sigma_{g}=10^{-3},10^{-2}$, and polydisperse grains with $a_{{\rm max}}=1$mm, $s=3.5$, $\alpha_{\mathrm{D}}=10^{-4}$. Panel (iv) shows $\tau_{{}_{\rm cool}}^{{\rm col}}/\tau_{{}_{\rm cool}}^{{\rm rad}}$ for $a_{{\rm max}}=10\mu$m, $M_{{\rm disk}}=0.01M_{\odot}$, and $\alpha_{\mathrm{D}}=10^{-4}$. Dashed black lines delineate successive pressure scale heights. (c) From _Barranco et al._ (2018). ZVI-active regions are shaded. Black contours are $\log(\Omega\tau_{{\rm cool}})$ while dotted magenta contours signify $\log(\Omega\tau_{{}_{\rm cool}}^{{\rm rad}})$. Here $M_{{\rm disk}}\approx 0.04M_{\odot}$, $\Sigma_{d}/\Sigma_{g}=0.01$, and grains have $a_{{\rm max}}=10$ cm and $s=3.25$. Panels correspond to different $\alpha_{\mathrm{D}}$ (labeled as $\alpha$). Simulations with vertical extent and at the radii of the red lines develop ZVI turbulence, while those of the blue dashed lines do not. Note that in (a) $\alpha_{\mathrm{S}}$ parameterizes mass transport, while in (b) and (c) $\alpha_{\mathrm{D}}$ parameterizes the dust settling. _Fukuhara et al._ (2021) treat disk models with a polydisperse population of grains with a number density size distribution $n(a)\propto a^{-s}$ with a constant power-law index $s=3.5$ between a minimum and maximum grain sizes $a_{{\rm min}}$ and $a_{{\rm max}}$, respectively. From this they construct an effective mean molecule travel length $\ell_{gd}$, from which they estimate the gas-grain collisional energy exchange timescale in terms of a simpler collisional timescale, with $\tau_{{\rm gas}}^{{\rm col}}\equiv\ell_{gd}/v_{{\rm th}}$, where $v_{{\rm th}}$ is a typical thermal velocity ($\propto T^{1/2}$). Their analysis reveals the useful approximate form $\ell_{gd}\propto a_{{\rm max}}^{1/2}\Sigma_{g}^{-1/2}(\Sigma_{d}/\Sigma_{g})^{-1}\propto M_{{\rm disk}}^{-1/2}(\Sigma_{d}/\Sigma_{g})^{-1}$. As noted earlier in _Barranco et al._ (2018), as grains get incorporated into larger particles, the total effective collisional cross-section of the grain population decreases. Thus, a molecule must travel further before the next collision, which leads to increasingly inefficient cooling. However, this reduction in the local medium’s effective collisional cross-section may be halted if particles grow as porous fractal aggregates instead of as zero-porosity spheres (_Okuzumi et al._ , 2012). 
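To make the interplay of these timescales concrete, the following minimal Python sketch combines the radiative time of Eq. (11) with a gas-grain collisional time, takes $\tau_{\rm cool}={\rm max}(\tau_{\rm cool}^{\rm rad},\tau_{\rm gas}^{\rm col})$, and applies the broad-brush $\beta_{\rm cool}$ regimes of Table 1 and Eq. (9). The input values and numerical thresholds are assumptions for illustration only; they do not reproduce any of the cited models.

```python
import numpy as np

# Illustrative sketch (assumed inputs, not taken from the cited studies): build a
# cooling time from Eq. (11) plus a collisional time, then apply the rough
# beta_cool regimes of Table 1 and the VSI criterion of Eq. (9).

SIGMA_B = 5.670e-5  # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]

def tau_rad(ell_ph, ell_m, rho_g, c_P, T):
    """Radiative cooling time from Eq. (11), bridging optically thin and thick limits."""
    tau_thin = ell_ph * rho_g * c_P / (16.0 * SIGMA_B * T ** 3)
    x = 2.0 * np.pi * ell_ph / ell_m
    mu = 1.0 - np.arctan(x) / x
    return tau_thin / mu

def candidate_instabilities(beta_cool, q=0.5, eps=0.05, gamma=1.4,
                            NR2_negative=False, strongly_stratified=False):
    """Very rough regime assignment following Table 1; thresholds are assumptions."""
    out = []
    if q != 0.0 and beta_cool < abs(q) * eps / (gamma - 1.0):   # Eq. (9), VSI
        out.append("VSI")
    if NR2_negative and 0.1 < beta_cool < 10.0:                 # beta_cool ~ 1, COS
        out.append("COS")
    if strongly_stratified and beta_cool > 10.0:                # beta_cool >> 1, ZVI
        out.append("ZVI")
    return out or ["none of the three"]

if __name__ == "__main__":
    Omega_K = 2.0e-10                  # assumed Keplerian frequency [s^-1] (~outer disk)
    t_rad = tau_rad(ell_ph=1.0e12, ell_m=1.0e13, rho_g=1.0e-14, c_P=1.0e8, T=30.0)
    t_col = 1.0e11                     # assumed gas-grain energy-exchange time [s]
    beta = max(t_rad, t_col) * Omega_K
    print(f"beta_cool ~ {beta:.1f} ->",
          candidate_instabilities(beta, strongly_stratified=True))
    # With a long collisional time (few or large grains), beta_cool >> 1 and the
    # stratified off-midplane regions fall in the ZVI regime, as in Fig. 1c.
```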
For a range of $M_{{\rm disk}}$ values, _Fukuhara et al._ (2021) examine the onset of the VSI, finding that it is absent in low mass disks ($M_{{\rm disk}}=0.001M_{\odot}$), while for relatively high disk masses ($M_{{\rm disk}}=0.1M_{\odot}$) it can potentially take root in fairly narrow zones sandwiching the midplane and out to the disk’s outer edge ($|z|/R\lessapprox 0.05-0.1$, see also Fig 1b). They show how the suppression of the VSI, especially in the low disk mass models, is due to the inability of gas molecules to efficiently communicate with the dust particles causing $\beta_{\mathrm{cool}}$ to greatly exceed the critical value of 0.1. However their results would suggest that the increased cooling times could make their disks susceptible to the COS or ZVI instead. In a prior study, _Barranco et al._ (2018) take a more sophisticated approach to estimating cooling times by explicitly considering the finite time for energy to be exchanged between gas molecules and dust grains via collisions (e.g., _Hollenbach and McKee_ , 1979; _Burke and Hollenbach_ , 1983; _Glassgold et al._ , 2004). They allow for a variable power-law index ($3<s<4$) in the particle size distribution and find $\tau_{{\rm gas}}^{{\rm col}}\propto a_{{\rm S}}\rho_{g}^{-1}T^{-1/2}(\Sigma_{d}/\Sigma_{g})^{-1}$, where $a_{{\rm S}}$ is the Sauter-mean radius of grains (i.e. the radius of a monodisperse population of grains that would have the same total surface area and total volume of a given polydisperse population of grains). Their study makes predictions for a Minimum Mass Solar Nebula disk model with $M_{{\rm disk}}\approx 0.042M_{\odot}$ (_Cuzzi et al._ , 1993) for a variety of $\alpha_{\mathrm{SS}}$ and $a_{{\rm max}}$ values, finding under these parameter conditions that $\tau_{{\rm cool}}$ is generally controlled by $\tau_{{\rm gas}}^{{\rm col}}$ . Fig 1c exhibits a set of results for large values of $a_{{\rm max}}$ ($\sim 10$ cm) in which the VSI is entirely ruled out as $\beta_{\mathrm{cool}}\geq 0.1$ everywhere in the model, while allowing for the feasibility of the ZVI generally away from the midplane where $\beta_{\mathrm{cool}}\gtrapprox 20$, and possibly the COS in midplane layers where $\beta_{\mathrm{cool}}\sim 1$. For smaller $a_{{\rm max}}$ ($\sim 1$ cm) their results indicate that the VSI is feasible closer to the midplane and closer to the star ($<$ 10 AU) where $\beta_{\mathrm{cool}}\leq 0.1$. The general trends reported in _Fukuhara et al._ (2021) – especially the limiting behavior posed by collisional exchange timescales (e.g., see bottom right panel of Fig. 1b) – are largely consistent with the findings of _Barranco et al._ (2018); while their mutual differences are likely due to the use of different disk model parameters as well as in the approaches applied in estimating $\tau_{{\rm gas}}^{{\rm col}}$. These are matters that need future resolution. In summary: a systematic examination is required to determine what type of instability is operative when and where in a PPD. The strategies used in the aforementioned studies ought to be applied toward more realistic global evolution models that improve upon simplistic power-law formalisms. The maps in Fig. 1 should not be taken as definitive but are meant to illustrate the diverse outcomes depending on model assumptions. However, some trends may be inferred from such recent studies: the picture of the disk during its earliest stages – i.e., when the $a_{{\rm max}}$ are still small and $M_{{\rm disk}}$ still relatively high – may be more like Fig. 
1a, while as the disk evolves, with $M_{{\rm disk}}$ decreasing and $a_{{\rm max}}$ inexorably growing, the turbulent state of the disk may be driven by processes like those shown in Figs. 1b–c. In any case, given the possible range of $\beta_{{\rm cool}}$ values likely relevant across the disk bulk, some type of hydrodynamical instability that leads to turbulence – whether it be ZVI and/or COS, and/or VSI (or perhaps something yet to be discovered) – is likely active in PPDs.

As a final remark, we emphasize that alpha-type viscous disk models are generally no replacement for direct numerical simulations for modeling the turbulence generated by these hydrodynamic instabilities. It should be kept in mind that PPDs are essentially inviscid, but computational limitations prevent one from probing realistic Reynolds numbers. An important future direction is to develop detailed models of the hydrodynamic turbulence generated by these instabilities that are suitable for use in large-scale, long-term global simulations of PPDs. Until then, alpha-type viscous disk models – while convenient – should be interpreted with care.

## 3 MULTI-PHASE INSTABILITIES WITH DUST-GAS INTERACTIONS

### 3.1 Barriers to planetesimal formation in turbulent disks

Bridging the gap between dust grains and solid planetary cores in the core accretion paradigm involves intermediate gravitationally-bound bodies of typical sizes $\sim$0.1–100 km called planetesimals. Conversion of pebbles into planetesimals requires solids to overcome several physical barriers as they grow, and is constrained by spatially resolved observations of young disks. Strong concentrations of solids must be driven and retained in the disk. Privileged places for large dust enrichments are long-lived pressure maxima, towards which dust grains tend to drift. Moreover, dust grains settle towards the mid-plane of the disk. Turbulent stirring of order $\alpha_{\mathrm{D}}\sim\alpha_{\mathrm{S}}\gtrsim 10^{-5}$–$10^{-4}$, which keeps the solid-to-gas density ratio $\epsilon_{\mathrm{d}}\lesssim 1$ in the mid-plane, may prevent the gravitational collapse of the dust layer. This turbulence can be hydrodynamical, MHD, or even of dust origin since the dust layer may itself be Kelvin-Helmholtz unstable.

#### 3.1.1 The radial-drift barrier

Disks need to somehow retain their solids as the solids grow in size and reach $\mathrm{St}\sim 1$, at which point their radial drift onto the star becomes fastest (Eq. (6)). Pure growth could in principle help grains decouple from the gas before they get accreted, but bouncing or fragmentation probably limit this possibility in practice (_Brauer et al._ 2008; _Birnstiel et al._ 2009; _Zsom et al._ 2010; see also §3.1.2). Local dust traps offer powerful alternatives to concentrate grains (see chapter by Bae et al. in the same book). Some traps are axisymmetric; suggestions include disk boundaries, dead zones, snow-lines, zonal flows, and the edges of planetary gaps. The back-reaction of the drag from the dust layer onto the gas modifies the transport of gas according to

$\frac{\partial\Sigma}{\partial t}-\frac{1}{R}\frac{\partial}{\partial R}\left[aZ\frac{\partial\left(c_{\mathrm{s}}^{2}\Sigma\right)}{\partial R}\right]+\frac{1}{R}\frac{\partial}{\partial R}\left(\frac{1+Zb}{1+Z}R\,v_{\rm visc}\right)=0,$

where $a=1/\left[\mathrm{St}+(1+Z)^{2}\mathrm{St}^{-1}\right]$, $b=\mathrm{St}^{2}/\left[\left(1+Z\right)^{2}+\mathrm{St}^{2}\right]$, $Z$ is the local dust-to-gas ratio, and the vertical integration is restricted to the dust layer (_Gonzalez et al._ , 2017).
Relative to the viscous term, the dust term is of order $\sim aZ/\alpha$ in smooth disks, meaning that for $10^{-2}\lesssim\mathrm{St}\lesssim 10^{2}$ and a large $Z\gtrsim 0.1$, it can compete with or dominate the viscous transport of gas, favoring the formation of self-induced dust traps (_Gonzalez et al._ , 2017). Other traps are non-axisymmetric: spirals, vortices and shocks induced by a companion (see chapter by Bae et al. in the same book). Studying the resilience of these traps is key for understanding planetesimal formation.

The existence and the morphology of these traps, combined with the location of the dust outer radius and/or the remaining dust flux outside the observed traps, provide indirect but degenerate constraints on the parameters $\alpha_{\mathrm{D}}$, $\rm{St}$ and $\epsilon_{\mathrm{d}}$ (_Birnstiel and Andrews_ , 2014; _Rosotti et al._ , 2019). Models of dust traps should be compatible with the constraints provided by structural data. Observations of silicates reveal crystallinity in grains in the cold outer regions, although crystallinity requires high temperatures to develop. Explanations invoke outward drift mechanisms and redistribution of material, without any major consensus so far. Isotopic analysis reveals that the early Solar System has likely been split into two parts in less than $\sim$3 Myr (_Kruijer et al._ , 2017). This puts a drastic constraint on the existence of a dust trap in its early life, with subsequent consequences for the formation of terrestrial and giant planets (_Desch et al._ , 2018; _Kruijer et al._ , 2020).

#### 3.1.2 The growth barriers

Lab experiments suggest that hit-and-stick collisions make dust aggregates grow up to a few centimeters in size. Such collisions occur without restructuring at first, but restructuring and compaction set in as impact energies increase. Above a few centimeters in size, collisions are found to be on average either non-adhesive, or destructive (_Blum_ , 2018), which is referred to as the bouncing, erosion or fragmentation barriers. Fragmentation occurs for collision velocities of order $\sim$1–10 m.s-1, when monomer bonds are broken. The fragmentation thresholds appear to be higher for icy particles, but this is not true in general and depends on temperature and surface coating (_Homma et al._ , 2019; _Musiolik and Wurm_ , 2019; _Steinpilz et al._ , 2019; _Bischoff et al._ , 2020; _Arakawa and Krijt_ , 2021). The lab experiments appear to be in agreement with a maximum size of order $\sim$1 mm for the size distribution of dust grains analyzed on the comet 67P/Churyumov-Gerasimenko (_Blum et al._ , 2017a), and with a maximum size of $\sim$1 cm for meteoritic inclusions.

Besides the dust-gas instabilities discussed in the following sections, there may be other possible pathways to overcome these growth barriers. One involves rare low-velocity events that seed the growth of large bodies (_Windmark et al._ , 2012; _Garaud et al._ , 2013; _Booth et al._ , 2018). However, the growth timescales may be too long for solids to resist both the radial-drift barrier and erosion (_Schräpler et al._ , 2018). Porous aggregates made of small ultra-sticky icy monomers have also been invoked (_Potapov et al._ , 2020). The robustness of this scenario remains to be validated experimentally with matrix monomers of size 1–10 $\mu$m. Electrostatic charges, such as those resulting from photoelectric and plasma charging, were thought to inhibit growth efficiency (_Okuzumi_ , 2009; _Akimkin et al._ , 2020). Triboelectric charging may relieve this constraint, given collision velocities $\lesssim$0.1 m.s-1 (_Steinpilz et al._ , 2020b, a; _Teiser et al._ , 2021).
In any case, the lithospheric pressure of comets is only compatible with material of low tensile strength. Experiments reveal that such tensile strength cannot originate from collisional growth through mass transfer, but is consistent with a gentle collection of solids via a local gravitational collapse (_Blum et al._ , 2017a). So far, observations have probed continuum emission of solids up to a few centimeters (_Lommen et al._ , 2009; _Casassus et al._ , 2019). Although the forthcoming Square Kilometre Array will allow larger wavelengths to be probed, it is not clear whether continuum emission of decimeter-sized grains will be lost in free-free electron emission or not (_Dewdney et al._ , 2009). Another possibility to overcome the growth barriers is to concentrate grains aerodynamically by means of dust-gas instabilities into the form of dusty clouds, up to the stage where the local mass of solids is such that it collapses gravitationally (_Ward_ , 2000; _Shariff and Cuzzi_ , 2011; _Youdin_ , 2011; _Shi and Chiang_ , 2013; _Latter and Rosca_ , 2017). We dedicate the rest of this section to this pathway.

### 3.2 Streaming instability & resonant drag instabilities

#### 3.2.1 Linear phase

A linear analysis of the SI proceeds from the equations of motion for gas and dust under the local-shearing-box approximation, treating dust as a pressureless fluid that drifts inwards (see, e.g., _Youdin and Johansen_ , 2007, Equations (1)–(4)). Considering a small, homogeneous patch at $z=0$ within the midplane dust layer, one linearizes the equations to obtain algebraic equations for each Fourier mode, whose solutions give the linear growth rates. Faster-growing, larger-scale modes are considered likely to have a stronger nonlinear influence, although the link between linear-SI growth rates and planetesimal formation remains highly uncertain at this time.

Figure 2: The basic ingredients involved in the mechanism of the streaming instability. The feedback cycle (steps (1)–(4)) shows how an asymmetric clump of dust can dynamically drive gas and dust flows that subsequently enhance the density of the clump. We show the radial-azimuthal plane with dust density shown using brown dots and the motions of gas and dust relative to the background drifts shown with pink and brown arrows, respectively. During this process, the dust continues its bulk inwards drift; the pictured deflections are on top of this drift. For this feedback to operate, the perturbation must have a vertical structure, otherwise radial gas motions in step (1) are halted by the gas pressure because they are compressive. While the key physical principles shown here are valid in both the low- and high-$\epsilon_{\mathrm{d}}$ regimes, the sketch is necessarily oversimplified; more detailed versions are given in _Squire and Hopkins_ (2020).

##### Ideal streaming instability

Motivated by the study of _Goodman and Pindor_ (2000), the analysis described above was first carried out by _Youdin and Goodman_ (2005), who discovered the SI and suggested its importance to planetesimal formation. They showed that the key parameters governing the instability are the dimensionless stopping time $\tau_{\mathrm{s}}$ and the solid-to-gas density ratio $\epsilon_{\mathrm{d}}$ (the pressure support parameter $\eta$, defined by _Nakagawa et al._ , 1986, sets the velocity scale via the differential speed $\eta v_{\mathrm{K}}$ between gas and dust, but otherwise does not change the properties of the instability),
demonstrated that the motions induced by the SI act to clump the dust, and argued that in most relevant regimes the growth rates are large enough for the instability to reach its nonlinear stages well before the radial dust drift significantly modifies the equilibrium. A key feature uncovered in their analysis was the sudden increase in the SI’s growth rate for $\epsilon_{\mathrm{d}}\gtrsim 1$. Most of the analysis of _Youdin and Goodman_ (2005) solved for the eigenvalues numerically, which can be inconvenient for making simple estimates across a wide parameter range. In the $\epsilon_{\mathrm{d}}\lesssim 1$ regime, approximate expressions show that the SI grows faster – although at smaller scales – for near-vertical modes ($k_{z}\gg k_{r}$, $\theta_{k}\approx 0$), and grows more slowly for smaller grains (smaller $\tau_{\mathrm{s}}$). At $\epsilon_{\mathrm{d}}\gtrsim 1$, likely the more relevant regime for planetesimal formation, _Squire and Hopkins_ (2018a) showed that the SI changes its character and mechanism. For $\epsilon_{\mathrm{d}}\gg 1$ and $\tau_{\mathrm{s}}\ll 1$, its maximum growth rate is ${\rm Im}(\omega)/\Omega\approx\sqrt{\epsilon_{\mathrm{d}}-1}$ at radial wavenumber $k_{r}\eta r\approx 0.4\epsilon_{\mathrm{d}}^{19/8}\tau_{\mathrm{s}}^{-5/4}$. This growth is very rapid (${\rm Im}(\omega)>\Omega$), with a rate that increases monotonically with $\epsilon_{\mathrm{d}}$, although it operates at very small scales for small $\tau_{\mathrm{s}}$. Even in its most basic form, the mechanism for the streaming instability is more complex than most astrophysical fluid instabilities. Various works have presented simplified analyses, exploring reduced equations and the behavior of the roots of the dispersion relation to understand how the instability operates (see, e.g., _Jacquet et al._ , 2011; _Squire and Hopkins_ , 2018a; _Zhuravlev_ , 2019; _Jaupart and Laibe_ , 2020; _Pan_ , 2020; _Squire and Hopkins_ , 2020). These have shown that rotation, dust drift, and two-dimensional structure are key to the SI, although the background shear flow is not. The basic mechanism of its operation is sketched in Fig. 2, which shows how a radial clump of dust can drive a gas flow that – due to the Coriolis force – drives a radial drift of dust outwards (relative to the background dust drift), thus enhancing the original clump. In order for the associated gas flow to be incompressible, and thus not strongly resisted by gas pressure, the flows must have vertical structure. Note also that the gas pressure perturbations, which are driven in step (2), are not in phase with the dust-density perturbations in most regimes, complicating the picture beyond dust trapping in pressure bumps. This is consistent with the finding of _Lin and Youdin_ (2017) that an isothermal gas embedded with small grains is equivalent to a single fluid with a modified cooling, in which case a phase lag between pressure and dust-density perturbations is a necessary feature of growing oscillations, such as the SI. Interestingly, this feature seems to carry over to the nonlinear regime, where an anti-correlation between dust and gas density has been observed (see figure 7 of _Yang and Johansen_ 2014 as well as _Li et al._ 2018). Thus, the conventional notion of dust-trapping at pressure maxima does not necessarily apply under dynamical (i.e., non-steady) conditions such as instabilities.
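To give a feel for the numbers in the high-$\epsilon_{\mathrm{d}}$ limit quoted above, the short sketch below (valid only in the stated asymptotic regime $\epsilon_{\mathrm{d}}\gg 1$, $\tau_{\mathrm{s}}\ll 1$; the parameter values are illustrative) evaluates the approximate maximum growth rate and the radial wavenumber at which it occurs.

```python
def si_fast_growth(eps_d, tau_s):
    """Approximate SI growth rate and fastest-growing radial wavenumber in the
    eps_d >> 1, tau_s << 1 limit (Squire & Hopkins 2018a), as quoted in the text.
    Returns (Im(omega)/Omega, k_r * eta * r)."""
    growth = (eps_d - 1.0) ** 0.5                       # in units of Omega
    k_eta_r = 0.4 * eps_d ** (19.0 / 8.0) * tau_s ** (-5.0 / 4.0)
    return growth, k_eta_r

for eps_d in (2.0, 5.0, 10.0):
    for tau_s in (1e-3, 1e-2, 1e-1):
        g, k = si_fast_growth(eps_d, tau_s)
        # large k_r*eta*r means growth at very small spatial scales
        print(f"eps_d={eps_d:5.1f}  tau_s={tau_s:7.3f}  Im(w)/Omega={g:5.2f}  k_r*eta*r={k:10.3g}")
```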
##### Extensions: polydisperse grains, turbulence, and other drag instabilities

Any real disk will involve a wide spectrum of grain sizes ($\tau_{\mathrm{s}}$) at a given radial location, which is not taken into account in the above analysis. _Krapp et al._ (2019) first explored this “polydisperse” streaming instability, uncovering the unexpected result that in regimes with $\epsilon_{\mathrm{d}}\lesssim 1$ the maximal growth rate of the SI decreases monotonically as the number of discrete grain species is increased. Very low growth rates, or only upper bounds in some cases, are found with a true continuous distribution. Their results, which have been further explored in _Zhu and Yang_ (2021) and _Paardekooper et al._ (2020, 2021), suggest that the polydisperse SI is primarily controlled by $\epsilon_{\mathrm{d}}$ and the maximum $\tau_{\mathrm{s}}$ of the distribution ($\tau_{{\rm s,max}}$). Specifically, _Zhu and Yang_ (2021) found that the instability is robust (converged growth rates $\gtrsim 0.1\Omega$) only above a sharp, distinct boundary in $\epsilon_{\mathrm{d}}$-$\tau_{{\rm s,max}}$ space, with $\epsilon_{\mathrm{d}}\gtrsim 1$ or $\tau_{{\rm s,max}}\gtrsim 1$ (see also §3.2.2). The results also suggest that calculations involving polydisperse grains must carefully check convergence in the number of grain species. The presence of background turbulence, even at very low levels, may also adversely affect the SI by diffusing small-scale motions and dust density variations. Simple estimates can be obtained by modeling the effect of turbulence as causing a gas viscosity, dust pressure, and dust density diffusion, each parameterized by the disk $\alpha_{\mathrm{S}}$. The approach, explored by _Umurhan et al._ (2020) and _Chen and Lin_ (2020), suggests that, because turbulence decreases its growth rate and increases its spatial scale, the SI can operate only relatively far out in the disk beyond $\simeq\!10\,{\rm AU}$, and only for very low (though realistic) levels of turbulence $\alpha_{\mathrm{S}}\lesssim 10^{-4}$ and relatively large grains (see also _Squire and Hopkins_ , 2018a; _Jaupart and Laibe_ , 2020; _Zhuravlev_ , 2020, for further mathematical details). On the other hand, turbulence can also act to clump grains itself, an effect that cannot be captured by diffusive treatments (e.g., _Cuzzi et al._ , 2001; _Pan et al._ , 2011; _Hartlep et al._ , 2017; _Hartlep and Cuzzi_ , 2020). Finally, it is worth noting several extensions to the standard SI analysis, which have revealed related instabilities. _Pan and Yu_ (2020) considered non-axisymmetric motions in the radial-azimuthal plane (perturbations without vertical structure across the dust layer). They found a similar instability, as expected from numerical simulations (_Schreiber and Klahr_ , 2018), although there are some unique features and different scalings from the axisymmetric SI. _Auffinger and Laibe_ (2018) found an interesting related instability when the background profile is modified by a pressure bump, although its details and relationship to the standard SI remain unclear. The vertically global SI study of _Lin_ (2021) revealed a faster-growing instability driven by the vertical gradient of the dusty gas’s rotation velocity that is not captured by standard analyses (see also the unpublished study of _Ishitsu et al._ , 2009). This may dominate the SI and control the dust-layer thickness, showing interesting similarities to results of nonlinear simulations; more investigation is needed.
Finally, the RDI method of _Squire and Hopkins_ (2018a) also uncovered a related instability, termed the “settling instability,” which occurs as grains settle towards the midplane of the disk (see also _Lambrechts et al._ , 2016). Unlike standard SI, the instability grows rapidly even for the smallest grains at low $\epsilon_{\mathrm{d}}$, suggesting the possibility of early seeding of grain growth through clumping before grains reach the midplane. It is also more robust than SI to polydisperse grains, although the nonlinear clumping it causes may be rather weak (_Krapp et al._ , 2020).

#### 3.2.2 Nonlinear saturation and dust concentration

The discussion in §3.2.1 considers how a system of coupled gas and dust in a Keplerian disk responds to small perturbations. As the instability drives these perturbations to grow exponentially with time, they should ultimately become nonlinear, leading to a saturated, potentially turbulent, state. _Johansen and Youdin_ (2007) first studied the nonlinear saturation of the streaming instability and the resulting turbulence with an unstratified local shearing box and a single dust species. Depending on the dust size, the system could reach two distinctly different saturation states, a dichotomy that continues to be phenomenologically accurate for more complex systems. When the dimensionless stopping time $\tau_{\mathrm{s}}\gtrsim 1$, the dust undergoes traffic jams and organizes into axisymmetric dust filaments. When $\tau_{\mathrm{s}}\lesssim 0.1$, by contrast, the system generates numerous dust-gas vortices vertically, collecting dust in between. The former could lead to a $\sim$10$^{3}$ enhancement in dust concentration, while the latter leads to a much weaker $O(10)$ enhancement. The gas remains incompressible to a high degree ($\lesssim$0.1%). These saturated states are qualitatively consistent across various simulations with different numerical methods, although some issues in numerical convergence at the high-end tail of the particle density distribution might exist (_Bai and Stone_ , 2010; _Yang and Johansen_ , 2016; _Benítez-Llambay et al._ , 2019). As mentioned in §3.2.1, two distinct regimes occur in the linear growth of the streaming instability with polydisperse dust grains – fast and slow growth – and it appears that this dichotomy also carries over to their nonlinear saturation in unstratified disks (_Yang and Zhu_ , 2021). When $\epsilon_{\mathrm{d}}\gtrsim 1$ or $\tau_{\mathrm{s,max}}\gtrsim 1$, the dust-gas dynamics at the saturation state is similar to that driven by monodisperse dust grains with $\tau_{\mathrm{s}}\sim\tau_{\mathrm{s,max}}$ discussed above (see also _Schaffer et al._ , 2021). On the contrary, when $\epsilon_{\mathrm{d}}\lesssim 1$ and $\tau_{\mathrm{s,max}}\lesssim 1$, the system at the saturation state appears to be close to laminar. Interestingly, _Yang and Zhu_ (2021) found that when the saturation state is turbulent (i.e., $\epsilon_{\mathrm{d}}\gtrsim 1$ or $\tau_{\mathrm{s,max}}\gtrsim 1$), significant dust segregation by size occurs and the mean radial drift of dust grains of different size is noticeably altered from the drag-force equilibrium of _Nakagawa et al._ (1986). When vertical gravity from the central star is considered, the saturated turbulence driven by the streaming instability sustains a vertically stratified layer of dust with a finite scale height. The same dust-gas vortices could be seen near the mid-plane in the stratified simulations of _Yang et al._ (2017) with a single dust species.
With multiple species, _Bai and Stone_ (2010) and _Schaffer et al._ (2018) demonstrated clear scale separation between different species (see also _Yang and Zhu_ , 2021). If the initial solid loading is not sufficiently high, this equilibrated stratified dust layer can be maintained for more than thousands of orbital periods, at which point the simulations were ended. If the solid loading, quantified by the dust-to-gas _column_ density ratio $Z\equiv\Sigma_{\mathrm{d}}/\Sigma_{\mathrm{g}}$, is above some critical value $Z_{\mathrm{c}}$, strong concentration of solids occurs (_Johansen et al._ , 2009). This leads to nearly _axisymmetric_, dense, narrow filamentary structures of solids (_Yang and Johansen_ 2014; _Li et al._ 2018; see also _Flock and Mignone_ 2021), which may have observational consequences (_Scardoni et al._ , 2021). Without external turbulence, the typical radial separation between adjacent filaments is about 20% of the gas scale height $H_{\mathrm{g}}$. With external turbulence, such as that driven by non-ideal MHD, the separation can be on the order of $H_{\mathrm{g}}$ (_Yang et al._ , 2018). In any case, the separation appears to decrease with increasing $Z$ (_Yang et al._ , 2017, 2018). As mentioned in §3.2.1, these dense filaments are _not_ correlated with local pressure maxima (_Yang and Johansen_ , 2014; _Li et al._ , 2018) and hence the latter may not be responsible for driving the concentration of solids in this scenario. Furthermore, the boundary between strong clumping of solids when $Z\gtrsim Z_{\mathrm{c}}$ and no strong clumping when $Z\lesssim Z_{\mathrm{c}}$ appears to be sharp (see also below). However, the relationship of this boundary to the linear streaming instability discussed in §3.2.1 (e.g., its change at $\epsilon_{\mathrm{d}}\simeq 1$) is tenuous (see _Li and Youdin_ , 2021, §3.5). It remains to be understood whether this dichotomy arises because other linear instabilities are important (e.g., that of _Lin_ 2021 with vertical stratification), or whether clumping to Roche densities is inherently nonlinear. Further study of this important, but subtle, issue is needed.

Figure 3: Critical solid abundance $Z_{\mathrm{c}}$ as a function of dimensionless stopping time $\tau_{\mathrm{s}}$. Each line is the best fit to the numerical simulations conducted in different works, and above each line strong clumping of solids may occur and trigger the formation of planetesimals. All works compiled here assumed a dimensionless radial pressure gradient of $\Pi=0.05$.

The value of $Z_{\mathrm{c}}$ should depend on several parameters, including the dimensionless stopping time $\tau_{\mathrm{s}}$ (single or multiple species), the radial pressure gradient, as well as the dynamics of external disturbances. The radial pressure gradient is often quantified by the dimensionless number $\Pi\equiv\Delta u_{\phi}/c_{\mathrm{s}}$ (_Bai and Stone_ , 2010) or $\eta\equiv\Delta u_{\phi}/v_{\mathrm{K}}$ (_Nakagawa et al._ , 1986), where $\Delta u_{\phi}$ is the reduction in the equilibrium azimuthal velocity of the gas, and $c_{\mathrm{s}}$ and $v_{\mathrm{K}}$ are the local speed of sound and Keplerian speed, respectively. At a given and representative radial pressure gradient $\Pi=0.05$, without external disturbances, and with a single species, _Carrera et al._ (2015) first measured $Z_{\mathrm{c}}$ as a function of $\tau_{\mathrm{s}}$ by gradually removing the gas over the course of the simulations.
By fixing $Z$ instead in each simulation, the measurement for $\tau_{\mathrm{s}}<0.1$ was later modified by _Yang et al._ (2017), and _Li and Youdin_ (2021) further found a significantly lower threshold for $\tau_{\mathrm{s}}\gtrsim 0.015$. The comparison between the three works is shown in Fig. 3. To trigger strong solid concentration for small particles with $\tau_{\mathrm{s}}\lesssim 10^{-2}$, a value of $Z$ greater than a few percent is in general required. The timescale for forming dense solid filaments when $Z>Z_{\mathrm{c}}$ appears to scale inversely with $\tau_{\mathrm{s}}$, i.e., proportionally to the timescales of radial drift and vertical sedimentation, and in fact a smaller dust species may lead to a much stronger concentration than a larger one (_Yang et al._ , 2017). When varying the radial pressure gradient, it appears that the stronger the gradient, the more difficult it is to drive strong concentration of solids (_Bai and Stone_ , 2010; _Abod et al._ , 2019), and $Z_{\mathrm{c}}$ may simply scale linearly with $\Pi$ (_Sekiya and Onishi_ , 2018). Finally, for a system with a dust-size distribution, it remains unclear how to best quantify the critical condition (_Bai and Stone_ , 2010; _Schaffer et al._ , 2021). The evaluation of this critical solid abundance $Z_{\mathrm{c}}$ becomes complicated when external disturbances to dust-gas dynamics driven by (magneto-)hydrodynamical instabilities are present (see §2 and §4). For large particles with $\tau_{\mathrm{s}}\sim 1$, it was found that the MHD turbulence driven by ideal MRI in general assists local concentration of solids and hence triggers their strong clumping with a lower $Z_{\mathrm{c}}\simeq 1\%$ (as compared to $Z_{\mathrm{c}}\simeq 2$–3% with a similar numerical setup but without MHD turbulence) (_Johansen et al._ , 2007, 2011). However, it appears that under such ideal MHD turbulence, strong clumping of particles with $\tau_{\mathrm{s}}\sim 0.1$ is significantly more difficult to trigger, potentially making $Z_{\mathrm{c}}\gtrsim 8\%$ (_Yang et al._ , 2018). In contrast, the _anisotropic_ turbulence driven by non-ideal MHD or the VSI might alleviate this difficulty for planetesimal formation. _Yang et al._ (2018) found that in a layered accretion disk with an Ohmic dead zone, $Z_{\mathrm{c}}\simeq 2\%$ for particles of $\tau_{\mathrm{s}}\sim 0.1$. Similar $Z_{\mathrm{c}}$ was also found in disks dominated by ambipolar diffusion (_Xu and Bai_ , 2021). _Schäfer et al._ (2020) found that in some cases, the interaction between the VSI and the streaming instability makes the clumping of solids even stronger as compared to the latter alone. In both scenarios, the turbulence near the mid-plane is highly anisotropic, with the vertical stirring of particles much stronger than the radial one (see also _Zhu et al._ , 2015; _Stoll and Kley_ , 2016; _Riols and Lesur_ , 2018). This also raises the question of how sensitively the triggering of strong clumping depends on vertical sedimentation of solids and hence the local dust-to-gas density ratio $\epsilon_{\mathrm{d}}$ near the mid-plane.

### 3.3 Secular gravitational instabilities

The idea that gravitational instabilities (GI) in the dust layer cause solids to collapse into planetesimals has a long history (_Safronov_ , 1969; _Goldreich and Ward_ , 1973; _Youdin and Shu_ , 2002; reviewed in more detail by _Chiang and Youdin_ , 2010).
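Since the clumping thresholds discussed above are quoted at a fixed $\Pi=0.05$, it may help to see what $\Pi$ a smooth disk typically yields. The sketch below assumes an illustrative vertically isothermal power-law disk (the surface-density and temperature slopes are assumptions for this example, not values from the chapter) and uses the standard sub-Keplerian relation $\Pi=-\tfrac{1}{2}(H_{\mathrm{g}}/R)\,\mathrm{d}\ln P/\mathrm{d}\ln R$.

```python
def pressure_support_Pi(h_over_r, dlnSigma_dlnR=-1.0, dlnT_dlnR=-0.5):
    """Estimate Pi = Delta u_phi / c_s for a vertically isothermal power-law disk.
    Assumed (illustrative) structure: midplane pressure P ~ Sigma * c_s^2 / H,
    with H = c_s / Omega, so that
        dlnH/dlnR = 3/2 + (1/2) dlnT/dlnR
        dlnP/dlnR = dlnSigma/dlnR + dlnT/dlnR - dlnH/dlnR
    and the standard sub-Keplerian offset gives Pi = -(1/2)*(H/R)*dlnP/dlnR."""
    dlnH_dlnR = 1.5 + 0.5 * dlnT_dlnR
    dlnP_dlnR = dlnSigma_dlnR + dlnT_dlnR - dlnH_dlnR
    return -0.5 * h_over_r * dlnP_dlnR

for h in (0.03, 0.05, 0.1):
    print(f"H/R = {h:4.2f}  ->  Pi ~ {pressure_support_Pi(h):5.3f}")
```

For aspect ratios of a few percent this gives $\Pi\sim 0.04$–0.07, consistent with the representative value $\Pi=0.05$ adopted in the works compiled in Fig. 3.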
Several Solar System observations support the hypothesis of planetesimal formation by gravitational collapse (_Morbidelli et al._ , 2009; _Nesvorný et al._ , 2010, 2019; _Blum et al._ , 2017b; _McKinnon et al._ , 2020). The gaseous component of protoplanetary disks is at best marginally gravitationally unstable, and the dust component has less self-gravitating mass at early times. If the gas could be ignored, then collisional damping of particle motions ensures gravitational fragmentation (_Tanga et al._ , 2004; _Michikoshi et al._ , 2007). Gas cannot be ignored in the early stages of planet formation, which introduces both stabilizing and destabilizing effects on a self-gravitating dust layer. Gas drag can also lead to strong particle clumping by the SI or in pressure bumps (§3.1 and §3.2.2; see also _Pinilla and Youdin_ , 2017). Strong particle clumping clearly facilitates GI. The conditions for gravitational collapse in a dynamically clumping medium are addressed below. Dust gravitational instabilities in an otherwise smooth disk of gas and dust are relevant in the absence of strong clumping, e.g. outside pressure bumps and below SI clumping thresholds (Fig. 3; _Li and Youdin_ , 2021). Dust that is perfectly coupled to the gas gives a limiting case of GI. _Sekiya_ (1983) analyzed a midplane layer in this limit and found that GI requires a total density $\rho_{0}>0.17\rho_{\rm R}$, where $\rho_{\rm R}=3.53M_{\ast}/r^{3}$ is the Roche limit below which a hydrostatic fluid body is tidally disrupted (see _Chiang and Youdin_ , 2010 for discussion). If the midplane layer were dust-free, then applying this _Sekiya_ (1983) condition to the midplane of a gas disk (that is vertically isothermal with $\gamma=7/5$) translates to $Q_{\rm g}=c_{s}\Omega/(\pi G\Sigma_{\rm g})\lesssim 0.25$, stricter than usual due to finite thickness, the assumed incompressibility and other details of the sublayer analysis. With dust (more relevantly), instability occurs for midplane dust-gas ratios $\epsilon_{\mathrm{d}}\gtrsim 4Q_{\rm g}-1$, which is large for gravitationally stable gas disks. For larger grains that slip relative to gas, a new mode of secular gravitational instability (SGI) exists (_Ward_ , 1976, 2000; _Youdin_ , 2005; _Michikoshi et al._ , 2012). Due to the transfer of angular momentum from dust to gas, SGI modes can have wavelengths longer than the standard Toomre (dust) upper limit of $\lambda_{\rm T,d}=4\pi^{2}G\Sigma_{\rm d}/\Omega^{2}$. The possibility of SGI forming wide dust rings has been proposed as an explanation for observed structures in protoplanetary disks (_Takahashi and Inutsuka_ , 2016) and — after the rings fragment into many planetesimals — for the radial zonation of asteroid spectral classes (_Youdin_ , 2011). However, the viability of the SGI in real protoplanetary disks is unclear. Radial turbulent diffusion was shown to have a strong stabilizing effect on SGI, especially for smaller solids (_Youdin_ , 2011; _Shariff and Cuzzi_ , 2011). _Takahashi and Inutsuka_ (2014) analyzed “two fluid” models that include drag feedback (similar to the linear SI, but with self-gravity and without the vertical motions) and showed that sufficiently long wavelength modes are stabilized. Their stability criterion, valid for $\alpha_{\mathrm{D}}\ll\tau_{\rm s}\ll 1$, i.e.
weak turbulent diffusion and small solids, is $\displaystyle\epsilon_{\mathrm{d}}(1+\epsilon_{\mathrm{d}})>Q_{\rm g}^{2}$ (12) where we use $Q_{\rm g}=Q_{\rm g,sub}H_{\rm p}/H_{\rm g}=Q_{\rm g,sub}\sqrt{\alpha_{\mathrm{D}}/\tau_{\rm s}}$ to replace the Toomre $Q_{\rm g,sub}$ of the gas subdisk that appears in the original works (see also _Latter and Rosca_ , 2017; _Tominaga et al._ , 2020). If we require the gas disk to be gravitationally stable, $Q_{\rm g}\gtrsim 2$, then the instability condition becomes $\epsilon_{\mathrm{d}}\gtrsim Q_{\rm g}$, i.e. similar to the Roche-like criteria of _Sekiya_ (1983). In general, an effective viscosity that transports momentum is known to facilitate GI (_Lin and Kratter_ , 2016). _Tominaga et al._ (2019) discovered a viscous GI in a gas-dust sublayer with an approximate stability criterion $\displaystyle\frac{\epsilon_{\mathrm{d}}}{\sqrt{1+\epsilon_{\mathrm{d}}}}>Q_{\rm g}\sqrt{\frac{3\alpha}{\tau_{\rm s}}}\,,$ (13) where $\alpha\sim\alpha_{\mathrm{D}}\sim\alpha_{\mathrm{S}}\ll\tau_{\mathrm{s}}$ is assumed. Since $\alpha\ll\tau_{\rm s}$ is again required, this criterion is easier to satisfy than Equation (12). Simulations with turbulence that could act as an effective viscosity are thus strongly motivated. In general, investigations that go beyond height-integrated models of the isolated dust sublayer are needed to better determine the relevance of the SGI and related viscous GI for observed disk structures and planetesimal formation.

### 3.4 Formation of planetesimals and their expected properties

Since PPVI, it has become feasible to include the self-gravity of pebbles in simulations of the streaming instability to trigger gravitational collapse of local pebble concentrations into a large number of bound objects and hence planetesimals, leading to a statistically meaningful mass function of newborn planetesimals. Under similar conditions, the result appears insensitive to numerical methods, or to the use of sink particles (_Johansen et al._ , 2015) or clump finding (_Simon et al._ , 2016) to identify the bound objects, although some variability may exist between different realizations of the same system (_Rucska and Wadsley_ , 2021). Excluding the high-mass cutoff, it was found that the differential mass function could be represented by a power law $\mathrm{d}N/\mathrm{d}M\propto M^{-q}$ with $q\simeq 1.6$, where $M$ is the mass of a planetesimal. Therefore, small planetesimals dominate in number, while the few largest planetesimals dominate the total mass. This function does not seem particularly sensitive to the pebble size ($\tau_{\mathrm{s}}$) or the solid abundance ($Z$) (_Simon et al._ , 2017). On the other hand, the _efficiency_ of planetesimal formation does appear to depend on the radial pressure gradient (_Abod et al._ , 2019) and the strength of turbulence (_Gole et al._ 2020; see also the linear analyses by _Chen and Lin_ 2020 and _Umurhan et al._ 2020; §3.2.1). The higher the pressure gradient or the stronger the turbulence, the less efficient the formation of planetesimals is. In fact, the former could be understood by the more turbulent saturation state of the streaming instability (_Abod et al._ , 2019), indicating that the region near a pressure bump may be a favorable location for planetesimal formation (_Carrera et al._ , 2021). Moreover, there may exist a maximum turbulent forcing at $\alpha_{\mathrm{S}}\sim 10^{-3}$ above which planetesimal formation is inhibited (_Gole et al._ , 2020).
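The division of labor between number and mass implied by the shallow slope $q\simeq 1.6$ is easy to quantify; the sketch below (illustrative mass limits, with the high-mass cutoff discussed next ignored) integrates $\mathrm{d}N/\mathrm{d}M\propto M^{-q}$ to show where the objects and the mass reside.

```python
def number_and_mass_fractions(q=1.6, m_min=1e-4, m_max=1.0, m_split=1e-1):
    """For dN/dM ~ M^-q between m_min and m_max (arbitrary mass units),
    return the fraction of objects and of total mass above m_split.
    Pure power law: the exponential/broken-power-law cutoff is ignored here."""
    def integral(p, a, b):                 # integral of M^p dM (p != -1)
        return (b ** (p + 1) - a ** (p + 1)) / (p + 1)
    n_above = integral(-q, m_split, m_max) / integral(-q, m_min, m_max)
    m_above = integral(1 - q, m_split, m_max) / integral(1 - q, m_min, m_max)
    return n_above, m_above

n_frac, m_frac = number_and_mass_fractions()
print(f"Bodies above 10% of the maximum mass: {100*n_frac:.2f}% of the number, "
      f"{100*m_frac:.1f}% of the total mass")
```

With these illustrative limits, roughly one percent of the bodies carry over half of the mass, which is the sense in which the few largest planetesimals dominate the mass budget.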
In contrast to the differential mass function of planetesimals, the cumulative mass function shows that the largest few planetesimals follow a much steeper slope, indicating a high-mass cutoff. It was convenient to express this cutoff as an exponential tapering (_Johansen et al._ , 2015), while it is perhaps more accurate to describe the whole mass function by a broken power law (_Li et al._ , 2019). In any case, the “knee” of the mass function defines the characteristic mass $M_{\mathrm{c}}$ of newborn planetesimals. _Schäfer et al._ (2017) found that $M_{\mathrm{c}}$ correlates with the separation between axisymmetric dense filaments of pebbles driven by the streaming instability, and hence that $M_{\mathrm{c}}$ is directly proportional to the total mass reservoir inside each filament. Moreover, $M_{\mathrm{c}}$ could also depend on the balance between gravitational contraction and turbulent diffusion of a collapsing pebble cloud (_Schreiber and Klahr_ , 2018; _Gerbig et al._ , 2020; _Klahr and Schreiber_ , 2020, 2021). In an attempt to consolidate the simulations in the literature, _Liu et al._ (2020) fitted them and gave an expression for $M_{\mathrm{c}}$ as a function of $Z$, $\Pi$, the disk aspect ratio, as well as the strength of the disk gravity. Even though the mass function of planetesimals could be produced and analyzed with hydrodynamical simulations, these simulations could not resolve the process of gravitational collapse of the concentrated pebble clouds. To study the internal structure of a planetesimal resulting from such a process, these pre-collapse pebble clouds are taken as initial conditions and evolved using statistical or $N$-body methods. Factors under consideration included planetesimal mass, initial pebble size or size distribution, pebble porosity, pebble collisions, and gas drag. In general, the timescale for the collapse decreases with increasing planetesimal mass and is limited by the free-fall time when the planetesimal radius $R\gtrsim 100$ km (_Wahlberg Jansson and Johansen_ , 2014). At such a large size $R\gtrsim 100$ km, though, the collisions between pebbles become fragmenting and far fewer primordial pebbles are preserved (_Wahlberg Jansson et al._ , 2017). When $R\lesssim 10$ km, by contrast, most of the collisions may be bouncing and the resulting planetesimal could be a rubble pile. Nevertheless, the pebbles are compressed in bouncing collisions. Icy pebbles may be compacted up to a volume filling factor of $\sim$20%, while silicate pebbles reach $\sim$40% (_Lorek et al._ , 2016). The porosity of a planetesimal then depends on the final size distribution of pebbles; the more small dust, especially resulting from fragmenting collisions, the less porous the planetesimal is. Finally, gas drag causes size sorting of pebbles in a collapsing pebble cloud (_Wahlberg Jansson and Johansen_ , 2017; _Visser et al._ , 2021). While small pebbles have slow terminal velocities, large pebbles have long deceleration times. The combination of both effects along with the nebula conditions determines the pebble size distribution as a function of radial distance to the planetesimal center. Another essential factor is the initial angular momentum of a collapsing pebble cloud. It appears that almost all pebble clouds in nonlinear simulations of the streaming instability with self-gravity contained too much angular momentum to collapse into single objects and should instead fragment into planetesimal binaries (_Nesvorný et al._ , 2019).
The distribution of the angular momentum vector showed that about 80% of the binaries were prograde. Simulations of the gravitational collapse of these pebble clouds using $N$-body methods show that the clouds tend to form binaries of similar-sized objects with appreciable eccentricity $e\lesssim 0.6$ (_Nesvorný et al._ , 2010; _Robinson et al._ , 2020; _Nesvorný et al._ , 2021). Clouds with sufficiently low angular momentum may result in low contact speeds between the two components of the binaries and hence form a pathway to contact binaries (_Robinson et al._ 2020; _Nesvorný et al._ 2021; see also _Lyra et al._ 2021). On the other hand, clouds with high mass or angular momentum may lead to a hierarchical system or multiple binaries. The orbital inclination of the binary should be similar to that of the initial pebble cloud, which is limited by the vertical scale height of the pebbles and hence should be small.

## 4 MAGNETOHYDRODYNAMICAL PROCESSES & WINDS

### 4.1 Framework

#### 4.1.1 Ionization

Protoplanetary disks are observed to have both inflows (i.e., accretion) and outflows (winds). While purely hydrodynamic and thermal processes might be responsible for both phenomena, various arguments suggest that magnetohydrodynamic (MHD) processes are important. For example, known hydrodynamic instabilities that might give rise to a turbulent viscosity are weak (§2), except for gravitational instabilities in disks that are quite massive and therefore probably quite young (Class 0). Also, accelerating molecular outflows to speeds of several $\mathrm{km\,s^{-1}}$ is difficult to do purely thermally without heating the gas to such a temperature that the molecules dissociate, whereas magnetocentrifugal winds can in principle expel gas without heating it at all, at least in ideal MHD. MHD, however, requires the gas to support electrical currents, which in turn requires mobile charge carriers: free electrons, atomic and molecular ions, and even small charged grains or polycyclic aromatic hydrocarbons (PAHs). In a typical low-mass Class II system, temperatures are high enough for (partial) thermal ionization only at small radii ($\lesssim 0.1\textsc{au}$), where only a small fraction of the mass of the disk resides. The thermally ionized region becomes larger both for systems with high luminosity (e.g. for a Herbig star with 55 solar luminosities this region can extend to 1 au; _Flock et al._ , 2016) and for systems with high accretion rates, as here the MRI heating can warm and hence ionize the disk further outward (_Mohanty et al._ , 2018). Elsewhere in the disk, as in much of the interstellar medium (ISM), ionization relies on nonthermal sources. These include X-ray (_Glassgold et al._ , 2017) and far ultraviolet (_Perez-Becker and Chiang_ , 2011) photons emitted by the protostar’s corona and accretion layers, cosmic rays (_Rab et al._ , 2017; _Seifert et al._ , 2021), radionuclides such as $\mathrm{{}^{26}Al}$ (_Umebayashi and Nakano_ , 2009), and perhaps strong electric fields associated with MRI (_Inutsuka and Sano_ , 2005; _Okuzumi et al._ , 2019) or even lightning (_Whipple_ , 1966; _Desch and Cuzzi_ , 2000). _Güdel_ (2015) gives a pedagogical review of many of these processes.
The degree of ionization of protoplanetary disks is uncertain because of the uncertain strengths of the various ionizing processes listed above, but also because of the uncertain abundance and size distribution of grains, which tend to trap free charges and promote recombination (_Ivlev et al._ , 2016; _Thi et al._ , 2019, and references therein). Close to the star where midplane temperatures are 500-1500 K, however, thermionic emission from grains may actually increase the abundance of free electrons (_Desch and Turner_ , 2015; _Jankovic et al._ , 2022); at still hotter temperatures where the grains sublime, the main source is thermal ionization of alkali metals (K, Na, $\ldots$), which have low first-ionization potentials.

#### 4.1.2 Plasma conductivity

At magnetic field strengths and ionization levels sufficient to drive accretion at observed rates, the electrical conductivity behaves as a tensor rather than a scalar (_Wardle and Ng_ , 1999). That is to say, the electric field $\bm{E^{\prime}}$ in the local rest frame of the gas and the current density $\bm{J}$ are generally not parallel, because the mobile charges are deflected by the magnetic field to varying degrees depending on their charge-to-mass ratios and collision rates with the neutral gas. When the mass density in the charged species is negligible compared to that of the neutrals, the tensorial Ohm’s Law $\bm{E^{\prime}}=\bm{\upsigma^{-1}\cdot J}$ reduces to three terms (_Balbus and Terquem_ , 2001; _Bai_ , 2014) $\bm{E^{\prime}}=\eta_{\mathrm{O}}\bm{J}+\eta_{\mathrm{H}}\bm{J\times\hat{B}}+\eta_{A}\underbrace{\bm{\hat{B}\times}(\bm{J\times\hat{B}})}_{\bm{J}_{\perp}},$ (14) in which $\bm{\hat{B}}=\bm{B}/|\bm{B}|$ is a unit vector parallel to the magnetic field. The Ohmic diffusivity $\eta_{\mathrm{O}}$ is independent of $|\bm{B}|$ and therefore dominates when the field is weak compared to the pressure of the gas and the ionization fraction is very low; these conditions often prevail at the disk midplane. $\eta_{\mathrm{O}}$ is also the only one of the three diffusivities that contributes to $\bm{E^{\prime}\cdot B}$, and therefore the only one that can alter magnetic flux; the others preserve flux but allow it to drift relative to the neutral gas. Insofar as the charged particles collide with the neutrals more often than one another, $\eta_{\mathrm{O}}$ is inversely proportional to the fractional ionization, which typically needs to be $n_{e}/n_{\textsc{H}}\gtrsim 10^{-13}$ to support MRI in protoplanetary disks. The ambipolar diffusivity $\eta_{A}\propto|\bm{B}|^{2}$ dominates the slippage between the charged and neutral components when the field is strong and the ionization fraction is well above $10^{-13}$ (but still $\ll 1$), as tends to be the case at high altitudes in the disk and in their MHD winds. Although it preserves magnetic flux, ambipolar diffusion contributes to $\bm{E^{\prime}\cdot J}$, and therefore dissipates magnetic energy (not flux), which heats the gas. Finally, the Hall diffusivity $\eta_{\mathrm{H}}$ scales linearly with field strength. It can be important when the dominant charge carriers of one sign are magnetized—meaning that their collision rate with the neutrals is less than their rate of gyration around field lines—while those of the other sign are not. At inferred field strengths, this tends to be the case at intermediate altitudes in the disk: i.e.
above the midplane, but below the FUV dissociation front where disk winds may be launched (_Wardle_ , 2007), and $\eta_{\mathrm{H}}$ may be important even where subdominant (_Wurster_ , 2021). Unlike the other two diffusivities, the Hall term contributes neither to $\bm{E^{\prime}\cdot B}$ nor to $\bm{E^{\prime}\cdot J}$ and is therefore not directly dissipative. But it does contribute to flux migration, yielding an $\bm{E^{\prime}\times B}$ drift along $-\bm{J}_{\perp}$.

#### 4.1.3 Magnetic field

Once the gas is coupled to magnetic fields, the magnetic field strength and topology become key control parameters for the dynamics of the system. It is usual to quantify the field strength with the dimensionless plasma parameter $\beta\equiv 8\pi P/B^{2}$. While it is in principle possible to generate poloidal field loops in situ by dynamo action, such a dynamo remains difficult to sustain in the regions $R>1\,\mathrm{AU}$ because of the strong magnetic diffusion (_Simon et al._ , 2013; _Riols et al._ , 2021). Hence, the large-scale poloidal field threading the disk $B_{\mathrm{p}}$ (that is to say, averaging out the turbulent part) is conserved: i.e. it can only be accreted or diffused away from the central star during the system lifetime. As a result, it is often the key control parameter of MHD processes in disks, quantified in terms of $\beta_{\mathrm{p}}\equiv 8\pi P_{\mathrm{mid}}/B_{p}^{2}$, where we have used the mid-plane pressure $P_{\mathrm{mid}}$. While $\beta_{\mathrm{p}}$ is convenient from a theoretical standpoint, it is also useful to connect it to physical field strengths. Using a “typical” T Tauri disk with $\Sigma=200\,\mathrm{g.cm}^{-2}R_{\mathrm{AU}}^{-1}$, one gets $B_{p}\simeq 10\,R_{\mathrm{AU}}^{-11/8}\beta_{\mathrm{p}}^{-1/2}\,\mathrm{G}$ (_Lesur_ , 2021a, eq. 2.4). Hence, the typical value expected in disks with $\beta_{\mathrm{p}}\sim 10^{4}$ is $B_{p}\sim 5\,\mathrm{mG}$ at 10$\,\mathrm{AU}$. If the large-scale field dominates angular momentum transport, and hence an MHD wind is launched (§4.3.2), it can be shown that the total field $B$ is tightly linked to the accretion rate. It is then possible to deduce a minimum field strength for a given accretion rate: $B\gtrsim 0.31\,R_{\mathrm{AU}}^{-5/4}(\dot{M}_{\mathrm{acc}}/10^{-7}M_{\odot}.\mathrm{yr}^{-1})^{1/2}\,\mathrm{G}$ (_Bai and Goodman_ , 2009). Note that this lower limit should be taken with care given that in most MHD winds, $B$ is dominated by its toroidal component, which changes sign between the two sides of the disk, and hence is relatively difficult to measure directly from optically thin tracers.

### 4.2 MRI-driven turbulence

The magneto-rotational instability (_Balbus and Hawley_ , 1991) plays an important role in the accretion process and, more generally, in protoplanetary disk evolution. Over the last 30 years, many works have investigated the MRI in the context of protoplanetary disks, using theoretical models (_Turner et al._ , 2014) and lab experiments (_Rüdiger et al._ , 2018). From our current understanding, we believe that in the inner (T $>$ 1000 K) and outer protoplanetary disk regions ($\Sigma\leq 15\rm\,gcm^{-2}$), the conditions for the MRI are met, which are: a weak magnetic field $\beta_{\mathrm{p}}\geq 1$ (_Lesur_ , 2021a, eq. 6.64), shear, and high enough ionization (§4.1).
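Both field-strength estimates above reduce to simple scalings; the snippet below evaluates them (a sketch only; the disk model and accretion rates are the illustrative values quoted in the text and in _Bai and Goodman_ (2009), not new results).

```python
def B_poloidal_gauss(R_au, beta_p):
    """Large-scale poloidal field from the beta_p scaling quoted above,
    for the assumed Sigma = 200 g/cm^2 * (R/AU)^-1 T Tauri disk."""
    return 10.0 * R_au ** (-11.0 / 8.0) * beta_p ** (-0.5)

def B_min_gauss(R_au, mdot_msun_per_yr):
    """Minimum total field strength for wind-driven accretion at rate mdot
    (Bai & Goodman 2009), as quoted above."""
    return 0.31 * R_au ** (-5.0 / 4.0) * (mdot_msun_per_yr / 1e-7) ** 0.5

R = 10.0  # AU
print(f"B_p(10 AU, beta_p=1e4)      ~ {1e3 * B_poloidal_gauss(R, 1e4):.1f} mG")  # ~4 mG, cf. the ~5 mG quoted above
print(f"B_min(10 AU, 1e-8 Msun/yr)  ~ {1e3 * B_min_gauss(R, 1e-8):.1f} mG")
```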
In this section we highlight the most important achievements since PPVI, more specifically, the convergence of the MRI under a net-vertical field, the MRI with realistic thermodynamics, and the MRI in the regime from ideal to non-ideal MHD using local and global simulations.

#### 4.2.1 Quasi-ideal regime

In the quasi-ideal MHD regime (that is to say, when diffusive length-scales are much smaller than $H$) and with a net-flux vertical magnetic field present, the saturation of the MRI is reached by the super-exponential growth of parasitic instabilities, halting the further growth of the MRI (_Murphy and Pessah_ , 2015; _Hirai et al._ , 2018). Such conditions of saturated MRI activity are present in the inner and outer regions of protoplanetary disks, and many global simulations confirmed the efficiency of the MRI using a net-vertical magnetic field (_Parkin_ , 2014; _Suzuki and Inutsuka_ , 2014; _Flock et al._ , 2017a; _Zhu and Stone_ , 2018). Fully saturated MRI turbulence shows an angular momentum transport coefficient between $\alpha_{\mathrm{S}}=0.01$ and $1.0$ depending on the strength and topology of the initial magnetic field. For instance, _Salvesen et al._ (2016) found $\alpha_{\mathrm{S}}\simeq 11\beta_{\mathrm{p}}^{-0.53}$ for $10<\beta_{\mathrm{p}}<10^{5}$ in stratified shearing box simulations threaded by a vertical field, indicating that $\alpha_{\mathrm{S}}$ is roughly proportional to $B_{p}$. While this result holds in ideal MHD, in protoplanetary disks we expect the magnetic Prandtl number $\mathrm{Pm}$, the ratio between the viscous and the magnetic diffusion rates, to be very small, even in the regions where ideal MHD applies. This small-$\mathrm{Pm}$ regime was investigated using non-stratified high-resolution simulations, reaching convergence for the first time (_Meheut et al._ , 2015) with a value of $\alpha_{\mathrm{S}}=3.2\times 10^{-2}$ and a net horizontal magnetic flux with $\beta=10^{3}$. In contrast, zero-net-flux MRI simulations in the low-$\mathrm{Pm}$ limit show turbulent decay (_Mamatsashvili et al._ , 2020) and no sustained turbulence. Further studies are needed to understand the characteristics of the MRI at different levels of $\mathrm{Pm}$ and different magnetic configurations.

#### 4.2.2 Non-ideal regime

Moving radially outward in the protoplanetary disk, the non-ideal terms of Ohmic, Hall and ambipolar diffusion become more important (§4.1), efficiently reducing the level of MRI turbulence (_Bai_ , 2015; _Simon et al._ , 2018) down to $\alpha_{\mathrm{S}}$ of a few $10^{-4}$. For these non-ideal terms, many disk parameters now play an important role, since the magnetic diffusivities (Eq. 14) depend on the magnetic field strength, the local density of gas and dust, and the local high-energy radiation field of X-rays, cosmic rays and even FUV (_Gole and Simon_ , 2018). These non-ideal MHD regions are prone to magnetic flux concentrations, triggering zonal flows which appear as long-lived axisymmetric perturbations and which arise in the Ohmic (_Bai and Stone_ , 2014), Hall (_Kunz and Lesur_ , 2013) and ambipolar regimes (_Simon and Armitage_ , 2014). At the transition between dead and active zones, these perturbations and flux concentrations are enhanced (_Flock et al._ , 2015), halting the radial drift of pebbles (_Ruge et al._ , 2016). The study of the non-ideal MHD evolution of the disk has become more important in recent years, especially because of its ability to cause variations in the surface density.
Future studies should extend the parameter regime of the non-ideal terms and investigate their importance for the magnetic evolution of the disk.

#### 4.2.3 Thermodynamics

In recent years, local and global radiation MHD simulations have investigated the combined effects of radiative transfer and magnetohydrodynamics, capturing the important heating and cooling processes together with the MRI. In optically thin regimes of protoplanetary disks, the temperature is mostly set by irradiation heating, and the MRI strength adapts to the local profile of temperature and pressure, as shown by _Flock et al._ (2013) using global radiation magnetohydrodynamical simulations of protoplanetary disks including irradiation. In the optically thick regime (_Hirose et al._ , 2014; _Scepi et al._ , 2018), the heating by the MRI leads to an increase in the disk temperature, and hence in the disk scale height, which further enhances the turbulence and leads to convection. Most of the MHD heating takes place in magnetic reconnection layers (_Ross and Latter_ , 2018), for which high resolution is generally required (_McNally et al._ , 2014) to resolve the heating rate. In protoplanetary disks the inner gas disk is believed to remain mostly optically thin to its own thermal radiation (_Flock et al._ , 2017a); however, a detailed set of gas opacities remains under investigation. The inner gas disk in particular, between the silicate sublimation line and the co-rotation radius, depends heavily on the heating and cooling processes, and hence on the gas opacity, which includes various molecular lines (_Malygin et al._ , 2014). At the same time, MRI activity in this region controls the last stage of the accretion process, transporting matter onto the star while feeding the magnetically driven winds in this inner region. Future studies of the thermodynamics and the MRI should improve the gas opacities to show the importance of the heating and cooling processes, especially for the evolution of the inner, MRI-active, gas disk.

#### 4.2.4 Outlook

The saturation level, the strength and the appearance of MRI turbulence depend on many disk parameters. Most important here are the local degree of ionization and the large-scale magnetic field $B_{p}$ threading the disk, as they determine the appearance and the strength of MRI activity, both in the ideal and non-ideal regimes. Future studies should focus on the non-ideal terms, the long-term magnetic flux evolution and the proper thermodynamics to study the importance of the MRI for the disk evolution. Solving for detailed thermodynamics is crucial not only to obtain realistic levels of turbulence but also to enable synthetic observations. Heating by the non-ideal MHD terms (_Mori et al._ , 2019) is significant and should be captured in future studies.

### 4.3 Outflows

Outflows have been proposed since the 1990s to explain the atomic jets observed in several YSOs (_Frank et al._ , 2014). The distinction between jets and winds is relatively unclear in the literature, but it is usually implicitly linked to the outflow collimation and velocity: while jets are assumed to be fast (100–1000 km/s) and well collimated, and therefore appear narrow at large distances, winds are usually seen as slow (1–30 km/s) conical outflows. These slow outflows are sometimes called “molecular” since they are detected in molecular lines.
Here, we make the distinction between two kinds of outflows: thermally driven outflows (photoevaporation), for which the energy source is primarily an external radiation field (which could be due to the central star or the immediate environment), and magnetised outflows, which are driven by accretion energy.

#### 4.3.1 Thermal outflows

Thermal (photoevaporative) winds are covered in detail in the Chapter by Pascucci et al., where current efforts to detect these winds in gas line observations are also reviewed. While significant progress has been made on the theoretical modeling of these winds since PPVI, the broad-brush picture of how thermal winds work and their effect on the surface density evolution of disks has been largely confirmed. Recent calculations, however, have highlighted some important discrepancies in the prediction of wind properties, which will need to be resolved before the models can be used to quantitatively interpret observations, as detailed in the Chapter by Pascucci et al. In the simple picture, gas can only be thermally unbound if it is heated to temperatures that approach the local escape temperature, $T_{\rm esc}=GM_{*}\bar{m}/(rk_{\textsc{b}})$, where $\bar{m}$ is the mean mass per gas particle; this is $\approx 7\times 10^{4}$ K for $M_{*}=M_{\odot}$, $r=1\textsc{au}$, and $\bar{m}=0.68m_{p}$ (ionized hydrogen at solar abundance). Depending on the irradiating spectrum, gas is heated to different temperatures. As an example, extreme-ultraviolet (EUV) radiation ($13.6\,eV<E<100\,eV$) heats the gas mainly by photoionization of hydrogen and helium, generally yielding an almost isothermal gas with temperature around $10^{4}\,\mathrm{K}$. For geometrically thin, non self-gravitating disks around a 1$M_{\odot}$ star, the radius at which the escape temperature of the gas equals $10^{4}\,\mathrm{K}$ is approximately $7\,\mathrm{AU}$. The mass loss profile, in this case, is peaked around the so-called gravitational radius of the system, which is the cylindrical radius at which the sound speed of the heated gas equals the Keplerian orbital speed (_Alexander et al._ , 2014), and has a total rate of about 10${}^{-10}M_{\odot}/yr$ assuming an EUV flux of 10${}^{41}phot/sec$ (_Alexander et al._ , 2006a, b), as demonstrated by isothermal hydrodynamical simulations. Soft X-ray radiation ($100\,eV<E<1\,keV$) is not efficient at ionizing hydrogen directly, but it ejects inner-shell electrons from abundant metals (e.g. carbon and oxygen); the ejected photoelectrons have suprathermal energies and result in secondary ionizations. This yields a more weakly ionized gas with a range of temperatures from a few thousand to $10^{4}\,\mathrm{K}$. Two-dimensional radiation-hydrodynamic models yield mass loss profiles that are more extended with respect to the EUV case and total mass loss rates of order 10${}^{-8}M_{\odot}/yr$, assuming a soft X-ray luminosity of 10${}^{30}erg/sec$ (_Owen et al._ , 2010; _Picogna et al._ , 2019), or even higher for carbon-depleted disks (_Wölfer et al._ , 2019). The dominant gas heating mechanism for FUV photons ($E<13.6\,eV$) is photoelectric heating from dust grains, which is proportional to the grain surface area. Thus, in the presence of abundant small grains or PAHs in the disk atmosphere, FUV photons can efficiently heat the gas and may also contribute to driving a thermal wind, as suggested by hydrostatic 1+1D models (e.g., _Gorti et al._ , 2009).
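A direct evaluation of the escape-temperature estimate above (standard physical constants; a sketch, not a model from the chapter) gives the numbers quoted for a solar-mass star:

```python
G = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
m_p = 1.6726e-27       # kg
k_B = 1.3807e-23       # J/K
AU = 1.496e11          # m

def T_esc(r_au, M_star=M_sun, mbar=0.68 * m_p):
    """Local escape temperature T_esc = G M_* mbar / (r k_B), as defined above."""
    return G * M_star * mbar / (r_au * AU * k_B)

print(f"T_esc(1 AU) ~ {T_esc(1.0):.2e} K")        # ~7e4 K
# radius at which T_esc drops to 1e4 K, so ~1e4 K EUV-heated gas becomes unbound
r_crit = T_esc(1.0) / 1e4
print(f"T_esc = 1e4 K at r ~ {r_crit:.1f} AU")    # ~7 AU
```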
However, uncertainties in the atmospheric abundance of PAHs, which are rarely observed in T Tauri disks (_Seok and Li_ , 2017), and the variable nature of the FUV flux, which is linked to the accretion rate onto the central object, make it difficult to assess the role of FUV-driven winds in the final disk dispersal. Recent calculations of thermal winds have combined two-dimensional (axisymmetric) hydrodynamics with explicit, though simplified, treatments of the chemical and thermal state of the gas under the influence of these radiative processes (_Wang and Goodman_ , 2017; _Nakatani et al._ , 2018a, b). These works find that X-rays alone are less effective at driving photoevaporation than EUV (_Wang and Goodman_ , 2017) or FUV (_Nakatani et al._ , 2018b), except at very low gas metallicities or if some cooling processes are suppressed, contrary to _Owen et al._ (2010). However, the former assumed harder X-ray spectra than the latter: hard ($E\geq 1$ keV) X-rays penetrate more deeply than softer ones, thereby warming a larger volume of gas to a lower temperature, possibly $<T_{\rm esc}$, for the same luminosity. _Wang and Goodman_ (2017) and _Nakatani et al._ (2018b) also differ among themselves as to the relative importance of EUV and FUV. Clearly, more work is needed, but it might be helpful to propose a fiducial set of radiation fields and disk structures against which different groups could test their codes.

#### 4.3.2 MHD outflows

The lack of any systematic process to drive turbulence in protoplanetary disks has revived interest in the magnetised outflow (MO) scenario, especially in the weakly ionized regions, following an idea initially proposed by _Wardle and Königl_ (1993) and subsequently developed by _Königl et al._ (2010) and _Salmeron et al._ (2011). These MOs launched from weakly ionized disks received more support from numerical simulations, first in shearing box models. _Bai and Stone_ (2013) first found that a magnetically dead disk dominated by Ohmic and ambipolar diffusion could still emit an MO thanks to the ionized layer localized at the disk surface. This result was quickly generalized to the outer, more turbulent, regions ($R>30$ AU, _Simon et al._ 2013) and to models including the Hall effect (_Lesur et al._ , 2014). Despite their importance, these early shearing box models suffer from an oversimplification of the gravitational potential at large $z$ (_Turner et al._ , 2014), which makes them unreliable for studying outflows. Hence, since PPVI, a lot of effort has been devoted to confirming these results in global models, which do not suffer from these drawbacks. The first generation of global simulations including non-ideal MHD effects (_Gressel et al._ , 2015; _Béthune et al._ , 2017; _Bai_ , 2017) demonstrated that these outflows were not a mere artifact of the shearing box model, and could indeed explain accretion rates of the order of $10^{-8}\,M_{\odot}/\mathrm{yr}$, assuming a disk surface density $\Sigma\simeq 10\,\mathrm{g.cm}^{-2}$ at $R=10\,\mathrm{AU}$ with a disk magnetization $\beta_{\mathrm{p}}\simeq 10^{5}$. It was also found that the mass loss rate in the wind $\dot{M}_{\mathrm{wind}}=2\pi\int\,R\mathrm{d}R\zeta\Sigma\Omega_{K}$ (see eq. 1) was of the order of the mass accretion rate $\dot{M}_{\mathrm{acc}}$. A way to interpret these high mass ejection rates is to use the locally defined ejection efficiency $\displaystyle\xi=\frac{1}{\dot{M}_{\mathrm{acc}}}\frac{\mathrm{d}\dot{M}_{\mathrm{wind}}}{\mathrm{d}\log R}$ to compare ejection to accretion rates.
In steady state, and assuming a constant $\xi$, it is easy to show that $\displaystyle\dot{M}_{\mathrm{wind}}=\dot{M}_{\mathrm{acc}}(R_{\mathrm{in}})\left(\left(\frac{R_{\mathrm{out}}}{R_{\mathrm{in}}}\right)^{\xi}-1\right)$ for a wind-emitting disk with inner radius $R_{\mathrm{in}}$ and outer radius $R_{\mathrm{out}}$ (these two radii can in principle be arbitrarily chosen in the wind-emitting region, but we will assume here that they correspond to the radial extension of the wind-emitting region). It can be shown that if accretion is the only energy source for the outflow, then energy conservation requires $\xi<1$ (_Ferreira and Pelletier_ , 1995; _Lesur_ , 2021a, eq. 11.40). This limit is usually exceeded in the first generation of simulations ($\xi\sim 1$ in _Béthune et al._ 2017, $\xi=1.5$ in _Bai_ 2017), indicating that these outflows cannot be driven solely by accretion energy, and that another energy source must contribute to the outflow. Since all of these early solutions included some sort of prescribed coronal heating, it was quickly realized that these outflows could be of magneto-thermal origin (i.e. the outflow is a mixture of photo-evaporation and MHD wind).

#### 4.3.3 Magneto-thermal winds

The very high mass loading in these simulations triggered the search for more accurate models of the wind thermodynamics, including some of the processes involved in photo-evaporation (§4.3.1). Using this type of approach, _Wang et al._ (2019) found ejection indices slightly lower than in the first global models, with $\xi=0.5$–1, a result also recovered by _Gressel et al._ (2020). Hence, the thermal driving turned out to be less important than anticipated in earlier models. _Rodenkirch et al._ (2020) showed solutions of a hot magneto-thermal wind with a transition to a photoevaporation-dominated wind for very weak magnetic fields, $\beta>10^{7}$. Using numerical self-similar solutions, _Lesur_ (2021b) also found solutions with $\xi\sim 0.5$ without any coronal heating, confirming that massive outflows with $\xi\lesssim 1$ are mostly a result of the low magnetizations (high $\beta_{p}$) required to match the accretion rates expected in YSOs, rather than a thermal effect. Overall, since $\xi\sim 1$ in all of the MO simulations published to date with $\beta_{\mathrm{p}}\gg 1$, one expects $\dot{M}_{\mathrm{wind}}\gtrsim\dot{M}_{\mathrm{acc}}$ if $R_{\mathrm{out}}/R_{\mathrm{in}}\gtrsim 2$ (i.e. if the wind is launched from a significant fraction of the disk surface). Hence, very massive outflows should be the rule rather than the exception, from a theoretical standpoint. The fact that these outflows are very massive also has an impact on their kinematics and observational signature. For instance, it is customary to measure the amount of angular momentum extracted by an MO by its lever arm $\lambda$ (see eq. 2). Theoretically, $\lambda$ can be related to $\xi$ through $2\xi\simeq 1/(\lambda-1)$ (_Lesur_ , 2021a, eq. 11.40). Hence, the outflows emitted in magneto-thermal models all have $\lambda\simeq 1.5$. Because $\lambda$ is a direct measure of the angular momentum “excess”, it can also be deduced by measuring the azimuthal and poloidal velocity components of a given outflow (_Anderson et al._ , 2003), which gives a relatively direct test of these models on observed jets and molecular winds (see the chapter by Pascucci et al. in the same book).
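The two relations above lend themselves to a quick numerical check; the sketch below (the ejection efficiencies and the radial extent of the wind-emitting region are illustrative choices) tabulates the cumulative wind-to-accretion mass-rate ratio and the corresponding lever arm.

```python
def mdot_wind_over_acc(xi, r_out_over_in):
    """Cumulative wind/accretion mass-rate ratio for a constant ejection
    efficiency xi over a wind-emitting region R_in..R_out (relation above)."""
    return r_out_over_in ** xi - 1.0

def lever_arm(xi):
    """Magnetic lever arm from 2*xi ~ 1/(lambda - 1)."""
    return 1.0 + 1.0 / (2.0 * xi)

for xi in (0.1, 0.5, 1.0):
    ratio = mdot_wind_over_acc(xi, r_out_over_in=30.0)  # e.g. wind launched over 1-30 AU (assumed)
    print(f"xi={xi:3.1f}  lambda~{lever_arm(xi):4.2f}  Mdot_wind/Mdot_acc~{ratio:5.2f}")
```

For $\xi\sim 1$ this indeed gives $\lambda\simeq 1.5$ and a wind mass-loss rate comparable to, or exceeding, the accretion rate, as stated above.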
Pascucci’s chapter) Because of these very low $\lambda$ and high $\xi$, these magneto-thermal outflows are relatively far in parameter space from the magneto-centrifugal mechanism of _Blandford and Payne_ (1982) from which previous accretion- ejection solutions have been derived (_Wardle and Königl_ , 1993; _Ferreira_ , 1997). Indeed, these early solutions all had $\beta_{\mathrm{p}}\simeq 1$, with $\lambda\gg 1$ and $\xi\ll 1$ and lead to sonic accretion speeds which makes them incompatible with the known accretion rate and surface density of PPDs. As pointed out by _Bai et al._ (2016), the new $\beta_{\mathrm{p}}\gg 1$ solutions are closer to the magnetic tower scenario of _Lynden-Bell_ (2003) since they are mostly driven by the warping of the toroidal magnetic field at the disk ionized surface. However, it is worth noting that MOs exist at all $\beta_{\mathrm{p}}$ and make a continuum between these two extreme limits (_Jacquemin-Ide et al._ , 2019; _Lesur_ , 2021b), hence MHD-driven outflows should be much more general in accreting systems than thought in the past. Outlook Finally, if MOs are key in the angular momentum transport of PPDs, it also implies that our understanding of their secular evolution should be revised. First and foremost, if the mass accretion rate $\dot{M}_{\mathrm{acc}}$ is driven by the outflow stress (eq. 2), then $\dot{M}_{\mathrm{acc}}$ is not proportional to the density gradient as in the viscous disk model. As a result, such a disk might not spread radially as efficiently as viscous disks (though see _Yang and Bai_ 2021) nor fill density gaps spontaneously. Related to this question, there are hints that planet migration (rate and direction) is probably affected by MOs (_McNally et al._ , 2020; _Kimmig et al._ , 2020) though these early results have to be confirmed by models including self-consistent MOs. This will likely have measurable consequences on planet population synthesis models and therefore predicted planet populations (see chapter led by S-J Paardekopper). Since MOs are structures driven by the large-scale field threading the disk, the strength of this field becomes crucial to predict the evolution of the system. Indeed, all of the models to date show that the mass loss $\zeta$ and lever arm $\lambda$ are decreasing functions of $\beta_{\mathrm{p}}$. As a result, the mass accretion and ejection rates are partially controlled by the local field strength, which in principle can evolve with time. As a result, the secular evolution of PPDs might be dictated by the evolution of their magnetic flux. One of the consequences is that it becomes possible to create large cavities in a disk as a result of magnetic field concentration in the inner disk regions, leading for instance to an alternative model of transition disks (see §5.3). These prospects are tightly linked to the question of the large-scale magnetic field strength and transport in these disks, which is still largely unknown and poorly described. From a theoretical point of view, there is no consensus on how this large-scale field should evolve. Global simulations in the non- ideal MHD regime (_Bai and Stone_ , 2017; _Gressel et al._ , 2020; _Lesur_ , 2021b) suggest that the large-scale field tend to diffuse outwards for $R\gtrsim 1$ AU, with a velocity increasing with the field strength, but some theoretical models suggest the field can be transported inwards (_Leung and Ogilvie_ , 2019). 
In the ideal MHD regime, which should be valid for $R\lesssim 1\,\mathrm{AU}$ in PPDs (§4.1), the magnetic field is instead transported inwards (_Zhu and Stone_ , 2018; _Jacquemin-Ide et al._ , 2021), at a speed that also increases with the field strength. Hence, the behavior of disks in the non-ideal and ideal regimes regarding field transport appears to be opposite (fig. 4). If this picture is confirmed, it would suggest the formation of a strongly magnetized inner disk surrounded by an outer disk that slowly loses its initial magnetic field. There is little doubt that such a scenario would in turn impact planet formation mechanisms (e.g. _Suzuki et al._ 2016), but it still needs to be confirmed.
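As a quick numerical check of the relations quoted in §§4.3.2–4.3.3 (a back-of-the-envelope illustration only, not the result of any specific simulation), take $\xi\simeq 1$, the typical value found in the global models discussed above. The identity $2\xi\simeq 1/(\lambda-1)$ then gives $\lambda\simeq 1.5$, and the steady-state mass budget gives $\dot{M}_{\mathrm{wind}}/\dot{M}_{\mathrm{acc}}(R_{\mathrm{in}})=(R_{\mathrm{out}}/R_{\mathrm{in}})^{\xi}-1$: the outflow already carries as much mass as is accreted at the inner edge for $R_{\mathrm{out}}/R_{\mathrm{in}}=2$, and about nine times more for $R_{\mathrm{out}}/R_{\mathrm{in}}=10$, which is why very massive outflows are expected to be the rule whenever the wind is launched from a large fraction of the disk surface.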
# Transferring Neural Potentials For High Order Dependency Parsing Farshad Noravesh (Email<EMAIL_ADDRESS> ###### Abstract High order dependency parsing leverages high order features such as siblings or grandchildren to improve on the state-of-the-art accuracy of current first order dependency parsers. The present paper uses biaffine scores to provide an estimate of the arc scores, which are then propagated into a graphical model. The inference inside the graphical model is solved using dual decomposition. The present algorithm propagates biaffine neural scores to the graphical model and, by leveraging dual decomposition inference, the overall circuit is trained end-to-end to transfer first order information to the high order information. ## 1 Introduction Dependency parsing is the basis of many complex pipelines for problems in natural language processing such as machine summarization, machine translation, event extraction, semantic parsing, semantic role labeling (SRL), emotion analysis, dialogue systems and information processing. Thus, any error in dependency parsing could propagate to downstream tasks, and therefore any advance in this field could lead to major improvements in NLP tasks. There are two main approaches to dependency parsing. The first approach is transition based, which performs incremental local inference and involves using data structures such as a buffer and a stack (Nivre, 2008), (Buys & Blunsom, 2015). This approach is limited to resolving relatively short sentences and trades accuracy for speed. The second approach is graph based and can handle arbitrarily long sentences, but its inference usually has high time complexity. There are many open technical issues in improving state-of-the-art dependency parsers; the corresponding research directions include:

1. nonprojective cases
2. high order features
3. faster inference algorithms
4. training data scarcity and the need for few shot learning
5. span-span modeling
6. reranking

The amount of nonprojective examples of dependency parsing varies from one language to another. At inference time, the Eisner algorithm, which is a bottom-up CKY-like algorithm, cannot resolve nonprojective cases. For nonprojective parsing, many algorithms based on the maximum spanning tree are used, such as (Zmigrod et al., 2021), (McDonald et al., 2005), (McDonald et al., 2006), (Levi et al., 2015), or maximum subgraph parsing for the case of non-trees in semantic dependency parsing, as shown in (Kuhlmann & Jonsson, 2015). An alternative to finding the maximum spanning tree is to formulate the problem as an integer linear program (ILP), as in (Riedel et al., 2006). One advantage of the ILP formulation is the capability to model many constraints, such as each verb having a single subject, in a direct way. The other advantage is that it can be used for nonprojective cases (Riedel & Clarke, 2006). Most articles in the literature are devoted to first order parsing (Dozat & Manning, 2016), which only uses one edge and node as the source of input features and neglects the richness of language structure. Although (Li et al., 2022) uses graph neural networks (GNN) and creates the graph dynamically, it still does not model high order features. A good way to leverage GNNs to consider higher order features like grandparents, grandchildren and siblings is described in (Ji et al., 2019), which recursively aggregates the neighbors’ information so that the graph at each layer appears as a soft parse tree, and finally uses an MST algorithm for decoding.
The novelty of this GNN is that it maintains both a head representation and a dependent representation of each node. The drawback is that the number of parameters to learn is quite large; the model also suffers from the curse of dimensionality and therefore needs a lot of data to train efficiently in this high dimensional vector space. An interesting idea to get around this difficulty in GNNs is to consider each node as an edge in the dependency structure, which is explained in (Yang & Tu, 2022). An alternative to GNNs is the stack-pointer network of (Ma et al., 2018), in which sibling and grandchild features have been modeled. Although this model is fast, it has all the limitations of deep learning, such as limited interpretability, being data hungry, and struggling with the curse of dimensionality. One way to generalize to high order features is to use ILP and view parsing as a structured prediction problem (Martins et al., 2009b). In order to unify all approaches, graphical models are a promising paradigm, as shown in (Niculae & Martins, 2020). Prior knowledge can be encoded as hard constraints (Martins et al., 2009a) while keeping a polynomial number of constraints in general. (Martins et al., 2009a) uses nonlocal (high order) features. Another idea that can be combined with these is dual decomposition, which is inspired by optimization (Martins, Smith, Figueiredo & Aguiar, 2011), (Martins, Figueiredor, Aguiar, Smith & Xing, 2011). A good approach that unifies the loopy belief propagation parser of (Smith & Eisner, 2008) and the relaxed linear program of (Martins et al., 2009a) is explained in (Martins et al., 2010), which encodes the model assumptions in a factor graph. (Gormley et al., 2015) treats the feed-forward topology of inference as a differentiable circuit and considers high order interactions in a factor graph. (Gormley et al., 2015) models each potential function in loglinear form. Although high order features are crucial to obtain state-of-the-art models for dependency parsing, there is another factor which is even more important and is described in (Gan et al., 2021). The basic idea is to measure the relations between spans, in contrast to measuring the relations between words as in classical dependency parsing. This approach is a proper generalization since each word is always a span of length one, and subspans can be evaluated from spans recursively, which can be considered a dynamic programming paradigm. ## 2 Related Works An inspiring and natural approach to high order dependency parsing is described in the seminal work of (Smith & Eisner, 2008), which formulates it as approximate learning and inference over a graphical model in which the global constraints are encoded inside the model, with Loopy Belief Propagation (LBP) as a simple approximation. (Smith & Eisner, 2008) incrementally adjusts the numerical edge weights that are fed to a fast first-order parser. One of the main difficulties is satisfying hard constraints such as the tree constraint, which ensures the resulting graph is a tree. The probability distribution over all configurations (all assignments $\mathcal{A}$) is defined by the following Markov random field (MRF) $p(\mathcal{A})=\frac{1}{\mathcal{Z}}\underset{m}{\prod}F_{m}(\mathcal{A})$ (2.1) where $F_{m}$ is the m-th factor function, which could be unary, binary, ternary, or global. From a different classification angle, these factors could be either hard or soft.
A hard factor has a value 0 on violated constraint parses, acting as a constraint to rule them out such as TREE constraint which ensures the final graph is a tree or even a harder constraint such as PTREE which ensures trees to be projective. Another important hard constraint is EXACTLY1 which does not allow any word to have more than one parent. Soft factors in (2.1) can easily be modeled by the following loglinear model: $F_{m}(\mathcal{A})=\exp\sum_{h\in features(F_{m})}\theta_{h}f_{h}(\mathcal{A},W,m)$ (2.2) Nine types of soft constraints and seven hard constraints are described in (Smith & Eisner, 2008). An interesting experimental and combinatorial exploration is discovering which set of soft and hard constraints are sufficient for a reasonable accuracy which experimentally measures the degree of sensitivity of each of these constraints to the final accuracy. The main difficulty in training this model is the fact that the normalizing constant in the denominator depends implicitly on the learning parameters and therefore can not be neglected but Belief Propagation(BP) provides an estimate of that marginal distribution. Thus the gradient of the normalizing constant can easily be computed as follows. $\nabla_{\theta}\log\mathcal{Z}=\sum_{m}\mathbb{E}_{p(\mathcal{A})}[\nabla_{\theta}F_{m}(\mathcal{A})]$ (2.3) (Gormley et al., 2015) considers approximations and parsing in (Smith & Eisner, 2008) as a differentiable circuit to improve accuracy. It uses a different objective function which is based on the $L2$ distance between the approximate marginals and the gold marginals. ## 3 Main Results ### 3.1 Terminology Let $W=W_{0},\ldots,W_{n}$ denote the input sentence where $W_{0}$ is the root. The corresponding part of speech(POS) tags are $T_{1},\ldots,T_{n}$. There are $O(n^{2})$ links in the dependency parse that can be enumerated by $\\{L_{ij}:0\leq i\leq n,1\leq j\leq n\\}$ ### 3.2 Transferring Neural Potentials By borrowing from (Dozat & Manning, 2016), the scores can easily be calculated as follows: $\begin{split}h_{i}^{(arc-dep)}&=MLP^{(arc-dep)}(r_{i})\\\ h_{j}^{(arc- head)}&=MLP^{(arc-head)}(r_{j})\\\ s_{i}^{(arc)}&=H^{(arc- head)}U^{(1)}h_{i}^{(arc-dep)}+H^{(arc-head)}u^{(2)}\end{split}$ (3.1) Unary and binary potentials could be defined as follows: $\begin{split}\psi_{Y_{k}}&=\exp{s_{i(k)j(k)}}\\\ \psi_{Y_{k},Y_{k^{\prime}}}&=\psi_{Y_{k}}+\psi_{Y_{k^{\prime}}}+\phi_{Y_{k},Y_{k^{\prime}}}\end{split}$ (3.2) where $i(k)$ and $j(k)$ is the simple lookup table mapping from the actual dependency graph to the graphical model. Please note that the score of the labels are also defined similarly. The best way to understand the idea of transferring neural potentials is to imagine that the cheap and fast first order parser is the baseline and the goal is to perturb these edge scores to be adjusted to the global constraints through high order features. There are two paradigms that can resolve this issue. The first paradigm says that the weights of the first order parser does not receive any feedback from high order features and the misalignment is modeled by a new term $\phi_{Y_{k},Y_{k^{\prime}}}$ that is small only for cases that high order features do not have any conflict with first order features and we call it a perfect alignment case. 
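Both paradigms build on the first order scorer of eq. (3.1). As a concrete sketch of how those arc scores and the unary potentials of eq. (3.2) could be computed, the PyTorch fragment below is illustrative; the layer sizes, ReLU activations, and tensor names are assumptions for exposition rather than the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

class BiaffineArcScorer(nn.Module):
    """Sketch of the first-order arc scorer of eq. (3.1)."""

    def __init__(self, hidden_dim: int = 400, arc_dim: int = 500):
        super().__init__()
        # MLPs producing dependent- and head-specific views of each token (eq. 3.1).
        self.mlp_dep = nn.Sequential(nn.Linear(hidden_dim, arc_dim), nn.ReLU())
        self.mlp_head = nn.Sequential(nn.Linear(hidden_dim, arc_dim), nn.ReLU())
        # Bilinear term U^(1) and head-only bias u^(2).
        self.U = nn.Parameter(torch.empty(arc_dim, arc_dim))
        self.u = nn.Parameter(torch.zeros(arc_dim))
        nn.init.xavier_uniform_(self.U)

    def forward(self, r: torch.Tensor) -> torch.Tensor:
        """r: encoder states, shape (n_tokens, hidden_dim); returns (n_heads, n_deps) arc scores."""
        h_dep = self.mlp_dep(r)                 # (n, arc_dim)
        h_head = self.mlp_head(r)               # (n, arc_dim)
        bilinear = h_head @ self.U @ h_dep.T    # (n, n): head j scored against dependent i
        bias = (h_head @ self.u).unsqueeze(1)   # (n, 1): head-only prior
        return bilinear + bias                  # s[j, i] = score of arc j -> i

def unary_potentials(arc_scores: torch.Tensor) -> torch.Tensor:
    """Eq. (3.2): unary potentials are the exponentiated arc scores."""
    return torch.exp(arc_scores)
```

In the combined model, these neurally computed scores play the role of the coefficients $\theta_{s}(r)$ that are later fed to the factor graph in Section 3.2.3.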
The second paradigm couples first order with high order features in a bidirectional way and allows the scores of the first order parser to be changed by end-to-end training, with the error propagated all the way downstream to influence and tune the first order parser. The first paradigm can be used as a warm start for the second paradigm to speed up the training process, since the initial weights are already in a reasonable region and just a perturbation of them can satisfy the high order dependency constraints. The present paper assumes that $\psi_{Y_{k}}+\psi_{Y_{k^{\prime}}}$ can sufficiently model the interactions of edges and there is no need to model the mutual interaction $\phi_{Y_{k},Y_{k^{\prime}}}$ explicitly, since the model is trained end to end, all potentials are based on neural networks, and the mutual interaction is implicitly considered. There are two main approaches to inference for the best parse. The first one is based on the sum-product algorithm, also known as belief propagation. After calculating the beliefs from the final message-passing iteration, the marginal probabilities can be approximated. This should be done for all variable nodes of the factor graph to get all parts of the parse. The second approach simultaneously maximizes the objective by finding the best assignment, which is also called the MAP assignment task. The second approach is mathematically richer, since the integrality gap can be evaluated, in contrast to loopy belief propagation, which one can only hope will converge and which is hard to evaluate. These two approaches are explained here: #### 3.2.1 Loopy Belief Propagation After sending messages iteratively from variables $y_{i}$ to factors $\alpha$, and from factors to variables, the algorithm will eventually converge: $\begin{split}m^{(t)}_{i\rightarrow\alpha}(y_{i})&\propto\prod_{\beta\in\mathcal{N}(i)\backslash\alpha}m_{\beta\rightarrow i}^{(t-1)}(y_{i})\\\ m^{(t)}_{\alpha\rightarrow i}(y_{i})&\propto\sum_{y_{\alpha}\sim y_{i}}\psi_{\alpha}(y_{\alpha})\prod_{j\in\mathcal{N}(\alpha)\backslash i}m_{j\rightarrow\alpha}^{(t-1)}(y_{j})\end{split}$ (3.3) where $\mathcal{N}(i)$ and $\mathcal{N}(\alpha)$ are the neighbors of $y_{i}$ and $\alpha$ respectively. Beliefs at each variable and factor are computed as follows: $\begin{split}b_{i}(y_{i})&\propto\prod_{\alpha\in\mathcal{N}(i)}m_{\alpha\rightarrow i}^{(t_{max})}(y_{i})\\\ b_{\alpha}(y_{\alpha})&\propto\psi_{\alpha}(y_{\alpha})\prod_{i\in\mathcal{N}(\alpha)}m_{i\rightarrow\alpha}^{(t_{max})}(y_{i})\end{split}$ (3.4) This approach is used in (Gormley et al., 2015) in the inference step. #### 3.2.2 MAP Inference In the present work, this approach is chosen since it is fast, parallelizable, and has a rich mathematical analysis. Linear programming (LP) relaxation is used to solve MAP inference, as explained in (Jaakkola & Sontag, 2010).
The factor graph has an equivalent Markov random field (MRF), and thus the objective is as follows: $\begin{split}\text{MAP}(\theta)&=\underset{\mu}{\max}\sum_{i\in V}\sum_{x_{i}}\theta_{i}(x_{i})\mu_{i}(x_{i})+\sum_{ij\in E}\sum_{x_{i},x_{j}}\theta_{ij}(x_{i},x_{j})\mu_{ij}(x_{i},x_{j})\\\ &=\underset{\mu}{\max}\ \theta\cdot\mu\end{split}$ (3.5) subject to: $\begin{split}\mu_{i}(x_{i})&\in\\{0,1\\}\ \forall i\in V,x_{i}\\\ \underset{x_{i}}{\sum}\mu_{i}(x_{i})&=1\ \forall i\in V\\\ \mu_{i}(x_{i})&=\underset{x_{j}}{\sum}\mu_{ij}(x_{i},x_{j})\ \forall ij\in E,x_{i}\\\ \mu_{j}(x_{j})&=\underset{x_{i}}{\sum}\mu_{ij}(x_{i},x_{j})\ \forall ij\in E,x_{j}\end{split}$ (3.6) where $\theta_{i}$ and $\theta_{ij}$ are unary and binary potentials respectively. This is a pairwise relaxation. We can tighten the relaxation by enforcing the joint consistency of edges in a cluster of variables using the framework of lift-and-project methods, but this is out of the scope of the present paper, since a fast algorithm is preferred over a highly accurate one. Lifting refers to introducing new higher-level variables, and projection refers to projecting back to the original variables. An alternative framework is to use cutting plane algorithms. When using these higher order methods, the number of constraints and variables grows exponentially with the size of the clusters considered and is therefore prohibitive. The constraints in (3.6) can be generalized to cluster-based constraints, as is done in (Batra et al., 2011), (Sontag et al., 2008), to obtain a tighter relaxation. A different LP relaxation of the MAP assignment problem reduces it to an instance of a bipartite multi-cut problem, as shown in (J. Reddi et al., 2010). A good survey of LP relaxations for MAP inference in discrete Markov random fields is given in (Kannan et al., 2019). A cutting-plane algorithm is used in the present paper as follows: after solving the pairwise LP relaxation, there are two cases. In the first case the solution is integral, the MAP assignment is found, and the algorithm terminates. To handle the second case, one can add a valid constraint to the relaxation. A valid constraint is one that does not cut off any of the integral vertices. Solving (3.6) directly is computationally expensive and not efficient. Thus, a natural approach is to use dual decomposition, which is explained in (Martins, Figueiredor, Aguiar, Smith & Xing, 2011), (Koo et al., 2010), (Martins, Smith, Figueiredo & Aguiar, 2011). Block coordinate descent is used for dual decomposition in (Belanger et al., 2014), while the present paper uses ADMM, as leveraged in (Martins, Smith, Figueiredo & Aguiar, 2011).
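To make the pairwise relaxation of eqs. (3.5)–(3.6) concrete, the sketch below solves it with an off-the-shelf LP solver for a toy two-variable MRF; the potential values and the use of `scipy.optimize.linprog` are illustrative assumptions, not the solver actually used in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Toy pairwise MRF: two binary variables connected by one edge.
# Variable order: mu1(0), mu1(1), mu2(0), mu2(1), mu12(00), mu12(01), mu12(10), mu12(11)
theta_unary = np.array([0.2, 1.0, 0.5, 0.1])      # theta_i(x_i), illustrative numbers
theta_pair = np.array([0.0, 0.3, 1.2, 0.0])       # theta_12(x_1, x_2)
c = -np.concatenate([theta_unary, theta_pair])    # linprog minimizes, so negate eq. (3.5)

A_eq = np.zeros((6, 8))
b_eq = np.zeros(6)
A_eq[0, 0:2] = 1.0; b_eq[0] = 1.0                 # sum_x1 mu1(x1) = 1
A_eq[1, 2:4] = 1.0; b_eq[1] = 1.0                 # sum_x2 mu2(x2) = 1
A_eq[2, 0] = 1.0; A_eq[2, [4, 5]] = -1.0          # mu1(0) = mu12(00) + mu12(01)
A_eq[3, 1] = 1.0; A_eq[3, [6, 7]] = -1.0          # mu1(1) = mu12(10) + mu12(11)
A_eq[4, 2] = 1.0; A_eq[4, [4, 6]] = -1.0          # mu2(0) = mu12(00) + mu12(10)
A_eq[5, 3] = 1.0; A_eq[5, [5, 7]] = -1.0          # mu2(1) = mu12(01) + mu12(11)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0.0, 1.0)] * 8, method="highs")
mu = res.x
print("relaxed marginals:", np.round(mu, 3))
print("integral solution:", np.allclose(mu, np.round(mu)))  # if True, MAP is exact; otherwise add cuts
```

For larger graphs with cycles the relaxed solution can be fractional, which is exactly the situation where the cutting-plane step or the dual decomposition of the next subsection takes over.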
#### 3.2.3 Dual Decomposition Following the ADMM approach of (Martins, Smith, Figueiredo & Aguiar, 2011) to dual decomposition, the primal problem is first defined as follows: $\begin{split}P:\underset{z_{s}\in Z_{s}}{\max}\sum_{s=1}^{S}f_{s}(z_{s})\\\ \langle u(r)\rangle_{r\in R}\in\mathbb{R}^{|R|}\\\ s.t.\ z_{s}(r)=u(r)\ \forall s,r\in\bar{R}_{s}\end{split}$ (3.7) After relaxing and writing the dual form, the master problem is: $\begin{split}D:\underset{\lambda=\langle\lambda_{1},\ldots,\lambda_{S}\rangle}{\min}\sum_{s=1}^{S}g_{s}(\lambda_{s})\\\ s.t.\sum_{s:r\in\bar{R_{s}}}\lambda_{s}(r)=0\ \forall r\in R\end{split}$ (3.8) where the $g_{s}(\lambda_{s})$ are the solutions of the following slave problems: $\underset{z_{s}\in Z_{s}}{\max}\ f_{s}(z_{s})+\sum_{r\in\bar{R_{s}}}\lambda_{s}(r)z_{s}(r)-\frac{\rho}{2}\sum_{r\in\bar{R}_{s}}(z_{s}(r)-u^{t}(r))^{2}$ (3.9) Since the scores $f_{s}(z_{s})$ in (3.9) are modeled by a linear form, $f_{s}(z_{s})=\sum_{r\in R_{s}}\theta_{s}(r)z_{s}(r)$, the slaves can be written as: $\underset{z_{s}\in Z_{s}}{\max}\ \sum_{r\in\bar{R_{s}}}(\theta_{s}(r)+\lambda_{s}(r))z_{s}(r)-\frac{\rho}{2}\sum_{r\in\bar{R}_{s}}(z_{s}(r)-u^{t}(r))^{2}$ (3.10) Note that the $\theta_{s}(r)$ in (3.10) are neural potentials estimated by the deep learning module, and these coefficients vary at each iteration of the overall circuit. To solve (3.10) with a generic quadratic solver, it is written in the following form: $\underset{z_{s}\in Z_{s}}{\max}\ \sum_{r\in\bar{R_{s}}}(\theta_{s}(r)+\lambda_{s}(r)+\rho u^{t}(r))z_{s}(r)-\frac{\rho}{2}\sum_{r\in\bar{R}_{s}}(z_{s}(r))^{2}$ (3.11) The Lagrange variables can be updated as follows: $\lambda_{s}^{t+1}(r)=\lambda_{s}^{t}(r)-\eta_{t}(z_{s}^{t+1}(r)-u^{t+1}(r))$ (3.12) where $\eta_{t}$ is the step size. Applying the ADMM algorithm, $u$ has a closed-form solution, a simple average obtained by the projected subgradient method: $u^{t+1}(r)=\frac{1}{\delta(r)}\sum_{s:r\in\bar{R_{s}}}z_{s}^{t+1}(r)$ (3.13) where $\delta(r)$ is the cardinality of the set $\\{s:r\in R_{s}\\}$. The loop iterates until the primal and dual residuals defined in (Martins, Smith, Figueiredo & Aguiar, 2011) fall below a threshold. To fully grasp the details of these variables, consider a sentence with 5 tokens, $w_{1},\ldots,w_{5}$, and a second order part such as $z^{gp}_{3f}=\\{y_{34},y_{45}\\}$ (Figure 1: right and left minimal dependencies). The minimal dependencies are defined as all second order dependencies that are either consecutive siblings or grandparents; they are shown in Figure 1, with the constraints shown in Figure 2 (Figure 2: forward constraints). Figure 3 (factor graph) shows the factor graph generated for a 5-token sentence, which includes 6 constraints and 7 overlapping basic components. Note that only two types of higher order constraints, namely grandparent and consecutive siblings, are used in the present paper, since there is always a tradeoff between computational complexity and exactness of the solutions. These two types of constraints are more essential than the rest and have more impact in any selection process. In order to connect the dual decomposition to the first order deep learning model, a mapping is defined that assigns a score to each of the components. The weighted combination of these first order scores determines how the solution of the $z_{s}$ variables shapes the global dependency parsing graph: $f_{s}(z_{s})=\sum_{r}z_{(s,r)}\theta(s,r)$ (3.14) Once the $z$ variables are solved, the dependency graph can be read.
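A minimal sketch of the ADMM-style loop in eqs. (3.10)–(3.13) is given below. The box constraint $z_{s}(r)\in[0,1]$ used for the projection (instead of the full slave polytope $Z_{s}$), the array layout, and the residual definitions are simplifying assumptions for illustration.

```python
import numpy as np

def admm_dual_decomposition(theta, slaves, n_parts, rho=1.0, eta=0.1, max_iter=200, tol=1e-4):
    """theta: dict mapping (s, r) -> neural potential; slaves: list of lists of part indices r."""
    u = np.zeros(n_parts)                              # global consensus variables u(r)
    lam = [np.zeros(len(R_s)) for R_s in slaves]       # Lagrange multipliers lambda_s(r)
    z = [np.zeros(len(R_s)) for R_s in slaves]         # slave copies z_s(r)
    delta = np.zeros(n_parts)                          # delta(r) = |{s : r in R_s}|
    for R_s in slaves:
        for r in R_s:
            delta[r] += 1

    for t in range(max_iter):
        # Slave updates, eq. (3.11): maximize a separable quadratic, then project onto [0, 1].
        for s, R_s in enumerate(slaves):
            score = np.array([theta[(s, r)] for r in R_s]) + lam[s] + rho * u[R_s]
            z[s] = np.clip(score / rho, 0.0, 1.0)
        # Consensus update, eq. (3.13): average the slave copies of each part.
        u_new = np.zeros(n_parts)
        for s, R_s in enumerate(slaves):
            u_new[R_s] += z[s]
        u_new /= np.maximum(delta, 1.0)
        # Multiplier update, eq. (3.12).
        for s, R_s in enumerate(slaves):
            lam[s] -= eta * (z[s] - u_new[R_s])
        primal_res = max(np.abs(z[s] - u_new[R_s]).max() for s, R_s in enumerate(slaves))
        dual_res = np.abs(u_new - u).max()
        u = u_new
        if primal_res < tol and dual_res < tol:        # stop once both residuals are small
            break
    return u                                           # near-integral u(r) encodes the selected parts
```

Rounding the returned $u(r)$ (and falling back to the MST algorithm when the rounded parts do not form a tree) yields the parse, as in Algorithm 1 below.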
Note that the number of components of each slave is fixed, but each basic component can be connected to any number of constraints. The equality constraint in equation (3.7) ensures the consistency of the selection. ### 3.3 Training And Prediction Drawing inspiration from (Gormley et al., 2015), everything is trained end to end like a circuit, and the inference mechanism of the factor graph is coupled with the neural estimation of edge scores. A drawback of the present approach, as with any other deep learning model, is the lack of sufficient training data, since deep learning models are data hungry and suffer from the curse of dimensionality. There is a compromise between speed of convergence and the accuracy of modeling. This is the reason that very few slaves (constraints) are considered for high order modeling: the more constraints are added to the model, the longer it takes to converge. Minimum Bayes risk (MBR) decoding is used to produce a tree that minimizes the expected loss as follows: $\begin{split}h_{\theta}(x)&=\underset{\hat{y}}{\operatorname*{arg\,min}}\mathbb{E}_{y\sim p_{\theta}(.|x)}l(\hat{y},y)\\\ &=\underset{\hat{y}}{\operatorname*{arg\,max}}\sum_{i:\hat{y_{i}}=ON}p_{\theta}(y_{i}=ON|x)\end{split}$ (3.15) At test time, the maximum spanning tree algorithm ensures that the resulting graph is a tree. Algorithm 1 outputs the dependency parsing tree:

1. Input: batch of sentences
2. repeat
3. calculate word embeddings
4. calculate the scores using (3.1)
5. calculate the neural potentials using the first order estimation (3.2)
6. repeat
7. for each slave $s=1,\ldots,S$ do
8. make an update for slave $z_{s}$ using (3.11)
9. end for
10. update $u$ using (3.13)
11. update $\lambda$ using (3.12)
12. $t\leftarrow t+1$
13. until primal and dual residuals are below a threshold
14. round $u,z,\lambda$
15. backpropagate the loss in (3.15) to adjust the neural scores
16. until maxIter
17. if the result is not a tree
18. use the maximum spanning tree algorithm
19. return the dependency parsing tree

### 3.4 Experiments The Universal Dependencies dataset for English is used for all of the experiments in the present paper. The maximum sentence length is 71 tokens, but only sentences of at most 60 tokens are used for training, since the few data points available for longer sentences are not enough for adequate training and introduce noise.

Table 1: different experiments for dependency parsing

experiment | opt | epochs | batch size | maxseq | highorder | accuracy
---|---|---|---|---|---|---
1 | Adam | 10 | 5 | 20 | False | 93.2
2 | Adam | 10 | 5 | 40 | False | 91.2
3 | SGD | 10 | 5 | 60 | False | 90.1
4 | Adam | 10 | 20 | 60 | False | 89.7
5 | Adam | 10 | 5 | 60 | False | 88.6
6 | Adam | 10 | 5 | 60 | True | 65.3
7 | SGD | 10 | 5 | 60 | True | 66.2

When highorder in Table 1 is True, high order dependency parsing is used; when it is False, the model corresponds to the first order modeling that uses biaffine attention, and the factor graph and dual decomposition algorithm are not involved in training. One reason that the highorder results in Table 1 are lower than their first order counterparts is that very few slaves are used in these experiments for faster convergence. Even the backward constraint slaves are not used, and there is always a tradeoff between speed and accuracy. ## 4 Conclusion The present algorithm combines first order information with high order information into a richer feature representation to obtain relations and labels for dependency parsing.
The contribution of the present paper is on joining deep biaffine scores in traditional deep learning with high order constraints that are best represented by graphical models. By combining the strength of deep learning and graphical model inferences, a unified approach to high order parsing is suggested. One can also analyze that the high order parser is a perturbation of first order parser. The analysis could also be used for further investigation on which structures in languages have high deviations between first order and high order parsers. ## References * (1) * Batra et al. (2011) Batra, D., Nowozin, S. & Kohli, P. (2011), ‘Tighter relaxations for map-mrf inference: A local primal-dual gap based separation algorithm.’, Journal of Machine Learning Research - Proceedings Track 15, 146–154. * Belanger et al. (2014) Belanger, D., Passos, A., Riedel, S. & McCallum, A. (2014), ‘Message passing for soft constraint dual decomposition’, Uncertainty in Artificial Intelligence - Proceedings of the 30th Conference, UAI 2014 pp. 62–71. * Buys & Blunsom (2015) Buys, J. & Blunsom, P. (2015), ‘Generative incremental dependency parsing with neural networks’, 2, 863–869. * Dozat & Manning (2016) Dozat, T. & Manning, C. (2016), ‘Deep biaffine attention for neural dependency parsing’. * Gan et al. (2021) Gan, L., Meng, Y., Kuang, K., Sun, X., Fan, C., Wu, F. & Li, J. (2021), ‘Dependency parsing as mrc-based span-span prediction’. * Gormley et al. (2015) Gormley, M. R., Dredze, M. & Eisner, J. (2015), ‘Approximation-aware dependency parsing by belief propagation’, CoRR abs/1508.02375. * J. Reddi et al. (2010) J. Reddi, S., Sarawagi, S. & Vishwanathan, S. (2010), Map estimation in binary mrfs via bipartite multi-cuts., pp. 955–963. * Jaakkola & Sontag (2010) Jaakkola, T. & Sontag, D. A. (2010), Approximate inference in graphical models using lp relaxations. * Ji et al. (2019) Ji, T., Wu, Y. & Lan, M. (2019), Graph-based dependency parsing with graph neural networks, in ‘Annual Meeting of the Association for Computational Linguistics’. * Kannan et al. (2019) Kannan, H., Komodakis, N. & Paragios, N. (2019), Tighter continuous relaxations for MAP inference in discrete MRFs: A survey, pp. 351–400. * Koo et al. (2010) Koo, T., Rush, A., Collins, M., Jaakkola, T. & Sontag, D. (2010), Dual decomposition for parsing with non-projective head automata., pp. 1288–1298. * Kuhlmann & Jonsson (2015) Kuhlmann, M. & Jonsson, P. (2015), ‘Parsing to noncrossing dependency graphs’, Transactions of the Association for Computational Linguistics 3, 559–570. * Levi et al. (2015) Levi, E., Reichart, R. & Rappoport, A. (2015), ‘Edge-linear first-order dependency parsing with undirected minimum spanning tree inference’. * Li et al. (2022) Li, B., Gao, M., Fan, Y., Sataer, Y., Gao, Z. & Gui, Y. (2022), Dyngl-sdp: Dynamic graph learning for semantic dependency parsing, in ‘Proceedings of the 29th International Conference on Computational Linguistics’, International Committee on Computational Linguistics, Gyeongju, Republic of Korea. * Ma et al. (2018) Ma, X., Hu, Z., Liu, J., Peng, N., Neubig, G. & Hovy, E. H. (2018), Stack-pointer networks for dependency parsing, in ‘Annual Meeting of the Association for Computational Linguistics’. * Martins, Figueiredor, Aguiar, Smith & Xing (2011) Martins, A. F. T., Figueiredor, M. A. T., Aguiar, P. M. Q., Smith, N. A. & Xing, E. P. 
(2011), An augmented lagrangian approach to constrained map inference, in ‘Proceedings of the 28th International Conference on International Conference on Machine Learning’, ICML’11, Omnipress, Madison, WI, USA, p. 169–176. * Martins et al. (2009a) Martins, A. F. T., Smith, N. A. & Xing, E. P. (2009a), Concise integer linear programming formulations for dependency parsing, in ‘Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1 - Volume 1’, ACL ’09, Association for Computational Linguistics, USA, p. 342–350. * Martins et al. (2009b) Martins, A. F. T., Smith, N. A. & Xing, E. P. (2009b), Polyhedral outer approximations with application to natural language parsing, in ‘Proceedings of the 26th Annual International Conference on Machine Learning’, ICML ’09, Association for Computing Machinery, New York, NY, USA, p. 713–720. * Martins, Smith, Figueiredo & Aguiar (2011) Martins, A., Smith, N., Figueiredo, M. & Aguiar, P. (2011), Dual decomposition with many overlapping components., pp. 238–249. * Martins et al. (2010) Martins, A., Smith, N., Xing, E., Aguiar, P. & Figueiredo, M. (2010), Turbo parsers: Dependency parsing by approximate variational inference, pp. 34–44. * McDonald et al. (2005) McDonald, R., Pereira, F., Ribarov, K. & Hajič, J. (2005), Non-projective dependency parsing using spanning tree algorithms, in ‘Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing’, HLT ’05, Association for Computational Linguistics, USA, p. 523–530. * McDonald et al. (2006) McDonald, R. T., Crammer, K. & Pereira, F. (2006), Spanning tree methods for discriminative training of dependency parsers. * Niculae & Martins (2020) Niculae, V. & Martins, A. (2020), Lp-sparsemap: Differentiable relaxed optimization for sparse structured prediction, in H. D. III & A. Singh, eds, ‘Proceedings of the 37th International Conference on Machine Learning’, Vol. 119 of Proceedings of Machine Learning Research, PMLR, pp. 7348–7359. * Nivre (2008) Nivre, J. (2008), ‘Algorithms for deterministic incremental dependency parsing’, Comput. Linguist. 34(4), 513–553. * Riedel et al. (2006) Riedel, S., cakici, R. & Meza-Ruiz, I. (2006), Multi-lingual dependency parsing with incremental integer linear programming, in ‘Proceedings of the Tenth Conference on Computational Natural Language Learning’, CoNLL-X ’06, Association for Computational Linguistics, USA, p. 226–230. * Riedel & Clarke (2006) Riedel, S. & Clarke, J. (2006), Incremental integer linear programming for non-projective dependency parsing, in ‘Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing’, EMNLP ’06, Association for Computational Linguistics, USA, p. 129–137. * Smith & Eisner (2008) Smith, D. & Eisner, J. (2008), Dependency parsing by belief propagation, pp. 145–156. * Sontag et al. (2008) Sontag, D., Globerson, A. & Jaakkola, T. (2008), Clusters and coarse partitions in lp relaxations, in ‘Proceedings of the 21st International Conference on Neural Information Processing Systems’, NIPS’08, Curran Associates Inc., Red Hook, NY, USA, p. 1537–1544. * Yang & Tu (2022) Yang, S. & Tu, K. (2022), Semantic dependency parsing with edge gnns, in ‘Findings of the Association for Computational Linguistics: EMNLP 2022’, Association for Computational Linguistics, Abu Dhabi, United Arab Emirates, pp. 6096–6102. * Zmigrod et al. (2021) Zmigrod, R., Vieira, T. 
& Cotterell, R. (2021), On finding the k-best non-projective dependency trees, in ‘Annual Meeting of the Association for Computational Linguistics’.
# LMExplainer: a Knowledge-Enhanced Explainer for Language Models Zichen Chen, Ambuj K Singh, Misha Sra University of California, Santa Barbara {zichen_chen, ambuj<EMAIL_ADDRESS> ###### Abstract Large language models (LMs) such as GPT-4 are very powerful and can process different kinds of natural language processing (NLP) tasks. However, it can be difficult to interpret the results due to the multi-layer nonlinear model structure and millions of parameters. Lack of understanding of how the model works can make the model unreliable and dangerous for everyday users in real- world scenarios. Most recent works exploit the weights of attention to provide explanations for model predictions. However, pure attention-based explanation is unable to support the growing complexity of the models, and cannot reason about their decision-making processes. Thus, we propose LMExplainer, a knowledge-enhanced interpretation module for language models that can provide human-understandable explanations. We use a knowledge graph (KG) and a graph attention neural network to extract the key decision signals of the LM. We further explore whether interpretation can also help AI understand the task better. Our experimental results show that LMExplainer outperforms existing LM+KG methods on CommonsenseQA and OpenBookQA. We also compare the explanation results with generated explanation methods and human-annotated results. The comparison shows our method can provide more comprehensive and clearer explanations. LMExplainer demonstrates the potential to enhance model performance and furnish explanations for the reasoning processes of models in natural language. ## 1 Introduction Pre-trained language models (PLMs) have recently gained significant attention due to their impressive performance on various natural language processing tasks (Brown et al., 2020; Liu et al., 2023; Wei et al., ; Zhou et al., 2022). PLMs are a type of Artificial Intelligence (AI) that is trained on large amounts of data in order to perform various natural language processing tasks. These tasks can include language translation (Conneau and Lample, 2019), text generation (Mireshghallah et al., 2022), and text classification (Raffel et al., 2020), among others. One of the main advantages of PLMs is their ability to capture the nuances and the complexity of human languages, allowing them to achieve state-of-the-art performance on different tasks (Li et al., 2022). However, a major limitation of PLMs is the lack of interpretability (Meng et al., 2022), as they are unable to provide explanations for their decision- making processes. The decision-making process is often referred to as a “black box”. PLMs use techniques such as attention mechanisms, which allow the model to focus on specific parts of the input data when making decisions (Vaswani et al., 2017; Devlin et al., 2019; Liu et al., 2019a). These mechanisms are not transparent and are difficult for humans to interpret (Jain and Wallace, 2019). For example, model’s embedding may capture relationships and meanings that are not immediately apparent to humans due to transmission through millions of neurons. This lack of interpretability poses a challenge in critical domains, such as healthcare (Loh et al., 2022) and online education (Zytek et al., 2022), as it limits the trust that users can place in the inference made by the models. In addition to explainability, improving model interpretability can help address issues of fairness, privacy, and safety. 
Therefore, exploring methods that explain the behaviors of PLMs can help overcome the black-box nature of neural networks. Many recent approaches have focused on providing model-intrinsic and post-hoc methods to address the challenge of model explanations (Ribeiro et al., 2016; Shrikumar et al., 2017; Štrumbelj and Kononenko, 2014; Ying et al., 2019; Tigas et al., 2022; Situ et al., 2021). Model-intrinsic methods consider models that are considered interpretable due to their simple structure, such as linear models. Post-hoc methods provide explanations that are obtained after the model training, such as feature selection. Interpreting and explaining the decision-making process of PLMs is a challenging task. To overcome this challenge, Thorne et al. propose the Multiple Instance Learning method for generating token-level explanations for natural language inference (NLI), which is based on the attention matrix of a neural model. Chen et al. utilize counterfactual examples to generate contrastive explanations. Zhan et al. incorporate structured information from external knowledge to explain the model. However, these works only provide simple textual explanations, neglecting the reasoning process and thereby, do not provide a complete understanding of the model’s reasoning process. We argue that in order to provide more informative explanations that allow humans to understand and trust the model in crucial domains, it is necessary to not only provide insight into how the model reasons but also offer textual explanations. By doing so, we can help humans to gain a more comprehensive understanding of the decision-making process and interpret the model’s output in a human-readable format. In this paper, we present LMExplainer, a novel approach for explaining the predictions made by large PLMs. Our approach uses retrieved knowledge and Graph Attention Networks (GAT), and is able to provide explanations of the rationale behind the PLM’s predictions. As an example, we apply our approach to the task of question answering (QA). In addition to addressing the challenge of explaining the decision-making process, we also investigate the potential impact of interpretation on model performance. Specifically, we explore the potential of using explanations to serve a dual purpose: helping humans in comprehending the model, and enhancing the model’s understanding of the task at hand through interpretation during the explanation process. In this paper, explanation refers to explaining the model’s decision-making in a human-understandable way, while interpretation refers to understanding the internal workings of the model. We evaluate the effectiveness of LMExplainer on the CommonsenseQA (Talmor et al., 2019) and OpenBookQA datasets (Mihaylov et al., 2018). Our experimental results demonstrate that LMExplainer outperforms state-of-the-art LM+KG QA methods on CommonsenseQA, while exhibiting competitive performance on OpenBookQA. These outcomes indicate that our method can enhance overall performance. Furthermore, we demonstrate that LMExplainer is capable of providing valuable reasoning insights for humans in a natural manner, surpassing prior explanation methods. To the best of our knowledge, LMExplainer is the first work to utilize graph-based knowledge in generating natural language explanations to comprehend the rationale behind model behaviors. ## 2 Related Work ### 2.1 Post-hoc Explanation Post-hoc explanation methods have gained significant attention in NLP research in recent years. 
These include feature-based, concept-based and example-based explanations. One popular approach of feature-based methods is called LIME (Ribeiro et al., 2016). They generate explanations by approximating the original model with a local sample. The model then uses the approximations to highlight the most important features for a given prediction. LIME is extended by Guidotti et al., which uses a decision tree classifier to approximate non- linear models. However, they cannot guarantee that the approximations are accurate representations of the original model due to inherent limitations. Compared to feature-based explanations, the concept-based approaches use single or multiple phrases to explain the model, which is more understandable to humans. Thorne et al. generate tokens of classifiers operating on pairs of sentences, while Yu et al. generate _aspects_ as explanations for search results. Example-based approaches similarly explain using natural language. Kumar and Talukdar use positive labels to generate candidate explanations, while Chen et al. use contrastive examples in the format of “Why A not B" to distinguish between confusing candidates. Different from prior work, we integrate the reasoning features and concepts to explain the model’s behavior. ### 2.2 Large Language Models Language models are used in a wide range of tasks across NLP, such as sentiment analysis, machine translation, and question answering. Conventional n-gram language models (Brown et al., 1992) are based on conditional probability, and are interpretable. Recently, large language models such as RoBERTa (Liu et al., 2019a) and GPT-4 (OpenAI, 2023) have achieved impressive results. For example, they are able to generate coherent stories (Nye et al., 2021), perform as AI teachers (Tack and Piech, 2022), and chat with humans using natural language (Lin et al., 2020). However, these models are often criticized for their lack of interpretability, which can hinder their adoption in real-world applications. Previous interpretable frameworks (Ribeiro et al. (2016), Sundararajan et al. (2017), Smilkov et al. (2017), Ding and Koehn (2021)) could be applied on the large language models, but they often rely on approximations and simplifications of the original models, which may cause discrepancies between the model and the explanation. Swamy et al. propose a framework to explain the language models during training, but they focus on the comparison metrics between models and different stages of the same model, and do not consider the effect of their methods on performance. In this paper, we explain language models by utilizing the model reasoning process, and evaluate the impact of the proposed method on model performance. ### 2.3 Explainability with Knowledge-Graph Knowledge Graphs (KGs) are increasingly being used as a means to improve the interpretability and explainability of language models (Huang et al., 2022; Yasunaga et al., 2021; Huang et al., 2019; Liu et al., 2019b). KGs are structured representations of knowledge, and can be used to capture the complex semantic relationships that are difficult to represent in traditional language models (Ji et al., 2021). We divide KG-based methods into two types: graph-based and graph embedding-based. Graph-based explanation identifies the most relevant paths or subgraphs in a KG that support a given model prediction. Zhan et al. retrieve explainable reasoning paths from a KG and use path features to predict the answers. 
However, their path explanations are difficult for humans to understand. Graph embedding-based explanation uses an encoder to embed the structure and the relations of a graph, and then the embedding is used to generate explanations by identifying the most relevant nodes in the KG. Yasunaga et al. integrate the KG into the model, enabling the model to reason over structured knowledge and generate more interpretable predictions. However, these generated explanations may not consistently and accurately represent the model’s reasoning, and they might be challenging for humans to comprehend because they are presented in a graph-based format. By drawing upon the insights from prior works, we employ graph embedding as our fundamental component to generate explanations. ## 3 Task Definition In this paper, we aim to address two research questions related to the interpretability of PLMs: (1) how can we provide interpretable and naturally understandable explanations for the decision-making process of PLMs, and (2) how does the provision of explanations impact the performance of PLMs? We now define the task of generating reasoning-level explanation for inference made by PLMs. As an example, we use a QA task. Given a pre-trained PLM $f_{LM}$ with input context $z$ and predicted answer $a$, the goal is to generate an explanation $E$ for why $f_{LM}$ makes prediction $a$ for input context $z$. The explanation $E$ should provide insight into the reasoning behind the prediction and be presented in a format that is understandable to humans. This task can be written as: $E\leftarrow Generate\\_Explanation(f_{LM},z,a)$ (1) ## 4 Approach The LMExplainer architecture is shown in Figure 1. It consists of three main steps: (1) key element extraction and building (Section 4.1), (2) element- graph interpretation (Section 4.2), and (3) explanation generation (Section 4.3). In the first step, we extract the relevant elements from the input data and the retrieved knowledge, and build an element-graph representation. In the second step, we use a Graph Attention Network (GAT) to interpret the element- graph and identify the reason-elements behind the model’s prediction. Finally, in the third step, we use a prompt-based method to generate a textual explanation of the decision-making process based on the identified reasoning elements. Our approach in LMExplainer is flexible and applicable to a range of PLMs, e.g., BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019a), and GPT-2 (Radford et al., ). Figure 1: The LMExplainer architecture. Given a question context $z$ (which includes question $q$ and the set $\mathcal{A}$ of answers), we first generate language embeddings using a PLM. Simultaneously, it retrieves relevant knowledge from a KG to construct a subgraph (Sec. 4.1). The language embeddings and subgraph are then combined to obtain GNN embeddings. This combined representation is then passed through a GAT to obtain the attention scores (Sec. 4.2). Our attention scores serve a dual purpose. Firstly, they weigh the importance of the GNN embeddings and are used with the language embeddings for the final prediction (Sec. 4.4). Secondly, they are used to generate explanations by highlighting the most important parts of the reasoning process (Sec. 4.3). ### 4.1 Key Elements Extraction and Building Certain key elements can significantly influence the reasoning process of PLMs. To capture these essential elements, we use the context of input data, represented as $z$. 
Each token $x$ in the input question $q$ is treated as a content element. We use $\mathcal{A}$ to denote the set of all candidate answers, and $a\in\mathcal{A}$ to denote one candidate answer. Referring to Figure 1, the example demonstrates the $q$ and $a$ in “Input Context [$z$]". We connect the context $z$ to each token $x$ and potential answer $a$ to construct a multi-relational graph, following the approach from Yasunaga et al.. Specifically, we incorporate external knowledge to construct the knowledge graph. The graph is constructed by retrieving paths in the knowledge graph from ConceptNet (Speer et al., 2017). This allows us to prepare the essential elements that play a key role in the final results and analyze the relations among them. In addition to connecting relations in the knowledge graph, we also have two relationships to capture the content and the potential answers: one is the relationship between question $q$ and context $z$, and another is the relationship between answer $a$ and context $z$. We denote them with $r_{z,q}$ and $r_{z,a}$, respectively. Apart from this, we integrate the knowledge from PLMs as an instructive embedding to the KG. We denote a node in KG with $n$. The instructive embedding is added as a probability into the node embedding. The instructive score of $n$ can be computed as: $\hat{n}_{score}=f_{score}(f_{enc}(n|z))$ (2) $f_{score}=sigmoid(\text{MLP}(\cdot))$ (3) where $f_{enc}$ is used to get the embedding from PLM, and $f_{score}$ is a multi-layer perceptron (MLP) followed by sigmoid activation. This score is used to instruct the KG to capture the node that is the key for reasoning. The constructed graph typically has a large number of paths at the initial building stage, which can lead to a large reasoning space. Since the score $\hat{n}$ can represent the correlation between the node $n$ and context $z$, we will utilize it to prune the graph by removing the irrelevant nodes. The extracted graph is our key element-graph $G_{e}$. ### 4.2 Interpreting graph Given the element-graph, we follow (Yasunaga et al., 2021) to extract the representation for graph reasoning. The method is based on the Graph Attention Network (GAT) (Veličković et al., 2018), and uses a Graph Convolutional Network (GCN) approach (Kipf and Welling, 2017). The nodes in the graph provide feature vectors, while the edges provide paths for the transfer and aggregation of features between nodes. In each layer of a GCN, a single node transfers its own feature vectors to all of its neighboring nodes and aggregates the feature vectors transmitted by its neighbors as its updated node features. This process of “pass-aggregate-update" allows both GCN and GAT to preserve part of the structure and context of the original data through the connections between the nodes. Given the element-graph $G_{e}$, for any node $V_{e}$, we define its neighbor node $s$, where $s\in\mathcal{N}(V_{e})$, and $\mathcal{N}(V_{e})$ refers to all the neighbors of $V_{e}$. We use $V_{es}$ to denote the connecting between node $V_{e}$ and node $s$. The updated node feature $h^{k+1}_{V_{e}}$ is then calculated as shown in Equation 4. $h^{k+1}_{V_{e}}=f_{\delta}(\sum\limits_{s\in\mathcal{N}_{V_{e}}\cup\left\\{V_{e}\right\\}}\alpha_{es}m_{es})+h^{k}_{V_{e}}$ (4) where $f_{\delta}$ is a two-layer MLP, and $\alpha_{es}$ and $m_{es}$ are the attention coefficients and the message, respectively. 
The node feature $h^{k+1}_{V_{e}}$ is the updated node feature at layer $k+1$ in our GNN, while $h^{k}_{V_{e}}$ is the node feature at layer $k$. After all the neighbor nodes of node $V_{e}$ in the $k$-th layer have passed information to node $V_{e}$, the feature of node $V_{e}$ in the $(k+1)$-th layer, $h^{k+1}_{V_{e}}$, is obtained through an additive aggregation and a non-linear transformation, added to its own previous feature. This process allows the GAT model to capture the structural and contextual information of the graph and use it to update the node features at each layer. The updated node features at the final layer are then used as the representation of the graph for reasoning. In order to filter the important connections of the graph $G_{e}$, we incorporate attention weights $\alpha_{es}$ when aggregating the message $m_{es}$ passed from neighbor node $s$ (Veličković et al., 2018). The message $m_{es}$ is computed using the node type embedding $u_{s}$, the feature embedding $h^{k}_{s}$, and the relation embedding $r_{es}$ between nodes $V_{e}$ and $s$: $m_{es}=f_{n}(h^{k}_{s},u_{s},r_{es})$ (5) where $f_{n}$ is a linear transformation. The relation embedding $r_{es}$ is calculated by a two-layer MLP $f_{\theta}$: $r_{es}=f_{\theta}(\hat{r}_{es},u_{es})$ (6) where $\hat{r}_{es}$ is a one-hot embedding for the relation connecting $V_{e}$ and $s$, and $u_{es}$ is the concatenated node type embedding of $V_{e}$ and $s$. The attention weight $\alpha_{es}$ is calculated from the query vector $\bm{q}_{s}$ and key vector $\bm{k}_{e}$: $\alpha_{es}=softmax(\frac{\bm{q}_{s}^{\top}\bm{k}_{e}}{\sqrt{D}})$ (7) where $D$ refers to the feature dimension. The query and key vectors are derived from the element-graph as follows: $\bm{q}_{s}=f_{q}(h^{k}_{s},u_{s},\hat{n})$ (8) $\bm{k}_{e}=f_{k}(h^{k}_{e},u_{e},\hat{n},r_{es})$ (9) Both $f_{q}$ and $f_{k}$ are linear transformations. The attention weight $\alpha_{es}$ captures the importance of the connection between nodes $V_{e}$ and $s$, while the message $m_{es}$ captures the information being passed between the nodes. By multiplying these two quantities, the GAT model is able to filter the important connections in the graph and use them to update the node features. This process allows our method to discern the underlying rationale in decision-making processes and obtain the interpretative embedding. After the interpretation embedding has been obtained, we can generate explanations by extracting the most important nodes and edges in the graph. We use a linear transformation to map the final node features to the output space, and use a softmax function to compute the probability distribution over the potential answers. The most probable answer is selected as the final answer, and the corresponding nodes and edges in the element-graph are used to generate the textual explanations for the decision-making process of the PLMs. ### 4.3 Attention-aware Explanation Generation In prior work, Chen et al. proposed a counterfactual-based explanation generator that pairs input text with qualified counterfactual examples to fine-tune the LM to generate explanations in the format of "why A and not B". However, this approach only provides a possible explanation that may reveal the reasoning behind the model’s decision, rather than faithfully interpreting the inner workings of the neural network.
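For reference, the attention computation of Section 4.2 (eqs. (5)–(9)), whose weights are reused for explanation below, can be sketched as follows. The layer sizes, the use of plain linear layers for $f_{n}$, $f_{q}$, $f_{k}$, and the concatenation-based inputs are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ElementGraphAttention(nn.Module):
    """Sketch of one GAT-style layer over the element-graph (eqs. 5-9); sizes are illustrative."""

    def __init__(self, dim=200, type_dim=20, n_relations=38):
        super().__init__()
        self.f_theta = nn.Sequential(                        # eq. (6): two-layer MLP for r_es
            nn.Linear(n_relations + 2 * type_dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.f_n = nn.Linear(dim + type_dim + dim, dim)       # eq. (5): message m_es
        self.f_q = nn.Linear(dim + type_dim + 1, dim)         # eq. (8): query q_s
        self.f_k = nn.Linear(dim + type_dim + 1 + dim, dim)   # eq. (9): key k_e

    def forward(self, h_s, u_s, h_e, u_e, rel_onehot, n_score):
        """Inputs are stacked over the E edges (s -> V_e) incident to one node V_e.

        h_s, h_e: node features (E, dim); u_s, u_e: node-type embeddings (E, type_dim);
        rel_onehot: (E, n_relations); n_score: instructive score from eq. (2), shape (E, 1).
        """
        r_es = self.f_theta(torch.cat([rel_onehot, u_e, u_s], dim=-1))
        m_es = self.f_n(torch.cat([h_s, u_s, r_es], dim=-1))              # eq. (5)
        q_s = self.f_q(torch.cat([h_s, u_s, n_score], dim=-1))            # eq. (8)
        k_e = self.f_k(torch.cat([h_e, u_e, n_score, r_es], dim=-1))      # eq. (9)
        alpha = F.softmax((q_s * k_e).sum(-1) / q_s.shape[-1] ** 0.5, dim=0)  # eq. (7)
        h_update = (alpha.unsqueeze(-1) * m_es).sum(dim=0)                # inner sum of eq. (4)
        return h_update, alpha    # alpha is later reused to rank reason-elements (Sec. 4.3)
```

The returned attention weights are exactly the quantities that Section 4.3 sorts to pick the top $w$ reason-elements.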
To generate more accurate explanations, we utilize a template-based approach that interprets the decision-making process of the LM, as described in Section 4.2. The generated explanations are based on the final answer, corresponding nodes of reason-elements, and edges in the interpretation. Our explanation generator consists of two steps: key explanation element extraction and prompt-based explanation generation. #### 4.3.1 Explanation Components Extraction We first extract the key components that are essential to the decision-making process of the PLMs. These key components consist of the final answer, corresponding nodes of reason-elements, edges, and attention weights ($\alpha$) obtained in Section 4.2. The final answer, corresponding nodes, and edges are used to trace the important explanation nodes, and the attention weights are used to sort the nodes and select the top $w$ nodes that are most relevant to the decision-making. Each node represents an element, so we have $w$ most important components to interpret the explanation. We use $l$ to represent the extracted key component. We denote the output by $E$. $E$ is a natural language explanation. #### 4.3.2 Prompt-based Explanation Generation We integrate the key component set $\\{l\\}$ into our prompt-based explanation generator. The prompt-based explanation generator is based on a set of predefined structures that guide the generation of explanations. The generator includes input context $z$, model predicted output $y^{\prime}$, the trigger sentence $\mathrm{Z}$, and the extracted key components $\\{l\\}$. Our explanation generation has two stages: (1) why choose this explanation, (2) why not choose other explanations. In the “why choose" stage, we generate the explanation for explaining why the model chose the specific answer. The template we used is “Q: [$z$], A: [$y^{\prime}$], T: [$\mathrm{Z}$], R: [$\\{l\\}$]". The output $E$ of the first stage is used in the second stage to explain why the model did not choose other answers. The template we used in the second stage is: “P: [$E$], T: [$\mathrm{\hat{Z}}$]". We use the GPT-3.5-turbo (Ouyang et al., 2022) model to provide a literal interpretation of the reasoning process of the PLMs. The output of the generator is a natural language explanation in the form of a sentence or a paragraph. As demonstrated in Figure 1, our approach uses the top 5 reason-elements. These elements are then passed to Stage 1, which produces the “why choose" explanation. Once the “why choose" explanation is obtained, it is used to generate the corresponding “why not choose" explanation. ### 4.4 Learning and Inference In our task, each question $q$ is associated with a set of answer choices $\mathcal{A}$, with only one being the correct answer. We utilize the information from LM embedding and interpretation embedding. Specifically, we define the probability of choosing an answer with $P(a|q)\propto exp(MLP(\mathbb{H}^{LM},\mathbb{H}^{itp}))$, where $\mathbb{H}^{itp}=h_{V}^{K}$, and $\mathbb{H}^{LM}$ is the representation embedding from LM. We optimize the model by using the cross-entropy loss. ## 5 Experiments ### 5.1 Dataset In our experiments, we use the CommonsenseQA (Talmor et al., 2019) and OpenBookQA (Mihaylov et al., 2018) datasets to evaluate the performance of the candidate approaches. CommonsenseQA consists of 12,247 questions created by crowd-workers, which are designed to test commonsense knowledge through a 5-way multiple choice QA task. 
OpenBookQA consists of 5,957 questions, each posed as a 4-way multiple-choice question. The questions are designed to assess the ability of models to reason with elementary science knowledge.

### 5.2 Baselines

Our evaluation can be divided into two parts. In the first part, we focus on model performance. We compare LMExplainer with three sets of baseline models on the CommonsenseQA and OpenBookQA datasets. The first set of baselines consists of the fine-tuned language model RoBERTa-large (Liu et al., 2019a), which demonstrates the capabilities of language models without interpretation. The second set of baselines includes KG-augmented versions of RoBERTa-large, using ConceptNet as the source of commonsense knowledge and following the approach in (Lin et al., 2019). The third set of baselines comprises current state-of-the-art commonsense reasoning methods on CommonsenseQA: MHGRN (Feng et al., 2020), QA-GNN (Yasunaga et al., 2021), and GreaseLM (Zhang et al., 2022). The PLM we use is from Hugging Face (https://huggingface.co/). In the second part, we evaluate LMExplainer on explanation ability. To establish a baseline for comparison, we employ two prior works, PathReasoner (Zhan et al., 2022b) and Explanations for CommonsenseQA (Aggarwal et al., 2021), as benchmarks. These works are recognized for providing natural and comprehensible explanations.

### 5.3 Experimental Settings

We set our GNN module to 200 dimensions and 5 layers, with a dropout rate of 0.2 applied to each layer. We trained the model using the RAdam optimizer on a single NVIDIA A100 GPU, with a training process of approximately 3 hours. A batch size of 64 was employed during training, and the learning rates for the language model and the GNN module were set to 1e-5 and 1e-3, respectively. These settings were adopted in the first part of the evaluation to investigate the performance of the GNN module. We employ ConceptNet (Speer et al., 2017) as our external knowledge source for CommonsenseQA and OpenBookQA. ConceptNet contains a vast amount of information, with 799,273 nodes and 2,487,810 edges, which provides a valuable resource for improving the accuracy of QA systems. We extract a subgraph with a hop size of 3 and subsequently prune the obtained graph to retain only the top 200 nodes. For explanation generation, the prompt components used in the first stage are Q = "Question context is", A = "The predicted choice is", and T = "According to the model top reason-elements" + K + "explain the model reasoning process with 'since…, …'", where K is the reason-elements of the model. In the second stage, P = "According to" and T = "explain why the model doesn't choose other answers".

### 5.4 Experimental Results

We present our experimental results in Table 1 and Table 2, where the accuracy of our proposed LM approach is evaluated on the CommonsenseQA and OpenBookQA datasets. Our empirical findings indicate that our approach leads to consistent improvements in performance compared to existing baseline methods on both datasets. Specifically, the test performance on CommonsenseQA is improved by 4.71% over the prior best LM+KG method, GreaseLM, by 5.35% over the included KG-augmented LMs, and by 7.12% over fine-tuned LMs. The test performance achieves comparable results to the prior best LM+KG method, GreaseLM, on OpenBookQA.
It is worth noting that GreaseLM is specifically designed to improve accuracy for QA tasks, while our LMExplainer model focuses on providing explanations for the reasoning process. Despite this difference in focus, our LMExplainer model not only offers insight into the underlying reasoning but also demonstrates an improvement in performance. This finding highlights the potential benefits of incorporating explainability into the model design, as it may lead to enhanced performance in addition to fostering a better understanding of the decision-making process.

Method | IHdev-Acc. | IHtest-Acc.
---|---|---
Baselines (Feng et al., 2020) | |
MHGRN (2020) | 73.69% | 71.08%
KagNet (2019) | 73.47% | 69.01%
GconAttn (2019) | 72.61% | 68.59%
RGCN (2018) | 72.69% | 68.41%
RN (2017) | 74.57% | 69.08%
Baselines (our implementation) | |
GreaseLM (2022) | 76.17% | 72.60%
QA-GNN (2021) | 74.94% | 72.36%
LMExplainer (ours) | 77.97% | 77.31%

Table 1: Performance comparison of our proposed LMExplainer model against various baselines on the CommonsenseQA in-house split. Our model outperforms all the other methods, achieving 77.97% and 77.31% accuracy on IHdev and IHtest, respectively. As the official test set is hidden, we report the in-house Dev (IHdev) and Test (IHtest) accuracy, following the data split of (Lin et al., 2019).

Method | Dev-Acc. | Test-Acc.
---|---|---
Baselines (Feng et al., 2020) | |
MHGRN (2020) | 68.10% | 66.85%
GconAttn (2019) | 64.30% | 61.90%
RGCN (2018) | 64.65% | 62.45%
RN (2017) | 67.00% | 65.20%
Baselines (our implementation) | |
GreaseLM (2022) | 71.80% | 70.80%
QA-GNN (2021) | 63.00% | 59.80%
LMExplainer (ours) | 69.20% | 68.00%

Table 2: Performance comparison of our proposed LMExplainer model against various baselines on OpenBookQA. Our LMExplainer model exhibits competitive performance in relation to the top-performing model, GreaseLM. It is worth noting that GreaseLM is specifically tailored to enhance accuracy for QA tasks, whereas our LMExplainer model emphasizes providing explanations for the underlying reasoning process. We use the official data splits.

### 5.5 Explanation Results

Input Questions | Q: What is someone doing if he or she is sitting quietly and his or her eyes are moving? A. reading B. meditate C. fall asleep D. bunk E. think
Label | A. reading
Results of Our Approach - LM |
Ranked Reasoning Keys | 1. quiet chattering mind, 2. not making sound, 3. mind focuses, 4. glasses for people with poor eyesight, 5. war
Explanation (why choose) | Since the person is described as sitting quietly and their eyes are moving, it is likely that they are engaged in a visual activity. Based on the keyword "glasses for people with poor eyesight", option "A. reading" is the most likely answer, as reading is a common visual activity that requires focusing one's eyes on a page and is often aided by glasses for people with poor eyesight.
Explanation (why not choose) | The other options, such as "B. meditate" or "C. fall asleep", involve closing one's eyes or having a still mind, so it is unlikely that the person is doing either of those activities if their eyes are moving. Similarly, "D. bunk" and "E. think" do not seem to be related to the visual activity of having one's eyes move while sitting quietly.
Explanation of Others |
PathReasoner (Zhan et al., 2022b) | quietly [related to] quiet [at location] a library [used for] reading; eyes [used for] reading; eyes [form of] eye [related to] glasses [used for] reading; sitting [related to] sit [related to] relaxing [has subevent] reading
Explanations for CommonsenseQA (Aggarwal et al., 2021) | Positive examples: - When we read, our eyes move. - While reading, a person sits quietly. Negative examples: - While meditating, eyes don't move, eyes are closed. - While sleeping, eyes are closed and they don't move. - When a person bunks, he/she doesn't sit quietly. - Eyes don't move when you think about something. Explanation: When we read, our eyes move. While reading, a person sits quietly. While meditating and sleeping, eyes don't move, eyes are closed. When a person bunks, he/she doesn't sit quietly. Eyes don't move when you think about something.

Table 3: Explanation examples from LMExplainer, PathReasoner, and Explanations for CommonsenseQA. We show the different types of explanations, including ranked reasoning keys, "why choose" explanations, and "why not choose" explanations. The "why choose" explanation presents the model's reasoning process in a logical way, while the "why not choose" explanation shows why the model does not choose the other answers, which enhances the transparency and interpretability of the reasoning process for humans. We use green and blue to highlight the logical connectives and reasoning framework, respectively.

Our explanation results in Table 3 provide evidence of the reasoning ability of our proposed LMExplainer. To further demonstrate the effectiveness of our approach, we compare it with two other state-of-the-art methods, PathReasoner (Zhan et al., 2022a) and Explanations for CommonsenseQA (Aggarwal et al., 2021). PathReasoner utilizes structured information to explain the reasoning path, while Explanations for CommonsenseQA first collects human-annotated explanations and then leverages a generation model to organize the final explanation. In Table 3, we present the inputs of our model as well as the results of our approach, which include ranked reasoning keywords and natural language explanations of the reasoning process. These examples highlight the ability of our LMExplainer approach to generate comprehensive and interpretable explanations for the models. In comparison to PathReasoner explanations, which only provide structured reasoning paths that are non-informative and require manual selection of a specific path, our proposed approach not only offers a complete reasoning path but also provides a justification for the predicted answer. As illustrated in Table 3, PathReasoner presents four reasoning paths, including redundant paths, making it difficult to identify the faithful reasoning path. In contrast, our method provides a clear and concise natural language explanation for the chosen answer (the "why choose" explanation), which greatly enhances the understandability and smoothness of the explanation. The Explanations for CommonsenseQA dataset consists of human-annotated explanations that provide highly accurate descriptions of the reasoning process. However, as shown in Table 3, its explanations are simply a combination of positive and negative examples provided by humans. While this approach can generate high-quality explanations from a human perspective, it fails to illustrate the actual reasoning process of the model.
In contrast, the explanations generated by LMExplainer are not a mere combination of sentences but are inferred and logically derived. Our approach provides a more comprehensive and accurate depiction of the reasoning process and improves the overall interpretability and usefulness of the generated explanations. In addition, the "why not choose" explanation explains why the model does not choose the other answers, which gives people a better understanding of the model's predictions and increases the transparency of the model. These results highlight the effectiveness of quantifying the influence of tokens on the reasoning process and of providing a literal representation of the information flow during inference. This is important because it allows us to understand the rationale behind the decision-making process of the LM and to identify the key factors that contribute to its predictions.

### 5.6 Ablation Studies

Table 4, Table 5, and Table 6 summarize the ablation study, examining the impact of different components on the performance of our LMExplainer model. We evaluated the effects of varying PLM sizes, knowledge components, and interpreting components on the CommonsenseQA IHdev and IHtest sets.

#### 5.6.1 PLM Size

PLM | IHdev-Acc. | IHtest-Acc.
---|---|---
RoBERTa-large (final) | 77.97% | 77.31%
RoBERTa | 66.26% | 63.01%

Table 4: Ablation study on the effect of PLM size on model accuracy.

Table 4 shows the impact of PLM size on our proposed method. We evaluate two different PLM sizes: RoBERTa-large (340M parameters) and RoBERTa (110M parameters). Our results indicate that using a larger PLM leads to a significant improvement in performance, with an increase of 11.71% and 14.30% on the IHdev and IHtest sets, respectively. These findings suggest that the size of the PLM plays a critical role in the performance of our model, and using a larger PLM can result in better performance.

#### 5.6.2 Knowledge Component

Method | IHdev-Acc. | IHtest-Acc.
---|---|---
RoBERTa-large only | 74.28% | 70.19%
RoBERTa only | 62.65% | 60.27%
w/ external knowledge (final) | 77.97% | 77.31%

Table 5: Ablation study on the effect of the knowledge component on model accuracy.

Table 5 shows the impact of the knowledge component on our method. We compare the performance of the LM-only models with and without external knowledge from ConceptNet. "RoBERTa-large only" and "RoBERTa only" mean that only the LM is used to predict the answer, while "w/ external knowledge" means that the external knowledge is incorporated. We observe that incorporating external knowledge significantly improves the accuracy of prediction, especially on the test set. By incorporating the knowledge, the IHdev and IHtest accuracies increase by at least 3.69% and 7.12%, respectively. This shows that external knowledge plays an important role in enhancing the reasoning ability of the model.

#### 5.6.3 Interpreting Component

Method | IHdev-Acc. | IHtest-Acc.
---|---|---
RoBERTa-large w/o itp | 73.05% | 71.96%
RoBERTa w/o itp | 68.63% | 64.54%
RoBERTa-large w/ itp (final) | 77.97% | 77.31%

Table 6: Ablation study on the effect of the interpreting component on model accuracy.

In Table 6, we analyze the impact of the interpreting component on the model's performance. "w/o itp" indicates that the interpreting component was not incorporated in the prediction, whereas "w/ itp" indicates its presence. We observe that removing the interpreting component leads to a clear decrease in accuracy of at least 4.92% and 5.35% on IHdev and IHtest, respectively.
Furthermore, comparing the results of RoBERTa-large only, RoBERTa-large w/o itp, and Final, we find that the interpreting component has a greater impact on accuracy than the other components. The ablation highlights the positive contributions of each component of our method. Specifically, we find that the interpreting component has a crucial role in our method’s accuracy and generalizability on unseen questions. ## 6 Conclusion In this paper, we propose LMExplainer, a novel model that incorporates an interpretation module to enhance the performance of language models while also providing clear and trustworthy explanations of the model’s reasoning. Our model utilizes both interpretation and explanation to achieve these goals. The explanation results are presented in a logical and comprehensive manner, making it easier for people to understand the model’s reasoning in natural language. Our experimental results demonstrate superior performance compared to prior state-of-the-art works across standard datasets in the commonsense domain. Our analysis shows that LMExplainer not only improves the model’s performance but also provides humans with a better understanding of the model. ## References * Aggarwal et al. (2021) Shourya Aggarwal, Divyanshu Mandowara, Vishwajeet Agrawal, Dinesh Khandelwal, Parag Singla, and Dinesh Garg. 2021. Explanations for CommonsenseQA: New Dataset and Models. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 3050–3065, Online. Association for Computational Linguistics. * Brown et al. (1992) Peter F. Brown, Vincent J. Della Pietra, Peter V. deSouza, Jenifer C. Lai, and Robert L. Mercer. 1992. Class-based n-gram models of natural language. _Computational Linguistics_ , 18(4):467–480. * Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. _Advances in neural information processing systems_ , 33:1877–1901. * Chen et al. (2021) Qianglong Chen, Feng Ji, Xiangji Zeng, Feng-Lin Li, Ji Zhang, Haiqing Chen, and Yin Zhang. 2021. Kace: Generating knowledge aware contrastive explanations for natural language inference. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 2516–2527. * Conneau and Lample (2019) Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. _Advances in neural information processing systems_ , 32. * Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. * Ding and Koehn (2021) Shuoyang Ding and Philipp Koehn. 2021. Evaluating saliency methods for neural language models. In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 5034–5052, Online. Association for Computational Linguistics. 
* Feng et al. (2020) Yanlin Feng, Xinyue Chen, Bill Yuchen Lin, Peifeng Wang, Jun Yan, and Xiang Ren. 2020. Scalable multi-hop relational reasoning for knowledge-aware question answering. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 1295–1309. * Guidotti et al. (2018) Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Dino Pedreschi, Franco Turini, and Fosca Giannotti. 2018. Local rule-based explanations of black box decision systems. _arXiv preprint arXiv:1805.10820_. * Huang et al. (2022) Jie Huang, Kerui Zhu, Kevin Chen-Chuan Chang, Jinjun Xiong, and Wen-mei Hwu. 2022\. DEER: Descriptive knowledge graph for explaining entity relationships. In _Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing_ , pages 6686–6698, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. * Huang et al. (2019) Xiaowen Huang, Quan Fang, Shengsheng Qian, Jitao Sang, Yan Li, and Changsheng Xu. 2019. Explainable interaction-driven user modeling over knowledge graph for sequential recommendation. In _proceedings of the 27th ACM international conference on multimedia_ , pages 548–556. * Jain and Wallace (2019) Sarthak Jain and Byron C Wallace. 2019. Attention is not explanation. In _Proceedings of NAACL-HLT_ , pages 3543–3556. * Ji et al. (2021) Shaoxiong Ji, Shirui Pan, Erik Cambria, Pekka Marttinen, and S Yu Philip. 2021. A survey on knowledge graphs: Representation, acquisition, and applications. _IEEE transactions on neural networks and learning systems_ , 33(2):494–514. * Kipf and Welling (2017) Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In _International Conference on Learning Representations (ICLR)_. * Kumar and Talukdar (2020) Sawan Kumar and Partha Talukdar. 2020. NILE : Natural language inference with faithful natural language explanations. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 8730–8742, Online. Association for Computational Linguistics. * Li et al. (2022) Belinda Z Li, Jane Yu, Madian Khabsa, Luke Zettlemoyer, Alon Halevy, and Jacob Andreas. 2022. Quantifying adaptability in pre-trained language models with 500 tasks. In _Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 4696–4715. * Lin et al. (2019) Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren. 2019. Kagnet: Knowledge-aware graph networks for commonsense reasoning. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 2829–2839. * Lin et al. (2020) Zhaojiang Lin, Peng Xu, Genta Indra Winata, Farhad Bin Siddique, Zihan Liu, Jamin Shin, and Pascale Fung. 2020. Caire: An end-to-end empathetic chatbot. In _Proceedings of the AAAI conference on artificial intelligence_ , volume 34, pages 13622–13623. * Liu et al. (2023) Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. _ACM Computing Surveys_ , 55(9):1–35. * Liu et al. (2019a) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019a. Roberta: A robustly optimized bert pretraining approach. 
_arXiv preprint arXiv:1907.11692_. * Liu et al. (2019b) Zhibin Liu, Zheng-Yu Niu, Hua Wu, and Haifeng Wang. 2019b. Knowledge aware conversation generation with explainable reasoning over augmented graphs. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 1782–1792. * Loh et al. (2022) Hui Wen Loh, Chui Ping Ooi, Silvia Seoni, Prabal Datta Barua, Filippo Molinari, and U Rajendra Acharya. 2022. Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011–2022). _Computer Methods and Programs in Biomedicine_ , page 107161. * Meng et al. (2022) Chuizheng Meng, Loc Trinh, Nan Xu, James Enouen, and Yan Liu. 2022. Interpretability and fairness evaluation of deep learning models on mimic-iv dataset. _Scientific Reports_ , 12(1):7166. * Mihaylov et al. (2018) Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 2381–2391. * Mireshghallah et al. (2022) Fatemehsadat Mireshghallah, Kartik Goyal, and Taylor Berg-Kirkpatrick. 2022. Mix and match: Learning-free controllable text generationusing energy language models. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 401–415. * Nye et al. (2021) Maxwell Nye, Michael Tessler, Josh Tenenbaum, and Brenden M Lake. 2021. Improving coherence and consistency in neural sequence models with dual-system, neuro-symbolic reasoning. _Advances in Neural Information Processing Systems_ , 34:25192–25204. * OpenAI (2023) OpenAI. 2023. Gpt-4 technical report. _ArXiv_ , abs/2303.08774. * Ouyang et al. (2022) Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022\. Training language models to follow instructions with human feedback. _arXiv preprint arXiv:2203.02155_. * (29) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. * Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. _The Journal of Machine Learning Research_ , 21(1):5485–5551. * Ribeiro et al. (2016) Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. " why should i trust you?" explaining the predictions of any classifier. In _Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining_ , pages 1135–1144. * Shrikumar et al. (2017) Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2017. Learning important features through propagating activation differences. In _International conference on machine learning_ , pages 3145–3153. PMLR. * Situ et al. (2021) Xuelin Situ, Ingrid Zukerman, Cecile Paris, Sameen Maruf, and Gholamreza Haffari. 2021. Learning to explain: Generating stable explanations fast. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 5340–5355, Online. 
Association for Computational Linguistics. * Smilkov et al. (2017) Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda B. Viégas, and Martin Wattenberg. 2017. Smoothgrad: removing noise by adding noise. _ArXiv_ , abs/1706.03825. * Speer et al. (2017) Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In _Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence_ , AAAI’17, page 4444–4451. AAAI Press. * Štrumbelj and Kononenko (2014) Erik Štrumbelj and Igor Kononenko. 2014. Explaining prediction models and individual predictions with feature contributions. _Knowledge and information systems_ , 41:647–665. * Sundararajan et al. (2017) Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In _Proceedings of the 34th International Conference on Machine Learning - Volume 70_ , ICML’17, page 3319–3328. JMLR.org. * Swamy et al. (2021) Vinitra Swamy, Angelika Romanou, and Martin Jaggi. 2021. Interpreting language models through knowledge graph extraction. In _Advances in Neural Information Processing Systems (NeurIPS), 1st Workshop on eXplainable AI Approaches for Debugging and Diagnosis_. * Tack and Piech (2022) Anaïs Tack and Chris Piech. 2022. The ai teacher test: Measuring the pedagogical ability of blender and gpt-3 in educational dialogues. In _Proceedings of the 15th International Conference on Educational Data Mining_ , page 522. * Talmor et al. (2019) Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4149–4158, Minneapolis, Minnesota. Association for Computational Linguistics. * Thorne et al. (2019) James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2019\. Generating token-level explanations for natural language inference. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 963–969. * Tigas et al. (2022) Panagiotis Tigas, Yashas Annadani, Andrew Jesson, Bernhard Schölkopf, Yarin Gal, and Stefan Bauer. 2022. Interventions, where and how? experimental design for causal models at scale. _arXiv preprint arXiv:2203.02016_. * Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. _Advances in neural information processing systems_ , 30. * Veličković et al. (2018) Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph Attention Networks. _International Conference on Learning Representations_. Accepted as poster. * (45) Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. In _International Conference on Learning Representations_. * Yasunaga et al. (2021) Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, and Jure Leskovec. 2021. Qa-gnn: Reasoning with language models and knowledge graphs for question answering. _arXiv preprint arXiv:2104.06378_. * Ying et al. 
(2019) Zhitao Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, and Jure Leskovec. 2019. Gnnexplainer: Generating explanations for graph neural networks. _Advances in neural information processing systems_ , 32. * Yu et al. (2022) Puxuan Yu, Razieh Rahimi, and James Allan. 2022. Towards explainable search results: A listwise explanation generator. In _Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval_ , pages 669–680. * Zhan et al. (2022a) Xunlin Zhan, Yinya Huang, Xiao Dong, Qingxing Cao, and Xiaodan Liang. 2022a. Pathreasoner: Explainable reasoning paths for commonsense question answering. _Knowledge-Based Systems_ , 235:107612. * Zhan et al. (2022b) Xunlin Zhan, Yinya Huang, Xiao Dong, Qingxing Cao, and Xiaodan Liang. 2022b. Pathreasoner: Explainable reasoning paths for commonsense question answering. _Knowledge-Based Systems_ , 235:107612. * Zhang et al. (2022) Xikun Zhang, Antoine Bosselut, Michihiro Yasunaga, Hongyu Ren, Percy Liang, Christopher D Manning, and Jure Leskovec. 2022. Greaselm: Graph reasoning enhanced language models for question answering. _arXiv preprint arXiv:2201.08860_. * Zhou et al. (2022) Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. 2022. Learning to prompt for vision-language models. _International Journal of Computer Vision_ , 130(9):2337–2348. * Zytek et al. (2022) Alexandra Zytek, Ignacio Arnaldo, Dongyu Liu, Laure Berti-Equille, and Kalyan Veeramachaneni. 2022. The need for interpretable features: Motivation and taxonomy. _ACM SIGKDD Explorations Newsletter_ , 24(1):1–13.
# Experimental Searches For Heavy Neutral Leptons

Sophie C. Middleton

California Institute of Technology, Pasadena, California 91125, USA

###### Abstract

on behalf of the BABAR Collaboration

The highly successful Standard Model is not complete. It does not explain the baryonic asymmetry in the Universe, the existence of dark matter, or the non-zero masses of the neutrinos. Extensions of the Standard Model that propose the existence of additional Heavy Neutral Leptons (HNLs) are well-motivated and can explain several of these phenomena. In addition, light sterile neutrinos of $\mathcal{O}(\text{eV}/c^{2})$ can explain experimentally observed oscillation anomalies. The Neutrino Minimal Standard Model proposes HNLs with masses $\mathcal{O}(\text{keV}/c^{2}-\text{GeV}/c^{2})$, while more exotic models predict very large masses, up to the GUT scale. Due to the multitude of models which hypothesize HNLs, the mass range to be explored by experiments is large. Experimental searches for HNLs can be conducted at existing neutrino, beam dump and collider-based experiments, and, depending on the signature, can constrain mixing between additional neutrinos and any of the three active neutrinos.

## I Motivations

Heavy Neutral Leptons (HNLs) are predicted by many extensions to the Standard Model (SM). They possess mass and therefore interact via gravity, but have no electric charge, no weak hyper-charge, no weak isospin, and no color charge. They are singlets under all gauge interactions and are consequently referred to as "sterile neutrinos." Sterile neutrinos have long been used to explain the apparent smallness of the SM neutrino masses [1]. HNLs interact only with the active neutrinos via mixing, and possibly with the Higgs boson. Current experimental limits are outlined in Refs. [2, 3]. The following sub-sections document how HNLs can help resolve several known issues with the SM and how they are ubiquitous in models that explain neutrino mass.

### I.1 Inconsistencies between Standard Model and Observation

Standard Model predictions have been extensively tested and are found to be in good agreement with experimental results; in some cases with extraordinary precision of up to 1 part in a trillion [4]. Nevertheless, there remains a need to extend the SM to explain several observational phenomena, including the baryon asymmetry in the Universe (BAU), the existence of dark matter, and the non-zero mass of the neutrinos. The Neutrino Minimal Standard Model ($\nu$-MSM) [5] is one extension which requires the existence of three HNLs. It is capable of explaining the origins of neutrino masses, dark matter [6] and the BAU [7, 8]. The HNLs have Majorana masses below the electroweak scale and realize the see-saw mechanism with the SM neutrinos. Two of the HNLs have masses in the $\mathcal{O}$(MeV/$c^{2}$-GeV/$c^{2}$) range and a third, the dark matter candidate, has a mass of $\mathcal{O}$(keV/$c^{2}$). The $\nu$-MSM is compatible with all current measurements. Models with GeV/$c^{2}$-scale HNLs are the subject of intensive theoretical study. Heavy Neutral Leptons with masses from $\mathcal{O}(100$ MeV/$c^{2}$) up to a few GeV/$c^{2}$ can be produced in decays of SM particles, while heavier states, above a few GeV/$c^{2}$, can be directly produced at colliders.

### I.2 Incorporating Neutrino Mass into the Standard Model

Observation of neutrino oscillations has established the non-zero mass of at least two of the SM neutrinos.
Since the discovery of neutrino oscillations, a global experimental program has measured, with precision, most of the oscillation parameters using neutrinos of solar, accelerator, reactor and atmospheric origin. The absolute values of the masses are yet to be determined, but experiments have measured the mass-squared differences, with current bounds detailed in Ref. [9]. It appears that mixing in the lepton sector has a very different structure compared to that of the quarks, and there is currently no explanation for why that might be. In addition, there are several remaining questions regarding neutrinos, such as whether they are Dirac or Majorana particles, what their absolute masses are, and whether they exhibit CP violation. Accurately answering these questions, and accounting for neutrino masses in a consistent framework, is the focus of a global experimental and theoretical effort.

The most intuitive way to extend the SM to include the observed neutrino masses would be to add a term coupling neutrinos to the Higgs field, analogous to that for charged leptons:

$Y^{\nu}_{\alpha\beta}\bar{L}^{\alpha}\cdot\tilde{H}\nu^{\beta}_{R}+h.c.,$ (1)

where $\tilde{H}=i\sigma_{2}H^{*}$; $\sigma_{2}$ is the second Pauli matrix; $H$ is the Higgs field; $Y^{\nu}_{\alpha\beta}$ describes the couplings of the neutrino flavor states to the Higgs field; $L^{\alpha}$ is an $SU(2)_{L}$ doublet of left-handed leptons; and $\alpha,\beta$ are flavor indices. Incorporating neutrino mass in this way requires the Yukawa coupling to be orders of magnitude smaller than that for the charged leptons, and requires that right-handed neutrinos have no Majorana mass, despite there being no symmetry preventing it. In addition, the masses and mixing angles would be expected to follow a hierarchy similar to that of the quarks, which is not the case. The SM admits a dimension-five operator, the Weinberg operator, that is gauge invariant:

$\mathcal{L}_{5}=\frac{c^{[5]}}{\Lambda}L^{T}\cdot\tilde{H}^{*}C^{\dagger}\tilde{H}^{\dagger}\cdot L+h.c.$ (2)

where $\Lambda$ is the scale at which the particles responsible for lepton number violation become relevant degrees of freedom; $c^{[5]}$ is a flavor-dependent Wilson coefficient; and $C$ is the charge-conjugation matrix. The Weinberg operator leads to Majorana masses for the neutrinos after electroweak symmetry breaking. If the neutrinos are in fact Majorana particles, there is a multitude of models which can generate Majorana mass terms for left-handed fermions below the electroweak symmetry breaking scale. These models are collectively referred to as "See-Saw Models" and can also account for the smallness of the neutrino masses without introducing an extremely small Yukawa coupling. They are categorized into three "types":

1. Type I Seesaw [10, 11, 12, 13] with a singlet fermion;
2. Type II Seesaw [13, 14, 15, 16] with heavy triplet scalars;
3. Type III Seesaw [17, 18] with triplet fermions.

One common feature of models that explain neutrino masses is the existence of a new HNL state. Severe constraints exist for the eV-scale Seesaw [19] from cosmic surveys and Big-Bang Nucleosynthesis (BBN) [20]. However, more natural solutions remain at the GeV or TeV scale. For a search at the GeV scale the Yukawa coupling is of $\mathcal{O}(10^{-5})$ and can be probed at existing experiments. At the TeV scale direct searches become less effective, since the Yukawa coupling is smaller ($\mathcal{O}(10^{-6})$).
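As a rough, order-of-magnitude illustration of the scales involved (the numbers below are representative textbook choices rather than values derived in this article), the Type I seesaw relates the light neutrino mass to the Dirac mass $m_{D}$ and the heavy Majorana mass $M_{N}$ as

$m_{\nu}\sim\frac{m_{D}^{2}}{M_{N}}\sim\frac{(100\,\text{GeV})^{2}}{10^{14}\,\text{GeV}}=10^{-10}\,\text{GeV}=0.1\,\text{eV},$

so a GUT-scale $M_{N}$ combined with an electroweak-scale $m_{D}$ naturally yields sub-eV active neutrino masses, while lowering $M_{N}$ to the GeV-TeV scale requires a correspondingly smaller $m_{D}$ (and hence Yukawa coupling) to keep $m_{\nu}$ at the observed scale.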
### I.3 Suggestions of Additional Sterile Neutrino States

The existence of light sterile fermions with masses of $\mathcal{O}(\text{eV}/c^{2})$ can also provide an explanation for observed anomalies in very short baseline oscillation measurements and cosmological data analyses [21]. Re-analysis of data from the GALLEX [22] and SAGE [23] solar neutrino experiments has exposed an unexplained 14$\pm$5$\%$ deficit in the number of recorded $\nu_{e}$, referred to as the "Gallium anomaly". In addition, numerous analyses of the flux of $\bar{\nu}_{e}$ from reactors have suggested a deficit of $\bar{\nu}_{e}$ at the 98.6$\%$ C.L. [24], denoted the "reactor anti-neutrino anomaly". When combined, the two anomalies disfavor the no-oscillation hypothesis at 99.97$\%$ ($3.6\sigma$). A third anomaly, the "accelerator anomaly", stems from measurements at the LSND experiment [25], which studied $\bar{\nu}_{\mu}\rightarrow\bar{\nu}_{e}$ oscillations at a baseline of $L$ = 30 m. LSND measured an excess of events at the level of 3.8$\sigma$ which could be explained by the existence of a sterile neutrino with a mass of $\mathcal{O}(\text{eV}/c^{2})$. Further support for this excess was presented by the MiniBooNE experiment at the 2.8$\sigma$ level [26].

## II Searching for Heavy Neutral Lepton States

Mixing between a beyond-SM (BSM) heavy neutrino mass eigenstate and the active neutrino states can be parameterized by an extended Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix. The additional elements, $U_{l,n}$, represent the mixing strength between the active neutrino flavor state, $l$, and the BSM $n$-th neutrino mass state:

$\begin{pmatrix}\nu_{e}\\ \nu_{\mu}\\ \nu_{\tau}\\ \nu_{\text{\tiny L}}\\ \vdots\end{pmatrix}=\begin{pmatrix}U_{e1}&U_{e2}&U_{e3}&U_{e4}&\cdots\\ U_{\mu 1}&U_{\mu 2}&U_{\mu 3}&U_{\mu 4}&\cdots\\ U_{\tau 1}&U_{\tau 2}&U_{\tau 3}&U_{\tau 4}&\cdots\\ U_{\text{\tiny L}1}&U_{\text{\tiny L}2}&U_{\text{\tiny L}3}&U_{\text{\tiny L}4}&\cdots\\ \vdots&\vdots&\vdots&\vdots&\ddots\end{pmatrix}\begin{pmatrix}\nu_{1}\\ \nu_{2}\\ \nu_{3}\\ \nu_{4}\\ \vdots\end{pmatrix},$ (3)

where $L$ represents some hypothetical additional lepton flavor. Given that analyses of cosmological data and measurements of $Z$ boson decays, summarized in Ref. [9], are consistent with there being only three charged lepton flavors, it is widely assumed that any HNL with mass less than the $Z$ mass must be sterile and would have no associated charged lepton. Here the PMNS matrix is extended for just one HNL, but others can be added in the same way. The PMNS matrix for the anti-neutrinos is identical to that for neutrinos under CPT symmetry. Experimental results tend to be presented as upper limits on $|U_{(l=e,\mu,\tau),4}|^{2}$ (usually at the 95$\%$ confidence level) as a function of $m_{N}$, the mass of the possible HNL being sought. The probability of a fourth neutrino state interacting with the electron ($|U_{e4}|^{2}$) or muon ($|U_{\mu 4}|^{2}$) has tight constraints [9], while limits on $|U_{\tau 4}|^{2}$ are weaker, motivating the possibility that $|U_{\tau 4}|\gg|U_{e4}|,|U_{\mu 4}|$. This article presents a summary of current bounds on the couplings to each of the active neutrino states. In Sec. III long-established limits on all three mixing strengths are discussed. In Sec. IV recent results, which have been published in the last few years, are presented. In Sec. V the most recent result from BABAR is presented, which places new limits on $|U_{\tau 4}|^{2}$ in the 100 MeV/$c^{2}$ $<m_{4}<$ 1300 MeV/$c^{2}$ mass range.
Existing bounds in the range $\sim$ 300 MeV/$c^{2}$ to $\sim$ 1 GeV/$c^{2}$ are particularly weak. In Sec. VI future projections for near-term projects are discussed and compared to current limits.

## III Experimental Searches

There are several means of searching for HNLs, depending on the mass regime and the proposed coupling to the active neutrinos. Additional neutral leptons can be searched for in cosmology, in colliders, or in high-intensity experiments. Sterile neutrinos can be responsible for contributions to electric and magnetic leptonic dipole moments, and facilitate many rare transitions and decays. Consequently, searches for additional neutrinos cross all frontiers of particle physics.

### III.1 Established Limits, prior to 2020

Figure 1: Sensitivity to Heavy Neutral Leptons with coupling to the (top) electron, (middle) muon and (bottom) tau lepton. Current bounds (filled areas) and near-term (next 5 years) future physics reach. All plots taken from Ref. [2].

#### III.1.1 Limits on Mixing with Electron and Muon Neutrinos

Mixing of heavy neutrinos with both $\nu_{e}$ and $\nu_{\mu}$ can be probed by searching for bumps in the missing-mass distribution of leptonic pion and kaon decays. Existing bounds come from a range of experiments:

* PS 191 at CERN: [27] designed to search for neutrino decays in a low-energy neutrino beam. No such events were found and limits were placed.
* CHARM at CERN: [28] conducted a search for HNLs in a prompt neutrino beam produced by dumping 400 GeV protons in a Cu target. Visible decays with electrons or muons in the final state, $e^{+}e^{-}\nu_{e},\mu^{+}e^{-}\nu_{e},e^{+}\mu^{-}\nu_{\mu}$ and $\mu^{+}\mu^{-}\nu_{\mu}$, were sought. This search provided limits of $|U_{e4}|^{2},|U_{\mu 4}|^{2}<10^{-7}$ for masses up to 1.5 GeV/$c^{2}$. HNLs were also sought by assuming a neutral-current neutrino interaction in the CHARM calorimeter and looking for neutrinos decaying to muons and hadrons. This search was sensitive to masses of 0.5-2.8 GeV/$c^{2}$ and provided limits of $|U_{\mu 4}|^{2}<10^{-4}$.
* Belle at KEK: [29] used 772 million $B\bar{B}$ pairs and their leptonic and semi-leptonic $B$ meson decays, $B\rightarrow(X)l\nu_{R}$, where $l=e,\mu$ and $X$, present in the semi-leptonic case, is either a charmed meson $D$ ($D^{*}$) or a light meson (e.g. $\pi,\rho,\eta$). This search had sensitivity to $|U_{e4}|^{2}$ and $|U_{\mu 4}|^{2}$ in a range of masses between the kaon and the $B$ meson mass.
* DELPHI at LEP: [30] the best limits on all three mixing strengths in the mass range above the $B$ meson mass come from the DELPHI detector at LEP, where $3.3\times 10^{6}$ hadronic $Z^{0}$ decays were analyzed. Four separate searches were performed, for short-lived HNL production resulting in jets and for longer-lived HNLs giving detectable secondary vertices or calorimeter clusters. An upper limit on the branching ratio $BR(Z^{0}\rightarrow\text{HNL}+\bar{\nu})$ of about $1.3\times 10^{-6}$ at the 95$\%$ confidence level was found for masses between 3.5 and 50 GeV/$c^{2}$.
* CMS at LHC: [31] used samples with three prompt charged leptons, in any combination of electrons and muons, collected at a center-of-mass energy of 13 TeV. Their first results correspond to an integrated luminosity of 35.9 fb$^{-1}$. The search is performed in the HNL mass range between 1 GeV and 1.2 TeV.
The allowed range of couplings is bounded from below by the Big Bang Nucleosynthesis (BBN) constraint [32], which ensures that HNLs would not live long enough in the early Universe to cause an overabundance of Helium-4, and the Seesaw limit [33], below which the mixing to active neutrinos is too weak to produce the observed oscillations.

#### III.1.2 Limits on Mixing with Muon Neutrinos

In addition to these searches, there are a few additional experiments which have helped to constrain the ($|U_{\mu 4}|^{2}$, $m_{N}$) parameter space:

1. NuTeV at Fermilab: [34] used $2\times 10^{18}$ 800 GeV protons interacting with a beryllium oxide target and a proton dump, to search for HNLs in the 0.25–2.0 GeV/$c^{2}$ mass range decaying to muonic final states ($\mu\mu\nu,\mu e\nu,\mu\pi$ and $\mu\rho$). Upper limits were placed on $|U_{\mu 4}|^{2}$ down to $\mathcal{O}(10^{-7})$.
2. E949 at BNL: [35] searched for HNLs from the process $K^{+}\rightarrow\mu^{+}\nu_{R}$ using $1.7\times 10^{12}$ stopped kaons. They set limits on $|U_{\mu 4}|^{2}$ of the level $\mathcal{O}(10^{-9}-10^{-7})$ for HNLs of masses 175-300 MeV/$c^{2}$.

#### III.1.3 Limits on Mixing with Tau Neutrinos

Constraints on the ($|U_{\tau 4}|^{2}$, $m_{N}$) parameter space are much weaker; the existing constraints come from DELPHI and two other experiments in the low mass region:

1. CHARM at CERN: [36] used the neutrino flux from $2\times 10^{18}$ 400 GeV protons on a solid copper target to place limits in the 10-290 MeV$/c^{2}$ mass range by re-interpreting the null result of a search for events produced by the decay of neutral particles into two electrons.
2. NOMAD at CERN: [37] used $4.1\times 10^{19}$ 450 GeV protons on target collected at the WANF facility at CERN. The process $D_{s}\rightarrow\tau\nu_{R}$ followed by the decay $\nu_{R}\rightarrow\nu_{\tau}e^{+}e^{-}$ was analyzed and limits were placed in the 10 to 190 MeV$/c^{2}$ mass range.

## IV New Results, published 2020 - 2022

In the past few years several new experimental results for couplings with muons and electrons have been published:

### IV.1 MicroBooNE: Sterile neutrinos $\mathcal{O}(\text{eV}/c^{2})$

Recent results from MicroBooNE are detailed in Ref. [38]. MicroBooNE has developed 3 distinct $\nu_{e}$ searches targeting the MiniBooNE excess:

1. an exclusive search for two-body $\nu_{e}$ charged current quasi-elastic (CCQE) scattering;
2. a semi-inclusive search for pion-less $\nu_{e}$ events;
3. an inclusive $\nu_{e}$ search containing any hadronic final state.

The combined results were consistent with nominal electron neutrino rate expectations; no excess of electron neutrino events was observed. Further results are expected.

### IV.2 Kaon Searches: Couplings of HNLs $\mathcal{O}(\text{MeV}/c^{2})$ mixing with muons and electrons

The NA62 experiment at CERN recently presented a search for $K^{+}\rightarrow\mu^{+}+\text{HNL}$, using the 2016-2018 data set, in Ref. [39]. The analysis found limits of $\mathcal{O}(10^{-8})$ on the neutrino mixing parameter $|U_{\mu 4}|^{2}$ for HNL masses in the range 200-384 MeV/$c^{2}$, with lifetime exceeding 50 ns.

### IV.3 LHC Searches: Couplings of HNLs $\mathcal{O}(\text{GeV}/c^{2})$ mixing with muons and electrons

The ATLAS and CMS detectors at CERN have conducted searches for HNLs, detailed in Refs. [40, 41]. CMS performed a search for HNLs produced with displaced vertices using final states with three charged leptons (electrons or muons), the idea being that HNLs could be produced through mixing with SM neutrinos.
The decay length of these particles can be large enough so that the secondary vertex of the HNL decay can be resolved with the CMS silicon tracker. The selected final state would consist of one lepton emerging from the primary proton-proton collision vertex, along with two leptons forming a displaced, secondary vertex. In this most recent analysis, data totalling 136 fb$^{-1}$ were analyzed. Improved limits on $|U_{e4}|^{2}$ and $|U_{\mu 4}|^{2}$ down to $\mathcal{O}(10^{-7})$ were presented in the 1-20 GeV/$c^{2}$ mass range.

ATLAS analyzed leptonic decays of $W$ bosons extracted using 32.9 - 36.1 fb$^{-1}$ of 13 TeV proton-proton collisions. HNLs could be produced through mixing with muon or electron neutrinos. They looked for both prompt and displaced leptonic decay signatures, where the prompt signature requires three leptons produced at the interaction point and the displaced signature comprises a prompt muon from the $W$ boson decay and the requirement of a displaced di-lepton vertex. This search placed limits on $|U_{e4}|^{2}$ and $|U_{\mu 4}|^{2}$ down to $\mathcal{O}(10^{-5})$ for HNL masses in the range 4.5–50 GeV/$c^{2}$.

## V Couplings of HNLs $\mathcal{O}(\text{MeV}/c^{2}-\text{GeV}/c^{2})$ mixing with taus at BABAR

The latest analysis from BABAR [42] presents new limits on $|U_{\tau 4}|^{2}$ in the 100 $<m_{4}<$ 1300 MeV/$c^{2}$ mass range. An overview of the BABAR detector can be found in Ref. [43]. The data sample used in this analysis corresponds to an integrated luminosity of 424 fb$^{-1}$. The average cross-section for $\tau$-pair production in electron-positron annihilation is $\sigma(e^{+}e^{-}\rightarrow\tau^{+}\tau^{-})=(0.919\pm 0.003)$ nb [9], so the data sample corresponds to $\sim 4\times 10^{8}$ produced $\tau$-pairs, before applying any reconstruction or selection criteria.

### V.1 Experimental Strategy

This analysis used the approach proposed in Ref. [44], the key principle being that if an HNL is produced in $\tau$ decay, the kinematics of the visible particles would be modified with respect to SM $\tau$ decay with a massless neutrino. This search studies the 3-prong, pionic $\tau$ decay, which gives access to the region 300$<m_{4}<$1360 MeV$/c^{2}$, where current limits are less stringent. Denoting the three charged pions as a hadronic system $h^{-}$, the decay can be treated as a two-body decay:

$\tau^{-}\rightarrow h^{-}(E_{h},\vec{p}_{h})+\nu(E_{\nu},\vec{p}_{\nu}),$ (4)

where $\nu$ describes the outgoing neutrino state. An analogous equation could be written for the $\tau^{+}$ channel. The allowed phase space of the reconstructed energy, $E_{h}$, and invariant mass, $m_{h}$, of the hadronic system would vary as functions of the mass of the neutrino. As the HNL gets heavier the proportion of the original $\tau$-lepton’s energy going to the visible pions diminishes. A visualization is presented in Fig. 2.

Figure 2: Energy and invariant mass of the hadronic system, as fractions of those of the incoming $\tau$, for the cases $m_{4}$ = 0, 500, and 1000 MeV/$c^{2}$. In the center-of-mass frame the $\tau$-lepton energy is assumed to be $\sqrt{s}/2$ when there is no initial state radiation.
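The effect illustrated in Fig. 2 can be reproduced with a few lines of generic two-body kinematics. The sketch below (Python) is only an illustration under simplifying assumptions (it works in the $\tau$ rest frame and ignores the boost to the center-of-mass frame) and is not taken from the BABAR analysis code; the $\tau$ and pion masses are standard values.

```python
import numpy as np

M_TAU = 1.77686   # GeV/c^2
M_PI  = 0.13957   # GeV/c^2 (charged pion)

def hadronic_energy_rest_frame(m_h, m_4):
    """Energy of the hadronic system in the tau rest frame for
    tau -> h + N, with hadronic mass m_h and neutrino mass m_4 (GeV)."""
    return (M_TAU**2 + m_h**2 - m_4**2) / (2.0 * M_TAU)

for m_4 in (0.0, 0.5, 1.0):                      # HNL masses as in Fig. 2
    m_h_max = M_TAU - m_4                        # upper edge of the m_h window
    m_h = np.linspace(3 * M_PI, m_h_max, 5)      # allowed hadronic masses
    frac = hadronic_energy_rest_frame(m_h, m_4) / M_TAU
    print(f"m_4 = {m_4:.1f} GeV: E_h/m_tau ranges from "
          f"{frac.min():.3f} to {frac.max():.3f}")
```

Even this crude rest-frame estimate shows the visible energy fraction being pushed down and the allowed $m_{h}$ window shrinking as $m_{4}$ grows, which is the separation the template fit exploits.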
Since the direction of the decaying $\tau$-lepton is not known, we cannot compute the neutrino mass directly, but we know that $E_{h}$ must fall between two extremes that define the kinematically allowed values:

$E_{\tau}-\sqrt{m^{2}_{4}+q_{+}^{2}}<E_{h}<E_{\tau}-\sqrt{m^{2}_{4}+q_{-}^{2}},$ (5)

where

$q_{\pm}=\frac{m_{\tau}}{2}\bigg{(}\frac{m_{h}^{2}-m_{\tau}^{2}-m_{4}^{2}}{m_{\tau}^{2}}\bigg{)}\sqrt{\frac{E_{\tau}^{2}}{m_{\tau}^{2}}-1}\pm\frac{E_{\tau}}{2}\sqrt{\big{(}1-\frac{(m_{h}+m_{4})^{2}}{m_{\tau}^{2}}\big{)}\big{(}1-\frac{(m_{h}-m_{4})^{2}}{m_{\tau}^{2}}\big{)}};$ (6)

and $3m_{\pi^{\pm}}<m_{h}<m_{\tau}-m_{4}$. As the HNL mass increases, the allowed phase space of the visible system is reduced in the ($E_{h}$, $m_{h}$) plane. An HNL signal is sought by comparing the observed event yield density in the ($m_{h},E_{h}$) plane to a set of 2D template histograms for the background, obtained by simulating all known $\tau$ decays as well as non-$\tau$ background events, and for the potential HNL signal at different $m_{4}$ mass values. Although the invariant mass and outgoing hadronic energy ($m_{h}$ and $E_{h}$) are correlated, more information can be extracted by considering both variables simultaneously.

### V.2 Signal and Background Simulations

Three potential sources of background are considered:

1. $\tau^{-}\rightarrow\pi^{-}\pi^{-}\pi^{+}\nu_{\tau}$, with an outgoing SM neutrino;
2. other SM $\tau$ decays that have been misidentified as the 3-prong (3 charged pion) decay;
3. non-$\tau$ backgrounds that have been misidentified as the 3-prong decay.

All SM background yields are estimated from Monte Carlo (MC) simulations which are passed through the same reconstruction and digitization routines as the data. All $\tau$-pair events are simulated using the KK2F [45] generator and TAUOLA [46], which uses the averaged experimentally measured $\tau$ branching fractions as listed in Ref. [9]. Several non-$\tau$ backgrounds are also studied, including:

* $e^{+}e^{-}\rightarrow\Upsilon(4S)\rightarrow B^{+}B^{-}$ and $e^{+}e^{-}\rightarrow\Upsilon(4S)\rightarrow B^{0}\bar{B}^{0}$, which are simulated using EvtGen [47];
* $e^{+}e^{-}\rightarrow u\bar{u},d\bar{d},s\bar{s}$ and $e^{+}e^{-}\rightarrow c\bar{c}$, which are simulated using JETSET [48, 49];
* $e^{+}e^{-}\rightarrow\mu^{+}\mu^{-}(\gamma)$, which are simulated using KK2F [50].

A total of 26 signal samples were simulated, covering HNL masses across the range 100 MeV/$c^{2}$ $<m_{4}<$ 1300 MeV/$c^{2}$ in 100 MeV/$c^{2}$ increments. For each of these HNL masses, both a $\tau^{+}$ and a $\tau^{-}$ signal channel were simulated. Signal samples were produced within the BABAR software environment using KK2F and TAUOLA by changing the value of the outgoing neutrino mass in TAUOLA. The generated signal was passed through the same digitization and reconstruction model as the SM background and data samples.

### V.3 Analysis Procedure

A binned likelihood approach is taken in which it is assumed that the contents of a given bin, $i,j$, in the $(m_{h},E_{h})$ data histogram follow a Poisson distribution and may contain events emanating from any of the SM background processes, as well as potential HNL signal events.
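Before writing the likelihood out in full, the structure of the per-bin model just described can be sketched as follows (Python; the array names and toy numbers are illustrative assumptions, not the BABAR implementation, and the nuisance-parameter constraint terms are omitted): each $(m_{h},E_{h})$ bin expects the HNL template scaled by $|U_{\tau 4}|^{2}$, the SM $\tau$ template scaled by $(1-|U_{\tau 4}|^{2})$, plus non-$\tau$ background, and the observed counts are compared to this expectation with Poisson probabilities.

```python
import numpy as np
from scipy.stats import poisson

def expected_counts(u2, n_tau_gen, p_hnl, p_tau_sm, n_bkg):
    """Per-bin expectation: HNL and SM-tau templates plus non-tau background.
    u2        -- the mixing parameter |U_tau4|^2
    n_tau_gen -- number of generated tau decays
    p_hnl, p_tau_sm -- 2D template probabilities per (m_h, E_h) bin
    n_bkg     -- expected non-tau background counts per bin"""
    return n_tau_gen * (u2 * p_hnl + (1.0 - u2) * p_tau_sm) + n_bkg

def neg_log_likelihood(u2, n_obs, n_tau_gen, p_hnl, p_tau_sm, n_bkg):
    """Binned Poisson negative log-likelihood (nuisance terms omitted)."""
    mu = expected_counts(u2, n_tau_gen, p_hnl, p_tau_sm, n_bkg)
    return -np.sum(poisson.logpmf(n_obs, mu))

# Toy example with a 2x2 grid of (m_h, E_h) bins.
p_hnl    = np.array([[0.4, 0.1], [0.3, 0.2]])   # HNL template (sums to 1)
p_tau_sm = np.array([[0.1, 0.4], [0.2, 0.3]])   # SM tau template (sums to 1)
n_bkg    = np.full((2, 2), 5.0)
n_obs    = np.array([[105, 410], [210, 310]])
for u2 in (0.0, 1e-3, 1e-2):
    nll = neg_log_likelihood(u2, n_obs, 1000.0, p_hnl, p_tau_sm, n_bkg)
    print(f"|U_tau4|^2 = {u2:g}: -ln L = {nll:.2f}")
```

Scanning $|U_{\tau 4}|^{2}$ and comparing such likelihood values is what the test statistic defined next formalizes.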
The likelihood to observe the selected candidates in all the $(m_{h},E_{h})$ bins is the product of the Poisson probability to observe the selected events in each bin:

$\displaystyle\mathcal{L}=\prod_{\text{charge}}^{+-}\bigg{(}\prod_{\text{channel}}^{e\mu}\bigg{(}\prod^{ij}_{\text{bin}}\bigg{(}\frac{1}{n_{\text{obs},ij}!}\bigg{[}N_{\tau,\text{gen}}\cdot|U_{\tau 4}|^{2}\cdot p_{\text{\tiny HNL},ij}+N_{\tau,\text{gen}}\cdot(1-|U_{\tau 4}|^{2})\cdot p_{\tau-\text{SM},ij}+n^{\text{reco}}_{BKG,ij}\bigg{]}^{(n_{\text{obs}})_{ij}}\times$ $\displaystyle\exp\bigg{[}-(N_{\tau,\text{gen}}\cdot|U_{\tau 4}|^{2}\cdot p_{\text{\tiny HNL},ij}+N_{\tau,\text{gen}}\cdot(1-|U_{\tau 4}|^{2})\cdot p_{\tau-\text{SM},ij}+n^{\text{reco}}_{BKG,ij})\bigg{]}\bigg{)}_{\text{bin}}\times\prod_{k}f(\theta_{k},\tilde{\theta}_{k})\bigg{)}_{\text{channel}}\bigg{)}_{\text{charge}},$ (7)

where $n_{\text{obs}}$ is the number of observed events in the bin $ij$, $N_{\tau,\text{gen}}$ is the number of generated $\tau$’s, $p_{\text{HNL}(\tau-\text{SM}),ij}$ is the probability of a reconstructed event being in a given bin in the HNL ($\tau-\text{SM}$) 2D template and $n^{\text{reco}}_{BKG,ij}$ is the expected number of non-$\tau$ background events. The final product runs over a set of Gaussian nuisance-parameter constraints. The expression involves a product over all bins, $ij$, over the two 1-prong channels, and over both $\tau$-lepton charges ($\pm$). A test statistic, $q$, can be defined as:

$q=-2\ln\bigg{(}\frac{\mathcal{L}_{H_{0}}(|U_{\tau 4}|_{0}^{2};\hat{\hat{\theta}}_{0},\text{data})}{\mathcal{L}_{H_{1}}(|\hat{U}_{\tau 4}|^{2};\hat{\theta},\text{data})}\bigg{)}=-2\ln(\Delta\mathcal{L}),$ (8)

where $\mathcal{L}$ in both the numerator and denominator denotes the maximized likelihood in the two cases. The denominator is the maximized (unconditional) likelihood giving the maximum likelihood estimator of $|U_{\tau 4}|^{2}$ and the set of nuisance parameters ($\hat{\theta}$); $\hat{\theta}$ is the vector of nuisance parameters that maximizes the likelihood. In the numerator the nuisance parameters are maximized for a given value of $|U_{\tau 4}|^{2}$, i.e. it is the conditional maximum likelihood. The likelihood ratio is consequently a function of $|U_{\tau 4}|^{2}$ through the numerator. It must be noted that the numerator denotes the hypothesis for any given value of $|U_{\tau 4}|^{2}$ (including the background-only case, i.e. $|U_{\tau 4}|^{2}=0$). Reference [51] provides more details on likelihood-based tests. The analysis uses this test statistic to determine the value of $|U_{\tau 4}|^{2}$ that can be excluded at the 95$\%$ confidence level.

### V.4 Uncertainties

Table 1 lists the relative contribution of each of the systematic uncertainties to the normalization. These are parameterized as Gaussian nuisance parameters.

Table 1: Systematic uncertainty contribution to the event yield (in $\%$) from each source, based on comparisons between MC simulations and data.

| Uncertainty | Yield Change ($\pm$) |
|---|---|
| Luminosity | $0.44\%$ |
| $\sigma(ee\rightarrow\tau\tau)$ | $0.31\%$ |
| Branching Fractions (1 prong) | $e$: 0.22$\%$, $\mu$: 0.22$\%$ |
| Branching Fractions (3 prong) | 3$\pi$: 0.57$\%$ |
| PID Efficiency | $e$: 2$\%$, $\mu$: 1$\%$, $\pi$: 3$\%$ |
| Bhabha Contamination | 0.2$\%$ |
| $q\bar{q}$ Contamination (data) | 0.1$\%$ |
| Tracking Efficiency | negligible |
| Detector Modeling | negligible |
| Beam Energy | negligible |
| Tau Mass | negligible |

In addition to these yield uncertainties, which affect all bins uniformly, “shape” uncertainties must also be accounted for. These come from deficiencies in the MC modelling.
For many hadronic $\tau$ decay channels the relative uncertainties from experimental results are large. A $\tau$-lepton decay to three charged pions is mediated by the $a_{1}(1260)$ resonance, which decays through the intermediate $\rho\pi$ state. In the MC samples used in this analysis the PDG [9] average of $m_{a_{1}}=1230\pm 40$ MeV/$c^{2}$ and a Breit-Wigner averaged width of $\Gamma_{a_{1}}=420\pm 35$ MeV/$c^{2}$ are used; Ref. [9] quotes the estimated width to be between 250 and 600 MeV/$c^{2}$. The uncertainty associated with the $a_{1}$ resonance represents the dominant contribution to the systematic error in our measurement. In order to understand the effect of the uncertainty on the $a_{1}$ mass on the final results of this analysis, several additional MC simulations were built in which $m_{a_{1}}$ was varied by $\pm 1\sigma$ about the experimental average (where $\sigma=40$ MeV/$c^{2}$).

### V.5 Results

Figure 3 shows the upper limit at the 95$\%$ confidence level provided by this analysis using the described binned likelihood technique. The magenta line represents the upper limit when all systematic uncertainties are considered. To characterize deviations due to the uncertainty on $\Gamma_{a_{1}}$, the more conservative PDG estimates are used. The dominant systematic uncertainty is, by far, that due to the assumptions made within our simulation, the main contributions being the uncertainty in the intermediate resonances for the $\tau$ 3-prong channel and in the dominant $\tau$ backgrounds. The relative systematic uncertainty decreases as the mass of the hypothetical HNL increases; this is expected, since the effects of the modeling uncertainty become less apparent at higher HNL masses.

Figure 3: Upper limits at 95$\%$ C.L. on $|U_{\tau 4}|^{2}$. The magenta line represents the result when uncertainties are included. The magenta line is expected to be a very conservative upper limit.

## VI Future projections

References [2, 3] provide comprehensive reviews of projections for near- and far-term projects. Figure 1 shows near-term projections (unfilled lines) for two experiments expected to publish data in the next few years. The new limit from BABAR is competitive with both the projected limits from FASER (at 150 fb$^{-1}$) and NA62 (at $10^{18}$ POT). The BABAR technique is also applicable to data at Belle-II and elsewhere. There are several other experiments planned for the next decade that will also improve limits on mixing to all three active neutrinos. The SHiP experiment [52] aims to make improvements in all three parameter spaces, with projected limits down to $10^{-9}$ in the electron and muon sectors and $10^{-7}$ in the tau sector, in the few GeV/$c^{2}$ region. Other limits are expected from DUNE [53], CODEX-b [54] and MATHUSLA [55]. Looking much further ahead, the FCC-ee could provide very powerful limits, down to $\mathcal{O}(10^{-9}-10^{-12})$ [56, 57] for HNL masses of 5-80 GeV/$c^{2}$.

## VII Conclusions

To conclude, the existence of Heavy Neutral Leptons can provide solutions to many issues which exist within the Standard Model. Depending on the mass and coupling of the HNL to active neutrinos, these new particles can be produced in a range of channels. Consequently the experimental program searching for them extends well beyond the neutrino sector. This article has documented recent results from searches at neutrino experiments, beam-dumps and collider-based experiments. This includes a new upper limit on $|U_{\tau 4}|^{2}$ set by BABAR. The technique presented in Sec.
V can also be applied to future searches at Belle-II. The results presented are competitive with projections for experiments coming online in the next few years. The next decade brings with it new high-intensity searches for HNLs coupling to all three active neutrinos; many experiments will be able to access mass scales which are predicted by well-motivated models. There is no denying that we sit on the precipice of a very interesting time in the search for these elusive, beyond-SM particles.

###### Acknowledgements.
We are grateful for the extraordinary contributions of our PEP-II colleagues in achieving the excellent luminosity and machine conditions that have made this work possible. The success of this project also relies critically on the expertise and dedication of the computing organizations that support BABAR. The collaborating institutions wish to thank SLAC for its support and the kind hospitality extended to them. This work is supported by the US Department of Energy and National Science Foundation, the Natural Sciences and Engineering Research Council (Canada), Institute of High Energy Physics (China), the Commissariat à l’Energie Atomique and Institut National de Physique Nucleaire et de Physique des Particules (France), the Bundesministerium für Bildung und Forschung and Deutsche Forschungsgemeinschaft (Germany), the Istituto Nazionale di Fisica Nucleare (Italy), the Foundation for Fundamental Research on Matter (The Netherlands), the Research Council of Norway, the Ministry of Science and Technology of the Russian Federation, and the Particle Physics and Astronomy Research Council (United Kingdom). Individuals have received support from CONACYT (Mexico), the A. P. Sloan Foundation, the Research Corporation, and the Alexander von Humboldt Foundation.

## References

* Mohapatra and Senjanovic [1981a] R. N. Mohapatra and G. Senjanovic, Phys. Rev. D 23, 165 (1981a).
* Beacham et al. [2019] J. Beacham et al., Journal of Physics G: Nuc. and Part. Phys. 47, 010501 (2019), ISSN 1361-6471, URL http://dx.doi.org/10.1088/1361-6471/ab4cd2.
* Abdullahi et al. [2022] A. M. Abdullahi et al., in _2022 Snowmass Summer Study_ (2022), eprint 2203.08039.
* Hanneke et al. [2008] D. Hanneke, S. Fogwell, and G. Gabrielse, Phys. Rev. Lett. 100, 120801 (2008), URL https://link.aps.org/doi/10.1103/PhysRevLett.100.120801.
* Asaka and Shaposhnikov [2005a] T. Asaka and M. Shaposhnikov, Phys. Lett. B 620, 17–26 (2005a), ISSN 0370-2693, URL http://dx.doi.org/10.1016/j.physletb.2005.06.020.
* Asaka et al. [2005] T. Asaka, S. Blanchet, and M. Shaposhnikov, Phys. Lett. B 631, 151–156 (2005), ISSN 0370-2693, URL http://dx.doi.org/10.1016/j.physletb.2005.09.070.
* Asaka and Shaposhnikov [2005b] T. Asaka and M. Shaposhnikov, Phys. Lett. B 620, 17–26 (2005b), ISSN 0370-2693, URL http://dx.doi.org/10.1016/j.physletb.2005.06.020.
* Boyarsky et al. [2009] A. Boyarsky, O. Ruchayskiy, and M. Shaposhnikov, Annual Review of Nuclear and Particle Science 59, 191–214 (2009), ISSN 1545-4134, URL http://dx.doi.org/10.1146/annurev.nucl.010909.083654.
* Zyla et al. [2020] P. Zyla et al. (Particle Data Group), Prog. Theor. Exp. Phys. p. 083C01 (2020).
* Yanagida [1979] T. Yanagida, Conf. Proc. C 7902131, 95 (1979).
* Mohapatra and Senjanovic [1980] R. N. Mohapatra and G. Senjanovic, Phys. Rev. Lett. 44, 912 (1980).
* Gell-Mann et al. [1979] M. Gell-Mann, P. Ramond, and R. Slansky, Conf. Proc. C 790927, 315 (1979), eprint 1306.4669.
* Schechter and Valle [1980] J. Schechter and J. W. F. Valle, Phys. Rev. D 22, 2227 (1980).
* Abela et al. [1981] R. Abela, M. Daum, G. H. Eaton, R. Frosch, B. Jost, P. R. Kettle, and E. Steiner, Phys. Lett. B 105, 263 (1981), [Erratum: Phys.Lett.B 106, 513 (1981)]. * Minehart et al. [1984] R. C. Minehart, K. O. H. Ziock, R. Marshall, W. A. Stephens, M. Daum, B. Jost, and P. R. Kettle, Phys. Rev. Lett. 52, 804 (1984). * Cheng and Li [1980] T. P. Cheng and L.-F. Li, Phys. Rev. D 22, 2860 (1980). * Lazarides et al. [1981] G. Lazarides, Q. Shafi, and C. Wetterich, Nucl. Phys. B 181, 287 (1981). * Mohapatra and Senjanovic [1981b] R. N. Mohapatra and G. Senjanovic, Phys. Rev. D 23, 165 (1981b). * Canetti and Shaposhnikov [2010a] L. Canetti and M. Shaposhnikov, Journal of Cosmology and Astroparticle Physics 2010, 001 (2010a), URL https://doi.org/10.1088%2F1475-7516%2F2010%2F09%2F001. * Ruchayskiy and Ivashko [2012a] O. Ruchayskiy and A. Ivashko, JCAP 10, 014 (2012a), eprint 1202.2841. * Palazzo [2013] A. Palazzo, Mod. Phys. Lett. A 28, 1330004 (2013), URL https://doi.org/10.1142%2Fs0217732313300048. * Kaether et al. [2010] F. Kaether, W. Hampel, G. Heusser, J. Kiko, and T. Kirsten, Phys. Lett. B 685, 47 (2010), eprint 1001.2731. * Abdurashitov and Gothers [2009] J. N. Abdurashitov and Gothers (SAGE Collaboration), Phys. Rev. C 80, 015807 (2009), URL https://link.aps.org/doi/10.1103/PhysRevC.80.015807. * Mention et al. [2011] G. Mention et al., Phys. Rev. D 83, 073006 (2011), URL https://link.aps.org/doi/10.1103/PhysRevD.83.073006. * Aguilar et al. [2001] A. Aguilar et al. (LSND Collaboration), Phys. Rev. D 64, 112007 (2001), URL https://link.aps.org/doi/10.1103/PhysRevD.64.112007. * Aguilar-Arevalo et al. [2013] A. A. Aguilar-Arevalo et al. (MiniBooNE Collaboration), Phys. Rev. Lett. 110, 161801 (2013), URL https://link.aps.org/doi/10.1103/PhysRevLett.110.161801. * Bernardi et al. [1988] G. Bernardi et al., Phys. Lett. B 203, 332 (1988). * Capone [1985] A. Capone (CHARM), in _20th Rencontres de Moriond: Electroweak Interactions_ (1985), pp. 295–302. * Liventsev et al. [2013] D. Liventsev et al. (Belle), Phys. Rev. D 87, 071102 (2013), [Erratum: Phys.Rev.D 95, 099903 (2017)], eprint 1301.1105. * Abreu et al. [1997] P. Abreu et al. (DELPHI), Z. Phys. C 74, 57 (1997), [Erratum: Z.Phys.C 75, 580 (1997)]. * Sirunyan et al. [2018] A. M. Sirunyan et al. (CMS), Phys. Rev. Lett. 120, 221801 (2018), eprint 1802.02965. * Ruchayskiy and Ivashko [2012b] O. Ruchayskiy and A. Ivashko, JCAP 10, 014 (2012b), eprint 1202.2841. * Canetti and Shaposhnikov [2010b] L. Canetti and M. Shaposhnikov, JCAP 09, 001 (2010b), eprint 1006.0133. * Vaitaitis et al. [1999] A. Vaitaitis et al. (NuTeV Collaboration), Phys. Rev. Lett. 83, 4943 (1999). * Artamonov et al. [2015] A. V. Artamonov et al. (E949 Collaboration), Phys. Rev. D 91, 052001 (2015), URL https://link.aps.org/doi/10.1103/PhysRevD.91.052001. * Orloff et al. [2002] J. Orloff, A. Rozanov, and C. Santoni, Phys. Lett. B 550, 8–15 (2002), ISSN 0370-2693, URL http://dx.doi.org/10.1016/S0370-2693(02)02769-7. * Astier et al. [2001] P. Astier et al. (NOMAD Collaboration), Phys. Lett. B 506, 27–38 (2001), ISSN 0370-2693, URL http://dx.doi.org/10.1016/S0370-2693(01)00362-8. * Abratenko et al. [2021] P. Abratenko et al. (MicroBooNE) (2021), eprint 2110.14054. * Cortina Gil et al. [2021] E. Cortina Gil et al. (NA62), Phys. Lett. B 816, 136259 (2021), eprint 2101.12304. * Aad et al. [2019] G. Aad et al. (ATLAS), JHEP 10, 265 (2019), eprint 1905.09787. * Tumasyan et al. [2022] A. Tumasyan et al. (CMS) (2022), eprint 2201.05578. * [42] _Not yet published_. 
* Aubert et al. [2013] B. Aubert et al. (BaBar Collaboration), Nucl. Instrum. Meth. A 729, 615 (2013). * Kobach and Dobbs [2015] A. Kobach and S. Dobbs, Phys. Rev. D 91, 053006 (2015), URL https://link.aps.org/doi/10.1103/PhysRevD.91.053006. * Jadach et al. [2000] S. Jadach, B. F. L. Ward, and Z. Was, Comput. Phys. Commun. 130, 260 (2000), eprint hep-ph/9912214. * Was and Jadach [1992] Z. Was and S. Jadach, in _26th International Conference on High-energy Physics_ (1992), pp. 1777–1780. * Lange [2001] D. J. Lange, Nucl. Instrum. Meth. A 462, 152 (2001). * Sjöstrand [1986] T. Sjöstrand, Comp. Phys. Commu. 39, 347 (1986), ISSN 0010-4655, URL https://www.sciencedirect.com/science/article/pii/0010465586900962. * Sjöstrand and Bengtsson [1987] T. Sjöstrand and M. Bengtsson, Comp. Phys. Commu. 43, 367 (1987), ISSN 0010-4655, URL https://www.sciencedirect.com/science/article/pii/0010465587900543. * Ward et al. [2003] B. Ward, S. Jadach, and Z. Was, Nuc. Phys. B - Proceedings Supplements 116, 73–77 (2003), ISSN 0920-5632, URL http://dx.doi.org/10.1016/S0920-5632(03)80147-0. * Cowan et al. [2011] G. Cowan, K. Cranmer, E. Gross, and O. Vitells, Eur. Phys. J. C 71, 1554 (2011), [Erratum: Eur.Phys.J.C 73, 2501 (2013)], eprint 1007.1727. * Gorbunov et al. [2020] D. Gorbunov, I. Krasnov, Y. Kudenko, and S. Suvorov, Phys. Lett. B 810, 135817 (2020), eprint 2004.07974. * Carbajal and Gago [2022] S. Carbajal and A. M. Gago (2022), eprint 2202.09217. * Gligorov et al. [2018] V. V. Gligorov, S. Knapen, M. Papucci, and D. J. Robinson, Phys. Rev. D 97, 015023 (2018), URL https://link.aps.org/doi/10.1103/PhysRevD.97.015023. * Lubatti et al. [2020] H. Lubatti et al. (MATHUSLA), JINST 15, C06026 (2020), eprint 1901.04040. * Blondel and Janot [2022] A. Blondel and P. Janot, Eur. Phys. J. Plus 137 (2022). * Shen et al. [2022] Y.-F. Shen, J.-N. Ding, and Q. Qin, Eur. Phys. J. C 82, 398 (2022), eprint 2201.05831.
# Height filtrations and base loci on flag bundles over a curve

Yangyu Fan Academy for Multidisciplinary Studies, Beijing National Center for Applied Mathematics, Capital Normal University, Beijing, 100048, People’s Republic of China<EMAIL_ADDRESS>, Wenbin Luo Beijing International Center for Mathematical Research, Peking University, Beijing 100871, China <EMAIL_ADDRESS>and Binggang Qu Beijing International Center for Mathematical Research, Peking University, Beijing 100871, China <EMAIL_ADDRESS>

###### Abstract.

Let $k$ be an algebraically closed field of characteristic zero. Let $C/k$ be a projective smooth curve with function field $K=k(C)$ and $G/k$ be a connected reductive group. Let $F$ be a principal $G$-bundle on $C$. Let $P\subseteq G$ be a parabolic subgroup and $\lambda:P\longrightarrow\mathbb{G}_{m}$ be a strictly anti-dominant character. Then $F/P\longrightarrow C$ is a flag bundle and $\mathcal{L}_{\lambda}=F\times_{P}k_{\lambda}$ on $F/P$ is a relatively ample line bundle. Let $h_{\mathcal{L}_{\lambda}}$ be the height function on $X=(F/P)_{K}$ induced by $\mathcal{L}_{\lambda}$. We compute its height filtration and successive minima: the height filtration is given by a Bruhat decomposition and the successive minima are given by slopes. An interesting application is that, in this case, Zhang’s inequality on successive minima can be enhanced into an equality. This is proved from the convex geometry point of view. Let $f\in N^{1}(F/P)$ be the numerical class of a vertical fiber. We also compute the augmented base loci $\mathrm{B}_{+}(\mathcal{L}_{\lambda}-tf)$ for any $t\in\mathbb{R}$ and it turns out that they are almost the same as the height filtration, which accords with the philosophy in Arakelov geometry that the positivity of the line bundle $\mathcal{L}_{\lambda}$ is equivalent to the positivity of its induced height function $h_{\mathcal{L}_{\lambda}}$. As a corollary, we compute the $k$-th movable cones of flag bundles over curves for all $k$.

###### Contents

1. Introduction
2. Height filtrations and successive minima
3. Reinforcement of Zhang’s inequality into an equality
4. Base loci and movable cones

## 1. Introduction

The main purpose of this paper is to give an explicit description of the height function and its related topics on a flag bundle over a curve. In this paper, all fields are of characteristic zero and all curves are projective smooth. By variety we mean an integral separated scheme of finite type over a field.

### 1.1. Height filtration and successive minima

We start by giving a reminder on height functions in Arakelov theory and the relevant history. Let $K$ be either a number field or $K=k(C)$ where $C$ is a projective smooth curve over a field $k$. Let $X/K$ be a projective variety of dimension $d$ and $\overline{L}$ be an adelic line bundle on $X$. These data induce an Arakelov height function $h_{\overline{L}}$ on $X$ ([27], see also [25, §9] for a survey). A typical case is the geometric height, which is the one we are concerned with in this article. Here we give a definition. If $K=k(C)$, let $\mathcal{X}\longrightarrow C$ be a projective flat morphism with generic fiber $X\longrightarrow\operatorname{Spec}(K)$ and $\mathcal{L}$ be a line bundle on $\mathcal{X}$ with $\mathcal{L}_{K}\cong L$.
The data $(\mathcal{X},\mathcal{L})$ define an adelic line bundle $\overline{L}$ and the height function $h_{\overline{L}}$ is given by

$\displaystyle h_{\overline{L}}:X(\overline{K})\longrightarrow\mathbb{Q},\;x\longmapsto\frac{\mathcal{L}\cdot\overline{\\{x\\}}}{{\mathrm{deg}}(x)}\;\;\text{where $\overline{\\{x\\}}$ is the closure of $x$ in $\mathcal{X}$}.$

We also denote this by $h_{\mathcal{L}}$ if there is no ambiguity. If $K$ is a number field, the height function can be defined similarly by arithmetic intersection theory. Assume $L$ is ample. Let $Z_{t}(X,h_{\overline{L}})=\text{Zariski closure of}\;\big{\\{}x\in X(\overline{K}):h_{\overline{L}}(x)<t\big{\\}}$. Note that

* $t\longmapsto Z_{t}(X,h_{\overline{L}})$ is an increasing filtration of Zariski closed subsets.
* $Z_{t}(X,h_{\overline{L}})=X$ when $t\gg 0$ and $Z_{t}(X,h_{\overline{L}})=\emptyset$ when $t\ll 0$.

Since the Zariski topology is Noetherian, the filtration $Z_{t}(X,h_{\overline{L}})$ gives a finite filtration $X_{0}=X\supsetneq X_{1}\supsetneq X_{2}\supsetneq\cdots\supsetneq X_{r}=\emptyset$, the _height filtration_. Its jumping points $\zeta_{i}(X,h_{\overline{L}})=\inf\big{\\{}t:Z_{t}(X,h_{\overline{L}})=X_{i-1}\big{\\}}$ are called the _successive minima_. Note that our definition of successive minima is slightly different from Zhang's [27]: Zhang considers only dimension jumps $e_{i}(X,h_{\overline{L}})=\inf\big{\\{}t:\dim Z_{t}(X,h_{\overline{L}})\geq d-i+1\big{\\}}$, $i=1,\dots,d+1$. In the following, $e_{i}(X,h_{\overline{L}})$ will be called _Zhang’s successive minima_ to avoid ambiguity. The first minimum $\zeta_{1}(X,h_{\overline{L}})=e_{1}(X,h_{\overline{L}})=\inf\big{\\{}t:Z_{t}(X,h_{\overline{L}})=X\big{\\}}$ is of particular interest; it is called the _essential minimum_ and is also denoted by $\zeta_{\text{ess}}(X,h_{\overline{L}})$. This invariant is the protagonist of the story of the Bogomolov conjecture, which in the number field case says that a closed subvariety $X$ of an abelian variety $A$ has essential minimum zero with respect to the Néron-Tate height on $A$ if and only if it is the translation of an abelian subvariety by a torsion point; we refer the reader to [26, 24] for more details. The height filtration induced by the Néron-Tate height is $A\supsetneq\emptyset$ with the unique jumping point $\zeta=0$. This allows the equidistribution of small points to happen, and consequently implies the Bogomolov conjecture. The height filtration (of toric height functions) on toric varieties is completely determined in [10]. An important example is on the projective space $\mathbb{P}^{n}_{\mathbb{Q}}$. Consider the height function induced by the universal bundle $\mathcal{O}_{\mathbb{P}^{n}_{\mathbb{Z}}}(1)$ equipped with the Fubini-Study metric $\|\cdot\|_{\text{FS}}$. The height filtration is given by successively deleting torus orbits, that is,

$\mathbb{P}^{n}_{\mathbb{Q}}\supsetneq\bigcup_{i}\big{\\{}x_{i}=0\big{\\}}\supsetneq\bigcup_{i,j}\big{\\{}x_{i}=0,x_{j}=0\big{\\}}\supsetneq\cdots\supsetneq\bigcup_{i}\big{\\{}x_{i}\neq 0\text{ and $x_{j}=0$ for all $j\neq i$}\big{\\}}\supsetneq\emptyset$

with successive minima $\frac{1}{2}\log(k+1)$, $k=0,1,\dots,n$. In this article, we consider another class of varieties with group actions: flag varieties. Let $k$ be an algebraically closed field, $G/k$ be a connected reductive group, $C/k$ be a curve, and $F$ be a principal $G$-bundle on $C$ with canonical reduction $F_{Q}$ to a parabolic subgroup $Q$. This defines a linear functional ${\mathrm{deg}}(F_{Q})$ on $X(T)_{\mathbb{Q}}$.
Let $P\subseteq G$ be a parabolic subgroup and $\lambda:P\longrightarrow\mathbb{G}_{m}$ be a strictly anti-dominant character. Then $F/P\longrightarrow C$ is a flag bundle and $\mathcal{L}_{\lambda}=F\times_{P}k_{\lambda}$ on $F/P$ is a relatively ample line bundle. Let $X=(F/P)_{K}$ and $h_{\mathcal{L}_{\lambda}}$ be the height function on $X$ induced by $(F/P,\mathcal{L}_{\lambda})$. We compute its height filtration and successive minima.

###### Theorem A (Theorem 2.11). The height filtration of $h_{\mathcal{L}_{\lambda}}$ on $X$ is given by successively deleting Schubert cells $C_{w}=(F_{Q}\times_{Q}QwP/P)_{K}$ for $w\in W_{Q}\backslash W/W_{P}$, i.e.

$\displaystyle Z_{t}=X\Bigg{\backslash}\coprod_{\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle\geq t}C_{w}=\coprod_{\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle<t}C_{w}.$

In particular, the successive minima are $\zeta_{w}=\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle$ and Zhang’s successive minima are $e_{i}=\min\big{\\{}\zeta_{w}:\ell(w)=\dim G/P-i+1\big{\\}}$ where $\ell(w)=\max\limits_{\sigma\in W_{Q}wW_{P}}\min\limits_{\tau\in\sigma W_{P}}\ell(\tau)$.

### 1.2. Zhang’s inequality

Let $X/K$ be a projective variety of dimension $d$, and $\overline{L}$ be a relatively semipositive adelic line bundle with the underlying line bundle $L$ being ample. The height of $X$ is defined to be $h_{\overline{L}}(X)=\frac{\overline{L}^{d+1}}{(d+1)L^{d}}$. In [27], Zhang gives the following inequality:

$\displaystyle e_{1}\geq h_{\overline{L}}(X)\geq\frac{1}{d+1}\sum_{i=1}^{d+1}e_{i}.$

In the case of toric varieties, Zhang’s successive minima can be interpreted via convex geometry as shown in [9, 10]. Moreover, Zhang’s inequality can be proved directly [10, Theorem C] as an integral inequality of a convex function on a polytope. This can be partially generalized to arbitrary projective varieties by the theory of Okounkov bodies [19] and concave transforms [8, 2]. In this article, we carry this spirit over to the setting of flag bundles $F/P$. Given a strictly anti-dominant character $\lambda:P\longrightarrow\mathbb{G}_{m}$, one can associate to $\lambda$ a _string polytope_ $\Delta_{\lambda}$, a _weight polytope_ $\mathcal{P}_{\lambda}$ and a _weight map_ $p:\Delta_{\lambda}\longrightarrow\mathcal{P}_{\lambda}$. Let $M_{\lambda}=G\times_{P}k_{\lambda}$ be the line bundle on $G/P$ induced by $\lambda$. Then the integral points in $\Delta_{\lambda}$ parametrize a weight basis of $H^{0}(G/P,M_{\lambda})$ and $p:\Delta_{\lambda}\longrightarrow\mathcal{P}_{\lambda}$ sends an integral point to the weight of the vector parametrized by it [18, §2.2]. Since the weight polytope is the convex hull of the Weyl orbit $W\lambda$, it follows immediately from Theorem A that the successive minima of $h_{\mathcal{L}_{\lambda}}$ are the values of ${\mathrm{deg}}(F_{Q})$ at the vertices of $\mathcal{P}_{\lambda}$. We will also prove

$\displaystyle h_{\mathcal{L}_{\lambda}}(X)=\frac{1}{\operatorname{Vol}(\Delta_{\lambda})}\int_{\Delta_{\lambda}}{\mathrm{deg}}(F_{Q})\circ p\;\mathrm{d}\mu.$

An interesting application is that Zhang’s inequality in this case can be enhanced into an equality, in the sense that

###### Theorem B (Theorem 3.1). Let $q:W/W_{P}\longrightarrow W_{Q}\backslash W/W_{P}$ be the canonical projection and for $w\in W_{Q}\backslash W/W_{P}$ define $m_{w}=\\#q^{-1}(w)$.
Then

$\displaystyle h_{\mathcal{L}_{\lambda}}(X)=\frac{1}{\\#W/W_{P}}\sum_{w\in W_{Q}\backslash W/W_{P}}m_{w}\zeta_{w}.$

This follows from a simple computation of integrals on the string polytope: in fact for any linear functional $f:X(T)_{\mathbb{R}}\longrightarrow\mathbb{R}$, we have

$\displaystyle\frac{1}{\operatorname{Vol}(\Delta_{\lambda})}\int_{\Delta_{\lambda}}f\circ p\;\mathrm{d}\mu=\frac{1}{\\#W/W_{P}}\sum_{w\in W/W_{P}}f(w\lambda).$

We remark that it is certainly not possible to enhance Zhang’s inequality into an equality in general. Consider the height function on $\mathbb{P}^{n}_{\mathbb{Q}}$ induced by the Hermitian line bundle $\big{(}\mathcal{O}_{\mathbb{P}^{n}_{\mathbb{Z}}}(1),\|\cdot\|_{\text{FS}}\big{)}$. The variety height is $\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{i}\frac{1}{j}$ [7], which is a rational number, while the successive minima are the irrational numbers $\tfrac{1}{2}\log(k+1)$, $k=0,1,\dots,n$.

### 1.3. Base loci

It is a general philosophy in Arakelov geometry that the positivity of $\overline{L}$ is equivalent to the positivity of its induced height function $h_{\overline{L}}$. The study of the essential minimum and pseudo-effectivity provides evidence for this [10, 2, 22]. Note that if the underlying line bundle $L$ is big, then $\overline{L}$ is big $\Longleftrightarrow$ $\zeta_{\mathrm{ess}}(h_{\overline{L}})>0$. On the other hand, the theory of augmented base loci $\mathrm{B}_{+}(\cdot)$ is well-developed [13, 14] in the study of positivity in algebraic geometry. Now back to the case $\mathcal{X}=F/P$: let $f$ be the line bundle given by a vertical fiber. Recall that it is a classical result that $\mathrm{B}_{+}(\mathcal{L}_{\lambda}-tf)$ is the union of closed subvarieties over which the restricted volume is $0$ (see §4.1). In particular, if $\mathcal{Y}\subseteq\mathcal{X}$ is a closed subvariety such that $(\mathcal{L}_{\lambda}-tf)|_{\mathcal{Y}}$ is not big on $\mathcal{Y}$, then $\mathcal{Y}\subseteq\mathrm{B}_{+}(\mathcal{L}_{\lambda}-tf)$. For $w\in W_{Q}\backslash W/W_{P}$ such that $\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle\leq t$, our calculation of $h_{\mathcal{L}_{\lambda}}$ gives that the essential minimum $\zeta_{\mathrm{ess}}(h_{(\mathcal{L}_{\lambda}-tf)|_{\mathcal{X}_{w}}})$ on the Schubert variety $\mathcal{X}_{w}=F_{Q}\times_{Q}\overline{QwP}/P$ is no bigger than $0$, hence $(\mathcal{L}_{\lambda}-tf)|_{\mathcal{X}_{w}}$ is not big by the arithmetic bigness criterion mentioned above. We thus obtain that

$\displaystyle\bigcup_{\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle\leq t}\mathcal{X}_{w}\subseteq\bigcup_{\mathcal{Y}\subseteq\mathcal{X},(\mathcal{L}_{\lambda}-tf)|_{\mathcal{Y}}\text{ not big}}\mathcal{Y}\subseteq\mathrm{B}_{+}(\mathcal{L}_{\lambda}-tf)$

where $\mathcal{Y}$ runs over all closed subvarieties of $\mathcal{X}$. Moreover, we prove that both inclusions are actually equalities:

###### Theorem C (Theorem 4.6). The augmented base locus of $\mathcal{L}_{\lambda}-tf$ is given by

$\displaystyle\mathrm{B}_{+}(\mathcal{L}_{\lambda}-tf)=\mathcal{X}\Bigg{\backslash}\coprod_{\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle>t}\mathcal{C}_{w}=\coprod_{\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle\leq t}\mathcal{C}_{w}$

where $\mathcal{C}_{w}=F_{Q}\times_{Q}QwP/P$.
In particular, comparing this with Theorem A, we see that $\mathrm{B}_{+}(\mathcal{L}_{\lambda}-tf)_{K}=Z_{t}$ for all $t\in\mathbb{R}$, except for the critical points $t=\zeta_{w}$ at which $\mathrm{B}_{+}(\mathcal{L}_{\lambda}-\zeta_{w}f)$ is upper continuous while $Z_{\zeta_{w}}$ is lower continuous.

### 1.4. Movable cones

Recall that for a projective variety $Y$ of dimension $d$, the $k$-th movable cones $\operatorname{Mov}^{k}(Y)$, $k=1,2,\dots,d$, are defined as

$\displaystyle\operatorname{Mov}^{k}(Y)=\big{\\{}\alpha\in N^{1}(Y)_{\mathbb{R}}:\mathrm{B}_{+}(\alpha)\text{ has codimension $\geq k$}\big{\\}}.$

$\operatorname{Mov}^{k}(Y)$ are open cones in $N^{1}(Y)_{\mathbb{R}}$ and we have obvious inclusions

$\displaystyle\operatorname{Mov}^{d}\subseteq\operatorname{Mov}^{d-1}\subseteq\cdots\subseteq\operatorname{Mov}^{1}.$

Note that $\operatorname{Mov}^{1}$ is the big cone and $\operatorname{Mov}^{d}$ is the ample cone. Let $k$ be an algebraically closed field, $C/k$ be a curve and $E$ be a vector bundle on $C$. Note that $N^{1}(\mathbb{P}(E))_{\mathbb{R}}$ is two-dimensional, generated by $f$ and $\xi=\mathcal{O}_{\mathbb{P}(E)}(1)$. The ample cone is given by the extremal rays $f$ and $\xi-\mu_{\min}(E)f$, and the big cone is given by the extremal rays $f$ and $\xi-\mu_{\max}(E)f$. The former is a theorem of Miyaoka [21] and the latter can be found in [15]. Fulger and Lehmann computed $\operatorname{Mov}^{2}(\mathbb{P}(E))$ in [16]. In [4], the big cone of the Grassmann bundle of $E$ is computed and in [6], the ample cone of flag bundles of $E$ is computed. As a direct consequence of Theorem C, we compute the $k$-th movable cones of flag bundles $F/P$ over a curve $C$ for all $k$. Since any element in $N^{1}(F/P)_{\mathbb{R}}$ can be written as $\mathcal{L}_{\lambda}-tf$, we may define two functions $\langle\alpha^{\vee},\cdot\rangle$ and $\langle{\mathrm{deg}}(F_{Q}),w\cdot\rangle$ on $N^{1}(F/P)_{\mathbb{R}}$ as

* $\langle\alpha^{\vee},\cdot\rangle$ sends $\mathcal{L}_{\lambda}-tf$ to $\langle\alpha^{\vee},\lambda\rangle$.
* $\langle{\mathrm{deg}}(F_{Q}),w\cdot\rangle$ sends $\mathcal{L}_{\lambda}-tf$ to $\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle-t$.

These functions are well-defined, and

###### Theorem D (Theorem 4.9). The $k$-th movable cone $\operatorname{Mov}^{k}(F/P)$ is the cone defined by $\langle\alpha^{\vee},\cdot\rangle<0$ for any $\alpha\in\Delta\backslash\Delta_{P}$ and $\langle{\mathrm{deg}}(F_{Q}),w\cdot\rangle>0$ for any $w\in W/W_{P}$ with $\ell(w)\geq n-k+1$, where $n=\dim G/P$.

In particular, we find that $\mathcal{L}_{\lambda}$ is $k$-movable $\Longleftrightarrow$ Zhang's $k$-th successive minimum of $h_{\mathcal{L}_{\lambda}}$ is positive.

### 1.5. Example: Grassmann bundles

We provide here the computation of the essential minimum and the big cone for Grassmann bundles. Let $k$ be an algebraically closed field and $C/k$ be a curve with function field $K=k(C)$. Let $E$ be a vector bundle of rank $n$ on $C$ with Harder-Narasimhan filtration $0\subsetneq E_{1}\subsetneq\cdots\subsetneq E_{r}$. Let $B$ be the subgroup of $\operatorname{GL}_{n}$ of all upper triangular matrices, and let $Q$ be the parabolic subgroup containing $B$ of type $(\operatorname{rank}(E_{1}),\operatorname{rank}(E_{2}/E_{1}),\dots,\operatorname{rank}(E/E_{r-1}))$.
Here we say a parabolic subgroup containing $B$ is of type $(a_{1},\dots,a_{r})$ if it consists of upper block triangular matrices of shape

$\left[\begin{array}[]{cccc}A_{1}&*&\cdots&*\\\ &A_{2}&\cdots&*\\\ &&\cdots&*\\\ &&&A_{r}\end{array}\right],\quad A_{i}\in\operatorname{GL}_{a_{i}}.$

Let $F(E)$ be the frame bundle of $E$, which is a $\operatorname{GL}_{n}$-principal bundle. Then the canonical reduction of $F(E)$ is $F_{Q}$ to $Q$, where $F_{Q}$ is the $Q$-bundle of frames respecting the filtration $0\subsetneq E_{1}\subsetneq\cdots\subsetneq E_{r}$. Let $\mu_{i}$ ($i=1,\dots,n$) be the $i$-th number in the sequence

$\displaystyle\underbrace{\mu(E_{1}),\dots,\mu(E_{1})}_{\text{$\operatorname{rank}(E_{1})$ times}},\underbrace{\mu(E_{2}/E_{1}),\dots,\mu(E_{2}/E_{1})}_{\text{$\operatorname{rank}(E_{2}/E_{1})$ times}},\dots,\underbrace{\mu(E/E_{r-1}),\dots,\mu(E/E_{r-1})}_{\text{$\operatorname{rank}(E/E_{r-1})$ times}}.$

Note that $X(T)_{\mathbb{Q}}=\bigoplus_{i=1}^{n}\mathbb{Q}\lambda_{i}$ where $\lambda_{i}:T\longrightarrow\mathbb{G}_{m}$ is the character given by the $i$-th diagonal entry, and $\langle{\mathrm{deg}}(F_{Q}),\lambda_{i}\rangle=\mu_{i}$. Let $\operatorname{Gr}_{r}(E)=F(E)/P$ be the Grassmann bundle of $r$-dimensional quotients, where $P$ is the parabolic subgroup containing $B$ of type $(n-r,r)$. Identify the Weyl group of $G$ with $S_{n}$ and that of $P$ with $S_{n-r}\times S_{r}$ as usual. Then $W/W_{P}=S_{n}\big{/}(S_{n-r}\times S_{r})$ can be identified with the multi-index set $\big{\\{}(i_{1},\dots,i_{r}):1\leq i_{1}<i_{2}<\cdots<i_{r}\leq n\big{\\}}$ via

$\displaystyle\left[\begin{matrix}1&2&\cdots&n-r&n-r+1&\cdots&n\\\ *&*&\cdots&*&i_{1}&\cdots&i_{r}\end{matrix}\right]\longleftrightarrow(i_{1},\dots,i_{r}).$

$N^{1}(\operatorname{Gr}_{r}(E))$ is two-dimensional with basis $\mathcal{O}(1)$ (the determinant bundle of the universal bundle on $\operatorname{Gr}_{r}(E)$) and $f$ (a vertical fiber). Let $\det_{2}$ be the character

$\displaystyle\det\nolimits_{2}:P\longrightarrow\mathbb{G}_{m},\quad\begin{bmatrix}A_{1}&*\\\ 0&A_{2}\end{bmatrix}\longmapsto\det(A_{2}).$

One checks that $\mathcal{O}(1)=F(E)\times_{P}k_{\det_{2}}$, $\det_{2}=\lambda_{n-r+1}+\cdots+\lambda_{n}$ in $X(T)$ and $I=(i_{1},\dots,i_{r})\in W/W_{P}$ acts on $\det_{2}$ by $I\det_{2}=\lambda_{i_{1}}+\cdots+\lambda_{i_{r}}$. Let $h_{\mathcal{O}(1)}$ be the height function on $\operatorname{Gr}_{r}(E_{K})$ induced by the model $(\operatorname{Gr}_{r}(E),\mathcal{O}(1))$. By Theorem A, the successive minima are $\langle{\mathrm{deg}}(F_{Q}),I\det_{2}\rangle=\sum_{i\in I}\mu_{i}$. (Theorem A says the successive minima are $\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle$ for $w\in W_{Q}\backslash W/W_{P}$, but these numbers are really just $\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle$ for $w\in W/W_{P}$.) In particular, the longest element $I_{0}=(1,2,\dots,r)$ gives the essential minimum

$\displaystyle\zeta_{\text{ess}}=\langle{\mathrm{deg}}(F_{Q}),I_{0}\det\nolimits_{2}\rangle=\mu_{1}+\cdots+\mu_{r}.$
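The combinatorics just described are easy to check numerically. The sketch below (Python) takes an illustrative choice of slopes $\mu_{1}\geq\cdots\geq\mu_{n}$ (an assumption made only for this example), enumerates the multi-indices $I$, and recovers the successive minima of Theorem A together with the weighted average of Theorem B.

```python
from itertools import combinations

# HN slopes mu_1 >= ... >= mu_n of E (illustrative values, n = 4).
mu = [3.0, 1.0, 1.0, -2.0]
n, r = len(mu), 2  # Gr_2(E): rank-2 quotients

# Successive minima: zeta_I = sum of mu_i over the multi-index I (Theorem A).
zetas = {I: sum(mu[i - 1] for i in I)
         for I in combinations(range(1, n + 1), r)}

ess_min = zetas[tuple(range(1, r + 1))]         # I_0 = (1, ..., r)
print("essential minimum    :", ess_min)         # mu_1 + ... + mu_r = 4.0
print("all successive minima:", sorted(set(zetas.values()), reverse=True))

# Theorem B (Zhang's equality): the height is the average over W/W_P.
height = sum(zetas.values()) / len(zetas)
print("height h(X)          :", height)          # here 1.5 = r * mu(E)
```

In particular, one sees directly that the essential minimum $\mu_{1}+\cdots+\mu_{r}$ dominates this average, in line with Zhang's inequality $e_{1}\geq h_{\overline{L}}(X)$.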
By Theorem C, the big cone is given by $\langle\alpha^{\vee},\cdot\rangle<0$ and $\langle{\mathrm{deg}}(F_{Q}),I_{0}\cdot\rangle>0$, and one sees that it is the cone given by the extremal rays $f$ and $\mathcal{O}(1)-(\mu_{1}+\dots+\mu_{r})f$.

### 1.6. Notations and conventions

In the following, let $k$ be an algebraically closed field of characteristic zero and $C/k$ be a projective smooth curve with function field $K=k(C)$. Let $G/k$ be a connected reductive group and denote the identity component of its center by $Z(G)$. Fix a Borel subgroup $B\subseteq G$ and a maximal torus $T\subseteq B$. Let $W$ be the Weyl group and $\Delta$ be the set of simple roots with respect to $(G,B,T)$. For any $\alpha\in\Delta$, we denote by $\alpha^{\vee}$ the corresponding simple coroot. We shall consider only parabolic subgroups $P\subseteq G$ containing $B$. For any parabolic subgroup $P$, let $W_{P}\subseteq W$ be the Weyl group $W(L_{P})$ of the Levi factor $L_{P}\subseteq P$ and $\Delta_{P}\subseteq\Delta$ be the simple roots of $L_{P}$. For parabolic subgroups $P,Q\subseteq G$ and any double coset $w\in W_{Q}\backslash W/W_{P}$, set $\ell(w)=\max_{\sigma\in W_{Q}wW_{P}}\min_{\tau\in\sigma W_{P}}\ell(\tau)$, which equals $\operatorname{dim}QwP/P$. Here, by abuse of notation, $\ell(\tau)$ is the usual length of the Weyl element $\tau$. For any linear algebraic group $\Gamma/k$, let $X(\Gamma)$ denote the character group $\mathrm{Hom}(\Gamma,\mathbb{G}_{m})$. For any $\mathbb{Z}$-algebra $A$, denote by $X(\Gamma)_{A}$ the tensor product $X(\Gamma)\otimes_{\mathbb{Z}}A$. For any linear functional $f$ on $X(\Gamma)$, we shall usually denote $f(\alpha)$ by $\langle f,\alpha\rangle$ for $\alpha\in X(\Gamma)$. For any $\alpha\in X(\Gamma)$, denote by $k_{\alpha}$ the one-dimensional algebraic representation on the vector space $k$ with $\Gamma$ acting by $\alpha$.

### 1.7. Organization of the paper

Let $F$ be a principal $G$-bundle on $C$ and $\lambda:P\longrightarrow\mathbb{G}_{m}$ be a strictly anti-dominant character. Then $F/P$ is a flag bundle over $C$ and $\mathcal{L}_{\lambda}=F\times_{P}k_{\lambda}$ on $F/P$ is a relatively ample line bundle. Let $X=(F/P)_{K}$ and $h_{\mathcal{L}_{\lambda}}$ be the height function on $X$ induced by $(F/P,\mathcal{L}_{\lambda})$. In Section 2, we compute the height filtration and successive minima of $h_{\mathcal{L}_{\lambda}}$. In Section 3, we enhance Zhang’s inequality into an equality by convex geometry. In Section 4, we compute the augmented base loci of $\mathcal{L}_{\lambda}-tf$ on $F/P$. As a corollary we get the movable cones $\operatorname{Mov}^{k}(F/P)$.

## 2. Height filtrations and successive minima

### 2.1. Vector bundles, slope stability and Harder-Narasimhan filtration

Let $E$ be a vector bundle on $C$. The degree of $E$ is ${\mathrm{deg}}(E)={\mathrm{deg}}(\det(E))$ and the slope of $E$ is $\mu(E)={\mathrm{deg}}(E)/\operatorname{rk}(E)$. It is called slope semistable if for every subbundle $F\subseteq E$, $\mu(F)\leq\mu(E)$. This is equivalent to $\mu(Q)\geq\mu(E)$ for every quotient bundle $Q$ of $E$. In general, there exists a unique filtration $0=E_{0}\subseteq E_{1}\subseteq\cdots\subseteq E_{r}=E$ such that

* $E_{i}/E_{i-1}$ is semistable;
* $\mu(E_{i}/E_{i-1})>\mu(E_{i+1}/E_{i})$.

This filtration is called the _Harder-Narasimhan filtration_ of $E$ and we denote $\mu_{i}(E)=\mu(E_{i}/E_{i-1})$. The first and the last terms are of particular interest.
The first one $\mu_{1}(E)=\mu(E_{1})$ is called the _maximal slope_ of $E$ and is also denoted by $\mu_{\max}(E)$. The last one $\mu_{r}(E)=\mu(E/E_{r-1})$ is called the _minimal slope_ of $E$ and is also denoted by $\mu_{\min}(E)$.

### 2.2. Principal bundles

For any linear algebraic group $\Gamma/k$, a _principal $\Gamma$-bundle_ on $C$ is a variety $F$ equipped with a right action of $\Gamma$ and a $\Gamma$-equivariant smooth morphism $F\longrightarrow C$ such that the map

$\displaystyle F\times_{C}(C\times\Gamma)\longrightarrow F\times_{C}F,\quad(f,(x,g))\longmapsto(f,fg)$

is an isomorphism. Attached to any principal $\Gamma$-bundle $F$, one has the _degree map_

$\displaystyle{\mathrm{deg}}(F):X(\Gamma)\longrightarrow\mathbb{Z},\quad\lambda\longmapsto\langle{\mathrm{deg}}(F),\lambda\rangle={\mathrm{deg}}(F\times_{\Gamma}k_{\lambda}).$

Here ${\mathrm{deg}}(F\times_{\Gamma}k_{\lambda})$ is the degree of the line bundle $F\times_{\Gamma}k_{\lambda}$ on the curve $C$. Let $H$ be a closed subgroup of a linear algebraic group $\Gamma/k$. A _reduction of structure group_ of $F$ to $H$ is a pair $(F_{H},\varphi)$ where $F_{H}$ is a principal $H$-bundle and $\varphi:F_{H}\times_{H}\Gamma\cong F$ is an isomorphism. By the universal property of the quotient $F/H$, the assignment sending a section $\sigma:C\longrightarrow F/H$ to the reduction $\sigma^{*}F$ of $F$ to $H$ is a one-to-one correspondence between reductions of structure group of $F$ to $H$ and sections of $F/H\longrightarrow C$. Let $F$ be a principal $G$-bundle on $C$ for the connected reductive group $G$. Note that for any parabolic subgroup $P\subseteq G$ with Levi subgroup $L_{P}$, the natural inclusions

$\displaystyle X(P)\longrightarrow X(L_{P})\longrightarrow X(Z(L_{P}))$

become isomorphisms after tensoring with $\mathbb{Q}$. Thus we have

$\displaystyle X(T)_{\mathbb{Q}}\longrightarrow X(Z(L_{P}))_{\mathbb{Q}}=X(P)_{\mathbb{Q}}$

and by taking duals, we get the so-called _slope map_ $X(P)^{\vee}_{\mathbb{Q}}\longrightarrow X(T)^{\vee}_{\mathbb{Q}}$ introduced in [23, §2.1.3]. Let $F_{P}$ be a reduction of $F$ to $P$. By applying the slope map to ${\mathrm{deg}}(F_{P})$ we can define $\langle{\mathrm{deg}}(F_{P}),\lambda\rangle$ for any $\lambda\in X(T)$.

### 2.3. Semistability and canonical reduction

We mainly follow [5].

###### Definition 2.1 (semistable). A principal $G$-bundle $F$ on $C$ is called _semistable_ if for any parabolic subgroup $P$, any reduction $F_{P}$ of $F$ to $P$ and any dominant character $\lambda$ of $P$ which is trivial on $Z(G)$, we have $\langle{\mathrm{deg}}(F_{P}),\lambda\rangle\leq 0$.

###### Definition 2.2 (canonical reduction). Let $F$ be a principal $G$-bundle on $C$. A reduction $F_{P}$ of $F$ to a parabolic subgroup $P$ is called _canonical_ if the following two conditions hold:

1. The principal $L_{P}$-bundle $F_{P}\times_{P}L_{P}$ is semistable.
2. For any non-trivial character $\lambda$ of $P$ which is a non-negative linear combination of simple roots, $\langle{\mathrm{deg}}(F_{P}),\lambda\rangle>0$.

Behrend [3] proved that the canonical reduction exists and is unique. When $G=\operatorname{GL}_{n}$, the above definition of semistability recovers Mumford’s slope stability of vector bundles and the above definition of canonical reduction recovers the Harder-Narasimhan filtration of vector bundles.

### 2.4. Height function induced by $(F/P,\mathcal{L}_{\lambda})$

Let $F$ be a principal $G$-bundle with canonical reduction $F_{Q}$ to some parabolic subgroup $Q\subseteq G$.
Let $P\subseteq G$ be a parabolic subgroup. Set $\mathcal{X}=F/P$ and $X=\mathcal{X}_{K}$. A character $\lambda:P\longrightarrow\mathbb{G}_{m}$ is called strictly anti-dominant if the natural pairing $\langle\alpha^{\vee},\lambda\rangle<0$ for any $\alpha\in\Delta\backslash\Delta_{P}$. Let $\lambda:P\longrightarrow\mathbb{G}_{m}$ be a strictly anti-dominant character. Then the line bundle $M_{\lambda}=G\times_{P}k_{\lambda}$ on $G/P$ is ample. Therefore $\mathcal{L}_{\lambda}=F\times_{G}M_{\lambda}$ is a relatively ample line bundle on $\mathcal{X}=F\times_{G}G/P$ and induces a height function

$\displaystyle h_{\mathcal{L}_{\lambda}}:X(\overline{K})\longrightarrow\mathbb{Q},\quad x\longmapsto\frac{\mathcal{L}_{\lambda}\cdot\overline{\\{x\\}}}{{\mathrm{deg}}(x)}$

where $\overline{\\{x\\}}$ is the Zariski closure of $x$ in $\mathcal{X}$. For $x\in X(K)$, let $\sigma_{x}:C\longrightarrow\mathcal{X}$ be the section induced by $x:\operatorname{Spec}(K)\longrightarrow X$ via the valuative criterion, and let $F_{P,x}=\sigma_{x}^{*}F$ be the corresponding reduction to $P$.

###### Lemma 2.3. For $x\in X(K)$, $h_{\mathcal{L}_{\lambda}}(x)=\langle{\mathrm{deg}}(F_{P,x}),\lambda\rangle$.

###### Proof. By definition, $h_{\mathcal{L}_{\lambda}}(x)$ is the degree of $\sigma_{x}^{*}(\mathcal{L}_{\lambda})$ and $\langle{\mathrm{deg}}(F_{P,x}),\lambda\rangle$ is the degree of $F_{P,x}\times_{P}k_{\lambda}$. We have an equality of line bundles

$\displaystyle\sigma_{x}^{*}(\mathcal{L}_{\lambda})=\sigma_{x}^{*}(F\times_{P}k_{\lambda})=(\sigma_{x}^{*}F)\times_{P}k_{\lambda}=F_{P,x}\times_{P}k_{\lambda}$

and the lemma follows by taking degrees. ∎

### 2.5. A height lower bound in Schubert cells

For $w\in W_{Q}\backslash W/W_{P}$, write $\mathcal{C}_{w}=F_{Q}\times_{Q}QwP/P$, $\mathcal{X}_{w}=F_{Q}\times_{Q}\overline{QwP}/P$, $C_{w}=\mathcal{C}_{w,K}$ and $X_{w}=\mathcal{X}_{w,K}$. We shall show that

###### Proposition 2.4. For any $x\in C_{w}(\overline{K})$, $h_{\mathcal{L}_{\lambda}}(x)\geq\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle$.

Note that for any $w^{\prime}\in W_{Q}$ and $\lambda\in X(T)$, we have $w^{\prime}\lambda-\lambda\in\mathbb{Z}[\Delta_{Q}]$, and consequently $\langle{\mathrm{deg}}(F_{Q}),w^{\prime}\lambda\rangle=\langle{\mathrm{deg}}(F_{Q}),\lambda\rangle$. Note also that for any $\lambda\in X(P)$ and $w\in W_{P}$, $w\lambda=\lambda$. So the number $\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle$ is well-defined for any $\lambda\in X(P)$ and $w\in W_{Q}\backslash W/W_{P}$.

###### Definition 2.5 (relative position). A reduction $F_{P}$ of $F$ to $P$ is called _in relative position $w\in W_{Q}\backslash W/W_{P}$ with respect to_ $F_{Q}$ if the image of the natural map

$\displaystyle F_{Q}\times_{C}F_{P}\longrightarrow G_{C},\quad(a,b)\longmapsto a^{-1}b$

lies in $QwP_{C}$. Here we implicitly apply the injections

$\displaystyle F_{P}\hookrightarrow F_{P}\times_{P}G\cong F,\quad F_{Q}\hookrightarrow F_{Q}\times_{Q}G\cong F.$

Note that by the universal property of the quotient stacks $[G//Q\times P]$ and $[QwP//Q\times P]$, this definition coincides with the one in [23, §4.1].

###### Lemma 2.6. When $x\in C_{w}(K)$, $F_{P,x}$ is of relative position $w$ with respect to $F_{Q}$.

###### Proof.
Note that the injection $F_{P,x}=\sigma_{x}^{*}F\longrightarrow F$ has image in $F_{Q}\times_{Q}QwP$ by the commutativity of the evident diagram. Now the lemma follows since $F_{Q}\times_{C}F_{P,x}\longrightarrow C\times G$ factors as

$\displaystyle F_{Q}\times_{C}F_{P,x}\longrightarrow F_{Q}\times_{C}\Big{(}F_{Q}\times_{Q}QwP\Big{)}\longrightarrow\Big{(}F_{Q}\times_{C}F_{Q}\Big{)}\times_{Q}QwP\longrightarrow(C\times Q)\times_{Q}QwP=C\times QwP.$

∎

###### Proposition 2.7 ([23], Proposition $4.6$). If $F_{P}$ is in relative position $w$ with respect to $F_{Q}$, then $\langle{\mathrm{deg}}(F_{P}),\lambda\rangle\geq\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle$ for any anti-dominant character $\lambda$.

Now we can prove Proposition 2.4.

###### Proof of _Proposition_ 2.4. Let $L/K$ be any finite field extension. Assume that $x\in C_{w}(L)$. Consider the commutative diagram formed by the natural maps $X_{L}\rightarrow X$, $F_{C_{L}}/P\rightarrow F/P$, $\operatorname{Spec}(L)\rightarrow\operatorname{Spec}(K)$ and $C_{L}\rightarrow C$, together with the points $\widetilde{x}:\operatorname{Spec}(L)\rightarrow X_{L}$, $x:\operatorname{Spec}(K)\rightarrow X$ and the sections $\sigma_{\widetilde{x}}:C_{L}\rightarrow F_{C_{L}}/P$, $\sigma_{x}:C\rightarrow F/P$, where $C_{L}$ is the normalization of $C$ in $L$ and $F_{C_{L}}=F\times_{C}C_{L}$. Let $\mathcal{L}_{C_{L},\lambda}$ be the pullback of $\mathcal{L}_{\lambda}$ via the map $F_{C_{L}}/P\longrightarrow F/P$, which is also the line bundle associated to the character $\lambda$ of $P$ via the principal bundle $F_{C_{L}}$ on $C_{L}$. Then by Lemma 2.3,

$\displaystyle h_{\mathcal{L}_{\lambda}}(x)=\tfrac{1}{[L:K]}{\mathrm{deg}}(\sigma_{x}^{*}\mathcal{L}_{\lambda})=\tfrac{1}{[L:K]}{\mathrm{deg}}(\sigma_{\widetilde{x}}^{*}\mathcal{L}_{C_{L},\lambda})=\tfrac{1}{[L:K]}\langle{\mathrm{deg}}(F_{C_{L},P,\widetilde{x}}),\lambda\rangle$

where $F_{C_{L},P,\widetilde{x}}$ is the reduction of $F_{C_{L}}$ to $P$ corresponding to the rational point $\widetilde{x}\in X_{L}(L)$. Note that the canonical reduction of $F_{C_{L}}$ is $F_{Q,C_{L}}=F_{Q}\times_{C}C_{L}$ and ${\mathrm{deg}}(F_{Q,C_{L}})=[L:K]{\mathrm{deg}}(F_{Q})$ in $X(T)_{\mathbb{Q}}^{\vee}$. By Lemma 2.6, $F_{C_{L},P,\widetilde{x}}$ is of relative position $w$ with respect to $F_{Q,C_{L}}$. We conclude by applying Proposition 2.7 to $F_{C_{L}}$. ∎

### 2.6. Height filtration and successive minima of $h_{\mathcal{L}_{\lambda}}$

In this subsection, we employ Ballaÿ’s theorem [2, Theorem 1.2] to compute the essential minimum. We start by finding the maximal slope of $\pi_{*}\mathcal{L}_{\lambda}|_{\mathcal{X}_{w}}$.

###### Lemma 2.8 ([17], Part I, §5.18).
### 2.6. Height filtration and successive minima of $h_{\mathcal{L}_{\lambda}}$ In this subsection, we employ Ballaÿ’s theorem [2, Theorem 1.2] to compute the essential minima. We start by finding the maximal slope of $\pi_{*}\mathcal{L}_{\lambda}|_{\mathcal{X}_{w}}$. ###### Lemma 2.8 ([17], Part I, §5.18). On $\mathcal{X}_{w}=F_{Q}\times_{Q}\overline{QwP}/P\subseteq\mathcal{X}$, we have $\pi_{*}\big{(}\mathcal{L}_{\lambda}|_{\mathcal{X}_{w}}\big{)}=F_{Q}\times_{Q}H^{0}(\overline{QwP}/P,M_{\lambda})$. Schieder computed the Harder-Narasimhan filtration of $F_{Q}\times_{Q}H^{0}(\overline{QwP}/P,M_{\lambda})$ in [23, §5.1]. Let $V/k$ be a finite dimensional $Q$-representation with weight decomposition $V=\bigoplus_{\nu\in X(T)}V[\nu]$. The $Q$-filtration $V_{\bullet,{\mathrm{deg}}(F_{Q})}$ of $V$ is defined by $\displaystyle V_{t,{\mathrm{deg}}(F_{Q})}=\bigoplus_{\langle{\mathrm{deg}}(F_{Q}),\nu\rangle\geq t}V[\nu].$ ###### Proposition 2.9. The Harder-Narasimhan filtration of $F_{Q}\times_{Q}H^{0}(\overline{QwP}/P,M_{\lambda})$ is $F_{Q}\times_{Q}H^{0}(\overline{QwP}/P,M_{\lambda})_{\bullet,{\mathrm{deg}}(F_{Q})}$. Moreover, the maximal slope is $\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle$. ###### Proof. The first assertion is a slight generalization of [23, Proposition 5.1], whose original statement only considers restrictions of $G$-representations to $Q$; the proof, however, works for arbitrary $Q$-representations. For the second assertion, we only need to show that the highest weights in $H^{0}(\overline{QwP}/P,M_{\lambda})$ belong to $w\lambda+\mathbb{Z}[\Delta_{Q}]$. When $Q=B$, $H^{0}(\overline{BwP}/P,M_{\lambda})$ has a single highest weight $w\lambda$ by the theory of Demazure modules [17, Chapter 14]. For general $Q$, let $\pi:W/W_{P}\longrightarrow W_{Q}\backslash W/W_{P}$ be the canonical projection. Note that $\displaystyle\overline{QwP}/P=\bigcup_{w^{\prime}\in\pi^{-1}(w)}\overline{Bw^{\prime}P}/P.$ Since the restriction map $\displaystyle H^{0}(\overline{QwP}/P,M_{\lambda})\longrightarrow H^{0}(\overline{Bw^{\prime}P}/P,M_{\lambda})$ is surjective for each $w^{\prime}\in\pi^{-1}(w)$ and the diagonal map $\displaystyle H^{0}(\overline{QwP}/P,M_{\lambda})\longrightarrow\prod_{w^{\prime}\in\pi^{-1}(w)}H^{0}(\overline{Bw^{\prime}P}/P,M_{\lambda})$ is injective, the highest weights in $H^{0}(\overline{QwP}/P,M_{\lambda})$ belong to $w\lambda+\mathbb{Z}[\Delta_{Q}]$. ∎ ###### Corollary 2.10. The essential minimum $\zeta_{1}(h_{\mathcal{L}_{\lambda}},X_{w})$ of $h_{\mathcal{L}_{\lambda}}$ on $X_{w}$ is $\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle$. ###### Proof. By Ballaÿ’s theorem [2, Theorem 1.2], $\displaystyle\zeta_{1}(h_{\mathcal{L}_{\lambda}},X_{w})=\lim_{n\rightarrow\infty}\frac{\mu_{\max}(\pi_{*}\mathcal{L}_{n\lambda}|_{\mathcal{X}_{w}})}{n}=\lim_{n\rightarrow\infty}\frac{n\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle}{n}=\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle.$ ∎ ###### Theorem 2.11. The height filtration of $h_{\mathcal{L}_{\lambda}}$ on $X$ is given by successively deleting Schubert cells $C_{w}$ for $w\in W_{Q}\backslash W/W_{P}$, i.e. $\displaystyle Z_{t}=X\Bigg{\backslash}\coprod_{\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle\geq t}C_{w}=\coprod_{\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle<t}C_{w}.$ In particular, the successive minima are $\zeta_{w}=\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle$ and Zhang’s successive minima are $e_{i}=\min\big\{\zeta_{w}:\ell(w)=d-i+1\big\}$. ###### Proof. For a closed subvariety $Y$ of $X$, let $Z_{t}(Y)$ be the Zariski closure in $Y$ of $\big\{y\in Y:h(y)<t\big\}$. On $X_{w}$, $\zeta_{1}(h_{\mathcal{L}_{\lambda}},X_{w})=\zeta_{w}$ and for any $x\in C_{w}$, $h(x)\geq\zeta_{w}$. Thus $Z_{\zeta_{w}}(X_{w})\subseteq X_{w}\backslash C_{w}=\bigcup_{w^{\prime}<w}X_{w^{\prime}}$.
We claim that $Z_{\zeta_{w}}(X_{w})=X_{w}\backslash C_{w}=\bigcup_{w^{\prime}<w}X_{w^{\prime}}$. In fact, suppose on the contrary that $Z_{\zeta_{w}}(X_{w})\subsetneq X_{w}\backslash C_{w}$. Then there exists $X_{w^{\prime}}\subsetneq X_{w}$ such that $Z_{\zeta_{w}}(X_{w})\cap X_{w^{\prime}}\subsetneq X_{w^{\prime}}$. This forces $Z_{\zeta_{w}}(X_{w^{\prime}})\subseteq Z_{\zeta_{w}}(X_{w})\cap X_{w^{\prime}}\subsetneq X_{w^{\prime}}$. But on $X_{w^{\prime}}$, the essential minimum is $\zeta_{w^{\prime}}<\zeta_{w}$, so $Z_{\zeta_{w}}(X_{w^{\prime}})=X_{w^{\prime}}$. We get a contradiction, and we have thus proved that on $X_{w}$, the set $Z_{t}(X_{w})=X_{w}$ when $t>\zeta_{w}$ and $Z_{t}(X_{w})=X_{w}\backslash C_{w}$ when $t\leq\zeta_{w}$. Let $w_{0}\in W_{Q}\backslash W/W_{P}$ be the longest element. On $X=X_{w_{0}}$, when $t>\zeta_{w_{0}}$, $Z_{t}=X_{w_{0}}$ and when $t\leq\zeta_{w_{0}}$, $Z_{t}=X_{w_{0}}\backslash C_{w_{0}}=\bigcup_{w^{\prime}}X_{w^{\prime}}$ with $w^{\prime}<w_{0}$ and $\ell(w^{\prime})=\ell(w_{0})-1$. Repeating this procedure on each smaller $X_{w^{\prime}}$, we can complete the proof by induction. ∎ ## 3\. Reinforcement of Zhang’s inequality into an equality In this section, we shall establish the following Zhang-type equality on flag varieties over function fields: the height of the variety in fact equals a weighted average of the successive minima. ###### Theorem 3.1. Let $q:W/W_{P}\longrightarrow W_{Q}\backslash W/W_{P}$ be the canonical projection. For $w\in W_{Q}\backslash W/W_{P}$, define $m_{w}=\#q^{-1}(w)$. Then $\displaystyle h_{\mathcal{L}_{\lambda}}(X)=\frac{1}{\#W/W_{P}}\sum_{w\in W_{Q}\backslash W/W_{P}}m_{w}\zeta_{w}.$ We shall establish this theorem by convex geometry. Let $E$ be a vector bundle on $C$ with HN filtration $0=E_{0}\subseteq E_{1}\subseteq\cdots\subseteq E_{r}=E$ with successive slopes $\mu_{i}=\mu(E_{i}/E_{i-1})$. Chen defines a measure $\nu_{E}$ on $\mathbb{R}$ by $\displaystyle\nu_{E}=\frac{1}{\operatorname{rank}(E)}\sum_{i=1}^{r}\operatorname{rank}(E_{i}/E_{i-1})\delta_{\mu_{i}},$ where $\delta_{a}$ is the Dirac measure on $\mathbb{R}$ supported at $a\in\mathbb{R}$. For any $\varepsilon>0$ and any measure $\mu$, let $T_{\varepsilon}\mu$ be the unique measure such that $\displaystyle\int_{\mathbb{R}}f(x)\mathrm{d}T_{\varepsilon}\mu(x)=\int_{\mathbb{R}}f(\varepsilon x)\mathrm{d}\mu(x)$ for any compactly supported continuous function $f:\mathbb{R}\longrightarrow\mathbb{R}$. ###### Theorem 3.2 (Chen; [11], Theorem 1.1). Let $\pi:\mathcal{X}\longrightarrow C$ be a projective flat morphism and $\mathcal{L}$ be a line bundle on $\mathcal{X}$. Assume $\mathcal{L}$ is generically big. Then 1. (1) $\big\{T_{\frac{1}{n}}\nu_{\pi_{*}\mathcal{L}^{n}}\big\}_{n}$ converges weakly. We denote the limit measure by $\nu^{\pi}_{\mathcal{L}}$. 2. (2) $\displaystyle\operatorname{Vol}(\mathcal{L})=\dim\mathcal{X}\cdot\operatorname{Vol}(\mathcal{L}_{K})\cdot\int_{\mathbb{R}}x_{+}\;\mathrm{d}\nu^{\pi}_{\mathcal{L}}$, where $x_{+}=\max\{0,x\}$.
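As an illustration of Theorem 3.2 (not taken from the original text, and only a sketch of a special case): let $\mathcal{X}=\mathbb{P}(E)$ for a split rank-two bundle $E=L\oplus M$ on $C$ with ${\mathrm{deg}}L=\mu_{1}>\mu_{2}={\mathrm{deg}}M$, and let $\mathcal{L}=\mathcal{O}_{\mathbb{P}(E)}(1)$. Then $\pi_{*}\mathcal{L}^{n}=\operatorname{Sym}^{n}E=\bigoplus_{i=0}^{n}L^{i}\otimes M^{n-i}$ is a direct sum of line bundles of degrees $i\mu_{1}+(n-i)\mu_{2}$, so that $\nu_{\pi_{*}\mathcal{L}^{n}}=\frac{1}{n+1}\sum_{i=0}^{n}\delta_{i\mu_{1}+(n-i)\mu_{2}}$, and $T_{\frac{1}{n}}\nu_{\pi_{*}\mathcal{L}^{n}}$ converges weakly to the uniform probability measure on $[\mu_{2},\mu_{1}]$; this limit is $\nu^{\pi}_{\mathcal{L}}$. When $\mu_{2}\geq 0$, part (2) then recovers $\operatorname{Vol}(\mathcal{L})=2\cdot 1\cdot\frac{\mu_{1}+\mu_{2}}{2}={\mathrm{deg}}(E)$.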
Let $w_{0}\in W$ be the longest element and fix a reduced decomposition $w_{0}=s_{1}\cdot s_{2}\cdots s_{{\ell(w_{0})}}$. Recall that $\lambda:P\longrightarrow\mathbb{G}_{m}$ is a strictly anti-dominant character and $N=\dim G/P$. Let $\Delta_{\lambda}=\Delta_{\underline{w}_{0},\lambda}\subseteq\mathbb{R}^{N}$ be the string polytope associated to $\lambda$ with respect to $\underline{w}_{0}=(s_{1},\cdots,s_{\ell(w_{0})})$. It is equipped with a weight map $p:\Delta_{\lambda}\longrightarrow\mathcal{P}_{\lambda}$ where $\mathcal{P}_{\lambda}\subseteq X(T)_{\mathbb{R}}$ is the weight polytope associated to $\lambda$ (see [1, §1] for details). The following properties will be useful: 1. (1) The volume $\operatorname{Vol}(\Delta_{\lambda})$ of the convex polytope equals the volume $\operatorname{Vol}(M_{\lambda})$ of the line bundle. 2. (2) Integral points $\Delta_{\lambda}\cap\mathbb{Z}^{N}$ parametrize a basis of $H^{0}(G/P,M_{\lambda})$. In particular, $\#\Delta_{\lambda}\cap\mathbb{Z}^{N}=\dim H^{0}(G/P,M_{\lambda})$. 3. (3) Lattice points $\mathcal{P}_{\lambda}\cap X(T)$ parametrize the weights occurring in $H^{0}(G/P,M_{\lambda})$ and $p$ sends an integral point to the underlying weight of the corresponding basis vector. 4. (4) $\Delta_{\lambda_{1}}+\Delta_{\lambda_{2}}=\Delta_{\lambda_{1}+\lambda_{2}}$ and $p_{\lambda_{1}}+p_{\lambda_{2}}=p_{\lambda_{1}+\lambda_{2}}$. Here the sum is the Minkowski sum. In particular, the diagram $\displaystyle\begin{array}{ccc}\Delta_{\lambda}&\overset{\cdot n}{\longrightarrow}&\Delta_{n\lambda}\\ p_{\lambda}\big\downarrow&&\big\downarrow p_{n\lambda}\\ \mathcal{P}_{\lambda}&\overset{\cdot n}{\longrightarrow}&\mathcal{P}_{n\lambda}\end{array}$ is commutative for any $n\in\mathbb{N}$, where $\cdot n$ is the scalar multiplication by $n$. Let $\mu$ be the Lebesgue measure on $\mathbb{R}^{N}$ normalized with respect to $\mathbb{Z}^{N}$. ###### Lemma 3.3. The following equalities hold: $\displaystyle\nu_{\pi_{*}\mathcal{L}^{n}_{\lambda}}=\frac{1}{\#(\Delta_{n\lambda}\cap\mathbb{Z}^{N})}\sum_{x\in\Delta_{n\lambda}\cap\mathbb{Z}^{N}}\delta_{{\mathrm{deg}}F_{Q}\circ p(x)},$ $\displaystyle T_{\frac{1}{n}}\nu_{\pi_{*}\mathcal{L}_{\lambda}^{n}}=\frac{1}{\#(\Delta_{\lambda}\cap\frac{1}{n}\mathbb{Z}^{N})}\sum_{x\in\Delta_{\lambda}\cap\frac{1}{n}\mathbb{Z}^{N}}\delta_{{\mathrm{deg}}F_{Q}\circ p(x)},$ $\displaystyle\nu^{\pi}_{\mathcal{L}_{\lambda}}=\frac{1}{\operatorname{Vol}(\Delta_{\lambda})}\cdot({\mathrm{deg}}F_{Q}\circ p)_{*}\mu.$ ###### Proof. With Lemma 2.8, Proposition 2.9 and the aforementioned properties of string polytopes at hand, we can carry over the argument in [11, Proposition 3.5] to show the first equality. Applying $T_{1/n}$ and taking limits, we obtain the second and third equalities. ∎ ###### Theorem 3.4. The variety height $h_{\mathcal{L}_{\lambda}}(X)$ of $X$ is $\displaystyle\frac{1}{\operatorname{Vol}(\Delta_{\lambda})}\int_{\Delta_{\lambda}}{\mathrm{deg}}(F_{Q})\circ p\;\mathrm{d}\mu$. ###### Proof. We may consider $\mathcal{L}_{\lambda}+tf$ for large $t$ so that $\mathcal{L}_{\lambda}+tf$ is ample, where $f$ is the line bundle given by a vertical fiber. The measure $\nu^{\pi}_{\mathcal{L}_{\lambda}+tf}$ is supported on $\mathbb{R}_{+}$ and is a right shift of $\nu^{\pi}_{\mathcal{L}_{\lambda}}$ by $t$, so Theorem 3.2 says $\displaystyle(\mathcal{L}_{\lambda}+tf)^{N+1}=(N+1)\operatorname{Vol}(\mathcal{L}_{\lambda,K})\int_{\mathbb{R}}x\;\mathrm{d}\nu^{\pi}_{\mathcal{L}_{\lambda}+tf}=(N+1)\int_{\Delta_{\lambda}}\big({\mathrm{deg}}(F_{Q})\circ p+t\big)\;\mathrm{d}\mu$ where the second equality is due to Lemma 3.3 and $\operatorname{Vol}(\mathcal{L}_{\lambda,K})=\operatorname{Vol}(\Delta_{\lambda})$.
Thus $\displaystyle h_{\mathcal{L}_{\lambda}+tf}(X)=\frac{1}{\operatorname{Vol}(\Delta_{\lambda})}\int_{\Delta_{\lambda}}{\mathrm{deg}}(F_{Q})\circ p\;\mathrm{d}\mu+t.$ Note that $h_{\mathcal{L}_{\lambda}+tf}(X)=h_{\mathcal{L}_{\lambda}}(X)+t$. So substracting $t$, we get $\displaystyle h_{\mathcal{L}_{\lambda}}(X)=\frac{1}{\operatorname{Vol}(\Delta_{\lambda})}\int_{\Delta_{\lambda}}{\mathrm{deg}}(F_{Q})\circ p\;\mathrm{d}\mu.$ ∎ To prove Theorem 3.1, we need the following general result. ###### Proposition 3.5. For any linear function $f:X(T)_{\mathbb{R}}\longrightarrow\mathbb{R}$, $\displaystyle\frac{1}{\operatorname{Vol}(\Delta_{\lambda})}\int_{\Delta_{\lambda}}f\circ p\;\mathrm{d}\mu=\frac{1}{\\#W/W_{P}}\sum_{w\in W/W_{P}}f(w\lambda).$ We start with two lemmata. ###### Lemma 3.6. The pushforward measure $p_{*}\mu$ on $\mathcal{P}_{\lambda}$ is $W$-invariant. ###### Proof. Note that the Lebesgue measure $\mu$ on $\Delta_{\lambda}$ is the limit of the Dirac measure $\delta_{\Delta_{\lambda}\cap\frac{1}{n}\mathbb{Z}^{N}}$, i.e. $\mu=\lim_{n}\delta_{\Delta_{\lambda}\cap\frac{1}{n}\mathbb{Z}^{N}}$. Consider the map $\Delta_{n\lambda}\longrightarrow\mathcal{P}_{n\lambda}$ and note that it sends integral points to its corresponding weights. Dividing by n, we see that $\displaystyle p_{*}\delta_{\Delta_{\lambda}\cap\frac{1}{n}\mathbb{Z}^{N}}=\operatorname{mult}_{n}\delta_{\mathcal{P}_{\lambda}\cap\frac{1}{n}X(T)},$ where $\operatorname{mult}_{n}$ is the multiplicity function $\displaystyle\operatorname{mult}_{n}:\mathcal{P}_{\lambda}\cap\tfrac{1}{n}X(T)\longrightarrow\mathbb{N},\quad\mu\longmapsto\dim H^{0}(G/P,L_{n\lambda})[n\mu].$ Since both $\operatorname{mult}_{n}$ and $\delta_{\mathcal{P}_{\lambda}\cap\tfrac{1}{n}X(T)}$ are $W$-invariant, we deduce that $\displaystyle p_{*}\mu=\lim_{n}p_{*}\delta_{\Delta_{\lambda}\cap\frac{1}{n}\mathbb{Z}^{N}}$ is also $W$-invariant. ∎ ###### Lemma 3.7. The map $\mathcal{P}_{\lambda}\longrightarrow\mathcal{P}_{\lambda}$, $\mu\longmapsto\sum_{w\in W}w\mu$ is constant. ###### Proof. We claim that for any $\mu_{1},\mu_{2}\in X(T)$ such that $\mu_{1}|_{Z(G)}=\mu_{2}|_{Z(G)}$, $\displaystyle\sum_{w\in W}w\mu_{1}=\sum_{w\in W}w\mu_{2}.$ In fact, for any simply connected cover $G^{sc}\longrightarrow G$, the preimage $T^{sc}$ of $T$ is a maximal torus in $G^{sc}$ and $W=W(G,T)=W(G^{sc},T^{sc})$. Via the natural map $X(T)\longrightarrow X(T^{sc})$, we are reduced to the case $G$ is simply connected. In this case, $T^{W}=Z(G)$ and the desired equality follows directly. $H^{0}(G/P,L_{\lambda})$ is a simple $G$-module so all weights $\mu$ occuring in $H^{0}(G/P,L_{\lambda})$ sharing the same restriction to $Z(G)$. So by the above claim, the lemma is true for lattice points $\mathcal{P}_{\lambda}\cap X(T)$. Note that the map $\mu\longmapsto\sum_{w\in W}w\mu$ is linear, we are done. ∎ ###### Proof of _Proposition 3.5_. Let $\mathcal{P}_{\lambda,+}$ be the intersection of $\mathcal{P}_{\lambda}$ with the anti-dominant cone $\big{\\{}\langle\alpha^{\vee},\lambda\rangle<0:\alpha\in\Delta\backslash\Delta_{P}\big{\\}}$. Note that the subgroup $W_{P}\subseteq W$ acts trivially on $\mathcal{P}_{\lambda}$. 
By Lemma 3.6, one has $\displaystyle\int_{\Delta_{\lambda}}f\circ p\;\mathrm{d}\mu=\int_{\mathcal{P}_{\lambda}}f\;\mathrm{d}p_{*}\mu=\int_{\mathcal{P}_{\lambda,+}}\Big{(}\sum_{w\in W/W_{P}}f(wx)\Big{)}\;\mathrm{d}p_{*}\mu.$ By the linearity of $f$ and Lemma 3.7, it moreover equals $\displaystyle\int_{\mathcal{P}_{\lambda,+}}f\Big{(}\sum_{w\in W/W_{P}}wx\Big{)}\;\mathrm{d}p_{*}\mu=\Big{(}\sum_{w\in W/W_{P}}f(w\lambda)\Big{)}\int_{\mathcal{P}_{\lambda,+}}1\;\mathrm{d}p_{*}\mu.$ On the other hand, $\displaystyle\operatorname{Vol}(\Delta_{\lambda})=\int_{\mathcal{P}_{\lambda}}1\;\mathrm{d}p_{*}\mu=\#{W/W_{P}}\int_{\mathcal{P}_{\lambda,+}}1\;\mathrm{d}p_{*}\mu.$ Taking ratios, we obtain the desired equality. ∎ ###### Proof of _Theorem 3.1_. Take $f={\mathrm{deg}}(F_{Q})$ in Proposition 3.5 and apply Theorem 3.4. ∎ ###### Remark 3.8. The fact that the variety height and the successive minima are given by $\displaystyle h_{\mathcal{L}_{\lambda}}(X)=\frac{1}{\operatorname{Vol}(\Delta_{\lambda})}\int_{\mathcal{P}_{\lambda}}{\mathrm{deg}}(F_{Q})\;\mathrm{d}p_{*}\mu,\quad\zeta_{w}=\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle$ shows that the Arakelov geometry of $(F/P,\mathcal{L}_{\lambda})$ depends only on the function ${\mathrm{deg}}(F_{Q})$ on $X(T)_{\mathbb{Q}}$, which might be surprising at first glance. This is in fact because the Arakelov geometry of $(F/P,\mathcal{L}_{\lambda})$ is determined by the Harder-Narasimhan filtration of $\pi_{*}\mathcal{L}_{n\lambda}=F_{Q}\times_{Q}H^{0}(G/P,M_{n\lambda})$ (Lemma 2.8) for all $n\geq 1$, and the unipotent part of $Q$ acts trivially [23, Lemma 5.1]. ###### Example 3.9. Let $G=\operatorname{GL}_{4}$, let $E$ be a vector bundle on $C$ of rank $4$ and consider the $\operatorname{Gr}(2,4)$-bundle $\operatorname{Gr}_{2}(E)$. In this case the string polytope (with respect to a suitable reduced decomposition) is just the Gelfand-Zetlin polytope [18, Remark 2.4]. Identify $X(T)_{\mathbb{Q}}$ with $\bigoplus_{i}\mathbb{Q}\lambda_{i}$ as before. The line bundle $\mathcal{O}(1)$ corresponds to the character $\lambda=(0,0,1,1)$. The Gelfand-Zetlin polytope $\operatorname{GZ}_{\lambda}$ is defined by $0\leq x_{2}\leq x_{1}\leq x_{3}\leq 1$ and $x_{2}\leq x_{4}\leq x_{3}$ in $\mathbb{R}^{4}$, with vertices $(0,0,0,0)$, $(0,0,1,0)$, $(0,0,1,1)$, $(1,0,1,0)$, $(1,0,1,1)$ and $(1,1,1,1)$. The weight polytope $\mathcal{P}_{\lambda}$ is the convex hull of $(0,0,1,1)$, $(0,1,0,1)$, $(0,1,1,0)$, $(1,1,0,0)$, $(1,0,1,0)$ and $(1,0,0,1)$, the weight map $p:\operatorname{GZ}_{\lambda}\longrightarrow\mathcal{P}_{\lambda}$ is $\displaystyle(x_{1},x_{2},x_{3},x_{4})\longmapsto(x_{1},-x_{1}+x_{2}+x_{3},-x_{2}-x_{3}+x_{4},-x_{4})+(0,0,1,1)$ and ${\mathrm{deg}}(F_{Q})$ is the function on $X(T)_{\mathbb{Q}}$ given by $\lambda_{i}\longmapsto\mu_{i}$ as in §1.5. A direct computation shows that $\displaystyle\operatorname{Vol}(\operatorname{GZ}_{\lambda})=\frac{1}{12},\quad\int_{\operatorname{GZ}_{\lambda}}{\mathrm{deg}}(F_{Q})\circ p\;\mathrm{d}\mu=\frac{1}{24}(\mu_{1}+\mu_{2}+\mu_{3}+\mu_{4}).$ Thus $h(X)=\frac{1}{2}(\mu_{1}+\mu_{2}+\mu_{3}+\mu_{4})$. On the other hand, $W/W_{P}=S_{4}\big{/}(S_{2}\times S_{2})$, $\#W/W_{P}=6$ and $\displaystyle\sum_{w\in W/W_{P}}\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle=\sum_{1\leq i<j\leq 4}(\mu_{i}+\mu_{j}).$ We see that $\frac{1}{\#W/W_{P}}\sum_{w\in W/W_{P}}\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle$ is also $\frac{1}{2}(\mu_{1}+\mu_{2}+\mu_{3}+\mu_{4})$.
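The following snippet is not part of the original text; it is a small symbolic check of the two values claimed in Example 3.9, written in Python under the assumption that the SymPy library is available.

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
mu1, mu2, mu3, mu4 = sp.symbols('mu1 mu2 mu3 mu4')

# deg(F_Q) composed with the weight map p of Example 3.9
integrand = (mu1 * x1
             + mu2 * (-x1 + x2 + x3)
             + mu3 * (-x2 - x3 + x4 + 1)
             + mu4 * (-x4 + 1))

def gz_integral(g):
    """Integrate g over the Gelfand-Zetlin polytope
    0 <= x2 <= x1 <= x3 <= 1 and x2 <= x4 <= x3."""
    g = sp.integrate(g, (x4, x2, x3))
    g = sp.integrate(g, (x2, 0, x1))
    g = sp.integrate(g, (x1, 0, x3))
    return sp.integrate(g, (x3, 0, 1))

vol = gz_integral(sp.Integer(1))
print(vol)                                        # 1/12
print(sp.expand(gz_integral(integrand)))          # mu1/24 + mu2/24 + mu3/24 + mu4/24
print(sp.simplify(gz_integral(integrand) / vol))  # mu1/2 + mu2/2 + mu3/2 + mu4/2
```

The printed values agree with $\operatorname{Vol}(\operatorname{GZ}_{\lambda})$, $\int_{\operatorname{GZ}_{\lambda}}{\mathrm{deg}}(F_{Q})\circ p\;\mathrm{d}\mu$ and $h(X)$ as displayed above.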
## 4\. Base loci and movable cones ### 4.1. Preliminaries on algebraic geometry We briefly recall some general theory of base loci. Let $\mathbf{k}$ be a field. In our case, it will be either the base field $k$ or the function field $K=K(C)$. Let $Y\rightarrow\operatorname{Spec}\mathbf{k}$ be a normal projective variety over $\mathbf{k}$, and let $L$ be a line bundle over $Y$. Let $V$ be a $\mathbf{k}$-subspace of $H^{0}(Y,L)$. Consider the evaluation map $\displaystyle H^{0}(Y,L)\otimes_{\mathbf{k}}L^{\vee}\rightarrow\mathcal{O}_{Y}.$ The base locus $\mathrm{Bs}(V)$ of $V$ is the reduced scheme associated to the closed subscheme defined by the ideal sheaf given by the image of $V\otimes{L^{\vee}}\rightarrow\mathcal{O}_{Y}$. ###### Definition 4.1. The stable base locus of $L$ is defined as $\mathrm{B}(L)=\bigcap_{n\geq 1}\mathrm{Bs}(H^{0}(Y,nL))$. Take any ample line bundle $A$. The augmented base locus of $L$ is defined as $\mathrm{B}_{+}(L)=\bigcap_{n\geq 1}\mathrm{B}(nL-A)$, which is independent of the choice of $A$. The following characterization of augmented base loci in terms of restricted volumes is well known; we include it here for the reader’s convenience. ###### Theorem 4.2 ([14], Theorem C). Let $Z$ be a closed subvariety of $Y$. The restricted volume is defined as $\displaystyle\operatorname{Vol}(Y|Z,L)=\limsup_{n\rightarrow+\infty}\frac{\dim_{\mathbf{k}}\mathrm{Im}(H^{0}(Y,nL)\rightarrow H^{0}(Z,nL|_{Z}))}{n^{\dim Z}/(\dim Z)!}$ where $\mathrm{Im}(\cdot)$ denotes the image. Then $\mathrm{B}_{+}(L)=\bigcup_{\operatorname{Vol}(Y|Z,L)=0}Z$. Let $\mathrm{Pic}(Y)$ be the Picard group of $Y$. Since $\mathrm{B}(L)=\mathrm{B}(mL)$ for any $m>1$, we can extend the definitions of $\mathrm{B}(\cdot)$ and $\mathrm{B}_{+}(\cdot)$ to $\mathrm{Pic}(Y)_{\mathbb{Q}}:=\mathrm{Pic}(Y)\otimes\mathbb{Q}$. Moreover, let $N^{1}(Y):=\mathrm{Div}(Y)/\equiv_{num}$ be the group of divisors modulo numerical equivalence. It is proved in [13] that the augmented base loci are well defined on $N^{1}(Y)_{\mathbb{R}}:=N^{1}(Y)\otimes_{\mathbb{Z}}\mathbb{R}$. Let $\mathcal{Y}$ be a projective variety with a surjective morphism $\mathcal{Y}\rightarrow C$, and let $\mathcal{L}$ be a line bundle over $\mathcal{Y}$. For any scheme-theoretical point $p\in C$, we denote by $\kappa(p)$ the residue field of $p$. Let $\mathcal{Z}$ be a subvariety of $\mathcal{Y}$. The fiber $\mathcal{Z}\times_{C}\mathrm{Spec}(\kappa(p))$ of $\mathcal{Z}$ over $p$ is denoted by $\mathcal{Z}_{p}$. We also denote by $\mathcal{L}_{p}$ the restriction $\mathcal{L}|_{\mathcal{Y}_{p}}$ of $\mathcal{L}$ to $\mathcal{Y}_{p}$. We have the following lemma, which will be useful for our computation of augmented base loci in the next subsection. ###### Lemma 4.3. The restriction of $\mathrm{B}(\mathcal{L})$ to $\mathcal{Y}_{p}$ can be computed by $\displaystyle\mathrm{B}(\mathcal{L})_{p}=\bigcap\nolimits_{n\geq 1}\mathrm{Bs}(\mathrm{Im}(H^{0}(\mathcal{Y},n\mathcal{L})\otimes_{k}\kappa(p)\rightarrow H^{0}(\mathcal{Y}_{p},n\mathcal{L}_{p}))).$ ### 4.2. Augmented base loci on the flag bundle $\mathcal{X}$ Let $\lambda\in X(P)$ be a strictly anti-dominant character. We fix a closed point $p_{0}\in C$, and set $f=\pi^{*}(\mathcal{O}_{C}(p_{0}))$. In this subsection, we are going to compute the augmented base locus $\mathrm{B}_{+}(\mathcal{L}_{\lambda}-tf)$.
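Before carrying out the computation, here is a sketch of the expected answer in the simplest case; it is not part of the original text and uses the same rank-two conventions as the illustration at the end of §2.5 (the $\mu_{i}$ are assumed to be the Harder-Narasimhan slopes of §1.5, with $\mu_{1}\geq\mu_{2}$). For $G=\operatorname{GL}_{2}$, $P=Q=B$, $\mathcal{X}=\mathbb{P}(E)$ for a rank-two bundle $E$ on $C$ and $\mathcal{L}_{\lambda}=\mathcal{O}_{\mathbb{P}(E)}(1)$, Theorem 4.6 below specializes to the following: $\mathrm{B}_{+}(\mathcal{L}_{\lambda}-tf)=\emptyset$ for $t<\mu_{2}$; $\mathrm{B}_{+}(\mathcal{L}_{\lambda}-tf)$ is the section of $\mathbb{P}(E)\longrightarrow C$ cut out by the canonical reduction $F_{Q}$ (equivalently, by the Harder-Narasimhan filtration of $E$) for $\mu_{2}\leq t<\mu_{1}$; and $\mathrm{B}_{+}(\mathcal{L}_{\lambda}-tf)=\mathcal{X}$ for $t\geq\mu_{1}$.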
Remind that we have the decomposition $\displaystyle H^{0}(G/P,M_{\lambda})=\bigoplus_{\nu\in X(T)}H^{0}(G/P,M_{\lambda})[\nu]$ and the filtration $\displaystyle H^{0}(G/P,M_{\lambda})_{t,{\mathrm{deg}}(F_{Q})}=\bigoplus_{\langle{\mathrm{deg}}(F_{Q}),\nu\rangle\geq t}H^{0}(G/P,M_{\lambda})[\nu].$ We start with a lemma describing the vanishing behaviour of sections in $H^{0}(G/P,M_{\lambda})[\nu]$. ###### Lemma 4.4. Take $w\in W/W_{P}$. Then 1. (1) $s\neq 0\in H^{0}(G/P,M_{\lambda})[w\lambda]$ $\Longrightarrow$ $s(x)\neq 0$ for any $x\in BwP/P$. 2. (2) $s\in H^{0}(G/P,M_{\lambda})[\nu]$ for $\nu\not\leq w\lambda$ $\Longrightarrow$ $s(x)=0$ for any $x\in\overline{BwP}/P$. ###### Proof. Consider the restriction map $\displaystyle\mathrm{Res}:\ H^{0}(G/P,M_{\lambda})\longrightarrow H^{0}(\overline{BwP}/P,M_{\lambda}),$ which is $B$-equivariant and surjective. Since $H^{0}(G/P,M_{\lambda})[w\lambda]$ and $H^{0}(\overline{BwP}/P,M_{\lambda})[w\lambda]$ are both one-dimensional, one has $\mathrm{Res}(s)\neq 0$ for any $s\neq 0\in H^{0}(G/P,M_{\lambda})[w\lambda]$. By the closedness of the vanishing locus of $\mathrm{Res}(s)$, there exists $x\in BwP/P$ such that $\mathrm{Res}(s)(x)\neq 0$. Note that by the highest weight theory, $B$ acts on $\mathrm{Res}(s)$ via $w\lambda$ and $BwP/P$ is a single $B$-orbit under translation. Consequently, $\mathrm{Res}(s)(x)\neq 0$ for any $x\in BwP/P$. As for $(2)$, for any $\nu\not\leq w\lambda$, $H^{0}(\overline{BwP}/P,M_{\lambda})[\nu]=0$. Consequently $\mathrm{Res}(s)=0$ for any $s\in H^{0}(G/P,M_{\lambda})[\nu]$. ∎ ###### Lemma 4.5. Let $p$ be a scheme-theoretical point of $C$. Consider the filtration $H^{0}(\mathcal{X}_{p},\mathcal{L}_{\lambda,p})_{\bullet,{\mathrm{deg}}(F_{Q})}$ of $H^{0}(\mathcal{X}_{p},\mathcal{L}_{\lambda,p})$ given by $\displaystyle H^{0}(\mathcal{X}_{p},\mathcal{L}_{\lambda,p})_{t,{\mathrm{deg}}(F_{Q})}=F_{Q,\kappa(p)}\times_{Q}H^{0}(G/P,M_{\lambda})_{t,{\mathrm{deg}}(F_{Q})}$ for $t\in\mathbb{R}$, we have $\operatorname{Bs}\Big{(}H^{0}(\mathcal{X}_{p},\mathcal{L}_{\lambda,p})_{t,{\mathrm{deg}}(F_{Q})}\Big{)}=\mathcal{X}_{p}\backslash\coprod_{\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle\geq t}\mathcal{C}_{w,p}$. ###### Proof. Let $F_{T,\kappa(p)}$ be a reduction of $F_{Q,\kappa(p)}$ to $T$. For any $\nu\in X(T)$, consider the $\kappa(p)$-subspace $H^{0}(\mathcal{X}_{p},\mathcal{L}_{\lambda,p})[\nu]=F_{T,\kappa(p)}\times_{T}H^{0}(G/P,M_{\lambda})[\nu]$ of $H^{0}(\mathcal{X}_{p},\mathcal{L}_{\lambda,p})=F_{T,\kappa(p)}\times_{T}H^{0}(G/P,M_{\lambda})$. For $w^{\prime}\in W/W_{P}$, set $C_{w^{\prime}}=F_{T,\kappa(p)}\times_{T}Bw^{\prime}P/P$ and $X_{w^{\prime}}=F_{T,\kappa(p)}\times_{T}\overline{Bw^{\prime}P}/P$. Then $\mathcal{C}_{w,p}=\coprod_{\pi(w^{\prime})=w}C_{w^{\prime}}$ and $\mathcal{X}_{w,p}=\bigcup_{\pi(w^{\prime})=w}X_{w^{\prime}}$ where $\pi:W/W_{P}\longrightarrow W_{Q}\backslash W/W_{P}$ is the natural projection. Since the principal $T$-bundle $F_{T,\kappa(p)}$ is trivial, we deduce that $s(x)\neq 0$ for any $x\in C_{w^{\prime}}$, any $w^{\prime}\in\pi^{-1}(w)$, and any $s\in H^{0}(\mathcal{X}_{p},\mathcal{L}_{\lambda,p})[w^{\prime}\lambda]$ from Lemma 4.4(1). 
Thus if $\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle\geq t$, then $H^{0}(\mathcal{X}_{p},\mathcal{L}_{\lambda,p})[w^{\prime}\lambda]\subseteq H^{0}(\mathcal{X}_{p},\mathcal{L}_{\lambda,p})_{t,{\mathrm{deg}}(F_{Q})}$ for any $w^{\prime}\in\pi^{-1}(w)$ and consequently $x\not\in\operatorname{Bs}(H^{0}(\mathcal{X}_{p},\mathcal{L}_{\lambda,p})_{t,{\mathrm{deg}}(F_{Q})})$ for any $x\in C_{w^{\prime}}$. This proves $\operatorname{Bs}(H^{0}(\mathcal{X}_{p},\mathcal{L}_{\lambda,p})_{t,{\mathrm{deg}}(F_{Q})})\subseteq\mathcal{X}_{p}\backslash\coprod_{\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle\geq t}\mathcal{C}_{w,p}$. On the other hand, we claim that if $\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle<t$, then for any $s\in H^{0}(\mathcal{X}_{p},\mathcal{L}_{\lambda,p})_{t,{\mathrm{deg}}(F_{Q})}$, the restriction of $s$ to $X_{w}$ is zero. In fact, we may assume $s\in H^{0}(\mathcal{X}_{p},\mathcal{L}_{\lambda,p})[\nu]$ for some $\nu$, and $\langle{\mathrm{deg}}(F_{Q}),\nu\rangle\geq t$ implies that $\nu\not\leq w^{\prime}\lambda$ for any $w^{\prime}\in\pi^{-1}(w)$ (because if $\nu\leq w^{\prime}\lambda$ for some $w^{\prime}$, then $\langle{\mathrm{deg}}(F_{Q}),\nu\rangle\leq\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle<t$). By Lemma 4.4(2), the restriction of $s$ to $X_{w^{\prime}}$ is zero for each $w^{\prime}$. This proves the claim, and the lemma follows. ∎ Now we are ready to compute the augmented base locus of $\mathcal{L}_{\lambda}-tf$. ###### Theorem 4.6. The augmented base locus of $\mathcal{L}_{\lambda}-tf$ is given by $\displaystyle\mathrm{B}_{+}(\mathcal{L}_{\lambda}-tf)=\mathcal{X}\Bigg{\backslash}\coprod_{\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle>t}\mathcal{C}_{w}=\coprod_{\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle\leq t}\mathcal{C}_{w}.$ ###### Proof. We may assume that $t\in\mathbb{N}$ by taking a sufficiently large tensor power of $\mathcal{L}_{\lambda}-tf$. Let $t_{0}=\min\{\langle{\mathrm{deg}}(F_{Q}),\nu\rangle:H^{0}(G/P,M_{\lambda})[\nu]\not=0\text{ and }\langle{\mathrm{deg}}(F_{Q}),\nu\rangle>t\}$. Take $E=F_{Q}\times_{Q}H^{0}(G/P,M_{\lambda})_{t_{0},{\mathrm{deg}}(F_{Q})}\otimes\mathcal{O}_{C}(-tp_{0})$, which is a subbundle of $\pi_{*}(\mathcal{L}_{\lambda}-tf).$ Moreover, $\mu_{\min}(E)=t_{0}-t>0$, hence $E$ is ample. For each scheme-theoretical point $p\in C$, we have $E(p):=E\otimes_{C}\kappa(p)=F_{Q,\kappa(p)}\times_{Q}H^{0}(G/P,M_{\lambda})_{t_{0},{\mathrm{deg}}(F_{Q})}.$ The ampleness of $E$ gives that $\displaystyle H^{0}(C,\mathrm{Sym}^{m}E)\otimes_{k}\kappa(p)\longrightarrow\mathrm{Sym}^{m}(E(p))$ is surjective for every $m\gg 0$. Since $\pi_{*}(\mathcal{L}_{\lambda})\otimes_{C}\kappa(p)=H^{0}(\mathcal{X}_{p},\mathcal{L}_{\lambda,p})$, we obtain a commutative diagram of evaluation maps which implies that $V:=\mathrm{Im}(\mathrm{Sym}^{m}E(p)\rightarrow H^{0}(\mathcal{X}_{p},m\mathcal{L}_{\lambda,p}))$ is contained in the image $\mathrm{Im}(H^{0}(C,\pi_{*}(m\mathcal{L}_{\lambda}-tmf))\otimes_{k}\kappa(p)\rightarrow H^{0}(\mathcal{X}_{p},m\mathcal{L}_{\lambda,p}))=\mathrm{Im}(H^{0}(\mathcal{X},m\mathcal{L}_{\lambda}-tmf)\otimes_{k}\kappa(p)\rightarrow H^{0}(\mathcal{X}_{p},m\mathcal{L}_{\lambda,p})).$ Hence $\displaystyle\mathrm{B}(\mathcal{L}_{\lambda}-tf)_{p}\subseteq\mathrm{Bs}(V)=\mathrm{Bs}(E(p))=\mathcal{X}_{p}\backslash\coprod\nolimits_{\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle>t}\mathcal{C}_{w,p}$ by applying Lemma 4.5 to $H^{0}(\mathcal{X}_{p},\mathcal{L}_{\lambda,p})_{t_{0},{\mathrm{deg}}(F_{Q})}$.
Consequently, (4.1) $\displaystyle\mathrm{B}(\mathcal{L}_{\lambda}-tf)\subseteq\mathcal{X}\backslash\coprod\nolimits_{\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle>t}\mathcal{C}_{w}.$ Now we take an ample line bundle $\mathcal{A}$ over $\mathcal{X}$ of the form $\mathcal{L}_{\lambda^{\prime}}-t^{\prime}f$ where $\lambda^{\prime}\in X(T)$ is a strictly anti-dominant character and $\langle{\mathrm{deg}}(F_{Q}),\nu\rangle-t^{\prime}>0$ for any $\nu\in X(T)$ such that $H^{0}(G/P,M_{\lambda^{\prime}})[\nu]\not=0.$ Take $n$ sufficiently large so that $\langle{\mathrm{deg}}(F_{Q}),w\lambda^{\prime}\rangle-t^{\prime}<n(t_{0}-t)$ for any $w\in W_{Q}\backslash W/W_{P}$. Thus $\langle{\mathrm{deg}}(F_{Q}),w(n\lambda-\lambda^{\prime})\rangle>nt-t^{\prime}$ if and only if $\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle\geq t_{0}.$ Arguing as for (4.1), we obtain that $\displaystyle\mathrm{B}_{+}(\mathcal{L}_{\lambda}-tf)\subseteq\mathrm{B}\big(n(\mathcal{L}_{\lambda}-tf)-(\mathcal{L}_{\lambda^{\prime}}-t^{\prime}f)\big)\subseteq\mathcal{X}\Bigg{\backslash}\coprod_{\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle>t}\mathcal{C}_{w}.$ On the other hand, for any $w\in W_{Q}\backslash W/W_{P}$ such that $\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle\leq t$, since $\displaystyle\limsup_{m\rightarrow+\infty}\frac{\mu_{\max}\big(\pi_{*}(m(\mathcal{L}_{\lambda}-tf)|_{\mathcal{X}_{w}})\big)}{m}=\zeta_{1}(\mathcal{L}_{\lambda}-tf|_{\mathcal{X}_{w}})=\zeta_{w}-t\leq 0,$ we have $\operatorname{Vol}(\mathcal{L}_{\lambda}-tf|_{\mathcal{X}_{w}})=0$ by a comparison result [12, Theorem 2.4] deduced from the Riemann-Roch theorem. Therefore $\operatorname{Vol}(\mathcal{X}|\mathcal{X}_{w},\mathcal{L}_{\lambda}-tf)=0$, which implies that $\mathcal{X}_{w}\subseteq\mathrm{B}_{+}(\mathcal{L}_{\lambda}-tf)$ by Theorem 4.2. In conclusion, $\mathrm{B}_{+}(\mathcal{L}_{\lambda}-tf)=\mathcal{X}\backslash\coprod_{\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle>t}\mathcal{C}_{w}.$ ∎ ###### Remark 4.7. Comparing Theorem 2.11 and Theorem 4.6, we find that $\mathrm{B}_{+}(\mathcal{L}_{\lambda}-tf)_{K}$ agrees with the height filtration $Z_{t}$ for every $t\not=\zeta_{w}$. At the critical values, $\mathrm{B}_{+}$ is continuous from above while $Z_{t}$ is continuous from below. ### 4.3. Positive cones of flag bundles over curves We first recall the definitions of various positive cones of a projective variety. Let $Y$ be a smooth projective variety over a field $\mathbf{k}$ (of characteristic zero). We recall the following: 1. (1) The big cone $\mathrm{Big}(Y)\subseteq N^{1}(Y)_{\mathbb{R}}$ is the cone of numerical classes of big divisors. Its closure is called the _pseudo-effective cone_, denoted $\mathrm{Psef}(Y)$. 2. (2) The ample cone $\mathrm{Amp}(Y)\subseteq N^{1}(Y)_{\mathbb{R}}$ is the cone of numerical classes of ample divisors. Its closure is called the nef cone, denoted $\mathrm{Nef}(Y)$. 3. (3) For each $k=1,\cdots,\dim Y$, the $k$-th movable cone is defined as $\operatorname{Mov}^{k}(Y)=\big\{D\in N^{1}(Y)_{\mathbb{R}}:\mathrm{codim}(\mathrm{B}_{+}(D))\geq k\big\}.$ These are open cones in $N^{1}(Y)_{\mathbb{R}}$ and we have the obvious inclusions $\operatorname{Mov}^{d}\subseteq\cdots\subseteq\operatorname{Mov}^{1}$, where $d=\dim Y$. Note that $\operatorname{Mov}^{1}$ is the big cone and $\operatorname{Mov}^{d}$ is the ample cone. Now we come back to the case of flag varieties. Denote by $\displaystyle c:X(P)\longrightarrow\mathrm{Pic}(G/P),\quad\lambda\longmapsto M_{\lambda}$ the character map.
We have an exact sequence [20, Theorem 18.32]: $\displaystyle 0\longrightarrow X(G)\longrightarrow X(P)\longrightarrow\mathrm{Pic}(G/P)\longrightarrow\mathrm{Pic}(G)\longrightarrow\mathrm{Pic}(P)\longrightarrow 0.$ Since $G$ and $P$ are connected linear algebraic groups over a characteristic zero field, both $\mathrm{Pic}(G)$ and $\operatorname{Pic}(P)$ are finite. Notice that the numerical equivalence and the linear equivalence coincide on $G/P$ since $G/P$ is smooth fano. Therefore the linear map $X(P)_{\mathbb{R}}\rightarrow N^{1}(G/P)_{\mathbb{R}}$ is surjective. It is well-known that $M_{\lambda}$ is ample $\Longleftrightarrow$ $M_{\lambda}$ is big $\Longleftrightarrow$ $\lambda$ is strictly anti-dominant, thus $\displaystyle\mathrm{Amp}(G/P)=\mathrm{Big}(G/P)=\big{\\{}c(\lambda):\langle\alpha^{\vee},\lambda\rangle<0\text{ for all $\alpha\in\Delta\backslash\Delta_{P}$}\big{\\}}$ where the pairing $\langle\cdot,\cdot\rangle$ is extended from $X(P)$ to $X(P)_{\mathbb{R}}$ by linearity. This implies $\mathrm{Mov}^{i}(G/P)=\mathrm{Amp}(G/P)$ for any $i=1,\cdots,\dim(G/P)$, which is the image of the strictly anti-dominant cone in $X(P)$ under $c:X(P)_{\mathbb{R}}\longrightarrow N^{1}(G/P)_{\mathbb{R}}$. We now compute $\mathrm{Mov}^{i}(F/P)$. ###### Lemma 4.8. Let $f\in N^{1}(F/P)$ be the class of a vertical fiber. Then every element in $N^{1}(F/P)$ can be writen as $\mathcal{L}_{\lambda}-tf$. ###### Proof. Apply [20, Theorem 18.32] to the $G$-bundle $F\longrightarrow C$. We get an exact sequence $\displaystyle X(P)\longrightarrow\operatorname{Pic}(F/P)\longrightarrow\operatorname{Pic}(F)\longrightarrow\operatorname{Pic}(P).$ Apply it to the $P$-bundle $F\longrightarrow F/P$. We get another exact sequence $\displaystyle X(G)\longrightarrow\operatorname{Pic}(C)\longrightarrow\operatorname{Pic}(F)\longrightarrow\operatorname{Pic}(G).$ Note that after tensoring $\mathbb{R}$, $\operatorname{Pic}(P)_{\mathbb{R}}=\operatorname{Pic}(G)_{\mathbb{R}}=0$. Consider the diagram where $g$, $\phi$ and $\psi$ are just pullbacks. Note that $g$ and $\psi$ are surjective. Let $x\in\operatorname{Pic}(F/P)$ such that $g(x)=\psi(a)$ for some $a\in\operatorname{Pic}(C)_{\mathbb{R}}$. Then $g(x-\phi(a))=0$ and the first line is exact at the middle, so $x-\phi(a)=f(b)$. ∎ By the above description, we define two functions $\langle\alpha^{\vee},\cdot\rangle$ and $\langle{\mathrm{deg}}(F_{Q}),w\cdot\rangle$ on $N^{1}(F/P)_{\mathbb{R}}$ as 1. (1) $\langle\alpha^{\vee},\cdot\rangle$ sends $\mathcal{L}_{\lambda}-tf$ to $\langle\alpha^{\vee},\lambda\rangle$. 2. (2) $\langle{\mathrm{deg}}(F_{Q}),w\cdot\rangle$ sends $\mathcal{L}_{\lambda}-tf$ to $\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle-t$. To see they are well-defined, it suffices to deal with the case $\mathcal{L}_{\lambda}$ equals to a multiple of $f$. In this case, we see that $M_{\lambda}$ is a trivial line bundle on $G/P$ by restricting $\mathcal{L}_{\lambda}$ to a fiber. This implies $\lambda\in X(G)$ by the exact sequence $0\longrightarrow X(G)\longrightarrow X(P)\longrightarrow\operatorname{Pic}(G/P)$. Then $\mathcal{L}_{\lambda}=F\times_{P}k_{\lambda}=\pi^{*}(F\times_{G}k_{\lambda})$ where $\pi$ is the structure map $F/P\longrightarrow C$. In particular, $\mathcal{L}_{\lambda}=\langle{\mathrm{deg}}(F),\lambda\rangle f$ in $N^{1}(F/P)_{\mathbb{R}}$. 1. (1) $\langle\alpha^{\vee},\cdot\rangle$ sends $f$ to zero and sends $\lambda\in X(G)$ also to zero. This proves $\langle\alpha^{\vee},\cdot\rangle$ is well- defined. 2. 
(2) $\langle{\mathrm{deg}}(F_{Q}),w\cdot\rangle$ sends $\langle{\mathrm{deg}}(F),\lambda\rangle f$ to $\langle{\mathrm{deg}}(F),\lambda\rangle$ since $\lambda\in X(G)$ is invariant under Weyl group action and sends $\lambda\in X(G)$ also to $\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle=\langle{\mathrm{deg}}(F_{Q}),\lambda\rangle=\langle{\mathrm{deg}}(F),\lambda\rangle$ since $F\times_{G}k_{\lambda}=F_{Q}\times_{Q}G\times_{G}k_{\lambda}=F_{Q}\times_{Q}k_{\lambda}$. This proves $\langle{\mathrm{deg}}(F_{Q}),w\cdot\rangle$ is well-defined. ###### Theorem 4.9. The $k$-th movable cone $\operatorname{Mov}^{k}(F/P)$ is the cone defined by 1. (1) $\langle\alpha^{\vee},\cdot\rangle<0$ for any $\alpha\in\Delta\backslash\Delta_{P}$, and 2. (2) $\langle{\mathrm{deg}}(F_{Q}),w\cdot\rangle>0$ for any $w\in W/W_{P}$ with $\ell(w)\geq n-k+1$. ###### Proof. The sufficiency follows immediately after Theorem 4.6. For the necessity, let $\mathcal{L}_{\lambda}-tf$ be a line bundle on $F/P$ such that $\mathrm{codim}(\mathrm{B}_{+}(\mathcal{L}_{\lambda}-tf))\geq k$. Then in particular $\mathcal{L}_{\lambda}-tf$ is big. This impies $M_{\lambda}$ is big on $G/P$, which is equivalent to $\langle\alpha^{\vee},\lambda\rangle<0$ for all $\alpha\in\Delta\backslash\Delta_{P}$. Then $\lambda$ is strictly anti- dominant and Theorem 4.6 applies, saying $\mathrm{B}_{+}(\mathcal{L}_{\lambda}-tf)=\mathcal{X}\backslash\coprod_{\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle-t>0}\mathcal{C}_{w}$. It has $\operatorname{codim}\geq k$ $\Longleftrightarrow$ $\langle{\mathrm{deg}}(F_{Q}),w\lambda\rangle-t>0$ for all $w$ with $\ell(w)\geq n-k+1$. ∎ ###### Corollary 4.10. $\mathcal{L}_{\lambda}-tf\in\operatorname{Mov}^{k}(F/P)$ $\Longleftrightarrow$ $e_{k}(h_{\mathcal{L}_{\lambda}})>t$ where $e_{k}$ is the $k$-th minimum of Zhang. ## References * [1] Valery Alexeev and Michel Brion. Toric degenerations of spherical varieties. Selecta Math. (N.S.), 10(4):453–478, 2004. * [2] François Ballaÿ. Successive minima and asymptotic slopes in Arakelov geometry. Compos. Math., 157(6):1302–1339, 2021. * [3] Kai A. Behrend. Semi-stability of reductive group schemes over curves. Mathematische Annalen, 301(1):281–305, 1995. * [4] Indranil Biswas, Amit Hogadi, and A. J. Parameswaran. Pseudo-effective cone of Grassmann bundles over a curve. Geom. Dedicata, 172:69–77, 2014. * [5] Indranil Biswas and Yogish I. Holla. Harder–Narasimhan reduction of a principal bundle. Nagoya Mathematical Journal, 174:201–223, 2004. * [6] Indranil Biswas and A. J. Parameswaran. Nef cone of flag bundles over a curve. Kyoto J. Math., 54(2):353–366, 2014. * [7] J.-B. Bost, H. Gillet, and C. Soulé. Heights of projective varieties and positive Green forms. J. Amer. Math. Soc., 7(4):903–1027, 1994. * [8] Sébastien Boucksom and Huayi Chen. Okounkov bodies of filtered linear series. Compos. Math., 147(4):1205–1229, 2011. * [9] José Ignacio Burgos Gil, Patrice Philippon, and Martín Sombra. Arithmetic geometry of toric varieties. Metrics, measures and heights. Astérisque, (360):vi+222, 2014. * [10] José Ignacio Burgos Gil, Patrice Philippon, and Martín Sombra. Successive minima of toric height functions. Annales de l’institut Fourier, 65(5):2145–2197, 2015. * [11] Huayi Chen. Computing volume function on projective bundle over a curve. Hodge theory and algebraic geometry, RIMS Kôkyûroku, 1745:169–182, 2011. * [12] Huayi Chen. Majorations explicites des fonctions de Hilbert-Samuel géométrique et arithmétique. Math. Z., 279(1-2):99–137, 2015. 
* [13] Lawrence Ein, Robert Lazarsfeld, Mircea Mustaţă, Michael Nakamaye, and Mihnea Popa. Asymptotic invariants of base loci. Ann. Inst. Fourier (Grenoble), 56(6):1701–1734, 2006. * [14] Lawrence Ein, Robert Lazarsfeld, Mircea Mustaţă, Michael Nakamaye, and Mihnea Popa. Restricted volumes and base loci of linear series. Amer. J. Math., 131(3):607–651, 2009. * [15] Mihai Fulger. The cones of effective cycles on projective bundles over curves. Math. Z., 269(1-2):449–459, 2011. * [16] Mihai Fulger and Brian Lehmann. Zariski decompositions of numerical cycle classes. J. Algebraic Geom., 26(1):43–106, 2017. * [17] Jens Jantzen. Representations of Algebraic Groups, volume 107 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, Rhode Island, second edition, 2007. * [18] Kiumars Kaveh. Note on cohomology rings of spherical varieties and volume polynomial. J. Lie Theory, 21(2):263–283, 2011. * [19] Robert Lazarsfeld and Mircea Mustaţă. Convex bodies associated to linear series. Ann. Sci. Éc. Norm. Supér. (4), 42(5):783–835, 2009. * [20] J. S. Milne. Algebraic groups, volume 170 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2017. The theory of group schemes of finite type over a field. * [21] Yoichi Miyaoka. The Chern classes and Kodaira dimension of a minimal variety. In Algebraic geometry, Sendai, 1985, volume 10 of Adv. Stud. Pure Math., pages 449–476. North-Holland, Amsterdam, 1987. * [22] Binggang Qu and Hang Yin. Arithmetic Demailly Approximation Theorem, April 2023. arXiv:2208.13230 [math]. * [23] Simon Schieder. The Harder-Narasimhan stratification of the moduli stack of G-bundles via Drinfeld’s compactifications. Selecta Mathematica, 21(3):763–831, 2015. * [24] L. Szpiro, E. Ullmo, and S. Zhang. Équirépartition des petits points. Invent. Math., 127(2):337–347, 1997. * [25] Xinyi Yuan. Algebraic dynamics, canonical heights and Arakelov geometry. In Fifth International Congress of Chinese Mathematicians. Part 1, 2, volume 51, pt. 1, 2 of AMS/IP Stud. Adv. Math., pages 893–929. Amer. Math. Soc., Providence, RI, 2012. * [26] Shou-Wu Zhang. Equidistribution of small points on abelian varieties. Ann. of Math. (2), 147(1):159–165, 1998. * [27] Shouwu Zhang. Small points and adelic metrics. J. Algebraic Geom., 4(2):281–300, 1995.
# Volume growth, number of ends and the topology of a complete submanifold Vicent Gimeno Departament de Matemàtiques - Institut of New Imaging Technologies, Universitat Jaume I, Castellon, Spain<EMAIL_ADDRESS>and Vicente Palmer Departament de Matemàtiques - Institut of New Imaging Technologies, Universitat Jaume I, Castellon, Spain<EMAIL_ADDRESS> ###### Abstract. Given a complete isometric immersion $\varphi:P^{m}\longrightarrow N^{n}$ in an ambient Riemannian manifold $N^{n}$ with a pole and with radial sectional curvatures bounded from above by the corresponding radial sectional curvatures of a radially symmetric space $M^{n}_{w}$, we determine a set of conditions on the extrinsic curvatures of $P$ that guarantees that the immersion is proper and that $P$ has finite topology, in the line of the results in [24] and [25]. When the ambient manifold is a radially symmetric space, an inequality is shown between the (extrinsic) volume growth of a complete and minimal submanifold and its number of ends, which generalizes the classical inequality stated in [1] for complete and minimal submanifolds in $\mathbb{R}^{n}$. We obtain as a corollary the corresponding inequality between the (extrinsic) volume growth and the number of ends of a complete and minimal submanifold in the Hyperbolic space, together with Bernstein-type results for such submanifolds in Euclidean and Hyperbolic spaces, in the vein of the work [12]. ###### Key words and phrases: volume growth, minimal submanifold, end, Hessian-Index comparison theory, extrinsic distance, total extrinsic curvature, second fundamental form, gap theorem, Bernstein-type theorem. ###### 2000 Mathematics Subject Classification: Primary 53A20, 53C40; Secondary 53C42 * Work partially supported by the Caixa Castelló Foundation, and DGI grant MTM2010-21206-C02-02. ## 1\. Introduction A natural question in Riemannian geometry is to explore the influence of the curvature behavior of a complete Riemannian manifold on its geometric and topological properties. Classical results concerning this are the gap theorems proved by Greene and Wu in [7], (see also [8]), and, when one considers a minimal submanifold (properly) immersed in the Euclidean space $\mathbb{R}^{n}$, the Bernstein-type theorems proved by Anderson in [1] and by Schoen in [32]. Greene and Wu’s results state, roughly speaking, that a Riemannian manifold with a pole and with faster than quadratic decay of its sectional curvatures is isometric to the Euclidean space. On the other hand, Anderson proved, as a corollary of a generalization of the Chern-Osserman theorem on complete and minimal submanifolds of $\mathbb{R}^{n}$ with finite total (extrinsic) curvature, that any such submanifold having only one end is an affine plane. More examples concerning submanifolds immersed in an ambient Riemannian manifold and the analysis of their (intrinsic and extrinsic) curvature behavior are the gap results (of Bernstein type) given by Kasue and Sugahara in [12] (see Theorems A and B), where an accurate (extrinsic) curvature decay forces minimal (or non-minimal) submanifolds with one end of the Euclidean and Hyperbolic spaces to be totally geodesic, and the gap results for minimal submanifolds in the Euclidean space with controlled scalar curvature given by Kasue in [13]. The estimation of the number of ends of these submanifolds plays a fundamental rôle in all the Bernstein-type results mentioned above.
In this way, it is proved in [1] (see Theorems 4.1 and 5.1 in that paper) that given a complete and minimal submanifold $\varphi:P^{m}\longrightarrow\mathbb{R}^{n}$, ($m>2$) having finite total curvature $\int_{P}\|B^{P}\|^{m}d\sigma<\infty$, its (extrinsic) volume growth, defined as the quotient $\frac{\operatorname{Vol}(\varphi(P)\cap B^{0,n}_{t})}{\omega_{n}t^{n}}$ is bounded from above by the number of ends of $P$, $\mathcal{E}(P)$, namely (1.1) $\lim_{t\to\infty}\frac{\operatorname{Vol}(\varphi(P)\cap B^{0,n}_{t})}{\omega_{n}t^{n}}\leq\mathcal{E}(P)$ where $B^{b,n}_{t}$ denotes the metric $t-$ ball in the real space form of constant curvature $b$, $I\\!\\!K^{n}(b)$, and $\|B^{P}\|$ denotes the Hilbert-Schmidt norm of the second fundamental form of $P$ in $\mathbb{R}^{n}$. If moreover $\mathcal{E}(P)=1$, it is concluded (using inequality (1.1)) the Bernstein-type result above alluded, namely, that $P^{m}$ is an affine plane, i.e. totally geodesic in $\mathbb{R}^{n}$, (see Theorem 5.2 in [1]). In the paper [3] it was proved that inequality (1.1) is in fact an equality when the minimal submanifold in $\mathbb{R}^{n}$ exhibits an accurate decay of its extrinsic curvature $\|B^{P}\|$ and in the paper [12] it was proved that, if the submanifold $P$ has only one end and the decay of its extrinsic curvature $\|B^{P}\|$ is faster than linear, (when the ambient space is $\mathbb{R}^{n}$) or than exponential, (when the ambient space is $\mathbb{H}^{n}(b)$), then it is is totally geodesic. Within this study of the behavior at infinity of complete and minimal submanifolds with finite total curvature immersed in the Euclidean space, it was proved also in [1] and in [22] that the immersion of a complete and minimal submanifold $P$ in $\mathbb{R}^{n}$ or $\mathbb{H}^{n}(b)$ satisfying $\int_{P}\|B^{P}\|^{m}d\sigma<\infty$ is proper and that $P$ is of finite topological type. We should mention here the results in [24] and in [25], where has been stated new conditions on the decay of the extrinsic curvature for a completely immersed submanifold $P$ in the Euclidean space ([24]) and in a Cartan- Hadamard manifold ([25]) which guarantees the properness of the submanifold and the finiteness of its topology. In view of these results, it seems natural to consider the following three issues: 1. (1) Can the properness/finiteness results in [24] and [25] be extended to submanifolds immersed in spaces which have not necessarily non-positive curvature?, 2. (2) Do we have an analogous to inequality (1.1) between the extrinsic volume growth and the number of ends when we consider a minimal submanifold (properly) immersed in Hyperbolic space which exhibit an accurate extrinsic curvature decay?. 3. (3) Moreover, is it possible to deduce from this inequality a Bernstein-type result in the line of [1] and [12]?. We provide in this paper a (partial) answer to these questions, besides other lower bounds for the number of ends for (non-minimal) submanifolds in the Euclidean and Hyperbolic spaces and other gap results related with these estimates. As a preliminary view of our results, we have the following theorems, Theorem 1.1 and Theorem 1.2, which follows directly from our Theorem 3.5. In Theorem 1.1 we have the answer to the two last questions, namely, setting equation (1.1), but in the Hyperbolic case, and a Bernstein-type result for minimal submanifolds in the Hyperbolic space, in the line studied by Kasue and Sugahara in [12], (see assertion (A-iv) of Theorem A). 
On the other hand, Theorem 1.2 encompasses a slightly less general version of assertion (A-i) of Theorem A in [12]. ###### Theorem 1.1. Let $\varphi:P^{m}\longrightarrow\mathbb{H}^{n}(b)$ be a complete, proper and minimal immersion with $m>2$. Let us suppose that for sufficiently large $R_{0}$ and for all points $x\in P$ such that $r(x)>R_{0}$, (i.e. outside a compact), $\|B^{P}_{x}\|\leq\frac{\delta(r(x))}{e^{2\sqrt{-b}\,r(x)}}$ where $r(x)=d_{\mathbb{H}^{n}(b)}(o,\varphi(x))$ is the (extrinsic) distance in $\mathbb{H}^{n}(b)$ of the points in $\varphi(P)$ to a fixed pole $o\in\mathbb{H}^{n}(b)$ such that $\varphi^{-1}(o)\neq\emptyset$ and $\delta(r)$ is a smooth function such that $\delta(r)\to 0$ when $r\to\infty$. Then: 1. (1) The finite number of ends $\mathcal{E}(P)$ is related to the volume growth by $\operatorname{Sup}_{t>0}\frac{\operatorname{Vol}(D_{t}(o))}{\operatorname{Vol}(B_{t}^{b,m})}\leq\mathcal{E}(P)$ where $D_{t}(o)=\{x\in P:r(x)<t\}=\{x\in P:\varphi(x)\in B^{b,n}_{t}(o)\}$ is the extrinsic ball of radius $t$ in $P$, (see Definition 2.1). 2. (2) If $P$ has only one end, $P$ is totally geodesic in $\mathbb{H}^{n}(b)$. When the ambient manifold is $\mathbb{R}^{n}$, we have the following Bernstein-type result as in [12]: ###### Theorem 1.2. Let $\varphi:P^{m}\longrightarrow\mathbb{R}^{n}$ be a complete non-compact, minimal and proper immersion with $m>2$. Let us suppose that for sufficiently large $R_{0}$ and for all points $x\in P$ such that $r(x)>R_{0}$, (i.e. outside the compact extrinsic ball $D_{R_{0}}(o)$ with $\varphi^{-1}(o)\neq\emptyset$), $\|B^{P}_{x}\|\leq\frac{\epsilon(r(x))}{r(x)}$ where $\epsilon(r)$ is a smooth function such that $\epsilon(r)\to 0$ when $r\to\infty$. Then: 1. (1) The finite number of ends $\mathcal{E}(P)$ is related to the volume growth by $\operatorname{Sup}_{t>0}\frac{\operatorname{Vol}(D_{t})}{\operatorname{Vol}(B_{t}^{0,m})}\leq\mathcal{E}(P)$ 2. (2) If $P$ has only one end, $P$ is totally geodesic in $\mathbb{R}^{n}$. These results, which we shall prove in Section 8 (together with the corollaries of Section 4), follow from two main theorems, established in Section 3. In the first (Theorem 3.1) we show that a complete isometric immersion $\varphi:P^{m}\longrightarrow N^{n}$, ($m>2$), with controlled second fundamental form in a complete Riemannian manifold which possesses a pole and has controlled radial sectional curvatures is proper and has finite topology. In the second (Theorem 3.4) it is proved that a complete and proper isometric immersion $\varphi:P^{m}\longrightarrow M^{n}_{w}$, ($m>2$), with controlled second fundamental form in a radially symmetric space $M^{n}_{w}$ with sectional curvatures bounded from below by a radial function has its volume growth bounded from above by a quantity which involves its (finite) number of ends. The proofs of both theorems basically follow the lines of argument of the proofs given in [24] and [25], together with some ideas in [3]. An important difference with respect to these results is that, on our side, we allow the ambient manifold to have positive sectional curvatures, bounding from above only the sectional curvatures of the planes containing radial directions. However, to show the properness of the immersion in [25], the ambient manifold must have non-positive sectional curvatures, and to assure the finiteness of the topology of the immersion $P$, this ambient manifold must be, in addition, simply connected, (i.e. a Cartan-Hadamard manifold). This difference is based on the following considerations.
To obtain the finiteness of the topology in Theorem 3.1, we show that the extrinsic distance to a fixed pole (in the ambient manifold), restricted to the submanifold, has no critical points outside a compact set, and then we apply classical Morse theory. To show that the extrinsic distance function has no critical points we compute its Hessian as in [16] and [27]. These results are, in their turn, based on the Jacobi-Index analysis for the Hessian of the distance function given in [6], in particular, its Theorem A, (see Subsection 2.3). This comparison theorem is different from the Hessian comparison Theorem 1.2 used in [25]: while in that theorem the space used as a model for comparison is the real space form with constant sectional curvature equal to the bound on the sectional curvatures of the given Riemannian manifold, in our adaptation of Theorem A in [6], (see Theorem 2.10), only the sectional curvatures of the planes containing radial directions from the pole are bounded by the corresponding radial sectional curvatures in a radially symmetric space used as a model. We also note at this point that although we use the definition of pole given by Greene and Wu in [6], (namely, the exponential map must be a diffeomorphism at a pole), in fact, the comparison of the Hessians in Theorem A holds along radial geodesics from the poles defined as those points which have no conjugate points, as in [25]. ### 1.1. Outline The outline of the paper is the following. In Section §.2 we present the definition of extrinsic ball, together with the basic facts about the Hessian comparison theory of the restricted distance function we are going to use, and an isoperimetric inequality for the extrinsic balls which plays an important rôle in the proof of Theorem 3.4. Section §.3 is devoted to the statement of the main results (Theorem 3.1, Theorem 3.4 and Theorem 3.5). We shall present in Section 4 two lists of results based on Theorems 3.1, 3.4 and 3.5: the first set of consequences is devoted to bounding from above the volume growth of a submanifold by the number of its ends, in several contexts, obtaining moreover some Bernstein-type results. In the second set of corollaries are stated some compactification theorems for submanifolds in $\mathbb{R}^{n}$, in $\mathbb{H}^{n}$ and in $\mathbb{H}^{n}\times\mathbb{R}^{l}$. Sections §.5, §.6, §.7 are devoted to the proofs of Theorems 3.1, 3.4, and 3.5, respectively. Theorem 1.1, Theorem 1.2 and the corollaries stated in Section §.4 are proved in Section §.8. ## 2\. Preliminaries ### 2.1. The extrinsic distance We assume throughout the paper that $\varphi:P^{m}\longrightarrow N^{n}$ is an isometric immersion of a complete non-compact Riemannian $m$-manifold $P^{m}$ into a complete Riemannian manifold $N^{n}$ with a pole $o\in N$ (this is the precise meaning we shall give to the word submanifold throughout the text). Recall that a pole is a point $o$ such that the exponential map $\exp_{o}\colon T_{o}N^{n}\to N^{n}$ is a diffeomorphism. For every $x\in N^{n}-\{o\}$ we define $r(x)=r_{o}(x)=\operatorname{dist}_{N}(o,x)$, and this distance is realized by the length of a unique geodesic from $o$ to $x$, which is the radial geodesic from $o$. We also denote by $r|_{P}$ or by $r$ the composition $r\circ\varphi:P\to\mathbb{R}_{+}\cup\{0\}$. This composition is called the extrinsic distance function from $o$ in $P^{m}$. The gradients of $r$ in $N$ and $r|_{P}$ in $P$ are denoted by $\nabla^{N}r$ and $\nabla^{P}r$, respectively.
Then we have the following basic relation, by virtue of the identification, given any point $x\in P$, between the tangent vector fields $X\in T_{x}P$ and $\varphi_{*_{x}}(X)\in T_{\varphi(x)}N$ (2.1) $\nabla^{N}r=\nabla^{P}r+(\nabla^{N}r)^{\bot},$ where $(\nabla^{N}r)^{\bot}(\varphi(x))=\nabla^{\bot}r(\varphi(x))$ is perpendicular to $T_{x}P$ for all $x\in P$. ###### Definition 2.1. Given $\varphi:P^{m}\longrightarrow N^{n}$ an isometric immersion of a complete and connected Riemannian $m$-manifold $P^{m}$ into a complete Riemannian manifold $N^{n}$ with a pole $o\in N$, we denote the extrinsic metric balls of radius $t>0$ and center $o\in N$ by $D_{t}(o)$. They are defined as the subset of $P$: $D_{t}(o)=\\{x\in P:r(\varphi(x))<t\\}=\\{x\in P:\varphi(x)\in B^{N}_{t}(o)\\}$ where $B^{N}_{t}(o)$ denotes the open geodesic ball of radius $t$ centered at the pole $o$ in $N^{n}$. Note that the set $\varphi^{-1}(o)$ can be the empty set. ###### Remark 2.2. When the imersion $\varphi$ is proper, the extrinsic domains $D_{t}(o)$ are precompact sets, with smooth boundary $\partial D_{t}(o)$. The assumption on the smoothness of $\partial D_{t}(o)$ makes no restriction. Indeed, the distance function $r$ is smooth in $N-\\{o\\}$ since $N$ is assumed to possess a pole $o\in N$. Hence the composition $r|_{P}$ is smooth in $P$ and consequently the radii $t$ that produce smooth boundaries $\partial D_{t}(o)$ are dense in $\mathbb{R}$ by Sard’s theorem and the Regular Level Set Theorem. We now present the curvature restrictions which constitute the geometric framework of our study. ###### Definition 2.3. Let $o$ be a point in a Riemannian manifold $N$ and let $x\in N-\\{o\\}$. The sectional curvature $K_{N}(\sigma_{x})$ of the two-plane $\sigma_{x}\in T_{x}N$ is then called a $o$-radial sectional curvature of $N$ at $x$ if $\sigma_{x}$ contains the tangent vector to a minimal geodesic from $o$ to $x$. We denote these curvatures by $K_{o,N}(\sigma_{x})$. ### 2.2. Model spaces Throughout this paper we shall assume that the ambient manifold $N^{n}$ has its $o$-radial sectional curvatures $K_{o,N}(x)$ bounded from above by the expression $K_{w}(r(x))=-w^{\prime\prime}(r(x))/w(r(x))$, which are precisely the radial sectional curvatures of the $w$-model space $\,M^{m}_{w}\,$ we are going to define. ###### Definition 2.4 (See [23], [10] and [6]). A $w-$model $M_{w}^{m}$ is a smooth warped product with base $B^{1}=[0,\Lambda[\,\subset\mathbb{R}$ (where $0<\Lambda\leq\infty$), fiber $F^{m-1}=\mathbb{S}^{m-1}_{1}$ (i.e. the unit $(m-1)$-sphere with standard metric), and warping function $w\colon[0,\Lambda[\to\mathbb{R}_{+}\cup\\{0\\}$, with $w(0)=0$, $w^{\prime}(0)=1$, and $w(r)>0$ for all $r>0$. The point $o_{w}=\pi^{-1}(0)$, where $\pi$ denotes the projection onto $B^{1}$, is called the center point of the model space. If $\Lambda=\infty$, then $o_{w}$ is a pole of $M_{w}^{m}$. ###### Proposition 2.5. The simply connected space forms $\mathbb{K}^{m}(b)$ of constant curvature $b$ are $w-$models with warping functions $w_{b}(r)=\begin{cases}\frac{1}{\sqrt{b}}\sin(\sqrt{b}\,r)&\text{if $b>0$}\\\ \phantom{\frac{1}{\sqrt{b}}}r&\text{if $b=0$}\\\ \frac{1}{\sqrt{-b}}\sinh(\sqrt{-b}\,r)&\text{if $b<0$}.\end{cases}$ Note that for $b>0$ the function $Q_{b}(r)$ admits a smooth extension to $r=\pi/\sqrt{b}$. ###### Proposition 2.6 (See Proposition 42 in Chapter 7 of [23]. See also [6] and [10]). Let $M_{w}^{m}$ be a $w-$model with warping function $w(r)$ and center $o_{w}$. 
The distance sphere $S^{w}_{r}$ of radius $r$ and center $o_{w}$ in $M_{w}^{m}$ is the fiber $\pi^{-1}(r)$. This distance sphere has the constant mean curvature $\eta_{w}(r)=\frac{w^{\prime}(r)}{w(r)}$. On the other hand, the $o_{w}$-radial sectional curvatures of $M_{w}^{m}$ at every $x\in\pi^{-1}(r)$ (for $r>0$) are all identical and determined by $K_{o_{w},M_{w}}(\sigma_{x})=-\frac{w^{\prime\prime}(r)}{w(r)}.$ and the sectional curvatures of $M_{w}^{m}$ at every $x\in\pi^{-1}(r)$ (for $r>0$) of the tangent planes to the fiber $S^{w}_{r}$ are also all identical and determined by $K(r)=K_{M_{w}}(\Pi_{S^{w}_{r}})=\frac{1-(w^{\prime}(r))^{2}}{w^{2}(r)}.$ ###### Remark 2.7. The $w-$model spaces are completely determined via $w$ by the mean curvatures of the spherical fibers $S^{w}_{r}$: $\,\eta_{w}(r)=w^{\prime}(r)/w(r)\,\quad,$ by the volume of the fiber $\,\operatorname{Vol}(S^{w}_{r})\,=V_{0}\,w^{m-1}(r)\,\quad,$ and by the volume of the corresponding ball, for which the fiber is the boundary $\,\operatorname{Vol}(B^{w}_{r})\,=\,V_{0}\,\int_{0}^{r}\,w^{m-1}(t)\,dt\,\quad.$ Here $V_{0}$ denotes the volume of the unit sphere $S^{0,m-1}_{1}$, (we denote in general as $S^{b,m-1}_{r}$ the sphere of radius $r$ in the real space form $I\\!\\!K^{m}(b)$) . The latter two functions define the isoperimetric quotient function as follows $\,q_{w}(r)\,=\,\operatorname{Vol}(B^{w}_{r})/\operatorname{Vol}(S^{w}_{r})\quad.$ Besides the rôle of comparison controllers for the radial sectional curvatures of $N^{n}$, we shall need two further purely intrinsic conditions on the model spaces: ###### Definition 2.8. A given $w-$model space $\,M^{m}_{w}\,$ is called balanced from below and balanced from above, respectively, if the following weighted isoperimetric conditions are satisfied: $\displaystyle\text{Balance from below:}\quad q_{w}(r)\,\eta_{w}(r)$ $\displaystyle\geq 1/m\quad\text{for all}\quad r\geq 0\quad;$ $\displaystyle\text{Balance from above:}\quad q_{w}(r)\,\eta_{w}(r)$ $\displaystyle\leq 1/(m-1)\quad\text{for all}\quad r\geq 0\quad.$ A model space is called totally balanced if it is balanced both from below and from above. ###### Remark 2.9. If $\,K_{w}(r)\geq-\eta_{w}^{2}(r)\,$ then $\,M_{w}^{m}\,$ is balanced from above. If $\,K_{w}(r)\leq 0\,$ then $\,M_{w}^{m}\,$ is balanced from below, see the paper [16] for a detailed list of examples. ### 2.3. Hessian comparison analysis The 2.nd order analysis of the restricted distance function $r_{|_{P}}$ defined on manifolds with a pole is governed by the Hessian comparison Theorem A in [6]. This comparison theorem can be stated as follows, when one of the spaces is a model space $M^{m}_{w}$, (see [27]): ###### Theorem 2.10 (See [6], Theorem A). Let $N=N^{n}$ be a manifold with a pole $o$, let $M=M_{w}^{m}$ denote a $w-$model with center $o_{w}$. Suppose that every $o$-radial sectional curvature at $x\in N\setminus\\{o\\}$ is bounded from above by the $o_{w}$-radial sectional curvatures in $M_{w}^{m}$ as follows: $K_{o,N}(\sigma_{x})\,\leq\,-\frac{w^{\prime\prime}(r)}{w(r)}$ for every radial two-plane $\sigma_{x}\in T_{x}N$ at distance $r=r(x)=\operatorname{dist}_{N}(o,x)$ from $o$ in $N$. 
Then the Hessian of the distance function in $N$ satisfies (2.2) $\displaystyle{{\rm Hess}\,}^{N}(r(x))(X,X)$ $\displaystyle\,\geq\,{{\rm Hess}\,}^{M}(r(y))(Y,Y)$ $\displaystyle=\eta_{w}(r)\left(\|X\|^{2}-\langle\nabla^{M}r(y),Y\rangle_{M}^{2}\right)$ $\displaystyle=\eta_{w}(r)\left(\|X\|^{2}-\langle\nabla^{N}r(x),X\rangle_{N}^{2}\right)$ for every vector $X$ in $T_{x}N$ and for every vector $Y$ in $T_{y}M$ with $\,r(y)=r(x)=r\,$ and $\,\langle\nabla^{M}r(y),Y\rangle_{M}=\langle\nabla^{N}r(x),X\rangle_{N}\,$. ###### Remark 2.11. As we mentioned in the Introduction, inequality (2.2) is true along the geodesics emanating from $o$ and $o_{w}$ which are free of conjugate points of $o$ and $o_{w}$, (see Remark 2.3 in [6]). Other relevant observation is that the bound given in inequality (2.2) does not depend on the dimension of the model space, (see Remark 3.7 in [27]). We present now a technical result concerning the Hessian of a radial function, namely, a function which only depends on the distance function $r$. For the proof of this result, and the rest of the results in this subsection, we refer to the paper [27]. ###### Proposition 2.12. Let $N=N^{n}$ be a manifold with a pole $o$. Let $r=r(x)=\operatorname{dist}_{N}(o,x)$ be the distance from $o$ to $x$ in $N$. Let $F:\mathbb{R}\longrightarrow\mathbb{R}$ a smooth function. Then, given $q\in N$ and $X,Y\in T_{q}N$, (2.3) $\begin{split}{{\rm Hess}\,}^{N}F\circ r|_{q}(X,Y)&=F^{\prime\prime}(r)(\nabla^{N}r\otimes\nabla^{N}r)(X,Y)\\\ &+F^{\prime}(r){{\rm Hess}\,}^{N}r|_{q}(X,Y)\end{split}$ Now, let us consider a complete isometric immersion $\varphi:P^{m}\longrightarrow N$ in a Riemannian ambient manifold $N^{n}$ with pole $o$, and with distance function to the pole $r$. We are going to see how the Hessians (in $P$ and in $N$), of a radial function defined in the submanifold are related via the second fundamental form $B^{P}$ of the submanifold $P$ in $N$. As before, we identify, given any $q\in P$, the tangent vectors $X\in T_{q}P$ with $\varphi_{*_{q}}X\in T\varphi(q)N$ along the next results. ###### Proposition 2.13. Let $N^{n}$ be a manifold with a pole $o$, and let us consider an isometric immersion $\varphi:P^{m}\longrightarrow N$. If $r|_{P}$ is the extrinsic distance function, then, given $q\in P$ and $X,Y\in T_{q}P$, (2.4) ${{\rm Hess}\,}^{P}r|_{q}(X,Y)={{\rm Hess}\,}^{N}r|_{\varphi(q)}(X,Y)+\langle B^{P}_{q}(X,Y),\nabla^{N}r|_{q}\rangle$ where $B^{P}_{q}$ is the second fundamental form of $P$ in $N$ at the point $q\in P$. Now, we apply Proposition 2.12 to $F\circ r|_{P}=F\circ r\circ\varphi$, (considering $P$ as the Riemannian manifold where the function is defined), to obtain an expression for ${{\rm Hess}\,}^{P}F\circ r|_{P}(X,Y)$ . Then, let us apply Proposition above to ${{\rm Hess}\,}^{P}r|_{P}(X,Y)$, and we finally get: ###### Proposition 2.14. Let $N=N^{n}$ be a manifold with a pole $o$, and let $P^{m}$ denote an immersed submanifold in $N$. Let $r|_{P}$ be the extrinsic distance function. Let $F:\mathbb{R}\longrightarrow\mathbb{R}$ be a smooth function. Then, given $q\in P$ and $X,Y\in T_{q}P$, (2.5) $\begin{split}{{\rm Hess}\,}^{P}F\circ r|_{q}(X,Y)&=F^{\prime\prime}(r(q))\langle\,\nabla^{N}r|_{q},X\,\rangle\langle\,\nabla^{N}r|_{q},Y\,\rangle\\\ &+F^{\prime}(r(q))\\{{{\rm Hess}\,}^{N}r|_{q}(X,Y)\\\ &+\langle\nabla^{N}r|_{q},B^{P}_{q}(X,Y)\,\rangle\,\\}\end{split}$ ### 2.4. 
Comparison constellations and Isoperimetric inequalities The isoperimetric inequalities satisfied by the extrinsic balls in minimal submanifolds underlie the monotonicity of the volume growth function $f(r)=\frac{Vol(D_{r})}{Vol(B_{r}^{w})}$, a key result to prove Theorem 1.1. We have the following theorem. ###### Theorem 2.15 (See [16], [17], [18], [19] and [26]). Let $\varphi:P^{m}\longrightarrow N^{n}$ be a complete, proper and minimal immersion in an ambient Riemannian manifold $N^{n}$ which possesses at least one pole $o\in N$. Let us suppose that the $o-$radial sectional curvatures of $N$ are bounded from above by the $o_{w}-$radial sectional curvatures of the $w-$model space $M_{w}^{m}$: $K_{o,N}(\sigma_{x})\leq-\frac{w^{\prime\prime}(r(x))}{w(r(x))}\,\,\,\forall x\in N$ and assume that $M^{m}_{w}$ is balanced from below. Let $D_{r}$ be an extrinsic $r$-ball in $P^{m}$, with center at a pole $o\in N$ in the ambient space $N$. Then: (2.6) $\frac{\operatorname{Vol}(\partial D_{r})}{\operatorname{Vol}(D_{r})}\geq\frac{\operatorname{Vol}(S^{w}_{r})}{\operatorname{Vol}(B^{w}_{r})}\,\,\,\,\,\textrm{for all}\,\,\,r>0\quad.$ Furthermore, if $\varphi^{-1}(o)\neq\emptyset$, (2.7) $\operatorname{Vol}(D_{r})\geq\operatorname{Vol}(B^{w}_{r})\,\,\,\,\textrm{for all}\,\,\,r>0\quad.$ Moreover, if equality in inequalities (2.6) or (2.7) holds for some fixed radius $R$ and if the balance of $M^{m}_{w}$ from below is sharp, $q_{w}(r)\,\eta_{w}(r)\,>\,1/m\,$ for all $r$, then $D_{R}$ is a minimal cone in the ambient space $N^{n}$, so if $N^{n}$ is the hyperbolic space $\,\mathbb{H}^{n}(b)\,$, $\,b<0\,$, then $P^{m}\,$ is totally geodesic in $\mathbb{H}^{n}(b)$. If, on the other hand, the ambient space is $\mathbb{R}^{n}$ and equality in inequalities (2.6) or (2.7) holds for all radii $r>0$, then $P^{m}$ is totally geodesic in $\mathbb{R}^{n}$. On the other hand, and also as a consequence of inequality (2.6), the volume growth function $f(r)=\frac{Vol(D_{r})}{Vol(B_{r}^{w})}$ is a non-decreasing function of $r$. ## 3\. Main Results We state in this section our main results, establishing a set of conditions that ensures that our submanifolds are properly immersed and have finite topology, and bounding from below, under certain conditions, the number of their ends. ###### Theorem 3.1. Let $\varphi:P^{m}\longrightarrow N^{n}$ be an isometric immersion of a complete non-compact Riemannian $m$-manifold $P^{m}$ into a complete Riemannian manifold $N^{n}$ with a pole $o\in N$ and satisfying $\varphi^{-1}(o)\neq\emptyset$. Let us suppose that: 1. (1) The $o-$radial sectional curvatures of $N$ are bounded from above by the $o_{w}-$radial sectional curvatures of the $w-$model space $M_{w}^{m}$: $K_{o,N}(\sigma_{x})\leq-\frac{w^{\prime\prime}(r(x))}{w(r(x))}\,\,\,\forall x\in N.$ 2. (2) The second fundamental form $B^{P}_{x}$ at $x\in P$ satisfies, for a sufficiently large radius $R_{0}$ and some constant $c\in]0,1[$: $\|B^{P}_{x}\|\leq c\,\eta_{w}(\rho^{P}(x))\,\,\,\forall x\in P-B^{P}_{R_{0}}(x_{o})$ where $\rho^{P}(x)$ denotes the intrinsic distance in $P$ from some fixed $x_{o}\in\varphi^{-1}(o)$ to $x$. 3. (3) For any $r>0$, $w^{\prime}(r)\geq d>0$ and $(\eta_{w}(r))^{\prime}\leq 0$. Then $P$ is properly immersed in $N$ and it is $C^{\infty}$-diffeomorphic to the interior of a compact smooth manifold $\overline{P}$ with boundary. ###### Remark 3.2. To show that $\varphi$ is proper, we shall use Theorem 2.10. 
Hence, it is enough to assume that $o$ is a pole in the sense that there are no conjugate points along any geodesic emanating from $o$, (see [5] and [30]). Therefore our statement about the properness of the immersion includes ambient manifolds $N$ that admit non-negative sectional curvatures, unlike the ambient manifold in Theorem 1.2 in [25]. On the other hand, to prove the finiteness of the topology of $P$ we need to assume that the ambient manifold $N$ possesses a pole as defined in [6], namely, a point $p\in N$ where $\exp_{p}$ is a $C^{\infty}$ diffeomorphism. However, although our ambient manifold must be diffeomorphic to $\mathbb{R}^{n}$ in this case (as in Theorem 1.2 in [25], where the ambient space must be a Cartan-Hadamard manifold), it may still admit non-negative sectional curvatures. To complete the comparison with the hypotheses in [24] and [25], we are going to compare the assumptions (2) and (3) in Theorem 3.1 with the notion of “submanifold with tamed second fundamental form” introduced in [24]. It is straightforward to check that if $\varphi:P^{m}\longrightarrow N^{n}$ is an immersion of a complete Riemannian $m$-manifold $P^{m}$ into a complete Riemannian manifold $N^{n}$ with sectional curvatures $K_{N}\leq b\leq 0$, and $P$ has tamed second fundamental form, in the sense of Definition 1.1 in [25], then there exists $R_{0}>0$ such that for all $r\geq R_{0}$, the quantity $a_{r}:=\operatorname{Sup}\\{\frac{w_{b}}{w_{b}^{\prime}}(\rho^{P}(x))\|B^{P}_{x}\|:x\in P-B^{P}_{r}\\}$ satisfies $a_{r}<1$. Hence, taking $r=R_{0}$, we have that for all $x\in P-B^{P}_{R_{0}}$, and some $c\in(0,1)$, $\|B^{P}_{x}\|\leq c\eta_{w_{b}}(\rho^{P}(x))\,.$ On the other hand, when $b\leq 0$, then $w_{b}^{\prime}(r)\geq 1>0\,\,\forall r>0$ and $(\eta_{w_{b}}(r))^{\prime}\leq 0\,\,\forall r>0$. All these observations make us consider our Theorem 3.1 as a natural and slight generalization of assertions (b) and (c) of Theorem 1.2 in [25]. Observe that if we assume the properness of the immersion we obtain the following version of Theorem 3.1, where we can remove the hypothesis about the decrease of the function $\eta_{w}(r)$, because the norm of the second fundamental form $\|B^{P}_{x}\|$ is bounded by the value of $\eta_{w}$ at $r(x)$ instead of at $\rho^{P}(x)$: ###### Theorem 3.3. Let $\varphi:P^{m}\longrightarrow N^{n}$ be an isometric and proper immersion of a complete non-compact Riemannian $m$-manifold $P^{m}$ into a complete Riemannian manifold $N^{n}$ with a pole $o\in N$ and satisfying $\varphi^{-1}(o)\neq\emptyset$. Let us suppose that, as in Theorem 3.1, the $o-$radial sectional curvatures of $N$ are bounded from above as $K_{o,N}(\sigma_{x})\leq-\frac{w^{\prime\prime}(r(x))}{w(r(x))}\,\,\,\forall x\in N\,,$ and that for any $r>0$, $w^{\prime}(r)\geq d>0$. Let us assume moreover that the second fundamental form $B^{P}_{x}$ at $x\in P$ satisfies, for a sufficiently large radius $R_{0}$: $\|B^{P}_{x}\|\leq c\,\eta_{w}(r(x))\,\,\,\forall x\in P-D_{R_{0}}(o)$ where $c$ is a positive constant such that $c<1$. Then $P$ is $C^{\infty}$-diffeomorphic to the interior of a compact smooth manifold $\overline{P}$ with boundary. We are going to see how to estimate the area growth function of $P$, defined as $g(r)=\frac{Vol(\partial D_{r})}{Vol(S_{r}^{w})}$, in terms of the number of ends of the immersion $P$, $\mathcal{E}(P)$, when the ambient space $N$ is a radially symmetric space. ###### Theorem 3.4. 
Let $\varphi:P^{m}\longrightarrow M^{n}_{w}$ be an isometric and proper immersion of a complete non-compact Riemannian $m$-manifold $P^{m}$ into a model space $M^{n}_{w}$ with pole $o_{w}$. Suppose that $\varphi^{-1}(o_{w})\neq\emptyset$, $m>2$ and moreover: 1. (1) The norm of the second fundamental form $B^{P}_{x}$ at $x\in P$ is bounded from above outside a (compact) extrinsic ball $D_{R_{0}}(o)\subseteq P$ of sufficiently large radius $R_{0}$ by: $\|B^{P}_{x}\|\,\leq\,\frac{\epsilon(r(x))}{(w^{\prime}(r(x)))^{2}}\eta_{w}(r(x))\,\,\,\forall x\in P-D_{R_{0}}$ where $\epsilon$ is a positive function such that $\epsilon(r)\to 0$ when $r\to\infty$. 2. (2) For $r$ sufficiently large, $w^{\prime}(r)\geq d>0$. Then, for sufficiently large $r$, we have: (3.1) $\frac{Vol(\partial D_{r})}{Vol(S_{r}^{w})}\leq\frac{\mathcal{E}(P)}{\left(1-4\epsilon(r)\right)^{\frac{(m-1)}{2}}}$ where $\mathcal{E}(P)$ is the (finite) number of ends of $P$. When we consider minimal immersions in the model spaces, we have the following result, which is an immediate corollary of the above theorem and Theorem 2.15 in Section 2. ###### Theorem 3.5. Let $\varphi:P^{m}\longrightarrow M^{n}_{w}$ be a complete non-compact, proper and minimal immersion into a model space $M^{n}_{w}$ with pole $o_{w}$ which is balanced from below. Suppose that $\varphi^{-1}(o_{w})\neq\emptyset$ and $m>2$. Let us assume moreover the hypotheses (1) and (2) in Theorem 3.4. Then 1. (1) The (finite) number of ends $\mathcal{E}(P)$ is related to the (finite) volume growth by (3.2) $1\leq\lim_{r\to\infty}\frac{Vol(D_{r})}{Vol(B_{r}^{w})}\leq\mathcal{E}(P)$ 2. (2) If $P$ has only one end, $P$ is a minimal cone in $M_{w}^{n}$. ## 4\. Corollaries As we have said in the Introduction, we have divided the list of results based on Theorem 3.1 and Theorem 3.4 into two series of corollaries. The first set of consequences follows the line of Theorem 1.1 and Theorem 1.2 (which are in fact the main representatives of these results), presenting upper bounds for the volume and area growth of a complete and proper immersion in the real space form $I\\!\\!K^{n}(b)$, ($b\leq 0$), in terms of the number of its ends. In the second set of corollaries, compactification theorems are stated for complete and proper immersions in $\mathbb{R}^{n}$, $\mathbb{H}^{n}(b)$ and $\mathbb{H}^{n}(b)\times\mathbb{R}^{l}$. The first of these corollaries constitutes a non-minimal version of Theorem 1.1: ###### Corollary 4.1. Let $\varphi:P^{m}\longrightarrow\mathbb{H}^{n}(b)$ be a complete non-compact and proper immersion with $m>2$. Let us suppose that for sufficiently large $R_{0}$ and for all points $x\in P$ such that $r(x)>R_{0}$, (i.e. outside the compact extrinsic ball $D_{R_{0}}(o)$ with $\varphi^{-1}(o)\neq\emptyset$), $\|B^{P}_{x}\|\leq\frac{\delta(r(x))}{e^{2\sqrt{-b}\,r(x)}}$ where $r(x)=d_{\mathbb{H}^{n}(b)}(o,\varphi(x))$ is the (extrinsic) distance in $\mathbb{H}^{n}(b)$ of the points in $\varphi(P)$ to a fixed pole $o\in\mathbb{H}^{n}(b)$ and $\delta(r)$ is a smooth function such that $\delta(r)\to 0$ when $r\to\infty$. Let $\\{t_{i}\\}_{i=1}^{\infty}$ be any non-decreasing sequence such that $t_{i}\to\infty$ when $i\to\infty$. Then the finite number of ends $\mathcal{E}(P)$ is related to the area growth of $P$ by: $\liminf_{i\to\infty}\frac{\operatorname{Vol}(\partial D_{t_{i}})}{\operatorname{Vol}(S_{t_{i}}^{b,m-1})}\leq\mathcal{E}(P)$ The corresponding non-minimal statement of Theorem 1.2 is: ###### Corollary 4.2. 
Let $\varphi:P^{m}\longrightarrow\mathbb{R}^{n}$ be a complete non-compact and proper immersion with $m>2$. Let us suppose that for sufficiently large $R_{0}$ and for all points $x\in P$ such that $r(x)>R_{0}$, (i.e. outside the compact extrinsic ball $D_{R_{0}}(o)$ with $\varphi^{-1}(o)\neq\emptyset$), $\|B^{P}_{x}\|\leq\frac{\epsilon(r(x))}{r(x)}$ where $r(x)=d_{\mathbb{R}^{n}}(o,\varphi(x))$ is the (extrinsic) distance in $\mathbb{R}^{n}$ of the points in $\varphi(P)$ to a fixed pole $o\in\mathbb{R}^{n}$ and $\epsilon(r)$ is a smooth function such that $\epsilon(r)\to 0$ when $r\to\infty$. Let $\\{t_{i}\\}_{i=1}^{\infty}$ be any non-decreasing sequence such that $t_{i}\to\infty$ when $i\to\infty$. Then the finite number of ends $\mathcal{E}(P)$ is related with the area growth by: $\liminf_{i\to\infty}\frac{\operatorname{Vol}(\partial D_{t_{i}})}{\operatorname{Vol}(S_{t_{i}}^{0,m-1})}\leq\mathcal{E}(P)$ Concerning the compactification results we have the following result given by Bessa, Jorge and Montenegro in [24] and by Bessa and Costa in [25]: ###### Corollary 4.3. Let $\varphi:P^{m}\longrightarrow I\\!\\!K^{n}(b)$ be a complete non-compact immersion in the real space form $I\\!\\!K^{n}(b)$, ($b\leq 0$). Let us suppose that for all points $x\in P\setminus B^{P}_{R_{0}}(o)$ (for sufficientlty large $R_{0}$, where $o$ is a pole in $I\\!\\!K^{n}(b)$ such that $\varphi^{-1}(o)\neq\emptyset$) : $\|B^{P}_{x}\|\leq c\,h_{b}(\rho^{P}(x))$ where $\rho^{P}(x)$ is the (intrinsic) distance to a fixed $x_{o}\in\varphi^{-1}(o)$ and $c$ is a positive constant such that $c<1$ and $h_{b}(r)=\eta_{w_{b}}(r)=\begin{cases}\phantom{\sqrt{b}}1/r&\text{if $b=0$}\\\ \sqrt{-b}\coth(\sqrt{-b}\,r)&\text{if $b<0$}\quad.\end{cases}$ is the mean curvature of the geodesic spheres in $I\\!\\!K^{n}(b)$. Then $P$ is properly immersed in $I\\!\\!K^{n}(b)$ and it is diffeomorphic to the interior of a compact smooth manifold $\overline{P}$ with boundary. Our last result concerns isometric immersions in $\mathbb{H}^{n}(b)\times\mathbb{R}^{l}$: ###### Corollary 4.4. Let $\varphi:P^{m}\longrightarrow\mathbb{H}^{n}(b)\times\mathbb{R}^{l}$ be a complete non-compact immersion. Let us consider a pole $o\in\mathbb{H}^{n}(b)\times\mathbb{R}^{l}$ such that $\varphi^{-1}(o)\neq\emptyset$. Let us suppose that for all points $x\in P\setminus B^{P}_{R_{0}}(x_{o})$, where $x_{o}\in\varphi^{-1}(o)$ and for $R_{0}$ sufficiently large: $\|B_{x}\|\leq\frac{c}{\rho^{P}(x)}\,\,.$ Here $\rho^{P}(x)$ denotes the intrinsic distance in $P$ from the fixed $x_{o}\in\varphi^{-1}(o)$ to $x$ and $c$ is a positive constant such that $c<1$. Then $P$ is properly immersed in $\mathbb{H}^{n}(b)\times\mathbb{R}^{l}$ and it is diffeomorphic to the interior of a compact smooth manifold $\overline{P}$ with boundary. ## 5\. Proof of Theorem 3.1 ### 5.1. $P$ is properly immersed Let us define the following function: (5.1) $F(r):=\int_{0}^{r}w(t)dt$ Observe that $F$ is injective, because $F^{\prime}(r)=w(r)>0\,\,\,\forall r>0$, and $F(r)\to\infty$ when $r\to\infty$. 
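As a simple illustration of this construction (an aside of ours, not part of the original argument), consider the Euclidean model $w(r)=w_{0}(r)=r$: then $F(r)=\int_{0}^{r}t\,dt=\frac{1}{2}r^{2}$, so that $F\circ r=\frac{1}{2}\operatorname{dist}_{N}(o,\cdot)^{2}$, which in $N=\mathbb{R}^{n}$ is the familiar strictly convex function whose Hessian is the identity. The argument below establishes an analogous strict convexity of $F\circ r$ outside a compact set for a general warping function $w$.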
Applying Theorem 2.10 and Proposition 2.14, we obtain, for all $x\in P$, and given $X\in T_{x}P$, (5.2) $\displaystyle{{\rm Hess}\,}^{P}_{x}F(r)(X,X)$ $\displaystyle\geq w^{\prime}(r(x))\|X\|^{2}+w(r(x))\langle B^{P}_{x}(X,X),\nabla^{N}r\rangle$ $\displaystyle\geq w^{\prime}(r(x))\|X\|^{2}-w(r(x))\|B^{P}_{x}\|\,\,\|X\|^{2}$ By hypothesis there exists a geodesic ball $B^{P}_{r_{1}}(x_{0})$ in $P$, with $r_{1}\geq R_{0}$, such that for any $x\in P\setminus B^{P}_{r_{1}}(x_{0})$, $\|B^{P}_{x}\|\ \leq c\eta_{w}(\rho^{P}(x))$. On the other hand, as $\eta_{w}(r)$ is non-increasing and $r(x)\leq\rho^{P}(x)$ because $\varphi$ is isometric, we have $c\eta_{w}(\rho^{P}(x))\leq c\eta_{w}(r(x))$, so if $x\in P\setminus B^{P}_{r_{1}}$: (5.3) $\displaystyle{{\rm Hess}\,}^{P}_{x}F(r)(X,X)$ $\displaystyle\geq w^{\prime}(r(x))\|X\|^{2}-w(r(x))c\eta_{w}(\rho^{P}(x))\,\|X\|^{2}$ $\displaystyle\geq w^{\prime}(r(x))\|X\|^{2}\left(1-c\right)\geq d\left(1-c\right)>0$ The above result implies that there exists $r_{1}\geq R_{0}$ such that $F\circ r$ is a strictly convex function outside the geodesic ball in $P$ centered at $x_{0}$, $B^{P}_{r_{1}}(x_{0})$. Hence, as $r(x)\leq\rho^{P}(x)$ for all $x\in P$ (and therefore $B^{P}_{r_{1}}(x_{0})\subseteq D_{r_{1}}$), $F\circ r$ is a strictly convex function outside the extrinsic disc $D_{r_{1}}$. Let $\sigma:[0,\rho^{P}(x)]\to P^{m}$ be a minimizing geodesic from $x_{0}$ to $x$. Denoting $f=F\circ r$, let us define $h:\mathbb{R}\to\mathbb{R}$ as $h(s)=F(r(\sigma(s)))=f(\sigma(s))$. Then, (5.4) $(f\circ\sigma)^{\prime}(s)=h^{\prime}(s)=\sigma^{\prime}(s)(f)=\langle\nabla^{P}f(\sigma(s)),\sigma^{\prime}(s)\rangle$ and hence, (5.5) $\displaystyle(f\circ\sigma)^{\prime\prime}(s)$ $\displaystyle=h^{\prime\prime}(s)=\sigma^{\prime}(s)(\langle\nabla^{P}f(\sigma(s)),\sigma^{\prime}(s)\rangle)=\langle\nabla^{P}_{\sigma^{\prime}(s)}\nabla^{P}f(\sigma(s)),\sigma^{\prime}(s)\rangle$ $\displaystyle+\langle\nabla^{P}f(\sigma(s)),\nabla^{P}_{\sigma^{\prime}(s)}\sigma^{\prime}(s)\rangle=Hess^{P}_{\sigma(s)}f(\sigma(s))(\sigma^{\prime}(s),\sigma^{\prime}(s))$ We have from (5.3) that $(f\circ\sigma)^{\prime\prime}(\tau)={{\rm Hess}\,}^{P}f(\sigma(\tau))(\sigma^{\prime},\sigma^{\prime})\geq d(1-c)$ for all $\tau\geq r_{1}$. And for $\tau<r_{1}$, $(f\circ\sigma)^{\prime\prime}(\tau)\geq a=\inf_{x\in B^{P}_{r_{1}}}\\{{{\rm Hess}\,}^{P}f(x)(\nu,\nu),\,|\nu|=1\\}$. Then (5.6) $\displaystyle(f\circ\sigma)^{\prime}(s)$ $\displaystyle=$ $\displaystyle(f\circ\sigma)^{\prime}(0)+\int_{0}^{s}(f\circ\sigma)^{\prime\prime}(\tau)d\tau$ $\displaystyle\geq$ $\displaystyle(f\circ\sigma)^{\prime}(0)+\int_{0}^{r_{1}}a\,d\tau+d\,\int_{r_{1}}^{s}(1-c)d\tau$ $\displaystyle\geq$ $\displaystyle(f\circ\sigma)^{\prime}(0)+a\,r_{1}+d\,(1-c)(s-r_{1})$ On the other hand, as (5.7) $\nabla^{P}f(\sigma(s))=\nabla^{P}F(r(\sigma(s)))=F^{\prime}(r(\sigma(s)))\nabla^{P}r|_{\sigma(s)}=w(r(\sigma(s)))\nabla^{P}r|_{\sigma(s)}$ then $\nabla^{P}f(\sigma(0))=w(r(\sigma(0)))\nabla^{P}r|_{\sigma(0)}=w(0)\nabla^{P}r|_{\sigma(0)}=0$ so we have that (5.8) $(f\circ\sigma)^{\prime}(0)=\langle\nabla^{P}f(\sigma(0)),\sigma^{\prime}(0)\rangle=0$ We also have that $(f\circ\sigma)(0)=F(r(\sigma(0)))=F(0)=0$. 
Hence, applying inequality (5.6), (5.9) $f(\sigma(s))=(f\circ\sigma)(0)+\int_{0}^{s}(f\circ\sigma)^{\prime}(\tau)d\tau\geq ar_{1}s+d(1-c)\\{\frac{1}{2}s^{2}-r_{1}s\\}$ Therefore, $\displaystyle F(r(x))$ $\displaystyle=$ $\displaystyle f(x)=f(\sigma(\rho^{P}(x)))=\int_{0}^{\rho^{P}(x)}(f\circ\sigma)^{\prime}(s)\,ds$ $\displaystyle\geq$ $\displaystyle\int_{0}^{\rho^{P}(x)}a\,r_{1}+d\,(1-c)(s-r_{1})\,ds$ $\displaystyle=$ $\displaystyle a\,r_{1}\rho^{P}(x)\ +d\,(1-c)\left(\frac{\rho^{P}(x)^{2}}{2}-r_{1}\,\rho^{P}(x)\right)$ Hence, if $\rho^{P}\to\infty$ then $F(r(x))\to\infty$ and then, as $F$ is strictly increasing, $r\to\infty$, so the immersion is proper. ### 5.2. $P$ has finite topology We are going to see that $\nabla^{P}r$ never vanishes on $P\setminus D_{r_{1}}$. To show this, we consider, as in the previous subsection, any geodesic in $P$ emanating from the pole $o$, $\sigma(s)$. We have, using inequality (5.6), that (5.11) $\langle\nabla^{P}f(\sigma(s)),\sigma^{\prime}(s)\rangle=(f\circ\sigma)^{\prime}(s)\geq a\,r_{1}+d\,(1-c)(s-r_{1})>0\,\,\forall s>r_{1}$ Hence, as $\|\sigma^{\prime}(s)\|=1\,\,\forall s$, then $\|\nabla^{P}f(\sigma(s))\|>0$ for all $s>r_{1}$. But we have computed $\nabla^{P}f(\sigma(s))=w(r(\sigma(s)))\nabla^{P}r|_{\sigma(s)}$, so, as $w(r)>0\,\,\forall r>0$, then $\|\nabla^{P}r|_{\sigma(s)}\|>0\,\,\forall s>r_{1}$ and hence, $\nabla^{P}r|_{\sigma(s)}\neq 0\,\,\forall s>r_{1}$. We have proved that $\nabla^{P}r$ never vanishes on $P\setminus B^{P}_{r_{1}}$, so we also have that $\nabla^{P}r$ never vanishes on $P\setminus D_{r_{1}}$. Let $\phi:\partial D_{r_{1}}\times[r_{1},+\infty)\to P\setminus D_{r_{1}}$ be the integral flow of the vector field $\frac{\nabla^{P}r}{\|\nabla^{P}r\|^{2}}$ with $\phi(p,r_{1})=p\in\partial D_{r_{1}}$. It is obvious that $r(\phi(p,t))=t$ and $\phi(\cdot,t):\partial D_{r_{1}}\to\partial D_{t}$ is a diffeomorphism. So $P$ has finitely many ends, and each of its ends is of finite topological type. In fact, applying Theorem 3.1 in [20], we conclude that, as the extrinsic annuli $A_{r_{1},R}(o)=D_{R}(o)\setminus D_{r_{1}}(o)$ contain no critical points of the extrinsic distance function $r:P\longrightarrow\mathbb{R}^{+}$, then $D_{R}(o)$ is diffeomorphic to $D_{r_{1}}(o)$ for all $R\geq r_{1}$ and hence the annuli $A_{r_{1},R}(o)$ are diffeomorphic to $\partial D_{r_{1}}\times[r_{1},R]$. ###### Remark 5.1. To show Theorem 3.3, we argue as in the beginning of the proof of Theorem 3.1: with the same function $F(r)$ we obtain inequality (5.2). But now we have as hypothesis that $\|B^{P}_{x}\|\leq c\,\eta_{w}(r(x))$, so we do not need $\eta_{w}^{\prime}(r)\leq 0$ to get inequality (5.3). ## 6\. Proof of Theorem 3.4 We are going to see first that $P$ has finite topology. As $P$ is properly immersed, we shall apply Theorem 3.3, and for that it must be checked that the hypotheses of that theorem are fulfilled. First, we have hypothesis (1) in Theorem 3.3 because $N=M^{n}_{w}$. On the other hand, since $w^{\prime}(r)\geq d>0\,\,\forall r>0$ and, for some $R_{0}$, $\|B^{P}_{x}\|\leq\frac{\epsilon(r(x))}{(w^{\prime}(r(x)))^{2}}\eta_{w}(r(x))\,\,\,\forall x\in P-D_{R_{0}}$ where $\epsilon$ is a positive function such that $\epsilon(r)\to 0$ when $r\to\infty$, we have $0\leq\lim_{r\to\infty}\frac{\epsilon(r)}{(w^{\prime}(r))^{2}}\leq\lim_{r\to\infty}\frac{\epsilon(r)}{d^{2}}=0$. Therefore, for some constant $c<1$, there exists $R_{0}$ such that $\|B^{P}_{x}\|\leq c\eta_{w}(r(x))\,\,\,\forall x\in P-D_{R_{0}}$. 
Therefore, as $\varphi:P\longrightarrow M^{n}_{w}$ is a proper immersion, we have by Theorem 3.3 that $P$ has finite topological type and thus $P$ has finitely many ends, each of finite topological type. Hence we have, analogously to [1], and for $r_{1}\geq R_{0}$ as in Section 5: (6.1) $P-D_{r_{1}}=\cup_{k=1}^{\mathcal{E}(P)}V_{k}$ where the $V_{k}$ are disjoint, smooth domains in $P$. Throughout the rest of the proof, we will work on each end $V_{k}$ separately. Let $V$ denote one element of the family $\\{V_{k}\\}_{k=1}^{\mathcal{E}(P)}$, and, given a fixed radius $t>r_{1}$, let $\partial V(t)$ denote the set $\partial V(t)=V\cap\partial D_{t}=V\cap S^{w}_{t}$, where $S^{w}_{t}$ is the geodesic $t$-sphere in $M^{n}_{w}$. This set is a hypersurface in $P^{m}$, with normal vector $\frac{\nabla^{P}r}{\|\nabla^{P}r\|}$, and we are going to estimate its sectional curvatures when $t\to\infty$. Suppose that $e_{i},e_{j}$ are two orthonormal vectors of $T_{p}\partial V(t)$ at the point $p\in\partial V(t)$. Then the sectional curvature of the plane spanned by $e_{i},e_{j}$ is, using the Gauss formula: (6.2) $\displaystyle K_{\partial V(t)}$ $\displaystyle(e_{i},e_{j})=K_{P}(e_{i},e_{j})+\langle B^{\partial V-P}(e_{i},e_{i}),B^{\partial V-P}(e_{j},e_{j})\rangle$ $\displaystyle-\|B^{\partial V-P}(e_{i},e_{j})\|^{2}=K_{N}(e_{i},e_{j})+\langle B^{\partial V-P}(e_{i},e_{i}),B^{\partial V-P}(e_{j},e_{j})\rangle$ $\displaystyle-\|B^{\partial V-P}(e_{i},e_{j})\|^{2}+\langle B^{P}(e_{i},e_{i}),B^{P}(e_{j},e_{j})\rangle-\|B^{P}(e_{i},e_{j})\|^{2}$ $\displaystyle\geq K_{N}(e_{i},e_{j})+\langle B^{\partial V-P}(e_{i},e_{i}),B^{\partial V-P}(e_{j},e_{j})\rangle$ $\displaystyle-\|B^{\partial V-P}(e_{i},e_{j})\|^{2}-2\|B^{P}\|^{2}$ where $B^{\partial V-P}$ is the second fundamental form of $\partial V(t)$ in $P$. 
But this second fundamental form is for two vector fields $X,Y$ in $T\partial V(t)$: (6.3) $\displaystyle B^{\partial V-P}(X,Y)$ $\displaystyle=\langle\nabla_{X}^{P}Y,\frac{\nabla^{P}r}{||\nabla^{P}r||}\rangle\frac{\nabla^{P}r}{||\nabla^{P}r||}=\langle\nabla_{X}^{P}Y,\nabla^{P}r\rangle\frac{\nabla^{P}r}{||\nabla^{P}r||^{2}}$ $\displaystyle=X(\langle Y,\nabla^{P}r\rangle)\frac{\nabla^{P}r}{||\nabla^{P}r||^{2}}-\langle Y,\nabla^{P}_{X}\nabla^{P}r\rangle\frac{\nabla^{P}r}{||\nabla^{P}r||^{2}}$ $\displaystyle=-{{\rm Hess}\,}^{P}r(X,Y)\frac{\nabla^{P}r}{||\nabla^{P}r||^{2}}$ Then, since, for all $X,Y\in T_{p}M^{n}_{w}$ (6.4) ${{\rm Hess}\,}^{M^{n}_{w}}r(X,Y)=\eta_{w}(r)\langle X,Y\rangle-\langle X,\nabla^{M^{n}_{w}}r\rangle\langle Y,\nabla^{M^{n}_{w}}r\rangle$ we have, (using the fact that $e_{i}$ are tangent to the fiber $S^{w}_{t}$, and Proposition 2.6), that (6.5) $K_{M^{n}_{w}}(e_{i},e_{j})=K(t)=\frac{1}{w^{2}(t)}-\eta^{2}_{w}(t)$ so for any $p\in\partial V(t)$ such that $t=r(p)$ is sufficiently large: (6.6) $\displaystyle K_{\partial V(t)}(e_{i},e_{j})\geq$ $\displaystyle K_{M^{n}_{w}}(e_{i},e_{j})+\frac{{{\rm Hess}\,}^{P}_{p}r(e_{i},e_{i}){{\rm Hess}\,}^{P}_{p}r(e_{j},e_{j})}{||\nabla^{P}r||^{2}}$ $\displaystyle-\frac{{{\rm Hess}\,}^{P}_{p}r(e_{i},e_{j})^{2}}{||\nabla^{P}r||^{2}}-2\|B^{P}\|^{2}$ $\displaystyle\geq K(t)+\frac{\left(\eta_{w}(t)-\|B^{P}\|\right)^{2}-\|B^{P}\|^{2}}{||\nabla^{P}r||^{2}}-2\|B^{P}\|^{2}$ $\displaystyle\geq\eta^{2}_{w}(t)\left(1-2\frac{\|B^{P}\|}{\eta_{w}(t)}-2\left(\frac{\|B^{P}\|}{\eta_{w}(t)}\right)^{2}+\frac{K(t)}{\eta^{2}_{w}(t)}\right)$ $\displaystyle\geq\eta^{2}_{w}(t)\left(1-4\frac{\|B^{P}\|}{\eta_{w}(t)}+\frac{K(t)}{\eta^{2}_{w}(t)}\right)$ $\displaystyle=\eta^{2}_{w}(t)\left(1+\frac{K(t)}{\eta^{2}_{w}(t)}\right)\left(1-4\frac{\frac{\|B^{P}\|}{\eta_{w}(t)}}{1+\frac{K(t)}{\eta^{2}_{w}(t)}}\right)$ $\displaystyle\geq\frac{1}{w^{2}(t)}\left(1-4\|B^{P}\|w^{\prime}(t)w(t)\right)\geq\frac{1}{w^{2}(t)}\left(1-4\epsilon(t)\right)$ where we recall that, by hypothesis, $\|B^{P}\|\leq\frac{\epsilon(t)}{(w^{\prime}(t))^{2}}\eta_{w}(t)$ for all $t=r(x)>R_{0}$, and $\epsilon$ is a positive function such that $\epsilon(r)\to 0$ when $r\to\infty$. If we denote as $\delta(t)=\frac{1}{w^{2}(t)}\left(1-4\epsilon(t)\right)$ we have for each $t$ sufficiently large that $K_{\partial V(t)}(e_{i},e_{j})\geq\delta(t)$ holds everywhere on $\partial V(t)$ and $\delta(t)$ is a positive constant. Then, the Ricci curvature of $\partial V(t)$ is bounded from below, for these sufficiently large radius $t$ as $Ricc_{\partial V(t)}(\xi,\xi)\geq\delta(t)(m-1)\|\xi\|^{2}>0\,\,\forall\xi\in T\partial V(t)$ so, applying Myers’ Theorem $\partial V(t)$ is compact and has diameter $d(\partial V(t))\leq\frac{\pi}{\sqrt{\delta(t)}}$ (see [30]). Applying on the other hand Bishop’s Theorem, (see Theorem 6 in [2]), we obtain: (6.7) $\displaystyle\operatorname{Vol}(\partial V(t))\leq\frac{\operatorname{Vol}(S^{0,m-1}(1))}{\sqrt{\delta(t)^{m-1}}}$ and hence (6.8) $\displaystyle\frac{\operatorname{Vol}(\partial V(t))}{\operatorname{Vol}(S_{t}^{w})}\leq$ $\displaystyle\frac{1}{w(t)^{m-1}\sqrt{\delta(t)^{m-1}}}$ $\displaystyle=\frac{1}{\left(1-4\epsilon(t)\right)^{(m-1)/2}}$ Therefore, since for $t$ large enough $Vol(\partial D_{t}(o))\leq\sum_{i=1}^{\mathcal{E}(P)}Vol(\partial V_{i}(t))$ where $V_{i}$ denotes each end of $P$ then: (6.9) $\displaystyle\frac{\operatorname{Vol}(\partial D_{t}(o))}{\operatorname{Vol}(S_{t}^{w})}\leq\frac{\mathcal{E}(P)}{\left(1-4\epsilon(t)\right)^{(m-1)/2}}$ ## 7\. 
Proof of Theorem 3.5 To show assertion (1) we apply Theorem 2.15 and inequality (3.1) in Theorem 3.4 to obtain, for $r$ sufficiently large (we suppose that $\varphi^{-1}(o_{w})\neq\emptyset$, and take $o\in\varphi^{-1}(o_{w})$ in order to have that $\operatorname{Vol}(D_{r}(o))\geq\operatorname{Vol}(B^{w}_{r})$ for all $r>0$): (7.1) $\displaystyle 1\leq$ $\displaystyle\frac{\operatorname{Vol}(D_{r}(o))}{\operatorname{Vol}(B^{w}_{r})}\leq\frac{\operatorname{Vol}(\partial D_{r}(o))}{\operatorname{Vol}(S_{r}^{w})}$ $\displaystyle\leq\frac{\mathcal{E}(P)}{\left(1-4\epsilon(r)\right)^{(m-1)/2}}$ Moreover, we know (again using Theorem 2.15) that the volume growth function is non-decreasing. Therefore, taking limits in (7.1) when $r$ goes to $\infty$, we obtain: (7.2) $1\leq\lim_{r\to\infty}\frac{\operatorname{Vol}(D_{r}(o))}{\operatorname{Vol}(B^{w}_{r})}=\operatorname{Sup}_{r>0}\frac{\operatorname{Vol}(D_{r}(o))}{\operatorname{Vol}(B^{w}_{r})}\leq\mathcal{E}(P)$ Now, to prove assertion (2), we have, if $P$ has one end, that (7.3) $1\leq\operatorname{Sup}_{r>0}\frac{\operatorname{Vol}(D_{r}(o))}{\operatorname{Vol}(B^{w}_{r})}\leq 1$ Hence, as $f(r)=\frac{\operatorname{Vol}(D_{r}(o))}{\operatorname{Vol}(B^{w}_{r})}$ is non-decreasing, then $f(r)=1\,\,\forall r>0$, so we have equality in inequality (2.6) for all $r>0$, and $P$ is a minimal cone, (see [17] for details). ## 8\. Proof of Theorems 1.1 and 1.2 and the Corollaries ### 8.1. Proof of Theorem 1.1 We are going to apply Theorem 3.5. To do that, we must check hypotheses (1) and (2) in Theorem 3.4. We have, in this case, that the ambient manifold is the hyperbolic space $\mathbb{H}^{n}(b)$. Therefore all of its points are poles, so there exists at least one $o\in\mathbb{H}^{n}(b)$ such that $\varphi^{-1}(o)\neq\emptyset$. As is known, the hyperbolic space $\mathbb{H}^{n}(b)$ is a model space with $w(r)=w_{b}(r)=\frac{1}{\sqrt{-b}}\sinh\sqrt{-b}r$, so $w_{b}^{\prime}(r)=\cosh\sqrt{-b}r\geq 1\,\,\forall r>0$. Therefore, hypothesis (2) in Theorem 3.4 is fulfilled in this context. Concerning hypothesis (1), it is straightforward that (8.1) $\displaystyle\|B^{P}_{x}\|$ $\displaystyle\leq\frac{\delta(r(x))}{e^{2\sqrt{-b}\,r(x)}}\leq\frac{\epsilon(r)\sqrt{-b}}{\sinh\sqrt{-b}r\cosh\sqrt{-b}r}$ $\displaystyle=\frac{\epsilon(r)}{\cosh^{2}\sqrt{-b}r}\sqrt{-b}\coth\sqrt{-b}r=\frac{\epsilon(r)}{(w_{b}^{\prime}(r))^{2}}\eta_{w_{b}}(r)$ where $\epsilon(r)=\frac{\delta(r)}{4\sqrt{-b}}$ goes to $0$ when $r$ goes to $\infty$. Hence, hypothesis (1) in Theorem 3.4 is also fulfilled, so, applying inequality (3.2) in Theorem 3.5 (because $P$ is minimal), (8.2) $1\leq\lim_{r\to\infty}\frac{Vol(D_{r})}{Vol(B_{r}^{w_{b}})}\leq\mathcal{E}(P)$ Finally, when $P$ has one end, then $\lim_{r\to\infty}\frac{Vol(D_{r})}{Vol(B_{r}^{w_{b}})}=1$. Since $P$ is minimal, by Theorem 2.15, $f(r)=\frac{Vol(D_{r})}{Vol(B_{r}^{w_{b}})}$ is a monotone non-decreasing function, and, on the other hand, $f(r)\geq 1\,\,\forall r>0$ because of inequality (2.7). Hence $f(r)=1\,\,\forall r>0$, so $f^{\prime}(r)=0\,\,\forall r>0$. This last equality implies equality in inequality (2.6) for all $r>0$, (see [17] or [18] for details), and we apply the equality assertion in Theorem 2.15 to conclude that $P$ is totally geodesic in $\mathbb{H}^{n}(b)$. ### 8.2. Proof of Theorem 1.2 In this case, we apply Theorem 3.5, with $M^{n}_{w}=\mathbb{R}^{n}$, i.e., with $w(r)=w_{0}(r)=r$, ($b=0$). 
Hence, $w_{0}^{\prime}(r)=1>0\,\,\forall r>0$ and $\eta_{0}(r)=\frac{1}{r}$ and hypotheses (1) and (2) in this theorem are trivially satisfied. When $P$ has only one end we conclude as before that the volume growth function is constant so we conclude equality in (2.6) for all radius $r>0$. Hence $P$ is totally geodesic in $\mathbb{R}^{n}$ applying the corresponding equality assertion in Theorem 2.15. ### 8.3. Proof of Corollary 4.1 We are considering now a complete and proper immersion in $\mathbb{H}^{n}(b)$, as in Theorem 1.1, but $P$ is not necessarily minimal. In this setting hypotheses (1) and (2) in Theorem 3.4 are fulfilled (as we have checked in the proof above, without using minimality). Hence taking limits in (3.1) when we consider an increasing sequence $\\{t_{i}\\}_{i=1}^{\infty}$ such that $t_{i}\to\infty$ when $i\to\infty$, we have: $\liminf_{i\to\infty}\frac{\operatorname{Vol}(\partial D_{t_{i}})}{\operatorname{Vol}(S_{t_{i}}^{b,m-1})}\leq\mathcal{E}(P)$ ### 8.4. Proof of Corollary 4.2 Hypotheses (1) and (2) in Theorem 3.4 are trivially satisfied and we argue as in the proof of Corollary 4.1 to obtain the result. ### 8.5. Proof of Corollary 4.3 We apply Theorem 3.1. Our ambient manifold is $I\\!\\!K^{n}(b)$, ($b\leq 0$), so hypothesis (1) about the bounds for the radial sectional curvature holds, and as $w(r)=w_{b}(r)$ hence $w_{b}^{\prime}(r)\geq 1>0\,\,\forall r>0$ and $\eta_{w_{b}}^{\prime}(r)\leq 0\,\,\forall r>0$. This means that hypothesis (3) is fulfilled. Hypothesis (2) in Theorem 3.1 holds because $\|B^{P}_{x}\|\leq c\,h_{b}(\rho^{P}(x))$ where $\rho^{P}(x)$ is the (intrinsic) distance to a fixed $x_{o}\in\varphi^{-1}(o)$ and $c$ is a positive constant such that $c<1$. ### 8.6. Proof of Corollary 4.4 We apply again Theorem 3.1, having into account that the ambient space is the Cartan-Hadamard manifold $\mathbb{H}^{n}(b)\times\mathbb{R}^{l}$ and the model space used to compare is $\mathbb{R}^{m}$, with $w(r)=w_{0}(r)=r$. ## References * [1] M. Anderson The compactification of a minimal submanifold by the Gauss Map., Preprint IEHS (1984). * [2] I. Chavel Eigenvalues in Riemannian geometry, vol. 115 of Pure and Applied Mathematics, Academic Press Inc., Orlando, FL, 1984. Including a chapter by Burton Randol, With an appendix by Jozef Dodziuk. * [3] Q. Chen On the volume growth and the topology of complete minimal submanifolds of a Euclidean space J. Math. Sci. Univ. Tokyo 2 (1995), 657-669. * [4] S. S. Chern & R. Osserman Complete minimal surfaces in euclidean $n$-space., J. d’Analyse Math. vol. 19, 15-34, (1967). * [5] M. P. do Carmo Riemannian Geometry, Birkhauser, Boston Inc., 1992. * [6] R. Greene and H. Wu Function theory on manifolds which possess a pole., Lecture Notes in Math., vol. 699, Springer-Verlag, Berlin and New York, 1979\. * [7] R. Greene and H. Wu Gap theorems for noncompact Riemannian manifolds, Duke Math. J. 49 (1982), 731-756. * [8] R. Greene and H. Wu, On a new gap phenomenon in riemannian geometry, Proc. Nac. Acad. Sci. USA. 79 (1982), 714-715. * [9] R. E. Greene, P. Petersen & S. Zhu Riemannian Manifolds of Faster-Than-Quadratic Curvature Decay, Int. Math. Research Notices, 1994, No. 9. * [10] A. Grigor’yan Analytic and geometric background of recurrence and non-explosion of the Brownian motion on Riemannian manifolds, Bull. Amer. Math. Soc. 36 (1999), 135–249. * [11] Luquésio P. Jorge & W. Meeks III The topology of complete minimal surfaces of finite total Gaussian curvature., Topology, vol. 22 (2), 203-221, (1983). * [12] A. Kasue, & K. 
Sugahara Gap theorems for certain submanifolds of Euclidean spaces and hyperbolic space forms., Osaka J. Math. 24 (1987), 679-704. * [13] A. Kasue Gap theorems for minimal submanifolds of Euclidean space., J. Math. Soc. Japan 38 (3) (1986), 473-492. * [14] H. B. Lawson Lectures on minimal submanifolds., Monografias de Matemática, IMPA, Rio de Janeiro, Brazil. * [15] S. Markvorsen On the mean exit time from a minimal submanifold., J. Diff. Geom. 29 (1989), 1–8 * [16] S. Markvorsen & V. Palmer Torsional rigidity of minimal submanifolds., Proc. London Math Soc.(3) 93 (2006) 253-272. * [17] S. Markvorsen and V. Palmer The relative volume growth of minimal submanifolds., Archiv der Mathematik, 79 (2002), 507–514. * [18] S. Markvorsen and V. Palmer Generalized isoperimetric inequalities for extrinsic balls in minimal submanifolds., Journal reine angew. Math., 551 (2002), 101–121. * [19] S. Markvorsen and V. Palmer On the isoperimetric rigidity of extrinsic minimal balls., Differential Geometry and its Applications , 18 (2003), 47–54. * [20] J. Milnor Morse theory, Lecture Notes in Math., 699, (1979), Springer Verlag, Berlin. * [21] S. Muller & V. Sverak On surfaces of finite total curvature., J. Diff. Geometry, 42, 229-258 (1995). * [22] G. De Oliveira Compactification of minimal submanifolds of hyperbolic space., Comm. Analysis and Geometry, 1 (1993), 1-29. * [23] B. O’Neill Semi-Riemannian Geometry; With Applications to Relativity., Academic Press (1983). * [24] G. Pacelli Bessa & L. Jorge & J. Fabio Montenegro Complete submanifolds of $\mathbb{R}^{n}$ with finite topology., Communications in Analysis and Geometry, ISSN 1019-8385, Vol. 15, 4, 2007 , 725-732 * [25] G. Pacelli Bessa & M. Silvana Costa .On Submanifolds With Tamed Second Fundamental Form., Glasgow Mathematical Journal, 51, 2009, 669-680 * [26] V. Palmer Isoperimetric inequalities for extrinsic balls in minimal submanifolds and their applications., J. London Math. Soc. (2) 60, 2 (1999), 607–616. * [27] V. Palmer On deciding whether a submanifold is parabolic of hyperbolic using its mean curvature , Simon Stevin Transactions on Geometry, vol 1. 131-159, Simon Stevin Institute for Geometry, Tilburg, The Netherlands, 2010. * [28] R. Osserman Global properties of minimal surfaces in $\mathbb{E}^{\,3}$ and $\mathbb{E}^{\,n}$., Ann. of Math. vol. 80, 340-364, (1964). * [29] Q.H. Ruan New gap theorem on complete Riemannian manifolds., (2006). Retrieved from http://arxiv.org/abs/math/0605360 * [30] T. Sakai Riemannian Geometry, Translations of Mathematical Monographs, vol. 149, A.M.S.1996Addison-Wesley, Reading, MA 1990. * [31] K. Shiohama Total curvature and minimal areas of complete open surfaces., Proc. Amer. Math. Soc, 94 num 2 (1985), 310-316. * [32] R. Schoen Uniqueness, symmetry and embededness of minimal surfaces., J. Diff. Geometry, 18 (1983), 791-809. * [33] Y.T. Siu & S.T. Yau Complete Kähler manifolds with nonpositive curvature of faster than quadratic decay., Annals of Math, 105 (1977), 225-264. * [34] B. White Complete surfaces of finite total curvature., J. Diff. Geometry, 26, 315-326 (1987)
# The jet and resolved features of the central supermassive black hole of M 87 observed with EHT Makoto Miyoshi National Astronomical Observatory, Japan, 2-21-1, Osawa, Mitaka, Tokyo, Japan, 181-8588 Yoshiaki Kato Computational Astrophysics Laboratory RIKEN, 2-1 Hirosawa, Wako, Saitama, 351-0198, Japan, e-mail: <EMAIL_ADDRESS>Junichiro Makino Department of Planetology, Kobe University, 1-1 Rokkodaicho, Nada-ku, Kobe, Hyogo 650-0013, Japan, e-mail: <EMAIL_ADDRESS> (Accepted 2022/05/05) ###### Abstract We report our independent image reconstruction of M 87 from the public data of the Event Horizon Telescope Collaboration (EHTC). Our result is different from the image published by the EHTC. Our analysis shows that (a) the structure at 230 GHz is consistent with that seen in lower-frequency VLBI observations, (b) the jet structure is evident at 230 GHz, extending from the core to a few mas, though the intensity rapidly decreases along the axis, and (c) the “unresolved core” is resolved into three bright features, presumably showing an initial jet with a wide opening angle of $\sim 70^{\circ}$. The ring-like structures of the EHTC can be created not only from the public data, but also from simulated data of a point image. Also, the rings are very sensitive to the FOV size. The u-v coverage of the EHT lacks $\sim 40~{}\mu\rm as$ fringe spacings; combined with a very narrow FOV, this created the $\sim 40~{}\mu\rm as$ ring structure. We conclude that the absence of the jet and the presence of the ring in the EHTC result are both artifacts owing to the narrow FOV setting and the u-v data sampling bias effect of the EHT array. Because the EHTC’s simulations only take into account the reproduction of the input image models, and not those of the input noise models, their optimal parameters can enhance the effects of sampling bias and produce artifacts such as the $\sim 40~{}\mu\rm as$ ring structure, rather than reproducing the correct image. jets, accretion disks — black hole physics — galaxies: active — galaxies: individual (M 87) — interferometer: VLBI — data calibrations — data sampling bias ††journal: ApJ††facilities: EHT††software: AIPS (Greisen, 2003), DIFMAP (Shepherd, 1997) ## 1 Introduction Supermassive black holes (SMBHs) at the centers of galaxies often have spectacular jets, sharply collimated and extending to intergalactic scales. However, the mechanism of the generation of such jets by black holes has been an enigma for over a century (Blandford et al., 2019). The SMBH of the elliptical galaxy M 87, the object in which an astrophysical jet was first discovered (Curtis, 1918), is the best place to study the origin of the jet because it has the largest apparent angular size among black holes with strong jets, owing to the relatively small distance (16.7 Mpc; Mei et al. (2007)) and large mass ($6.1\pm~{}0.4\times 10^{9}~{}M_{\odot}$; Gebhardt et al. (2011)), which implies that $1~{}R_{\rm S}$ = 7$~{}\mu\rm as$. The black hole with the largest apparent angular size, Sgr A∗, is present in our galaxy, but unfortunately it has no jet and its activity is very low in comparison to that of a typical AGN. In addition, it is difficult to obtain high-resolution images of Sgr A∗ owing to its rapid time variability during VLBI observations (Miyoshi et al., 2019; Iwata et al., 2020). 
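The angular scale $1~{}R_{\rm S}\simeq 7~{}\mu\rm as$ follows directly from the quoted mass and distance. The short Python sketch below is our own illustration of this conversion (it is not part of the original analysis); it uses standard physical constants and the mass and distance values cited above.

```python
import math

# Physical constants (SI units)
G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8        # speed of light [m/s]
M_sun = 1.989e30   # solar mass [kg]
pc = 3.086e16      # parsec [m]

# Values quoted in the text for the M 87 SMBH
M_bh = 6.1e9 * M_sun   # Gebhardt et al. (2011)
D = 16.7e6 * pc        # Mei et al. (2007)

# Schwarzschild radius and its apparent angular size
R_s = 2.0 * G * M_bh / c**2                            # [m]
theta_rad = R_s / D                                    # small-angle approximation
theta_muas = theta_rad * (180.0 / math.pi) * 3600.0e6  # rad -> micro-arcseconds

print(f"R_S = {R_s:.2e} m")
print(f"1 R_S subtends {theta_muas:.1f} micro-arcseconds")  # ~7 muas
```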
Observations of the core and jet of M 87 have been performed in multiple wavelengths, from X-ray to radio (Biretta et al., 1995; Sparks, Biretta & Macchetto, 1996; Biretta, Sparks & Macchetto, 1999; Perlman et al., 1999, 2001; Marshall et al., 2002; Wilson & Yang, 2002; Lister & Homan, 2005; Perlman & Wilson, 2005; Harris et al., 2006; Madrid et al., 2007; Wang & Zhou, 2009). Also, with high spatial resolution observations using VLBI of the SMBH of M 87 have been performed in multiple frequencies up to 86 GHz (Reid et al., 1989; Junor, Biretta & Livio, 1999; Lobanov, Hardee & Eilek, 2003; Ly, Walker & Wrobel, 2004; Cheung, Harris & Stawartz, 2007; Kovalev et al., 2007; Ly et al., 2007; Walker et al., 2008; Hada et al., 2011; Hardee & Eilek, 2011; Asada & Nakamura, 2012; Giroletti et al., 2012; Hada et al., 2013; Nakamura & Asada, 2013; Asada et al., 2014; Hada et al., 2016; Mertens et al., 2016; Walker et al., 2016; Britzen et al., 2017; Hada et al., 2017; Kim et al., 2018). Using the ”core shift” technique, the distance between the brightness peak of the core and the actual location of the SMBH has been estimated to be from 14 to 23 $R_{\rm S}$ (Hada et al., 2011). Observations with higher spatial resolution at 230 GHz should allow further exploration of the core and jet. Pioneering observations of EHT 111https://eventhorizontelescope.org/ were started at 2008 (Doeleman et al., 2008). In 2017, EHT attained sufficient sensitivity by including phased-ALMA in the array and equipping all stations with 32 Gbps recording systems. The EHTC reported their findings of a ring-shaped black hole shadow from the observational data (the EHTC, 2019). The ring diameter was approximately $42~{}\mu\rm as$, which is consistent with that expected from the measured mass of M 87 SMBH ($6\times 10^{9}~{}M_{\odot}$) using stellar dynamics (Gebhardt et al., 2011) 222The M 87 black hole mass is still controversial. A mass of $M_{\mathrm{BH}}=(3.5^{+0.9}_{-0.7})\times 10^{9}\ M_{\odot}$ (68 $\%$ confidence) is obtained from gas dynamics (Walsh et al., 2013).. We found three problems in the EHTC imaging results. First, although the EHT’s intrinsic FOV (Field Of View) is large enough to cover both the core and the jet structure together, no jet structure has been reported by the EHTC. The M 87 jet is powerful and has been detected in lower frequency VLBI observations. There was no detailed description of the investigation of the jet structure in (the EHTC, 2019); in 2017, the EHT array achieved unprecedented sensitivity, so it is not surprising that many AGN experts have strong expectations for detecting new jet structures of M 87. Second, the ring diameter of the EHTC imaging ($d=42\pm~{}3~{}\mu\rm as$; Event Horizon Telescope collaboration (2019a)) coincides with the separation between the main beam and the first sidelobe in the dirty beam (identical to point spread function (PSF)) of the EHT u-v coverage for the M 87 observations. In the EHTC paper, there is no description of the concrete structure of the dirty beam, such as sidelobes. Misidentification of sidelobes as real images is a common occurrence in radio interferometer observations with a small number of stations such as the EHT array. The EHTC do not seem to take such a risk into account (at least it is not clearly mentioned in their paper). There is a possibility that the EHTC ring is a mixture of the real image and the residual sidelobes in the diffraction patterns. 
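To put the $\sim 42~{}\mu\rm as$ scale of the second problem in context, it is useful to relate angular fringe spacings to baseline lengths at the observing frequency. The following Python sketch is our own back-of-the-envelope illustration (the frequency is the value quoted later in the paper for the public data; the baseline lengths are round numbers, not the exact EHT baselines):

```python
import math

c = 2.998e8          # speed of light [m/s]
nu = 229.071e9       # observing frequency [Hz] (value quoted for the public data)
lam = c / nu         # wavelength, about 1.31 mm

RAD_TO_MUAS = (180.0 / math.pi) * 3600.0e6   # radians -> micro-arcseconds

def fringe_spacing_muas(baseline_m):
    """Angular fringe spacing lambda/B in micro-arcseconds."""
    return (lam / baseline_m) * RAD_TO_MUAS

def baseline_for_spacing_m(theta_muas):
    """Baseline length whose fringe spacing equals theta_muas."""
    return lam / (theta_muas / RAD_TO_MUAS)

# A ~42 muas fringe spacing corresponds to a baseline of roughly 6,400 km,
# while a 10,000 km baseline samples a spacing of about 27 muas.
print(f"baseline for a 42 muas spacing: {baseline_for_spacing_m(42.0) / 1e3:.0f} km")
print(f"fringe spacing of a 10,000 km baseline: {fringe_spacing_muas(1.0e7):.1f} muas")
```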
The last problem is the brightness temperature of the ring reported by the EHTC ($T_{\rm b}=6\times 10^{9}~{}K$ at most from Figure 3 in Event Horizon Telescope collaboration (2019a) 333 The EHTC show several different ”fiducial” images in their papers. We believe that the images shown in Figure 3 of Event Horizon Telescope collaboration (2019a) are the FINAL ”fiducial” images of the EHTC because Event Horizon Telescope collaboration (2019a) is for reporting the scientific results about ”the shadow of the supermassive black hole”. ), which is significantly lower than that of their previous M 87 observations ($T_{\rm b}$ from 1.23 to 1.42 $\times 10^{10}$ K; Akiyama et al. (2015)) despite having higher spatial resolutions. 444 The possibility of time variation in brightness temperature cannot be ruled out. The number of measurements is extremely small, and future observations are desirable. The 86 GHz Very Long Baseline Array (VLBA; Napier et al. (1993)) observations have shown that the core brightness temperature is $T_{\rm b}=1.8\times 10^{10}$ K (Hada et al., 2016). Kim et al. (2018) also reported the brightness temperature is $T_{\rm b}\sim(1-3)\times 10^{10}$ K at 86 GHz. The spatial resolutions of both observations are lower than that of EHT ($\theta_{BEAM}>100~{}\mu\rm as$), but they show higher brightness temperatures. In any case, it is quite rare to observe a brightness temperature of less than $10^{10}~{}K$ for the M 87 core by VLBI. In observations of very compact objects, if the spatial resolution is low, the measured brightness temperature could be underestimated because the solid angle of the emission region tends to be estimated larger than the actual size. If the spatial resolution is higher, the measured brightness temperature can be expected to be higher because the solid angle of the emission region can be more accurately identified. The measured brightness temperature increases until the spatial resolution becomes sufficient to determine the fine structure of the compact object. However, the measured brightness temperature may surely decrease once sufficient spatial resolution is achieved and the fine structure is recognized. The EHTC observations show a ring diameter of about $40~{}\mu\rm as$, almost the same as the estimated source size in Akiyama et al. (2015). However, since it is a ring structure, the center of the image is darker, so assuming that the flux density is the same 555 The EHTC papers do not show the flux density of the ring image. , the highest-brightness part in the ring image should show a higher brightness temperature than that indicated by Akiyama et al. (2015). The lower brightness temperatures and/or flux densities in the images obtained by the EHTC could be the result of the insufficient recovery of the data coherence by improper calibrations. Because of these three problems we decided to reanalyze the data released by the EHTC 666First M 87 EHT Results: Calibrated Data https://eventhorizontelescope.org/for-astronomers/data http://datacommons.cyverse.org/browse/iplant/home/shared/commons_repo/curated/theEHTC_FirstM87Results_Apr2019 DOI:10.25739/g85n-f134 . Using the public data released by the EHTC, we succeeded in reconstructing the core and jet structure in M 87. We have resolved the region containing the SMBH in M 87 for the first time and found the structure of the core and knot separated by $\sim 33~{}\mu\rm as$ ($550~{}\rm au$ or 4.7 $R_{\rm S}$) on the sky, which shows time variation. 
This could be the scene of the initial ejection of the jet from the core. We also found a feature to the west, $\sim 83~{}\mu\rm as$ away from the core. These facts are important for identifying the jet formation mechanism from SMBHs. We need further observations to determine the nature of the features. We also found emissions along the axis of the jet up to a point a few mas from the core, showing that the edges of the jet are brighter, similar to what was observed at low frequencies. We first describe the observational data released by the EHTC in Section 2, our data calibration and imaging process in Section 3, and our imaging results in Section 4. Then, we investigate how the EHTC ring was created in Section 5. In Appendix A, we show that the EHT array cannot detect any feature whose size is larger than $30~{}\mu\rm as$. As a supplement to Section 5.2, Appendix B shows the dirty beam (PSF) shapes of the EHT array for the M 87 observations in two different types: natural weighting and uniform weighting. Both show the substructure with a scale of $\sim 40~{}\mu\rm as$. In Appendix C, we show that the missing spatial Fourier components of $\sim 40~{}\mu\rm as$ also affect the structure in our CLEAN map. ## 2 Observational data The observational data were recorded on 5, 6, 10, and 11 April 2017. The EHT array consists of seven submillimeter radio telescopes located at five places across the globe, yielding a baseline length over 10000 km (the EHTC, 2019). For the observational details and the instruments, refer to the series of the EHTC papers (Event Horizon Telescope collaboration , 2019a, b, c, d, e, f). The raw data archives have not been released by the EHTC yet, but they released the calibrated visibility data with their recipe of the data reduction procedure. We first analyzed the released EHT data sets of M 87 using the standard VLBI data calibration procedure and imaging methods without referring to their data procedure. The data are time-averaged into 10 sec bins and are stored into 2 IF channels. According to the header of the public FITS data the IF bandwidth is 1.856 GHz in each IF. Because of the removal of data of the strong calibrator source (3C 279), we could not perform the fringe search to correct the errors of station positions, clock parameters, and the receiving band-path calibration by ourselves. Therefore, our independent calibration was limited to the self-calibration method. We checked the details of the data and noticed that the visibility values of the RR-channel and LL-channel are exactly the same. The headers of the EHTC open FITS data files contain two data columns labeled RR and LL, respectively; the FITS format data does indeed contain data columns labeled RR and LL. We checked all the original public FITS data sets (there are 8 sets) and confirmed that the data in the RR and LL columns are the same in all the data sets; there are a total of 51119 pairs of RR and LL, and all the pairs have exactly the same real, imaginary, and weight values. We found in a document of the EHTC the following description: 777 README.md in https://github.com/eventhorizontelescope/2019-D01-01 > The data are time averaged over 10 seconds and frequency averaged over all > 32 intermediate frequencies (IFs). All polarization information is > explicitly removed. 
To make the resulting ‘uvfits‘ files compatible > with popular very-long-baseline interferometry (VLBI) software packages, the > circularly polarized cross-hand visibilities ‘RL‘ and ‘LR‘ are set to zero > along with their errors, while parallel-hands ‘RR‘ and ‘LL‘ are both set to > an estimated Stokes *I* value. Measurement errors for ‘RR‘ and ‘LL‘ are each > set to sqrt(2) times the statistical errors for Stokes *I*. In other words, the open data in the EHTC FITS format are not the visibilities of either polarization, but the Stokes I values, $V_{ij,I}=(V_{ij,RR}+V_{ij,LL})/2$ (Event Horizon Telescope collaboration, 2019c), and these calculated values are stored in the columns of RR and LL. This information is not included in the attached tables or files of the FITS data. For this reason, the EHTC should have used the FITS format for intensity data instead of that for dual-polarization data. It also means that the corrections made between the correlator output and the open data cannot be independently verified. The EHTC’s open data integrate the wide frequency band of 1.86 GHz into a single channel. Such wideband integration is extremely rare and unsuitable for public data because it results in a loss of information over a wide field due to the bandwidth smearing effect. The effect is similar to the peripheral light fall-off of optical camera lenses. Visibility data integrated in the frequency direction have reduced sensitivity away from the field center. This phenomenon occurs because originally independent (u, v) points are integrated in the frequency domain. The further away from the center of the field of view (phase center), the larger the PSF becomes and the lower its peak; the detection sensitivity in the peripheral field becomes worse (Thompson et al., 2001; Bridle & Schwab, 1989, 1999). Due to the bandwidth smearing effect, the peak of the PSF away from the phase center suffers attenuation, as shown in Figure 1. In the case of the EHTC open data, the ratios of peak heights relative to that at the phase center are $\sim~{}50~{}\%$ at a radius of 5 mas, and $\sim~{}27~{}\%$ at a radius of 10 mas. Even at a radius of 20 mas from the center, the ratio is $\sim~{}14~{}\%$. If a component of sufficient intensity is present even at a position far from the center, it will still be detected. We did not abandon such a possibility and set a wide field for imaging, as explained in Section 3. Figure 1: The bandwidth smearing effect calculated explicitly for the EHTC open data by following equations 6-75 and 6-76 in Thompson et al. (2001). The adopted synthesized beam size is $\theta_{b}=21.06~{}\mu as$, which is the geometric mean of the major and minor axes of the beam shapes of the four observing days shown in Table 1 of Event Horizon Telescope collaboration (2019d). We substituted $\Delta\nu~{}=1.856~{}GHz$ for the bandwidth and $\nu_{0}=~{}229.071~{}GHz$ for the observing frequency. The coherence time of the obtained data has a significant impact on the data analysis and imaging results; the EHTC shows the atmospheric coherence time for all observations in the 2017 campaign (Event Horizon Telescope collaboration, 2019b), but not for those limited to the M 87 observations only. Therefore, we used the AIPS task COHER to check the coherence time of the visibility data. Here, the coherence time is defined as the time when the amplitude becomes $1/e\sim 0.36$ by vector averaging. The task COHER cannot identify the reasons for the coherence loss. 
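For readers unfamiliar with this definition, the following Python sketch illustrates, on synthetic data, how a vector-averaging coherence time based on the $1/e$ criterion can be estimated. It is only a conceptual illustration of the definition used above, not a reproduction of the AIPS COHER implementation, and the synthetic phase noise is an arbitrary assumption.

```python
import numpy as np

def coherence_time(phases_rad, dt_sec, threshold=1.0 / np.e):
    """Averaging time at which the vector-averaged amplitude of a
    unit-amplitude visibility stream first drops below `threshold`.

    phases_rad : 1-D array of visibility phases sampled every dt_sec seconds.
    """
    vis = np.exp(1j * phases_rad)            # unit-amplitude visibilities
    n = len(vis)
    for k in range(1, n + 1):                # averaging length in samples
        m = n // k                           # number of complete blocks
        blocks = vis[: m * k].reshape(m, k)  # consecutive averaging blocks
        mean_amp = np.abs(blocks.mean(axis=1)).mean()
        if mean_amp < threshold:
            return k * dt_sec
    return n * dt_sec                        # amplitude never dropped below 1/e

# Example with synthetic random-walk phase noise sampled every 10 s
rng = np.random.default_rng(0)
phases = np.cumsum(rng.normal(0.0, 0.9, size=360))   # illustrative only
print(f"estimated coherence time: {coherence_time(phases, 10.0):.0f} s")
```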
In any case, the calculated coherence time implies the total amount of coherence loss that the data has suffered. The coherence time $T_{cor}=0.45\pm 0.7\rm~{}min$ was obtained from the entire data set (average of all baselines). However, the coherence time was not constant; the data from the first two days showed $T_{cor}=0.54\pm 0.91\rm~{}min$ and the data from the last two days showed $T_{cor}=0.35\pm 0.36\rm~{}min$. We took it as significant that $39\%$ of the total data showed $T_{cor}\sim 0.167\rm~{}min~{}(\sim 10~{}sec)$. Without any kind of calibrations, we do not expect to improve SNR by long time integration. We decided that no meaningful solution could be obtained by increasing the integration time (SOLINT, solution interval) in self-calibration. Therefore, we always set SOLINT to $0.15~{}min$ when performing self-calibration. We used both data channels in their original form. We found that our calibrations of the EHT data sets can be significantly improved and also obtained an improved solution for calibrations using the hybrid mapping method (Pearson & Readhead, 1984; Readhead & Wilkinson, 1978; Schwab, 1980). The observations were performed over four days. We succeeded in increasing the sensitivity by integrating two days’ data or all of them. ## 3 Our data calibration and Imaging In this section, we report on the procedures and results of data calibration and imaging using standard methods of VLBI data analysis for sources with unknown structures. In Section 3.1 we describe the hybrid mapping procedures used in this study. Section 3.2 describes how we identified the second feature from the first map, and Section 3.3 describes the process that followed. In Section 3.4 , we present our final images. In Section 3.5, we present a solution for self-calibration of both amplitude and phase, using the final image as a model to determine the quality of the EHT public data. ### 3.1 Hybrid mapping process In the analysis of VLBI data, the hybrid mapping method is widely used to obtain a calibration solution for the data and to reconstruct the brightness distribution. Hybrid mapping, which consists of repeatedly assuming one image model, performing self-calibration, obtaining a trial solution for calibration, and improving the image model for the next self-calibration, is the only method that is essential for precise calibration of VLBI data (Pearson & Readhead, 1984; Readhead & Wilkinson, 1978; Schwab, 1980). VLBI systems are not so stable in phase and amplitude as connected radio interferometers. In addition, millimeter- and submillimeter-wave observations are more affected by atmospheric variations. Therefore, the hybrid mapping method is becoming more and more important in the calibration of high frequency VLBI data such as the EHT observations. We performed a standard hybrid mapping process using the tasks CALIB and IMAGR in AIPS (the NRAO Astronomical Image Processing System 888http://www.aips.nrao.edu/index.shtm, Greisen (2003)). ### 3.2 The first step in hybrid mapping process #### 3.2.1 Solutions of self-calibration using a point source model As a first step in this process, a single point source (located at the origin) was used as the first image model to obtain a solution for the visibility phase calibration from the self-calibration. The parameters used for the task CALIB are listed in Table 1. As mentioned in Section 2, the coherence time of EHT public data is very short. The solution interval (SOLINT) was set to 0.15 minutes. We set the $SNR~{}cutoff=3$ for safety. 
This SNR cutoff is more conservative than the values many researchers ultimately adopt. Solutions that did not meet the criterion (the SNR cutoff) were flagged and discarded. Figure 2 shows the phase solutions for this first step. Because phase is a relative quantity, the phase of the ALMA station (AA) is used as the reference. For all stations, non-zero and time-varying phase values were found by self-calibration. Four stations, APEX (AP), SMT (AZ), LMT (LM), and Pico Veleta (PV), always show the same respective trends over the four days of observations, suggesting that errors in the station positions remain (a sinusoidal curve with a period of one sidereal day is seen at all stations when there is an error in the position of the observed object). Another feature is the phase difference between IF1 and IF2, which is almost constant for all stations except JCMT (JC) and AA (the phase reference station). If the EHT public data were sufficiently calibrated, neither of these two phenomena should appear. In conclusion, the "calibrated" data published by the EHTC are not yet sufficiently calibrated; in order to obtain reliable images, the EHT public data need to be calibrated further.

| Parameter | Value |
|---|---|
| SOLTYPE | 'L1' |
| SOLMODE | 'P' (phase only) |
| SMODEL | 1, 0 (1 Jy single point) |
| REFANT | 1 (ALMA) |
| SOLINT (solution interval) | 0.15 (min) |
| APARM(1) | 1 |
| APARM(7) (SNR cut off) | 3 |

Table 1: Parameters of CALIB for the first self-calibration.

Figure 2: Initial phase(-only) solutions obtained by self-calibration using a one-point model. The red dots are the solutions for the IF 1 data, and the blue dots are those for the IF 2 data. The solutions for L (left-handed circular) polarization are not plotted here; the visibility data for LL are exactly the same as for RR, so the solutions for L are identical to those for R in the figure.

#### 3.2.2 The first CLEAN map

Figure 3 shows the CLEAN (Clark, 1980; Högbom, 1974) image obtained from the data after applying the first phase calibration solutions shown in Figure 2. The IMAGR imaging parameters are listed in Table 2. (In all figures showing imaging results, the x-axis indicates relative Right Ascension and the y-axis relative Declination.) The purpose of this imaging is to find the second brightest component after the central brightest peak. As will be explained in Section 5.1, although the EHTC involved the largest number of stations ever, the u-v coverage of the EHT array is formed by only 7 stations, or effectively 5 stations if we exclude the very short baselines. The synthesized beam (dirty beam) is therefore not as sharply peaked as the dirty beams of multi-element interferometers such as ALMA and the VLA, and it is not easy to recognize the complex brightness distribution of the observed source in a tentative map composed of such a scattered dirty beam (PSF). We therefore performed CLEAN with the parameters shown in Table 2. This method is effective when the structure of the observed source is not point symmetric. We set the loop gain (GAIN) to 1.0 and extracted the entire brightest peak from the dirty map in the first CLEAN subtraction. This component was then replaced by a sharp Gaussian restoring beam and combined with the brightness distribution of the remaining dirty map; the image in Figure 3 was created in this way. This removes the bright but scattered PSF pattern of the brightest point that dominates the dirty map and clarifies the presence of the second brightest component in the image.
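To make the logic of this first step concrete, the toy sketch below (our own illustration with made-up numbers, not the actual AIPS run) builds a dirty map from a hypothetical core-plus-knot sky and a point-symmetric dirty beam, removes the full response of the brightest pixel in a single GAIN = 1.0 CLEAN subtraction, and then uses the point asymmetry of the residual to locate the second component.

```python
import numpy as np

n = 257                      # odd size so the grid has an exact centre pixel
c = n // 2
yy, xx = np.mgrid[:n, :n]

def gauss(x0, y0, sigma):
    return np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * sigma ** 2))

# Hypothetical sky: a bright core at the centre and a weaker knot offset from it.
sky = 1.0 * gauss(c, c, 1.5) + 0.3 * gauss(c + 20, c - 20, 1.5)

# Hypothetical dirty beam: a central lobe plus strong, point-symmetric sidelobes,
# a crude stand-in for the sparse u-v coverage discussed in Section 5.1.
psf = (gauss(c, c, 3.0)
       + 0.4 * gauss(c - 25, c + 15, 3.0)
       + 0.4 * gauss(c + 25, c - 15, 3.0))

def convolve(img, kernel):
    """Circular convolution, keeping the kernel peak at the map centre."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(kernel))))

dirty = convolve(sky, psf)

# One CLEAN subtraction with loop gain 1.0: remove the full PSF response of the peak.
py, px = np.unravel_index(np.argmax(dirty), dirty.shape)
shifted_psf = np.roll(np.roll(psf, py - c, axis=0), px - c, axis=1)
residual = dirty - dirty[py, px] / psf[c, c] * shifted_psf

# Point-symmetric subtraction errors cancel when the residual is compared with its
# 180-degree rotation about the subtracted peak; a genuine second component does not.
asym = residual - residual[::-1, ::-1]
ky, kx = np.unravel_index(np.argmax(asym), asym.shape)
print("second component recovered near pixel offset:", kx - c, ky - c)   # roughly (+20, -20)
```

Because any dirty beam is point symmetric, the imperfect subtraction of the central component cancels in the asymmetry map while a real secondary component does not; this is exactly the property exploited in the search described above.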
Note that if the data are not properly calibrated and the actual PSF corresponding to a point source differs from the theoretically calculated PSF shape, some brightness caused by the brightest point may remain in the residual image. However, such a remaining brightness distribution is also point symmetric with respect to the location of the brightest point (practically the same location as the center of the map) and does not contribute any asymmetric structure to the image. Therefore, if there is an asymmetric structure in the image, it is not related to the brightest component but is due to another bright point source. Thus, by searching for asymmetric structure, we can find the second component of the observed source.

This image (Figure 3 shows its central $600~\mu as$ square) has a nearly point-symmetric structure with respect to the center of the map. The overall pattern is a series of multiple ridges in the $PA=55^{\circ}$ direction; this structure is due to the non-uniformity of the u-v coverage. In addition to the central peak P, there are several other bright features, whose peak brightnesses are listed in Table 3. Features a, b, c, d, e, f, g, h, i, j, and k have corresponding features located at their point-symmetric positions (denoted a∗, b∗, c∗, d∗, e∗, f∗, g∗, h∗, i∗, j∗, and k∗). Curiously, the features located to the upper right of the center are always brighter than their symmetric counterparts to the lower left, i.e., the brightness ratio is greater than one. This may indicate the existence of a large-scale asymmetric brightness distribution in the observed object, extending from the center toward the upper right, which is roughly consistent with the M 87 jet propagation direction $PA=-72^{\circ}$ (Walker et al., 2018). Examining the brightness ratio of each pair, we find that the pair c & c∗ ($ratio=1.146$) has the largest ratio, followed by the pair a & a∗ ($ratio=1.122$). In terms of absolute brightness, feature a ($2.396\times 10^{-2}$ Jy/beam) is brighter than feature c ($1.982\times 10^{-2}$ Jy/beam). In addition, there is a ridge extending from the central bright point (P) toward feature a. The direction of this ridge ($PA=-45^{\circ}$) is completely different from the direction of the multiple ridges seen across the image, and no other ridge shares this direction. Based on these characteristics, we decided to continue the hybrid mapping by adopting two points for the next image model, i.e., a model with one point at the center and one at the location of feature a.

| Parameter | Value |
|---|---|
| DOCALIB | 2 |
| CELLSIZE | $1.22011\times 10^{-6}$, $1.22011\times 10^{-6}$ (asec) |
| FLDSIZE | 8192, 8192 (pix) |
| ROBUST | 0 |
| NITER | 1 |
| GAIN | 1.0 |

Table 2: Parameters of IMAGR for the first trial imaging.

Figure 3: First-step image of our hybrid mapping process. The restoring beam size is about $20~\mu\rm as$, and the central $600~\mu\rm as$ square of the map is magnified. Contour lines are drawn at every 20$\%$ level of the peak brightness of $3.981\times 10^{-1}$ (arbitrary units are used in the figure, not Jy/beam). One pixel corresponds to $1.22011~\mu\rm as$.
| Name | Brightness (Jy/beam) | Name | Brightness (Jy/beam) | Ratio | Order |
|---|---|---|---|---|---|
| P | $8.736\times 10^{-2}$ | | | | |
| a | $2.396\times 10^{-2}$ | a∗ | $2.136\times 10^{-2}$ | 1.122 | 2 |
| b | $2.438\times 10^{-2}$ | b∗ | $2.187\times 10^{-2}$ | 1.115 | 3 |
| c | $1.982\times 10^{-2}$ | c∗ | $1.729\times 10^{-2}$ | 1.146 | 1 |
| d | $2.099\times 10^{-2}$ | d∗ | $1.889\times 10^{-2}$ | 1.111 | 4 |
| e | $2.780\times 10^{-2}$ | e∗ | $2.534\times 10^{-2}$ | 1.097 | 5 |
| f | $2.420\times 10^{-2}$ | f∗ | $2.257\times 10^{-2}$ | 1.072 | 7 |
| g | $2.368\times 10^{-2}$ | g∗ | $2.205\times 10^{-2}$ | 1.074 | 6 |
| h | $2.334\times 10^{-2}$ | h∗ | $2.282\times 10^{-2}$ | 1.023 | 9 |
| i | $2.364\times 10^{-2}$ | i∗ | $2.337\times 10^{-2}$ | 1.012 | 10 |
| j | $2.475\times 10^{-2}$ | j∗ | $2.404\times 10^{-2}$ | 1.030 | 8 |
| k | $2.328\times 10^{-2}$ | k∗ | $2.319\times 10^{-2}$ | 1.004 | 11 |

Table 3: Peak brightness of the features that appeared in the first-step image. The 11 features near the center are shown. P is the central peak that was replaced by the restoring beam. Features other than P are as they appear in the brightness distribution of the residual dirty map.

### 3.3 Our hybrid mapping process

After obtaining the two-point image model, more than 100 iterations, including trial and error, were performed in the hybrid mapping process. Most of the CLEAN images were produced with the parameters listed in Table 4. As described in Event Horizon Telescope collaboration (2019d), care must be taken in the choice of FOV, as incorrect restrictions will result in incorrect image structures. Considering the well-known structure of M 87, we restricted the imaging region with eight BOXes where emission could be detected. For the self-calibration, the selected CLEAN components were used as the next imaging model, with the parameters of Table 1. By repeating phase-only self-calibration in this way, we were able to find better images and calibration solutions. We restricted ourselves to phase-only solutions at this stage because solving simultaneously for the amplitudes during hybrid mapping carries the risk of driving the solution in the wrong direction, whereas phase-only self-calibration is comparatively safe: even if a wrong model is used and a completely wrong solution is obtained, the closure phase is automatically preserved and convergence to a completely wrong image can usually be avoided. The amplitude solution from self-calibration, on the other hand, can become arbitrarily large or small if a wrong image model is used. It is safer to attempt amplitude self-calibration only after a trustworthy image model has been obtained from the phase self-calibration.

| Parameter | Value |
|---|---|
| ANTENNAS | 0 (all) |
| DOCALIB | 2 |
| CELLSIZE | $1.5\times 10^{-6}$, $1.5\times 10^{-6}$ (asec) |
| UVRANGE | 0, 0 (no limit) |
| FLDSIZE | 16384, 16384 (pix) |
| ROBUST | 0 |
| NITER | 40000 |
| GAIN | 0.05 or 0.005 |
| FLUX | -1.0 (Jy) |
| BMAJ, BMIN | 0.00002, 0.0002, and 0 (asec) |
| BPA | 0 (∘) |
| RASHIFT | $-9.25\times 10^{-3}$ (asec) |
| DECSHIFT | $8.15\times 10^{-3}$ (asec) |
| NBOXES | 8 |
| BOX(1) | -1, 912, 1329, 2185 (pix) |
| BOX(2) | -1, 820, 2049, 2729 (pix) |
| BOX(3) | -1, 912, 3105, 3129 (pix) |
| BOX(4) | -1, 1184, 4481, 3689 (pix) |
| BOX(5) | -1, 1792, 6321, 4473 (pix) |
| BOX(6) | -1, 2448, 8753, 5673 (pix) |
| BOX(7) | -1, 2848, 11537, 7001 (pix) |
| BOX(8) | -1, 2800, 13377, 7833 (pix) |

Table 4: Typical parameters of IMAGR for our hybrid mapping process. The imaging area is $24.576~\rm mas$ square, but it is limited by the BOX setting.
”$Flux=-1.0$” means that the terminating condition of CLEAN subtraction is the first occurrence of a negative maximum value. ### 3.4 Our final image In the latter half of the imaging process, we selected several candidates for the final image by comparing the difference in closure phase between the image and the real data. Furthermore, we created several image models using the CLEAN components of the candidate images to find the best image with the minimum closure residuals. Since we found that the source structure has time variation, we divided the data into the data of the first two days and the data of the last two days. As a result, we obtained the best images as shown in Figure 4. The data from the first two days was a composite of seven raw CLEAN maps, and the data from the last two days was a composite of nine raw CLEAN maps. Both consist of CLEAN components only. In the images of the first two days, adjacent CLEAN components within $2~{}\mu\rm as$ are merged into one component. In the images of the last two days, adjacent CLEAN components within $5~{}\mu\rm as$ are merged into one component. Since the eight BOXes cover a large area, the resulting CLEAN component contains three types of emission: real emission, associated diffraction (sidelobes), and false acquisitions. For example, the bright emission on the right edge is not real. Such unreal bright spots often appear when the VLBI data is analyzed and imaged. When such unreal brightness appears, there may actually be strong emissions outside the BOXes. We produced a large image (30 arcsec field of view) using very short baseline (SM-JC and AA-AP) visibility data, but could not detect any new strong features. To get a more complete image, we need to select only the real ones from these CLEAN components and perform CLEAN imaging again with each narrow BOX to cover the selected ones. However, since our data did not have enough u-v coverage and quality to select the correct ones among the CLEAN components, we gave up the task of extracting the CLEAN components this time. Nevertheless, the quality of the final image does not seem to be too bad. The closure residuals of the resulting image show a small variance comparable to the EHTC ring image. The residual of our map for the first two days of data is $-4.90~{}\pm~{}37.93^{\circ}$ (the residual of the EHTC ring is $-0.73~{}\pm~{}45.33^{\circ}$) and the residual of our map for the last two days of data is $1.79~{}\pm~{}42.11^{\circ}$ (the residual of the EHTC ring is $4.22~{}\pm~{}45.74^{\circ}$). Here, we integrated the 5-minute data points and calculated the closure phase of all triangles. For more information on closure phase residuals, see Section 4.3.3. The EHTC ring images for comparison were generated using the EHTC-DIFMAP pipeline. It should be emphasized that the final image clearly contains an unrealistic CLEAN component, but still shows the same level of closure phase residuals as the image of the EHTC ring. We also used this final image model to attempt self-calibration of the amplitude and phase for the solution, performed CLEAN, and obtained new images. However, the residuals of the closure phase in the new image were not improved. Therefore, we terminated the hybrid mapping without amplitude calibration. The images shown in Figure 4 are our best final images. The two upper panels in Figures 7 and the panels in Figure 10 are partial extracts from the final images. Figure 8 shows the full image of the last two days of data. 
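The closure-phase residual statistics quoted above can be reproduced conceptually with a few lines of code. The sketch below is our own minimal illustration, not the actual pipeline: the component list, (u, v) track, and noise level are made up, and `model_vis` is a hypothetical helper. It computes model visibilities of a set of point components by a direct Fourier transform, forms the closure phase of one station triangle, and evaluates the wrapped mean and standard deviation of the residuals against a stand-in "observed" data set.

```python
import numpy as np

UAS = np.pi / 180.0 / 3600.0e6                      # one microarcsecond in radians

def model_vis(components, u, v):
    """Visibilities of point components at spatial frequencies (u, v) in wavelengths.
    components: list of (flux_Jy, ra_offset_uas, dec_offset_uas)."""
    vis = np.zeros_like(u, dtype=complex)
    for flux, dra, ddec in components:
        vis += flux * np.exp(-2j * np.pi * (u * dra * UAS + v * ddec * UAS))
    return vis

def closure_phase_deg(v12, v23, v31):
    """Closure phase (deg) of a station triangle from its three baseline visibilities."""
    return np.degrees(np.angle(v12 * v23 * v31))

def wrapped_stats(obs_deg, mod_deg):
    """Mean and std of closure-phase residuals, wrapped to [-180, 180) deg."""
    res = (np.asarray(obs_deg) - np.asarray(mod_deg) + 180.0) % 360.0 - 180.0
    return res.mean(), res.std()

# Made-up two-component model (core + knot) and a made-up (u, v) track of one triangle.
components = [(0.06, 0.0, 0.0), (0.04, -22.0, 27.0)]          # (Jy, uas, uas)
t = np.linspace(0.0, 4.0, 49)                                  # hours
u12, v12 = 2.1e9 * np.cos(0.26 * t), 1.0e9 - 0.8e9 * np.sin(0.26 * t)
u23, v23 = 1.4e9 * np.sin(0.26 * t), 2.0e9 * np.cos(0.26 * t)
u31, v31 = -(u12 + u23), -(v12 + v23)                          # the three baselines sum to zero

mod_cp = closure_phase_deg(model_vis(components, u12, v12),
                           model_vis(components, u23, v23),
                           model_vis(components, u31, v31))
obs_cp = mod_cp + np.random.default_rng(1).normal(0.0, 40.0, mod_cp.shape)   # stand-in "data"
print("closure-phase residual mean/std (deg): %.1f / %.1f" % wrapped_stats(obs_cp, mod_cp))
```

In the actual comparison the "observed" closure phases come from the calibrated EHT data integrated over 5 minutes, as described above.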
Figure 4: The best images obtained from the analysis. The left panel shows the best image for the data of the first two days. The right panel shows the best image for the data of the last two days. In both images, the grouped- CLEAN components are convolved with a circular Gaussian restoring beam of HPBW $=200~{}\mu\rm as$. A logarithmic pseudo-color is used to express the large differences in brightness distribution (arbitrary unit). The eight red circles indicate the BOX area used. The bright spot seen on the right (west) edge is not real brightness distributions (see the content in this Section 3.4). ### 3.5 Solutions of self-calibration both amplitude and phase using a final image model Here we show solutions of the self-calibration for both amplitude and phase using one of the final images, although we did not apply it to the data calibration. The reason we dare to show the unused solutions here is that we believe this study provides insight into the quality of EHT public data and the reliability of our images. We performed our self-calibration using CALIB in the AIPS task in ”A&P” mode with the image (CLEAN components). Other parameters of CALIB are the same as in the first self-calibration (Table 1). Figures 5 and 6 show the self-calibration solutions (the total amount of solutions to be applied for data calibration). Figure 5 shows the phase solution. Compared to the initial phase solution shown in Figure 2, there is no significant change in the overall perspective. This can be attributed to the fact that the brightness distribution of the observed source is concentrated in the center and does not deviate significantly from the self-calibration solution assuming a single point source. However, there are offsets in the phase changes of JC, LM, PV, and SM stations. In addition, the phases of the two stations in Hawaii, JC and SM, show a larger phase dispersion than the solution of self-calibration assuming a point source on the 100 and 101 days’ data. Although small in comparison, the phases of JC and SM on 95 and 96 days’ data also show phase scatters on the same hours. The amplitude solution is shown in Figure 6. In general, errors in amplitude are due to noise in the atmosphere and in the receiving system. In addition, changes in aperture efficiency depending on the elevation angle of antenna often cause systematic errors in amplitude. These effects can be measured by an auxiliary method. For large aperture antennas, gain loss due to offset tracking of the target source from the narrow main beam angle may occur, which is difficult to calibrate. Furthermore, coherency (phase stability) loss is observed due to the variations in each station clock and atmospheric variations, which is more difficult to measure correctly than other error factors. Amplitude solutions for AA, AP, PV and SM stations are within $50\%$ fluctuation ($\sim 1.0\pm 0.5$). Such values are often found in amplitude solutions of most of the self-calibrations of VLBI data. On the other hand, JC and LM stations occasionally show large amplitude solutions reaching 10 and 30, respectively. The JCMT station shows a large amplitude value at $T~{}\sim 4.25~{}$hour on the last observation day (101 day). On the other hand, for the LMT station, the amplitude is large for several times as follows. (a) $T\sim 1~{}hour$ and $T\sim 2.6~{}hour$ on the first observing day (095 day) (b) $T\sim 1~{}hour$ on the second observing day (096 day) (c) $T\sim 2.25~{}hour$ and $T\sim 6.25~{}hour$ on the 3rd observing day (100 day). 
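The amplitude part of such an A&P solution can be illustrated with a simplified stand-in for what CALIB does: with an image model fixing the model amplitudes $|M_{ij}|$, the station gains follow from $|V_{ij}| = g_i g_j |M_{ij}|$, which is linear in the logarithms. The sketch below is our own toy version with made-up numbers (it is not the CALIB algorithm itself); it shows that a station with a genuinely large amplitude error, like the LMT and JCMT cases above, is recovered as a correspondingly large gain solution.

```python
import numpy as np

def amplitude_gains(nants, baselines, vis_amp, model_amp):
    """Least-squares station amplitude gains g_i from |V_ij| = g_i * g_j * |M_ij|,
    solved in the log domain (a simplified stand-in for an A&P amplitude solution)."""
    rows, rhs = [], []
    for (i, j), va, ma in zip(baselines, vis_amp, model_amp):
        row = np.zeros(nants)
        row[i] = row[j] = 1.0
        rows.append(row)
        rhs.append(np.log(va / ma))
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return np.exp(sol)

# Made-up 4-station example: true gains near unity except station 2, which has a
# genuine ~x3 amplitude error.
true_g = np.array([1.0, 0.9, 3.0, 1.1])
baselines = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
model_amp = np.array([1.0, 0.7, 0.5, 0.8, 0.6, 0.4])        # |M_ij| from the image model
vis_amp = np.array([true_g[i] * true_g[j] for i, j in baselines]) * model_amp

print(amplitude_gains(4, baselines, vis_amp, model_amp))      # recovers ~[1.0, 0.9, 3.0, 1.1]
```

The same machinery also explains the caution expressed in Section 3.3: if the model amplitudes $|M_{ij}|$ are wrong, the mismatch is absorbed into the gains, which can then become arbitrarily large or small.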
The EHTC did not show the overall calibrations to be applied, but did note the sudden large-amplitude errors at the LMT station (Figure 21 in Event Horizon Telescope collaboration (2019d)). These large amplitude solutions might be taken to imply that the resulting image is significantly wrong. For comparison, we examined the self-calibration solutions obtained with the EHTC ring image and found that they also show occasional large amplitudes, similar to those of our image (Section 5.6). Therefore, if such large amplitudes in self-calibration solutions are a mark against the quality of the resulting image, the results obtained by both the EHTC and this work would have to be rejected. The EHTC was itself aware that some stations require large amplitude corrections during the data analysis. The EHTC then analyzed the data of 3C 279, which was observed along with M 87, obtained consistent imaging results from all their imaging methods, and at the same time found that the amplitude corrections were consistent with those obtained for M 87 (Event Horizon Telescope collaboration (2019d)). The amplitude corrections they found are also consistent with those we show above. In other words, it is natural to consider that such large amplitude variations actually occurred. We add, however, that the fact that the EHTC obtained a non-ring structure from the 3C 279 data, and that the error corrections obtained at that time were consistent with those from the M 87 data, does not mean that the EHTC ring image is the correct image of M 87. The large amplitude solutions from the self-calibrations indicate that the "calibrated" data released by the EHTC are not of high quality with respect to amplitude.

Figure 5: Phase solutions obtained from self-calibration in A&P mode using CALIB in AIPS. As the image model, all of the grouped CLEAN components of the last two days' image were used. The red points show the solutions for the IF 1 data, and the blue points those for the IF 2 data.

Figure 6: Amplitude solutions obtained from self-calibration in A&P mode using CALIB in AIPS. As the image model, all of the grouped CLEAN components of the last two days' image were used. The red points show the solutions for the IF 1 data, and the blue points those for the IF 2 data.

## 4 Imaging results

In this section, we describe the brightness distribution obtained in our final image models. Unlike the EHTC, we could not detect any ring structure, but we found that the emission at 230 GHz comes not only from the narrow central region less than $128~\mu\rm as$ in diameter (the EHTC's FOV) but also from the jet region. We found a core-knot structure at the center and weak spot-like features along the M 87 jet, although their reliability must be discussed. In Section 4.1, we show the structure of the central region. In Section 4.2, features that appear to belong to the jet are presented. In Section 4.3, we investigate the reliability of our final image from three points of view: the attainable sensitivity (Section 4.3.1), the robustness of the main features (Section 4.3.2), and the self-consistency of our imaging (Section 4.3.3), which we compare with that of the EHTC.

### 4.1 The core

Figure 7: Images of the central core region: The top left panel is the image obtained from the data of the first two days. The upper right panel is that obtained from the data of the last two days.
In both images, the CLEAN components are grouped and convolved with a circular Gaussian restoring beam with an HPBW of $20~\mu\rm as$. The brightness distribution is shown in pseudo-color (arbitrary units). As shown in the upper left panel, three features, C (core), K (knot), and W (west component), were detected. The lower left panel shows the difference between the last image and the first image, i.e., the time variation of the brightness distribution over 5 days. The lower right panel shows the positions of the C, K, and W peaks; the red crosses are for the first two days and the blue crosses for the last two days. The size of the crosses is ten times the size of the positional error bars. The small squares indicate the locations of large increases in intensity.

| Peak position | First two days: R. A. ($\mu\rm as$) | $\delta$ ($\mu\rm as$) | Last two days: R. A. ($\mu\rm as$) | $\delta$ ($\mu\rm as$) |
|---|---|---|---|---|
| Core | $-1.8\pm 0.6$ | $-1.5\pm 0.6$ | $-2.3\pm 0.5$ | $-1.5\pm 0.5$ |
| Knot | $-22.2\pm 0.5$ | $27.0\pm 0.5$ | $-23.1\pm 0.3$ | $22.5\pm 0.3$ |
| West | $-78.1\pm 0.3$ | $-33.0\pm 0.3$ | $-77.2\pm 0.3$ | $-37.5\pm 0.3$ |

| Position change (last $-$ first) | $\Delta$ R. A. ($\mu\rm as$) | $\Delta\delta$ ($\mu\rm as$) |
|---|---|---|
| Core | $-0.5$ | $0.0$ |
| Knot | $-0.9$ | $-4.5$ |
| West | $0.9$ | $-4.5$ |

| Position of intensity increase | R. A. ($\mu\rm as$) | $\delta$ ($\mu\rm as$) |
|---|---|---|
| Core area a | $-22.4\pm 0.4$ | $15.0\pm 0.4$ |
| Knot area a | $-29.8\pm 0.7$ | $40.5\pm 0.7$ |
| Knot area b | $-40.8\pm 0.3$ | $33.0\pm 0.3$ |
| Knot area c | $-48.0\pm 0.4$ | $43.5\pm 0.4$ |
| West area a | $-74.7\pm 0.7$ | $-43.5\pm 0.7$ |
| West area b | $-79.9\pm 0.5$ | $-15.0\pm 0.5$ |
| West area c | $-99.8\pm 0.7$ | $-25.5\pm 0.7$ |

| | First two days | Last two days |
|---|---|---|
| Integrated intensity (mJy): Core | $55.6\pm 5.2$ | $66.1\pm 4.7$ |
| Knot | $33.5\pm 2.7$ | $44.9\pm 2.3$ |
| West | $22.5\pm 1.2$ | $30.2\pm 1.4$ |
| Brightness temperature (K): Core | $>1.0\times 10^{10}$ | $>1.2\times 10^{10}$ |
| Knot | $>6.0\times 10^{9}$ | $>8.1\times 10^{9}$ |
| West | $>4.0\times 10^{9}$ | $>5.4\times 10^{9}$ |
| Grouped CLEAN components (mJy): $F_{GCC}>0.1$ mJy | $707.4~(n=1151)$ | $1032.6~(n=1657)$ |
| All | $767.8~(n=2824)$ | $1154.6~(n=7844)$ |

Table 5: Properties of the main features: positional offsets from the map phase center in $\mu\rm as$, flux densities in mJy, and minimum brightness temperatures in Kelvin, calculated under the assumption that the emission area is $15~\mu\rm as$ in diameter. Positions of intensity increases in the features are also shown. At the bottom, the summed intensities and the numbers of grouped CLEAN components in the whole images are given.

In the central core region, we could not find the ring structure reported by the EHTC, but instead found a core-knot structure. Figure 7 shows the images of the central region ($300~\mu as$ square). As noted in Section 3.4, since the data calibration is not yet complete, our final images show sidelobe structure around the actual features; this is a common phenomenon in synthesis imaging with radio interferometers with only a few stations. The images in Figure 7 show that "the unresolved VLBI core" of M 87 has finally been resolved into substructure. The high spatial resolution of the EHT array clearly shows the presence of two bright peaks, i.e., the core and the knot. The core is indicated by "C" and the knot by "K" in the upper left panel of Figure 7. In addition, we found a feature, "W", located about $83~\mu\rm as$ west of the core C.
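The brightness-temperature lower limits in Table 5 follow from the Rayleigh–Jeans relation once a source solid angle is assumed. The sketch below is our own back-of-envelope check using a uniform disk of $15~\mu\rm as$ diameter; the solid-angle convention (disk rather than Gaussian) is our assumption, so the numbers agree with Table 5 only in order of magnitude.

```python
import math

def brightness_temperature_k(flux_jy, diameter_uas, freq_ghz=230.0):
    """Rayleigh-Jeans brightness temperature of a uniform disk of the given angular
    diameter (the disk solid-angle convention is an assumption of this sketch)."""
    k_b = 1.380649e-23                                    # J/K
    c = 2.99792458e8                                      # m/s
    theta = diameter_uas * math.pi / 180.0 / 3600.0e6     # rad
    omega = math.pi * (theta / 2.0) ** 2                  # sr
    return flux_jy * 1.0e-26 * c ** 2 / (2.0 * k_b * (freq_ghz * 1.0e9) ** 2 * omega)

# Integrated intensities of C, K, W for the first two days (Table 5), in Jy.
for name, flux in (("Core", 0.0556), ("Knot", 0.0335), ("West", 0.0225)):
    print(f"{name}: T_b ~ {brightness_temperature_k(flux, 15.0):.1e} K")
# Prints roughly 8e9, 5e9, 3e9 K, i.e. the same order as the lower limits in Table 5;
# the exact factors depend on the adopted source shape (disk vs. Gaussian).
```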
The flux densities from the obtained CLEAN components are $F_{C}\sim 60~{}mJy$ for the core (C), $F_{K}\sim 40~{}mJy$ for the knot (K), and $F_{W}\sim 25~{}mJy$ for the west feature (W). In this observation, the solid angle of features was not so clear. Here, we assume that the solid angle of the emission is $15~{}\mu\rm as$ in diameter, and calculate the brightness temperatures (lower limit). The average brightness of feature C is $T_{\rm b}=1.1~{}\times 10^{10}$ K. Feature K has a brightness of $T_{\rm b}=7.1~{}\times 10^{9}$ K. Feature W is $T_{\rm b}=4.7~{}\times 10^{9}$ K. Thus, we have detected central features with brightness temperatures higher than the EHTC ring (up to $T_{\rm b}\sim 6\times 10^{9}$ K). The solid angle assumed here is the maximum size of a single, smoothed object that the EHT array can detect in the 230 GHz (Appendix A). Therefore, the actual brightness temperature is likely to be much higher. If the solid angle of the emission is $5~{}\mu\rm as$ in diameter, the brightness temperature of core C reaches $10^{11}~{}K$. If this is the case, the brightness temperature is an order of magnitude higher than the previous measurement cases (Kim et al., 2018; Hada et al., 2016; Akiyama et al., 2015). This is mainly because the size of the emitting region has been identified as smaller due to the higher spatial resolution. High brightness temperatures were often detected from some active galactic nuclei (Horiuchi et al., 2004; Homan et al., 2011), and can be explained by the Doppler boosting effect of relativistic motion of jet approaching toward us. Previous observations found no high velocity movement in the M 87 central core. Therefore, the brightness temperatures are not due to such Doppler boosting effects. If they actually reflect the physical temperatures, they can be explained easily by the simple RIAF (Radiatively Inefficient Accretion Flow) disks (Kato, Fukue, & Mineshige , 2008; Nakamura et al., 1997). Our observational results are consistent with those of previous studies, supporting the existence of the RIAF disk in the M 87 core (Di Matteo et al., 2003). There is a clear difference between the two images observed over the five days. According to Event Horizon Telescope collaboration (2019c), they found a change in the closure phase between data sets from the first two days and the last two days. In other words, there was a clear time variation. However, the EHTC could not clearly identify from the structure of that ring where that change occurred (Event Horizon Telescope collaboration , 2019d). We identified the change in the closure phase as due to a change in the core knot structure (features C and K). In particular, the change in the position of feature K was also seen in the trial images during the hybrid mapping process. Assuming that features are single components, we fitted a Gaussian brightness distribution to each feature and measured the central position and displacement over five days. Relative to the position of feature C, the change in position of feature K is $\Delta\alpha=-0.4~{}\mu\rm as$, $\Delta\delta=-4.5~{}\mu\rm as$ in 5 days, and the proper motion is $0.33~{}\rm mas/yr~{}(v\sim 0.1~{}c)$. Feature K appears to be approaching feature C as if showing an inflow motion. However, if we look at the differences in the brightness distributions shown in the lower left of the Figure 7, we can see that the changes in the brightness distribution of feature K occur in three places, all at the north end of feature K. 
In the latter measurement of the position of feature K, feature K appears to be moving south because the brightness distribution of feature C affects the measurement; K is moving north on the line of $PA=-38^{\circ}$ as a whole. The position of feature C has hardly changed, but the location of ”a” where the intensity increased is at the northwest side of feature C. In other words, the structure of features C and K and their time variation can be interpreted as an outflow emanating in the direction of $PA=-38^{\circ}$ from the origin. There has never been a measurement of the motion of a knot so close to the central core. In comparison, it is difficult to interpret what feature W is. Three hypotheses are presented below. 1. 1. Gravitational lensing image. Feature W is morphologically similar to feature K, and the pattern of brightness variation is also similar. This can be attributed to the formation of the gravitational lens image of feature K due to the strong gravitational field of the SMBH in M 87. Assuming that the position of SMBH is approximately equal to the position of feature C (the distance between the core of M 87 and SMBH is $41~{}\mu\rm as$ or 6 $R_{\rm S}$; Hada et al. (2011)), feature W would be located (or at least projected) at 12 $R_{\rm S}$ from the SMBH. There is a possibility that the radio waves emitting from feature K to the far side, orbiting in the strong gravity field of the SMBH, and being changed the propagation direction, come towards us (Black Hole Echo; Saida (2017)). If feature W is such a lensing image caused by the strong gravity field of the SMBH, it should be the image of the backside of feature K, so it is most likely a mirror image of feature K. However, the shape of feature W does not look like such an image. Needless to say, there are many possibilities for a gravitational lensing due to a strong gravity field, so detailed calculations are required to deny it completely; however, the possibility that feature W is a gravitational lensing image is not very high. 2. 2. Another central black hole. Feature C is the primary SMBH of M 87, and feature W is a secondary SMBH orbiting the primary SMBH. If there is a binary SMBH in M 87, it can be permanently observed with the EHT array. Based on these observations, we calculated two possible orbits. 1. (a) The proper motion of feature W ($\mu=0.34~{}\rm mas/yr,~{}\rm v\sim 0.09~{}c$) is assumed to be due to a circular orbit motion, and its orbital radius is assumed to be the separation $R=83~{}\mu\rm as$ from feature C. In this case, the orbital period T is $\sim 1.5$ years. If the real radius is $R=1.4\times~{}10^{3}~{}au$, the mass of central object $M_{c}$ is only $1.2\times~{}10^{6}~{}M_{\odot}$. Since the estimated mass is too small as compared to those of the previous M 87 studies, this assumption must be rejected. 2. (b) We assume that the observed proper motions and structure change of feature W are only due to changes in surrounding matters, and that the measured proper motions of feature W have nothing to do with its orbital motion. In other words, we assume here that no change in the position of the center of gravity of feature W is observed. Also, it is assumed that feature C has an SMBH with a mass of $6\times 10^{9}~{}M_{\odot}$ and that the orbital radius of feature W is the distance between features C and W. The distance between them is $83~{}\mu\rm as$ ($1.4\times 10^{3}$ au or $11.9~{}R_{\rm S}$). 
It is consistent with the 86 GHz core size of $\sim 80~{}\mu\rm as$ at 86 GHz observed in 2014 (Hada et al., 2016), suggesting that the two features C and W are not transient. Also sinusoidal oscillations of the position angle of the jet were observed with a period of roughly 8 to 10 years (Walker et al., 2018). If the two features C, and W compose a binary of black holes, its orbital motion can cause such jet oscillations. Certainly, the apparent separation of approximately $1.4\times 10^{3}$ au is too short to explain the observed period of the jet oscillation. However, if the real distance is longer by a factor of about 3.42, which is the correction factor of the viewing angle of the jet axis from us ($\sim 17~{}^{\circ}$), the orbital period of the binary can be $\sim 10$ years. 3. 3. Unstable initial knot. Feature W is another knot moving toward a different direction. The jet of M 87 is known to have a wide opening angle at scales well below $1\rm~{}mas$ (Junor, Biretta & Livio, 1999). Furthermore, Walker et al. (2018) found evidence from 43 GHz observations that the initial opening angle $\theta_{app}$ is $\sim 70~{}^{\circ}$. We found the angle $\angle KCW$ is $70~{}^{\circ}$, and further that the line of the average jet axis ($PA=-72^{\circ}$) divides its angle almost evenly into $34^{\circ},~{}36^{\circ}$. Furthermore, the lines CK and CW extend in the directions of $PA=-38^{\circ}$ and $PA=252^{\circ}$, respectively. These directions are very similar to the ridges observed at 43 GHz from where Walker et al. (2018) measured the initial opening angle. We guess that not only feature K, feature W is also an initial knot that has just emerged from the core, and still, the shape is very unstable and shows changes significantly. Adopting the most conservative hypothesis, feature W, like feature K, can be understood to be a knot that represents the initial jet structure. As we will discuss in Section 4.3.2, the core-knot structure (features C and K) is robust in the sense that it can be obtained with different imaging parameters. On the other hand, the $40~{}\mu\rm as$ ring of the EHTC is sensitive to BOX parameters and can be easily destroyed, even if it can be created as shown in Section 5.7. Due to the robustness of the core-knot structure, we consider it to be a real structure. On the other hand, the feature W is sensitive to the BOX size, so its detection is not as reliable as it could be. However, the structure corresponding to feature W had already appeared in the first imaging results (Section 3.2.2). That is, feature c in Table 3 is in a similar position to feature W and also shows the largest asymmetry. Also, if we run the EHTC-DIFMAP pipeline without its BOX setting, an emission feature appears at the position close to feature W (lower right panel of Figure 24). These suggest that the feature W is also a real structure. ### 4.2 The jet Here, we show the overall brightness distribution (Section 4.2.1) and that of so-called the jet launching region, which is a few mas away from the core (Section 4.2.2). #### 4.2.1 The overall structure Figure 8: The overall structure of M 87 we obtained (image from the data of the last two days). The grey line shows the average direction of the jet axis ($PA=-72^{\circ}$ from Walker et al. (2018)). The restoring beam is $200~{}\mu\rm as$ circular Gaussian, which is much larger than the default beam in order to make the emission obvious. The logarithmic pseudo-color (arbitrary unit) is used to enhance the darker parts of the image. 
The image consists only of the grouped CLEAN components obtained. For comparison, the average image at 43 GHz from VLBA observations taken from Figure 1 in Walker et al. (2018) is shown in the inset. The bright spot seen on the right (west) edge is not real brightness distribution (see Section 3.4). Figure 9: Flux density distribution corrected for the band width smearing effects. Here we show the sum of the flux densities of the CLEAN components obtained in each small region. The horizontal axis shows the distance along the average direction of the jet axis ($PA=-72^{\circ}$ from Walker et al. (2018)) from the map center (near the peak of the core). The binning intervals are 0.25 mas. The vertical axis shows the sum of the flux densities in Jy (logarithmic scale). The green line shows the flux density distribution for the image from data of the first two days, and the red line shows the flux density distribution for the image from data of the last two days. The peaks seen on the right edge are not real flux densities (see Section 3.4). Figure 8 shows the overall structure of the M 87 we obtained. In order to make the emission obvious, we used a restoring beam of $200~{}\mu\rm as$ circular Gaussian, 10 times larger than the default beam size. As already mentioned, it is found that the emission at 230 GHz comes not only from the central point source, but also from other regions. The EHTC either assumed or concluded that there is no bright source outside the narrow region ($128~{}\mu\rm as$ in diameter) where the ring was found. However, we found that the emission was not from such a narrow range, but from a wide range of more than a few milli-arcseconds (mas). This is consistent with the results of VLBI observations at 43 GHz and 86 GHz. Our final image shows a similar structure to the average image in the 43 GHz band (the inset of Figure 8). There are two main similarities. First, as in the 43 GHz image, our image shows that the ”jet” has an extended structure leading to the core. Then, up to a few mas away from the core along the jet axis, both edges are bright, as in the previous observations. Second, the brightness distribution of the jet in the 230-GHz is consistent with the trend of those obtained from lower-frequency observations. The core is vastly brighter than the jet structure. Within the radius of 0.25 mas ($250~{}\mu\rm as$) from the center, 63 % (the first two days’ image) and 75 % (the last two days’ image) of all obtained flux densities are concentrated. However, features C, K, and W (several tens mJy at most, see Table 5) do not occupy them, rather the flux densities are distributed over a wider area. In contrast, the EHTC rings have a total of about 500 mJy that is contained entirely within a diameter of only a few tens of $\mu\rm as$. The results of this observation at 230 GHz show that the brightness in the jet region is orders of magnitude lower than that in the core region. In addition, the decay of the flux density along the jet is more rapid at 230 GHz than in the lower frequency observations. Compared to the peak luminosity of the core, the relative intensities are $6.6\times~{}10^{-2}$ at 0.25 mas from the core, $9.9\times~{}10^{-3}$ at 0.5 mas, and $2.3\times 10^{-3}$ at 1 mas (Figure 9). While, in the observation at 43 GHz, the decreases of intensity are $2.8\times~{}10^{-1}$ at 0.25 mas from the core peak, $8.5\times~{}10^{-2}$ at 0.5 mas, and $2.5\times~{}10^{-2}$ at 1 mas (from the upper panel of Figure 6 in Walker et al. (2018)). 
At a distance of 1 mas, the intensity of the jet structure is 2.5 $\%$ of the core peak at 43 GHz, whereas at 230 GHz it is only 0.2 $\%$ of the core peak; that is, the jet structure is much more strongly attenuated at 230 GHz. With respect to the structure of the brightness distribution, however, in particular the brightening of both edges, the trend agrees very well with previous results on the M 87 jet.

The total flux density measured by the EHTC (Event Horizon Telescope collaboration, 2019c) was 1.12 and 1.18 Jy for the first two days and the last two days, respectively. In contrast, the total flux density of the CLEAN components in our analysis is 767.8 and 1154.6 mJy, respectively; that is, there are missing flux densities of 353 and 25.4 mJy, respectively. (The sum of the flux densities we obtained as CLEAN components is nevertheless larger than that of the EHTC ring: we made the ring according to their open procedure and measured its flux density, $\sim 500~\rm mJy$.) The difference between the total flux density of our image and the single-dish flux density is most likely due to the presence of extended emission somewhere that the present EHT array cannot detect. As shown in Appendix A, the EHT array cannot detect a smoothed emission feature (such as a Gaussian brightness distribution) with a size larger than $30~\mu\rm as$.

#### 4.2.2 The jet launching region

In this section, we present the structure within a few mas of the core. We found emission belonging to the jet component that was not detected by the EHTC. In Figure 10, the brightness distribution in this region is represented in three ways. A logarithmic pseudo-color scale (arbitrary units) is used to represent the large range of the brightness distribution. The upper panels (a) and (b) are produced with a circular Gaussian restoring beam with an HPBW of $20~\mu\rm as$, which corresponds to the spatial resolution of the EHT array for the M 87 observations. The middle panels (c) and (d) are produced with a circular Gaussian restoring beam with an HPBW of $200~\mu\rm as$; to facilitate comparison with previous results, this beam size is close to the spatial resolution of previous lower-frequency observations (43 and 86 GHz). Panels (e) and (f) in the bottom row show the large-beam images overlaid with the 43 GHz average image from the VLBA (Walker et al., 2018). Note that the 43 GHz image is time-averaged over 17 years, so knot-like features have been averaged out. It can be seen that the brightness distribution at 230 GHz is consistent with that at 43 GHz. The left panels (a), (c), and (e) show images of the data from the first two days; panels (b), (d), and (f) on the right show images of the data from the last two days. Obviously, they differ from each other. However, the differences seen in the regions a few mas out cannot be attributed to intrinsic variability of the source during the five days; rather, they seem to depend mainly on the observational conditions. The emission areas in our 230 GHz results are consistent with those of the 43 GHz average image. They also show that both edges of the jet are brightened, a phenomenon that has been observed before. Based on our data analysis, the detection of emission within several mas of the core of M 87 appears to have been successful to some extent.

Figure 10: Images of the core and the jet launching region: Panels (a), (c), and (e) on the left show images of data from the first two days.
The right panels (b), (d), and (f) show the images of data from the last two days. A logarithmic pseudo-color scale (arbitrary units) is used to represent the large range of the brightness distribution. The upper panels (a) and (b) are produced with a circular Gaussian restoring beam with an HPBW of $20~\mu\rm as$. The middle panels (c) and (d) are produced with a circular Gaussian restoring beam with an HPBW of $200~\mu\rm as$. The contour levels in panels (c) and (d) are $10^{-5}$, $10^{-4}$, $10^{-3}$, $10^{-2}$, and $10^{-1}$ of the peak brightness. The lower two panels, (e) and (f), show the large-beam images overlaid with the VLBA average image at 43 GHz. The contour lines show the VLBA average image at 43 GHz taken from Figure 3 in Walker et al. (2018); the contour levels are -0.3, 0.3, 0.6, 0.85, 1.2, and 1.7 mJy beam$^{-1}$, and their restoring beam is $0.43\times$ 0.21 mas with $PA=-16^{\circ}$. The peak positions of the two images are used for their alignment.

### 4.3 Reliability of our final images

As mentioned in Section 3, our calibration was limited to self-calibration because the public EHTC data do not contain raw data. We also had to give up on amplitude self-calibration because the closure phase residuals were not reduced compared to the case in which only phase calibration was performed. Therefore, the calibration is not yet fully complete. As indicators of reliability, we describe the properties of the final images from three aspects: the detection limit (Section 4.3.1), the robustness (Section 4.3.2), and the self-consistency of our imaging (Section 4.3.3), where our images show better self-consistency than those of the EHTC.

#### 4.3.1 From sensitivity

Figure 11: Distributions of the grouped CLEAN components with flux densities larger than 0.1 mJy. Red dots are from the image of the first two days, blue dots from the image of the last two days. The sloped line indicates the average direction of the jet axis ($PA=-72^{\circ}$ from Walker et al. (2018)).

Event Horizon Telescope collaboration (2019c) shows that the typical sensitivity of a baseline connected to ALMA is $\sim 1~mJy$. We estimate that this sensitivity corresponds to an integration of about 5 minutes. For an on-source time of 2 days, the attainable sensitivity approaches $0.1~mJy$ (ALMA-LMT baseline, $S/N~=~4$, assuming a point source). It is difficult to estimate the practical sensitivity of the synthesized image of an interferometer composed of antennas with different performances, such as the EHT array. However, the image sensitivity is unlikely to be worse than the single-baseline sensitivity noted above, since the image combines all baselines. Here, we therefore consider $0.1~mJy$ to be a reliable detection limit for our final images. Figure 11 shows the distribution of the grouped CLEAN components with flux densities larger than $0.1~mJy$. Almost all of the components are concentrated within a few mas of the core. (The remaining components are located in a false bright spot created outside the range of this figure, about $20~mas$ west of the center.) The image from the core out to a few mas along the average jet axis therefore appears reliable in terms of the detection limit. A large number of grouped CLEAN components with flux densities larger than $0.1~mJy$ are found in our final images.
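The detection-limit estimate above is simply the usual thermal-noise (radiometer) scaling. The sketch below makes the arithmetic explicit; the reference sensitivity, the effective on-source time, and the S/N threshold are all assumptions of this sketch, so the result should be read as an order-of-magnitude figure rather than a precise limit.

```python
import math

def detection_limit_mjy(sigma_ref_mjy=1.0, t_ref_s=300.0, t_onsource_s=2 * 24 * 3600.0, snr=4.0):
    """Point-source detection limit after coherent integration, scaling an assumed
    ~1 mJy / 5 min baseline sensitivity by 1/sqrt(t) and applying an S/N threshold."""
    sigma = sigma_ref_mjy / math.sqrt(t_onsource_s / t_ref_s)
    return snr * sigma

print(f"{detection_limit_mjy():.2f} mJy")   # ~0.17 mJy for a nominal 2-day on-source time
```

The figure is of the same order as the 0.1 mJy adopted above as the detection limit.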
The number of grouped CLEAN components with flux densities larger than $0.1~mJy$ is 1151 in the image of the first two days (with a summed flux density of $707.4~mJy$) and 1657 in the image of the last two days (with a summed flux density of $1032.6~mJy$) (Table 5).

#### 4.3.2 The robustness of our final image

In this section, we discuss another property of our final images: the robustness of the image structure. If the data are not yet completely calibrated, the BOX technique is effective. Like the EHTC, we used the BOX technique to limit the imaging area (FOV). This technique has the potential to produce good images even if the calibration is incomplete; on the other hand, it may create structures that do not actually exist. In fact, the bright spot on the right side of our final image (Figure 8) is such an example. Care must therefore be taken when using the BOX technique, because a false structure can be created inside the BOX area while real structure outside the BOX area is removed from the image. We examined how the image is affected when the BOX parameters are changed, i.e., we investigated the stability of the image by comparing the images obtained with different BOX sizes. The panels in Figure 12 show four cases. The upper left panel (a) is the image without a BOX (the FOV is $24.576~\rm mas$ square). The upper right panel (b) shows the image with the same 8 BOXes that we used to obtain our final images. The lower left panel (c) is the image with a small BOX (a circle with diameter $D=5~\rm mas$). The lower right panel (d) is the image with a very narrow BOX (a circle with diameter $D=128~\mu\rm as$, corresponding to the FOV used by the EHTC). These four CLEAN images were produced using the data of all four days. In all panels, emission can be seen at the positions of features C (core) and K (knot). On the other hand, feature W disappears when the BOX setting is omitted (the no-BOX case). In the case of the EHTC FOV, no emission is seen at the position of feature W because that position lies outside the BOX. Without the BOX setting, the S/N of the image is degraded: from the comparison between panel (a) and the other panels, we can see that the BOX setting compensates for the incomplete calibration and improves the image quality. Thus, the presence or absence of the BOX setting has a clear effect on the image quality. Another noteworthy point is that in the case of the very narrow BOX (panel d), several new bright spots appear inside the BOX, and some of them are located at the BOX boundaries. In such a case, real brightness distributions may exist outside the BOX. Since feature W disappears in the CLEAN image without the BOX setting, it is considered less reliable than features C and K; as mentioned at the end of Section 4.1, however, there are other reasons to consider feature W a real feature.

Figure 12: Comparison of images obtained by changing the size of the BOX. Panel (a) is the image without a BOX (the FOV is $24.576~\rm mas$ square). Panel (b) shows the image with the same 8 BOXes that we used to obtain our final images. Panel (c) is the image with a small BOX (a circle with diameter $D=5~\rm mas$). Panel (d) is the image with a very narrow BOX (a circle with diameter $D=128~\mu\rm as$, corresponding to the FOV used by the EHTC). These four CLEAN images were produced using the data of all four days.
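The behaviour noted for panel (d), where bright spots appear pinned to the BOX boundaries, can be reproduced with a toy example. The sketch below is our own illustration with made-up numbers (`masked_peak` is a hypothetical helper, not an AIPS task); it shows that when real emission lies outside a too-narrow CLEAN box, the peak selected inside the box lands on the box edge nearest to it.

```python
import numpy as np

n = 200
yy, xx = np.mgrid[:n, :n]
c = n // 2

# Hypothetical residual map: a single source 40 pixels east of the map centre.
residual = np.exp(-((xx - (c + 40)) ** 2 + (yy - c) ** 2) / (2.0 * 6.0 ** 2))

def masked_peak(img, radius):
    """Peak position inside a circular CLEAN box of the given radius around the centre."""
    mask = (xx - c) ** 2 + (yy - c) ** 2 <= radius ** 2
    boxed = np.where(mask, img, -np.inf)
    py, px = np.unravel_index(np.argmax(boxed), img.shape)
    return px - c, py - c

print("wide box  :", masked_peak(residual, 80))   # finds the true offset (+40, 0)
print("narrow box:", masked_peak(residual, 15))   # peak pinned to the box edge (+15, 0)
```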
#### 4.3.3 Self-consistency of our imaging compared to that of the EHTC images

At the end of this section on image reliability, we present the degree of matching between the visibilities and the image model, and compare the results with those of the EHTC ring.

1. Relations between the visibility amplitude and u-v distance (projected baseline length): The visibility amplitudes obtained by inverse Fourier transforming the image model are compared with those of the observed visibility data. Figures 13 and 14 correspond to Figure 12 in Event Horizon Telescope collaboration (2019d). This kind of visibility comparison is often performed to check the reliability of an image. Here, however, the observed visibility data are calibrated with self-calibration solutions derived using the image model, so it is important to note that the amplitudes of the observed visibility data and those from the image model are no longer independent of each other. What can safely be assessed from this comparison is the internal consistency of the imaging and calibration process. Figure 13 shows the data from the first two days, and Figure 14 the data from the last two days. The top row of each shows the variation of the visibility amplitude with projected baseline length; the red dots are those of the image model. The middle and bottom panels show the normalized residual amplitudes between the image model and the calibrated observed data. The plotted points are the calibrated raw data without time integration, so the scatter is much larger than in their Figure 12 (Event Horizon Telescope collaboration (2019d)), where time-integrated points are plotted. The average and standard deviation of the normalized residuals of our final image are much smaller than those of any of the EHTC ring images in Figure 15. As an example, we quote the normalized residual values for a $t=180~sec$ integration. For the data of the first two days, our image gives $\rm NR_{ours}=0.030\pm 0.539$, while the EHTC images give $\rm NR_{EHTC}=0.148\pm 0.933$. For the data of the last two days, our image gives $\rm NR_{ours}=0.127\pm 1.259$, while the EHTC images give $\rm NR_{EHTC}=0.589\pm 2.370$. For the EHTC, we used the simple averages of these values over the four EHTC images. One point of interest is the large amplitude discrepancy of the EHTC ring images at the longest baselines, beyond $8\times 10^{9}\lambda$: it is three times larger than that of our final images. Since the EHTC ring images are very compact, if the images were really correct, the amplitude residuals at the longest baselines should at least be small. Another point is the amplitudes at the very short baselines, near $\rm zero~\lambda$. These contain the components of extended structure that are resolved out at the high spatial resolution of the EHT, so it is not surprising if they do not match. Our images reproduce the amplitudes of the very short baselines well, but the differences are more significant for the EHTC rings: in our case the normalized residuals are at most 4, whereas for the EHTC rings they are distributed widely over the range 0 to 15, which is not surprising since the EHTC rings are compact and have no extended components. However, in Figure 12 of Event Horizon Telescope collaboration (2019d), the maximum is 4, as if the result showed good self-consistency.
The EHTC figure likewise shows small normalized residuals at the longest baselines; this is not consistent with our own analysis. Perhaps a different integration time of the data causes this apparent discrepancy (there is no explanation of the integration time of the data points in Figure 12 of Event Horizon Telescope collaboration (2019d)). Since the scatter of the data points is affected by thermal noise, its value changes with the integration time of the data. We therefore examined the normalized residuals as a function of integration time. Figure 15 shows the average and standard deviation of the resulting normalized residuals. At any integration time, our final image always shows smaller values than the EHTC ring images. The averages for 10-second integrations and for integrations over 180 seconds differ by a factor of 3, which explains the discrepancy noted above. The diagrams of visibility amplitude versus u-v distance thus show that our final images, for both the first and the last two days, exhibit better internal consistency of the imaging and calibration processing than the EHTC ring images. The diagrams indicate that our images, not those of the EHTC, are supported by the data.

2. Closure phase variations: Following Figure 13 of Event Horizon Telescope collaboration (2019d), we show the closure phases of the observed data and those derived from the image models for several triangles in Figure 16. We added the closure phases of ALMA-LMT-SMA and LMT-PV-SMA to the three triangles (ALMA-LMT-PV, ALMA-SMT-LMT, SMT-LMT-SMA) shown by the EHTC. The closure phase is a quantity that is free from station-based systematic phase errors and reflects the observed source structure. All panels in Figure 16 show large phase variations, which correspond not to time variation in the structure of the observed source but to the time variation of the shape of the triangle formed by the three stations as seen from the source. The green dots are the closure phases corresponding to the EHTC ring image, and the red dots correspond to our image. The dots of our image (red dots) appear to follow the observed data more closely than those of the EHTC ring image. Our image is more complex than the EHTC ring, resulting in small short-term closure phase variations. The three panels from the top right toward the bottom correspond to the panels shown in Figure 13 of Event Horizon Telescope collaboration (2019d). For the two triangles ALMA-LMT-PV and SMT-LMT-SMA, our results are consistent with those of the EHTC. However, our results for the closure phase of the ALMA-SMT-LMT triangle differ from theirs: in our case, the closure phase increases from $+25^{\circ}$ to $+85^{\circ}$, while that of the EHTC decreases from $-25^{\circ}$ to $-80^{\circ}$; both the first and last values and the sense of the change are of opposite sign. We examined all triangles, but none showed a variation identical to that shown by the EHTC for the ALMA-SMT-LMT triangle. Our closure phase values are from the AIPS task CLPLT; there are no significant closure phase discrepancies between the real and model data, so there appears to be nothing wrong with the CLPLT calculations. In our analysis, there is no clear difference in closure phase matching between our images and the EHTC rings.
A notable difference occurs for the LMT-PV-SMA triangle, where the closure phase of the EHTC rings falls outside the $\pm 3~\sigma$ error bars, whereas that of our final images stays within them. The closure phase values also change with the integration time of the data; however, even when the integration time is varied, the residuals of neither image become overwhelmingly small. Figure 17 shows the statistics of the closure phase residuals for all triangles. As far as closure phase residuals are concerned, there is no significant difference between our images and the EHTC rings; our core-knot image shows residuals of the same magnitude as the EHTC ring image. As an example, the standard deviations of the closure phase residuals at $t=180$ s integration are, for the first two days, $\sigma_{ours}=40.5^{\circ}$ versus $\sigma_{EHTC}=38.5^{\circ}$, and, for the last two days, $\sigma_{ours}=43.2^{\circ}$ versus $\sigma_{EHTC}=43.7^{\circ}$. Thus, with respect to closure phase residuals there is no significant difference between our images and the EHTC rings: if the EHTC ring image is claimed to be correct on the basis of its closure phase residuals, then our images must be deemed correct by the same argument. (A correct image satisfies the closure phases of the observed data, but the converse does not necessarily hold; there are numerous image models that yield small closure phase residuals.) If these residuals were due to thermal noise alone, they should decrease in inverse proportion to the square root of the integration time $T$, but as Figure 17 shows, they do not. This means that both our images and the EHTC's still differ from the true image. A short computational sketch of the two diagnostics used in this comparison is given below.
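The sketch below is a minimal Python illustration, not the code used in our analysis, of the two quantities compared above: normalized amplitude residuals and closure phases. The arrays are random placeholders standing in for real calibrated data and model visibilities.

```python
import numpy as np

def normalized_amplitude_residuals(v_obs, v_model, sigma):
    """Normalized residual of visibility amplitudes: (|V_obs| - |V_model|) / sigma."""
    return (np.abs(v_obs) - np.abs(v_model)) / sigma

def closure_phase_deg(v_12, v_23, v_31):
    """Closure phase (degrees) on a station triangle: arg(V_12 * V_23 * V_31).
    Station-based phase errors cancel in this product."""
    return np.degrees(np.angle(v_12 * v_23 * v_31))

# toy usage with placeholder complex visibilities
rng = np.random.default_rng(0)
v_obs = rng.normal(size=100) + 1j * rng.normal(size=100)                  # calibrated data (placeholder)
v_mod = v_obs + 0.1 * (rng.normal(size=100) + 1j * rng.normal(size=100))  # model prediction (placeholder)
sigma = np.full(100, 0.1)                                                 # assumed thermal-noise level
nr = normalized_amplitude_residuals(v_obs, v_mod, sigma)
print(f"normalized residuals: mean = {nr.mean():.3f}, std = {nr.std():.3f}")
print("closure phase of first sample (deg):",
      closure_phase_deg(v_obs[0], v_obs[1], v_obs[2]))
```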
Figure 13: Relation between visibility amplitude and u-v distance for the first two days of data. The left panels are for our image, and the right panels are for the EHTC ring image model obtained with the EHTC DIFMAP pipeline using the 096H data (April 6). Every dot is a raw visibility point for a 10-second integration. The top panels plot visibility amplitude versus u-v distance; the black dots show the data calibrated with self-calibration solutions derived from the image model, and the red dots show the amplitudes from the Fourier transform of the image model. The middle and bottom panels show normalized amplitude residuals: the middle panels show all data points (the vertical axis scale is very large because of data points with very large values, indicated by black lines), and the bottom panels show a magnified view of the range of normalized residuals from -5 to +15. In all cases, the minimum value of the normalized residuals is greater than -1.

Figure 14: Same as Figure 13, but for the last two days of data; the right panels are for the EHTC ring image model obtained with the EHTC DIFMAP pipeline using the 101H data (April 11).

Figure 15: Statistics of the normalized residuals. The upper panels show the averages of the normalized residuals and the lower panels the standard deviations. The left panels are for the first two days of data and the right panels for the last two days. The black line shows our final image; the other colored lines show the EHTC images.

Figure 16: Closure phase variations for five triangles (ALMA-LMT-PV, ALMA-SMT-LMT, SMT-LMT-SMA, ALMA-LMT-SMA, and LMT-PV-SMA). Closure phases of the observed data are shown by black dots with $\pm 3~\sigma$ error bars, obtained from 5-minute integrations. Red dots show those from our image models; green dots show those from the EHTC ring image models (the EHTC 096H image for the first two days' data, the EHTC 101H image for the last two days').

Figure 17: Standard deviations of the closure phase residuals over all triangles. The EHTC image for the first two days is shown by the black line and that for the last two days by the cyan line; our final images are shown by the red line (first days) and the blue line (last days). The green line shows the behavior expected if the residuals were due to thermal noise.

## 5 Why did the EHTC find their $\sim 40~\mu\rm as$ ring?

In Sections 3 and 4 we reconstructed the image from the EHT public data using the hybrid mapping process. Our final images contain a core-knot structure at the center and features along the jet axis further out, consistent with images obtained from 43 GHz and 86 GHz observations. The EHTC imaging teams, by contrast, all obtained similar $\sim 40~\mu\rm as$ rings with their three methods, and no jet structure. In this section we present evidence that the EHTC ring is an artifact. The essential reason why all the EHTC imaging teams obtained the ring image is the limited u-v coverage of the EHT array for M 87, that is, the data sampling bias, even though the EHTC realized 230 GHz VLBI observations on a scale never accomplished before. In addition, the very narrow FOV settings adopted by the EHTC strongly help to turn this sampling bias into a $\sim 40~\mu\rm as$ ring shape. First, in Section 5.1, we examine the nature of the EHT u-v coverage for the M 87 observations: the spatial Fourier components corresponding to fringe spacings of $\sim 40~\mu\rm as$ are lacking. Second, in Section 5.2, we discuss how the dirty beam (PSF) of the M 87 EHT observations is affected by this lack of spatial Fourier components. Third, in Section 5.3, we show that, for the EHTC data, the dirty map is strongly affected by the PSF shape. Fourth, in Section 5.4, we show that $\sim 40~\mu\rm as$ rings can be created even from simulated visibility data of a point source; the u-v coverage of the EHT array for M 87 can thus produce a $\sim 40~\mu\rm as$ ring regardless of the real structure of the observed object, so the EHTC result is indistinguishable from an artifact. Fifth, in Section 5.5, we investigate one of their public imaging procedures. The EHTC used three methods for their imaging.
We investigated their DIFMAP (Shepherd, 1997) pipeline, which is the closest to traditional VLBI procedures, and found a questionable step: in the EHTC-DIFMAP pipeline, a very narrow FOV is imposed through the BOX technique. When we ran the EHTC-DIFMAP pipeline without the BOX, the $40~\mu\rm as$ ring disappeared and a core-knot structure appeared instead. We also checked the pipeline output for simulated data made from model images of other shapes and found that the EHTC-DIFMAP pipeline did not reproduce the input model images correctly. In other words, the EHTC-DIFMAP pipeline does not demonstrate from the data that the EHTC ring image is correct. In addition, we estimate the amount of calibration that the EHTC applied during their imaging process (with the DIFMAP method) and compare it with what we applied in our analysis; a large amount of additional "calibration" is required to make the $40~\mu\rm as$ ring (Section 5.6). We also examine the robustness of the EHTC $\sim 40~\mu\rm as$ ring structure in Section 5.7: the structure is very sensitive to the imaging parameters, and if the BOX is enlarged, the ring image changes significantly. Despite their "isolated image analysis" and surveys involving large-scale simulations, the EHTC obtained artificial ring images. Finally, in Section 5.8 we explain why their nominally objective survey produced artifacts. They determined their optimal imaging parameters through large-scale simulations, but they did not take into account the sampling bias that tends to produce $\sim 40~\mu\rm as$ structure. The EHTC focused only on the reproducibility of the input image models and not on the simultaneous reproducibility of the input error models. It cannot be ruled out that their optimal parameters created the EHTC $\sim 40~\mu\rm as$ ring not by performing proper calibration and imaging but by enhancing the sampling-bias effect. The facts presented in this section are strong evidence that the EHTC $\sim 40~\mu\rm as$ ring image is a forcibly created artifact. In Section 5.9 we summarize the reasons why the EHTC obtained the artifact image unintentionally.

### 5.1 u-v coverage of the EHT array for M 87

Here we investigate the u-v sampling of the EHT array for the M 87 observations. The u-v coverage itself is shown in EHTC paper I (Event Horizon Telescope collaboration, 2019a) and looks good for an interferometer of effectively 5 stations and 10 baselines. We examined the number of data samples from a different point of view. Figure 18 shows the distribution of the spatial Fourier components (fringe spacings) sampled by the EHT for M 87 during the four observing days: the number of sampled data points is plotted against fringe spacing in $\mu\rm as$. There are no samples in the ranges $d=26-28~\mu\rm as$, $d=44-46~\mu\rm as$, and $d=95-100~\mu\rm as$ (fringe spacings larger than $100~\mu\rm as$ are omitted from the histogram, since we are interested in the samplings contributing to very high spatial resolutions below $100~\mu\rm as$). The number of samples at the size of the EHTC ring ($d=42\pm 3~\mu\rm as$) is quite small, and between $d=44-46~\mu\rm as$ there are none at all. This scarcity of samples around $42~\mu\rm as$ fringe spacings must affect the imaging performance. In the following subsections, we investigate how this lack of spatial Fourier components can affect the imaging result.
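The fringe-spacing census of Figure 18 can be reproduced directly from the sampled (u, v) points. The sketch below is a minimal illustration, not our analysis code; the u-v values are placeholders standing in for those read from the public FITS data.

```python
import numpy as np

RAD_TO_UAS = 180.0 / np.pi * 3600.0 * 1e6   # ~2.0626e11 micro-arcseconds per radian

def fringe_spacings_uas(u, v):
    """Fringe spacing = 1 / (projected baseline length in wavelengths), in uas."""
    return RAD_TO_UAS / np.hypot(u, v)

# placeholder u-v points in wavelengths; real values would be read from the uvfits data
u = np.array([2.0e9, 5.0e9, 8.0e9])
v = np.array([1.0e9, 3.0e9, 0.5e9])
spacings = fringe_spacings_uas(u, v)

bins = np.arange(0.0, 101.0, 1.0)           # spacings above 100 uas omitted, as in Figure 18
counts, edges = np.histogram(spacings[spacings <= 100.0], bins=bins)
for lo, n in zip(edges[:-1], counts):
    if n:
        print(f"{lo:5.1f}-{lo + 1:5.1f} uas: {n} sample(s)")
```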
Figure 18: Distribution of the sampled data (all data of the four days from all baselines). The x-axis shows the fringe spacing of the sampled visibility data in $\mu\rm as$; the y-axis shows the number of sampled data points. Samples with fringe spacings larger than $100~\mu\rm as$ are omitted. The red line segment indicates the range of the ring diameter measured by the EHTC ($d=42\pm 3~\mu\rm as$).

### 5.2 Substructures in the dirty beam

The EHTC give no description of the dirty beam structure, so we examine the dirty beam calculated from the u-v coverage. "Dirty beam" is the term used in radio interferometry for what is called the PSF (point spread function) in optics: it is the diffraction image of a single point source when the data calibration is perfect. If all spatial Fourier components could be obtained, the dirty beam would be a two-dimensional $\delta$-function; in practice only a limited sample of spatial Fourier components is available, and the dirty beam therefore has a complex shape. Figure 19 shows the dirty beam of the EHT array for the first-day observation of M 87. The FWHM of the main beam is approximately $20~\mu\rm as$. A point-symmetric pattern surrounds the main beam, including peaks lower than the main beam, usually called "sidelobes" in radio astronomy. The distances between the main peak and the first sidelobe peaks are about $45~\mu\rm as$. (The first sidelobe levels reach more than 70% of the height of the main beam, and the sidelobe level remains extremely high when the u-v weighting is changed: both natural and uniform weighting give first sidelobe levels above 60% of the main beam; see Appendix B.) This separation corresponds to the radial range in which the spatial Fourier components are missing, and it is very close to the diameter of the EHTC ring ($42\pm 3~\mu\rm as$). It is therefore important to clarify whether or not this PSF structure has affected the ring image. In addition, Appendix B shows the PSF shapes for different u-v weightings; even there, the separations between the main beam and the adjacent sidelobes change little, and substructures with $\sim 40~\mu\rm as$ spacings are again present.

Figure 19: Point spread function (dirty beam) of the EHT for the first-day observation of M 87. The contour levels are at every 10% of the peak from -100% to 100%; positive levels are shown by white lines, negative levels by black dotted lines, and zero levels by white dotted lines. The grayscale ranges from the minimum to the maximum brightness of the dirty beam image. The first sidelobe levels reach more than 70% of the height of the main beam. Two red arrows with a length of $45~\mu\rm as$ are drawn between the main beam and the first sidelobes. For the dirty beam image, we set ROBUST=0 in IMAGR (its default setting).
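How such a dirty beam follows from the u-v coverage alone can be sketched as follows. This is a minimal illustration under simplifying assumptions (unit weights, a tiny placeholder set of u-v points, arbitrary grid choices); the beam in Figure 19 was computed with IMAGR.

```python
import numpy as np

RAD_PER_UAS = np.pi / 180.0 / 3600.0 / 1e6

def dirty_beam(u, v, npix=256, cell_uas=2.0):
    """Dirty beam (PSF): inverse FFT of unit weights gridded at the sampled (u, v) cells."""
    cell_rad = cell_uas * RAD_PER_UAS
    du = 1.0 / (npix * cell_rad)                        # u-v cell size in wavelengths
    grid = np.zeros((npix, npix), dtype=complex)
    for uu, vv in zip(np.r_[u, -u], np.r_[v, -v]):      # include Hermitian-conjugate points
        i = int(round(uu / du)) % npix
        j = int(round(vv / du)) % npix
        grid[j, i] += 1.0                               # natural (unit) weighting
    beam = np.fft.fftshift(np.fft.ifft2(grid)).real
    return beam / beam.max()                            # peak normalized to 1

# placeholder coverage; the real one comes from the EHT data
u = np.array([2.0e9, 5.0e9, 8.0e9])
v = np.array([1.0e9, 3.0e9, 0.5e9])
psf = dirty_beam(u, v)
print("beam peak at pixel:", np.unravel_index(psf.argmax(), psf.shape))
```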
### 5.3 The relation between the PSF-convolved dirty map and the ring structure

Here we show what happens if the dirty map is used carelessly in the hybrid mapping process. A dirty map is the image created by simply inverse Fourier transforming the sampled spatial Fourier components; it is therefore influenced by the data sampling bias, i.e., the structure of the PSF is convolved into the dirty map. Furthermore, if the data calibration is insufficient, that too is reflected in the dirty map, and the structure seen there can be far from the actual image. It is therefore dangerous to perform self-calibration using the dirty map as the image model. For multi-element radio interferometers such as ALMA or the VLA, the dirty beams have sharp main beams and very low sidelobes, and only in such cases is it not too dangerous to estimate the true image from the dirty map. In VLBI observations the dirty beam is comparatively dirty, so one does not usually estimate the true image from the dirty map or use it as the model image for self-calibration. We nevertheless try exactly that here and show what happens. The left-hand panels of Figure 20 show the dirty maps from the first-day EHT data. To obtain these maps, we applied phase-only calibration from self-calibration using a point source as the image model. A ring-like structure is visible at the center, together with many features that look like ghosts of this central ring-like structure. This is not a blurred intrinsic image of M 87, but a strong reflection of the substructure in the dirty beam (PSF) of the EHT array. Figure 21 compares the dirty map with the dirty beam. Not surprisingly, they agree rather well, which means the central ring-like structure seen in the dirty map of Figure 20 is just a reflection of the shape of the dirty beam and not a physical reality. If one believed beforehand that there should be a ring, it would be very natural to select the part of the dirty map that looks like a ring. What happens if we do so? To answer this question, we calibrated phase and amplitude using this central ring-like structure as the image model, computing the amplitude and phase corrections by self-calibration. After applying the calibration, we made an image with the CLEAN algorithm under the assumption that the source is single and compact; the CLEAN subtraction area was limited to a circle of $30~\mu\rm as$ radius centered at ($+2~\mu\rm as$, $+22~\mu\rm as$) from the map center, the BOX setting used in the EHTC-DIFMAP pipeline (Section 5.5). The resulting ring image is shown in the right panel of Figure 20; the size of the ring is close to that of the EHTC ring. We also emphasize that a narrow BOX setup is needed to create a ring structure: without a narrow BOX, a ring structure cannot be created, and if the position or size of the BOX is changed, the ring deforms. With a BOX offset of $22~\mu\rm as$ from the center, we obtain a ring very similar to the EHTC ring. In other words, the shape depends strongly on the location of the imaging region. As we will show in Section 5.4, the EHTC BOX setting limits the FOV so narrowly that it can produce the EHTC $\sim 40~\mu\rm as$ ring shape even from simulated data that contain no ring. Close inspection shows that the ring-like structure in the dirty map is offset from the center. Since the PSF structure is always centro-symmetric, one might suppose that the offset ring-like structure is not due to the PSF but really exists at the offset position. However, Figure 3 shows that this is not the case: it shows the dirty beam structure after deconvolution of only the brightest point at the center of the dirty map, and no ring-like structure is present there.
We can see that the ring-like structure of the dirty map (Figure 20) is created by convolution of the dirty beam with the brightest point at the center. The ring-like structure is offset because the true source image inherent in the data is indeed not centro-symmetric; but it is not an offset ring, it is the core-knot structure shown in Figure 7.

Figure 20: Ring image obtained using the dirty map. The left panels show the dirty map of the first-day data from the EHT observations. The right panels show the image resulting from self-calibration using the central part of the dirty map as the image model. The upper panels show a larger area of $500~\mu\rm as$ width; the lower panels show enlargements of the central parts of the images.

Figure 21: Comparison of the dirty beam (PSF) and the dirty map. The left panel shows the dirty beam (PSF) and the center panel the dirty map. The right panel shows the two images shifted so that they overlap well: the yellow contours show the dirty beam and the grayscale shows the dirty map. The dirty beam here is the same as in Figure 19, but with fewer contours; the dirty map is the same as in the left panels of Figure 20, but shown with inverted intensity.

### 5.4 Rings from simulated data of different structures

In this section we demonstrate that, with the EHT u-v coverage for M 87, one can "observe" a ring even if the physical source has a different structure. As the physical source structure we take a single point with a flux density of 1 Jy at the center. We generated simulated visibility data by Fourier transforming the single-point image (using UVMOD, a task in AIPS) onto exactly the same u-v coverage as the EHT M 87 observations. For noise we consider two cases: one with relatively large noise (case 1) and one with no noise (case 2). In case 1 we added thermal noise with a level proportional to the weight of each data point noted in the original public FITS data; the signal-to-noise ratio (S/N) is 0.01 on average. We then performed calibration and imaging experiments: we obtained solutions from self-calibration with incorrect image models that differ from the true source structure, applied those solutions to the data, and inspected what kinds of images result from CLEAN imaging. For the self-calibration we assumed two incorrect models. One is the EHTC ring image (model A); the other is a pair of points separated by $45~\mu\rm as$ (model B), consisting of two 0.5 Jy points at ($0~\mu\rm as$, $0~\mu\rm as$) and ($0~\mu\rm as$, $+45~\mu\rm as$), positions that roughly correspond to the main peak and the first sidelobe peak of the dirty beam. We obtained amplitude and phase calibration solutions from the self-calibrations (using CALIB, a task in AIPS) and performed the CLEAN imaging with IMAGR, another AIPS task. The CLEAN subtraction area was limited by a narrow BOX, a circle of $30~\mu\rm as$ radius centered at ($+2~\mu\rm as$, $+22~\mu\rm as$) from the map center, which mimics the BOX setting used in the EHTC-DIFMAP pipeline (Section 5.5).
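The following minimal sketch illustrates the kind of synthetic data used here (the actual data were generated with UVMOD in AIPS): model visibilities are the Fourier transform of a set of point components evaluated at the observed u-v points, with optional thermal noise. The u-v values, component lists, and noise level below are placeholders chosen for illustration.

```python
import numpy as np

UAS_TO_RAD = np.pi / 180.0 / 3600.0 / 1e6

def model_visibilities(u, v, components):
    """components: list of (flux_Jy, dRA_uas, dDec_uas) point sources."""
    vis = np.zeros(len(u), dtype=complex)
    for flux, dra, ddec in components:
        l, m = dra * UAS_TO_RAD, ddec * UAS_TO_RAD
        vis += flux * np.exp(-2j * np.pi * (u * l + v * m))
    return vis

def add_thermal_noise(vis, sigma, seed=0):
    rng = np.random.default_rng(seed)
    return vis + sigma * (rng.normal(size=vis.size) + 1j * rng.normal(size=vis.size))

# placeholder u-v points (wavelengths); the experiments above use the actual EHT coverage
u = np.array([2.0e9, 5.0e9, 8.0e9])
v = np.array([1.0e9, 3.0e9, 0.5e9])

single_point = [(1.0, 0.0, 0.0)]                       # 1 Jy point at the phase centre
two_points   = [(0.5, 0.0, 0.0), (0.5, 0.0, 45.0)]     # model B of the text
vis_case2 = model_visibilities(u, v, single_point)     # noise-free data (case 2)
vis_case1 = add_thermal_noise(vis_case2, sigma=100.0)  # heavily noise-dominated data (case 1)
print(np.abs(vis_case2), np.abs(vis_case1))
```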
The left panel of Figure 22 shows the ring image obtained from the noisy data (case 1); the CLEAN image is identical to model image A used in the self-calibration. When the S/N is very low, the CLEAN image after self-calibration can become identical to the model image used, so the result of case 1 does not tell us much. The results of case 2 are surprising. The middle panel of Figure 22 shows the ring image obtained from the noise-free data; the CLEAN image is very close to model image A used in the self-calibration. Even when there is no noise, if we start from the assumption that there is a ring in the self-calibration image model, then with the EHT u-v coverage we obtain a ring similar to that obtained by the EHTC. The right panel of Figure 22 shows the result of using model image B for the self-calibration. Although the data are those of a single noise-free point, we obtain a two-point image very close to model B. In addition, the two recovered points show small shifts from the model-B positions: the center point is shifted towards the east and the upper point towards the west. These shifts appear to be caused by the structure of the dirty beam; the shift of the center point is along the ridge extending from the main beam, while the shift of the upper point is towards the peak of the first sidelobe. These experiments show that the reconstructed images are very close to the image model used in the self-calibration. Such a result does not occur when the u-v coverage is sufficient, as for the VLA or ALMA. Unfortunately the EHT u-v coverage is not sufficient, and it carries a particular bias that makes it prone to producing shapes (including rings) with a size of $\sim 40~\mu\rm as$. We have shown that the shape of the PSF (essentially the sampling bias that favors $\sim 40~\mu\rm as$ structures) can create $\sim 40~\mu\rm as$ ring structures even from data containing only a single, centro-symmetric point at the center. However, the most powerful factor is not the data correction by self-calibration with wrong model images: here and in Section 5.3 we showed that the very narrow FOV imposed by the BOX has great power to create a $\sim 40~\mu\rm as$ ring structure. In Section 5.5 we discuss the BOX setting in the EHTC-DIFMAP pipeline.

Figure 22: Resulting images from the simulated data sets. Case 1-A (left panel): the ring shape created from the noise-dominated data set after applying the self-calibration solutions obtained with the EHTC ring as the image model. Case 2-A (middle panel): the ring shape created from a noise-free single-point data set (infinite S/N) after applying the self-calibration solutions obtained with the EHTC ring as the image model. Case 2-B (right panel): the two-point image created by applying to the noise-free single-point data the self-calibration solutions obtained with the two-point model. The two red crosses indicate the positions of the points in the model image used in the self-calibration.

### 5.5 What the EHTC really did in their DIFMAP imaging process

In this section we investigate the data-processing pipeline that the EHTC used to image the EHTC ring with DIFMAP (https://github.com/eventhorizontelescope/2019-D01-02). Among the three methods the EHTC used for imaging, DIFMAP is the closest to the usual procedure (difmap/EHT_Difmap in the above repository). The other two are rather new and difficult to study here.
Since all three methods gave essentially the same image, we believe it is sufficient to analyze one of them to find out why the ring structure was formed. The EHTC-DIFMAP imaging team used the hybrid mapping technique to calibrate the data and obtain the image; hybrid mapping is the standard method of VLBI calibration and imaging. Let us first summarize this standard procedure. A radio interferometer samples the spatial Fourier components of the brightness distribution of the observed source (the visibilities). In principle, the brightness distribution can be obtained by collecting the samples and performing an inverse Fourier transform, but in practice there are two problems: (a) the sampled visibility data contain errors, so they are not the correct spatial Fourier components and must be calibrated; self-calibration obtains calibration solutions by assuming an image, and if the assumed image is not correct, the correct calibration solution cannot be obtained; and (b) the number of samples is limited, so the inverse Fourier transform alone does not produce the correct image: the PSF is not a point, and its shape must be deconvolved by image processing. To obtain correct calibration solutions and clear images, hybrid mapping, which alternates between these two tasks, is normally used in VLBI imaging: self-calibration computes the calibration solutions from a tentative image model, and CLEAN performs the deconvolution of the PSF shape. The hybrid mapping algorithm is as follows:

(a) Assume an image model for self-calibration; a one-point model is usually used at the start.
(b) Compute tentative calibration solutions by self-calibration with the image model.
(c) Calibrate the visibility data with these tentative solutions.
(d) Obtain the next tentative image by CLEANing the calibrated visibility data; the CLEAN image is composed of point sources (CLEAN components).
(e) Form the next image model for self-calibration by selecting reliable CLEAN components from the image.
(f) Return to (b) and repeat steps (b)-(e) until a satisfactory image is obtained.

The visibility (spatial Fourier component) is a complex quantity with amplitude and phase. It is safest first to repeat self-calibration for phase solutions only until a nearly satisfactory image is obtained, and only then to perform self-calibration for amplitude and phase using that image, because amplitude self-calibration tends to give a solution that is too close to the model image used in the self-calibration (Cornwell & Fomalont, 1999). Repeated amplitude self-calibration is nevertheless often performed under careful checks of the practical quality of the data, and in many cases good calibration solutions are then obtained. In addition, the BOX technique is an auxiliary tool used in CLEAN imaging (it is essentially unrelated to the hybrid mapping method): the area of CLEAN subtraction can be limited intentionally by the BOX setting, which may yield a good image despite incomplete calibration. As already mentioned, we used the BOX technique in our analysis; so did the EHTC-DIFMAP team. A schematic sketch of the hybrid-mapping loop is given below.
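The following Python skeleton reflects only the control flow of steps (a)-(f). The helper functions are trivial stand-ins of our own for the real operations (self-calibration as done by CALIB, deconvolution as done by CLEAN in IMAGR or DIFMAP); they are not taken from any pipeline.

```python
import numpy as np

UAS_TO_RAD = np.pi / 180.0 / 3600.0 / 1e6

def self_calibrate(vis, model_vis, phase_only=True):
    # placeholder: a real implementation solves per-station gains against the model
    # (see the self-calibration sketch in Section 5.6) and applies them to the data
    return vis

def clean_image(vis, u, v, box=None, niter=100):
    # placeholder for CLEAN: returns a list of point components (flux, dRA_uas, dDec_uas)
    return [(float(np.abs(vis).mean()), 0.0, 0.0)]

def components_to_visibilities(components, u, v):
    vis = np.zeros(len(u), dtype=complex)
    for flux, dra, ddec in components:
        vis += flux * np.exp(-2j * np.pi * (u * dra + v * ddec) * UAS_TO_RAD)
    return vis

def hybrid_mapping(vis, u, v, n_rounds=10, box=None):
    model = [(1.0, 0.0, 0.0)]                                    # (a) start from a 1-point model
    for k in range(n_rounds):
        model_vis = components_to_visibilities(model, u, v)
        cal_vis = self_calibrate(vis, model_vis,                 # (b)-(c) solve and apply gains;
                                 phase_only=(k < n_rounds - 2))  # phase-only first, A&P at the end
        comps = clean_image(cal_vis, u, v, box=box)              # (d) CLEAN the calibrated data
        model = [c for c in comps if c[0] > 0.0]                 # (e) keep credible components
    return model                                                 # (f) stop when satisfied

# toy call with three placeholder u-v points and unit visibilities
u = np.array([2.0e9, 5.0e9, 8.0e9]); v = np.array([1.0e9, 3.0e9, 0.5e9])
print(hybrid_mapping(np.ones(3, dtype=complex), u, v))
```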
We now examine the EHTC-DIFMAP procedure. The EHTC-DIFMAP imaging team performed self-calibration for phase solutions only, with a single point-source model, following the general procedure of the hybrid mapping method. However, after the first phase self-calibration they applied the BOX technique with a very narrow BOX (BOXCircMask_r30_x-0.002_y0.022.win in the repository) already in the first imaging trial: the CLEAN subtraction area was restricted to a circle of $60~\mu\rm as$ diameter specified by the BOX. This means that from the very beginning of their data analysis the EHTC-DIFMAP team assumed the image to be a single, very compact one and excluded other possibilities. The BOX is roughly a circle of $60~\mu\rm as$ diameter composed of 30 small rectangles. Comparing the BOX shape with the dirty beam, we find that the BOX covers the main beam and the first sidelobe but leaves out the second and further sidelobes. The BOX is not located at the phase center but is offset by $+22~\mu\rm as$ in the y-direction ($\delta$ direction); this $22~\mu\rm as$ offset coincides with the radius of the EHTC ring. Figure 23 shows the EHTC BOX position on the dirty beam (PSF), aligned at their phase centers. The EHTC BOX covers (1) the main beam peak, (2) the ridge extending to the left (east) from the main beam peak, (3) part of the first sidelobe located north of the main beam, and (4) an area of negative intensity near the center of the BOX area. The structure of the dirty beam within the EHTC BOX is close to the shape of the EHTC $\sim 40~\mu\rm as$ ring. The PSF structure within the BOX explains not only the $\sim 40~\mu\rm as$ diameter of the EHTC ring but also its asymmetric structure: the EHTC ring is asymmetric, with a bright south side; the brighter south side corresponds to the main beam (the brightest feature in the BOX), while the darker north side corresponds to the first sidelobe in the north (the less bright peak in the BOX). If such a narrow, offset BOX is used from the beginning to the end of the hybrid mapping process, the effect of the dirty beam can be enhanced, as we showed in Section 5.3. It is quite clear that the EHTC $\sim 40~\mu\rm as$ ring is the result of such enhancement of the $\sim 40~\mu\rm as$ substructure of the PSF.

Figure 23: The size and position of the BOX used in the EHTC DIFMAP pipeline (blue shaded area), overlaid on the dirty beam (PSF) already shown in Figure 19. The two figures are aligned at their phase centers.

We actually ran the EHTC-DIFMAP pipeline to investigate its behavior and performance. The left panel of Figure 24 shows the ring image that results from running the EHTC-DIFMAP pipeline with the default parameter settings. The right panel of Figure 24 shows the image obtained by removing the BOX setting (Figure 23) from the EHTC-DIFMAP pipeline: the ring in the BOX region is destroyed and the brightness distribution moves outside the BOX region. The new brightness distribution shows a core-knot structure at the center, similar to the results of our analysis. In addition, several emission peaks appear in the $PA=45^{\circ}$ and $PA=225^{\circ}$ directions from the center. These are probably sidelobes, but it is very interesting that one emission peak in the $PA=225^{\circ}$ direction corresponds to the feature W found in our analysis.
This result implies that the very narrow BOX setting of the EHTC-DIFMAP pipeline is the main factor creating the EHTC ring image. We also created simulated data and applied the EHTC-DIFMAP pipeline to it, to check whether the pipeline always calibrates the data correctly and reproduces the true content of the data. We created the simulated data with the task UVMOD in AIPS and obtained images both with the task IMAGR in AIPS and with the EHTC-DIFMAP pipeline for comparison. Figure 25 shows examples of the simulation results for three cases: (1) a core-knot model aligned along $PA=-38.7^{\circ}$ (left panels), (2) a two-point model aligned north-south (center panels), and (3) a $40~\mu\rm as$ diameter ring with a ring width of $2~\mu\rm as$, different from that of the EHTC ring (right panels). The top panels show images obtained from the simulated data with the AIPS task IMAGR, using the same BOX settings as the EHTC-DIFMAP pipeline; IMAGR produced the expected images. The bottom panels of Figure 25 show the results of the EHTC-DIFMAP pipeline: the original images of the simulated data could not be reproduced. For the core-knot model (bottom left) the knot component is lost; for the two-point model (bottom center) the weak north point has disappeared; and, even more surprisingly, for the $40~\mu\rm as$ diameter ring model (bottom right) the image quality is degraded and the ring structure is obscured. Since these model images are compact and fit within the BOX of the EHTC-DIFMAP pipeline, this behavior is not due to the narrow field of view imposed by the BOX; it is probably because the performance of the hybrid mapping process in the EHTC-DIFMAP pipeline depends strongly on the image structure and noise structure of the input data. Our simulations were not limited to those shown in Figure 25; they are summarized as follows.

1. A single point-source model of 1 Jy with no noise. The image was reproduced by both IMAGR in AIPS and the EHTC-DIFMAP pipeline.

2. A pure-noise model. No specific image was detected with IMAGR in AIPS, but a single point was obtained from the EHTC-DIFMAP pipeline. This is probably because the first self-calibration was performed with a single point-source image model; it is a common phenomenon when self-calibration is applied to data sets with very low S/N.

3. A core-knot model: two points consisting of a 1 Jy point at the center ($0~\mu\rm as$, $0~\mu\rm as$) and a 0.25 Jy point at ($-25~\mu\rm as$, $+20~\mu\rm as$), separated by about $32~\mu\rm as$ along the direction $PA=-38.7^{\circ}$. Twelve simulated data sets were generated with noise ranging from zero to 640 Jy/weight. IMAGR in AIPS detected the core-knot structure as expected; in the EHTC-DIFMAP pipeline the knot component disappeared in 11 of the 12 cases.

4. Ring images with diameters around $40~\mu\rm as$: ring width $2~\mu\rm as$, ring diameters 30, 35, 40, and 45 $\mu\rm as$, flux density 1 Jy, no noise. With IMAGR in AIPS the central hole of the ring is obscured when the ring diameter is as small as $30~\mu\rm as$; for larger diameters the ring can be recognized.
With the EHTC-DIFMAP pipeline, on the other hand, the ring structure became ambiguous in all images, as in the example shown in Figure 25.

5. A two-point model with a north-south shift: a 1 Jy point at the center ($0~\mu\rm as$, $0~\mu\rm as$) and a 0.5 Jy point north of it, with 11 separations of 0, 5, 10, 15, 20, 25, 30, 35, 40, 45, and 50 $\mu\rm as$. With IMAGR in AIPS the two points cannot be separated when the spacing is very narrow; at a separation of $10~\mu\rm as$ the image is elongated in the north-south direction, and for separations larger than $25~\mu\rm as$ the two points are reproduced separately. With the EHTC-DIFMAP pipeline the two points are reproduced correctly only when the separation is larger than $45~\mu\rm as$; otherwise the north point is not reproduced at the correct position.

6. A two-point model with an east-west shift: a 1 Jy point at the center ($0~\mu\rm as$, $0~\mu\rm as$) and a 0.5 Jy point at $\delta=42~\mu\rm as$, with the R.A. position of the second point shifted in steps of $5~\mu\rm as$ from $-20~\mu\rm as$ to $+20~\mu\rm as$, giving 9 simulated data sets. The positions of both points are reproduced by IMAGR in AIPS. The EHTC-DIFMAP pipeline reproduces the two points in four cases; in the other four cases the position of the north point deviates from the correct position by more than $10~\mu\rm as$.

Thus, in most cases the EHTC-DIFMAP pipeline did not reproduce the intrinsic image of the simulated data. Figure 10 of Event Horizon Telescope collaboration (2019d) shows that their procedure successfully reproduces images from simulated data (ring, crescent, disk, and double source), but those results are inconsistent with ours (at least for the EHTC DIFMAP case). In our simulations noise is either absent or purely thermal; IMAGR in AIPS nearly reproduces the input model images, while the EHTC-DIFMAP pipeline does not. The difference between the two methods lies in the hybrid mapping process: the EHTC-DIFMAP pipeline lacks the general data-calibration performance described in Section 5.8.3.

Figure 24: Resulting images from the EHTC-DIFMAP pipeline. The upper panels show the entire field of view with the default setting of the EHTC-DIFMAP pipeline (1 mas square); the lower panels show enlargements of the central parts of the upper images. The left panels show the result of running the EHTC-DIFMAP pipeline with the default parameter settings; the light blue circle shows the location and size of the BOX, and the ring-shaped image appears in the BOX area. The right panels show the resulting image when the EHTC-DIFMAP pipeline is run without the BOX setting; in the enlarged image (bottom right) we can see emission peaks corresponding to the three components we found: core (C), knots (K), and west component (W). Here we used the data set SR1_M87_2017_095_hi_hops_netcal_StokesI.uvfits.

Figure 25: Resulting images from the simulated data. The upper panels show the imaging results obtained with IMAGR in AIPS; the lower panels show the results of the EHTC-DIFMAP pipeline with its default parameter settings. The same BOX setting is used for IMAGR and for the EHTC-DIFMAP pipeline; the light blue circles show the position and size of the BOX. Three results are presented: a core-knot model (left panels), a two-point model (center panels), and a $\sim 40~\mu\rm as$ ring model (right panels).
The red crosses in the left and center panels indicate the positions of the emission points placed when the simulated data were prepared.

### 5.6 The amount of calibration performed in the EHTC imaging process

The EHTC papers describe the data calibration of the pre-imaging stage in detail, but say little about the calibration performed during the imaging process (through self-calibration in the hybrid mapping process, in the case of the EHTC-DIFMAP imaging team). This description of the data calibration is insufficient, because it is rare for pre-calibration alone to be sufficient for obtaining fine VLBI images. We therefore estimate the amount of calibration performed by the EHTC during their imaging process, reconstructing it with the following procedure. First, following the EHTC's public procedure, we reproduced the EHTC ring image of the first observing day (left panel of Figure 26). Then, using the CLEAN components of the ring image as the model image, we performed self-calibration with the task CALIB in AIPS and obtained solutions for both phase and amplitude; the CALIB parameters are listed in Table 6. Figures 27 and 28 show the total amount of phase and amplitude calibration for the EHTC ring image model. Note that we also performed CLEAN imaging to verify that the self-calibration solutions are consistent with the original EHTC ring image. First, we flagged data points whose amplitude solutions exceeded 4, using the task SNCOR in AIPS, following the standard VLBI practice that "discarding bad data will make a better image than keeping them". We then applied the solutions to the selected data and obtained an image with the AIPS CLEAN task IMAGR. The right panel of Figure 26 shows the resulting image, whose quality is similar to that of the original EHTC ring. Thus "our" self-calibration solutions successfully reproduce the EHTC ring image. The phase and amplitude solutions for the EHTC ring image are very similar to, or worse than, our results in Figures 5 and 6. Therefore, if such large amplitudes and their rapid variations in the self-calibration solutions were a sign against the quality of the resulting image, both our final images and the EHTC ring images would have to be rejected. This implies that the "calibrated" data released by the EHTC are not of high quality (as is often the case with VLBI archival data).

| Parameter | Value |
|---|---|
| SOLTYPE | 'L1' |
| SOLMODE | 'A&P' (both phase and amplitude) |
| REFANT | 1 (ALMA) |
| SOLINT (solution interval) | 0.15 (min) |
| APARM(1) | 1 |
| APARM(7) (S/N cut-off) | 3 |

Table 6: Parameters of CALIB for the amplitude and phase self-calibration.

Figure 26: The left panel shows the EHTC ring image of the first-day observations, obtained by following their public procedure. The right panel shows our reconstructed image obtained by applying the self-calibration solutions shown in Figures 27 and 28. The same BOX setting as in the EHTC-DIFMAP pipeline was used, and the same restoring beam was applied ($23.1\times 17.0~\mu\rm as$ with $PA=44.4^{\circ}$).

Figure 27: Phase solutions obtained from self-calibration using the EHTC ring image. The red points show the solutions for the IF 1 data, the blue points those for the IF 2 data.

Figure 28: Amplitude solutions obtained from self-calibration using the EHTC ring image. The red points show the solutions for the IF 1 data, the blue points those for the IF 2 data.
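For reference, the following sketch (our own toy illustration, not the AIPS implementation) shows the kind of computation the amplitude-and-phase self-calibration step performs: per-station complex gains are solved for against a model by alternating least squares, and amplitude solutions above the threshold of 4 quoted above would be flagged.

```python
import numpy as np

def selfcal_gains(vis, model, ant1, ant2, n_ant, n_iter=50):
    """Solve per-station complex gains g minimizing sum |V_ij - g_i * conj(g_j) * M_ij|^2."""
    g = np.ones(n_ant, dtype=complex)
    for _ in range(n_iter):
        g_new = g.copy()
        for i in range(n_ant):
            num, den = 0.0 + 0.0j, 0.0
            for vij, mij, a, b in zip(vis, model, ant1, ant2):
                if a == i:                          # V_ij ~ g_i * x with x = conj(g_j) * M_ij
                    x = np.conj(g[b]) * mij
                elif b == i:                        # use the conjugate baseline
                    vij, x = np.conj(vij), np.conj(g[a]) * np.conj(mij)
                else:
                    continue
                num += vij * np.conj(x)
                den += np.abs(x) ** 2
            if den > 0:
                g_new[i] = num / den
        g = 0.5 * (g + g_new)                       # damped update for stability
    return g

# toy example: 4 stations observing a 1 Jy point source (model visibilities = 1)
rng = np.random.default_rng(1)
n_ant = 4
pairs = [(i, j) for i in range(n_ant) for j in range(i + 1, n_ant)]
ant1 = np.array([p[0] for p in pairs]); ant2 = np.array([p[1] for p in pairs])
model = np.ones(len(pairs), dtype=complex)
g_true = (1.0 + 0.3 * rng.normal(size=n_ant)) * np.exp(1j * rng.uniform(-1, 1, n_ant))
vis = g_true[ant1] * np.conj(g_true[ant2]) * model
g = selfcal_gains(vis, model, ant1, ant2, n_ant)
flagged = np.abs(g) > 4.0                           # flagging rule applied with SNCOR above
print("recovered |g|:", np.round(np.abs(g), 3), "flagged:", int(flagged.sum()))
```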
### 5.7 Sensitivity of the $\sim 40~\mu\rm as$ ring to the BOX size

In Section 5.6 we reproduced the ring structure obtained by the EHTC, using the same very narrow BOX that the EHTC used. If the ring image were definitely the correct image, it should depend only weakly on the BOX size. Indeed, we studied the effect of changing the BOX in Section 4.3.2 and found that the main structure of our final image (features C & K) does not disappear. In this section we investigate how sensitive the ring structure obtained by the EHTC is to the BOX they used. We calibrated the data with the calibration solutions obtained from self-calibration using the EHTC ring as the image model (shown in Figures 27 and 28) and then performed CLEANs using circular BOXes with diameters of (a) $60~\mu\rm as$, (b) $80~\mu\rm as$, (c) $100~\mu\rm as$, (d) $120~\mu\rm as$, (e) $240~\mu\rm as$, and (f) $450~\mu\rm as$, centered at the same position as in the EHTC-DIFMAP case. The other CLEAN parameters are the same as those used for the right-hand image of Figure 26. The results are shown in Figures 29 and 30: Figure 29 shows the entire imaging area of 2 mas square, and Figure 30 shows enlarged views of the central $256~\mu\rm as$ square; the BOXes used are indicated by blue circles. The ring structure in panel (a) ($D=60~\mu\rm as$) is the same as in the right panel of Figure 26 and resembles that obtained by the EHTC. The ring extends beyond the BOX area because the CLEAN components lie near the BOX boundary and a $20~\mu\rm as$ restoring beam is convolved to form the CLEAN map. For (b) $D=80~\mu\rm as$ and (c) $D=100~\mu\rm as$ the ring is still recognizable, but as the BOX grows, CLEAN components appear not on the ring but near the boundary of the BOX. For (d) $D=120~\mu\rm as$ there are several CLEAN components almost on the BOX boundary, though the ring structure is still recognizable. For (e) $D=240~\mu\rm as$ and (f) $D=450~\mu\rm as$ there is no ring; instead a single elongated bright spot appears. We also tried larger BOXes, $D=900~\mu\rm as$ and $D=1300~\mu\rm as$, and the resulting images are similar to the $D=450~\mu\rm as$ case, showing a bright spot at the map center. If the field of view (FOV) is smaller than in case (d) a ring image appears, and if it is larger the ring image is destroyed; in other words, the boundary FOV is around $120~\mu\rm as$. It is a curious coincidence that the EHTC imaging teams set the FOV to $128~\mu\rm as$ (Event Horizon Telescope collaboration, 2019d), just around this boundary FOV (it is $60~\mu\rm as$ in the case of the EHTC-DIFMAP pipeline, as already mentioned). (Figure 6 of Event Horizon Telescope collaboration (2019d) shows test results similar to ours, but only for BOX sizes narrower than $100~\mu\rm as$ in diameter.) Thus we conclude that the EHTC ring appears only when a very narrow BOX is applied.

Figure 29: Stability of the EHTC ring structure against the BOX size: the full 2 mas square images of the CLEAN results from IMAGR (AIPS). Panels (a)-(f) show the resulting images for BOX diameters $D=60, 80, 100, 120, 240$, and $450~\mu\rm as$, respectively. The blue circle shows the respective BOX area.
The centers of the circles are at ($-2~\mu\rm as$, $+22~\mu\rm as$), following the BOX used by the EHTC-DIFMAP team.

Figure 30: Stability of the EHTC ring structure against the BOX size: enlarged views of the central $256~\mu\rm as$ square of the resulting CLEAN images from IMAGR (AIPS). Panels (a)-(f) show the resulting images for BOX diameters $D=60, 80, 100, 120, 240$, and $450~\mu\rm as$, respectively. The blue circle shows the respective BOX area; the centers of the circles are at ($-2~\mu\rm as$, $+22~\mu\rm as$), following the BOX used by the EHTC-DIFMAP team.

### 5.8 Problems in the EHTC analysis

Here we explain why the EHTC created the ring image. The EHTC described the details of their imaging process in Event Horizon Telescope collaboration (2019d), in which we find some of the reasons why the artifact was created. The EHTC conducted various surveys, but their methods were not objective and were biased towards their expectations from the very beginning of the analysis, and they did not perform the basic data checks that VLBI experts routinely do. These points are discussed below.

#### 5.8.1 Blind test by the EHTC

According to Event Horizon Telescope collaboration (2019d), "in first stage, four teams, each blind to the others' work, produced images of M 87. This stage allowed us to avoid shared human bias and to assess common features among independent reconstructions." They added, "All four images show an asymmetric ring structure. For both RML teams and both CLEAN teams, the ring has a diameter of approximately $40~\mu\rm as$, with brighter emission in the south." The EHTC thus first conducted a blind test and obtained mutually consistent images. But their isolated imaging processes were not really blind tests. The angular size of a black hole shadow can easily be calculated from the distance and mass of the black hole: as every black hole shadow researcher knows, if the mass of the SMBH in M 87 is $\sim 6\times 10^{9}M_{\odot}$, the expected shadow size is about $40~\mu\rm as$ in diameter. Based on that calculation, everyone tries to find a ring or arc-shaped image with a diameter of $40~\mu\rm as$, and the members of the EHTC imaging teams are no exception. As shown in Section 5.1, the EHT public data carry a sampling bias in that they lack the $\sim 40~\mu\rm as$ spatial Fourier components. This bias tends to create $\sim 40~\mu\rm as$ scale structures that are not present in the original image, and it is this sampling bias that gives the PSF its $\sim 40~\mu\rm as$ scale structure (Section 5.2). Not surprisingly, all the EHTC imaging teams, looking for a black hole shadow with a size of $\sim 40~\mu\rm as$, were able to find the desired illusion in the sampling bias. A truly blind imaging test should be conducted by researchers with no prior knowledge of the expected image of the SMBH in M 87.

#### 5.8.2 Attention to sampling bias

It is well known that sampling bias in data can seriously affect data analysis. Two things should have been done regarding sampling bias. First, the PSF (dirty beam) structure should have been checked. In radio interferometric data analysis, checking the structure of the dirty beam usually provides insight into the effects of insufficient u-v coverage; with a sparse array configuration such as the EHT array, it is all the more important to check the shape of the dirty beam and to take it into account as a possible source of false imaging.
Event Horizon Telescope collaboration (2019d) describes the size and shape of the main beam of the dirty beam and the corresponding CLEAN beam, but we find no discussion of the overall dirty beam structure in the EHTC papers. As already shown, the separation between the main beam and the first sidelobes of the dirty beam is $\sim 40~\mu\rm as$, which coincidentally equals the expected size of the black hole shadow in M 87; an old-time VLBI expert would have noticed this spacing. Second, the performance of the new imaging methods with respect to sampling bias needs to be verified. The CLEAN algorithm, the well-known imaging method of radio interferometry, was originally designed to obtain correct images by deconvolving the false structures that appear through the dirty beam (whose origin is the sampling bias). In actual use, however, it is not easy to eliminate this effect completely; most researchers in the field know the CLEAN method's performance against sampling bias from its long history of use and pay attention to residual structures. The EHTC used two new imaging methods that are unfamiliar to most radio astronomers, and judging from the case of the CLEAN algorithm, we infer that the performance of the new methods against sampling bias is not perfect either; their performance needs to be demonstrated.

#### 5.8.3 The EHTC simulations

The EHTC conducted simulations to find the optimal parameters for imaging. According to Event Horizon Telescope collaboration (2019d), "In the second stage," the EHTC "reconstructed synthetic data from a large survey of imaging parameters and then compared the results with the corresponding ground truth images. This stage allowed us to select parameters objectively to use when reconstructing images of M 87." However, there are two major problems with this simulation. First, what should be learned from the simulation does not seem to have been thought through: the EHTC created synthetic data from the model images and searched for optimal parameters to reproduce the input model images. But what
# PDMA: Probabilistic Service Migration Approach for Delay-aware and Mobility-aware Mobile Edge Computing

Minxian Xu, Qiheng Zhou, Huaming Wu, Weiwei Lin, Kejiang Ye, Chengzhong Xu

Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Center for Applied Mathematics, Tianjin University, Tianjin, China; School of Computer Science and Engineering, South China University of Technology, Guangzhou, China; State Key Lab of IOTSC, Department of Computer Science, University of Macau, Macau SAR, China<EMAIL_ADDRESS>

(26 April 2016; 6 June 2016; 6 June 2016)

###### Abstract

As a key technology in the 5G era, Mobile Edge Computing (MEC) has developed rapidly in recent years. MEC aims to reduce the service delay of mobile users while alleviating the processing pressure on the core network. MEC can be regarded as an extension of cloud computing on the user side: edge servers bring computing resources closer to mobile users and provide more efficient interactions. However, due to users' dynamic mobility, the distance between a user and the edge server changes dynamically, which may cause fluctuations in Quality of Service (QoS). Therefore, when a mobile user moves in the MEC environment, appropriate approaches are needed to schedule the services deployed on the edge servers to maintain the user experience. In this paper, we model service scheduling in MEC scenarios and propose a delay-aware and mobility-aware service management approach based on concise probabilistic methods. This approach has low computational complexity and can effectively reduce service delay and migration costs. Furthermore, we conduct experiments with multiple realistic datasets and use iFogSim to evaluate the performance of the algorithm. The results show that our proposed approach improves service delay by 8% to 20% and reduces migration cost by more than 75% compared with baselines during rush hours.

###### keywords: Mobile Edge Computing, Edge Service Allocation, Service Migration, Latency, User Mobility

## 1 Introduction

Driven by the fast growth of Internet of Things (IoT) applications, low latency has become a major concern for service providers seeking to ensure user experience. Traditionally, services can be supported by cloud computing platforms; however, due to the increase in network loads, accessing resources from remote Cloud servers can lead to higher transmission costs and delays, which degrade the user experience [1]. To improve the user experience, computing resources can be moved into close proximity to the end users. Therefore, as a new paradigm complementing cloud computing, Mobile Edge Computing (MEC) has been proposed to enable efficient management of applications with low latency and constrained energy [2]. The motivation of MEC is to extend cloud computing capabilities to the edge of the network [3]. Services and applications can be deployed on edge servers, and part of the tasks on users' devices can be offloaded to them; services can thus be executed cooperatively using resources provisioned by both local mobile devices and edge servers [4]. This feature has made MEC an attractive paradigm for many delay-sensitive applications, e.g., Internet of Vehicles (IoV), Augmented Reality (AR), and Virtual Reality (VR) [5].
However, how to deploy services to specific edge servers is a challenge: deploying a service on an edge server that is far from the user's device can still cause high latency, and the edge servers can be heterogeneous [6]. Failing to provide users with satisfactory Quality of Service (QoS) would undermine the benefits of the MEC paradigm. In addition, compared with the traditional cloud computing paradigm, MEC brings a new challenge when mobile users (e.g., cars) move among different geographical locations [7]. The distance and the latency between users and edge devices keep changing dynamically due to the movement of users. Therefore, services need to be redeployed or migrated to locations suitable for the users to ensure QoS. To address these challenges, several key questions must be answered, including how to place edge services in the initial phase and when and where to migrate them according to the system's running status. Trade-offs exist when selecting the moment and location for migrating services: too-frequent migrations lead to communication overheads, while never migrating produces service response delays when a user moves far away from the original location. The running and network status of edge servers also change with time, which may affect the service delay. When developing efficient service allocation algorithms for the MEC environment, it is therefore important to take both delay and user mobility into consideration. Moreover, an otherwise optimal deployment solution for edge services becomes inefficient if computing the deployment takes several minutes; an efficient service allocation algorithm that executes within a short time supports the MEC environment better [8]. In this work, we use a probabilistic approach to study service allocation in MEC. Benefiting from its low algorithmic complexity, our approach can easily be extended to realistic environments. The main contributions are as follows:

* We formulate the optimization problem as a service allocation decision problem that minimizes the overall delay and cost, thereby reducing service delay and migration costs. User mobility, as it relates to service delay, is also considered, and we prove that the optimization problem is NP-hard.

* We propose a delay-aware and mobility-aware approach based on Bernoulli trials to decide when and where to deploy services (an illustrative sketch of such a probabilistic migration decision follows this list). Through theoretical analysis, we prove that our approach can be bounded by $1+\frac{(2+\varepsilon+\delta)JR}{(1+\varepsilon+\delta)(J+R)}$.

* We conduct experiments in iFogSim [9] with a base station dataset and a taxi movement dataset derived from realistic traces [10] to evaluate the performance of the proposed algorithm. The results demonstrate that our approach achieves better performance than the baselines in terms of overall delay and cost, with an 8% to 20% reduction in overall delay and an over 75% decrease in migration cost.
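The following minimal Python sketch illustrates what a Bernoulli-trial migration decision of this kind could look like. The probability model used here (expected delay saving traded against migration cost through an exponential sensitivity parameter) is our own illustrative assumption, not the exact PDMA formulation, which is defined in Section 4.

```python
import math
import random

def migrate_decision(current_delay_ms: float,
                     candidate_delay_ms: float,
                     migration_cost_ms: float,
                     sensitivity: float = 0.05) -> bool:
    """Decide migration with a Bernoulli trial whose success probability grows
    with the expected delay saving (illustrative model, not the PDMA formulas)."""
    saving = current_delay_ms - candidate_delay_ms - migration_cost_ms
    p = 1.0 - math.exp(-sensitivity * max(saving, 0.0))   # probability in [0, 1)
    return random.random() < p

# usage: a user has moved away from its current edge server and a closer one exists
random.seed(42)
print(migrate_decision(current_delay_ms=80.0, candidate_delay_ms=30.0,
                       migration_cost_ms=10.0))
```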
Finally, the conclusions and future research directions are provided in Section 6.

## 2 Related Work

Edge service allocation management in MEC has been investigated in a number of studies. The related work can be categorized into 1) delay-aware management and 2) mobility-aware management.

Table 1: Comparison of related work

| Approach | Algorithm complexity: high | Algorithm complexity: low | User mobility: yes | User mobility: no | Service management: initial placement | Service management: migration | Service amount: single | Service amount: multiple |
|---|---|---|---|---|---|---|---|---|
| Wang et al. 11 | $\surd$ | | $\surd$ | | | $\surd$ | | $\surd$ |
| Wang et al. 12 | $\surd$ | | $\surd$ | | | $\surd$ | $\surd$ | |
| Wang et al. 13 | $\surd$ | | $\surd$ | | | $\surd$ | $\surd$ | |
| Samanta et al. 14 | | $\surd$ | | $\surd$ | | $\surd$ | $\surd$ | |
| Samanta et al. 15 | | $\surd$ | | $\surd$ | | $\surd$ | $\surd$ | |
| Poularakis et al. 16 | | $\surd$ | | $\surd$ | $\surd$ | | | $\surd$ |
| Pasteris et al. 17 | $\surd$ | | | $\surd$ | $\surd$ | | $\surd$ | |
| Zhang et al. 18 | $\surd$ | | | $\surd$ | $\surd$ | | $\surd$ | |
| Wan et al. 19 | $\surd$ | | | $\surd$ | $\surd$ | | $\surd$ | |
| Gao et al. 20 | | $\surd$ | | $\surd$ | $\surd$ | | $\surd$ | |
| Yu et al. 21 | | $\surd$ | | $\surd$ | $\surd$ | | $\surd$ | |
| Badri et al. 8 | | $\surd$ | | $\surd$ | $\surd$ | | $\surd$ | |
| Ouyang et al. 22 | | $\surd$ | $\surd$ | | | $\surd$ | $\surd$ | |
| Wu et al. 23 | $\surd$ | | $\surd$ | | $\surd$ | | $\surd$ | |
| Ghosh et al. 24 | $\surd$ | | $\surd$ | | | | | $\surd$ |
| Shi et al. 25 | $\surd$ | | $\surd$ | | | | | $\surd$ |
| Yu et al. 26 | $\surd$ | | $\surd$ | | | | | $\surd$ |
| Our approach | | $\surd$ | $\surd$ | | $\surd$ | $\surd$ | | $\surd$ |

### 2.1 Delay-Aware Edge Service Management

Several approaches based on the Markov Decision Process (MDP) have been proposed. Wang et al. 11 investigated service coordination among edge clouds to support delay-aware service migration for mobile users. A reinforcement learning approach based on the MDP model is also applied to find the optimal decisions on migrating services. Wang et al. 12 presented a dynamic service migration approach for MEC based on MDP to find the optimal solution. An approach based on service migration by using MDP to capture costs to design service migration policy was also proposed in 13. However, the solution space of these approaches grows quickly as the number of edge devices and users increases, so scalability limits their applicability in realistic scenarios. Therefore, some assumptions need to be relaxed when applying them in a practical scenario. Different from these works, our proposed approach can be performed with much lower complexity.

Samanta et al. 14 studied a dynamic microservice scheduling scheme for MEC-enabled IoT to improve the network throughput and ensure the QoS for users. A distributed and latency-aware microservice scheduling policy was also introduced to reduce service latency while ensuring the transmission rate to minimize microservice completion time 15. Poularakis et al. 16 investigated an algorithm to jointly optimize service placement and request routing to support multi-cell networks in MEC under multiple resource constraints. The proposed algorithm achieves near-optimal performance in utilizing the available resources to maximize the number of served requests and reduce latency. Pasteris et al. 17 considered service placement in MEC with heterogeneous resources. Their objective is to maximize the total reward by using an approximation algorithm.
The designed algorithm can be applied to both small scale and large scale environments with two subroutines. Unfortunately, user mobility is not considered in these works, and most of them focused on the initial placement of services. In contrast, both the initial service placement and dynamic service migration are considered in our work. ### 2.2 Mobility-Aware Edge Service Management Mobility-aware service management in MEC has attracted the attention of researchers. Zhang et al. 18 presented a deep Q-network based approach for task migration in the MEC system, which can obtain the optimal migration policy without knowing the information of user mobility. Wan et al. 19 introduced a joint optimization methodology to assign resources to tasks based on evolutionary computation by considering power usage and latency in MEC environment. Gao et al. 20 introduced an approach to jointly optimize the access network selection and service placement in MEC. The long-term optimization problem is decomposed into a set of one-shot problems to reduce computation time. Yu et al. 21 investigated collaborative service placement in MEC to relieve the limited capacity of base stations by collaboratively utilize the resources from adjacent base stations. To solve this optimization problem, a decentralized algorithm based on matching theory was also proposed. Badri et al. 8 considered energy-aware service placement in MEC as a multi- stage stochastic program, which can maximize the QoS under the constraint of limited energy. A parallel sample average approximate algorithm was also proposed to solve the energy-aware problem. These works consider multiple objectives optimization, while the service migration is not modelled in these articles. In contrast, we consider service migration in our proposed approach. Ouyang et al. 27 proposed a mobility-aware service placement framework for cost-efficient MEC. In this approach, the latency is reduced under the constraint of long-term migration cost based on Lyapunov optimization. An adaptive service placement mechanism aimed at improving latency and service migration cost was also presented 22, which was modelled as a contextual multi-armed bandit problem and solved by an online algorithm based on the Thompson-sampling approach. Wu et al. 23 formulated the mobility-aware service selection problem in MEC as an optimization problem and solved it by a heuristic algorithm that combines the genetic algorithm and simulated annealing algorithm. The proposed approach can reduce the response time of service and the algorithm execution time is one order of magnitude lower than the baselines. Our work differs from these studies by considering multiple services management rather than single service management. Ghosh et al. 24 proposed a mobility-driven cloud-fog-edge collaborative real- time framework. The framework considers user mobility and performs location prediction of users using the Hidden Markov model. It enables efficient information processing and reduces delay. Although considering user mobility, these approaches focus on computation offloading, in mobile edge computing. Shi et al. 25 introduced a mobility-aware computation offloading decision method. It takes user mobility into consideration and adopts an adaptive genetic algorithm for offloading decisions. The proposed method can achieve a better offloading success rate and lower energy consumption of mobile users. Yu et al. 
26 presented a dynamic mobility-aware partial offloading algorithm to investigate the amount of data to be offloaded, which is based on location prediction to minimize energy consumption and service delay. Different from the service management approach in our approach, they perform location prediction and optimize the service delay via offloading decision instead of service placement among edge servers. ### 2.3 Critical Analysis Our proposed work and the related work is compared in Table 1. As a summary, our work contributes to the growing body of research in service allocation in MEC. To address the delay-aware and mobility-aware challenges of MEC, we apply a probabilistic-based algorithm for both service initial assignment and dynamic service migration with low computation complexity. We also consider the user mobility and multiple service management during our service allocation process. In addition, the performance of our approach is validated based on the data derived from realistic traces. ## 3 System Model and Problem Statement In this section, the system model is introduced for the key components of service scheduling in the typical MEC system. The model provides mechanisms for abstracting various functions and operations into an optimization problem. A case study is also provided to clarify the process of service scheduling in the MEC scenario. ### 3.1 MEC System Model Figure 1: MEC System Model As shown in Figure 1, the typical MEC system model can provide support for various applications, e.g., health monitoring, smart city, and smart mobile applications. The model contains several key components include mobile users and edge servers. Mobile users can utilize mobile devices and applications to request services from the edge servers via wireless access points (APs). The edge servers are generally small-scale data centers deployed by cloud and telecom operators near mobile users, which can connect to data centers through a gateway via the Internet. The edge servers and mobile users are separated by the air interface based on the advanced wireless communication and networking technologies. From the communication perspective, in MEC systems, communications are typically between mobile users and APs with the possibility of the device to device (D2D) communications. The edge servers can be co-located with wireless APs, such as base stations and WiFi routers, which can significantly reduce the capital expenditure. Apart from providing the wireless interface for edge servers, the wireless APs can also support access to resources from remote data centers via backhaul links. The computation tasks can be offloaded between the edge serves and remote data centers. If the mobile device cannot connect to edge servers due to the limited wireless interfaces, the D2D communications with neighboring devices can be complementary. To improve user experience, the content delivery networks (CDNs) provide the cached data that enable the users to access the data efficiently. Currently, different commercial technologies can be utilized for mobile communications, e.g., 5G network based on the combination of long-term evolution (LTE) and new radio-access technologies has been standardized and put into commercial use. These technologies can support efficient wireless communication from mobile users to APs for varying data rates and transmission ranges. Bluetooth can be used for short-range D2D communications in the MEC system. 
WiFi, LTE, and 5G are more powerful technologies for long-range communications between mobile users and edge servers, and can be switched between dynamically based on link reliability.

### 3.2 Problem Definitions

In this subsection, we introduce the objectives that our approach aims to achieve, namely the overall delay and the migration cost, which represent the costs from the perspectives of the user and the service provider, respectively.

#### 3.2.1 Basic Entities

In our model, we use $U$ to represent a set of users with size $N$, where $u_{i}\in U$ and $i\in\\{1,2,\ldots,N\\}$. Let $E$ represent a set of edge servers with size $J$, where $E_{j}\in E$ and $j\in\\{1,2,\ldots,J\\}$, and let $BS$ represent the set of base stations with size $L$, where $BS_{l}\in BS$ and $l\in\\{1,2,\ldots,L\\}$. Let $S$ denote the set of services with size $R$, where $S_{r}\in S$ and $r\in\\{1,2,\ldots,R\\}$. For each user $u_{i}\in U$, we denote its geographic location at time $t$ by $p_{i}(t)=(lat_{i},lng_{i})$, specified by latitude $lat_{i}$ and longitude $lng_{i}$. Each user utilizes one edge service, represented as $S_{u_{i}}$, which needs to be deployed on an edge server. We denote the edge server by $E_{j}\in E$; each edge server is attached to a base station $BS_{l}$. Therefore, the geographical location of the edge server is the same as that of the base station, denoted as $p_{l}=(lat_{l},lng_{l})$. The resource capacity of each edge server is represented by $C_{E_{j}}(a_{j},m_{j},n_{j})$, where $a_{j},m_{j}$, and $n_{j}$ represent the CPU, memory, and network capacity respectively, and the capacity will affect the computation latency of the services. The edge server $E_{j}$ from which user $u_{i}$ accesses its service $S_{u_{i}}$ is denoted as $E_{u_{i},j}$. At time $t$, user $u_{i}$ connects to the nearest base station $BS_{l}$, and we denote the distance between the user $u_{i}$ and the base station $BS_{l}$ as $D_{i,l}=\left\|p_{i}(t)-p_{l}\right\|$, i.e., the Euclidean distance between the user and the base station. The nearest base station is denoted as the $current\ base\ station$, and as the user moves, the $current\ base\ station$ of the user will switch automatically. The $selected\ base\ station$ denotes the base station on which the edge server that hosts $u_{i}$’s service is deployed. Because of the switch of the $current\ base\ station$ and the variation of the edge server’s workload, the latency of the edge service may increase; both effects are modelled as parameters in our model. Therefore, a scheduling algorithm is required to determine whether the service should be migrated to another edge server and to choose a destination server for the service migration.

#### 3.2.2 Overall Delay

In our model, the $overall\ delay$ mainly consists of three parts, namely the $communication\ delay$, the $computation\ delay$ and the $migration\ delay$. These three parts, which together determine the user experience, are explained below. Communication delay can be split into two parts: the delay of the data transmission from user $u_{i}$ to its $current\ base\ station$, and the delay of the $current\ base\ station$ forwarding data to the base station that hosts the edge server. The first part is performed over the wireless channel. Here, we apply Eq. (1), based on Shannon theory 28, to calculate the maximum transmission rate ($tr$) of the wireless channel.
$tr=W\log_{2}{(1+\frac{S_{p}}{gN_{p}})},$ (1)

where $W$ denotes the channel bandwidth, $S_{p}$ denotes the transmission power of the mobile device and $N_{p}$ denotes the noise power. In addition, the channel gain between the location of the user and its $current\ base\ station$ is denoted as $g$, which varies as the user moves from one place to another. The second part of the $communication\ delay$ is the transmission delay between the $current\ base\ station$ ($BS_{c}$) and the $migrated\ base\ station$ ($BS_{m}$). We use a matrix $M_{c,m}$ to represent the delay between $BS_{c}$ and $BS_{m}$; $M_{c,m}$ is infinite if $BS_{c}$ and $BS_{m}$ are not connected directly. Therefore, the second part of the communication delay can be computed by finding the shortest path with the minimum transmission delay, which we denote as $D(BS_{c},BS_{m})$. Then, we can get the communication delay $T_{cm}$ by:

$T_{cm}(u_{i},E_{j},E_{m})=\frac{c_{i}}{tr}+D(BS_{c},BS_{m}),$ (2)

where $c_{i}$ is the task size of $u_{i}$.

Computation delay is the task execution time of services deployed on the edge server. Since each edge server can host several services and execute multiple tasks at the same time, the execution time of each task varies with the available resources of the edge server. The task execution time $T_{cp}$ can be calculated by:

$T_{cp}(u_{i},E_{j})=\frac{c_{i}}{w_{j}},$ (3)

where $w_{j}$ is the computational workload allocated to the task by edge server $E_{j}$.

Migration delay is the downtime of service migration. During the migration, the service needs to be suspended for a period of time, and then the in-memory state of the service is transferred to the destination edge server. Then, the service restarts on the new edge server and processes service requests from mobile users. Therefore, the migration also increases the service delay due to the downtime, which should be considered in the overall delay. The migration delay $T_{m}$ can be modelled as:

$T_{m}(BS_{c},BS_{m})=\left\\{\begin{array}[]{ll}0,&\textrm{if $BS_{c}=BS_{m}$},\\\ M_{c},&\textrm{if $BS_{c}\neq BS_{m}$},\end{array}\right.$

where the migration delay is 0 when no migration is triggered, and $M_{c}$ is a constant indicating the migration delay otherwise. Therefore, the overall delay can be calculated based on $T_{cm}$, $T_{cp}$ and $T_{m}$. One of our objectives is to minimize the overall delay to assure the user experience, which can be represented as:

$\min:\quad\frac{1}{P\times N\times J}\sum_{t}^{P}\sum_{i}^{N}\sum_{j}^{J}\big{\\{}T_{cp}(u_{i},E_{j})+T_{cm}(u_{i},E_{j},E_{m})+T_{m}(BS_{c},BS_{m})\big{\\}},$ (4)

where $t\in\\{1,2,\ldots,P\\}$ represents the observed time interval.

#### 3.2.3 Migration Cost

The migration cost is another important metric in our model. From the perspective of service providers, it represents the cost of migrating and placing services. The migration cost of service $S_{r}$ is denoted as $C^{S_{r}}_{j,m}=F(S_{r},E_{j},E_{m})$, where $F$ is the function to calculate the transmission cost and computation cost of a service migration. As a user keeps moving over a period of time, its service may be migrated many times to reduce the delay and ensure the quality of experience. Thus, we can compute the sum of the costs of all service migrations and obtain the migration cost by Eq.
(5):

$\displaystyle\min:$ $\displaystyle\quad C_{o}=\sum_{r=1}^{R}{\sum_{k=1}^{|SM^{r}|}}C^{r}_{SM^{r}_{k,j},SM^{r}_{k,m}},$ (5)
$\displaystyle\textrm{s.t.}:$ $\displaystyle\quad\sum_{i}c_{i}\leq c_{j},\forall E_{u_{i},j},$ (6)

where $SM^{r}$ is the set of migrations performed for service $S_{r}$, and $SM^{r}_{k,j}$ and $SM^{r}_{k,m}$ denote the source edge server $E_{j}$ and the destination edge server $E_{m}$ of the $k^{th}$ migration of $S_{r}$, respectively. Our objective is to minimize the migration cost $C_{o}$ in Eq. (5), while satisfying the constraint in Eq. (6) that the requested resources of the services should be no more than the maximum capacity of $E_{j}$.

### 3.3 Case Study

Figure 2: Case Study

As shown in Figure 2, we consider a case study consisting of a set of base stations, each co-located with an edge server deployed with multiple edge services, and a group of mobile users moving around among different areas covered by different base stations. To handle the service scheduling process and ensure service continuity, the entities in this case study include: (1) a MEC platform containing edge servers, (2) edge services that execute users’ requests, (3) a user location collector, (4) a service migration controller, and (5) a virtualization infrastructure. Each mobile user is associated with a specific edge service, which handles the user’s requests and can be migrated to another edge server by tracking the mobile user’s movements. For example, as a mobile user approaches the edge of the area covered by a base station, the user location collector running on the edge servers informs the nearby edge servers that the mobile user is about to perform a handover to a new area covered by another base station. The information is then used by the service allocation controller to decide whether to migrate the edge service to another edge server and, in that case, to which edge server the service should be migrated. The service allocation controller has an overview of the entire MEC system and serves as the orchestrator. Finally, the virtualization infrastructure provides computation, storage, and network resources to provision resources for edge services. It can also manage the migration process by collecting information about the remaining resources of edge servers.

In this use case, as shown in Figure 2, we consider the following situation: first, user $u_{1}$ connects to the base station $BS_{1}$, but the service required by $u_{1}$ is deployed on $E_{1}$, built beside $BS_{2}$. Thus, to access the service deployed on $E_{1}$, $u_{1}$ should transmit its message to $BS_{1}$ first, and then $BS_{1}$ forwards the message to $BS_{2}$, which sends the packet to $E_{1}$. On the other hand, user $u_{2}$ connects to the base station $BS_{3}$, to which the edge server $E_{2}$ hosting the service of $u_{2}$ is attached, so the message transmission between $u_{2}$ and $E_{2}$ does not involve other base stations. Our objective is to optimize the allocation of edge services on edge servers to avoid high delay for users. Although the use case can be applied to both live and static service migrations, we focus on live and stateful service migration processes. Service migration between edge servers is performed through backhaul links with sufficient data rate, thus the performance of service migration is not undermined by network traffic. This assumption can be relaxed, but in this paper we restrict the analysis to this case, which also conforms with the capabilities of the 5G scenario.
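To make the delay model of Section 3.2 concrete, the following is a minimal Python sketch of the delay terms in Eqs. (1)–(4). All function names and numeric values (bandwidth, channel gain, task size, allocated workload, downtime) are illustrative assumptions, not the configuration used in Section 5.

```python
import math

def transmission_rate(W, S_p, g, N_p):
    """Maximum wireless transmission rate tr of Eq. (1) (Shannon capacity)."""
    return W * math.log2(1 + S_p / (g * N_p))

def communication_delay(c_i, tr, D_cm):
    """T_cm of Eq. (2): uplink transmission time plus forwarding delay D(BS_c, BS_m)."""
    return c_i / tr + D_cm

def computation_delay(c_i, w_j):
    """T_cp of Eq. (3): execution time of the task on the hosting edge server."""
    return c_i / w_j

def migration_delay(bs_c, bs_m, M_c):
    """T_m: zero when no migration is triggered (BS_c == BS_m), otherwise the downtime M_c."""
    return 0.0 if bs_c == bs_m else M_c

# Illustrative values only (not the Section 5 configuration).
tr = transmission_rate(W=20e6, S_p=0.5, g=1e9, N_p=2e-13)           # bit/s
t_cm = communication_delay(c_i=1e6, tr=tr, D_cm=0.02)               # seconds
t_cp = computation_delay(c_i=1e6, w_j=2e8)
t_m = migration_delay(bs_c="BS_1", bs_m="BS_2", M_c=0.05)           # a migration is triggered
overall = t_cm + t_cp + t_m    # one term of the objective in Eq. (4)
print(f"tr = {tr:.3e} bit/s, overall delay = {overall * 1e3:.1f} ms")
```

The overall-delay objective of Eq. (4) averages such terms over all observed time intervals, users and edge servers, while the migration cost of Eqs. (5)–(6) sums $F(S_{r},E_{j},E_{m})$ over the migrations actually performed.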
To realize our proposed approach, at each time interval, mobile user locations should be gathered to calculate the delay between the user and the edge servers. The utilization of the edge servers determines the probability of performing a service allocation between edge servers. This probability is then used in the service scheduling algorithm to compute the possibility of edge servers accepting migrated services. Note that the mobile user location collector and the scheduling algorithm in the service allocation controller are executed independently. Therefore, when the service allocation controller decides which edge service should be migrated and where it should be migrated to, the mobile user location collector provides the most recent location data for each mobile user.

### 3.4 Proof of NP-hardness

In this subsection, we prove that the problem we aim to solve is NP-hard. The proof is as follows:

Proof: We consider the decision version of the set cover problem 17, which is NP-complete. We have a set $\mathcal{Z}$, a set $\mathcal{A}$ of subsets of $\mathcal{Z}$, and a number $k\in\mathbb{N}$. The objective is to decide whether there exists a set $\mathcal{B}\subseteq\mathcal{A}$ satisfying $|\mathcal{B}|\leq k$ and $\bigcup\mathcal{B}=\mathcal{Z}$. We define the Service Migration with Set Constraints (SMSC) problem as follows: We assume that we have $|\mathcal{A}|-k+1$ types of services, and each type of service can have multiple instances. Each node will host at most one instance of the same service type. One of the service types is special and denoted as $i^{\prime}$. Let $S^{\prime}\triangleq S\backslash\\{i^{\prime}\\}$. We also define $V\triangleq\mathcal{A}$. Every node is a subset of $\mathcal{Z}$. For every normal service type $i$, we have a user $u_{i}$ that needs to connect with this type of service. For this user, we define $S_{u_{i}}\triangleq i$ and $E_{u_{i}}\triangleq V$. For every $z\in\mathcal{Z}$, there is a user $u_{z}^{\prime}$. For this user, let $S_{u_{z}^{\prime}}\triangleq i^{\prime}$ and $E_{u_{z}^{\prime}}\triangleq\\{Y\in\mathcal{A}:z\in Y\\}$.

We consider that the solution to the set cover problem is $\mathcal{X}$, and the service migration solution is defined as $M$. For each service $i\in S^{\prime}$, we choose a node $j_{i}\in\mathcal{A}\backslash\mathcal{X}$ such that $j_{i}\neq j_{i^{*}}$ for every $i^{*}\in S^{\prime}$ with $i^{*}\neq i$. This can be assured as $|\mathcal{A}\backslash\mathcal{X}|\geq|\mathcal{A}|-k=|S^{\prime}|$. For each service $i\in S^{\prime}$, we define $M_{i}\triangleq\\{j_{i}\\}$, and $M_{i^{\prime}}=\mathcal{X}$. This migration operation is feasible, as a single node is only deployed with one instance of the same type of service. Every user is then satisfied in terms of QoS: for a service $i\in S^{\prime}$, the user $u_{i}$ is satisfied because $M_{S_{u_{i}}}\bigcap E_{u_{i}}=M_{i}\bigcap V=M_{i}\neq\emptyset$. For every $z\in\mathcal{Z}$, the user $u_{z}^{\prime}$ is satisfied because there is a set $Y^{\prime}\in\mathcal{X}$ with $z\in Y^{\prime}$, and therefore $M_{S_{u_{z}^{\prime}}}\bigcap E_{u_{z}^{\prime}}=M_{i^{\prime}}\bigcap\\{Y\in\mathcal{A}:z\in Y\\}=\mathcal{X}\bigcap\\{Y\in\mathcal{A}:z\in Y\\}\supseteq\\{Y^{\prime}\\}$. The above shows that if the set cover instance has a solution, then a solution exists that satisfies the QoS of all users in the SMSC problem.
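For reference, the construction of the SMSC instance from the set cover instance $(\mathcal{Z},\mathcal{A},k)$ used above can be summarized compactly as follows; this block only restates the definitions already introduced, and the converse direction of the proof is given next.

```latex
% Reduction from set cover (Z, A, k) to SMSC, as constructed in the proof above.
\begin{align*}
  V &\triangleq \mathcal{A}, \qquad S \triangleq S^{\prime} \cup \{i^{\prime}\}, \qquad |S^{\prime}| = |\mathcal{A}| - k,\\
  S_{u_{i}} &\triangleq i, \quad E_{u_{i}} \triangleq V
      && \text{for every normal service type } i \in S^{\prime},\\
  S_{u_{z}^{\prime}} &\triangleq i^{\prime}, \quad E_{u_{z}^{\prime}} \triangleq \{Y \in \mathcal{A} : z \in Y\}
      && \text{for every element } z \in \mathcal{Z}.
\end{align*}
```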
If all the users are satisfied as defined in the above SMSC problem, then for every $i\in S^{\prime}$, the user $u_{i}$ is satisfied with the QoS requirement. Therefore, service $S_{u_{i}}=i$ must be placed on some node. Let $\mathcal{K}$ be the set of all edge nodes $j$ on which services in $S^{\prime}$ are deployed. Since only one service of the same type will be placed on the same node, we have $|\mathcal{K}|\geq|S^{\prime}|$, which is bounded below by $|\mathcal{A}|-k=|V|-k$. Define $\mathcal{X}=V\backslash\mathcal{K}$, which has cardinality at most $k$. For any $z\in\mathcal{Z}$, if user $u_{z}^{\prime}$ is satisfied, there must be at least one node in $E_{u_{z}^{\prime}}=\\{Y\in\mathcal{A}:z\in Y\\}$ on which service $S_{u_{z}^{\prime}}=i^{\prime}$ is placed. We denote this node as $Y^{\prime}$, and $Y^{\prime}\notin\mathcal{K}$: the reason is that $i^{\prime}$ is already deployed there, and since a single node only hosts a single instance of the same type of service, no service in $S^{\prime}$ is deployed on it. Then we have $Y^{\prime}\in\mathcal{X}$ and $z\in\bigcup\mathcal{X}$. Since this holds for every $z\in\mathcal{Z}$, we have $\bigcup\mathcal{X}=\mathcal{Z}$, which means $\mathcal{X}$ is a solution to the set cover problem. The above proves that SMSC is NP-hard.

## 4 Probabilistic Delay-aware and Mobility-aware Approach for Edge Service Management

In this section, we introduce the Probabilistic-based Delay-aware and Mobility-aware Approach (PDMA) for edge service management. We focus on two service scheduling procedures, Service Assignment and Service Migration, and present our probabilistic algorithms to perform service scheduling while reducing service latency and migration costs. The proposed approach is inspired by the probabilistic method proposed in [26] for VM consolidation in the cloud computing environment, and revisions have been made to adapt it to the MEC scenario.

### 4.1 Service Assignment

In the service assignment procedure, when a mobile user sends a service request, the service assignment algorithm assigns a suitable edge server to host the service. In most service assignment algorithms, cloud coordinators need to perform computation-intensive calculations to determine an allocation decision, which requires massive computing resources and long processing times. Additionally, in MEC, the edge-to-cloud communication delay is much higher than the edge-to-edge delay. Therefore, performing the scheduling algorithms at the cloud coordinator would greatly increase the service delay. To address this, our approach leaves the decisions to each edge server. The mobile user sends the request for a service assignment to the edge servers, and instead of forwarding this request to the cloud, each edge server decides whether to host new services based on its current resource utilization and network status. For example, we consider the CPU utilization of edge servers as the metric when making a service assignment decision. If the CPU utilization is close to or even exceeds the utilization threshold, the edge server is very likely to be overloaded after deploying a new service, which will decrease the processing capability of the edge server and thus lead to higher service latency. To avoid performance degradation, such an edge server should not accept the service assignment request. On the other hand, idle servers can be shut down to reduce energy consumption by migrating their services to other servers.
Other edge servers with moderate CPU utilization will have a higher success rate when making assignment decisions. However, different from VM scheduling in the cloud data center, service scheduling in MEC involves the data transmission of mobile services over long distances via wireless channels. Therefore, edge servers should also take the transmission cost into consideration when making service assignment decisions. In our approach, the edge server performs a Bernoulli trial to make the service assignment decision. A successful trial indicates that the server can host a new service. We utilize an assignment function to determine the probability of a successful trial. The assignment function is defined as follows:

$\displaystyle f(x,p,T)$ $\displaystyle=\frac{1}{M_{p}}x^{p}(T-x),\ 0\leq x\leq 1,$ (7)
$\displaystyle M_{p}$ $\displaystyle=\frac{p^{p}}{(p+1)^{p+1}}T^{p+1},$ (8)

where $x$ is the utilization of a certain resource of the edge server, $T$ is the upper threshold of the utilization of this type of resource, e.g., CPU, and $p$ is a shape parameter that adjusts the probability distribution. If $x>T$, the value of the assignment function is 0, which means rejecting the service assignment request. $M_{p}$ is a normalization factor used to scale the maximum value of the assignment function $f$ to 1.

Figure 3: Assignment Function, $T$ = 0.9

Figure 3 shows the distribution of the assignment function with varied $p$ values. Assuming that the resource utilization threshold $T$ is 0.9, the resource utilization $x$ varies within the range [0, $T$]. As shown in the figure, under different shape parameters, the probability value of the assignment function first increases with the growth of the resource utilization $x$, and reaches its maximum value when $x=\frac{pT}{p+1}$. This trend conforms to the basic idea of our service assignment. For different shape parameters, the value of this function is also very low when the resource utilization is close to the threshold, which avoids allocating new services to servers that are prone to be overloaded. We can also observe that when the shape parameter is larger, the highest acceptance probability is closer to the threshold. Therefore, we can alter the shape parameter based on the average resource utilization to adjust the probability of service assignment. If the trial is successful, it means the edge server agrees to accept the deployment of the new service and responds with an acceptance message. The coordinator is responsible for collecting all the messages and selecting the most suitable edge server to host the service. Specifically, the most suitable one can be the server with the least migration cost, so as to minimize the impact of service migration on the user’s service experience. If all the edge servers reject the assignment request, the coordinator should scale up the edge data center (e.g., deploy a new edge server) and allocate the new edge services there. The pseudocode of the service assignment algorithm is described in Algorithm 1. The algorithm focuses on assigning the edge service associated with a mobile user to a specific edge server, and it can be utilized both for the initial service assignment and for the assignment performed during the service migration process. First, the mobile user connects to the nearest base station and sends the service request to the edge server.
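Before continuing with the walkthrough of Algorithm 1, the following is a minimal Python sketch of the acceptance probability of Eqs. (7)–(8) and the Bernoulli trial performed by an edge server. The function names and the choice of $p=2$, $T=0.9$ are illustrative assumptions.

```python
import random

def assignment_probability(x, p, T):
    """Acceptance probability f(x, p, T) from Eqs. (7)-(8).

    x: current utilization of the considered resource (e.g. CPU),
    T: upper utilization threshold (requests are rejected once x > T),
    p: shape parameter moving the peak of f towards T as it grows.
    """
    if x > T:
        return 0.0
    M_p = (p ** p) / ((p + 1) ** (p + 1)) * T ** (p + 1)   # normalizes max(f) to 1
    return (x ** p) * (T - x) / M_p

def accepts_assignment(utilization, p=2.0, T=0.9):
    """One Bernoulli trial: accept the assignment request with probability f."""
    return random.random() < assignment_probability(utilization, p, T)

# The maximum of f sits at x = p*T/(p+1); with p = 2 and T = 0.9 that is x = 0.6.
for x in (0.2, 0.6, 0.85, 0.95):
    print(f"x = {x:.2f} -> f = {assignment_probability(x, 2.0, 0.9):.3f}")
```

With these settings the acceptance probability peaks at moderate utilization and drops to zero once the threshold is exceeded, matching the behaviour of the curves in Figure 3. The walkthrough of Algorithm 1 continues below.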
If $initial$ is true, it means that the approach is performing the initial placement of a mobile service, so the algorithm sets the location of the service the same as the mobile user (lines 1-2). The algorithm attempts to collect assignment decisions from all edge servers that satisfy the delay threshold at the current time slot. $ES\\_Candidate$ includes all edge servers that accept the service assignment request (lines 4-9). If the candidate list is not empty, we select an edge server that is nearest to the current location of service to assign the service. Otherwise, if no available edge server is in the candidate list, the algorithm should perform $scaleUp$ to switch on an idle edge server to host the service (lines 10-14). Input: Mobile User $U_{k}$, Edge Service $S_{k}$, Edge Server List $ES\\_List$, Initial Assignment $initial$, Delay Threshold $T_{d}$ 1 2if _$initial==True$_ then 3 $S_{k}.location\leftarrow U_{k}.location$ 4 5$ES\\_Candidate=[]$ 6for _$ES_{i}\ in\ ES\\_List$_ do 7 $delay\leftarrow getDelay(U_{k},S_{k},ES_{i})$ according to Eq. (2) 8 if _$delay <T_{d}$_ then 9 $decision\leftarrow getAssignmentDecision(ES_{i})$ based on Eq. (7) 10 if _$decision==Accept$_ then 11 $ES\\_Candidate=ES\\_Candidate\cup ES_{i}$ 12 13 14 15if _$ES\\_Candidate\ is\ empty$_ then 16 $newES\leftarrow\ scaleUp()$ 17else 18 $newES\leftarrow\ getNearest(S_{k},ES\\_Candidate)$ $Assign(S_{k},newES)$ Algorithm 1 Service Assignment Algorithm Complexity analysis of Algorithm 1: the decision of whether the service is an initial assignment or not (lines 1-2) takes a time of $O(1)$; the candidate edge servers collection process (lines 4-9) takes a time of $O(J)$, where $J$ is the number of edge servers; the edge server selection from the candidate list (lines 10-14) takes a time of $O(Jlog(J))$, which is based on sorting algorithm; and the final assignment operation takes $O(1)$ time. Therefore, we can conclude that the time complexity of Algorithm 1 is $O(Jlog(J))$. ### 4.2 Service Migration Since the mobile user moves in real-time, it may move away from the edge server, leading to higher communication latency and affecting the service experience. At the same time, the running status of mobile services can also change dynamically. For example, the load of edge servers may exceed the upper threshold and make the edge servers become over-utilized, resulting in performance degradation. Over-utilized edge servers may not be able to process tasks efficiently, which leads to increase in the service delay. Therefore, it is required to detect the overloaded situation and perform dynamic service migration to optimize the deployment of services and reduce the delay. In the following part of this section, we will introduce our service migration algorithm. We divide the service migration into two situations for consideration, and the pseudocodes are shown in Algorithm 2. (1) Delay violates the threshold (lines 3-9). The service delay is monitored during the runtime of the service. When the delay exceeds the predefined threshold, the service needs to be migrated to ensure the QoS. The current edge server needs to find the other edge servers that can meet the communication delay requirements, and add them to a candidate list. Afterwards, another round of service allocation should be performed based on the candidate list. (2) Edge server becomes over-utilized (lines 14-22). Under this scenario, although the service delay can satisfy the delay requirement, the edge server becomes over-utilized. 
Therefore, service migration is also required to optimize the resource utilization of edge servers and avoid performance degradation. To achieve this, a high migration function $f^{h}_{m}$ is required for the migration decision. $f^{h}_{m}=\Big{(}1+\frac{x-1}{1-T_{h}}\Big{)}^{\beta},$ (9) where $x$ is the resource utilization of edge server, $T_{h}$ is the upper threshold in resource utilization, and $\beta$ is the shape parameter. Figure 4: Migration function, $\beta=0.25$ Figure 4 shows the distribution of the migration function with varied $T_{h}$ value. Similar to the service assignment procedure, the edge servers use the high migration function to decide whether to perform service migration. If the Bernoulli trial is successful, it means the edge server agrees to migrate the services currently running on the server to a new edge server. After the service migration decision is made, it is also necessary to select the services to be migrated, and then the algorithm performs a new round of service assignment for the selected services. For the over-utilized edge servers, the service migration algorithm sorts all services in descending order based on resource utilization. Then we sequentially deallocate services from the server until the resource utilization of the edge server is lower than the upper threshold. For the services in the $ToMigrate$ list, the algorithm performs the service assignment procedure to assign them to new edge servers (lines 23-24). Complexity analysis of Algorithm 2: the service migration triggered by delay violation (lines 3-9) takes a time of $O(R_{m}\cdot J)$, where $R_{m}$ is the maximum number of edge services allocated on edge servers, and $J$ is the number of edge servers; allocating the migrated services (lines 10-12, lines 23-24) takes a time of $O(R\cdot J\log(J))$, where $R$ is the maximum number of services in the whole system; the service migration triggered by over- utilized edge servers (lines 14-22) takes a time of $O(J\cdot R_{m}^{2}\log(R_{m}))$. Therefore, the time complexity of Algorithm 2 is $O(R_{m}\cdot J+2R\cdot J\log(J)+J\cdot R_{m}^{2}\log(R_{m}))$. To be noted, our approach supports the smooth connection of services by applying service replication, which is quite similar to the VM migration process in cloud computing. When an edge service is going to be migrated, a copy of edge service will be first replicated to another edge while the original edge service is still running and connecting with mobile user. When the replicated edge service is ready, the connection will be switched from the original one to the migrated one. After the mobile user connects to the migrated edge service, the original edge service can be destroyed if no user connects to it. Input: Mobile User $U$, Edge Service $S$, Edge Server List $ES\\_List$, Delay Threshold $T_{d}$ 1 2$ToMigrate=[]$ 3 4// (1) Delay violates the threshold. 5 6for _$ES_{i}\ in\ ES\\_List$_ do 7 for _$S_{j}\ in\ ES_{i}.Service\\_List$_ do 8 $U\leftarrow S_{j}.user$ 9 $delay\leftarrow getDelay(U,S_{j},ES_{i})$ according to Eq. (2) 10 if _$delay >=T_{d}$_ then 11 $ToMigrate\leftarrow ToMigrate\cup S_{j}$ 12 Deallocate $S_{j}$ from $ES_{i}$ 13 14 15 16for _$S_{i}\ in\ ToMigrate$_ do 17 $ServiceAssignment(S_{i})$ by using Algorithm 1 18$ToMigrate.clear()$ 19 20// (2) Migrate services from over-utilized edge servers. 21 22for _$ES_{i}\ in\ ES\\_List$_ do 23 if _$overUtilized(ES_{i})==true$_ then 24 $result\leftarrow getMigrationDecision(ES_{i})$ based on Eq. 
(9) 25 if _$result==Accept$_ then 26 27 Sort $ES_{i}.Service\\_List$ by CPU Utilization 28 while _$overUtilized(ES_{i})==true$ _ do 29 $S\leftarrow getService(ES_{i})$ 30 $ToMigrate\leftarrow ToMigrate\cup S$ 31 Deallocate $S$ from $ES_{i}$ 32 33 34 35for _$S_{i}\ in\ ToMigrate$_ do 36 $ServiceAssignment(S_{i})$ by using Algorithm 1 Algorithm 2 Service Migration Algorithm ### 4.3 PDMA Competitive Analysis We apply competitive analysis to analyze our proposed approach based on probabilistic management for services on edge servers. We assume that there are $J$ heterogeneous edge servers, and $R$ heterogeneous services. The communication time between the user and edge server from the original connection and new connection (after migration) is denoted as $t_{c}$ and $t_{c}^{\prime}$. The corresponding connection cost per unit time are denoted as $C_{e}$ and $C_{e}^{\prime}$. The processing time of the original edge server is $t_{p}$, and the processing time of the migrated edge server is $t_{p}^{\prime}$. The processing cost per unit time for the original edge server and migrated edge server are denoted as $C_{p}$ and $C_{p}^{\prime}$. Let $t_{m}$ be the migration time and $C_{m}$ be the migration cost per unit time. Without loss of generality, we can define $t_{c}C_{e}=1$, $t_{p}C_{p}=\varepsilon$ and $t_{m}C_{m}=\delta$. Let $\tau$ be the times of migration that happens during the observation time. ###### Theorem 4.1. The upper bound of the competitive ratio of PDMA algorithm for the edge service migration is $\frac{PDMA(U)}{OPT(U)}\leq 1+\frac{(2+\varepsilon+\delta)JR}{(1+\varepsilon+\delta)(J+R)}$. ###### Proof 4.2. Under the normal status, the number of services deployed on edge servers is ${R}/{J}$, while in QoS violated or overloaded situation, at least ${R}/{J}+1$ services are deployed to a single edge server. Thus, the maximum number of QoS violated nodes is $J_{o}=\lfloor\frac{R}{{R}/{J}+1}\rfloor$, which is equivalent to $J_{o}=\lfloor{R/J+R}\rfloor$. For a set of users $U$, the optimal offline algorithm for problem only keeps the services on edge servers and migrates minimum services, thus the total cost of an optimal offline algorithm is defined as: $OPT(U)=\tau(t_{c}C_{e}J+t_{p}C_{p}J+t_{m}C_{m}J).$ (10) For our proposed approach, the total cost with migration can be defined as below: $PDMA(U)=\tau\\{t_{c}C_{e}(J+J_{o})+t_{c}^{\prime}C_{e}^{\prime}J_{o}+t_{p}C_{p}J+t_{p}^{\prime}C_{p}^{\prime}J_{o}+t_{m}C_{m}(J+J_{o})\\}.$ (11) According to our proposed approach, the communication cost that user connect with the migrated node should be no more than the orignal node, thus $t_{c}^{\prime}C_{e}^{\prime}\leq t_{c}C_{e}\ $. And the processing cost of migrated node is no more than the orignal node, thus $t_{p}^{\prime}C_{p}^{\prime}\leq t_{p}C_{p}$. 
Then we have: $PDMA(U)\leq\tau\\{t_{c}Ce(J+2J_{o})+t_{p}C_{p}(J+J_{o})+t_{m}C_{m}(J+J_{o})\\}.$ (12) Therefore, the competitive ratio of an optimal deterministic algorithm as: $\begin{split}\frac{PDMA(U)}{OPT(U)}&\leq\frac{\tau\\{t_{c}Ce(J+2J_{o})+t_{p}C_{p}(J+J_{o})+t_{m}C_{m}(J+J_{o})\\}}{\tau(t_{c}C_{e}J+t_{p}C_{p}J+t_{m}C_{m}J)}\\\ &=\frac{t_{c}C_{e}J+t_{p}C_{p}J+t_{m}C_{m}J+(2t_{c}C_{e}+t_{p}C_{p}+t_{m}C_{m})J_{o}}{t_{c}C_{e}J+t_{p}C_{p}J+t_{m}C_{m}J}\\\ &=1+\frac{(t_{c}C_{e}+t_{p}C_{p}+t_{m}C_{m})J_{o}+t_{c}C_{e}J_{o}}{(t_{c}C_{e}+t_{p}C_{p}+t_{m}C_{m})J}\\\ &=1+\frac{J_{o}}{J}+\frac{t_{c}C_{e}J_{o}}{(t_{c}C_{e}+t_{p}C_{p}+t_{m}C_{m})J}.\end{split}$ (13) As $J_{o}=\lfloor\frac{JR}{J+R}\rfloor$, we have $J_{o}\leq\frac{JR}{J+R}$ The competitive ratio is defined as: $\begin{split}\frac{PDMA(U)}{OPT(U)}&\leq 1+\frac{J_{o}}{J}+\frac{J_{o}}{(1+\varepsilon+\delta)J}\\\ &=1+\frac{(2+\varepsilon+\delta)J_{o}}{(1+\varepsilon+\delta)J}\\\ &\leq 1+\frac{(2+\varepsilon+\delta)JR}{(1+\varepsilon+\delta)(J+R)}.\end{split}$ (14) Figure 5: Base station dataset Figure 6: Taxi traces ## 5 Performance Evaluations To evaluate algorithm performance, we simulate the service migration scenario in MEC based on iFogSim 9 and conduct experiments with several baselines. To carry out the experiments, three datasets are utilized for our experiments, including (1) the location of the base stations, (2) the mobility traces of users, and (3) the workload data of edge services. We will first introduce these three datasets in this section, and then explain the configurations and procedures of our experiments. At the end of the section, we will present the evaluation results of our algorithms. ### 5.1 Datasets Description We use three real-world datasets mentioned above to carry out our experiments. First, we get the base station dataset from antenna distribution dataset 29, which consists of the location information of 422 base stations in San Francisco. Figure 5 shows the distribution of the base station dataset. It can be observed that the density of edge servers varies in different areas, e.g., more base stations are deployed in the central business district. To simulate a real-world MEC scenario, we obtained realistic mobility traces of 536 taxis in San Francisco 10. The dataset records the locations of each taxi (represented by the latitude and longitude) every 60 seconds on May 31, 2008. Each taxi in this dataset acts as a mobile user in our simulation and runs one mobile service that needs communications with edge servers while traveling around the city. The distance between base stations and mobile users can be calculated by Euclidean distance. This combines the first two datasets and also helps us to simulate an Internet of Vehicles (IoV) 30 scenario. Figure 6 depicts the trace of one taxi in the whole day. We can notice that the location of the taxi can be changed significantly during the day, which also demonstrates the need for service migration to support the user in a delay-aware and mobility-aware manner. We also simulate the workload of each service running on the edge servers based on the dataset derived from PlanetLab workload 31 that includes CPU utilization data of thousands of VMs allocated to servers. We utilize the CPU utilization of VMs to represent the CPU utilization of edge services. The utilization of edge servers will be influenced by the utilization of edge services deployed on them. 
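As a small illustration of how the first two datasets are combined, the sketch below assigns a sampled taxi position to its $current\ base\ station$ using the Euclidean distance of Section 3.2.1. The coordinate values are hypothetical placeholders; the actual experiments use the 422 San Francisco base stations and taxi positions recorded every 60 seconds.

```python
import math

def nearest_base_station(user_pos, base_stations):
    """Index of the current base station: the one minimizing the Euclidean
    distance D_{i,l} = ||p_i(t) - p_l|| of Section 3.2.1."""
    lat, lng = user_pos
    return min(range(len(base_stations)),
               key=lambda l: math.hypot(lat - base_stations[l][0],
                                        lng - base_stations[l][1]))

# Hypothetical (lat, lng) coordinates for illustration only.
base_stations = [(37.7749, -122.4194), (37.7833, -122.4090), (37.7680, -122.4300)]
taxi_trace = [(37.7755, -122.4180), (37.7800, -122.4120), (37.7690, -122.4280)]

for t, pos in enumerate(taxi_trace):
    print(f"t = {t}: current base station = BS_{nearest_base_station(pos, base_stations)}")
```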
### 5.2 Rush Hour Simulation Nowadays, the population in the large city is quite concentrated, especially during the rush hour, e.g., 8:00 am to 9:00 am on the weekday’s morning. When a large volume of mobile users rush into certain roads and areas in the city, resulting in serious traffic congestion. During the rush hour, the edge servers in the crowded areas are more prone to be overloaded compared with non-rush hours. If the edge services are not properly scheduled, the delay of the services will be greatly increased, which will affect the quality of experience of users. Therefore, attention should be paid to the edge service management in rush hour. To simulate the service scheduling in rush hours, based on the original workloads, we select a period of time as rush hour and a crowded area to simulate the scenario of the city during the rush hour, and evaluate the effectiveness of our scheduling algorithm. We first utilize the K-means clustering algorithm 32 on the mobility traces of taxis to select a location with the highest density of taxis. Then, we use this location as the center to frame a square ($4km\times 4km$) as the congested area in the rush hour. We then extract the data of 147 base stations in this area from the whole dataset. The red square in Figure 5 shows the selected area, which represents a much more dense distribution of mobile users than other areas. Afterwards, to choose the rush hour, we count the number of taxis in this area in different time periods and pick three hours of May 31, 2008 with the maximum number of taxis. After that, we extract the mobility traces of taxis moving in this congested area during the rush hour. For this part of the experiment, we generate new workload traces for edge services derived from the original PlanetLab dataset. As the original resource utilization is low in PlanetLab dataset, we consider multiple edge services are connected so that they should be deployed together, in order to increase the resource utilization of edge servers and thus simulating the heavy workloads during the rush hour. This assumption conforms to the motivation of microservice architecture 33 that can be applied to the MEC environment. Based on the above steps, we can perform simulations for the rush hour scenario, and the results will be demonstrated in the following sections. ### 5.3 Experiment Configurations We conducted all our experiments on the same computer with iFogSim. The experimental configurations are as below: For Eq. (1), we set the channel bandwidth $W$ to be 20 Mhz and transmitted power of taxi $S_{p}$ to be 0.5W, and the noise power $N_{p}$ to be $2\times 10^{-13}$ W. Besides, the wireless channel gain $g$ is set as $127+30\log{d}$. We generate the delay matrix at every scheduling interval, and the delay of each link $m_{i,j}\in M$ is randomly generated between 5ms and 50ms. For the configurations of edge servers, each server has 8 CPU cores with Millions of Instructions Per Second (MIPS) of 2000, 3000, and 4000, 80GB of RAM, and 10TB of storage. Each edge service is randomly configured to request 1000, 1500, 2000, 2500 of MIPS, respectively, and 8 GB of RAM. The instructions of the task executed by each edge service are configured as 60 million. For the algorithms, the scheduling interval is configured to 60 seconds. The delay threshold of PDMA is configured as 75ms. 
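Relating to the rush-hour setup of Section 5.2, the following is a minimal sketch of the congested-area selection: K-means clustering over the taxi positions, followed by framing a 4 km × 4 km square around the centre of the densest cluster. The number of clusters, the degree-to-kilometre conversion and the synthetic positions are assumptions made only for illustration; the paper does not state these details.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_congested_area(taxi_positions, n_clusters=10, half_side_km=2.0):
    """Return the centre of the densest taxi cluster and a square frame around it,
    mirroring the 4 km x 4 km area selection of Section 5.2."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(taxi_positions)
    labels, counts = np.unique(km.labels_, return_counts=True)
    center = km.cluster_centers_[labels[np.argmax(counts)]]
    # Rough conversion: ~111 km per degree of latitude; longitude scaled by cos(lat).
    dlat = half_side_km / 111.0
    dlng = half_side_km / (111.0 * np.cos(np.radians(center[0])))
    return center, (center[0] - dlat, center[0] + dlat, center[1] - dlng, center[1] + dlng)

# Synthetic taxi positions around San Francisco, for illustration only.
rng = np.random.default_rng(0)
positions = rng.normal(loc=[37.78, -122.41], scale=0.02, size=(500, 2))
center, (lat_min, lat_max, lng_min, lng_max) = select_congested_area(positions)
print("centre:", center, " latitude range:", (lat_min, lat_max))
```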
To evaluate the performance of our service migration algorithm, we focus on two metrics to evaluate the performance, including the Migration Cost and the Overall Delay based on the optimization objectives. In addition, we also record the Number of Overloaded Servers to evaluate the overloaded situation, especially for the rush hours. The descriptions of the metrics are as below: * • Overall delay represents the average delay of all services during the experiments as represented in Eq. (4). * • Migration cost is the sum of the cost of all service migrations as represented in Eq. (6). In our simulation, the function $F$ calculates the distance between the source server and destination server. * • Average number of overloaded edge servers is applied to evaluate the effect of algorithms to relieve the overloaded situation. The overloaded hosts are identified based on the predefined utilization threshold. In our experiments, we set the utilization threshold as 0.9, as this value has been used widely to identify overloaded hosts in data centers. We also compare our approach with three scheduling algorithms. * • Nearest edge server first (NF): it assigns the edge service to the edge server closest to the mobile user based on distance, which can reduce communication costs. * • Never migrate (NM): it never migrates the edge services, thus the migration costs can be reduced. * • Top-K: it sorts all the edge servers by their CPU utilization and selects one randomly from the top K busiest servers to host a new coming or migrated edge service randomly. Here, $K$ is configured as $0.1\times J$, where $J$ is the size of edge servers. * • CHERA34: it is a clustering-based heuristic algorithm for edge resource allocation. It adopts a clustering procedure to allocate applications to suitable edge servers and minimizes the average service response time. ### 5.4 Experiments and Results We divide the experiments into two parts, including the experiments on PlanetLab traces and the experiments on Rush Hour traces. In each part, we investigate two parameters on algorithms performance. The first one is $distance\ threshold$ to represent the coverage of base stations. If the distance between a mobile user and a base station exceeds the distance threshold, the user will not be able to access the base station and thus the service connection should be switched to another base station, which will affect the service delay. The second one is ratio of clients and servers number to demonstrate the scalability of scheduling algorithms. Since the PlanetLab traces specify the number of clients, we reduce the number of edge servers to modify the ratio. And, for the Rush Hour simulations, we increase the volume of mobile users in the crowded area to simulate the scenario. #### 5.4.1 Results with PlanetLab Traces We first present the experiment results based on PlanetLab traces by varying the coverage of the base station and number of edge servers. (a) Varied coverage of base stations We configure the number of mobile users to be 1000 and vary the distance threshold from 200m to 2000m. As shown in Figure 7, the distance threshold has a slight impact on the three metrics. Figure 7(a) shows the results of overall delay. CHERA achieves close delay with PDMA and performs lower delay than NF and others. The overall delay of PDMA is lower than 62ms when the coverage of the base station is longer than 200m. 
The migration cost of PDMA is much lower than that of CHERA, indicating that PDMA can decrease the communication overhead during the service migration. Since the CPU utilization of PlanetLab and the ratio of mobile clients to edge servers is low, very few edge servers become overloaded during the simulation. (b) Varied number of edge servers The average volume of mobile users in PlanetLab traces is 1000. To modify the ratio of clients and servers number, we configure the number of edge servers to be from 100 to 400. As shown in Figure 8, we can notice that PDMA is able to maintain good performance when there are fewer servers. When there are 100 edge servers, the overall delay of PDMA is close to that of CHERA and is 13.5ms lower than that of NM. The migration cost of PDMA is only 4.3%, 15.8% and 30.3% compared with Top-K, CHERA and NF, respectively. For the number of overloaded edge servers, less than 5 servers are over-utilized when using PDMA, while CHERA can lead to more than 12. ((a)) Overall delay ((b)) Migration cost ((c)) Number of Overloaded Servers Figure 7: PlanetLab: Performance comparison of algorithms with varied distance thresholds ((a)) Overall delay ((b)) Migration cost ((c)) Number of Overloaded Servers Figure 8: PlanetLab: Performance comparison of algorithms with varied number of edge servers #### 5.4.2 Results with Rush Hour Traces We present the experiment results of rush hour traces in this part. First, we only use the data of base stations in the selected area and deploy one edge server on each base station. Then, we configure the two parameters with varied values and run several rounds of simulations. (a) Varied coverage of base stations We configure the number of mobile users to be 1000 and vary the distance threshold from 200m to 2000m. As shown in Figure 9(a), except for NM, other algorithms can perform better on overall delay when increasing the distance threshold. It results from the decrease in the number of service migrations, which can also be observed in Figure 9(b). The migration cost of PDMA is the closest to that of NM (the cost is zero). The NF always chooses the nearest edge server to the mobile user to migrate the service and thus leading to more migration cost. Top-K performs the worst on migration cost because the top K busiest servers may be far away from the current edge server. Figure 9(c) shows the number of overloaded servers when utilizing different algorithms. PDMA controls the number of overloaded servers to be less than 20 while CHERA incurs nearly 40 overloaded edge servers and other algorithms incur more than 50. The reason is that PDMA will not select those edge servers that are likely to be overloaded and thus avoiding more overloaded situations. (b) Varied volume of mobile users In order to evaluate the performance of scheduling algorithms in the rush hours, we increase the number of mobile users to simulate weekday’s morning when more mobile users enter the crowded area. We fix the distance threshold as 1000m and vary the volume of clients from 200 to 1000. As shown in Figure 10(c), the higher volume of mobile clients causes more edge servers to be overloaded, leading to performance degradation and higher computation delay. Therefore, it can be noticed in Figure 10(a) that the overall delay also becomes higher when the number of clients increases. In addition, a higher service delay that exceeds the delay threshold will trigger more migrations, increasing the migration cost and the migration downtime. 
When the number of clients increases to be more than 400, PDMA achieves better overall delay than NF. The overall delay of PDMA is 64ms, which is 2ms more than that of CHERA, and 6ms, 16ms less than that of NF and NM, respectively, when the volume of clients is 1000. The migration cost of PDMA is also maintained at a very low level. For example, when there are 1000 clients, CHERA produces 4.6 times more migration cost than PDMA. Compared with other algorithms, PDMA is able to prevent edge servers from becoming overloaded. As we can observe from Figure 10(c), less than 20 edge servers become over- utilized when there are 1000 mobile clients, which is 55% lower than that of CHERA and is 75% lower than those of NM and Top-K. Less over-loaded servers can reduce edge servers’ energy consumption and also prevent edge servers from performance degradation during the rush hour. In addition, PDMA has the lowest increase rate in terms of the three adopted metrics among all algorithms, which validates the scalability of PDMA that it can be adapted to the scenarios with clients rushing into the MEC system while ensuring user experience. ((a)) Overall delay ((b)) Migration cost ((c)) Number of Overloaded Servers Figure 9: Rush Hour: Performance comparison of algorithms with varied distance thresholds ((a)) Overall delay ((b)) Migration cost ((c)) Number of Overloaded Servers Figure 10: Rush Hour: Performance comparison of algorithms with varied volume of clients ### 5.5 Scalability Discussions In this part, we discuss the scalability of our proposed approach. The problem of optimally allocating edge services to edge servers can be modelled as a bin-packing problem with varied bin sizes and prices, where bins represent the edge servers and items are the edge services to be allocated, bins sizes are the available resources of edge servers, and prices correspond to the communication costs and migration costs when allocating edge services to the edge server. As the problem is very complex and proved to be NP-hard, achieving the optimal solution to the problem can be quite time-consuming, especially for a large MEC system with a huge number of edge servers. We have compared our approach with some deterministic algorithms, e.g., NF and Top-K, which are variants of the classical Best Fit Decreasing (BFD) algorithm that allocates services to the server with the least increased costs. The BFD algorithm is a polynomial algorithm and has been proved to use no more than $11/9\cdot OPT+1$ bins 35, where $OPT$ is the minimum theoretical number of edge servers. The centralized and deterministic algorithms, like BFD, can function well for the MEC system with a limited number of edge servers but can be inefficient for large-scale MEC systems considering the NP-hardness of migrating multiple edge services simultaneously. Conversely, given the probabilistic nature of PDMA, it is suitable for large-scale MEC systems. We argue that it is not necessary to send allocation requests to all the edge servers in a large MEC system, as the edge server far away are not prone to be deployed with edge services considering the mobile users are with low probability to move to the distant location within the short time. With the Bernoulli trails in our proposed approach that send allocation requests to part of servers in the system, the traffic overheads can be reduced compared with the BFD-based approaches, and edge servers are added only when strictly needed. 
As a result, the number of edge servers required by PDMA stays close to that required by the BFD algorithm. In addition, PDMA fits well with large MEC systems with distributed edge servers, as each service allocation request can be forwarded to the edge servers in a specific area. This allows system heterogeneity to be leveraged by choosing the most cost-efficient edge servers. To evaluate the scalability of PDMA, we performed simulations with MEC systems of different user scales, as shown in Figure 10. The results confirm that as the number of users increases, the migration cost increases only slightly. The good scalability is also confirmed by the other performance metrics; for example, the number of overloaded edge servers grows slowly with the number of users.

## 6 Conclusions and Future Work

This paper addresses the NP-hard problem of delay-aware and mobility-aware service management in the MEC environment, which is sensitive to the communication costs generated in this environment. The aim is to allocate edge services to suitable edge servers, through both the initial assignment and dynamic service migration, so as to satisfy users' response-time requirements as they move around. With PDMA, proposed in this work, the assignment and migration of edge services are based on Bernoulli trials that decide whether an edge server will accept the deployment of a specific service, based on its running status. The probabilistic and low-complexity nature of our approach makes it efficient in environments with a large number of edge servers, with a rather short execution time, especially compared with online learning-based approaches, whose computational complexity can grow significantly as the number of servers and services increases. A theoretical proof has been provided to show that our approach is bounded with respect to the optimal solution. Simulation results based on iFogSim also demonstrate that our approach can reduce the communication delays for users and the transmission costs due to service migration. For rush hours in urban areas, the proposed approach can efficiently improve the user experience. As for future work, we would like to 1) implement the proposed approach in a prototype system, 2) apply learning-based approaches to predict the mobility of users, and 3) integrate offloading techniques into our model to further improve performance.

## Acknowledgment

This work is supported by the Key-Area Research and Development Program of Guangdong Province (No. 2020B010164003), the National Natural Science Foundation of China (No. 62072451, 62072187), and the SIAT Innovation Program for Excellent Young Researchers.

## References

* 1 Xu M, Buyya R. Brownout Approach for Adaptive Management of Resources and Applications in Cloud Computing Systems: A Taxonomy and Future Directions. ACM Comput. Surv. 2019; 52(1). doi: 10.1145/3234151
* 2 Galloway JM, Smith KL, Vrbsky SS. Power aware load balancing for cloud computing. In: Proceedings of the World Congress on Engineering and Computer Science; 2011: 19–21.
* 3 Xu M, Buyya R. BrownoutCon: A software system based on brownout and containers for energy-efficient cloud computing. Journal of Systems and Software 2019; 155: 91-103. doi: 10.1016/j.jss.2019.05.031
* 4 Wu H, Zhang Z, Guan C, Wolter K, Xu M. Collaborate Edge and Cloud Computing With Distributed Deep Learning for Smart City Internet of Things. IEEE Internet of Things Journal 2020; 7(9): 8099-8110.
* 5 Brogi A, Forti S, Guerrero C, Lera I. How to place your apps in the fog: State of the art and open challenges. Software: Practice and Experience 2020; 50(5): 719-740. doi: 10.1002/spe.2766
* 6 Shahidinejad A, Ghobaei-Arani M. Joint computation offloading and resource provisioning for edge-cloud computing environment: A machine learning-based approach. Software: Practice and Experience 2020. doi: 10.1002/spe.2888
* 7 Guo Y, Wang S, Zhou A, Xu J, Yuan J, Hsu CH. User allocation-aware edge cloud placement in mobile edge computing. Software: Practice and Experience 2020; 50(5): 489-502. doi: 10.1002/spe.2685
* 8 Badri H, Bahreini T, Grosu D, Yang K. Energy-Aware Application Placement in Mobile Edge Computing: A Stochastic Optimization Approach. IEEE Transactions on Parallel and Distributed Systems 2020; 31(4): 909-922.
* 9 Gupta H, Vahid Dastjerdi A, Ghosh SK, Buyya R. iFogSim: A toolkit for modeling and simulation of resource management techniques in the Internet of Things, Edge and Fog computing environments. Software: Practice and Experience 2017; 47(9): 1275–1296.
* 10 CRAWDAD. A Community Resource for Archiving Wireless Data at Dartmouth; 2009. http://crawdad.org/epfl/mobility/.
* 11 Wang S, Guo Y, Zhang N, Yang P, Zhou A, Shen XS. Delay-aware Microservice Coordination in Mobile Edge Computing: A Reinforcement Learning Approach. IEEE Transactions on Mobile Computing 2019: 1-1.
* 12 Wang S, Urgaonkar R, Zafer M, He T, Chan K, Leung KK. Dynamic Service Migration in Mobile Edge Computing Based on Markov Decision Process. IEEE/ACM Transactions on Networking 2019; 27(3): 1272-1288.
* 13 Wang S, Urgaonkar R, Zafer M, He T, Chan K, Leung KK. Dynamic service migration in mobile edge-clouds. In: 2015 IFIP Networking Conference (IFIP Networking); 2015: 1-9.
* 14 Samanta A, Tang J. Dyme: Dynamic Microservice Scheduling in Edge Computing Enabled IoT. IEEE Internet of Things Journal 2020; 7(7): 6164-6174.
* 15 Samanta A, Li Y, Esposito F. Battle of Microservices: Towards Latency-Optimal Heuristic Scheduling for Edge Computing. In: 2019 IEEE Conference on Network Softwarization (NetSoft); 2019: 223-227.
* 16 Poularakis K, Llorca J, Tulino AM, Taylor I, Tassiulas L. Joint Service Placement and Request Routing in Multi-cell Mobile Edge Computing Networks. In: IEEE INFOCOM 2019 - IEEE Conference on Computer Communications; 2019: 10-18.
* 17 Pasteris S, Wang S, Herbster M, He T. Service Placement with Provable Guarantees in Heterogeneous Edge Computing Systems. In: IEEE INFOCOM 2019 - IEEE Conference on Computer Communications; 2019: 514-522.
* 18 Zhang C, Zheng Z. Task migration for mobile edge computing using deep reinforcement learning. Future Generation Computer Systems 2019; 96: 111-118. doi: 10.1016/j.future.2019.01.059
* 19 Wan L, Sun L, Kong X, Yuan Y, Sun K, Xia F. Task-Driven Resource Assignment in Mobile Edge Computing Exploiting Evolutionary Computation. IEEE Wireless Communications 2019; 26(6): 94-101.
* 20 Gao B, Zhou Z, Liu F, Xu F. Winning at the Starting Line: Joint Network Selection and Service Placement for Mobile Edge Computing. In: IEEE INFOCOM 2019 - IEEE Conference on Computer Communications; 2019: 1459-1467.
* 21 Yu N, Xie Q, Wang Q, Du H, Huang H, Jia X. Collaborative Service Placement for Mobile Edge Computing Applications. In: 2018 IEEE Global Communications Conference (GLOBECOM); 2018: 1-6.
* 22 Ouyang T, Li R, Chen X, Zhou Z, Tang X. Adaptive User-managed Service Placement for Mobile Edge Computing: An Online Learning Approach.
In: IEEE INFOCOM 2019 - IEEE Conference on Computer Communications; 2019: 1468-1476.
* 23 Wu H, Deng S, Li W, et al. Mobility-Aware Service Selection in Mobile Edge Computing Systems. In: 2019 IEEE International Conference on Web Services (ICWS); 2019: 201-208.
* 24 Ghosh S, Mukherjee A, Ghosh SK, Buyya R. Mobi-IoST: mobility-aware cloud-fog-edge-IoT collaborative framework for time-critical applications. IEEE Transactions on Network Science and Engineering 2019.
* 25 Shi Y, Chen S, Xu X. MAGA: A mobility-aware computation offloading decision for distributed mobile cloud computing. IEEE Internet of Things Journal 2017; 5(1): 164–174.
* 26 Yu F, Chen H, Xu J. DMPO: Dynamic mobility-aware partial offloading in mobile edge computing. Future Generation Computer Systems 2018; 89: 722-735. doi: 10.1016/j.future.2018.07.032
* 27 Ouyang T, Zhou Z, Chen X. Follow Me at the Edge: Mobility-Aware Dynamic Service Placement for Mobile Edge Computing. IEEE Journal on Selected Areas in Communications 2018; 36(10): 2333-2345.
* 28 Wyner A. Recent results in the Shannon theory. IEEE Transactions on Information Theory 1974; 20(1): 2–10.
* 29 Antennasearch. Antenna Distribution; 2020. www.antennasearch.com.
* 30 Ding Z, Xu J, Dobre OA, Poor HV. Joint Power and Time Allocation for NOMA–MEC Offloading. IEEE Transactions on Vehicular Technology 2019; 68(6): 6207-6211.
* 31 Park K, Pai VS. CoMon: a mostly-scalable monitoring system for PlanetLab. ACM SIGOPS Operating Systems Review 2006; 40(1): 65–74.
* 32 Wang S, Zhao Y, Xu J, Yuan J, Hsu CH. Edge server placement in mobile edge computing. Journal of Parallel and Distributed Computing 2019; 127: 160-168. doi: 10.1016/j.jpdc.2018.06.008
* 33 Xu M, N. Toosi A, Buyya R. A Self-adaptive Approach for Managing Applications and Harnessing Renewable Energy for Sustainable Cloud Computing. IEEE Transactions on Sustainable Computing 2020: 1-1. doi: 10.1109/TSUSC.2020.3014943
* 34 Zhao L, Wang J, Liu J, Kato N. Optimal edge resource allocation in IoT-based smart cities. IEEE Network 2019; 33(2): 30–35.
* 35 Beloglazov A, Buyya R. Optimal online deterministic algorithms and adaptive heuristics for energy and performance efficient dynamic consolidation of virtual machines in Cloud data centers. Concurrency and Computation: Practice and Experience 2012; 24(13): 1397-1420. doi: 10.1002/cpe.1867

## Author Biography

Minxian Xu is currently an assistant professor at Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences. He received the BSc degree in 2012 and the MSc degree in 2015, both in software engineering, from the University of Electronic Science and Technology of China. He obtained his PhD degree from the University of Melbourne in 2019. His research interests include resource scheduling and optimization in cloud computing. He has co-authored 20+ peer-reviewed papers published in prominent international journals and conferences, such as CSUR, T-SUSC, T-ASE, JPDC, JSS, and ICSOC. His Ph.D. thesis was awarded the 2019 IEEE TCSC Outstanding Ph.D. Dissertation Award. More information can be found at minxianxu.info.

Qiheng Zhou received his BSc degree from Sun Yat-sen University. He is currently a master's student at the National University of Singapore. His research interests include cloud computing and blockchain.

Huaming Wu received the B.E. and M.S. degrees from Harbin Institute of Technology, China, in 2009 and 2011, respectively, both in electrical engineering. He received the Ph.D.
degree with the highest honor in computer science from Freie Universität Berlin, Germany, in 2015. He is currently an associate professor in the Center for Applied Mathematics, Tianjin University, China. His research interests include model-based evaluation, wireless and mobile network systems, mobile cloud computing, and deep learning.

Weiwei Lin received his B.S. and M.S. degrees from Nanchang University in 2001 and 2004, respectively, and his PhD degree in Computer Application from South China University of Technology in 2007. He was a visiting scholar at Clemson University from 2016 to 2017. Currently, he is a professor in the School of Computer Science and Engineering, South China University of Technology. His research interests include distributed systems, cloud computing, big data computing, and AI application technologies. He has published more than 100 papers in refereed journals and conference proceedings. He has served as a reviewer for many international journals, including TPDS, TC, TMC, TCYB, TSC, TCC, etc.

Kejiang Ye received his BSc and PhD degrees in Computer Science from Zhejiang University in 2008 and 2013, respectively. He was also a joint PhD student at The University of Sydney from 2012 to 2013. After graduation, he worked as a postdoctoral researcher at Carnegie Mellon University from 2014 to 2015 and at Wayne State University from 2015 to 2016. He is currently a Professor at Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences. His research interests focus on the performance, energy, and reliability of cloud computing and network systems.

Chengzhong Xu (Fellow, IEEE) is the Dean of the Faculty of Science and Technology and the Interim Director of the Institute of Collaborative Innovation, University of Macau, and a Chair Professor of Computer and Information Science. Dr. Xu's main research interests lie in parallel and distributed computing and cloud computing, in particular with an emphasis on resource management for system performance, reliability, availability, power efficiency, and security, and in big data and data-driven intelligence applications in smart cities and self-driving vehicles. He has published two research monographs and more than 300 peer-reviewed papers in journals and conference proceedings; his papers have received about 10K citations with an H-index of 52. He serves or has served on a number of journal editorial boards, including IEEE Transactions on Computers (TC), IEEE Transactions on Cloud Computing (TCC), IEEE Transactions on Parallel and Distributed Systems (TPDS), Journal of Parallel and Distributed Computing (JPDC), Science China Information Sciences, and ZTE Communications. Dr. Xu has been the Chair of the IEEE Technical Committee on Distributed Processing (TCDP) since 2015. He obtained BSc and MSc degrees from Nanjing University in 1986 and 1989, respectively, and a PhD degree from the University of Hong Kong in 1993, all in Computer Science and Engineering.
# Sequential Approximations for Types and Keisler Measures Kyle Gannon Department of Mathematics University of California, Los Angeles Los Angeles, CA, 90095, USA<EMAIL_ADDRESS> ###### Abstract. This paper is a modified chapter of the author’s Ph.D. thesis. We introduce the notions of sequentially approximated types and sequentially approximated Keisler measures. As the names imply, these are types which can be approximated by a sequence of realized types and measures which can be approximated by a sequence of “averaging measures” on tuples of realized types. We show that both generically stable types (in arbitrary theories) and Keisler measures which are finitely satisfiable over a countable model (in NIP theories) are sequentially approximated. We also introduce the notion of a smooth sequence in a measure over a model and give an equivalent characterization of generically stable measures (in NIP theories) via this definition. In the last section, we take the opportunity to generalize the main result of [8]. ###### Key words and phrases: Keisler measures, NIP, generic stability; MSC2020: 03C45, 03C68 ## 1\. Introduction One of the joys of working in a metric space is that the closure of a set coincides with its sequential closure. In particular, if $X$ is a metric space, $A$ is a subset of $X$, and $b$ is in the closure of $A$, then there exists a sequence of elements in $A$ which converges to $b$. In [17], Simon showed that global types which are finitely satisfiable over a countable model of a countable NIP theory admit a similar property. Let $T$ be a complete, first-order theory, $\mathcal{U}$ a monster model of $T$, and $M$ a small submodel of $\mathcal{U}$. Simon proved the following ([17, Lemma 2.8]): ###### Theorem 1.1. Let $T$ be a countable NIP theory. Suppose $p$ is a type in $S_{x}(\mathcal{U})$ and $p$ is finitely satisfiable over $M$ where $|M|=\aleph_{0}$. Then there exists a sequence of points $(a_{i})_{i\in\omega}$ in $M^{x}$ such that $\lim_{i\to\infty}\operatorname{tp}(a_{i}/\mathcal{U})=p$. One of the goals of this paper is to morally generalize the proof of the above theorem in two different directions. By mimicking Simon’s proof, we are able to prove the following, 1. (T$1$) Let $T$ be any countable theory. Suppose $p$ is a type in $S_{x}(\mathcal{U})$ and $p$ is generically stable over $M$. Then there exists a sequence of points $(a_{i})_{i\in\omega}$ in $M^{x}$ such that $\lim_{i\to\infty}\operatorname{tp}(a_{i}/\mathcal{U})=p$. 2. (T$2$) Let $T$ be a countable NIP theory. Suppose $\mu$ is a Keisler measure in $\mathfrak{M}_{x}(\mathcal{U})$ and $\mu$ is finitely satisfiable over $M$ where $|M|=\aleph_{0}$. Then there exists a sequence of points $(\overline{a}_{i})_{i\in\omega}$ in $(M^{x})^{<\omega}$ such that $\lim_{i\to\infty}\operatorname{Av}(\overline{a}_{i})=\mu$. More explicitly, for any formula $\varphi(x)$ in $\mathcal{L}_{x}(\mathcal{U})$, we have that $\lim_{i\to\infty}\operatorname{Av}(\overline{a}_{i})(\varphi(x))=\mu(\varphi(x)).$ The proofs of both of these theorems are slightly more enjoyable than one would anticipate. For example, we already know many diverse and useful approximation theorems for measures in NIP theories (and some for generically stable types in arbitrary theories) and so one might expect that our proofs rely on composing approximation techniques. However, stringing together different approximation methods can result in an array with some kind of modes-of-convergence problem. 
As stated previously, the technique used to prove both these theorems mimics the argument used in [17, Lemma 2.8]. In the generically stable case, the setup is identical: Suppose $p$ is in $S_{x}(\mathcal{U})$ where $p$ is generically stable over $M$ and $I$ is a Morley sequence in $p$ over $M$. As in Simon's proof, we use both $M$ and $I$ to find an eventually indiscernible sequence of points in $M^{x}$ which converges to $p|_{MI}$. The eventual EM-type of this sequence over $M$ is precisely $p^{(\omega)}|_{M}$. Using generic stability and compactness, we conclude that this sequence must converge to $p$. Our proof of the Keisler measure case is slightly more exotic since there is no standard notion of a "Morley sequence in a Keisler measure". The proof we provide is essentially done in first order model theory (with an important exceptional lemma following from Ben Yaacov's work on randomizations [2]). We expect that there exist other proofs using other methods, such as continuous model theory. (In fact, after this paper was posted to arXiv, another proof was discovered by Khanaki using BFT on an infinite product space [14].) The proof we give here embraces the ideology first developed in [10] and shows that this issue can be resolved by replacing the Morley sequence (in Simon's proof) by a smooth sequence in $\mu$ over $M$. This provides more evidence for the intuition that smooth measures can play the role of realized types, at least in the NIP context. After constructing a countable model $N_{\omega}$ "containing this sequence", we find a sequence of points in $(M^{x})^{<\omega}$ such that the corresponding average measures on these tuples converge to $\mu|_{N_{\omega}}$. After constructing an eventually indiscernible subsequence in this context, we are able to readapt most of Simon's proof technique by making use of known approximation theorems, symmetry properties, and some basic integration techniques. It is interesting to note that one can give another equivalent characterization of generically stable measures in NIP theories using smooth sequences. This characterization highlights the connection between generically stable types and generically stable measures. Recall that a type $p$ is generically stable over a model $M$ if for every Morley sequence $(a_{i})_{i\in\omega}$ in $p$ over $M$, $\lim_{i\to\infty}\operatorname{tp}(a_{i}/\mathcal{U})=p$. We show that in an NIP theory, a measure $\mu$ is generically stable over a model $M$ if and only if for every smooth sequence in $\mu$ over $M$, the limit of this sequence is precisely $\mu$. In addition to proving these theorems, we also introduce the classes of sequentially approximated measures and sequentially approximated types. These definitions can be seen as the global analogue to Khanaki's definition of Baire 1 definability for local types (see [13]). Sequentially approximated measures should be thought of as a "halfway point" between finitely approximated measures and Keisler measures which are finitely satisfiable over a small model. For instance, we show that a Keisler measure is finitely approximated if and only if it is both definable and sequentially approximated (Proposition 3.4) and that sequentially approximated measures commute with definable measures (Proposition 3.7). Sequentially approximated types remain a little more mysterious.
We show that there exists a type such that its corresponding Keisler measure is sequentially approximated (even finitely approximated), but the type itself is not sequentially approximated (Proposition 4.14). In the last section, we consider connections to the local measure case and generalize the main result in [8] (Theorem 6.4). Explicitly, the main result in [8] demonstrates that if a formula $\varphi$ is NIP and $\mu$ is a $\varphi$-measure which is $\varphi$-definable and finitely satisfiable over a countable model, then $\mu$ is $\varphi$-finitely approximated in said model. Here, we demonstrate that "countable" can be replaced by "small". This paper is structured as follows: In section 2, we discuss preliminaries. In section 3, we describe sequentially approximated measures and sequentially approximated types. In section 4, we show that if $p$ is generically stable over $M$, then $p$ is sequentially approximated over $M$. We also give some examples of types which are not sequentially approximated at the end of the section. In section 5, we show that if $T$ is a countable NIP theory and $\mu$ is finitely satisfiable over a countable model $M$, then $\mu$ is sequentially approximated over $M$. We then give an equivalent characterization of generically stable measures in NIP theories using smooth sequences. In section 6, we generalize the main theorem in [8].

### Acknowledgements

We would like to thank Gabriel Conant, James Hanson, Karim Khanaki, Pierre Simon and our Ph.D. defense committee Daniel Hoffmann, Anand Pillay, Sergei Starchenko, and Minh Chieu Tran for helpful discussions and comments. Thanks also to the referee for many helpful comments. This paper was also partially supported by the NSF research grant DMS-1800806 as well as the NSF CAREER grant DMS-1651321.

## 2. Preliminaries

If $r$ and $s$ are real numbers and $\epsilon$ is a real number greater than $0$, then we write $r\approx_{\epsilon}s$ to mean $|r-s|<\epsilon$. Fix a countable language $\mathcal{L}$. Throughout this paper, we always have a countable, complete, first-order theory $T$ and a monster model $\mathcal{U}$ of $T$ in the background. The letters $M$ and $N$ will be used to denote small elementary submodels of $\mathcal{U}$. The letters $x,y,z$ will denote tuples of variables. If $A\subseteq\mathcal{U}$, we let $\mathcal{L}(A)$ be the collection of formulas with parameters from $A$ (modulo logical equivalence). A formula in $\mathcal{L}(A)$ is called an "$\mathcal{L}(A)$-formula". If $x_{0},...,x_{k}$ is a finite sequence of pairwise disjoint tuples of variables, we let $\mathcal{L}_{x_{0},...,x_{k}}(A)$ be the collection of $\mathcal{L}(A)$-formulas with free variables in these tuples. We write $\mathcal{L}_{x_{0},...,x_{k}}(\emptyset)$ simply as $\mathcal{L}_{x_{0},...,x_{k}}$. If $(x_{i})_{i\in\omega}$ is a countable sequence of pairwise distinct tuples of variables, we let $\mathcal{L}_{(x_{i})_{i\in\omega}}(A)=\bigcup_{k\in\omega}\mathcal{L}_{x_{0},...,x_{k}}(A)$. For a tuple $x$, let $A^{x}=\\{(a_{0},...,a_{|x|-1}):a_{i}\in A,i\leq|x|-1\\}$. We let $(A^{x})^{<\omega}$ be the collection of all finite sequences of points in $A^{x}$. If we call $\varphi(x,y)$ a partitioned $\mathcal{L}_{x,y}(\mathcal{U})$-formula, we treat $x$ as object variables and $y$ as parameter variables. The formula $\varphi^{*}(y,x)$ denotes the exact same formula as $\varphi(x,y)$, but with the roles of parameter and object tuples exchanged.
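As a small illustration of this convention (added here for the reader's convenience; it is immediate from the definition): for a partitioned formula $\varphi(x,y)$ and parameters $a\in\mathcal{U}^{x}$ and $b\in\mathcal{U}^{y}$, the instance $\varphi(x,b)$ belongs to $\mathcal{L}_{x}(\mathcal{U})$ (object tuple $x$, parameter $b$), while the instance $\varphi^{*}(y,a)$ belongs to $\mathcal{L}_{y}(\mathcal{U})$ (object tuple $y$, parameter $a$), and

$\mathcal{U}\models\varphi(a,b)\iff\mathcal{U}\models\varphi^{*}(b,a);$

only the bookkeeping of which tuple is treated as the object variable changes.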
Generally speaking, in any instance where we have multiple tuples of variables (e.g. $x$ and $y$, or $(x_{1},x_{2},x_{3},...)$), we will always assume they are pairwise distinct without comment. Unlike similar papers about Keisler measures, we do not identify a type and its corresponding Keisler measure. We let $S_{x}(A)$ denote the usual type space over $A$ and $\mathfrak{M}_{x}(A)$ the space of Keisler measures over $A$. We let $\mathfrak{M}_{(x_{i})_{i\in\omega}}(\mathcal{U})$ be the collection of finitely additive probability measures on $\mathcal{L}_{(x_{i})_{i\in\omega}}(\mathcal{U})$. For any (tuple of) variable(s) $x$, and any subset $A\subseteq\mathcal{U}$, we have a map $\delta:S_{x}(A)\to\mathfrak{M}_{x}(A)$ via $\delta(p)=\delta_{p}$ where $\delta_{p}$ is the Dirac measure at the type $p$. We sometimes refer to $\delta_{p}$ as the corresponding Keisler measure of $p$. If $\overline{a}=(a_{1},...,a_{n})$ is a sequence of points in $\mathcal{U}^{x}$, then we let $\operatorname{Av}(\overline{a})$ be the associated average measure in $\mathfrak{M}_{x}(\mathcal{U})$. Explicitly, for any $\psi(x)\in\mathcal{L}_{x}(\mathcal{U})$, we define $\operatorname{Av}(\overline{a})(\psi(x))=\frac{|\\{1\leq i\leq n:\mathcal{U}\models\psi(a_{i})\\}|}{n}.$ ### 2.1. Basics of convergence Recall that if $A\subseteq\mathcal{U}$, then both $S_{x}(A)$ and $\mathfrak{M}_{x}(A)$ carry a natural compact Hausdorff topology. For $S_{x}(A)$, we have the usual Stone space topology. Similarly, $\mathfrak{M}_{x}(A)$ admits a compact Hausdorff topology. There are two ways to describe this topology. First, this topology is the topology induced from the compact Hausdorff space $[0,1]^{\mathcal{L}_{x}(A)}$ where we identify each measure with the obvious map from $\mathcal{L}_{x}(A)$ to $[0,1]$. This topology on $\mathfrak{M}_{x}(A)$ can also be described as the coarsest topology such that for any continuous function $f:S_{x}(A)\to\mathbb{R}$, the map $\int f:\mathfrak{M}_{x}(A)\to\mathbb{R}$ is continuous. We will routinely need to keep track of which sets of parameters our types and measures are converging over. Hence, we establish the following conventions. ###### Definition 2.1. Fix $A\subseteq\mathcal{U}$, $p\in S_{x}(A)$ and $\mu\in\mathfrak{M}_{x}(A)$. 1. $(i)$ We say that a sequence of types $(p_{i})_{i\in\omega}$, where each $p_{i}$ is in $S_{x}(A)$, converges to $p$ if it converges in the Stone space topology on $S_{x}(A)$, which we write as “$\lim_{i\to\infty}p_{i}=p$ in $S_{x}(A)$” or simply as “$\lim_{i\to\infty}p_{i}=p$” when the underlying space is obvious. We recall that $\lim_{i\to\infty}p_{i}=p$ if for every $\psi(x)\in p$, there exists some natural number $N_{\psi}$ such that for any $n>N_{\psi}$, $\psi(x)\in p_{n}$. 2. $(ii)$ We say that a sequence of measures $(\mu_{i})_{i\in\omega}$, where each $\mu_{i}$ is in $\mathfrak{M}_{x}(A)$, converges to $\mu$ if this sequence converges in the compact Hausdorff topology on $\mathfrak{M}_{x}(A)$, which we write as “$\lim_{i\to\infty}\mu_{i}=\mu$ in $\mathfrak{M}_{x}(A)$” or simply as “$\lim_{i\to\infty}\mu_{i}=\mu$” when there is no possibility of confusion. Notice that $\lim_{i\to\infty}\mu_{i}=\mu$ if for every $\psi(x)\in\mathcal{L}_{x}(A)$ and $\epsilon>0$, there exists some natural number $N_{\psi,\epsilon}$ such that for any $n>N_{\psi,\epsilon}$, $|\mu_{n}(\psi(x))-\mu(\psi(x))|<\epsilon.$ We now observe the relationship between finitely satisfiable types and measures and topological closure in their respective spaces. ###### Fact 2.2. 
Suppose $p\in S_{x}(\mathcal{U})$, $\mu\in\mathfrak{M}_{x}(\mathcal{U})$ and $M\prec\mathcal{U}$. Assume that $p$ and $\mu$ are finitely satisfiable over $M$. Then the following are true. 1. ($i$) The type $p$ is in the closure of $\\{tp(a/\mathcal{U}):a\in M^{x}\\}$ in $S_{x}(\mathcal{U})$. 2. ($ii$) The associated Keisler measure $\delta_{p}$ is in the closure of $\\{\delta_{a}:a\in M^{x}\\}$ in $\mathfrak{M}_{x}(\mathcal{U})$. 3. ($iii$) The measure $\mu$ is in the closure of $\Big{\\{}\sum_{i=1}^{n}r_{i}\delta_{a_{i}}:n\in\mathbb{N},r_{i}>0,\sum_{i=1}^{n}r_{i}=1,a_{i}\in M^{x}\Big{\\}}$ in $\mathfrak{M}_{x}(\mathcal{U})$. 4. ($iv$) The measure $\mu$ is in the closure of $\\{\operatorname{Av}(\overline{a}):\overline{a}\in(M^{x})^{<\omega}\\}$ in $\mathfrak{M}_{x}(\mathcal{U})$. We remark that the proof of $(i)$ is a standard exercise and the proof of $(ii)$ follows directly from $(i)$. A proof of $(iii)$ can be found at [4, Proposition 2.11] and $(iv)$ follows directly from $(iii)$. ### 2.2. Types We recall some basic definitions and facts about special kinds of types (e.g. generically stable types). Our notion of an EM-type is not defined in complete generality since we are only concerned with countable sequences in this paper. ###### Definition 2.3. Let $(a_{i})_{i\in\omega}$ be a sequence of points in $\mathcal{U}^{x}$ and let $B\subseteq\mathcal{U}$. Then the Ehrenfeucht-Mostowski type or EM-type of the sequence $(a_{i})_{i\in\omega}$ over $B$, denoted $\operatorname{EM}((a_{i})_{i\in\omega}/B)$, is the following partial type: $\\{\varphi(x_{0},...,x_{k})\in\mathcal{L}_{(x_{i})_{i\in\omega}}(B):\mathcal{U}\models\varphi(a_{i_{0}},...,a_{i_{k}})\text{ for any }i_{0}<...<i_{k}\\}.$ We remark that this partial type corresponds to a subset of $S_{(x_{i})_{i\in\omega}}(B)$. ###### Observation 2.4. It is clear from the definition above that for any sequence of points $(a_{i})_{i\in\omega}$ in $\mathcal{U}^{x}$ and any $B\subseteq\mathcal{U}$, the type $\operatorname{EM}((a_{i})_{i\in\omega}/B)$ is complete if and only if the sequence $(a_{i})_{i\in\omega}$ is indiscernible over $B$. The general notion of a generically stable type was introduced by Pillay and Tanović in [15]. The definition of a generically stable type provided below was proved to be equivalent in [6] (see Proposition 3.2). We also provide the definition of a $\operatorname{dfs}$ type which will be important throughout this paper. In general, the class of $\operatorname{dfs}$ types strictly contains the class of generically stable types. ###### Definition 2.5. Suppose that $p\in S_{x}(\mathcal{U})$. 1. $(i)$ We say that $p$ is dfs if there exists a small model $M\prec\mathcal{U}$ such that $p$ is both definable and finitely satisfiable over $M$. In this case, we say that $p$ is dfs over $M$. 2. $(ii)$ We say that $p$ is generically stable if there exists a small model $M\prec\mathcal{U}$ such that $p$ is invariant over $M$ and for any Morley sequence $(a_{i})_{i\in\omega}$ in $p$ over $M$, we have that $\lim_{i\to\infty}\operatorname{tp}(a_{i}/\mathcal{U})=p$. In this case, we say that $p$ is generically stable over $M$. Finally, we provide a collection of standard facts about these classes of types. ###### Fact 2.6. Let $p$ be in $S_{x}(\mathcal{U})$ and $M\prec\mathcal{U}$. 1. $(i)$ If $p$ is generically stable over $M$, then $p$ is $\operatorname{dfs}$ over $M$ $($[15, Proposition 1]$)$. 2. 
$(ii)$ If $p$ is $\operatorname{dfs}$ over $M$, then any Morley sequence in $p$ over $M$ is totally indiscernible over $M$ $($[9, Proposition 3.2], proof does not use NIP$)$. 3. $(iii)$ If $p$ is generically stable/$\operatorname{dfs}$ over $M$ and $M_{0}$-invariant, then $p$ is respectively generically stable/$\operatorname{dfs}$ over $M_{0}$ $($generically stable case follows from $(i)$ of [15, Proposition 1]; $\operatorname{dfs}$ case can be found in [16, Lemma 2.8]$)$. 4. $(iv)$ $($T is countable$)$ If $p$ is generically stable/$\operatorname{dfs}$ over $M$, there exists an elementary submodel $M_{0}$ such that $|M_{0}|=\aleph_{0}$ and $p$ is generically stable/$\operatorname{dfs}$ over $M_{0}$ $($Easy to check from $(iii)$$)$. 5. $(v)$ $($T is NIP$)$ If $p$ is $\operatorname{dfs}$ over $M$ then $p$ is generically stable over $M$ $($e.g. [16, Theorem 2.29]$)$. ### 2.3. Keisler measures In this subsection, we will briefly recall some important definitions and facts about these measures. As with any paper about Keisler measures, we provide the following standard atlas. ###### Definition 2.7. Let $\mu\in\mathfrak{M}_{x}(\mathcal{U})$. 1. ($i$) $\mu$ is invariant if there exists a model $M\prec\mathcal{U}$ such that for every partitioned $\mathcal{L}$-formula $\varphi(x,y)$ and $b,b^{\prime}\in\mathcal{U}^{y}$ such that $b\equiv_{M}b^{\prime}$, $\mu(\varphi(x,b))=\mu(\varphi(x,b^{\prime}))$. In this case, we say that $\mu$ is $M$-invariant or invariant over $M$. 2. ($ii$) If $\mu$ is invariant over $M$, then for every partitioned $\mathcal{L}(M)$-formula $\varphi(x,y)$, we can define the map $F_{\mu,M}^{\varphi}:S_{y}(M)\to[0,1]$ via $F_{\mu,M}^{\varphi}(q)=\mu(\varphi(x,b))$ where $b\models q$. When $M$ is obvious we will simply write $F_{\mu,M}^{\varphi}$ as $F_{\mu}^{\varphi}$. 3. ($iii$) $\mu$ is Borel-definable if there exists a model $M\prec\mathcal{U}$ such that $\mu$ is $M$-invariant and for every partitioned $\mathcal{L}$-formula $\varphi(x,y)$, the map $F_{\mu,M}^{\varphi}$ is Borel. In this case, we say that $\mu$ is Borel-definable over $M$. 4. ($iv$) $\mu$ is definable if there exists a model $M\prec\mathcal{U}$ such that $\mu$ is $M$-invariant and for every partitioned $\mathcal{L}$-formula $\varphi(x,y)$, the map $F_{\mu,M}^{\varphi}$ is continuous. In this case, we say that $\mu$ is $M$-definable or definable over $M$. 5. ($v$) $\mu$ is finitely satisfiable over a small model if there exists $M\prec\mathcal{U}$ such that for every formula $\varphi(x)\in\mathcal{L}_{x}(\mathcal{U})$, if $\mu(\varphi(x))>0$ then there exists $a\in M^{x}$ such that $\mathcal{U}\models\varphi(a)$. In this case, we say that $\mu$ is finitely satisfiable over $M$. 6. ($vi$) $\mu$ is finitely approximated if there exists a model $M\prec\mathcal{U}$ such that for every partitioned $\mathcal{L}$-formula $\varphi(x,y)$ and every $\epsilon>0$, there exists $\overline{a}\in(M^{x})^{<\omega}$ such that $\sup_{b\in\mathcal{U}^{y}}|\mu(\varphi(x,b))-\operatorname{Av}(\overline{a})(\varphi(x,b))|<\epsilon.$ In this case, we say that $\mu$ is finitely approximated over $M$. 7. ($vii$) $\mu$ is smooth if there exists a model $M\prec\mathcal{U}$ such that for any $\lambda\in\mathfrak{M}_{x}(\mathcal{U})$ if $\lambda|_{M}=\mu|_{M}$, then $\lambda=\mu$. If this is the case, we say that $\mu$ is smooth over $M$. We now provide a collection of basic facts. Statements $(i)$, $(iii)$, $(iv)$, and $(v)$ in Fact 2.8 are relatively straightforward to prove and so we leave them as exercises. ###### Fact 2.8. 
Assume that $T$ is any theory and $\mu\in\mathfrak{M}_{x}(\mathcal{U})$ with $M\prec\mathcal{U}$. 1. $(i)$ If $\mu=\operatorname{Av}(\overline{a})$ for some $\overline{a}\in(M^{x})^{<\omega}$, then $\mu$ is smooth over $M$. 2. $(ii)$ If $\mu$ is smooth over $M$, then $\mu$ is finitely approximated over $M$ $($e.g. [16, Proposition 7.10]$)$. 3. $(iii)$ If $\mu$ is finitely approximated over $M$, then $\mu$ is both definable and finitely satisfiable over $M$. 4. $(iv)$ If $\mu$ is definable or finitely satisfiable over $M$, then $\mu$ is $M$-invariant. 5. $(v)$ The measure $\mu$ is definable over $M$ if and only if for every partitioned $\mathcal{L}(M)$-formula $\varphi(x,y)$ and for every $\epsilon>0$, there exist formulas $\psi_{1}(y),...,\psi_{n}(y)\in\mathcal{L}_{y}(M)$ and real numbers $r_{1},...,r_{n}\in[0,1]$ such that $\sup_{q\in S_{y}(M)}|F_{\mu,M}^{\varphi}(q)-\sum_{i=1}^{n}r_{i}\mathbf{1}_{\psi_{i}(y)}(q)|<\epsilon,$ where $\mathbf{1}_{\psi_{i}(y)}$ is the characteristic function of the clopen set $[\psi_{i}(y)]$. Moreover, if $T$ is NIP then the following also hold. 1. $(vi)$ If $\mu$ is invariant over $M$, then $\mu$ is Borel-definable over $M$ $($e.g. [16, Proposition 7.19]$)$. 2. $(vii)$ A measure $\mu$ is definable and finitely satisfiable over $M$ if and only if $\mu$ is finitely approximated over $M$ $($[10, Proposition 3.2]$)$. 3. $(viii)$ Every measure has a "smooth extension". In particular, for any given $M\prec\mathcal{U}$ and $\mu\in\mathfrak{M}_{x}(\mathcal{U})$, there exists some $N$ such that $M\prec N\prec\mathcal{U}$ and a measure $\lambda\in\mathfrak{M}_{x}(\mathcal{U})$ such that $\lambda$ is smooth over $N$ and $\lambda|_{M}=\mu|_{M}$ $($[10, Lemma 2.2]$)$. ###### Proposition 2.9 (T is countable). If $\mu$ is definable, finitely approximated, smooth or $\operatorname{dfs}$, then there exists a countable model $M_{0}$ such that $\mu$ is definable, finitely approximated, smooth or $\operatorname{dfs}$ over $M_{0}$ $($respectively$)$. ###### Proof. We notice that the properties of definability and smoothness only require the existence of $\aleph_{0}$-many $\mathcal{L}(M)$-formulas (by [10, Lemma 2.3] and (v) of Fact 2.8 respectively). If we choose an elementary submodel $M_{0}$ of $M$ containing the parameters from these formulas, then $\mu$ will have the desired property over $M_{0}$. Finitely approximated measures only require the existence of $\aleph_{0}$-many elements of $M$. Choosing an elementary submodel $M_{0}$ of $M$ with these elements demonstrates that $\mu$ is finitely approximated over $M_{0}$. Finally, if $\mu$ is $\operatorname{dfs}$ then $\mu$ is definable over a countable model $M_{0}$. In particular, $\mu$ is invariant over $M_{0}$ and so $\mu$ is also finitely satisfiable over $M_{0}$ by the same argument as in [8, Proposition 4.13]. ∎ ###### Remark 2.10. Assuming $T$ is countable, there are measures (even types) which are finitely satisfiable over a small submodel, but are not finitely satisfiable over a countable submodel. See Proposition 4.11 and Remark 4.12 for an explicit example. ###### Definition 2.11. Let $\mu\in\mathfrak{M}_{x}(\mathcal{U})$, $\nu\in\mathfrak{M}_{y}(\mathcal{U})$ and assume that $\mu$ is Borel-definable over $M$.
Then we define the Morley product of $\mu$ and $\nu$ (denoted $\mu\otimes\nu$) to be the unique Keisler measure in $\mathfrak{M}_{x,y}(\mathcal{U})$ with the following property: for any formula $\varphi(x,y)\in\mathcal{L}_{x,y}(\mathcal{U})$, $\mu\otimes\nu(\varphi(x,y))=\int_{S_{y}(N)}F_{\mu}^{\varphi}d(\nu|_{N}),$ where $N$ is any small elementary submodel of $\mathcal{U}$ containing $M$ and any parameters from $\varphi$, and $\nu|_{N}$ is the associated regular Borel probability measure on the type space $S_{y}(N)$ induced by the restriction of $\nu$ to $N$. We remark that this product is well-defined and the computation does not depend on our choice of $N$ (assuming $N$ contains $M$ and all parameters in $\varphi(x,y)$) (see the discussion after [16, Proposition 7.19]). This observation allows us to grow or shrink the space over which we are integrating, and we will make substantial use of this property in section 5. We end this section with a list of facts about measures and products. ###### Fact 2.12. Assume that $T$ is any theory and $\mu\in\mathfrak{M}_{x}(\mathcal{U})$, $\nu\in\mathfrak{M}_{y}(\mathcal{U})$, and $\lambda\in\mathfrak{M}_{z}(\mathcal{U})$. Assume that $\mu$ and $\nu$ are both $M$-invariant. 1. $(i)$ If $\mu$ is smooth and $\nu$ is Borel definable, then $\mu\otimes\nu=\nu\otimes\mu$ $($see [10, Corollary 2.5]$)$. 2. $(ii)$ If $\mu$ and $\nu$ are definable (over $M$), then $\mu\otimes\nu$ is definable (over $M$) and $\mu\otimes(\nu\otimes\lambda)=(\mu\otimes\nu)\otimes\lambda$ $($see [6, Proposition 2.6]$)$. 3. $(iii)$ If $\mu$ and $\nu$ are smooth (over $M$), then $\mu\otimes\nu$ is smooth (over $M$) $($e.g. [5, Corollary 3.1]$)$. 4. $(iv)$ If $\mu$ is Borel definable (over $M$) and $\nu$ is invariant (over $M$), then $\mu\otimes\nu$ is invariant (over $M$) $($discussion before [16, Exercise 7.20]$)$. 5. $(v)$ If $\mu$ and $\nu$ are $\operatorname{dfs}$ (over $M$), then $\mu\otimes\nu$ is $\operatorname{dfs}$ (over $M$) $($e.g. [6, Proposition 2.10]$)$. Moreover, if $T$ is NIP then the following also hold. 1. $(a)$ If $\mu,\nu$ are invariant then $\mu\otimes(\nu\otimes\lambda)=(\mu\otimes\nu)\otimes\lambda$ $($see [5]$)$. 2. $(b)$ If $\mu$ is $\operatorname{dfs}$ and $\nu$ is invariant, then $\mu\otimes\nu=\nu\otimes\mu$ $($see [10, Theorem 3.2]$)$. ###### Definition 2.13 (T is NIP). Suppose that $\mu\in\mathfrak{M}_{x}(\mathcal{U})$ and $\mu$ is invariant. Then, we define the following measures: 1. (1) $\mu^{(0)}(x_{0})=\mu(x_{0})$. 2. (2) $\mu^{(n)}=\mu(x_{n})\otimes\mu^{(n-1)}(x_{0},...,x_{n-1})$. 3. (3) $\mu^{(\omega)}=\bigcup_{n\in\omega}\mu^{(n)}$ (where $\mu^{(\omega)}\in\mathfrak{M}_{(x_{i})_{i\in\omega}}(\mathcal{U})$). We note that $\mu^{(n)}$ and $\mu^{(\omega)}$ are well-defined by Fact 2.12, and moreover we do not need to worry about the ordering of the parentheses in the product.

## 3. Sequentially approximated types and measures

We begin this section by isolating the property of sequential approximability. We again remark that these classes of objects are a global version of Khanaki's Baire 1 definability [13]. We assume that $T$ is countable, but make no other global assumptions about $T$. As usual, $\mathcal{U}$ is a fixed sufficiently saturated model of $T$. We now define sequentially approximated types and measures. ###### Definition 3.1. Let $p\in S_{x}(\mathcal{U})$ and $\mu\in\mathfrak{M}_{x}(\mathcal{U})$. We say that: 1.
(1) $p$ is sequentially approximated if there exists $M\prec\mathcal{U}$ and a sequence of points $(a_{i})_{i\in\omega}$ in $M^{x}$ such that $\lim_{i\to\infty}\operatorname{tp}(a_{i}/\mathcal{U})=p$ in $S_{x}(\mathcal{U})$. In this case, we say $p$ is sequentially approximated over $M$. 2. (2) $\mu$ is sequentially approximated if there exists $M\prec\mathcal{U}$ and a sequence of points $(\overline{a}_{i})_{i\in\omega}$ in $(M^{x})^{<\omega}$ such that $\lim_{i\to\infty}\operatorname{Av}(\overline{a}_{i})=\mu$ in $\mathfrak{M}_{x}(\mathcal{U})$. In this case, we say $\mu$ is sequentially approximated over $M$. We warn the reader that Definition 3.1 is only meaningful in the context of types and measures over large models. Indeed, if $M$ is a countable model and $T$ is a countable theory, then for every $p\in S_{x}(M)$, there exists a sequence of points $(a_{i})_{i\in\omega}$ in $M^{x}$ such that $\lim_{i\to\infty}\operatorname{tp}(a_{i}/M)=p$ in $S_{x}(M)$. The analogous statement also holds for measures. We also emphasize to the reader that there is a real distinction between a type $p$ being sequentially approximated over a model $M$ and its associated Keisler measure $\delta_{p}$ being sequentially approximated over $M$. Proposition 4.14 gives an example of a type which is not sequentially approximated while its associated Keisler measure is sequentially approximated. However, the other implication holds almost trivially. ###### Observation 3.2. If a type $p$ in $S_{x}(\mathcal{U})$ is sequentially approximated over a model $M$, then the associated Keisler measure $\delta_{p}$ is sequentially approximated over $M$. ###### Proof. If $\lim_{i\to\infty}\operatorname{tp}(a_{i}/\mathcal{U})=p$ in $S_{x}(\mathcal{U})$, then $\lim_{i\to\infty}\delta_{a_{i}}=\delta_{p}$ in $\mathfrak{M}_{x}(\mathcal{U})$ since $\delta:S_{x}(\mathcal{U})\to\mathfrak{M}_{x}(\mathcal{U})$ is a topological embedding. ∎ ### 3.1. Basic properties We now connect sequentially approximated types and measures to standard model-theoretic properties. For the reader's intuition, sequential approximability (at least in the case of measures) should be thought of as a strong version of finite satisfiability over a small model or a weak version of finite approximability. Sequentially approximated types remain a little more mysterious. ###### Proposition 3.3. Assume that $p\in S_{x}(\mathcal{U})$ and $\mu\in\mathfrak{M}_{x}(\mathcal{U})$. 1. ($i$) If $p$ and $\mu$ are sequentially approximated over $M$, then $p$ and $\mu$ are finitely satisfiable over $M$. Even more, $p$ and $\mu$ are finitely satisfiable over a countable elementary submodel of $M$. 2. ($ii$) If $p$ and $\mu$ are sequentially approximated over $M$, then $p$ and $\mu$ are Borel-definable over $M$. 3. ($iii$) If $\mu$ is finitely approximated over $M$, then $\mu$ is sequentially approximated over $M$. $($Warning: In general, this fails for types.$)$ 4. ($iv$) If $T$ is NIP, then $p$ is sequentially approximated over $M$ if and only if $\delta_{p}$ is sequentially approximated over $M$. 5. ($v$) Assume that $k\subseteq\\{1,2,...,n\\}$ and let $\pi_{k}:S_{n}(\mathcal{U})\to S_{k}(\mathcal{U})$ and $\rho_{k}:\mathfrak{M}_{n}(\mathcal{U})\to\mathfrak{M}_{k}(\mathcal{U})$ be the obvious projection maps. If $p\in S_{n}(\mathcal{U})$ and $p$ is sequentially approximated over $M$, then $\pi_{k}(p)$ is sequentially approximated over $M$. Similarly, if $\mu\in\mathfrak{M}_{n}(\mathcal{U})$ is sequentially approximated over $M$ then so is $\rho_{k}(\mu)$. ###### Proof. We prove the claims. 1.
($i$) The first part of $(i)$ is obvious. For the second part, we only need to choose a submodel containing a sequence which sequentially approximates our type or measure. Since $T$ is countable, we can choose a countable model. 2. ($ii$) The proofs for both the type and measure cases are similar, so we prove the measure case. Assume that $(\overline{a}_{i})_{i\in\omega}$ is a sequence of points in $(M^{x})^{<\omega}$ such that $\lim_{i\to\infty}\operatorname{Av}(\overline{a}_{i})=\mu$ in $\mathfrak{M}_{x}(\mathcal{U})$. By part $(i)$, $\mu$ is finitely satisfiable over $M$ and hence $M$-invariant. So, for any partitioned formula $\varphi(x,y)$ in $\mathcal{L}$, the map $F_{\mu}^{\varphi}:S_{y}(M)\to[0,1]$ is well-defined. By sequential approximability, the sequence of continuous functions $\big{(}F_{\operatorname{Av}(\overline{a}_{i})}^{\varphi}\big{)}_{i\in\omega}$ converges pointwise to $F_{\mu}^{\varphi}$. Hence, $F_{\mu}^{\varphi}$ is Baire-1 (and therefore Borel). 3. ($iii$) This follows from an encoding argument. Let $(\varphi_{n}(x,y_{n}))_{n\in\omega}$ be an enumeration of the partitioned $\mathcal{L}$-formulas. For each $n\in\mathbb{N}$, consider the partitioned formula $\theta_{n}(x;y_{0},...,y_{n},z_{*},z_{0},...,z_{n})$ where $|z_{*}|=|z_{i}|=1$ and $\theta_{n}(x;\bar{y},\bar{z}):=\bigwedge_{i\leq n}\left(\left(z_{*}=z_{i}\wedge\bigwedge_{\begin{subarray}{c}j\leq n\\\ j\neq i\end{subarray}}z_{j}\neq z_{*}\right)\to\varphi_{i}(x,y_{i})\right).$ Since $\mu$ is finitely approximated over $M$, for $\epsilon=\frac{1}{n}$, there exists some $\overline{a}_{n}$ in $(M^{x})^{<\omega}$ such that for every $(\bar{b},\bar{c})\in\mathcal{U}^{\bar{y}\bar{z}}$, $|\operatorname{Av}(\overline{a}_{n})(\theta_{n}(x,\bar{b},\bar{c}))-\mu(\theta_{n}(x,\bar{b},\bar{c}))|<\epsilon.$ Notice that $\theta_{n}(x;\bar{y},\bar{z})$ encodes the definable sets which are obtained by the formulas $\varphi_{0}(x,y_{0}),...,\varphi_{n}(x,y_{n})$. In particular, for every $b\in\mathcal{U}^{y_{j}}$ where $j\leq n$, consider the tuple $(\bar{d}_{b},\bar{c}_{j})=(d_{0},...,d_{j-1},b,d_{j+1},...,d_{n},c_{*},c_{0},...,c_{n})$ where the $d_{i}$'s are arbitrary and $c_{*}=c_{l}$ if and only if $l=j$. Then $|\operatorname{Av}(\overline{a}_{n})(\varphi_{j}(x,b))-\mu(\varphi_{j}(x,b))|=|\operatorname{Av}(\overline{a}_{n})(\theta_{n}(x,\bar{d}_{b},\bar{c}_{j}))-\mu(\theta_{n}(x,\bar{d}_{b},\bar{c}_{j}))|.$ So for any $j\leq n$ and $b\in\mathcal{U}^{y_{j}}$, $|\operatorname{Av}(\overline{a}_{n})(\varphi_{j}(x,b))-\mu(\varphi_{j}(x,b))|<\frac{1}{n}.$ It is clear that $\lim_{n\to\infty}\operatorname{Av}(\overline{a}_{n})=\mu$ in $\mathfrak{M}_{x}(\mathcal{U})$. 4. ($iv$) The forward direction is Observation 3.2. We consider the converse. If $\delta_{p}$ is sequentially approximated over $M$ then $\delta_{p}$ is finitely satisfiable over a countable submodel $M_{0}$ by $(i)$ above. Then $p$ is finitely satisfiable over $M_{0}$ and so by Theorem 1.1, $p$ is sequentially approximated over $M_{0}$ (and also over $M$). 5. ($v$) Simply consider the approximating sequence restricted to the appropriate coordinates. ∎ ###### Proposition 3.4. A measure $\mu$ is sequentially approximated and definable over $M$ if and only if $\mu$ is finitely approximated over $M$. ###### Proof. We first prove the forward direction. The proof is similar to the proof of [8, Theorem 4.8]. Fix $\epsilon>0$. For any partitioned $\mathcal{L}$-formula $\varphi(x,y)$, consider the map $F_{\mu}^{\varphi}:S_{y}(M)\to[0,1]$.
Let $(\overline{a}_{i})_{i\in\omega}$ be a sequence of points in $(M^{x})^{<\omega}$ such that $\lim_{i\to\infty}\operatorname{Av}(\overline{a}_{i})=\mu$ in $\mathfrak{M}_{x}(\mathcal{U})$. Observe that each map $F_{\operatorname{Av}(\overline{a}_{i})}^{\varphi}:S_{y}(M)\to[0,1]$ is continuous and the sequence $\big{(}F_{\operatorname{Av}(\overline{a}_{i})}^{\varphi}\big{)}_{i\in\omega}$ converges pointwise to $F_{\mu}^{\varphi}$. Since $\mu$ is definable, the map $F_{\mu}^{\varphi}$ is continuous. By the Riesz representation theorem and dominated convergence theorem, we have that $\big{(}F_{\operatorname{Av}(\overline{a}_{i})}^{\varphi}\big{)}_{i\in\omega}$ converges weakly to $F_{\mu}^{\varphi}$ in $C(S_{y}(M))$. By a standard application of Mazur's lemma, there exists a sequence of functions $(g_{j})_{j\in\omega}$ such that each $g_{j}$ is a rational convex combination of $\\{F_{\operatorname{Av}(\overline{a}_{i})}^{\varphi}:i\leq n_{j}\\}$ for some natural number $n_{j}$ and the sequence $(g_{j})_{j\in\omega}$ converges uniformly to $F_{\mu}^{\varphi}$. Choose $m\in\mathbb{N}$ so that $\sup_{p\in S_{y}(M)}|F_{\mu}^{\varphi}(p)-g_{m}(p)|<\epsilon.$ By construction, $g_{m}=F_{\operatorname{Av}(\overline{c})}^{\varphi}$ for some $\overline{c}\in(M^{x})^{<\omega}$. Notice that $\sup_{b\in\mathcal{U}^{y}}|\mu(\varphi(x,b))-\operatorname{Av}(\overline{c})(\varphi(x,b))|<\epsilon.$ For the converse, $\mu$ is definable over $M$ by $(iii)$ of Fact 2.8. Moreover, $\mu$ is sequentially approximated over $M$ by $(iii)$ of Proposition 3.3. ∎ We now show that sequentially approximated measures commute with definable measures. It is well-known that in the context of NIP theories definable measures commute with measures which are finitely satisfiable over a small model (see [10, Lemma 3.1] or [16, Proposition 7.22]). Recently, it was shown that in general, measures which are finitely satisfiable over a small model (even $\operatorname{dfs}$ measures) do not always commute with definable measures (see [7, Proposition 7.14]). We first present a topological proof (in NIP theories) which shows that measures which are finitely satisfiable over a small model commute with definable measures. We will then modify this proof (by replacing an instance of continuity by the dominated convergence theorem) to show that sequentially approximated measures commute with definable ones in any theory. Recall the following facts. ###### Fact 3.5. Let $\nu\in\mathfrak{M}_{y}(\mathcal{U})$, $N\prec\mathcal{U}$, and $\varphi(x,y)$ be an $\mathcal{L}_{x,y}(N)$ formula. Let $\mathfrak{M}_{x}(\mathcal{U},N)$ denote the collection of measures in $\mathfrak{M}_{x}(\mathcal{U})$ which are finitely satisfiable over $N$. 1. ($i$) If $\nu$ is definable over $N$, then the map from $\mathfrak{M}_{x}(\mathcal{U})$ to $[0,1]$ defined via $\mu\to\nu\otimes\mu(\varphi(x,y))$ is continuous $($[7, Lemma 5.4]$)$. 2. ($ii$) $($T is NIP$)$ If $\nu$ is any measure, then the map from $\mathfrak{M}_{x}(\mathcal{U},N)$ to $[0,1]$ defined via $\mu\to\mu\otimes\nu(\varphi(x,y))$ is well-defined and continuous $($[4, Proposition 6.3]$)$. We remark that statement $(ii)$ of Fact 3.5 requires NIP for two reasons. First, it is not true in general that measures which are finitely satisfiable over a small model are Borel definable. In NIP theories, this is true ($(vi)$ of Fact 2.8). Secondly, the proof that this map is continuous relies on the existence of a smooth extension of $\nu|_{N}$. Without NIP, this map need not be continuous.
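Before turning to the proofs, and purely for the reader's convenience, we record the computation behind the symmetry step that both arguments below describe as easy to check. Suppose $\overline{a}=(a_{1},...,a_{n})\in(N^{x})^{<\omega}$, $\nu\in\mathfrak{M}_{y}(\mathcal{U})$ is invariant over $N$, and $\varphi(x,y)\in\mathcal{L}_{x,y}(N)$. Then

$\int_{S_{y}(N)}F_{\operatorname{Av}(\overline{a})}^{\varphi}d(\nu|_{N})=\frac{1}{n}\sum_{i=1}^{n}\nu(\varphi(a_{i},y))=\int_{S_{x}(N)}F_{\nu}^{\varphi^{*}}d(\operatorname{Av}(\overline{a})|_{N}),$

since $F_{\operatorname{Av}(\overline{a})}^{\varphi}(q)$ is the proportion of indices $i$ with $\varphi(a_{i},y)\in q$ for each $q\in S_{y}(N)$, while $\operatorname{Av}(\overline{a})|_{N}$ is the average of the Dirac measures at the types $\operatorname{tp}(a_{i}/N)$ and $F_{\nu}^{\varphi^{*}}(\operatorname{tp}(a_{i}/N))=\nu(\varphi(a_{i},y))$.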
The first proof of the following proposition can be found in [10]. ###### Proposition 3.6 (T is NIP). Assume that $\mu\in\mathfrak{M}_{x}(\mathcal{U})$ and $\nu\in\mathfrak{M}_{y}(\mathcal{U})$. If $\mu$ is finitely satisfiable over a small model and $\nu$ is definable, then $\mu\otimes\nu=\nu\otimes\mu$. ###### Proof. Fix a formula $\varphi(x,y)\in\mathcal{L}_{x,y}(\mathcal{U})$. Choose $N$ such that $\mu$ is finitely satisfiable over $N$, $\nu$ is definable over $N$, and $N$ contains all the parameters from $\varphi$. Since $\mu$ is finitely satisfiable over $N$, there exists a net of measures $(\operatorname{Av}(\overline{a}_{i}))_{i\in I}$ such that each $\overline{a}_{i}\in(N^{x})^{<\omega}$ and $\lim_{i\in I}\operatorname{Av}(\overline{a}_{i})=\mu$ in $\mathfrak{M}_{x}(\mathcal{U})$ ($(iv)$ of Fact 2.2). By Fact 3.5 $\displaystyle\mu\otimes\nu(\varphi(x,y))=\int_{S_{y}(N)}F_{\mu}^{\varphi}d(\nu|_{N})$ $\displaystyle\overset{(a)}{=}\ \lim_{i\in I}\int_{S_{y}(N)}F_{\operatorname{Av}(\overline{a}_{i})}^{\varphi}d(\nu|_{N})$ $\displaystyle\overset{(b)}{=}\ \lim_{i\in I}\int_{S_{x}(N)}F_{\nu}^{\varphi^{*}}d(\operatorname{Av}(\overline{a}_{i})|_{N})$ $\displaystyle\overset{(c)}{=}\ \int_{S_{x}(N)}F_{\nu}^{\varphi^{*}}d(\mu|_{N})=\nu\otimes\mu(\varphi(x,y)).$ Where the equalities $(a)$ and $(c)$ follow from the fact that continuous functions commute with nets. The equality $(b)$ is simple to check and is also justified by statement $(i)$ of Fact 2.12. ∎ ###### Proposition 3.7. Sequentially approximated and definable measures commute. Assume that $\mu\in\mathfrak{M}_{x}(\mathcal{U})$ and $\nu\in\mathfrak{M}_{y}(\mathcal{U})$. If $\mu$ is sequentially approximated and $\nu$ is definable, then $\mu\otimes\nu=\nu\otimes\mu$. ###### Proof. Fix a formula $\varphi(x,y)\in\mathcal{L}_{x,y}(\mathcal{U})$. Choose $N$ such that $\mu$ is sequentially approximated over $N$, $\nu$ is definable over $N$, and $N$ contains all the parameters from $\varphi$. Let $(\overline{a}_{i})_{i\in\omega}$ be a sequence of points in $(N^{x})^{<\omega}$ such that $\lim_{i\to\infty}\operatorname{Av}(\overline{a}_{i})=\mu$ in $\mathfrak{M}_{x}(\mathcal{U})$. Now we consider the following computation. $\displaystyle\mu\otimes\nu(\varphi(x,y))=\int_{S_{y}(N)}F_{\mu}^{\varphi}d(\nu|_{N})$ $\displaystyle\overset{(a)}{=}\ \lim_{i\to\infty}\int_{S_{y}(N)}F_{\operatorname{Av}(\overline{a}_{i})}^{\varphi}d(\nu|_{N})$ $\displaystyle\overset{(b)}{=}\ \lim_{i\to\infty}\int_{S_{x}(N)}F_{\nu}^{\varphi^{*}}d(\operatorname{Av}(\overline{a}_{i})|_{N})$ $\displaystyle\overset{(c)}{=}\ \int_{S_{x}(N)}F_{\nu}^{\varphi^{*}}d(\mu|_{N})=\nu\otimes\mu(\varphi(x,y)).$ Where the equality $(a)$ now holds from the dominated convergence theorem, equality $(c)$ holds from $(i)$ of Fact 3.5 and the observation that continuous functions commute with nets, and equality $(b)$ is easy to check (also $(i)$ of Fact 2.12). ∎ ###### Corollary 3.8. Let $\mu\in\mathfrak{M}_{x}(\mathcal{U})$ and $\nu\in\mathfrak{M}_{y}(\mathcal{U})$. If $\mu$ is finitely approximated and $\nu$ is definable, then $\mu\otimes\nu=\nu\otimes\mu$. ###### Proof. By (iii) of Proposition 3.3, $\mu$ is sequentially approximated. Apply Proposition 3.7. ∎ ### 3.2. Egorov’s theorem It is interesting to note that sequentially approximated measures are not too far away from finitely approximated measures. In particular, if we fix some measure on the parameter space, any sequentially approximated measure is almost finitely approximated. 
This result is in a similar vein as Khanaki’s almost definable coheirs in the local setting ([12]). A direct application of Egorov’s theorem gives our result. ###### Theorem 3.9 (Egorov’s Theorem). Let $(X,B,\mu)$ be a finite measure space. Assume that $(f_{i})_{i\in\omega}$ is a sequence of measurable functions from $X\to\mathbb{R}$ such that $(f_{i})_{i\in\omega}$ converges to a function $f$ pointwise. Then for every $\epsilon>0$ there exists a $Y_{\epsilon}\in B$ such that $f_{i}|_{Y_{\epsilon}}$ converges to $f|_{Y_{\epsilon}}$ uniformly on $Y_{\epsilon}$ and $\mu(X\backslash Y_{\epsilon})<\epsilon$. A proof of Egorov’s theorem can be found in [19, Theorem 3.2.4.1]. Restating this theorem in our context gives the following result. ###### Corollary 3.10. Assume that $p$ and $\mu$ are sequentially approximated over $M$. Let $\nu\in\mathfrak{M}_{y}(M)$. Then, for every $\epsilon>0$, there exists a Borel set $Y_{\epsilon}\subset S_{y}(M)$ such that 1. (1) $\nu(Y_{\epsilon})>1-\epsilon$. 2. (2) For every $\delta>0$ and every partitioned $\mathcal{L}$-formula $\varphi(x,y)$, there exists $\overline{a}_{\delta}$ in $(M^{x})^{<\omega}$ such that for every $b\in\mathcal{U}^{y}$ so that $\operatorname{tp}(b/M)\in Y_{\epsilon}$, we have $|\mu(\varphi(x,b))-\operatorname{Av}(\overline{a}_{\delta})(\varphi(x,b))|<\delta.$ 3. (3) For every partitioned $\mathcal{L}$-formula $\varphi(x,y)$, there exists $a$ in $M^{x}$ such that for every $b\in\mathcal{U}^{y}$ so that $\operatorname{tp}(b/M)\in Y_{\epsilon}$, we have $\varphi(x,b)\in p\iff\models\varphi(a,b).$ ## 4\. Generically stable types Throughout this section, we let $T$ be a countable theory and $\mathcal{U}$ be a monster model of $T$. We show that if a type $p$ is generically stable over a small submodel $M$ of $\mathcal{U}$, then $p$ is sequentially approximated over $M$. Toward proving this result, we actually prove a slightly stronger lemma than what is necessary. Namely, let $p$ be a $\operatorname{dfs}$ type and let $M$ be a countable model such that $p$ is $\operatorname{dfs}$ over $M$ (for any $\operatorname{dfs}$ type, these models always exist by (iv) of Fact 2.6). We show that there exists a special sequence of points in $M$ such that the limiting behavior of this sequence resembles a Morley sequence in $p$ over $M$. In the case where $p$ is generically stable over $M$, we show that this special sequence converges to $p$. This is enough to show the result since every generically stable type is generically stable over some countable model. We now begin with a discussion on eventually indiscernible sequences, which were introduced in [17]. ###### Definition 4.1. Let $(c_{i})_{i\in\omega}$ be a sequence of points in $\mathcal{U}^{x}$ and $A\subset\mathcal{U}$. We say that $(c_{i})_{i\in\omega}$ is an eventually indiscernible sequence over $A$ if for any formula $\varphi(x_{0},...,x_{k})$ in $\mathcal{L}_{(x_{i})_{i\in\omega}}(A)$, there exists some natural number $N_{\varphi}$ such that for any indices $n_{k}>....>n_{0}>N_{\varphi}$ and $m_{k}>...>m_{0}>N_{\varphi}$, we have that $\mathcal{U}\models\varphi(c_{n_{0}},...,c_{n_{k}})\leftrightarrow\varphi(c_{m_{0}},...,c_{m_{k}}).$ ###### Fact 4.2. Let $(b_{i})_{i\in\omega}$ be a sequence of points in $\mathcal{U}^{x}$ and $A\subset\mathcal{U}$ such that $|A|=\aleph_{0}$. Then there exists a subsequence $(c_{i})_{i\in\omega}$ of $(b_{i})_{i\in\omega}$ such that $(c_{i})_{i\in\omega}$ is eventually indiscernible over $A$. 
The proof is a standard application of Ramsey’s theorem and taking the diagonal (as mentioned in [17]). We prove a “continuous” version of this fact in the next section and the proof is analogous (see Proposition 5.3 for details). For any eventually indiscernible sequence $(c_{i})_{i\in\omega}$ over a set of parameters $A$, we can associate to this sequence a unique type in $S_{(x_{i})_{i\in\omega}}(A)$. We call this the eventual Ehrenfeucht-Mostowski type (or $\operatorname{EEM}$-type) of $(c_{i})_{i\in\omega}$ over $A$. We now give the formal definition. ###### Definition 4.3. Let $(b_{i})_{i\in\omega}$ be a sequence of points in $\mathcal{U}^{x}$ and $A\subset\mathcal{U}$. Then the eventual Ehrenfeucht-Mostowski type (or EEM-type) of $(b_{i})_{i\in\omega}$ over $A$, which is written as $\operatorname{EEM}((b_{i})_{i\in\omega}/A)$, is a subset of $\mathcal{L}_{(x_{i})_{i\in\omega}}(A)$ defined as follows: Let $\varphi(x_{i_{0}},...,x_{i_{k}})$ be a formula in $\mathcal{L}_{(x_{i})_{i\in\omega}}(A)$ where the indices are ordered $i_{0}<...<i_{k}$. Then $\varphi(x_{i_{0}},...,x_{i_{k}})\in\operatorname{EEM}((b_{i})_{i\in\omega}/A)$ if and only if there exists an $N_{\varphi}$ such that for any $n_{k}>...>n_{0}>N_{\varphi}$, we have that $\mathcal{U}\models\varphi(b_{n_{0}},...,b_{n_{k}})$. Notice that an $\operatorname{EEM}$-type of a sequence is always indiscernible in the following sense: If we have indices $i_{0},...,i_{k}$ and $j_{0},...,j_{k}$ where $i_{0}<...<i_{k}$ and $j_{0}<...<j_{k}$, then $\varphi(x_{i_{0}},...,x_{i_{k}})$ is in the $\operatorname{EEM}$-type of $(b_{i})_{i\in\omega}$ over $A$ if and only if $\varphi(x_{j_{0}},...,x_{j_{k}})$ is. This follows directly from the definition. We have some basic observations. ###### Observation 4.4. Let $(c_{i})_{i\in\omega}$ be an eventually indiscernible sequence over $A$. 1. (1) Then $\operatorname{EEM}((c_{i})_{i\in\omega}/A)$ is a complete type in $S_{(x_{i})_{i\in\omega}}(A)$. 2. (2) If $(c_{i})_{i\in\omega}$ is $A$-indiscernible, then $\operatorname{EEM}((c_{i})_{i\in\omega}/A)=\operatorname{EM}((c_{i})_{i\in\omega}/A)$. 3. (3) If $\operatorname{tp}((b_{i})_{i\in\omega}/A)=\operatorname{EEM}((c_{i})_{i\in\omega}/A)$, then $(b_{i})_{i\in\omega}$ is $A$-indiscernible. ###### Proof. Clear from the definitions and discussion above. ∎ We warn the reader that an eventually indiscernible sequence need not “realize” its own $\operatorname{EEM}$-type. Consider the following example: ###### Example 4.5. Let $T_{<}$ be the theory of $(\mathbb{R};<)$. Let $\mathcal{U}$ be a monster model of $T_{<}$ and $\mathbb{R}\prec\mathcal{U}$. Then the sequence $(a_{i})_{i\in\omega}$ where $a_{i}=i$ is eventually indiscernible over $\mathbb{R}$ while the sequence $(b_{i})_{i\in\omega}$ where $b_{i}=i(-1)^{i}$ is not. Clearly, $(a_{i})_{i\in\omega}$ is not $\mathbb{R}$-indiscernible. Moreover, for each $r\in\mathbb{R}$, the formula $x_{0}>r$ is in $\operatorname{EEM}((a_{i})_{i\in\omega}/\mathbb{R})$ while $a_{1}>2$ clearly does not hold. So if $\operatorname{tp}((c_{i})_{i\in\omega}/\mathbb{R})=\operatorname{EEM}((a_{i})_{i\in\omega}/\mathbb{R})$, then $c_{i}>\mathbb{R}$ for each $i\in\omega$. The next two lemmas prove the bulk of this section’s main theorem and their proofs are similar to the proof of Theorem 1.1. The proof strategy for this theorem is the following: If $p$ is in $S_{x}(\mathcal{U})$ and $p$ is $\operatorname{dfs}$, then we can find a countable model $M$ such that $p$ is $\operatorname{dfs}$ over $M$.
Let $I$ be a Morley sequence in $p$ over $M$. Using the fact that $p$ is finitely satisfiable over $M$, we can find a sequence of points in $M^{x}$ which converges to $p|_{MI}$ in $S_{x}(MI)$. After moving to an eventually indiscernible subsequence, we show that the $\operatorname{EEM}$-type of this eventually indiscernible sequence is precisely $p^{\omega}|_{M}$. With the stronger assumption that our type $p$ is generically stable (instead of just $\operatorname{dfs}$), we show that this eventually indiscernible subsequence must converge to $p$ in $S_{x}(\mathcal{U})$. ###### Lemma 4.6. Suppose $p$ is in $S_{x}(\mathcal{U})$ and $p$ is $\operatorname{dfs}$ over $M$ where $|M|=\aleph_{0}$. Then there exists a sequence $(c_{i})_{i\in\omega}$ in $M^{x}$ such that $\operatorname{EEM}((c_{i})_{i\in\omega}/M)=p^{\omega}|_{M}$. ###### Proof. Let $I=(a_{i})_{i\in\omega}$ be a Morley sequence in $p$ over $M$. Since $T$, $M$, and $I$ are countable, $\mathcal{L}_{x}(MI)$ is countable. It follows that $p|_{MI}$ is countable and we may enumerate this collection of formulas as $(\varphi_{i}(x))_{i\in\omega}$. Since $p$ is $\operatorname{dfs}$ over $M$, in particular $p$ is finitely satisfiable over $M$. For each natural number $n$, we choose $b_{n}$ in $M^{x}$ such that $\mathcal{U}\models\bigwedge_{j\leq n}\varphi_{j}(b_{n})$. By construction, we have that $\lim_{i\to\infty}\operatorname{tp}(b_{i}/MI)=p|_{MI}$ in $S_{x}(MI)$. By Fact 4.2, we may choose a subsequence $(c_{i})_{i\in\omega}$ of $(b_{i})_{i\in\omega}$ such that $(c_{i})_{i\in\omega}$ is eventually indiscernible over $MI$. For ease of notation, we write $(c_{i})_{i\in\omega}$ as $J$. We now show that $\operatorname{EEM}(J/M)=\operatorname{EM}(I/M)=p^{\omega}|_{M}$. We remind the reader that $\operatorname{EM}(I/M)=p^{\omega}|_{M}$ follows directly from the definition of a Morley sequence. We prove the first equality by induction on the number of free variables occurring in a formula. We begin with the base case. It suffices to show that for every $\varphi(x_{0})\in\mathcal{L}_{x_{0}}(M)$, if $\varphi(x_{0})\in\operatorname{EM}(I/M)$, then $\varphi(x_{0})\in\operatorname{EEM}(J/M)$. Notice that since $\lim_{n\to\infty}\operatorname{tp}(b_{n}/MI)=p|_{MI}$ and $(c_{i})_{i\in\omega}$ is a subsequence of $(b_{n})_{n\in\omega}$, we have $\lim_{i\to\infty}\operatorname{tp}(c_{i}/MI)=p|_{MI}$. This clearly implies the base case. Fix $k$ and suppose that for any formula $\theta(x_{0},...,x_{k})$ in $\mathcal{L}_{x_{0},...,x_{k}}(M)$, we have that $\theta(x_{0},...,x_{k})\in\operatorname{EM}(I/M)$ if and only if $\theta(x_{0},...,x_{k})\in\operatorname{EEM}(J/M)$. Towards a contradiction, we assume that $\neg\theta(x_{0},...,x_{k+1})\in\operatorname{EEM}(J/M)$ and $\theta(x_{0},...,x_{k+1})\in\operatorname{EM}(I/M)$. Since $\neg\theta(\overline{x})\in\operatorname{EEM}(J/M)$, there exists some natural number $N_{\theta_{1}}$ such that for any $n_{k+1}>...>n_{0}>N_{\theta_{1}}$, we have that $\mathcal{U}\models\neg\theta(c_{n_{0}},...,c_{n_{k+1}})$. Since $\theta(\overline{x})\in\operatorname{EM}(I/M)$, we conclude that $\mathcal{U}\models\theta(a_{0},...,a_{k+1})$. Since $p$ is $\operatorname{dfs}$ over $M$, $I$ is totally indiscernible over $M$ by Fact 2.6. Therefore, $\mathcal{U}\models\theta(a_{k+1},a_{0},...,a_{k})$ and so $\theta(x,a_{0},...,a_{k})\in p|_{Ma_{0},...,a_{k}}$.
Since $\lim_{i\to\infty}\operatorname{tp}(c_{i}/MI)=p|_{MI}$, there exists some $N_{\theta_{2}}$ such that for every $n>N_{\theta_{2}}$, we have that $\mathcal{U}\models\theta(c_{n},a_{0},...,a_{k})$. Choose $n_{*}>\max\\{N_{\theta_{1}},N_{\theta_{2}}\\}$. Then the formula $\theta(c_{n_{*}},x_{0},...,x_{k})\in\operatorname{tp}(a_{0},...,a_{k}/M)$. By our induction hypothesis, we have that $\theta(c_{n_{*}},\overline{x})\in\operatorname{EEM}(J/M)$ and so there exists $N_{\theta_{3}}$ such that for any $m_{k}>...>m_{0}>N_{\theta_{3}}$, we have that $\mathcal{U}\models\theta(c_{n_{*}},c_{m_{0}},...,c_{m_{k}})$. Now consider what happens when $m_{0}>\max\\{N_{\theta_{3}},n_{*}\\}$. Then $m_{k}>...>m_{0}>n_{*}>N_{\theta_{1}}$ and so $\mathcal{U}\models\neg\theta(c_{n_{*}},c_{m_{0}},...,c_{m_{k}})$ by our assumption. However, $m_{k}>...>m_{0}>N_{\theta_{3}}$ and therefore $\mathcal{U}\models\theta(c_{n_{*}},c_{m_{0}},...,c_{m_{k}})$. This is a contradiction. ∎ ###### Lemma 4.7. Suppose $p$ is in $S_{x}(\mathcal{U})$ and $M\prec\mathcal{U}$. Assume that $p$ is generically stable over $M$. If $(c_{i})_{i\in\omega}$ is a sequence in $M^{x}$ such that $\operatorname{EEM}((c_{i})_{i\in\omega}/M)=p^{\omega}|_{M}$, then $\lim_{i\to\infty}\operatorname{tp}(c_{i}/\mathcal{U})=p$. ###### Proof. Let $p$, $(c_{i})_{i\in\omega}$ and $M$ be as in the statement of the lemma. Let $J=(c_{i})_{i\in\omega}$. We first argue that the sequence of global types $(\operatorname{tp}(c_{i}/\mathcal{U}))_{i\in\omega}$ converges and then argue that this sequence converges to $p$. Claim 1: The sequence $(\operatorname{tp}(c_{i}/\mathcal{U}))_{i\in\omega}$ converges to some type in $S_{x}(\mathcal{U})$. It suffices to argue that for any formula $\psi(x)\in\mathcal{L}_{x}(\mathcal{U})$, $\lim_{i\to\infty}\mathbf{1}_{\psi}(c_{i})$ exists (recall that $\mathbf{1}_{\psi(x)}$ is the characteristic function of the definable set $\psi(x)$). Assume not. Then we may choose a subsequence $(c_{i}^{\prime})_{i\in\omega}$ of $(c_{i})_{i\in\omega}$ such that $\mathcal{U}\models\psi(c_{i}^{\prime})\leftrightarrow\neg\psi(c_{i+1}^{\prime})$. For notational purposes, we also denote $(c^{\prime}_{i})_{i\in\omega}$ as $J^{\prime}$. It is clear that $(c_{i}^{\prime})_{i\in\omega}$ is also eventually indiscernible over $M$ and $\operatorname{EEM}((c_{i}^{\prime})_{i\in\omega}/M)=\operatorname{EEM}((c_{i})_{i\in\omega}/M)$. By using $J^{\prime}$, one can show that the following type is finitely consistent: $\Theta_{1}=\operatorname{EEM}(J^{\prime}/M)\cup\bigcup_{\textit{$i$ is even}}\\{\psi(x_{i})\wedge\neg\psi(x_{i+1})\\}.$ Let $(d_{i})_{i\in\omega}$ realize this type. Then $(d_{i})_{i\in\omega}$ is a Morley sequence in $p$ over $M$ because $\operatorname{EM}((d_{i})_{i\in\omega}/M)=\operatorname{EEM}(J^{\prime}/M)=\operatorname{EEM}(J/M)=p^{\omega}|_{M}.$ Then $\mathcal{U}\models\psi(d_{i})$ if and only if $i$ is even. This contradicts generic stability since $\lim_{i\to\infty}\operatorname{tp}(d_{i}/\mathcal{U})$ does not converge. Claim 2: The sequence $(\operatorname{tp}(c_{i}/\mathcal{U}))_{i\in\omega}$ converges to $p$. Again, assume not. By Claim 1, $\lim_{i\to\infty}\operatorname{tp}(c_{i}/\mathcal{U})=q$ for some $q\in S_{x}(\mathcal{U})$. By assumption, $q\neq p$ and so there exists a formula $\psi(x)$ such that $\psi(x)\in p$ and $\neg\psi(x)\in q$. Since $(\operatorname{tp}(c_{i}/\mathcal{U}))_{i\in\omega}$ converges to $q$, there is an $N$ such that for every $n>N$, we have that $\mathcal{U}\models\neg\psi(c_{n})$.
By a similar argument as in the previous claim, one can show the following type is finitely consistent: $\Theta_{2}=\operatorname{EEM}(J/M)\cup\bigcup_{i\in\omega}\\{\neg\psi(x_{i})\\}.$ Again, we let $(d_{i})_{i\in\omega}$ realize this type. Then $(d_{i})_{i\in\omega}$ is a Morley sequence in $p$ over $M$ and we have that $\lim_{i\to\infty}\operatorname{tp}(d_{i}/\mathcal{U})\neq p$ in $S_{x}(\mathcal{U})$. This again contradicts the definition of generic stability. ∎ ###### Theorem 4.8. Suppose $p$ is in $S_{x}(\mathcal{U})$ and $p$ is generically stable (over $M$). Then $p$ is sequentially approximated (over $M$). ###### Proof. If $p$ is generically stable, then $p$ is generically stable over a countable submodel $M_{0}$ contained in $M$ by Fact 2.6. Then $p$ is $\operatorname{dfs}$ over $M_{0}$ and so by Lemma 4.6, one can choose $(c_{i})_{i\in\omega}$ where each $c_{i}\in M_{0}^{x}$ and $\operatorname{EEM}((c_{i})_{i\in\omega}/M_{0})=p^{\omega}|_{M_{0}}$. By Lemma 4.7, $\lim_{i\to\infty}\operatorname{tp}(c_{i}/\mathcal{U})=p$. ∎ ###### Corollary 4.9. Assume that $T^{\prime}$ is a (possibly uncountable) theory in the language $\mathcal{L^{\prime}}$, $\mathcal{U}^{\prime}\models T^{\prime}$, and $M^{\prime}$ is a submodel of $\mathcal{U}^{\prime}$. Assume that $p$ is generically stable over $M^{\prime}$. Then for any countable collection of formulas $\Delta=\\{\psi_{i}(x,y_{i})\\}_{i\in\omega}$ in $\mathcal{L^{\prime}}$, there exists a sequence of points $(c_{i})_{i\in\omega}$ each in $(M^{\prime})^{x}$ such that $\lim_{i\to\infty}\operatorname{tp}_{\Delta}(c_{i}/\mathcal{U})=p|_{\Delta}$. ###### Proof. Let $\mathcal{L}$ be a countable sublanguage of $\mathcal{L^{\prime}}$ containing all the formulas in $\Delta$. The corresponding type $p|_{\mathcal{L}}$ is generically stable over the model $M$ where $M=M^{\prime}|_{\mathcal{L}}$ (see [6, Remark 3.3]). Hence we may apply Theorem 4.8. ∎ ### 4.1. Examples and non-examples We begin this subsection by collecting the known examples of sequentially approximated types. We then go on to give two examples of types which are not sequentially approximated (over any model). ###### Observation 4.10. Assume that $p\in S_{x}(\mathcal{U})$ and let $M$ be a small elementary submodel. Then, $p$ is sequentially approximated over $M$ if 1. ($i$) $T$ is stable, and $p$ is invariant over $M$, 2. ($ii$) $T$ is NIP, $|M|=\aleph_{0}$, and $p$ is finitely satisfiable over $M$, or 3. ($iii$) $p$ is generically stable over $M$. We just proved $(iii)$. Clearly, $(i)$ follows from $(iii)$ (we remark that it also follows from $(ii)$). As noted previously, the proof of $(ii)$ is precisely [17, Lemma 2.8]. We now exhibit some concrete examples of types which are not sequentially approximated. We begin by describing a type in an NIP theory which is finitely satisfiable over a small model but not sequentially approximated (and its associated Keisler measure is not sequentially approximated either). We then discuss a finitely approximated type which is not sequentially approximated. ###### Proposition 4.11. Let $\omega_{1}$ be the first uncountable ordinal, $M=(\omega_{1};<)$ with the usual ordering, and let $T_{<}$ be the theory of $M$ in the language $\\{<\\}$. Recall that $T_{<}$ is NIP. Let $p\in S_{x}(\omega_{1})$ be the complete type extending $\\{\alpha<x:\alpha<\omega_{1}\\}$. Let $\mathcal{U}$ be a monster model of $T_{<}$ such that $M\prec\mathcal{U}$ and let $p_{*}\in S_{x}(\mathcal{U})$ be the unique global coheir of $p$.
Then, $p_{*}$ is not sequentially approximated over any model. ###### Proof. Assume for the sake of contradiction that $p_{*}$ is sequentially approximated over some model $N$. Then there exists a sequence of points $(b_{i})_{i\in\omega}$ in $N$ such that $\lim_{i\to\infty}\operatorname{tp}(b_{i}/\mathcal{U})=p_{*}$ in $S_{x}(\mathcal{U})$. There is an infinite subsequence which is either strictly increasing or strictly decreasing, and so without loss of generality, $(b_{i})_{i\in\omega}$ has one of these two properties. First assume that $(b_{i})_{i\in\omega}$ is strictly increasing. Notice that $b_{i}<x\in p_{*}$. Since $p_{*}$ is a coheir of $p$, $p_{*}$ is finitely satisfiable over $\omega_{1}$. So, for each $b_{i}$ there exists $\alpha$ in $\omega_{1}$ such that $b_{i}<\alpha$. Now, for each $b_{i}$, we define $\alpha_{i}:=\min\\{\alpha\in\omega_{1}:\mathcal{U}\models b_{i}<\alpha\\}$. Since $\omega_{1}$ is well-ordered, $\alpha_{i}$ is well-defined. We let $\beta$ be the supremum (in $\omega_{1}$) of $\\{\alpha_{i}:i\in\omega\\}$. Then $\mathcal{U}\models b_{i}<\beta$ for each $i\in\omega$, and so $x<\beta\in p_{*}$; but $\beta<x\in p_{*}$, a contradiction. Now we assume that $(b_{i})_{i\in\omega}$ is a strictly decreasing subsequence. Notice that for each $i\in\omega$, $b_{i}>x\in p_{*}$. Let $\Theta(x)=\\{\alpha<x:\alpha\in\omega_{1}\\}\ \cup\\{x<b_{i}:i\in\omega\\}$. By compactness, choose $c_{\infty}$ in $\mathcal{U}$ satisfying $\Theta(x)$. Since $p_{*}$ is finitely satisfiable over $\omega_{1}$, we have $c_{\infty}>x\in p_{*}$. But since $\mathcal{U}\models b_{i}>c_{\infty}$ for each $i\in\omega$, we have that $x>c_{\infty}\in p_{*}$, a contradiction. ∎ ###### Remark 4.12. The type $p_{*}$ in Proposition 4.11 is finitely satisfiable over a small model, but not finitely satisfiable over any countable submodel by Theorem 1.1. ###### Proposition 4.13. Let $p_{*}$ be as in Proposition 4.11. Then the associated Keisler measure $\delta_{p_{*}}$ is not sequentially approximated. ###### Proof. Clear from $(iv)$ of Proposition 3.3. ∎ ###### Proposition 4.14. Let $T^{2}_{s}$ be the theory of the random $K_{s}$-free graph in the language $\mathcal{L}=\\{E(x,y)\\}$. Let $p_{*}$ be the unique global complete type extending the formulas $\\{\neg E(x,b):b\in\mathcal{U}\\}$. Then, $\delta_{p_{*}}$ is sequentially approximated (even finitely approximated over any submodel) but $p_{*}$ is not sequentially approximated. Moreover, $T^{2}_{s}$ admits no (non-realized) sequentially approximated types. ###### Proof. The proof that $\delta_{p_{*}}$ is finitely approximated can be found in [6, Theorem 5.8]. By $(iii)$ of Proposition 3.3, $\delta_{p_{*}}$ is sequentially approximated. By $(v)$ of Proposition 3.3, it suffices to show that there are no non-realized types in one variable which are sequentially approximated. Let $p$ be any non-realized type in $S_{1}(\mathcal{U})$ and assume that $(b_{i})_{i\in\omega}$ is a sequence of points in $\mathcal{U}^{x}$ such that $\lim_{i\to\infty}\operatorname{tp}(b_{i}/\mathcal{U})=p$. Since $p$ is non-realized, we may assume that the points in $(b_{i})_{i\in\omega}$ are distinct. Then, by Ramsey’s theorem, there is a subsequence which is either independent or complete. It cannot be complete, because that would violate $K_{s}$-freeness. Therefore, $(b_{i})_{i\in\omega}$ contains an independent subsequence, call it $(c_{i})_{i\in\omega}$. By compactness, there exists an $a$ in $\mathcal{U}$ such that $\mathcal{U}\models E(c_{i},a)$ if and only if $i$ is even.
Then, $(\operatorname{tp}(c_{i}/\mathcal{U}))_{i\in\omega}$ does not converge in $S_{x}(\mathcal{U})$ and so $(\operatorname{tp}(b_{i}/\mathcal{U}))_{i\in\omega}$ does not converge in $S_{x}(\mathcal{U})$. ∎ ###### Question 4.15. We say a global type $p$ in $S_{x}(\mathcal{U})$ is sad (credit to James Hanson for the terminology) if it is both sequentially approximated and definable. Does there exist a global type $p$ which is sad over a model $M$ but is not generically stable over $M$? It is clear that if $T$ is NIP, then all sad types are generically stable. Therefore an example of such a type must come from the wild. ## 5\. Sequential approximations of measures in NIP theories Throughout this section, we assume that $T$ is a countable NIP theory and $\mathcal{U}$ is a monster model of $T$. We show that measures which are finitely satisfiable over a countable model of $T$ are sequentially approximated (Theorem T2). To do this, we introduce the notion of a smooth sequence. These are sequences of global measures which are intended to play the role of a Morley sequence for a measure. Unfortunately, these sequences only exist (a priori) in the NIP context and it is currently not known how to expand this idea to IP theories. At the end of this section, we give a characterization of generic stability using smooth sequences (again, only in the NIP context). To motivate the machinery introduced in this section, we explain why Theorem T2 does not follow directly from some approximation results currently in the literature. One might assume that one could prove Theorem T2 from Theorem 1.1 in tandem with the following fact [16, Proposition 7.11]. ###### Fact 5.1 (T is NIP). Suppose that $\mu\in\mathfrak{M}_{x}(\mathcal{U})$ and $\mu$ is finitely satisfiable over $M$. Then, for any formula $\varphi(x,y)\in\mathcal{L}$ and every $\epsilon>0$, there exist types $p_{1},...,p_{n}\in S_{x}(\mathcal{U})$, where for each $i\leq n$ the type $p_{i}$ is finitely satisfiable over $M$, such that $\sup_{b\in\mathcal{U}^{y}}|\mu(\varphi(x,b))-\operatorname{Av}(\overline{p})(\varphi(x,b))|<\epsilon,$ where $\overline{p}=(p_{1},...,p_{n})$. If $\mu$ is in $\mathfrak{M}_{x}(\mathcal{U})$ and is finitely satisfiable over a countable model $M$, then one can use Theorem 1.1 and Fact 5.1 together to produce: 1. (1) a sequence of global measures $(\operatorname{Av}(\overline{p}_{i}))_{i\in\mathbb{N}}$ such that each $\overline{p}_{i}=(p_{i_{1}},...,p_{i_{k}})$, each $p_{i_{j}}\in S_{x}(\mathcal{U})$ is finitely satisfiable over $M$, and $\lim_{i\to\infty}\operatorname{Av}(\overline{p}_{i})=\mu$ in $\mathfrak{M}_{x}(\mathcal{U})$, 2. (2) for each $i\in\mathbb{N}$, a sequence of points $(\overline{a}_{i_{j}})_{j\in\mathbb{N}}$ each in $(M^{x})^{<\omega}$ so that $\lim_{j\to\infty}\operatorname{Av}(\overline{a}_{i_{j}})=\operatorname{Av}(\overline{p}_{i})$. This construction gives an array of points $(\overline{a}_{i_{j}})_{(i,j)\in\mathbb{N}\times\mathbb{N}}$ in $(M^{x})^{<\omega}$ so that $\lim_{i\to\infty}\lim_{j\to\infty}\Big{(}\operatorname{Av}(\overline{a}_{i_{j}})\Big{)}=\mu\text{ in $\mathfrak{M}_{x}(\mathcal{U})$}.$ A priori, the convergence of an array does not imply that there exists a subsequence of that array which converges to the array’s limit (for example, any Baire-2 function which is not Baire-1 can be written as the limit of an array of continuous functions, but cannot be written as the sequential limit of continuous functions). A similar situation arises by trying to iterate Theorem 5.13. So, we must work slightly harder.
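To make the obstruction mentioned in the parenthetical remark above concrete, the following classical example from real analysis (not specific to the present setting, and included only as an illustration) may be helpful. The Dirichlet function is an iterated limit of continuous functions, $\mathbf{1}_{\mathbb{Q}}(x)=\lim_{j\to\infty}\lim_{i\to\infty}\big(\cos(j!\,\pi x)\big)^{2i},$ yet it is not Baire-1: it is discontinuous everywhere, whereas any pointwise limit of continuous functions has a dense set of continuity points. Consequently, no sequence extracted from the array $\big(x\mapsto\cos(j!\,\pi x)^{2i}\big)_{(i,j)}$ converges pointwise to the iterated limit, which is exactly the kind of failure that has to be ruled out here.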
As previously stated, our proof essentially mimics the proof of Theorem 1.1 but with Morley sequences replaced by smooth sequences. Finally we remark that if there were an elementary proof using an array to show this result, then we would have a moderately simple proof that $\operatorname{dfs}$ measures are finitely approximated in NIP theories. In particular, this proof would bypass the implicit use of randomizations (i.e. $(i)$ of Fact 5.5). We formally begin this section by discussing a “continuous” analogue of eventually indiscernible sequences. ### 5.1. Eventually indiscernible sequences revisited We fix some notation. Fix distinct tuples of variables $x$ and $x_{0},...,x_{n}$ such that $|x|=|x_{i}|$ for $i\leq n$. If $\varphi(x_{0},...,x_{n})$ is a formula in $\mathcal{L}_{x_{0},...,x_{n}}(\mathcal{U})$ and $\overline{a}_{0},...,\overline{a}_{n}$ is a finite sequence of elements where each $\overline{a}_{i}\in(\mathcal{U}^{x})^{<\omega}$ and $\overline{a}_{i}=(a_{i,0},...,a_{i,m_{i}})$ for $i\leq n$, then we write $\varphi_{c}(\overline{a}_{0},...,\overline{a}_{n})$ to mean, $\bigotimes_{i=0}^{n}\operatorname{Av}(\overline{a}_{i})_{x_{i}}(\varphi(x_{0},...,x_{n})).$ Notice that $\varphi_{c}(\overline{a}_{0},...,\overline{a}_{n})$ is a real number. We observe that by unpacking the definition of the product measure, our formula can be computed as follows: $\varphi_{c}(\overline{a}_{0},...,\overline{a}_{n})=\frac{1}{\prod_{i=0}^{n}(m_{i}+1)}\sum_{j_{0}=0}^{m_{0}}...\sum_{j_{n}=0}^{m_{n}}\mathbf{1}_{\varphi}(a_{0,j_{0}},...,a_{n,j_{n}}).$ ###### Definition 5.2. Let $(\overline{a}_{i})_{i\in\omega}$ be a sequence of elements in $(\mathcal{U}^{x})^{<\omega}$ and let $A\subset\mathcal{U}$ be a collection of parameters. Then we say that the sequence $(\overline{a}_{i})_{i\in\omega}$ is eventually indiscernible over $A$ if for any formula $\varphi(x_{0},...,x_{n})$ in $\mathcal{L}_{(x_{i})_{i\in\omega}}(A)$ and any $\epsilon>0$, there exists $N_{\epsilon,\varphi}$ such that for any $n_{k}>...>n_{0}>N_{\epsilon,\varphi}$ and $m_{k}>....>m_{0}>N_{\epsilon,\varphi}$, $|\varphi_{c}(\overline{a}_{n_{0}},...,\overline{a}_{n_{k}})-\varphi_{c}(\overline{a}_{m_{0}},...,\overline{a}_{m_{k}})|<\epsilon.$ ###### Proposition 5.3. Let $(\overline{a}_{i})_{i\in\omega}$ be a sequence of tuples in $(\mathcal{U}^{x})^{<\omega}$. If $A$ is a countable set of parameters, then there exists some subsequence $(\overline{c}_{i})_{i\in\omega}$ of $(\overline{a})_{i\in\omega}$ such that $(\overline{c}_{i})_{i\in\omega}$ is eventually indiscernible over $A$. ###### Proof. This proof is a standard application of Ramsey’s theorem applied to the “continuous” setting. Enumerate all pairs in $\mathcal{L}_{(x_{i})_{i\in\omega}}(A)\times\mathbb{N}_{>0}$. Let $(\overline{a}_{i}^{0})_{i\in\omega}:=(\overline{a}_{i})_{i\in\omega}$ and set $B_{0}=\\{\overline{a}^{0}_{i}:i\in\omega\\}$. Now, assume we have constructed the subsequence $(\overline{a}_{i}^{l})_{i\in\omega}$ and $B_{l}$ (where $B_{l}=\\{\overline{a}_{i}^{l}:i\in\omega\\}$). We now construct $(\overline{a}_{i}^{l+1})_{i\in\omega}$ and $B_{l+1}$. Assume that $(\varphi(x_{0},...,x_{k}),n)$ is the $l+1$ indexed pair in our enumeration. Then we define the coloring $r_{l+1}:[B_{l}]^{k+1}\to\\{0,...,n\\}$ via $r(\\{\overline{a}^{l}_{i_{0}},...,\overline{a}^{l}_{i_{k}}\\})=\lfloor n\cdot\varphi_{c}(\overline{a}^{l}_{i_{0}},...,\overline{a}^{l}_{i_{k}})\rfloor.$ where $i_{0}<i_{1}<...<i_{k}$. 
By Ramsey’s theorem, there is an infinite monochromatic subset $B_{l}^{\prime}$ of $B_{l}$. Let $(\overline{a}_{i}^{l+1})_{i\in\omega}$ be the obvious reindexed subsequence of $(\overline{a}_{i}^{l})_{i\in\omega}$ with the elements only from the monochromatic set $B_{l}^{{}^{\prime}}$. We let $B_{l+1}=\\{\overline{a}^{l+1}_{i}:i\in\omega\\}$. By construction, the sequence $(\overline{a}_{i}^{i})_{i\in\omega}$ is eventually indiscernible. ∎ We now present a collection of facts which will help us prove that the associated average measures along eventually indiscernible sequences always converge to a measure in $\mathfrak{M}_{x}(\mathcal{U})$ when the underlying theory is NIP. The first fact is elementary and left to the reader as an exercise. ###### Fact 5.4. Assume that $(\mu_{i})_{i\in\omega}$ is a sequence of Keisler measures in $\mathfrak{M}_{x}(\mathcal{U})$. If for every formula $\varphi(x)\in\mathcal{L}_{x}(\mathcal{U})$, $\lim_{i\to\infty}\mu_{i}(\varphi(x))$ converges, then $(\mu_{i})_{i\in\omega}$ converges to a measure in $\mathfrak{M}_{x}(\mathcal{U})$. The next collection of facts can be found in [10]. In particular, $(i)$ follows immediately from Lemma 2.10 while $(ii)$ and $(iii)$ are from Corollary 2.14. The proof of Lemma 2.10 is non-trivial and is an interpretation of results in [2]. Implicitly, our proof uses the fact that the randomization of an NIP theory is NIP. ###### Fact 5.5 (T is NIP). Suppose that $\lambda\in\mathfrak{M}_{(x_{i})_{i\in\omega}}$ where $|x_{i}|=|x_{j}|$ for each $i,j<\omega$. $\lambda$ is said to be $M$-indiscernible if for every increasing sequence of indices $i_{0},...,i_{n}$ and any formula $\varphi(x_{i_{0}},...,x_{i_{n}})$ in $\mathcal{L}_{(x_{i})_{i\in\omega}}(M)$, we have that $\lambda(\varphi(x_{i_{0}},...,x_{i_{n}}))=\lambda(\varphi(x_{0},...,x_{n})).$ Let $\mu,\nu\in\mathfrak{M}_{x}(\mathcal{U})$ such that $\mu,\nu$ are invariant over $M$. The following statements are true. 1. ($i$) If $\lambda$ is $M$-indiscernible, then for any formula $\varphi(x,b)\in\mathcal{L}_{x}(\mathcal{U})$, we have that $\lim_{i\to\infty}\lambda(\varphi(x_{i},b))$ exists. 2. ($ii$) The measures $\mu^{(\omega)}$ and $\nu^{(\omega)}$ are $M$-indiscernible. 3. ($iii$) If $\mu^{(\omega)}|_{M}=\nu^{(\omega)}|_{M}$, then $\mu=\nu$. We now establish a formal connection between eventually indiscernible sequences of tuples and indiscernible measures. We use this connection to show that the eventually indiscernible sequences converges to a measure in $\mathfrak{M}_{x}(\mathcal{U})$. ###### Proposition 5.6. Let $(\overline{c}_{i})_{i\in\omega}$ be a sequence of points in $(\mathcal{U}^{x})^{<\omega}$. If $(\overline{c}_{i})_{i\in\omega}$ is an eventually indiscernible sequence over some model $M$, then the sequence $(\operatorname{Av}(\overline{c}_{i}))_{i\in\omega}$ converges in $\mathfrak{M}_{x}(\mathcal{U})$. ###### Proof. Assume not. Then there exists some formula $\psi(x,b)$ in $\mathcal{L}_{x}(\mathcal{U})$, some $\epsilon_{0}>0$, and some subsequence $(\overline{c}_{i}^{\prime})_{i\in\omega}$ of $(\overline{c}_{i})_{i\in\omega}$ such that for each natural number $i$, $|\operatorname{Av}(\overline{c}_{i}^{\prime})(\psi(x;b))-\operatorname{Av}(\overline{c}_{i+1}^{\prime})(\psi(x;b))|>\epsilon_{0}.$ It is clear that $(\overline{c}_{i}^{\prime})_{i\in\omega}$ is also eventually indiscernible over $M$. We now aim to contradict $(i)$ of Fact 5.5 via (topological) compactness of the space $\mathfrak{M}_{\omega}(\mathcal{U}):=\mathfrak{M}_{(x_{i})_{i\in\omega}}(\mathcal{U})$. 
For any formula $\varphi(x_{i_{0}},...,x_{i_{k}})\in\mathcal{L}_{(x_{i})_{i\in\omega}}(M)$, we let $r_{\varphi}$ be the unique real number such that for every $\epsilon>0$, there exists an $N_{\epsilon,\varphi}$ so that for any $n_{k}>...>n_{0}>N_{\epsilon,\varphi}$ we have $|\varphi_{c}(\overline{c}^{\prime}_{n_{0}},...,\overline{c}^{\prime}_{n_{k}})-r_{\varphi}|<\epsilon.$ Since the sequence $(\overline{c}_{i}^{\prime})_{i\in\omega}$ is eventually indiscernible over $M$, $r_{\varphi}$ exists for each $\varphi(\overline{x})\in\mathcal{L}_{(x_{i})_{i\in\omega}}(M)$. Now, for every $\varphi(\overline{x})\in\mathcal{L}_{(x_{i})_{i\in\omega}}(M)$ and $\epsilon>0$, we define the following family of closed subsets of $\mathfrak{M}_{\omega}(\mathcal{U})$: $C_{\epsilon,\varphi}=\Big{\\{}\lambda\in\mathfrak{M}_{\omega}(\mathcal{U}):r_{\varphi}-\epsilon\leq\lambda(\varphi(\overline{x}))\leq r_{\varphi}+\epsilon\Big{\\}}.$ We also define another family of sets and argue that they are closed; let $D_{i}=\Big{\\{}\lambda\in\mathfrak{M}_{\omega}(\mathcal{U}):|\lambda(\psi(x_{i},b))-\lambda(\psi(x_{i+1},b))|\geq\frac{\epsilon_{0}}{2}\Big{\\}}.$ Notice that $D_{i}$ is closed since for every natural number $i$, the evaluation map $E_{i}:\mathfrak{M}_{\omega}(\mathcal{U})\to[0,1]$ via $E_{i}(\lambda)=\lambda(\psi(x_{i},b))$ is continuous. Indeed, define $F_{i}=E_{i}-E_{i+1}$ and $H_{i}=E_{i+1}-E_{i}$. Then we have $D_{i}=F_{i}^{-1}([\frac{\epsilon_{0}}{2},1])\cup H_{i}^{-1}([\frac{\epsilon_{0}}{2},1])$ and so $D_{i}$ is a union of two closed sets and therefore closed. Using $(\overline{c}_{i}^{\prime})_{i\in\omega}$, the collection $\Phi=\\{C_{\epsilon,\varphi}:\epsilon>0,\varphi(\overline{x})\in\mathcal{L}_{\omega}(M)\\}\cup\\{D_{i}:i\in\omega\\}$ has the finite intersection property. Therefore, there exists some $\lambda\in\mathfrak{M}_{\omega}(\mathcal{U})$ in the intersection of all the sets in $\Phi$. Moreover, $\lambda$ is $M$-indiscernible by construction. Since $\lambda$ is in $D_{i}$ for each $i$, its existence contradicts $(i)$ of Fact 5.5. ∎ ### 5.2. Smooth sequences In this subsection, we define the notion of a smooth sequence and prove the main theorem. If $\mu$ is a global $M$-invariant measure, then a smooth sequence is a collection of models and measures meant to replicate a Morley sequence. The ideology is the following: A Morley sequence in $p$ over $M$ is to the infinite type $p^{\omega}|_{M}$ as a smooth sequence in $\mu$ over $M$ is to the measure $\mu^{(\omega)}|_{M}$. We now provide the formal definition. ###### Definition 5.7. Let $\mu\in\mathfrak{M}_{x}(\mathcal{U})$ and assume that $\mu$ is invariant over some small model $M$. Then, a smooth sequence in $\mu$ over $M$ is a sequence of pairs of measures and small models, $(\mu_{i},N_{i})_{i\in\omega}$, such that: 1. $(i)$ $M\prec N_{0}$ and $N_{i}\prec N_{i+1}$ and each $N_{i}$ is small. 2. $(ii)$ $\mu_{i}$ is smooth over $N_{i}$. 3. $(iii)$ $\mu_{0}|_{M}=\mu|_{M}$ and for $i>0$, $\mu_{i}|_{N_{i-1}}=\mu|_{N_{i-1}}$. Furthermore, we define $\bigotimes_{i=0}^{\omega}\mu_{i}=\bigcup_{n\in\omega}\bigotimes_{i=0}^{n}\mu_{i}$, which is an element of $\mathfrak{M}_{(x_{i})_{i\in\omega}}(\mathcal{U})$. We let $N_{\omega}=\bigcup_{i\in\omega}N_{i}$. Notice that for each $i\in\omega$, the measure $\mu_{i}$ is smooth over $N_{\omega}$. ###### Proposition 5.8.
If $T$ is a countable NIP theory, $\mu\in\mathfrak{M}_{x}(\mathcal{U})$, and $\mu$ is invariant over $M$ where $|M|=\aleph_{0}$, then there exists a smooth sequence $(\mu_{i},N_{i})_{i\in\omega}$ in $\mu$ over $M$ such that each $N_{i}$ is countable. ###### Proof. This follows directly from Proposition 2.9. ∎ ###### Proposition 5.9 (T is NIP). Assume that $\mu\in\mathfrak{M}_{x}(\mathcal{U})$ and $\mu$ is $M$-invariant. Let $(\mu_{i},N_{i})_{i\in\omega}$ be a smooth sequence in $\mu$ over $M$. Then, $\bigotimes_{i=0}^{\omega}\mu_{i}|_{M}=\mu^{(\omega)}|_{M}$. Hence, $\bigotimes_{i=0}^{\omega}\mu_{i}$ is $M$-indiscernible. ###### Proof. We prove this via induction on formulas in $\mathcal{L}_{(x_{i})_{i\in\omega}}(\mathcal{U})$. For our base case, it is true by construction that $\mu_{0}|_{M}=\mu|_{M}$. For our induction hypothesis, we assume that $\mu^{(k-1)}|_{M}=\bigotimes_{i=0}^{k-1}\mu_{i}|_{M}$. For ease of notation, we set $\lambda=\bigotimes_{i=0}^{k-1}\mu_{i}$ and show the induction step: Let $\varphi(x_{0},...,x_{k})$ be any formula in $\mathcal{L}_{x_{0},...,x_{k}}(M)$. Since the product of smooth measures is smooth (by $(iii)$ of Fact 2.12), we have that $\lambda$ is smooth over $N_{k-1}$. In particular, $\lambda$ is invariant over $N_{k-1}$. We let $\overline{x}=(x_{0},...,x_{k-1})$ and $\theta(x_{k};\overline{x})=\varphi(x_{0},...,x_{k})$. We consider the following computation followed by a list of justifications. $\mu_{k}\otimes\lambda(\varphi(x_{0},...,x_{k}))=\int_{S_{\overline{x}}(N_{k})}F_{\mu_{k}}^{\theta}d(\lambda|_{N_{k}})\overset{(a)}{=}\int_{S_{x_{k}}(N_{k})}F_{\lambda}^{\theta^{*}}d(\mu_{k}|_{N_{k}})$ $\overset{(b)}{=}\int_{S_{x_{k}}(N_{k-1})}F_{\lambda}^{\theta^{*}}d(\mu_{k}|_{N_{k-1}})\overset{(c)}{=}\int_{S_{x_{k}}(N_{k-1})}F_{\lambda}^{\theta^{*}}d(\mu|_{N_{k-1}})\overset{(a)}{=}\int_{S_{\overline{x}}(N_{k-1})}F_{\mu}^{\theta}d(\lambda|_{N_{k-1}})$ $\overset{(d)}{=}\int_{S_{\overline{x}}(M)}F_{\mu}^{\theta}d(\lambda|_{M})\overset{(e)}{=}\int_{S_{\overline{x}}(M)}F_{\mu}^{\theta}d(\mu^{(k-1)}|_{M})=\mu\otimes\mu^{(k-1)}(\varphi(x_{0},...,x_{k})).$ We provide the following justifications: 1. ($a$) Smooth measures commute with invariant measures. 2. ($b$) Changing space of integration since $\lambda$ is invariant over $N_{k-1}$. 3. ($c$) By construction of smooth sequences, we have that $\mu_{k}|_{N_{k-1}}=\mu|_{N_{k-1}}$. 4. ($d$) Changing space of integration since $\mu$ is invariant over $M$. 5. ($e$) By our induction hypothesis. ∎ We now begin the proof of our main theorem. Again, the proof is similar to both the generically stable case in the previous section and even more so to the proof of Lemma 2.8 in [17]. Here, the major difference is that we replace the Morley sequence in that proof with a countable model, $N_{\omega}$, which “contains” a smooth sequence in $\mu$ over $M$. Then we find a sequence of elements in $(M^{x})^{<\omega}$ such that the associated average measures converge to $\mu|_{N_{\omega}}$ in $\mathfrak{M}_{x}(N_{\omega})$. After choosing an eventually indiscernible subsequence, we know from our NIP assumption that this new sequence converges to a global measure $\nu$ in $\mathfrak{M}_{x}(\mathcal{U})$. Finally, we demonstrate that $\nu^{(\omega)}|_{M}=\mu^{(\omega)}|_{M}$ which completes the proof. ###### Theorem 5.10 ($T$ is NIP). Let $\mu$ be finitely satisfiable over a countable model $M$. 
Then there exists a sequence $(\overline{a}_{i})_{i\in\omega}$ of elements, each in $(M^{x})^{<\omega}$, such that for any $\theta(x)\in\mathcal{L}_{x}(\mathcal{U})$, we have that $\lim_{i\to\infty}\operatorname{Av}(\overline{a}_{i})(\theta(x))=\mu(\theta(x)).$ ###### Proof. Choose a smooth sequence $(\mu_{i},N_{i})_{i\in\omega}$ in $\mu$ over $M$. By Proposition 5.8 we may choose this sequence so that for each $i\in\omega$, $N_{i}$ is countable. In particular, this implies that $N_{\omega}$ is a countable model. We begin by constructing a sequence of elements $(\overline{a}_{i})_{i\in\omega}$ in $(M^{x})^{<\omega}$ such that $(\operatorname{Av}(\overline{a}_{i})|_{N_{\omega}})_{i\in\omega}$ converges to $\mu|_{N_{\omega}}$ in $\mathfrak{M}_{x}(N_{\omega})$. Since $N_{\omega}$ is countable, we let $(\theta_{i}(x))_{i\in\omega}$ be an enumeration of the formulas in $\mathcal{L}_{x}(N_{\omega})$. Since $\mu$ is finitely satisfiable over $M$, for each $k\in\omega$ we can find $\overline{a}_{k}\in(M^{x})^{<\omega}$ such that for any $j\leq k$, we have that $|\mu(\theta_{j}(x))-\operatorname{Av}(\overline{a}_{k})(\theta_{j}(x))|<\frac{1}{k}.$ By construction, it is clear that the sequence $(\operatorname{Av}(\overline{a}_{i})|_{N_{\omega}})_{i\in\omega}$ converges to $\mu|_{N_{\omega}}$ in $\mathfrak{M}_{x}(N_{\omega})$. Now, we let $(\overline{c}_{i})_{i\in\omega}$ be a subsequence of $(\overline{a}_{i})_{i\in\omega}$ so that $(\overline{c}_{i})_{i\in\omega}$ is eventually indiscernible over $N_{\omega}$. Then the sequence $(\operatorname{Av}(\overline{c}_{i}))_{i\in\omega}$ converges in $\mathfrak{M}_{x}(\mathcal{U})$ by Proposition 5.6. Assume that $(\operatorname{Av}(\overline{c}_{i}))_{i\in\omega}$ converges to some measure $\nu\in\mathfrak{M}_{x}(\mathcal{U})$. Hence, $\nu$ is finitely satisfiable over $M$ by $(i)$ of Proposition 3.3 and therefore $\nu$ is invariant over $M$. We show that $\nu^{(\omega)}|_{M}=\mu^{(\omega)}|_{M}$. This will conclude the proof by $(iii)$ of Fact 5.5. Since $(\overline{c}_{i})_{i\in\omega}$ is a subsequence of $(\overline{a}_{i})_{i\in\omega}$, it follows that $\nu|_{N_{\omega}}=\mu|_{N_{\omega}}$ and therefore $\nu|_{M}=\mu|_{M}$. We now proceed by induction. Assume that $\nu^{(k-1)}|_{M}=\mu^{(k-1)}|_{M}$. Fix $\varphi(x_{0},...,x_{k})$ in $\mathcal{L}_{x_{0},...,x_{k}}(M)$. For ease of notation, set $\lambda=\bigotimes_{i=0}^{k-1}\mu_{i}$. We recall that $\lambda$ is smooth over $N_{\omega}$ (see Fact 2.12). By Proposition 5.9, $\mu^{(k-1)}|_{M}=\lambda|_{M}$. We let $\overline{x}=(x_{0},...,x_{k-1})$ and let $\theta(x_{k};\overline{x})=\varphi(x_{0},...,x_{k})$. We now consider the critical computation followed by a small glossary of justifications.
$\nu^{(k)}(\varphi(x_{0},...,x_{k}))=\int_{S_{\overline{x}}(M)}F_{\nu}^{\theta}d(\nu^{(k-1)}|_{M})\overset{(a)}{=}\int_{S_{\overline{x}}(M)}F_{\nu}^{\theta}d(\mu^{(k-1)}|_{M})$ $\overset{(b)}{=}\int_{S_{\overline{x}}(M)}F_{\nu}^{\theta}d(\lambda|_{M})\overset{(c)}{=}\int_{S_{\overline{x}}(N_{\omega})}F_{\nu}^{\theta}d(\lambda|_{N_{\omega}})\overset{(d)}{=}\int_{S_{x_{k}}(N_{\omega})}F_{\lambda}^{\theta^{*}}d(\nu|_{N_{\omega}})$ $\overset{(e)}{=}\int_{S_{x_{k}}(N_{\omega})}F_{\lambda}^{\theta^{*}}d(\mu|_{N_{\omega}})\overset{(d)}{=}\int_{S_{\overline{x}}(N_{\omega})}F_{\mu}^{\theta}d(\lambda|_{N_{\omega}})\overset{(c)}{=}\int_{S_{\overline{x}}(M)}F_{\mu}^{\theta}d(\lambda|_{M})$ $\overset{(b)}{=}\int_{S_{\overline{x}}(M)}F_{\mu}^{\theta}d(\mu^{(k-1)}|_{M})=\mu^{(k)}(\varphi(x_{0},...,x_{k})).$ We provide the following justifications: 1. (a) Induction hypothesis. 2. (b) $\mu^{(k-1)}|_{M}=\lambda|_{M}$. 3. (c) Changing the space of integration. 4. (d) Smooth measures commute with invariant measures. 5. (e) $\nu|_{N_{\omega}}=\mu|_{N_{\omega}}$ ∎ We now observe that we have another proof of the theorem that global measures in NIP theories which are definable and finitely satisfiable are also finitely approximated. ###### Corollary 5.11. If $T^{\prime}$ is a countable or uncountable NIP theory and $\mu$ is $\operatorname{dfs}$ over $M$, then $\mu$ is finitely approximated over $M$. ###### Proof. After restricting to a countable language, we still have a $\operatorname{dfs}$ measures (by [6, Proposition 2.9]). By Proposition 2.9, $\mu$ restricted to this language is $\operatorname{dfs}$ over a countable model, $M_{0}$. By the previous result, $\mu$ is sequentially approximated over $M_{0}$. Since $\mu$ is also definable, an application of Proposition 3.4 yields the result. ∎ ###### Observation 5.12. Assume that $\mu\in\mathfrak{M}_{x}(\mathcal{U})$ and let $M$ be a small elementary submodel. Then, $\mu$ is sequentially approximated over $M$ if 1. (1) $T$ is stable, and $\mu$ is invariant over $M$, 2. (2) $T$ is NIP, $|M|=\aleph_{0}$, and $\mu$ is finitely satisfiable over $M$, or 3. (3) $\mu$ is finitely approximated over $M$. Finally, one may ask what happens in the local context. We remark that there exists two proofs for a local version of Theorem T2 which both rely on an important result of Bourgain, Fremlin, and Talagrand whose connection to model theory is (by now) well-known (e.g. [11, 18, 12, 8]). Chronologically, the first proof of the following theorem is implicit in the work of Khanaki (see [12, Remark 3.21, Theorem 3.26]) (through the observation that measures are types over models of the randomization in continuous model theory and [1, Proposition 1.1]), ###### Theorem 5.13. Suppose $\mu$ is a Keisler measure in $\mathfrak{M}_{\varphi}(\mathcal{U})$, $\mu$ is finitely satisfiable over $M$ where $|M|=\aleph_{0}$, and $\varphi(x,y)$ is an NIP formula. Then there exists a sequence of points $(\overline{a}_{i})_{i\in\omega}$ in $(M^{x})^{<\omega}$ such that for each $b\in\mathcal{U}^{y}$, $\lim_{i\to\infty}\operatorname{Av}(\overline{a}_{i})(\varphi(x,b))=\mu(\varphi(x,b)).$ There is another proof for the case of just Keisler measures via the VC theorem (see [8, Lemma 4.7]) which came later. ### 5.3. Smooth sequences and generically stable measures in NIP theories We now give an equivalent characterization for generically stable measures in NIP theories. We invite the reader to review the definition of a generically stable type prior to reading this section. 
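For the reader’s convenience, we recall one standard formulation of this notion (this is a restatement of the usual definition, as it was used in Section 4, and introduces no new assumptions): a type $p\in S_{x}(\mathcal{U})$ is generically stable over $M$ if $p$ is invariant over $M$ and, for every Morley sequence $(a_{i})_{i\in\omega}$ in $p$ over $M$, we have $\lim_{i\to\infty}\operatorname{tp}(a_{i}/\mathcal{U})=p$ in $S_{x}(\mathcal{U})$.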
Recall the following theorem due to Hrushovski, Pillay, and Simon [10, Theorem 3.2]. ###### Theorem 5.14 (T is NIP). Assume that $\mu\in\mathfrak{M}_{x}(\mathcal{U})$. Then the following are equivalent. 1. ($i$) $\mu$ is dfs. 2. ($ii$) $\mu$ is finitely approximated. 3. ($iii$) $\mu$ is fim (see [10, Definition 2.7]). 4. ($iv$) $\mu$ is invariant and $\mu_{x}\otimes\mu_{y}=\mu_{y}\otimes\mu_{x}$. Moreover, a Keisler measure (in an NIP theory) is called generically stable if it satisfies any/all of $(i)-(iv)$. We will now show that smooth sequences can also give a characterization of generically stable measures in NIP theories. ###### Lemma 5.15 (T is NIP). Let $\mu\in\mathfrak{M}_{x}(\mathcal{U})$. Suppose that $\mu$ is generically stable over $M$. For any smooth sequence $(\mu_{i},N_{i})_{i\in\omega}$ in $\mu$ over $M$, we have that $\lim_{i\to\infty}\mu_{i}=\mu$ in $\mathfrak{M}_{x}(\mathcal{U})$. ###### Proof. Since $(\mu_{i},N_{i})_{i\in\omega}$ is a smooth sequence in $\mu$ over $M$, the measure $\bigotimes_{i=0}^{\omega}\mu_{i}$ is indiscernible over $M$ by Proposition 5.9. By $(i)$ of Fact 5.5, we know that $\lim_{i\to\infty}\mu_{i}=\nu$ for some $\nu\in\mathfrak{M}_{x}(\mathcal{U})$. Since each $\mu_{i}$ is finitely satisfiable over $N_{i}$, it follows that $\nu$ is finitely satisfiable over $N_{\omega}$. By $(iii)$ of Fact 5.5, it is enough to show that $\nu^{(\omega)}|_{N_{\omega}}=\mu^{(\omega)}|_{N_{\omega}}$. The base case is trivial. Assume that $\nu^{(k-1)}|_{N_{\omega}}=\mu^{(k-1)}|_{N_{\omega}}$. Fix $\varphi(x_{0},...,x_{k})\in\mathcal{L}_{x_{0},...,x_{k}}(N_{\omega})$ and $\epsilon>0$. Let $\overline{x}=(x_{0},...,x_{k-1})$ and $\theta(x_{k};\overline{x})=\varphi(x_{0},...,x_{k})$. Since $\mu$ is generically stable over $M$, $\mu^{(k-1)}$ is generically stable over $M$ ($(v)$ of Fact 2.12) and so also definable over $N_{\omega}$. Therefore by $(v)$ of Fact 2.8, there exists formulas $\psi_{1}(x_{k}),...,\psi_{n}(x_{k})\in\mathcal{L}_{x_{k}}({N_{\omega}})$ and real numbers $r_{1},...,r_{n}\in[0,1]$ so that $\sup_{q\in S_{x_{k}}(N_{\omega})}|F_{\mu^{(k-1)}}^{\theta^{*}}(q)-\sum_{i=1}^{n}r_{i}\mathbf{1}_{\psi_{i}(x_{k})}(q)|<\epsilon.$ Consider the following sequence of equations followed by a short list of justifications. $\nu^{(k)}(\varphi(x_{0},...,x_{k}))=\int_{S_{\bar{x}}(N_{\omega})}F_{\nu}^{\theta}d(\nu^{(k-1)}|_{N_{\omega}})\overset{(a)}{=}\int_{S_{\bar{x}}(N_{\omega})}F_{\nu}^{\theta}d(\mu^{(k-1)}|_{N_{\omega}})$ $\overset{(b)}{=}\int_{S_{x_{k}}(N_{\omega})}F_{\mu^{(k-1)}}^{\theta^{*}}d(\nu|_{N_{\omega}})\approx_{\epsilon}\int_{S_{x_{k}}(N_{\omega})}\sum_{i=1}^{n}r_{i}\mathbf{1}_{\psi_{i}(x_{k})}d(\nu|_{N_{\omega}})$ $=\sum_{i=1}^{n}r_{i}\nu(\psi_{i}(x_{k}))\overset{(c)}{=}\sum_{i=1}^{n}r_{i}\mu(\psi_{i}(x_{k}))=\int_{S_{x_{k}}(N_{\omega})}\sum_{i=1}^{n}r_{i}\mathbf{1}_{\psi_{i}(x_{k})}d(\mu|_{N_{\omega}})$ $\approx_{\epsilon}\int_{S_{x_{k}}(N_{\omega})}F_{\mu^{(k-1)}}^{\theta^{*}}d(\mu|_{N_{\omega}})\overset{(b)}{=}\int_{S_{x_{k}}(N_{\omega})}F_{\mu}^{\theta}d(\mu^{(k-1)}|_{N_{\omega}})=\mu^{(k)}(\varphi(x_{0},...,x_{k})).$ 1. (a) Induction hypothesis. 2. (b) (T is NIP) Generically stable measures commute with invariant measures (see $(b)$ of Fact 2.12). 3. (c) Base case. As $\epsilon$ was arbitrary, this proves the result. ∎ ###### Lemma 5.16 (T is NIP). Assume that $\mu$ is $M$-invariant. If for every smooth sequence $(\mu_{i},N_{i})_{i\in\mathbb{N}}$ in $\mu$ over $M$, we have that $\lim_{i\to\infty}\mu_{i}=\mu$, then $\mu$ is generically stable over $M$. 
###### Proof. Since $T$ is NIP, all invariant measures are Borel definable. By Theorem 5.14, it suffices to show that $\mu$ commutes with itself, i.e. $\mu_{x}\otimes\mu_{y}=\mu_{y}\otimes\mu_{x}$. Fix $\varphi(x,y)\in\mathcal{L}_{x,y}(\mathcal{U})$. Let $M_{1}$ be a small model such that $M\prec M_{1}$ and $M_{1}$ contains all the parameters from $\varphi(x,y)$. We choose a smooth sequence $(\mu_{i,x};N_{i})_{i\in\omega}$ in $\mu_{x}$ over $M_{1}$ and let $N_{\omega}=\bigcup_{i\in\omega}N_{i}$. By construction, the sequence $(\mu_{i,x},N_{i})_{i\in\omega}$ is a smooth sequence in $\mu_{x}$ over $M$. Consider the following computation. $\mu_{x}\otimes\mu_{y}(\varphi(x,y))=\int_{S_{y}(M_{1})}F_{\mu_{x}}^{\varphi}d(\mu_{y}|_{M_{1}})\overset{(a)}{=}\int_{S_{y}(N_{\omega})}F_{\mu_{x}}^{\varphi}d(\mu_{y}|_{N_{\omega}})$ $\overset{(b)}{=}\lim_{i\to\infty}\int_{S_{y}(N_{\omega})}F_{\mu_{i,x}}^{\varphi}d(\mu_{y}|_{N_{\omega}})\overset{(c)}{=}\lim_{i\to\infty}\int_{S_{x}(N_{\omega})}F_{\mu_{y}}^{\varphi^{*}}d(\mu_{i,x}|_{N_{\omega}})$ $\overset{(d)}{=}\lim_{i\to\infty}\int_{S_{x}(M_{1})}F_{\mu_{y}}^{\varphi^{*}}d(\mu_{i,x}|_{M_{1}})\overset{(e)}{=}\lim_{i\to\infty}\int_{S_{x}(M_{1})}F_{\mu_{y}}^{\varphi^{*}}d(\mu_{x}|_{M_{1}})$ $=\int_{S_{x}(M_{1})}F_{\mu_{y}}^{\varphi^{*}}d(\mu_{x}|_{M_{1}})=\mu_{y}\otimes\mu_{x}(\varphi(x,y)).$ We provide a list of the following justifications: 1. $(a)$ Changing the space of integration. 2. $(b)$ Dominated convergence theorem. 3. $(c)$ Smooth measures commute with Borel definable measures. 4. $(d)$ Since $\mu_{y}$ is $M_{1}$ invariant. 5. $(e)$ Since $\mu_{i,x}|_{M_{1}}=\mu_{x}|_{M_{1}}$ for any $i\in\omega$. ∎ ###### Theorem 5.17 (T is NIP). Let $\mu\in\mathfrak{M}_{x}(\mathcal{U})$. Then the following are equivalent: 1. (1) $\mu$ is generically stable over $M$. 2. (2) For any smooth sequence $(\mu_{i},N_{i})_{i\in\omega}$ in $\mu$ over $M$, $\lim_{i\to\infty}\mu_{i}=\mu\text{ in $\mathfrak{M}_{x}(\mathcal{U})$.}$ ###### Proof. Follows directly from the previous two lemmas. ∎ ## 6\. Local Measures revisited We generalize the main theorem of [8]. Fix a partitioned NIP formula $\varphi(x,y)$ and let $\mu$ be a $\varphi$-measure. In [8], we proved two main theorems. We showed that if $\varphi(x,y)$ is an NIP formula and $\mu$ is $\varphi$-definable and finitely satisfiable over a countable model $M$, then $\mu$ is $\varphi$-finitely approximated. We then proved that if $\mu$ is definable and finitely satisfiable over any small model $M$, then $\mu$ is finitely approximated in $M$ by reducing to the previous theorem. But this was somewhat unsatisfactory and the following question was left open: if $\mu$ is $\varphi$-definable and finitely satisfiable over a small model, then is $\mu$ $\varphi$-finitely approximated? We give a positive answer to this question by modifying one of the important technical lemmas in the proof. Let us first recall some definitions. ###### Definition 6.1. Fix $\mathcal{U}$ and a formula $\varphi(x,y)$ in $\mathcal{L}(\mathcal{U})$. 1. (1) $\mathcal{L}_{\varphi}(\mathcal{U})$ denotes the Boolean algebra of definable sets of $\mathcal{U}^{x}$ generated by the collection $\\{\varphi(x,b):b\in\mathcal{U}\\}$. 2. (2) A $\varphi$-measure is a finitely additive measure on the Boolean algebra $\mathcal{L}_{\varphi}(\mathcal{U})$. 3. (3) The collection of all $\varphi$-measures is denoted $\mathfrak{M}_{\varphi}(\mathcal{U})$. 4. (4) Let $M\prec\mathcal{U}$ and assume that $M$ contains all the parameters from $\varphi(x,y)$. 
For any $\mu\in\mathfrak{M}_{\varphi}(\mathcal{U})$, we say that $\mu$ is $(M,\varphi)$-invariant if for any $b,c\in\mathcal{U}^{y}$ such that $\operatorname{tp}(b/M)=\operatorname{tp}(c/M)$, we have that $\mu(\varphi(x,b))=\mu(\varphi(x,c))$. 5. (5) Let $\mu\in\mathfrak{M}_{\varphi}(\mathcal{U})$. If $\mu$ is $(M,\varphi)$-invariant, then we can define the fiber map $F_{\mu,M}^{\varphi}:S_{y}(M)\to[0,1]$ via $F_{\mu,M}^{\varphi}(q)=\mu(\varphi(x,b))$ where $b\models q|_{M}$. When $M$ is clear from context, we write $F_{\mu,M}^{\varphi}$ simply as $F_{\mu}^{\varphi}$. 6. (6) Let $\mu\in\mathfrak{M}_{\varphi}(\mathcal{U})$. Then $\mu$ is said to be $\varphi$-definable if the map $F_{\mu,M}^{\varphi}:S_{y}(M)\to[0,1]$ is continuous. 7. (7) Let $\mu\in\mathfrak{M}_{\varphi}(\mathcal{U})$. Then $\mu$ is said to be definable if for any formula $\theta(x,\overline{y})$ in the algebra generated by $\\{\varphi(x,y_{i}):i\in\mathbb{N}\\}$, $\mu$ is $(M,\theta)$-invariant and the map $F_{\mu,M}^{\theta}:S_{\overline{y}}(M)\to[0,1]$ is continuous. 8. (8) For any $\mu\in\mathfrak{M}_{\varphi}(\mathcal{U})$, $\mu$ is said to be finitely satisfiable in $M$ if for every $\theta(x)\in\mathcal{L}_{\varphi}(\mathcal{U})$ such that $\mu(\theta(x))>0$, there exists some $a\in M$ so that $\mathcal{U}\models\theta(a)$. 9. (9) For each $a\in M$ we let $F_{a}^{\varphi}:S_{y}(M)\to[0,1]$ via $F_{a}^{\varphi}=\mathbf{1}_{\varphi(a,y)}$. We denote the collection of such functions as $\mathbb{F}_{M}$. We let $\operatorname{conv}(\mathbb{F}_{M})$ be the collection of convex combinations of elements in $\mathbb{F}_{M}$. We let $F=[0,1]^{S_{y}(M)}$ endowed with the Tychonoff topology and if $A\subset F$, we let $\operatorname{cl}(A)$ denote its closure in this space and so the set $\operatorname{cl}(\operatorname{conv}(A))$ is well-defined. Recall the following facts about $\varphi$-measures which can be found in [8]. ###### Fact 6.2. Let $\mu\in\mathfrak{M}_{\varphi}(\mathcal{U})$ and $M\prec\mathcal{U}$. 1. $(i)$ If $\mu$ is finitely satisfiable or $\varphi$-definable over $M$ then $\mu$ is $(M,\varphi)$-invariant. 2. $(ii)$ If $\mu$ is $\varphi$-definable over $M$ then $\mu$ is $(M_{0},\varphi)$-invariant for some $M_{0}\prec M$ such that $|M_{0}|=\aleph_{0}$. 3. $(iii)$ If $\mu$ is finitely satisfiable over $M$ then $F_{\mu,M}^{\varphi}$ is in $\operatorname{cl}(\operatorname{conv}(\mathbb{F}_{M}))$. 4. $(iv)$ If $|M|=\aleph_{0}$ and $\varphi(x,y)$ is NIP, there exists a sequence of elements $(g_{i})_{i\in\omega}$ with each $g_{i}\in\operatorname{conv}(\mathbb{F}_{M})$ so that $\lim_{i\to\infty}g_{i}=F_{\mu,M}^{\varphi}$. The following lemma is essentially the missing lemma from [8]. The missed observation is that one can consider finitely many parameters at once (instead of a single parameter). ###### Lemma 6.3. Suppose that $\mu\in\mathfrak{M}_{\varphi}(\mathcal{U})$ and $\mu$ is finitely satisfiable in a small submodel $N$ and $(M,\varphi)$-invariant. Then the map $F_{\mu,M}^{\varphi}\in\operatorname{cl}(\operatorname{conv}(\mathbb{F}_{M}))$. ###### Proof. The proof is similar to the proof for types [16, Lemma 2.18] as well as the proof for measures [8, Proposition 4.13] (which has both a stronger assumption and conclusion). It suffices to show that for any finite collection of types $p_{1},...,p_{n}\in S_{y}(M)$ and $\epsilon>0$ there exists $\overline{a}\in(M^{x})^{<\omega}$ such that $F_{\operatorname{Av}(\overline{a}),M}^{\varphi}(p_{i})\approx_{\epsilon}F_{\mu,M}^{\varphi}(p_{i})$ for each $i\leq n$.
Fix $p_{1},...,p_{n}\in S_{y}(M)$ and $\epsilon>0$. Choose $b_{i}\models p_{i}$ for $i\leq n$. Let $q=\operatorname{tp}(N/M)\in S_{|N|}(M)$. Let $\hat{q}\in S_{|N|}(\mathcal{U})$ such that $\hat{q}\supset q$ and $\hat{q}$ is finitely satisfiable in $M$, i.e. $\hat{q}$ is a global coheir of $q$. Let $N_{1}\models\hat{q}|_{Mb_{1},...,b_{n}}$. By compactness, there exists elements $b_{1}^{\prime},...,b_{n}^{\prime}\in\mathcal{U}$ such that $\operatorname{tp}(N_{1}b_{1},...,b_{n}/M)=tp(Nb_{1}^{\prime},...,b_{n}^{\prime}/M)$. Since $\mu$ is $(M,\varphi)$-invariant, we have that $F_{\mu,M}^{\varphi}(p_{i})=\mu(\varphi(x,b_{i}))=\mu(\varphi(x,b^{\prime}_{i})),$ for each $i\leq n$. Since $\mu$ is finitely satisfiable in $N$, there exists some $m$ and $\overline{c}\in(N^{x})^{m}$ such that $\operatorname{Av}(\overline{c})(\varphi(x,b^{\prime}_{i}))\approx_{\epsilon}\mu(\varphi(x,b_{i}^{\prime}))$ for $i\leq n$. Let $B_{i}=\\{j\leq m:\models\varphi(c_{j},b^{\prime}_{i})\\}$. Now consider the formula $\theta(x_{1},...,x_{m},y_{1},...y_{n})=\bigwedge_{i\leq n}\Big{(}\bigwedge_{j\in B_{i}}\varphi(x_{j},y_{i})\wedge\bigwedge_{j\not\in B_{i}}\neg\varphi(x_{j},y_{i})\Big{)}.$ By construction $\theta(\overline{x},\overline{y})\in\operatorname{tp}(\overline{c},\overline{b^{\prime}}/M)$ and so for an appropriate choice of indices, $\theta(\overline{x},\overline{y})\in\operatorname{tp}(Nb_{1}^{\prime},...,b_{n}^{\prime}/M)$. Hence $\theta(\overline{x},\overline{y})\in\operatorname{tp}(N_{1}b_{1},...,b_{n}/M)$ and so $\theta(\overline{x},\overline{b})\in\operatorname{tp}(N_{1}/Mb_{1},...,b_{n})\subset\hat{q}$. Since $\hat{q}$ is finitely satisfiable in $M$, there exists $\overline{a}\in(M^{x})^{m}$ such that $\models\theta(\bar{a},\bar{b})$. By construction, we have that for any $i\leq n$, $F_{\operatorname{Av}(\overline{a}),M}^{\varphi}(p_{i})=\operatorname{Av}(\overline{a})(\varphi(x,b_{i}))=\operatorname{Av}(\overline{c})(\varphi(x,b^{\prime}_{i}))\approx_{\epsilon}\mu(\varphi(x,b^{\prime}_{i}))=F_{\mu,M}^{\varphi}(p_{i}).$ This concludes the proof. ∎ ###### Theorem 6.4. Fix a formula $\varphi(x,y)$ and a small model $M$ containing all the parameters from $\varphi(x,y)$. Assume that $\mu\in\mathfrak{M}_{\varphi}(\mathcal{U})$. If 1. (1) $\varphi(x;y)$ is NIP, 2. (2) $\mu$ is $\varphi$-definable over $M$, 3. (3) and $\mu$ is finitely satisfiable in $M$, Then for every $\epsilon>0$, there exists $a_{1},...,a_{n}\in M^{x}$ such that, $\sup_{b\in\mathcal{U}^{y}}|\mu(\varphi(x,b))-\operatorname{Av}(\overline{a})(\varphi(x,b))|<\epsilon.$ ###### Proof. We remark that the proof is similar to that of Proposition 3.4. Since $\mu$ is $\varphi$-definable over $M$, $\mu$ is $(M_{0},\varphi)$-invariant where $M_{0}$ is a countable submodel of $M$. By Lemma 6.3, the map $F_{\mu,M_{0}}^{\varphi}\in\operatorname{cl}(\operatorname{conv}(\mathbb{F}_{M_{0}}))$. By Fact 6.2, there exists a sequence $(g_{i})_{i\in I}$ so that $\lim_{i\to\infty}g_{i}=F_{\mu,M_{0}}^{\varphi}$. By Mazur’s lemma, for every $\epsilon>0$, there exists a finite set $I\subset\mathbb{N}$ and positive real numbers $\\{r_{i}:i\in I\\}$ such that $\sum_{i\in I}r_{i}=1$ and $\sup_{q\in S_{y}(M_{0})}|F_{\mu,M_{0}}^{\varphi}(q)-\sum_{i\in I}r_{i}g_{i}(q)|<\epsilon.$ The map $\sum_{i\in I}r_{i}g_{i}$ can clearly be uniformly approximated by an average function. 
More explicitly, there exists $\overline{d}\in(M^{x})^{<\omega}$ such that $\sup_{q\in S_{y}(M)}|\sum_{i\in I}r_{i}g_{i}(q)-F^{\varphi}_{\operatorname{Av}(\overline{d}),M}(q)|<\epsilon.$ Hence $\sup_{b\in\mathcal{U}^{y}}|\mu(\varphi(x,b))-\operatorname{Av}(\overline{d})(\varphi(x,b))|=\sup_{q\in S_{y}(M)}|F_{\mu,M}^{\varphi}(q)-F_{\operatorname{Av}(\overline{d}),M}^{\varphi}(q)|<2\epsilon,$ which completes the proof. ∎ ## References * [1] I. Ben Yaacov, _Transfer of properties between measures and random types_ , Unpublished research notes (2009). * [2] I. Ben Yaacov, _Continuous and random Vapnik-Chervonenkis classes_ , Israel Journal of Mathematics 173.1 (2009). MR 2561997 * [3] J. Bourgain, D. Fremlin, and M. Talagrand, _Pointwise compact sets of Baire-measurable functions_ , American Journal of Mathematics 100.4 (1978). * [4] A. Chernikov and K. Gannon, _Definable convolution and idempotent Keisler measures_ , arXiv:2004.10378 (2020). * [5] G. Conant and K. Gannon, _Associativity of the Morley product of invariant measures in NIP theories_ , arXiv:2104.08298 (2021). * [6] G. Conant and K. Gannon, _Remarks on generic stability in independent theories_ , Annals of Pure and Applied Logic 171.2 (2020). * [7] G. Conant, K. Gannon, and J. Hanson, _Keisler measures in the wild_ , arXiv preprint arXiv:2103.09137 (2021). * [8] K. Gannon, _Local Keisler measures and NIP formulas_ , The Journal of Symbolic Logic 84.3 (2019). * [9] E. Hrushovski and A. Pillay, _On NIP and invariant measures_ , J. Eur. Math. Soc. (JEMS) 13 (2011). * [10] E. Hrushovski, A. Pillay, and P. Simon, _Generically stable and smooth measures in NIP theories_ , Trans. Amer. Math. Soc. 365 (2013). * [11] T. Ibarlucía, _The dynamical hierarchy for Roelcke precompact Polish groups_ , Israel Journal of Mathematics 215.2 (2016). * [12] K. Khanaki, _Stability, NIP, and NSOP; model theoretic properties of formulas via topological properties of function spaces_ , arXiv preprint arXiv:1410.3339 (2014). * [13] K. Khanaki, _NIP formulas and Baire 1 definability_ , arXiv preprint arXiv:1703.08731 (2017). * [14] K. Khanaki, _Glivenko-Cantelli classes and NIP formulas_ , arXiv preprint arXiv:2103.10788 (2021). * [15] A. Pillay and P. Tanović, _Generic stability, regularity, and quasiminimality_ , Models, logics, and higher-dimensional categories, CRM Proc. Lecture Notes, vol. 53, Amer. Math. Soc., Providence, RI (2011). * [16] P. Simon, _A guide to NIP theories_ , Lecture Notes in Logic, vol. 44, Association for Symbolic Logic, Chicago, IL; Cambridge Scientific Publishers, Cambridge (2015). * [17] P. Simon, _Invariant types in NIP theories_ , Journal of Mathematical Logic 15.02 (2015). * [18] P. Simon, _Rosenthal compacta and NIP formulas_ , Fundamenta Mathematicae 231.1 (2015). * [19] V. Kadets, _A Course in Functional Analysis and Measure Theory_ , Springer International Publishing, Cham (2018).
# Revising Ontologies via Models: The ${\cal ALC}$-formula Case

Jandson S. Ribeiro, Universität Koblenz-Landau, Germany

Ricardo Guimarães, University of Bergen, Norway

Ana Ozaki, University of Bergen, Norway

###### Abstract Most approaches for repairing description logic (DL) ontologies aim at changing the axioms as little as possible while solving inconsistencies, incoherences and other types of undesired behaviours. As in Belief Change, these issues are often specified using logical formulae. Instead, in the new setting for updating DL ontologies that we propose here, the input for the change is given by a _model_ which we want to add or remove. The main goal is to minimise the loss of information, without concern for the syntactic structure. This new setting is motivated by scenarios where an ontology is built automatically and needs to be refined or updated. In such situations, the syntactical form is often irrelevant and the incoming information is not necessarily given as a formula. We define general operations and the conditions under which they are applicable, and instantiate our approach to the case of ${\cal ALC}$-formulae. Copyright © 2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). ## 1 Introduction Formal specifications often have to be updated, either due to modelling errors or because they have become obsolete. When these specifications are description logic (DL) ontologies, it is possible to use one of the many approaches to fix missing or unwanted behaviours. Usually these methods involve the removal or replacement of the formulae responsible for the undesired behaviour [28, 16, 29, 2, 20]. The problem of changing logical representations of knowledge upon the arrival of new information is the subject matter of _Belief Change_ [13]. The theory developed in this field provides constructions suitable for various formalisms and applications [13, 24, 27, 23]. In most approaches for Belief Change and for repairing ontologies, it is assumed that a set of formulae represents the entailments to be added or removed. However, in some situations, it might be easier to obtain this information as a _model_ instead. This idea relates to Model Checking [3], whose main problem is to determine whether a model satisfies a set of constraints, and to the paradigm of Learning from Interpretations [4], where a formula needs to be created or changed so as to have certain interpretations as part of its models and to exclude others from its set of models. Example 1 illustrates the intuition behind using models as input. ###### Example 1 Suppose that a system, which serves a university, uses an internal logical representation of the domain with an open-world behaviour and unique names. Let ${\mathcal{B}}$ be its current representation: $\displaystyle{\mathcal{B}}=$ $\displaystyle\left\\{Professors:\\{Mary\\},Courses:\\{DL,AI\\},\right.$ $\displaystyle\qquad\left.teaches:\\{(Mary,AI),(Mary,DL)\\}\right\\}.$ Assume that a user finds mistakes in the course schedule and this is caused by the wrong information that Mary teaches the DL course. The user may lack the knowledge to define the issue formally. An alternative would be to provide the user with an interface where one can specify, for instance, that the following model should be accepted: $M=\\{Professors=\\{Mary\\},Courses=\\{DL,AI\\},teaches=\\{(Mary,AI)\\}\\},$ (in this model Mary does not teach the DL course).
Given this input, the system should repair itself (semi-)automatically. We propose a new setting for Belief Change, in particular, contraction and expansion functions which take models as input. We analyse the case of ${\cal ALC}$-formula, using quasimodels as a means to define belief change operations. This logic satisfies properties which facilitate the design of these operations, and it is close to ${\cal ALC}$, which is a well-studied DL. Additionally, we identify the postulates which determine these functions and prove that they characterise the mathematical constructions via representation theorems. The remainder of this work is organised as follows: in Section 2 we introduce the concepts from Belief Change which our approach builds upon and detail the paradigm we propose here. Section 3 presents ${\cal ALC}$-formula, the belief operations that take models as input and their respective representation theorems. In Section 4, we highlight studies which share similarities with our proposal, and we conclude in Section 5. ## 2 Belief Change ### 2.1 The Classical Setting Belief Change [1, 13] studies the problem of how an agent should modify its knowledge in light of new information. In the original paradigm of Belief Change, the AGM theory, an agent’s body of knowledge is represented as a set of formulae closed under logical consequence, called a _belief set_, and the new information is represented as a single _formula_. Belief sets, however, are not the only way of representing an agent’s body of knowledge; an alternative is _belief bases_: arbitrary sets of formulae, not necessarily closed under logical consequence [11]. In the AGM paradigm, an agent may modify its current belief base $\mathcal{B}$ in response to a new piece of information $\varphi$ through three kinds of operations: Expansion: $\operatorname{ex}(\mathcal{B},\varphi)$, simply add $\varphi$ to ${\mathcal{B}}$; Contraction: $\operatorname{con}(\mathcal{B},\varphi)$, reduce ${\mathcal{B}}$ so that it does not imply $\varphi$; Revision: $\operatorname{rev}(\mathcal{B},\varphi)$, incorporate $\varphi$ and keep the resulting belief base consistent, as long as $\varphi$ is consistent. When modifying its body of knowledge, an agent should proceed rationally, conserving as much of its original beliefs as possible. This principle of minimal change is captured in Belief Change via sets of rationality postulates. Each of the three operations (expansion, contraction and revision) comes with its own set of rationality postulates, which precisely characterize different classes of belief change constructions. The AGM paradigm was initially proposed for classical logics that satisfy specific requirements, dubbed AGM assumptions, among them Tarskianicity, compactness and _deduction_. See [6, 24] for a complete list of the AGM assumptions and a discussion on the topic. Recently, efforts have been made to extend Belief Change to logics that do not satisfy such assumptions: for instance, logics that are not closed under classical negation of formulae (as is the case for most DLs) [24, 26], and temporal logics and logics without compactness [21, 22, 23]. In what follows, we define _kernel contraction_ [13], one of the most studied constructions in Belief Change, which is closely related to the most common ways to repair ontologies. Kernel operations rely on calculating the minimal implying sets (MinImps), also known as justifications [15] or kernels [13].
A MinImp is a minimal subset that entails a formula $\varphi$. The set of all MinImps of a belief base ${\mathcal{B}}$ w.r.t. a formula $\varphi$ is denoted by $\operatorname{MinImps}({\mathcal{B}},\varphi)$. A kernel contraction removes from each MinImp at least one formula using an _incision function_. ###### Definition 1 Given a set of formulae ${\mathcal{B}}$ of language $\mathcal{L}$, a function $f$ is an incision function for ${\mathcal{B}}$ iff for all $\varphi\in\mathcal{L}$: (i) $f(\operatorname{MinImps}({\mathcal{B}},\varphi))\subseteq\bigcup\operatorname{MinImps}({\mathcal{B}},\varphi)$ and (ii) $f(\operatorname{MinImps}({\mathcal{B}},\varphi))\cap X\neq\emptyset$, for all $X\in\operatorname{MinImps}({\mathcal{B}},\varphi)$. Kernel contraction operators are built upon incision functions. The application of an incision function to a set of MinImps is a hitting set [16, 2]. ###### Definition 2 Let $\mathcal{L}$ be a language and $f$ an incision function. The _kernel contraction_ on ${\mathcal{B}}\subseteq\mathcal{L}$ determined by $f$ is the operation $\operatorname{con}_{f}:2^{\mathcal{L}}\times\mathcal{L}\mapsto 2^{\mathcal{L}}$ defined as: $\operatorname{con}_{f}({\mathcal{B}},\varphi)={\mathcal{B}}\setminus f(\operatorname{MinImps}({\mathcal{B}},\varphi))$. Kernel contraction operations are characterised precisely by a set of rationality postulates, as shown in the following representation theorem: ###### Theorem 2.1 ([14]) Let ${\operatorname{Cn}}$ be a consequence operator satisfying monotonicity and compactness defined for a language $\mathcal{L}$. Then $\operatorname{con}:2^{\mathcal{L}}\times\mathcal{L}\mapsto 2^{\mathcal{L}}$ is an operation of kernel contraction on ${\mathcal{B}}\subseteq\mathcal{L}$ iff for all sentences $\varphi\in\mathcal{L}$: (success) if $\varphi\not\in{\operatorname{Cn}}(\emptyset)$, then $\varphi\not\in{\operatorname{Cn}}(\operatorname{con}({\mathcal{B}},\varphi))$, (inclusion) $\operatorname{con}({\mathcal{B}},\varphi)\subseteq{\mathcal{B}}$, (core-retainment) if $\psi\in{\mathcal{B}}\setminus\operatorname{con}({\mathcal{B}},\varphi)$, then there is some ${\mathcal{B}}^{\prime}\subseteq{\mathcal{B}}$ such that $\varphi\not\in{\operatorname{Cn}}({\mathcal{B}}^{\prime})$ and $\varphi\in{\operatorname{Cn}}({\mathcal{B}}^{\prime}\cup\\{\psi\\})$, (uniformity) if for all subsets ${\mathcal{B}}^{\prime}$ of ${\mathcal{B}}$, $\varphi\in{\operatorname{Cn}}({\mathcal{B}}^{\prime})$ iff $\psi\in{\operatorname{Cn}}({\mathcal{B}}^{\prime})$, then $\operatorname{con}({\mathcal{B}},\varphi)=\operatorname{con}({\mathcal{B}},\psi)$. ### 2.2 Changing Finite Bases by Models The Belief Change setting discussed in this section represents an epistemic state by means of a finite base. While this essentially differs from the traditional approach [1, 11], it aligns with the KM paradigm established by Katsuno and Mendelzon [17]. In Section 4 we discuss other studies in Belief Change which also take finite representability into account. In this work, unlike the standard representation methods in Belief Change, we consider that an incoming piece of information is represented as a finite model. Belief Change operations defined in this format will be called model change operations. Recall that a model $M$ is simply a structure used to give semantics to an underlying logical language. The set of all possible models is given by $\mathfrak{M}$.
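Before turning to the semantic setting used in the remainder of this section, the kernel contraction machinery of Definitions 1 and 2 can be made concrete with a small computational sketch. The sketch below is ours and only illustrative: it works for classical propositional logic, represents formulae as Python predicates over truth assignments, uses a brute-force entailment check, and implements a deliberately naive incision function (picking the first formula of every MinImp); none of these names appear in the paper.

```python
from itertools import combinations, product

def entails(base, formula, atoms):
    """Brute-force propositional entailment: every assignment over `atoms`
    satisfying all formulae in `base` also satisfies `formula`."""
    for values in product([True, False], repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        if all(f(assignment) for f in base) and not formula(assignment):
            return False
    return True

def min_imps(base, formula, atoms):
    """All minimal subsets of `base` entailing `formula` (the MinImps)."""
    base, found = list(base), []
    for size in range(len(base) + 1):
        for subset in combinations(base, size):
            if any(set(m) <= set(subset) for m in found):
                continue  # contains a smaller MinImp, hence not minimal
            if entails(subset, formula, atoms):
                found.append(subset)
    return found

def incision(minimps):
    """A trivial incision function: choose one element of every MinImp,
    as required by condition (ii) of Definition 1."""
    return {m[0] for m in minimps if m}

def kernel_contraction(base, formula, atoms):
    """Definition 2: remove the formulae selected by the incision function."""
    cut = incision(min_imps(base, formula, atoms))
    return [f for f in base if f not in cut]

# Example: B = {p, p -> q}; contracting by q removes one formula of the
# single MinImp {p, p -> q}, so exactly one formula remains.
p = lambda v: v["p"]
p_implies_q = lambda v: (not v["p"]) or v["q"]
q = lambda v: v["q"]
print(len(kernel_contraction([p, p_implies_q], q, ["p", "q"])))  # 1
```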
Moreover, we assume a semantic system that, for each set of formulae ${\mathcal{B}}$ of the language $\mathcal{L}$, gives a set of models $\operatorname{Mod}({{\mathcal{B}}})\coloneqq\\{M\in\mathfrak{M}\mid\forall\varphi\in{\mathcal{B}}:M\models\varphi\\}$. Let $\operatorname{\operatorname{\mathcal{P}}_{fin}}(\mathcal{L})$ denote the set of all finite bases in $\mathcal{L}$. We also say that a set of models $\mathbb{M}$ is _finitely representable in $\mathcal{L}$_ if there is a finite base ${\mathcal{B}}\in\operatorname{\operatorname{\mathcal{P}}_{fin}}(\mathcal{L})$ such that $\operatorname{Mod}({{\mathcal{B}}})=\mathbb{M}$. Additionally, if for all $\varphi\in\mathcal{L}$ it holds that $M\models\varphi$ iff $M^{\prime}\models\varphi$, then we write $M\equiv^{\mathcal{L}}M^{\prime}$. We also define ${[{M}]^{\mathcal{L}}}\coloneqq\\{M^{\prime}\in\mathfrak{M}\mid M^{\prime}\equiv^{\mathcal{L}}M\\}$. When compared to traditional methods in Belief Change and Ontology Repair [13, 16, 2], where the incoming information comes as a single formula, our approach receives instead a single model as input. Although the initial body of knowledge is represented as a finite base, the operations we define do not aim to preserve its syntactic structure. The first model change operation we introduce is model contraction, which eliminates one of the models of the current base (which in Section 3 is instantiated as an ontology). Model contraction is akin to a belief expansion, where a formula is added to the belief set or base, reducing the set of accepted models. The counterpart operation, model expansion, changes the base to include a new model. This relates to belief contraction, in which a formula is removed, and thus more models are seen as plausible. We rewrite the rationality postulates that characterize kernel contraction [14], considering an incoming piece of information represented as a model instead of a single formula. ###### Definition 3 (Model Contraction) Let $\mathcal{L}$ be a language. A function $\operatorname{con}:\operatorname{\operatorname{\mathcal{P}}_{fin}}(\mathcal{L})\times\mathfrak{M}\mapsto\operatorname{\operatorname{\mathcal{P}}_{fin}}(\mathcal{L})$ is a _finitely representable model contraction function_ iff for every $\mathcal{B}\in\operatorname{\operatorname{\mathcal{P}}_{fin}}(\mathcal{L})$ and $M\in\mathfrak{M}$ it satisfies the following postulates: (success) $M\not\in\operatorname{Mod}({\operatorname{con}(\mathcal{B},M)})$, (inclusion) $\operatorname{Mod}({\operatorname{con}(\mathcal{B},M)})\subseteq\operatorname{Mod}({\mathcal{B}})$, (retainment) if $M^{\prime}\in\operatorname{Mod}({\mathcal{B}})\setminus\operatorname{Mod}({\operatorname{con}(\mathcal{B},M)})$ then $M^{\prime}\equiv^{\mathcal{L}}M$, (extensionality) $\operatorname{con}({\mathcal{B}},M)=\operatorname{con}({\mathcal{B}},M^{\prime})$, if $M\equiv^{\mathcal{L}}M^{\prime}$. We might also need to add a model to the set of models of the current base. This addition relates to classical contractions in Belief Change, which _reduce_ the belief base. ###### Definition 4 (Model Expansion) Let $\mathcal{L}$ be a language.
A function $\operatorname{ex}:\operatorname{\operatorname{\mathcal{P}}_{fin}}(\mathcal{L})\times\mathfrak{M}\mapsto\operatorname{\operatorname{\mathcal{P}}_{fin}}(\mathcal{L})$ is a _finitely representable model expansion_ iff for every ${\mathcal{B}}\in\operatorname{\operatorname{\mathcal{P}}_{fin}}(\mathcal{L})$ and $M\in\mathfrak{M}$ it satisfies the postulates: (success) $M\in\operatorname{Mod}({\operatorname{ex}({\mathcal{B}},M)}),$ (persistence) $\operatorname{Mod}({{\mathcal{B}}})\subseteq\operatorname{Mod}({\operatorname{ex}({\mathcal{B}},M)}),$ (vacuity) $\operatorname{Mod}({\operatorname{ex}({\mathcal{B}},M)})=\operatorname{Mod}({{\mathcal{B}}}),$ if $M\in\operatorname{Mod}({{\mathcal{B}}})$, (extensionality) $\operatorname{ex}({\mathcal{B}},M)=\operatorname{ex}({\mathcal{B}},M^{\prime})$, if $M\equiv^{\mathcal{L}}M^{\prime}$. ###### Definition 5 Let $\mathcal{L}$ be a language and ${\operatorname{Cn}}$ a Tarskian consequence operator defined over $\mathcal{L}$. Also let $\mathfrak{M}$ be a fixed set of models. We say that a triple ${\Lambda}=(\mathcal{L},{\operatorname{Cn}},\mathfrak{M})$ is an _ideal logical system_ if the following hold. * • For every ${\mathcal{B}}\subseteq\mathcal{L}$ and $\varphi\in\mathcal{L}$, ${\mathcal{B}}\models\varphi$ (i.e. $\varphi\in{\operatorname{Cn}}({\mathcal{B}})$) iff $\operatorname{Mod}({{\mathcal{B}}})\subseteq\operatorname{Mod}({\varphi})$. * • For each $\mathbb{M}\subseteq\mathfrak{M}$ there is a finite set of formulae ${\mathcal{B}}$ such that $\operatorname{Mod}({{\mathcal{B}}})=\mathbb{M}$. If ${\Lambda}=(\mathcal{L},{\operatorname{Cn}},\mathfrak{M})$ is an ideal logical system, we can define a function ${\operatorname{FR}}_{\Lambda}:2^{\mathfrak{M}}\mapsto\operatorname{\operatorname{\mathcal{P}}_{fin}}(\mathcal{L})$ such that $\operatorname{Mod}({{\operatorname{FR}}(\mathbb{M})})=\mathbb{M}$. Then, we can define model contraction as $\operatorname{con}({\mathcal{B}},M)={\operatorname{FR}}(\operatorname{Mod}({{\mathcal{B}}})\setminus{[{M}]^{\mathcal{L}}})$ and expansion as $\operatorname{ex}({\mathcal{B}},M)={\operatorname{FR}}(\operatorname{Mod}({{\mathcal{B}}})\cup{[{M}]^{\mathcal{L}}})$. The first condition in Definition 5 connects the models of a base with its logical consequences, and the second ensures that the result always exists. An example that fits these requirements is classical propositional logic with a finite signature $\Sigma$, together with its usual consequence operator and models. In this situation, we can define ${\operatorname{FR}}_{prop}$ as follows: ${\operatorname{FR}}_{prop}(\mathbb{M})=\bigvee_{M\in\mathbb{M}}\left(\bigwedge_{a\in\Sigma\mid M\models a}a\land\bigwedge_{a\in\Sigma\mid M\models\neg{a}}\neg{a}\right).$ Next, we show that the construction proposed with ${\operatorname{FR}}$ has the properties stated in Definitions 3 and 4. ###### Theorem 2.2 Let $(\mathcal{L},{\operatorname{Cn}},\mathfrak{M})$ be an ideal logical system as in Definition 5. Then $\operatorname{iCon}({\mathcal{B}},M)\coloneqq{\operatorname{FR}}\left(\operatorname{Mod}({{\mathcal{B}}})\setminus{[{M}]^{\mathcal{L}}}\right)$ satisfies the postulates in Definition 3. ###### Proof Definition 5 ensures that the result exists and that $\operatorname{Mod}({\operatorname{iCon}({\mathcal{B}},M)})=\operatorname{Mod}({{\mathcal{B}}})\setminus{[{M}]^{\mathcal{L}}}$; since $M\in{[{M}]^{\mathcal{L}}}$, we have $M\not\in\operatorname{Mod}({\operatorname{iCon}({\mathcal{B}},M)})$, giving us success. By construction we do not gain models, thus we have inclusion.
If $M\equiv^{\mathcal{L}}M^{\prime}$, then ${[{M}]^{\mathcal{L}}}={[{M^{\prime}}]^{\mathcal{L}}}$, thus extensionality is satisfied. Also, if $M^{\prime}\in\operatorname{Mod}({\varphi})\setminus\operatorname{Mod}({\operatorname{iCon}(\varphi,M)})$ then $M^{\prime}\in{[{M}]^{\mathcal{L}}}$, hence the operation satisfies retainment. ###### Theorem 2.3 Let $(\mathcal{L},{\operatorname{Cn}},\mathfrak{M})$ be an ideal logical system as in Definition 5. Then $\operatorname{iExp}({\mathcal{B}},M)\coloneqq{\operatorname{FR}}\left(\operatorname{Mod}({{\mathcal{B}}})\cup{[{M}]^{\mathcal{L}}}\right)$ satisfies the postulates in Definition 4. ###### Proof Definition 5 ensures that the result exists and that $M\in\operatorname{Mod}({\operatorname{iExp}({\mathcal{B}},M)})$, giving us success. We also obtain vacuity: if $M\in\operatorname{Mod}({{\mathcal{B}}})$ then ${[{M}]^{\mathcal{L}}}\subseteq\operatorname{Mod}({{\mathcal{B}}})$, so there will be no changes in the accepted models. By construction we do not lose models, thus we have persistence. Extensionality also holds because whenever $M\equiv^{\mathcal{L}}M^{\prime}$ we then have ${[{M}]^{\mathcal{L}}}={[{M^{\prime}}]^{\mathcal{L}}}$. A revision operation incorporates new formulae and removes potential conflicts for the sake of consistency. In our setting, incorporating information coincides with model contraction, which could lead to an inconsistent belief state. In this case, model revision could be interpreted as a conditional model contraction: in some cases the removal might be rejected to preserve consistency. We leave the study of revision as future work. ## 3 The case of $\mathcal{ALC}$-formula The logic $\mathcal{ALC}$-formula corresponds to the DL $\mathcal{ALC}$ enriched with boolean operators over ${\cal ALC}$ axioms. As discussed in Section 2.2, in finitely representable logics, such as classical propositional logic, we can easily add and remove models while keeping the representation finite. For ${\cal ALC}$-formula, however, it is not possible to uniquely add or remove a new model $M$ since, for instance, the language does not distinguish quantities (e.g., a model $M$ and another model that has two duplicates of $M$). Even if quantities are disregarded and our input is a class of models indistinguishable by ${\cal ALC}$-formulae, there are sets of formulae in this language that are not finitely representable, as for instance the following infinite set: $\\{C\sqsubseteq\exists r^{n}.\top\mid n\in\mathbb{N}^{>0}\\}$, where $\exists r^{n+1}.\top$ is a shorthand for $\exists r.(\exists r^{n}.\top)$ and $\exists r^{1}.\top\coloneqq\exists r.\top$. As a workaround for the ${\cal ALC}$-formula case, we propose a new strategy based on the translation of $\mathcal{ALC}$-formulae into DNF. ### 3.1 ${\cal ALC}$-formulae and Quasimodels Let ${\sf N_{C}}$, ${\sf N_{R}}$ and ${\sf N_{I}}$ be countably infinite and pairwise disjoint sets of concept names, role names, and individual names, respectively. ${\cal ALC}$ _concepts_ are built according to the rule: $C::=A\mid\neg C\mid(C\sqcap C)\mid\exists r.C$, where $A\in{\sf N_{C}}$ and $r\in{\sf N_{R}}$. ${\cal ALC}$-_formulae_ are defined as expressions $\phi$ of the form $\phi::=\alpha\mid\neg(\phi)\mid(\phi\wedge\phi)\quad\quad\alpha::=C(a)\mid r(a,b)\mid(C=\top),$ where $C$ is an ${\cal ALC}$ concept, $a,b\in{\sf N_{I}}$, and $r\in{\sf N_{R}}$. We may omit parentheses if there is no risk of confusion.
The usual concept inclusions $C\sqsubseteq D$ can be expressed as $(\neg C\sqcup D=\top)$, since $C\sqsubseteq D$ holds iff both $\top\sqsubseteq\neg C\sqcup D$ and $\neg C\sqcup D\sqsubseteq\top$ hold. Denote by ${\sf ind}(\varphi)$ the set of all individual names occurring in an ${\cal ALC}$-formula $\varphi$. The semantics of ${\cal ALC}$-formulae and the definitions related to quasimodels are standard [8, page 70]. In what follows, we reproduce the essential definitions and results for this work. Let $\varphi$ be an ${\cal ALC}$-formula. Let ${\sf f}(\varphi)$ and ${\sf c}(\varphi)$ be the set of all subformulae and subconcepts of $\varphi$ closed under single negation, respectively. ###### Definition 6 A _concept type_ for $\varphi$ is a subset ${\mathbf{c}}\subseteq{\sf c}(\varphi)$ such that: 1. 1. $D\in{\mathbf{c}}$ iff $\neg D\not\in{\mathbf{c}}$, for all $D\in{\sf c}(\varphi)$; 2. 2. $D\sqcap E\in{\mathbf{c}}$ iff $\\{D,E\\}\subseteq{\mathbf{c}}$, for all $D\sqcap E\in{\sf c}(\varphi)$. ###### Definition 7 A _formula type_ for $\varphi$ is a subset ${\mathbf{f}}\subseteq{\sf f}(\varphi)$ such that: 1. 1. $\phi\in{\mathbf{f}}$ iff $\neg\phi\not\in{\mathbf{f}}$, for all $\phi\in{\sf f}(\varphi)$; 2. 2. $\phi\wedge\psi\in{\mathbf{f}}$ iff $\\{\phi,\psi\\}\subseteq{\mathbf{f}}$, for all $\phi\wedge\psi\in{\sf f}(\varphi)$. We may omit ‘for $\varphi$’ if this is clear from the context. A _model candidate_ for $\varphi$ is a triple $(T,o,{\mathbf{f}})$ such that $T$ is a set of concept types, $o$ is a function from ${\sf ind}(\varphi)$ to $T$, ${\mathbf{f}}$ is a formula type, and $(T,o,{\mathbf{f}})$ satisfies the conditions: $\varphi\in{\mathbf{f}}$; $C(a)\in{\mathbf{f}}$ implies $C\in o(a)$; $r(a,b)\in{\mathbf{f}}$ implies $\\{\neg C\mid\neg\exists r.C\in o(a)\\}\subseteq o(b)$. ###### Definition 8 (Quasimodel) A model candidate $(T,o,{\mathbf{f}})$ for $\varphi$ is a _quasimodel_ for $\varphi$ if the following hold: * • for every concept type ${\mathbf{c}}\in T$ and every $\exists r.D\in{\mathbf{c}}$, there is ${\mathbf{c}}^{\prime}\in T$ such that $\\{D\\}\cup\\{\neg E\mid\neg\exists r.E\in{\mathbf{c}}\\}\subseteq{\mathbf{c}}^{\prime}$; * • for every concept type ${\mathbf{c}}\in T$ and every concept $C$, if $\neg C\in{\mathbf{c}}$ then $(C=\top)\not\in{\mathbf{f}}$; * • for every concept $C$, if $\neg(C=\top)\in{\mathbf{f}}$ then there is ${\mathbf{c}}\in T$ such that $C\not\in{\mathbf{c}}$; * • $T$ is not empty. Theorem 3.1 motivates our decision to use quasimodels to implement our operations on finite bases described by ${\cal ALC}$-formulae. ###### Theorem 3.1 (Theorem 2.27 [8]) An $\mathcal{ALC}$-formula $\varphi$ is satisfiable iff there is a quasimodel for $\varphi$. ### 3.2 ${\cal ALC}$-formulae in Disjunctive Normal Form Next, we propose a translation method which converts an ${\cal ALC}$-formula into a disjunction of conjunctions of (possibly negated) atomic formulae. Let ${\sf S}(\varphi)$ be the set of all quasimodels for $\varphi$. We define $\varphi^{\dagger}$ as $\bigvee_{(T,o,{\mathbf{f}})\in{\sf S}(\varphi)}(\bigwedge_{\alpha\in{\mathbf{f}}}\alpha\wedge\bigwedge_{\neg\alpha\in{\mathbf{f}}}\neg\alpha),$ where $\alpha$ is of the form $(C=\top)$, $C(a)$ or $r(a,b)$. Theorem 3.2 confirms the equivalence between a formula and its translation into DNF. As a downside, the translation can be exponentially larger than the original formula. ###### Theorem 3.2 () For every ${\cal ALC}$-formula $\varphi$, we have that $\varphi\equiv\varphi^{\dagger}$.
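The translation can be illustrated with a small sketch of ours. It deliberately simplifies matters: atomic ${\cal ALC}$-formulae are treated as propositional atoms, so only the formula types of Definition 7 are enumerated and the concept-type conditions of Definition 8 are ignored; it therefore assumes that every propositionally consistent formula type extends to a quasimodel, which need not hold in general. The tuple encoding and all function names are ours.

```python
from itertools import product

# ALC-formulae encoded as nested tuples: ("atom", s) for an atomic formula
# such as "C(a)", "r(a,b)" or "C=TOP"; ("not", f); ("and", f, g).

def atoms(phi):
    """The atomic formulae occurring in phi."""
    if phi[0] == "atom":
        return {phi[1]}
    if phi[0] == "not":
        return atoms(phi[1])
    return atoms(phi[1]) | atoms(phi[2])

def evaluate(phi, val):
    """Truth value of phi under an assignment of its atoms."""
    if phi[0] == "atom":
        return val[phi[1]]
    if phi[0] == "not":
        return not evaluate(phi[1], val)
    return evaluate(phi[1], val) and evaluate(phi[2], val)

def dnf_disjuncts(phi):
    """The literal parts lit(f) of the formula types containing phi, i.e.
    the disjuncts of the translation, one per consistent formula type."""
    names = sorted(atoms(phi))
    disjuncts = []
    for values in product([True, False], repeat=len(names)):
        val = dict(zip(names, values))
        if evaluate(phi, val):
            disjuncts.append(val)  # the sign of every atom in this type
    return disjuncts

# The formula (C=TOP) AND NOT(C(a) AND NOT r(a,b)) has exactly three such
# formula types (compare Example 4 in the appendix).
phi = ("and", ("atom", "C=TOP"),
       ("not", ("and", ("atom", "C(a)"), ("not", ("atom", "r(a,b)")))))
for d in dnf_disjuncts(phi):
    print(d)
```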
In the next subsections, we present finite base model change operations for ${\cal ALC}$-formulae, i.e., functions from $\mathcal{L}\times\mathfrak{M}$ to $\mathcal{L}$. We can represent the body of knowledge as a single formula because every finite belief base of ${\cal ALC}$-formulae can be represented by the conjunction of its elements. We use our translation to add models in a “minimal” way by _adding disjuncts_, while removing a model amounts to _removing disjuncts_. We also need to obtain a model candidate relative to our translated formula, as shown in Definition 9. ###### Definition 9 ([8]) Let $\mathcal{I}$ be an interpretation and $\varphi$ an ${\cal ALC}$-formula. The quasimodel of $\mathcal{I}$ w.r.t. $\varphi$, in symbols $\operatorname{qm}({\varphi},{\mathcal{I}})=(T,o,{\mathbf{f}})$, is given by * • $T\coloneqq\\{c(x)\mid x\in\Delta^{\mathcal{I}}\\}$, where $c(x)=\\{C\in{\sf c}(\varphi)\mid x\in C^{\mathcal{I}}\\}$, * • $o(a)\coloneqq c(a^{\mathcal{I}})$, for all $a\in{\sf ind}(\varphi)$, * • ${\mathbf{f}}\coloneqq\\{\psi\in{\sf f}(\varphi)\mid\mathcal{I}\models\psi\\}.$ ### 3.3 Model Contraction for $\mathcal{ALC}$-formulae We define model contraction for $\mathcal{ALC}$-formulae using the notion of quasimodels discussed previously and a correspondence between models and quasimodels. We use the following operator, denoted $\operatorname{\mu}$, to define model contraction in Definition 10. Let $\varphi$ be an ${\cal ALC}$-formula and let $M$ be a model. Then, $\operatorname{\mu}(\varphi,M)=\operatorname{ftypes}({\varphi})\setminus\\{{\mathbf{f}}\\},\mbox{ where }\operatorname{qm}(\varphi,M)=(T,o,{\mathbf{f}})$ and $\operatorname{ftypes}({\varphi})$ is the set of all formula types in all quasimodels for $\varphi$, that is: $\operatorname{ftypes}({\varphi})=\\{{\mathbf{f}}\mid(T,o,{\mathbf{f}})\in{\sf S}(\varphi)\\}.$ Let $lit({\mathbf{f}})\coloneqq\\{\ell\in{\mathbf{f}}\mid\ell\mbox{ is a literal }\\}$ be the set of all literals in a formula type ${\mathbf{f}}$. ###### Definition 10 A _finite base model contraction function_ is a function $\operatorname{con}:\mathcal{L}\times\mathfrak{M}\mapsto\mathcal{L}$ such that $\displaystyle\operatorname{con}(\varphi,M)$ $\displaystyle{=}\left\\{\begin{array}[]{cl}\bigvee\limits_{{\mathbf{f}}\in\operatorname{\mu}(\varphi,M)}\bigwedge lit({\mathbf{f}}),&\mbox{if }M\models\varphi\mbox{ and }\operatorname{\mu}(\varphi,M)\neq\emptyset\\\ \bot&\mbox{if }M\models\varphi\mbox{ and }\operatorname{\mu}(\varphi,M)=\emptyset\\\ \varphi&\mbox{otherwise}.\end{array}\right.$ As we see later in this section, there are models $M,M^{\prime}$ such that $M{\not\equiv}^{\mathcal{L}}M^{\prime}$ but our operations based on quasimodels cannot distinguish them. Given ${\cal ALC}$-formulae $\varphi,\psi$, we say that $\psi$ is _in the language of the literals of $\varphi$_, written $\psi\in{\mathcal{L}_{lit}({\varphi})}$, if $\psi$ is a boolean combination of the atoms in $\varphi$. Our operations partition the models according to this restricted language. We write $M\equiv^{\varphi}M^{\prime}$ instead of $M\equiv^{\mathcal{L}_{lit}({\varphi})}M^{\prime}$, and ${{[{M}]^{\varphi}}}$ instead of $[M]^{\mathcal{L}_{lit}({\varphi})}$ for conciseness. ###### Theorem 3.3 () Let $M$ be a model and $\varphi$ an ${\cal ALC}$-formula.
A finite base model function $\operatorname{con}^{*}(\varphi,M)$ is equivalent to $\operatorname{con}(\varphi,M)$ iff $\operatorname{con}^{*}$ satisfies: (success) $M\not\models\operatorname{con}^{*}(\varphi,M)$, (inclusion) $\operatorname{Mod}({\operatorname{con}^{*}(\varphi,M)})\subseteq\operatorname{Mod}({\varphi})$, (atomic retainment): For all $\mathbb{M}^{\prime}\subseteq\mathfrak{M}$, if $\operatorname{Mod}({\operatorname{con}^{*}({\varphi},{M})})\subset\mathbb{M}^{\prime}\subseteq\operatorname{Mod}({{\varphi}})\setminus{{[{{M}}]^{\varphi}}}$ then $\mathbb{M}^{\prime}$ is not finitely representable in ${\cal ALC}$-formula. (atomic extensionality) if $M^{\prime}\equiv^{\varphi}M$ then $\operatorname{Mod}({\operatorname{con}^{*}(\varphi,M)})=\operatorname{Mod}({\operatorname{con}^{*}(\varphi,M^{\prime})}).$ The postulate of success guarantees that $M$ will indeed be relinquished, while inclusion imposes that no model will be gained during a contraction operation. Recall that, in order to guarantee finite representability, it might be necessary to remove $M$ jointly with other models. The postulate of atomic retainment captures a notion of minimal change, dictating which models are allowed to be removed together with $M$. On the other hand, atomic extensionality imposes that if two models $M$ and $M^{\prime}$ satisfy the same formulae within the literals of the current knowledge base $\varphi$, then contracting by them should yield the same result. A simpler way of implementing model contraction, also based on quasimodels, is given in Definition 11. ###### Definition 11 Let $\varphi$ be an ${\cal ALC}$-formula and $M$ a model. Also, let $(T,o,{\mathbf{f}})=\operatorname{qm}({\varphi},{M})$. The function $\operatorname{con}_{s}(\varphi,M)$ is defined as follows: $\displaystyle\operatorname{con}_{s}(\varphi,M)=\left\\{\begin{array}[]{cl}\varphi\land\neg(\bigwedge lit({\mathbf{f}}))&\text{if }M\models\varphi\\\ \varphi&\text{otherwise.}\end{array}\right.$ Example 2 illustrates how $\operatorname{con}_{s}$ works. ###### Example 2 Consider the following ${\cal ALC}$-formula and interpretation $M$: $\displaystyle\varphi\coloneqq$ $\displaystyle P(Mary)\land C(DL)\land C(AI)\land\left((teaches(Mary,DL)\right.\land$ $\displaystyle\left.\neg{teaches(Mary,AI)})\lor(\neg{teaches(Mary,DL)}\land teaches(Mary,AI))\right)$ and $M=(\Delta^{\mathcal{I}},\cdot^{\mathcal{I}})$, where $\Delta^{\mathcal{I}}=\\{m,d,a\\}$, $C^{\mathcal{I}}=\\{d,a\\}$, $P^{\mathcal{I}}=\\{m\\}$, $teaches^{\mathcal{I}}=\\{(m,d)\\}$, $Mary^{\mathcal{I}}=m$, $AI^{\mathcal{I}}=a$, and $DL^{\mathcal{I}}=d$. Assume we want to remove $M$ from $\operatorname{Mod}({\varphi})$. Let $\operatorname{qm}({\varphi},{M})=(T,o,{\mathbf{f}})$. Thus, $\displaystyle lit({\mathbf{f}})$ $\displaystyle=\\{\neg teaches(m,a),teaches(m,d),C(d),C(a),P(m)\\}$ $\displaystyle\operatorname{con}_{s}(\varphi,M)$ $\displaystyle=\varphi\land\neg\bigwedge lit({\mathbf{f}})$ $\displaystyle=\varphi\land\neg\left(\neg teaches(m,a)\land teaches(m,d)\land C(d)\land C(a)\land P(m)\right).$ The two model contraction operations $\operatorname{con}$ and $\operatorname{con}_{s}$ are equivalent. ###### Theorem 3.4 () For every ${\cal ALC}$-formula $\varphi$ and model $M$, $\operatorname{con}(\varphi,M){\equiv}\operatorname{con}_{s}(\varphi,M)$. ### 3.4 Model Expansion in $\mathcal{ALC}$-formulae In this section, we investigate model expansion for ${\cal ALC}$-formulae. Recall that we assume that a knowledge base is represented as a single ${\cal ALC}$-formula $\varphi$.
Expansion consists of adding an input model $M$ to the current knowledge base $\varphi$, with the requirement that the new epistemic state can also be represented as a finite formula. ###### Definition 12 Given a quasimodel $(T,o,{\mathbf{f}})$, we write $\bigwedge(T,o,{\mathbf{f}})$ as a short-cut for $\bigwedge lit({\mathbf{f}})$. A _finite base model expansion_ is a function $\operatorname{ex}:\mathcal{L}\times\mathfrak{M}\to\mathcal{L}$ s.t.: $\displaystyle\operatorname{ex}(\varphi,M)$ $\displaystyle=\left\\{\begin{array}[]{cl}\varphi&\mbox{if }M\models\varphi\\\ \varphi\lor\bigwedge qm(\neg\varphi,M)&\mbox{otherwise}.\end{array}\right.$ Example 3 illustrates how $\operatorname{ex}$ works. ###### Example 3 Consider the interpretation $M$ from Example 2 and $\varphi\coloneqq P(Mary)\land C(DL)\land C(AI)\land teaches(Mary,AI)\land\neg{teaches(Mary,DL)}.$ Assume we want to add $M$ to $\operatorname{Mod}({\varphi})$ and $qm(\neg\varphi,M)=(T,o,{\mathbf{f}})$. Thus, $\displaystyle lit({\mathbf{f}})$ $\displaystyle=\\{\neg teaches(m,a),teaches(m,d),C(d),C(a),P(m)\\}$ $\displaystyle\operatorname{ex}(\varphi,M)$ $\displaystyle=\varphi\lor\bigwedge lit({\mathbf{f}})$ $\displaystyle=\varphi\lor\left(\neg teaches(m,a)\land teaches(m,d)\land C(d)\land C(a)\land P(m)\right).$ The operation ‘$\operatorname{ex}$’ takes a current knowledge base, represented as a single formula $\varphi$, and maps it to a new knowledge base that is satisfied by the input model $M$. The intuition is that ‘$\operatorname{ex}$’ modifies the current knowledge base only if $M$ does not satisfy $\varphi$. This modification is carried out by forming the disjunction of $\varphi$ with a formula $\psi$ that is satisfied by $M$. This guarantees that $M$ is present in the new epistemic state and that models of $\varphi$ are not discarded. The trick is to find an appropriate formula $\psi$, which is obtained by taking the conjunction of all the literals within the quasimodel $qm(\neg\varphi,M)$. Here, the quasimodel needs to be centred on $\neg\varphi$ because $M\not\models\varphi$, and therefore it is not possible to construct a quasimodel based on $M$ centred on $\varphi$. As discussed in the prelude of this section, this strategy adds not only $M$ to the new knowledge base but also its whole equivalence class modulo the literals of $\varphi$. ###### Lemma 1 () For every ${\cal ALC}$-formula $\varphi$ and model $M$: $\displaystyle\operatorname{Mod}({\operatorname{ex}(\varphi,M)})=\operatorname{Mod}({\varphi})\cup{{[{M}]^{\varphi}}}.$ Actually, any operation that adds precisely the equivalence class of $M$ modulo the literals is equivalent to ‘$\operatorname{ex}$’. In the following, we write $\operatorname{ex}^{*}(\varphi,M)$ to refer to an arbitrary finite base expansion function of the form $\operatorname{ex}^{*}:\mathcal{L}\times\mathfrak{M}\mapsto\mathcal{L}$. ###### Theorem 3.5 () For every $\operatorname{ex}^{*}$, if $\operatorname{Mod}({\operatorname{ex}^{*}(\varphi,M)})=\operatorname{Mod}({\varphi})\cup{{[{M}]^{\varphi}}}$ then * (i) $\operatorname{ex}^{*}(\varphi,M)\equiv\varphi$, if $M\models\varphi$; and * (ii) $\operatorname{ex}^{*}(\varphi,M)\equiv\varphi\lor\bigwedge qm(\neg\varphi,M)$, if $M\not\models\varphi$. Our next step is to investigate the rationality of ‘$\operatorname{ex}^{*}$’.
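Before turning to the rationality postulates, the contraction and expansion operations of Definition 11 and Definition 12 can be put side by side in a small sketch of ours. As with the earlier sketch, atomic ${\cal ALC}$-formulae are treated as propositional atoms, the input model is represented only by the truth values it assigns to the atomic formulae occurring in $\varphi$ (which is exactly the information carried by $lit({\mathbf{f}})$), and all concept-level reasoning is left out; the encoding and names are not from the paper.

```python
from functools import reduce

# Formulae as nested tuples: ("atom", s), ("not", f), ("and", f, g), ("or", f, g).
def evaluate(phi, val):
    tag = phi[0]
    if tag == "atom":
        return val[phi[1]]
    if tag == "not":
        return not evaluate(phi[1], val)
    left, right = evaluate(phi[1], val), evaluate(phi[2], val)
    return (left and right) if tag == "and" else (left or right)

def literal_conjunction(val):
    """Conjunction of the (possibly negated) atomic formulae satisfied by the
    model, i.e. the formula corresponding to lit(f)."""
    lits = [("atom", a) if truth else ("not", ("atom", a))
            for a, truth in sorted(val.items())]
    return reduce(lambda f, g: ("and", f, g), lits)

def contract_s(phi, val):
    """Definition 11: conjoin the negated literal description of M when M satisfies phi."""
    return ("and", phi, ("not", literal_conjunction(val))) if evaluate(phi, val) else phi

def expand(phi, val):
    """Definition 12: add the literal description of M as a new disjunct when M falsifies phi."""
    return phi if evaluate(phi, val) else ("or", phi, literal_conjunction(val))

# With the model of Example 2 encoded as
#   val = {"teaches(m,a)": False, "teaches(m,d)": True,
#          "C(d)": True, "C(a)": True, "P(m)": True},
# contract_s yields phi AND NOT(...) exactly as in Example 2, and expand
# applied to the formula of Example 3 yields the disjunction shown there.
```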
As expected, adding the whole equivalence class of $M$ with respect to ${\mathcal{L}_{lit}({\varphi})}$ does not come for free: some rationality postulates are captured, while others are lost: ###### Theorem 3.6 () Let $M$ be a model and $\varphi$ an ${\cal ALC}$-formula. A finite base model function $\operatorname{ex}^{*}(\varphi,M)$ is equivalent to $\operatorname{ex}(\varphi,M)$ iff $\operatorname{ex}^{*}$ satisfies: (success) ${M}\in\operatorname{Mod}({\operatorname{ex}^{*}(\varphi,{M})})$. (persistence): $\operatorname{Mod}({\varphi})\subseteq\operatorname{Mod}({\operatorname{ex}^{*}(\varphi,{M})})$. (atomic temperance): For all $\mathbb{M}^{\prime}\subseteq\mathfrak{M}$, if $\operatorname{Mod}({\varphi})\cup{{[{{M}}]^{\varphi}}}\subseteq\mathbb{M}^{\prime}\subset\operatorname{Mod}({\operatorname{ex}^{*}(\varphi,{M})})\cup\\{{M}\\}$ then $\mathbb{M}^{\prime}$ is not finitely representable in ${\cal ALC}$-formula. (atomic extensionality) if $M^{\prime}\equiv^{\varphi}M$ then $\operatorname{Mod}({\operatorname{ex}^{*}(\varphi,M)})=\operatorname{Mod}({\operatorname{ex}^{*}(\varphi,M^{\prime})}).$ The postulates success and persistence come from requiring that $M$ will be absorbed and that models will not be lost during an expansion. The atomic extensionality postulate states that if two models satisfy exactly the same literals within $\varphi$, then expanding by them should yield the same result. Atomic temperance captures a principle of minimality and guarantees that when adding $M$, the loss of information should be minimised. Precisely, the only formulae allowed to be given up are those that are incompatible with $M$ modulo the literals of $\varphi$. Lemma 1 and Theorem 3.6 prove that the ‘$\operatorname{ex}$’ operation is characterized by the postulates: success, persistence, atomic temperance and atomic extensionality. ## 4 Related Work In the foundational paradigm of Belief Change, the AGM theory, bases have been used in the literature with two main purposes: as a finite representation of the knowledge of an agent [19, 5], and as a way of distinguishing an agent’s explicit knowledge [11]. Even though the AGM theory cannot be directly applied to DLs, because most of these logics do not satisfy the prerequisites known as the AGM-assumptions [7], it has been studied and adapted to DLs [6, 25]. The syntactic connectivity in a knowledge base has strong consequences for how an agent should modify its knowledge [13]. This sensitivity to syntax is also present in Ontology Repair and Evolution. Classical approaches preserve the syntactic form of the ontology as much as possible [16, 28]. However, these approaches may lead to a drastic loss of information, as noticed by Hansson [10]. This problem has been studied in Belief Change for pseudo-contraction [27]. In the same direction, Troquard et al. [29] proposed the repair of DL ontologies by weakening axioms using refinement operators. Building on this study, Baader et al. [2] devised the theory of _gentle repairs_, which also aims at keeping most of the information within the ontology upon repair. In fact, gentle repairs are closely related to pseudo-contractions [18]. Other notable works in Belief Change in which the body of knowledge is represented in a finite way include the formalisation of revision due to Katsuno and Mendelzon [17] and the base-generated operations by Hansson [12]. In the former, Katsuno and Mendelzon [17] formalise traditional belief revision operations using a single formula to represent the whole belief set.
This is possible because they only consider finitary propositional languages. Hansson provides a characterisation of belief change operations over finite bases but restricted for logics which satisfy all the AGM-assumptions (such as propositional classical logic). Guerra and Wassermann [9] develop operations for rational change where an agent’s knowledge or behaviour is given by a Kripke model. They also provide two characterisations with AGM-style postulates. ## 5 Conclusion and Future Work In this work, we have introduced a new kind of belief change operation: belief change via models. In our approach, an agent is confronted with a new piece of information in the format of a finite model, and it is compelled to modify its current epistemic state, represented as a single finite formula, either incorporating the new model, called model expansion; or removing it, called model contraction. The price for such finite representation is that the single input model cannot be removed or added alone, and some other models must be added or removed as well. As future work, we will investigate model change operations in other DLs, still taking into account finite representability. We will also explore the effects of relaxing some constraints on Belief Base operations, allowing us to rewrite axioms with different levels of preservation in the spirit of Pseudo-Contractions, Gentle Repairs, and Axiom Weakening. ## Acknowledgements Part of this work has been done in the context of CEDAS (Center for Data Science, University of Bergen, Norway). The first author is supported by the German Research Association (DFG), project number 424710479. Ozaki is supported by the Norwegian Research Council, grant number 316022. ## References * Alchourrón et al. [1985] Carlos E. Alchourrón, Peter Gärdenfors, and David Makinson. On the Logic of Theory Change: Partial Meet Contraction and Revision Functions. _Journal of Symbolic Logic_ , 50(2):510–530, 1985\. * Baader et al. [2018] Franz Baader, Francesco Kriegel, Adrian Nuradiansyah, and Rafael Peñaloza. Making Repairs in Description Logics More Gentle. In _Proceedings of the 16th International Conference on Principles of Knowledge Representation and Reasoning, KR 2018_. AAAI Press, 2018. * Clarke et al. [1986] Edmund M. Clarke, E. Allen Emerson, and Aravinda P. Sistla. Automatic verification of finite-state concurrent systems using temporal logic specifications. _ACM Transactions on Programming Languages and Systems_ , 8(2):244–263, apr 1986. doi: 10.1145/5397.5399. * De Raedt [1997] Luc De Raedt. Logical settings for concept-learning. _Artificial Intelligence_ , 95(1):187–201, aug 1997. doi: 10.1016/s0004-3702(97)00041-6. * Dixon and Wobcke [1993] Simon Dixon and Wayne Wobcke. The Implementation of a First-Order Logic AGM Belief Revision System. In _Proceedings of the 5th International Conference on Tools with Artificial Intelligence, ICTAI 1993_ , pages 40–47. IEEE Computer Society, 1993. doi: 10.1109/TAI.1993.633934. * Flouris [2006] Giorgos Flouris. _On Belief Change and Ontology Evolution_. PhD thesis, University of Crete, 2006. * Flouris et al. [2005] Giorgos Flouris, Dimitris Plexousakis, and Grigoris Antoniou. On Applying the AGM Theory to DLs and OWL. In _The Semantic Web – ISWC 2005_ , pages 216–231. Springer Berlin Heidelberg, 2005. doi: 10.1007/11574620_18. * Gabbay [2003] Dov Gabbay. _Many-dimensional modal logics : theory and applications_. Elsevier North Holland, Amsterdam Boston, 2003. ISBN 0444508260. * Guerra and Wassermann [2019] Paulo T. 
Guerra and Renata Wassermann. Two AGM-style characterizations of model repair. _Ann. Math. Artif. Intell._ , 87(3):233–257, 2019\. doi: 10.1007/s10472-019-09656-4. * Hansson [1993] Sven Ove Hansson. Changes of disjunctively closed bases. _Journal of Logic, Language and Information_ , 2(4):255–284, oct 1993. doi: 10.1007/bf01181682. * Hansson [1994] Sven Ove Hansson. Taking Belief Bases Seriously. In _Logic and Philosophy of Science in Uppsala: Papers from the 9th International Congress of Logic, Methodology and Philosophy of Science_ , pages 13–28. Springer Netherlands, Dordrecht, 1994. ISBN 978-94-015-8311-4. * Hansson [1996] Sven Ove Hansson. Knowledge-Level Analysis of Belief Base Operations. _Artificial Intelligence_ , 82(1-2):215–235, 1996\. doi: 10.1016/0004-3702(95)00005-4. * Hansson [1999] Sven Ove Hansson. _A Textbook of Belief Dynamics: Theory Change and Database Updating_. Applied Logic Series. Kluwer Academic Publishers, 1999. * Hansson and Wassermann [2002] Sven Ove Hansson and Renata Wassermann. Local Change. _Studia Logica_ , 70(1):49–76, 2002. * Horridge [2011] Matthew Horridge. _Justification based explanation in ontologies_. PhD thesis, University of Manchester, 2011. * Kalyanpur [2006] Aditya Kalyanpur. _Debugging and repair of OWL ontologies_. PhD thesis, University of Maryland, 2006. * Katsuno and Mendelzon [1991] Hirofumi Katsuno and Alberto O. Mendelzon. Propositional knowledge base revision and minimal change. _Artificial Intelligence_ , 52(3):263–294, dec 1991. * Matos et al. [2019] Vinícius Bitencourt Matos, Ricardo Guimarães, Yuri David Santos, and Renata Wassermann. Pseudo-contractions as Gentle Repairs. In _Lecture Notes in Computer Science_ , pages 385–403. Springer International Publishing, 2019. doi: 10.1007/978-3-030-22102-7_18. * Nebel [1991] Bernhard Nebel. Belief Revision and Default Reasoning: Syntax-Based Approaches. In _Proceedings of the 2nd International Conference on Principles of Knowledge Representation and Reasoning, KR 1991_ , pages 417–428. Morgan Kaufmann, 1991. * Ozaki and Peñaloza [2018] Ana Ozaki and Rafael Peñaloza. Consequence-Based Axiom Pinpointing. In _Proceedings of the 12th International Conference on Scalable Uncertainty Management, SUM 2018_ , volume 11142 of _Lecture Notes in Computer Science_ , pages 181–195. Springer, 2018. * Ribeiro et al. [2018] Jandson S. Ribeiro, Abhaya Nayak, and Renata Wassermann. Towards Belief Contraction without Compactness. In _Proceedings of the Sixteenth International Conference on Principles of Knowledge Representation and Reasoning, KR 2018_ , pages 287–296. AAAI Press, 2018. * Ribeiro et al. [2019a] Jandson S. Ribeiro, Abhaya Nayak, and Renata Wassermann. Belief Update without Compactness in Non-finitary Languages. In _Proceedings of the 28th International Joint Conference on Artificial Intelligence, IJCAI 2019_ , pages 1858–1864. ijcai.org, 2019a. * Ribeiro et al. [2019b] Jandson S. Ribeiro, Abhaya Nayak, and Renata Wassermann. Belief Change and Non-Monotonic Reasoning Sans Compactness. In _Proceedings of the 33rd AAAI Conference on Artificial Intelligence, AAAI 2019_ , pages 3019–3026. AAAI Press, 2019b. * Ribeiro [2013] Márcio Moretto Ribeiro. _Belief Revision in Non-Classical Logics_. Springer London, 2013. doi: 10.1007/978-1-4471-4186-0. * Ribeiro and Wassermann [2008] Márcio Moretto Ribeiro and Renata Wassermann. Base revision for ontology debugging. _Journal of Logic and Computation_ , 19(5):721–743, sep 2008. doi: 10.1093/logcom/exn048. 
* Ribeiro and Wassermann [2014] Márcio Moretto Ribeiro and Renata Wassermann. Minimal Change in AGM Revision for Non-Classical Logics. In _Proceedings of the 14th International Conference on Principles of Knowledge Representation and Reasoning, KR 2014_. AAAI Press, 2014. * Santos et al. [2018] Yuri David Santos, Vinícius Bitencourt Matos, Márcio Moretto Ribeiro, and Renata Wassermann. Partial meet pseudo-contractions. _International Journal of Approximate Reasoning_ , 103:11–27, dec 2018. doi: 10.1016/j.ijar.2018.08.006. * Suntisrivaraporn [2009] Boontawee Suntisrivaraporn. _Polynomial time reasoning support for design and maintenance of large-scale biomedical ontologies_. PhD thesis, Dresden University of Technology, Germany, 2009. * Troquard et al. [2018] Nicolas Troquard, Roberto Confalonieri, Pietro Galliani, Rafael Peñaloza, Daniele Porello, and Oliver Kutz. Repairing Ontologies via Axiom Weakening. In _Proceedings of the 22nd AAAI Conference on Artificial Intelligence, AAAI 2018_ , pages 1981–1988. AAAI Press, 2018. ## Appendix A Proofs for Section 3 We have already given the syntax of ${\cal ALC}$-formulae in the main text and we provide the semantics here for the convenience of the reader. The semantics is given by interpretations. As usual, an interpretation $\mathcal{I}$ is a pair $(\Delta^{\mathcal{I}},\cdot^{\mathcal{I}})$ where $\Delta^{\mathcal{I}}$ is a countable non-empty set, called the _domain of individuals_, and $\cdot^{\mathcal{I}}$ is a function mapping each concept name $A\in{\sf N_{C}}$ to a subset $A^{\mathcal{I}}$ of $\Delta^{\mathcal{I}}$, each role name $r\in{\sf N_{R}}$ to a subset $r^{\mathcal{I}}$ of $\Delta^{\mathcal{I}}\times\Delta^{\mathcal{I}}$, and each individual name $a\in{\sf N_{I}}$ to an element $a^{\mathcal{I}}$ of $\Delta^{\mathcal{I}}$. The interpretation of concepts in $\mathcal{I}$ is $\displaystyle\top^{\mathcal{I}}=\Delta^{\mathcal{I}}\quad(\neg C)^{\mathcal{I}}=\Delta^{\mathcal{I}}\setminus C^{\mathcal{I}}\quad(C\sqcap D)^{\mathcal{I}}=C^{\mathcal{I}}\cap D^{\mathcal{I}}$ $\displaystyle(\exists R.C)^{\mathcal{I}}=\\{d\in\Delta^{\mathcal{I}}\mid\exists d^{\prime}\in C^{\mathcal{I}}:(d,d^{\prime})\in R^{\mathcal{I}}\\}.$ The interpretation of formulae is as expected $\displaystyle\mathcal{I}\models\neg(\varphi)\text{ iff not }\mathcal{I}\models\varphi\quad\quad\mathcal{I}\models(\varphi\wedge\psi)\text{ iff }\mathcal{I}\models\varphi\text{ and }\mathcal{I}\models\psi$ $\displaystyle\mathcal{I}\models(C=\top)\text{ iff }C^{\mathcal{I}}=\top^{\mathcal{I}}\quad\mathcal{I}\models C(a)\text{ iff }a^{\mathcal{I}}\in C^{\mathcal{I}}\quad\mathcal{I}\models r(a,b)\text{ iff }(a^{\mathcal{I}},b^{\mathcal{I}})\in r^{\mathcal{I}}.$ Formally, we inductively define the sets ${\sf f}(\varphi)$ and ${\sf c}(\varphi)$ as follows. ###### Definition 13 (Subformulae) Given an ${\cal ALC}$-formula $\varphi$, we have that * • if $\varphi$ is atomic then ${\sf f}(\varphi)\coloneqq\\{\varphi,\neg\varphi\\}$; * • ${\sf f}(\varphi\land\psi)\coloneqq\\{\varphi\land\psi,\neg(\varphi\land\psi)\\}\cup{\sf f}(\varphi)\cup{\sf f}(\psi)$; * • ${\sf f}(\neg(\varphi\land\psi))\coloneqq{\sf f}(\varphi\land\psi)$.
###### Definition 14 (Subconcepts) Given an ${\cal ALC}$-formula $\varphi$, ${\sf c}(\varphi)$ is the minimal set satisfying the following conditions: * • if $(C=\top)\in{\sf f}(\varphi)$ then $C\in{\sf c}(\varphi)$; * • if $C(a)\in{\sf f}(\varphi)$ then $C\in{\sf c}(\varphi)$; * • if $C\sqcap D\in{\sf c}(\varphi)$ then $C,D\in{\sf c}(\varphi)$; * • if $\exists r.C\in{\sf c}(\varphi)$ then $C\in{\sf c}(\varphi)$; * • $\neg C\in{\sf c}(\varphi)$ iff $C\in{\sf c}(\varphi)$. ###### Definition 15 Let $(T,o,{\mathbf{f}})$ be a model candidate for $\varphi$. Then, the interpretation $\mathcal{I}_{(T,o,{\mathbf{f}})}$ is defined as: * • $\Delta^{\mathcal{I}}\coloneqq T\cup{\sf ind}(\varphi)$; * • $a^{\mathcal{I}}\coloneqq a$, for all $a\in{\sf ind}(\varphi)$; * • $A^{\mathcal{I}}\coloneqq\\{{\mathbf{c}}\in T\mid A\in{\mathbf{c}}\\}\cup\\{a\in{\sf ind}(\varphi)\mid A\in o(a)\\}$; * • $({\mathbf{c}},{\mathbf{c}}^{\prime})\in r^{\mathcal{I}}$ iff $\\{\neg C\mid\neg\exists r.C\in{\mathbf{c}}\\}\subseteq{\mathbf{c}}^{\prime}$, for ${\mathbf{c}},{\mathbf{c}}^{\prime}\in T$; * • $(a,b)\in r^{\mathcal{I}}$ iff $r(a,b)\in{\mathbf{f}}$, for $a,b\in{\sf ind}(\varphi)$; * • $(a,{\mathbf{c}})\in r^{\mathcal{I}}$ iff $\\{\neg C\mid\neg\exists r.C\in o(a)\\}\subseteq{\mathbf{c}}$, for $a\in{\sf ind}(\varphi)$ and ${\mathbf{c}}\in T$. ###### Example 4 Let $\varphi=(C=\top)\land\neg(C(a)\land\neg(r(a,b)))$. Then, we have: $\displaystyle{\sf f}(\varphi)=$ $\displaystyle\\{\varphi,\neg\varphi,C=\top,C\neq\top,\neg(C(a)\land(\neg r(a,b))),$ $\displaystyle\qquad C(a)\land(\neg r(a,b)),C(a),\neg(C(a)),\neg(r(a,b)),r(a,b)\\}.$ In any quasimodel $(T,o,{\mathbf{f}})$ for $\varphi$, we have that $\varphi\in{\mathbf{f}}$. However, this also implies that $(C=\top),\neg(C(a)\land(\neg r(a,b)))\in{\mathbf{f}}$. Consequently $(C(a)\land(\neg r(a,b)))\not\in{\mathbf{f}}$ and thus, $C(a)\not\in{\mathbf{f}}$ or $\neg r(a,b)\not\in{\mathbf{f}}$.
Hence, there are only three possible formula types for ${\mathbf{f}}$: ${\mathbf{f}}=\begin{cases}\\{\varphi,C=\top,\neg(C(a)\land(\neg r(a,b))),C(a),r(a,b)\\}&\text{ or }\\\ \\{\varphi,C=\top,\neg(C(a)\land(\neg r(a,b))),\neg(C(a)),r(a,b)\\}&\text{ or }\\\ \\{\varphi,C=\top,\neg(C(a)\land(\neg r(a,b))),\neg(C(a)),\neg(r(a,b))\\}\end{cases}$ Assuming that for each of these possible formula types there is at least one quasimodel of ${\mathbf{f}}$, we get that: $\displaystyle\varphi^{\dagger}\equiv$ $\displaystyle\left((C=\top)\land C(a)\land r(a,b)\right)\lor$ $\displaystyle\left((C=\top)\land\neg(C(a))\land r(a,b)\right)\lor$ $\displaystyle\left((C=\top)\land\neg(C(a))\land\neg(r(a,b))\right)$ It is easy to check that the formula above is equivalent to $\varphi$: indeed, $\varphi\equiv(C=\top)\land(\neg C(a)\lor r(a,b))$, and the three disjuncts above enumerate exactly the truth-value combinations of $C(a)$ and $r(a,b)$ that satisfy $\neg C(a)\lor r(a,b)$. ###### Lemma 2 Let $\varphi,\phi$ be ${\cal ALC}$-formulae. If $\varphi\in{\sf f}(\phi)$ then ${\sf f}(\varphi)\subseteq{\sf f}(\phi)$. ###### Proof The proof follows by induction on the structure of $\phi$. Base: $\phi$ is atomic. Then, by construction ${\sf f}(\phi)=\\{\phi,\neg\phi\\}$. Thus, if $\varphi\in{\sf f}(\phi)$ then $\varphi=\phi$ or $\varphi=\neg\phi$. In either case, ${\sf f}(\varphi)=\\{\varphi,\neg\varphi\\}$ which implies that ${\sf f}(\varphi)=\\{\phi,\neg\phi\\}$. Thus, ${\sf f}(\varphi)\subseteq{\sf f}(\phi)$. In the following, assume that $\phi$ is not atomic. Induction Hypothesis: by construction $\phi$ is defined as the conjunction of two formulae $\psi$ and $\psi^{\prime}$ or the negation of such a conjunction, that is, $\phi=\psi\land\psi^{\prime}$ or $\phi=\neg(\psi\land\psi^{\prime})$. Let us assume that for all $\beta\in\\{\psi,\psi^{\prime}\\}$, if $\varphi\in{\sf f}(\beta)$ then ${\sf f}(\varphi)\subseteq{\sf f}(\beta)$. Induction step: consider the cases (i) $\phi=\psi\land\psi^{\prime}$ and (ii) $\phi=\neg(\psi\land\psi^{\prime})$. * (i) $\phi=\psi\land\psi^{\prime}$. By construction ${\sf f}(\phi)={\sf f}(\psi\land\psi^{\prime})=\\{\psi\land\psi^{\prime},\neg(\psi\land\psi^{\prime})\\}\cup{\sf f}(\psi)\cup{\sf f}(\psi^{\prime}).$ (1) Thus, (a) $\varphi\in\\{\psi\land\psi^{\prime},\neg(\psi\land\psi^{\prime})\\}$ or (b) $\varphi\in{\sf f}(\psi)$ or (c) $\varphi\in{\sf f}(\psi^{\prime})$. * (a) $\varphi\in\\{\psi\land\psi^{\prime},\neg(\psi\land\psi^{\prime})\\}$. Thus, either $\varphi=\psi\land\psi^{\prime}$ or $\varphi=\neg(\psi\land\psi^{\prime})$. For $\varphi=\psi\land\psi^{\prime}$, we get that ${\sf f}(\varphi)={\sf f}(\psi\land\psi^{\prime})={\sf f}(\phi)$ which means that ${\sf f}(\varphi)\subseteq{\sf f}(\phi)$.
For $\varphi=\neg(\psi\land\psi^{\prime})$, we get that ${\sf f}(\varphi)={\sf f}(\neg(\psi\land\psi^{\prime}))$. By construction, ${\sf f}(\neg(\psi\land\psi^{\prime}))={\sf f}(\psi\land\psi^{\prime})$. Therefore, ${\sf f}(\varphi)={\sf f}(\psi\land\psi^{\prime})={\sf f}(\phi)$ and so ${\sf f}(\varphi)\subseteq{\sf f}(\phi)$. * (b) $\varphi\in{\sf f}(\psi)$. By the inductive hypothesis, ${\sf f}(\varphi)\subseteq{\sf f}(\psi)$. From (1), ${\sf f}(\psi)\subseteq{\sf f}(\phi)$. So, ${\sf f}(\varphi)\subseteq{\sf f}(\phi)$. * (c) $\varphi\in{\sf f}(\psi^{\prime})$. Analogous to item (b). * (ii) $\phi=\neg(\psi\land\psi^{\prime})$. By construction, ${\sf f}(\neg(\psi\land\psi^{\prime}))={\sf f}(\psi\land\psi^{\prime})$. So, ${\sf f}(\phi)={\sf f}(\psi\land\psi^{\prime})=\\{\psi\land\psi^{\prime},\neg(\psi\land\psi^{\prime})\\}\cup{\sf f}(\psi)\cup{\sf f}(\psi^{\prime})$. Proof proceeds as in item (i). ###### Lemma 3 For every ${\cal ALC}$-formula $\phi$ and formula type ${\mathbf{f}}$ for $\phi$, if $\phi,\varphi\in{\mathbf{f}}$ then ${\mathbf{f}}\cap{\sf f}(\varphi)$ is a formula type for $\varphi$. ###### Proof Let ${\mathbf{f}}_{\phi}$ be a fixed but arbitrary formula type for $\phi$ with $\phi\in{\mathbf{f}}_{\phi}$. We will show that ${\mathbf{f}}\coloneqq{\mathbf{f}}_{\phi}\cap{\sf f}(\varphi)$ is a formula type for $\varphi$. Suppose for contradiction that ${\mathbf{f}}$ is not a formula type for $\varphi$. Thus, as ${\mathbf{f}}\subseteq{\sf f}(\varphi)$, either condition (1) or (2) of the formula type definition is violated: 1. 1. There are formulae $\psi,\neg\psi\in{\sf f}(\varphi)$ such that either (a) $\psi\not\in{\mathbf{f}}$ and $\neg\psi\not\in{\mathbf{f}}$, or (b) $\psi,\neg\psi\in{\mathbf{f}}$. 1. (a) $\psi\not\in{\mathbf{f}}$ and $\neg\psi\not\in{\mathbf{f}}$. By hypothesis, $\varphi\in{\mathbf{f}}_{\phi}$. Thus, as ${\mathbf{f}}={\mathbf{f}}_{\phi}\cap{\sf f}(\varphi)$, and by construction $\varphi\in{\sf f}(\varphi)$, we get that $\varphi\in{\mathbf{f}}$. Since ${\mathbf{f}}_{\phi}$ is a formula type, we have that for all $\psi^{\prime}\in{\sf f}(\phi)$, $\psi^{\prime}\in{\mathbf{f}}_{\phi}$ iff $\neg\psi^{\prime}\not\in{\mathbf{f}}_{\phi}$. As $\varphi\in{\mathbf{f}}_{\phi}\subseteq{\sf f}(\phi)$, it follows from Lemma 2 that ${\sf f}(\varphi)\subseteq{\sf f}(\phi)$. Therefore, for all $\psi^{\prime}\in{\sf f}(\varphi)$, $\psi^{\prime}\in{\mathbf{f}}_{\phi}$ iff $\neg\psi^{\prime}\not\in{\mathbf{f}}_{\phi}$. By hypothesis, $\neg\psi,\psi\in{\sf f}(\varphi)$ which implies from above that either: $\displaystyle\psi\in{\mathbf{f}}_{\phi}\mbox{ and }\neg\psi\not\in{\mathbf{f}}_{\phi},\mbox{ or }\psi\not\in{\mathbf{f}}_{\phi}\mbox{ and }\neg\psi\in{\mathbf{f}}_{\phi}.$ (2) By hypothesis, $\neg\psi,\psi\in{\sf f}(\varphi)$ but $\neg\psi,\psi\not\in{\mathbf{f}}$. Thus, as ${\mathbf{f}}={\mathbf{f}}_{\phi}\cap{\sf f}(\varphi)$, we get $\neg\psi,\psi\not\in{\mathbf{f}}_{\phi}$, contradicting (2). 2. (b) $\psi,\neg\psi\in{\mathbf{f}}$. By hypothesis, ${\mathbf{f}}_{\phi}$ is a formula type which implies that for all $\psi^{\prime}\in{\mathbf{f}}_{\phi}$, $\psi^{\prime},\neg\psi^{\prime}\not\in{\mathbf{f}}_{\phi}$. Therefore, as ${\mathbf{f}}\subseteq{\mathbf{f}}_{\phi}$, we get that $\psi,\neg\psi\not\in{\mathbf{f}}$, a contradiction. 2. 2. Let $\psi\land\psi^{\prime}\in{\sf f}(\varphi)$. 
We will show that $\psi\land\psi^{\prime}\in{\mathbf{f}}$ iff $\\{\psi,\psi^{\prime}\\}\subseteq{\mathbf{f}}$, which contradicts the hypothesis that condition (2) from the formula type definition is violated. We split the proof into two cases: either (a) $\psi\land\psi^{\prime}\in{\mathbf{f}}$ or (b) $\psi\land\psi^{\prime}\not\in{\mathbf{f}}$. If $\psi\land\psi^{\prime}\in{\mathbf{f}}$, as ${\mathbf{f}}={\mathbf{f}}_{\phi}\cap{\sf f}(\varphi)$, we get that $\psi\land\psi^{\prime}\in{\mathbf{f}}_{\phi}$. Since ${\mathbf{f}}_{\phi}$ is a formula type, we have that $\\{\psi,\psi^{\prime}\\}\subseteq{\mathbf{f}}_{\phi}$. By definition of ${\sf f}(\varphi)$, if $\psi\land\psi^{\prime}\in{\sf f}(\varphi)$ then $\\{\psi,\psi^{\prime}\\}\subseteq{\sf f}(\varphi)$. Hence, $\\{\psi,\psi^{\prime}\\}\subseteq{\mathbf{f}}={\mathbf{f}}_{\phi}\cap{\sf f}(\varphi)$. Otherwise, $\psi\land\psi^{\prime}\not\in{\mathbf{f}}$. As ${\mathbf{f}}={\mathbf{f}}_{\phi}\cap{\sf f}(\varphi)$ and $\psi\land\psi^{\prime}\in{\sf f}(\varphi)$, we get that $\psi\land\psi^{\prime}\not\in{\mathbf{f}}_{\phi}$. Thus, as ${\mathbf{f}}_{\phi}$ is a formula type, we get that $\\{\psi,\psi^{\prime}\\}\not\subseteq{\mathbf{f}}_{\phi}$. Therefore, as ${\mathbf{f}}\subseteq{\mathbf{f}}_{\phi}$, we get that $\\{\psi,\psi^{\prime}\\}\not\subseteq{\mathbf{f}}$. From (a) and (b) we conclude that $\psi\land\psi^{\prime}\in{\mathbf{f}}$ iff $\\{\psi,\psi^{\prime}\\}\subseteq{\mathbf{f}}$. But this contradicts the hypothesis that condition (2) from the formula type definition is violated. Therefore, we conclude that ${\mathbf{f}}$ is a formula type. ###### Lemma 4 For every ${\cal ALC}$-formula $\varphi$, ${\sf f}(\varphi)={\sf f}(\neg\varphi)$ (we silently remove double negation and treat $\neg\neg\phi$ as equal to $\phi$). ###### Proof By construction $\varphi$ is a subformula of $\neg\varphi$. We can see that ${\sf f}(\neg\varphi)={\sf f}(\varphi)\cup\\{\neg\varphi\\}$. Since ${\sf f}(\varphi)$ is closed under single negation and, by construction, $\varphi\in{\sf f}(\varphi)$, we have that $\neg\varphi\in{\sf f}(\varphi)$. Thus, ${\sf f}(\varphi)={\sf f}(\neg\varphi)$. ###### Definition 16 Let $\varphi$ be an ${\cal ALC}$-formula. The set of formula types for $\varphi$ that contain $\varphi$ is given by the set $\tau(\varphi)=\\{{\mathbf{f}}\subseteq{\sf f}(\varphi)\mid{\mathbf{f}}\mbox{ is a formula type for }\varphi\mbox{ and }\varphi\in{\mathbf{f}}\\}.$ ###### Lemma 5 For every ${\cal ALC}$-formula $\phi$ and formula type ${\mathbf{f}}$ for $\phi$, if $\phi\in{\mathbf{f}}$ and $\varphi\in{\sf f}(\phi)$ then ${\mathbf{f}}\cap{\sf f}(\varphi)\in\tau(\varphi)\cup\tau(\neg\varphi)$. ###### Proof Let ${\mathbf{f}}_{\phi}$ be a fixed but arbitrary formula type for $\phi$ with $\phi\in{\mathbf{f}}_{\phi}$. As ${\mathbf{f}}_{\phi}$ is a formula type (for $\phi$) and $\varphi\in{\sf f}(\phi)$, either (i) $\varphi\in{\mathbf{f}}_{\phi}$ or (ii) $\neg\varphi\in{\mathbf{f}}_{\phi}$: 1. (i) $\varphi\in{\mathbf{f}}_{\phi}$. Thus, by Lemma 3, we have that ${\mathbf{f}}_{\phi}\cap{\sf f}(\varphi)$ is a formula type for $\varphi$. Also, $\varphi\in{\mathbf{f}}_{\phi}\cap{\sf f}(\varphi)$. Therefore, ${\mathbf{f}}_{\phi}\cap{\sf f}(\varphi)\in\tau(\varphi)$, which means that ${\mathbf{f}}_{\phi}\cap{\sf f}(\varphi)\in\tau(\varphi)\cup\tau(\neg\varphi)$. 2. (ii) $\neg\varphi\in{\mathbf{f}}_{\phi}$. Thus, by Lemma 3, we have that ${\mathbf{f}}_{\phi}\cap{\sf f}(\neg\varphi)$ is a formula type for $\neg\varphi$. 
Also, $\neg\varphi\in{\mathbf{f}}_{\phi}\cap{\sf f}(\neg\varphi)$. Therefore, ${\mathbf{f}}_{\phi}\cap{\sf f}(\neg\varphi)\in\tau(\neg\varphi)$ which means that ${\mathbf{f}}_{\phi}\cap{\sf f}(\neg\varphi)\in\tau(\varphi)\cup\tau(\neg\varphi)$. By Lemma 4, we have that ${\sf f}(\varphi)={\sf f}(\neg\varphi)$ which implies that ${\mathbf{f}}_{\phi}\cap{\sf f}(\neg\varphi)={\mathbf{f}}_{\phi}\cap{\sf f}(\varphi)$. Therefore, ${\mathbf{f}}_{\phi}\cap{\sf f}(\varphi)\in\tau(\varphi)\cup\tau(\neg\varphi)$. ###### Lemma 6 For every ${\cal ALC}$-formula $\varphi$, ${\mathbf{f}}\in\tau(\varphi)$ iff ${\mathbf{f}}$ is a formula type for $\varphi$ and 1. 1. if $\varphi$ is atomic then ${\mathbf{f}}=\\{\varphi\\}$; 2. 2. if $\varphi=\psi\land\psi^{\prime}$ then ${\mathbf{f}}=\\{\psi\land\psi^{\prime}\\}\cup{\mathbf{f}}_{\psi}\cup{\mathbf{f}}_{\psi^{\prime}}$, for some ${\mathbf{f}}_{\psi}\in\tau(\psi)$ and ${\mathbf{f}}_{\psi^{\prime}}\in\tau(\psi^{\prime})$; 3. 3. if $\varphi=\neg(\psi\land\psi^{\prime})$ then ${\mathbf{f}}=\\{\neg(\psi\land\psi^{\prime})\\}\cup{\mathbf{f}}_{\psi}\cup{\mathbf{f}}_{\psi^{\prime}}$, for some ${\mathbf{f}}_{\psi}\in\tau(\psi)\cup\tau(\neg\psi),{\mathbf{f}}_{\psi^{\prime}}\in\tau(\psi^{\prime})\cup\tau(\neg\psi^{\prime})$ such that either ${\mathbf{f}}_{\psi}\in\tau(\neg\psi)$ or ${\mathbf{f}}_{\psi^{\prime}}\in\tau(\neg\psi^{\prime})$. ###### Proof The direction “$\Leftarrow$” is trivial, so we focus only on the “$\Rightarrow$” direction. Let ${\mathbf{f}}\in\tau(\varphi)$. Thus, $\varphi\in{\mathbf{f}}$ and ${\mathbf{f}}$ is a formula type for $\varphi$. By construction, (I) either $\varphi$ is atomic or (II) $\varphi=\psi\land\psi^{\prime}$ or (III) $\varphi=\neg(\psi\land\psi^{\prime})$: 1. (I) $\varphi$ is atomic. Thus, by construction ${\mathbf{f}}=\\{\varphi\\}$ or ${\mathbf{f}}=\\{\neg\varphi\\}$. Thus, as $\varphi\in{\mathbf{f}}$, we get ${\mathbf{f}}=\\{\varphi\\}$. 2. (II) $\varphi=\psi\land\psi^{\prime}$. As $\varphi\in{\mathbf{f}}$, we get that $\psi\land\psi^{\prime}\in{\mathbf{f}}$. Moreover, as ${\mathbf{f}}$ is a formula type for $\varphi$ and $\psi\land\psi^{\prime}\in{\mathbf{f}}$, it follows that $\psi,\psi^{\prime}\in{\mathbf{f}}$. Let $\displaystyle{\mathbf{f}}_{\psi}\coloneqq{\mathbf{f}}\cap{\sf f}(\psi)$ $\displaystyle\mbox{ and }{\mathbf{f}}_{\psi^{\prime}}\coloneqq{\mathbf{f}}\cap{\sf f}(\psi^{\prime}).$ As $\psi,\psi^{\prime}\in{\mathbf{f}}$ and ${\mathbf{f}}$ is a formula type for $\varphi=\psi\land\psi^{\prime}$, by Lemma 3, ${\mathbf{f}}_{\psi}={\mathbf{f}}\cap{\sf f}(\psi)$ is a formula type for $\psi$ and ${\mathbf{f}}_{\psi^{\prime}}={\mathbf{f}}\cap{\sf f}(\psi^{\prime})$ is a formula type for $\psi^{\prime}$. We have that $\psi\in{\sf f}(\psi)$ and $\psi^{\prime}\in{\sf f}(\psi^{\prime})$ which means that $\psi\in{\mathbf{f}}_{\psi}$ and $\psi^{\prime}\in{\mathbf{f}}_{\psi^{\prime}}$. Thus, ${\mathbf{f}}_{\psi}\in\tau(\psi)$ and ${\mathbf{f}}_{\psi^{\prime}}\in\tau(\psi^{\prime})$. We still need to show that ${\mathbf{f}}=\\{\psi\land\psi^{\prime}\\}\cup{\mathbf{f}}_{\psi}\cup{\mathbf{f}}_{\psi^{\prime}}$. For this, we will show that (i) ${\mathbf{f}}\subseteq\\{\psi\land\psi^{\prime}\\}\cup{\mathbf{f}}_{\psi}\cup{\mathbf{f}}_{\psi^{\prime}}$ and (ii) $\\{\psi\land\psi^{\prime}\\}\cup{\mathbf{f}}_{\psi}\cup{\mathbf{f}}_{\psi^{\prime}}\subseteq{\mathbf{f}}$. The case (ii) is trivial, so we focus only on case (i). Let $\phi\in{\mathbf{f}}$. 
As ${\mathbf{f}}$ is a formula type for $\varphi=\psi\land\psi^{\prime}$, we get that $\phi\in{\mathbf{f}}\subseteq{\sf f}(\psi\land\psi^{\prime})=\\{\psi\land\psi^{\prime},\neg(\psi\land\psi^{\prime})\\}\cup{\sf f}(\psi)\cup{\sf f}(\psi^{\prime}).$ Therefore, (a) $\phi\in\\{\psi\land\psi^{\prime},\neg(\psi\land\psi^{\prime})\\}$ or (b) $\phi\in{\sf f}(\psi)$ or (c) $\phi\in{\sf f}(\psi^{\prime})$. * (a) $\phi\in\\{\psi\land\psi^{\prime},\neg(\psi\land\psi^{\prime})\\}$. As ${\mathbf{f}}$ is a formula type and $\varphi=\psi\land\psi^{\prime}\in{\mathbf{f}}$, we get that $\neg(\psi\land\psi^{\prime})\not\in{\mathbf{f}}$. Therefore, as $\phi\in{\mathbf{f}}$, we have that $\phi\neq\neg(\psi\land\psi^{\prime})$. Hence, $\phi=\psi\land\psi^{\prime}$, which implies that $\phi\in\\{\psi\land\psi^{\prime}\\}\cup{\mathbf{f}}_{\psi}\cup{\mathbf{f}}_{\psi^{\prime}}$. * (b) $\phi\in{\sf f}(\psi)$. Thus, as $\phi\in{\mathbf{f}}$, we get that $\phi\in{\mathbf{f}}_{\psi}={\mathbf{f}}\cap{\sf f}(\psi)$ which implies that $\phi\in\\{\psi\land\psi^{\prime}\\}\cup{\mathbf{f}}_{\psi}\cup{\mathbf{f}}_{\psi^{\prime}}$. * (c) $\phi\in{\sf f}(\psi^{\prime})$. Thus, as $\phi\in{\mathbf{f}}$, we get that $\phi\in{\mathbf{f}}_{\psi^{\prime}}={\mathbf{f}}\cap{\sf f}(\psi^{\prime})$ which implies that $\phi\in\\{\psi\land\psi^{\prime}\\}\cup{\mathbf{f}}_{\psi}\cup{\mathbf{f}}_{\psi^{\prime}}$. Thus, $\phi\in\\{\psi\land\psi^{\prime}\\}\cup{\mathbf{f}}_{\psi}\cup{\mathbf{f}}_{\psi^{\prime}}$. 3. (III) $\varphi=\neg(\psi\land\psi^{\prime})$. As $\varphi\in{\mathbf{f}}$, we get that $\neg(\psi\land\psi^{\prime})\in{\mathbf{f}}$. Let $\displaystyle{\mathbf{f}}_{\psi}\coloneqq{\mathbf{f}}\cap{\sf f}(\neg\psi)$ $\displaystyle\mbox{ and }{\mathbf{f}}_{\psi^{\prime}}\coloneqq{\mathbf{f}}\cap{\sf f}(\neg\psi^{\prime}).$ As $\neg\psi,\neg\psi^{\prime}\in{\sf f}(\varphi=\neg(\psi\land\psi^{\prime}))$, by Lemma 5, we have that ${\mathbf{f}}_{\psi}\in\tau(\psi)\cup\tau(\neg\psi)\mbox{ and }{\mathbf{f}}_{\psi^{\prime}}\in\tau(\psi^{\prime})\cup\tau(\neg\psi^{\prime}).$ Moreover, as ${\mathbf{f}}$ is a formula type for $\varphi$ and $\varphi=\neg(\psi\land\psi^{\prime})\in{\mathbf{f}}$, it follows that $\psi\land\psi^{\prime}\not\in{\mathbf{f}}$. Therefore, $\\{\psi,\psi^{\prime}\\}\not\subseteq{\mathbf{f}}$. Thus, either $\psi\not\in{\mathbf{f}}$ or $\psi^{\prime}\not\in{\mathbf{f}}$. Thus, as ${\mathbf{f}}$ is a formula type, either (i) $\neg\psi\in{\mathbf{f}}$ or (ii) $\neg\psi^{\prime}\in{\mathbf{f}}$. * (i) $\neg\psi\in{\mathbf{f}}$. Thus, as ${\mathbf{f}}$ is a formula type for $\varphi=\neg(\psi\land\psi^{\prime})$, by Lemma 3, ${\mathbf{f}}_{\psi}={\mathbf{f}}\cap{\sf f}(\neg\psi)$ is a formula type for $\neg\psi$. We have that $\neg\psi\in{\sf f}(\neg\psi)$. So $\neg\psi\in{\mathbf{f}}_{\psi}$. Thus, ${\mathbf{f}}_{\psi}\in\tau(\neg\psi)$. * (ii) $\neg\psi^{\prime}\in{\mathbf{f}}$. Analogously to item (i), we get that ${\mathbf{f}}_{\psi^{\prime}}\in\tau(\neg\psi^{\prime})$. Thus, ${\mathbf{f}}_{\psi}\in\tau(\neg\psi)\mbox{ or }{\mathbf{f}}_{\psi^{\prime}}\in\tau(\neg\psi^{\prime}).$ We still need to show that ${\mathbf{f}}=\\{\neg(\psi\land\psi^{\prime})\\}\cup{\mathbf{f}}_{\psi}\cup{\mathbf{f}}_{\psi^{\prime}}$. For this we need to show that (i) ${\mathbf{f}}\subseteq\\{\neg(\psi\land\psi^{\prime})\\}\cup{\mathbf{f}}_{\psi}\cup{\mathbf{f}}_{\psi^{\prime}}$ and (ii) $\\{\neg(\psi\land\psi^{\prime})\\}\cup{\mathbf{f}}_{\psi}\cup{\mathbf{f}}_{\psi^{\prime}}\subseteq{\mathbf{f}}$. The case (ii) is trivial. 
So we focus only on case (i). Let $\phi\in{\mathbf{f}}$. As ${\mathbf{f}}$ is a formula type for $\varphi=\neg(\psi\land\psi^{\prime})$, we get that $\displaystyle\phi\in{\mathbf{f}}\subseteq{\sf f}(\neg(\psi\land\psi^{\prime}))$ $\displaystyle={\sf f}(\psi\land\psi^{\prime})$ $\displaystyle=\\{\psi\land\psi^{\prime},\neg(\psi\land\psi^{\prime})\\}\cup{\sf f}(\psi)\cup{\sf f}(\psi^{\prime}).$ Therefore, (a) $\phi\in\\{\psi\land\psi^{\prime},\neg(\psi\land\psi^{\prime})\\}$ or (b) $\phi\in{\sf f}(\psi)$ or (c) $\phi\in{\sf f}(\psi^{\prime})$. * (a) $\phi\in\\{\psi\land\psi^{\prime},\neg(\psi\land\psi^{\prime})\\}$. As ${\mathbf{f}}$ is a formula type and $\varphi=\neg(\psi\land\psi^{\prime})\in{\mathbf{f}}$, we get that $(\psi\land\psi^{\prime})\not\in{\mathbf{f}}$. Therefore, as $\phi\in{\mathbf{f}}$, we have that $\phi\neq(\psi\land\psi^{\prime})$. Therefore, $\phi=\neg(\psi\land\psi^{\prime})$, which implies that $\phi\in\\{\neg(\psi\land\psi^{\prime})\\}\cup{\mathbf{f}}_{\psi}\cup{\mathbf{f}}_{\psi^{\prime}}$. * (b) $\phi\in{\sf f}(\psi)$. By Lemma 4, we get ${\sf f}(\psi)={\sf f}(\neg\psi)$. Therefore, $\phi\in{\sf f}(\neg\psi)$. Thus, as $\phi\in{\mathbf{f}}$, we get that $\phi\in{\mathbf{f}}_{\psi}={\mathbf{f}}\cap{\sf f}(\neg\psi)$ which implies that $\phi\in\\{\neg(\psi\land\psi^{\prime})\\}\cup{\mathbf{f}}_{\psi}\cup{\mathbf{f}}_{\psi^{\prime}}$ * (c) $\phi\in{\sf f}(\psi^{\prime})$. Analogously to item (b), we get $\phi\in\\{\neg(\psi\land\psi^{\prime})\\}\cup{\mathbf{f}}_{\psi}\cup{\mathbf{f}}_{\psi^{\prime}}$. Thus, $\phi\in\\{\neg(\psi\land\psi^{\prime})\\}\cup{\mathbf{f}}_{\psi}\cup{\mathbf{f}}_{\psi^{\prime}}$. ###### Definition 17 (Formula degree) The degree of an ${\cal ALC}$-formula $\phi$, denoted $degree(\phi)$, is * • $1$ if $\phi$ is an atomic ${\cal ALC}$-formula; * • $degree(\varphi)+1$ if $\phi=\neg\varphi$; and * • $degree(\varphi)+degree(\psi)$ if $\phi=\varphi\land\psi$. ###### Lemma 7 ([8]) If $\mathcal{I}\models\varphi$ then $\operatorname{qm}({\varphi},{\mathcal{I}})$ is a quasimodel for $\varphi$. To show Theorem 3.2, we use Lemma 8. ###### Lemma 8 () Let $\varphi$ be an ${\cal ALC}$-formula. If ${\mathbf{f}}\in\tau(\varphi)$ then $\bigg{(}\bigwedge lit({\mathbf{f}})\bigg{)}\models\varphi.$ ###### Proof The proof follows by induction in the degree of $\phi$. Base: $degree(\phi)=1$. Then $\phi$ is atomic. This implies from Lemma 6 that ${\mathbf{f}}=\\{\phi\\}$. Thus, $\bigwedge lit({\mathbf{f}})=\phi$. For ${\cal ALC}$, this means that $\bigwedge lit({\mathbf{f}})\models\phi$. Induction Hypothesis: For every formula $\varphi$, and formula type ${\mathbf{f}}_{\varphi}$ for $\varphi$, if $\varphi\in{\mathbf{f}}_{\varphi}$ and $degree(\varphi)<degree(\phi)$ then $\bigwedge lit({\mathbf{f}}_{\varphi})\models\varphi$. Induction Step: Let $degree(\phi)>1$. By construction, $\phi$ is of the form $\varphi\land\psi$ or $\neg\varphi$, for some ${\cal ALC}$-formulae $\varphi$ and $\psi$: 1. 1. $\phi=\varphi\land\psi$. Thus, from Lemma 6, ${\mathbf{f}}=\\{\varphi\land\psi\\}\cup{\mathbf{f}}_{\varphi}\cup{\mathbf{f}}_{\psi},\mbox{ such that }{\mathbf{f}}_{\varphi}\in\tau(\varphi),{\mathbf{f}}_{\psi}\in\tau(\psi).$ Note that $lit({\mathbf{f}})=lit({\mathbf{f}}_{\varphi})\cup lit({\mathbf{f}}_{\psi})$. 
Therefore, $\bigwedge lit({\mathbf{f}})=\bigg{(}\bigwedge lit({\mathbf{f}}_{\varphi})\bigg{)}\land\bigg{(}\bigwedge lit({\mathbf{f}}_{\psi})\bigg{)}$ By the definition of degree, we get that $degree(\phi)=degree(\varphi\land\psi)=degree(\varphi)+degree(\psi)$ and $1\leq degree(\varphi)$ and $1\leq degree(\psi)$. Therefore, $degree(\varphi)<degree(\phi)$ and $degree(\psi)<degree(\phi)$. By the inductive hypothesis, $\bigwedge lit({\mathbf{f}}_{\varphi})\models\varphi\mbox{ and }\bigwedge lit({\mathbf{f}}_{\psi})\models\psi.$ Therefore, $\bigwedge lit({\mathbf{f}})=\bigwedge lit({\mathbf{f}}_{\varphi})\land\bigwedge lit({\mathbf{f}}_{\psi})\models\varphi\land\psi.$ Thus, as $\phi=\varphi\land\psi$ , we get $\bigwedge lit({\mathbf{f}})\models\phi.$ 2. 2. $\phi=\neg\varphi$. By construction, either: (a) $\varphi$ is atomic, or (b)$\varphi=\psi\land\psi^{\prime}$. 1. (a) $\varphi$ is atomic. We get from Lemma 6 that ${\mathbf{f}}=\\{\neg\varphi\\}$, which implies that $lit({\mathbf{f}})=\\{\neg\varphi\\}$, and analogous to the base case, we get that $\bigwedge lit({\mathbf{f}})\models\neg\varphi$ that is, $\bigwedge lit({\mathbf{f}})\models\phi$. 2. (b) $\varphi=\psi\land\psi^{\prime}$. By Lemma 6, we get that $\displaystyle{\mathbf{f}}=\\{\neg(\psi\land\psi^{\prime})\\}\cup{\mathbf{f}}_{\psi}\cup{\mathbf{f}}_{\psi^{\prime}},$ (3) where ${\mathbf{f}}_{\psi}\in\tau(\psi)\cup\tau(\neg\psi),{\mathbf{f}}_{\psi^{\prime}}\in\tau(\psi^{\prime})\cup\tau(\neg\psi^{\prime})$ such that either ${\mathbf{f}}_{\psi}\in\tau(\neg\psi)\mbox{ or }{\mathbf{f}}_{\psi^{\prime}}\in\tau(\neg\psi^{\prime}).$ 1. i. ${\mathbf{f}}_{\psi}\in\tau(\neg\psi)$. From definition of degree, we get that $degree(\phi)=degree(\neg(\psi\land\psi^{\prime}))=degree(\psi)+degree(\psi^{\prime})+1,$ and $degree(\psi)\geq 1$ and $degree(\psi^{\prime})\geq 1$ and $degree(\neg\psi)=degree(\psi)+1$. Thus, $degree(\phi)=degree(\neg(\psi\land\psi^{\prime}))=degree(\neg\psi)+degree(\psi^{\prime})$. Thus, as $degree(\psi^{\prime})\geq 1$ we get $degree(\neg\psi)<degree(\phi).$ Thus, by the inductive hypothesis, $\bigwedge lit({\mathbf{f}}_{\psi})\models\neg\psi$. Note that for every formula $\beta$, $\neg\psi\models\neg(\psi\land\beta)$. Therefore, for $\beta=\psi^{\prime}$ $\bigwedge lit({\mathbf{f}}_{\psi})\models\neg(\psi\land\psi^{\prime})$ From (3), we get that $\bigwedge lit({\mathbf{f}})=\bigwedge lit({\mathbf{f}}_{\psi})\land\bigwedge lit({\mathbf{f}}_{\psi^{\prime}}).$ Thus, as $\bigwedge lit({\mathbf{f}}_{\psi})\models\neg(\psi\land\psi^{\prime})$, we get that $\bigwedge lit({\mathbf{f}}_{\psi})\land\bigwedge lit({\mathbf{f}}_{\psi^{\prime}})\models\neg(\psi\land\psi^{\prime})$ which implies from above that $\bigwedge lit({\mathbf{f}})\models\neg(\psi\land\psi^{\prime})$ that is, $\bigwedge lit({\mathbf{f}})\models\phi.$ 2. ii. ${\mathbf{f}}_{\psi}\in\tau(\neg\psi^{\prime})$. Analogous to item (i). See 3.4 ###### Proof If $M\not\models\varphi$, then $\operatorname{con}(\varphi,M)=\operatorname{con}^{\prime}(\varphi,M)=\varphi$. Now, suppose that $M\models\varphi$. In this case, we know from Lemma 7 that $(T^{\prime},o^{\prime},{\mathbf{f}}^{\prime})=\operatorname{qm}({\varphi},{M})$ is a quasimodel for $\varphi$. 
From Theorem 3.2, we know that: $\displaystyle\operatorname{con}^{\prime}(\varphi,M)$ $\displaystyle\equiv\varphi^{\dagger}\land\neg(\bigwedge lit({\mathbf{f}}))$ $\displaystyle=\left(\bigvee_{(T,o,{\mathbf{f}})\in{\sf S}(\varphi)}(\bigwedge lit({\mathbf{f}}))\right)\land\neg(\bigwedge lit({\mathbf{f}}))$ For each $(T,o,{\mathbf{f}})$ in ${\sf S}(\varphi)$ either $lit({\mathbf{f}})=lit({\mathbf{f}}^{\prime})$ or $lit({\mathbf{f}})\neq lit({\mathbf{f}}^{\prime})$. If $lit({\mathbf{f}})=lit({\mathbf{f}}^{\prime})$, then: $\bigwedge lit({\mathbf{f}})\land\neg(\bigwedge lit({\mathbf{f}}^{\prime}))\equiv\bot.$ Otherwise, we know that $\bigwedge lit({\mathbf{f}})\land\neg(\bigwedge lit({\mathbf{f}}^{\prime}))\not\equiv\bot.$ Due to the definition of $\operatorname{\mu}$ and Corollary 1 we can conclude that for every ${\mathbf{f}}\in\operatorname{ftypes}({\varphi})$, ${\mathbf{f}}\in\operatorname{\mu}(\varphi,M)$ iff $lit({\mathbf{f}})\neq lit({\mathbf{f}}^{\prime})$. As we can ignore inconsistent formulae in disjunctions, we get that $\operatorname{con}(\varphi,M)\equiv\operatorname{con}^{\prime}(\varphi,M)$. See 3.2 ###### Proof Let $\varphi$ be an ${\cal ALC}$-formula and $\mathcal{I}$ an interpretation. #### $\mathcal{I}\models\varphi\Rightarrow\mathcal{I}\models\varphi^{\dagger}$: First, suppose that $\mathcal{I}\models\varphi$. From Lemma 7 we know that $\operatorname{qm}({\varphi},{\mathcal{I}})=(T,o,{\mathbf{f}})$ is a quasimodel of $\varphi$. Therefore, there is a disjunct $\psi$ of $\varphi^{\dagger}$ which is the conjunction of all atomic formulae in ${\mathbf{f}}$. By Definition 9 $\mathcal{I}\models{\mathbf{f}}$, thus we can conclude that $\mathcal{I}\models\varphi^{\dagger}$. #### $\mathcal{I}\models\varphi^{\dagger}\Rightarrow\mathcal{I}\models\varphi$: Now, assume that $\mathcal{I}\models\varphi^{\dagger}$. This means that there is one disjunct $\psi$ of $\varphi^{\dagger}$ such that $\mathcal{I}\models\psi$. By construction, this disjunct is a conjunction of atomic formulae in the formula type of a quasimodel $(T,o,{\mathbf{f}})$ for $\varphi$. Using Lemma 8 we can conclude that $\mathcal{I}\models{\mathbf{f}}$. As $\varphi\in{\mathbf{f}}$ we get that $\mathcal{I}\models\varphi$. Hence, $\mathcal{I}\models\varphi$ iff $\mathcal{I}\models\varphi^{\dagger}$, i.e., $\varphi\equiv\varphi^{\dagger}$. Corollary 1 is a direct consequence of the definition of a formula type. ###### Corollary 1 Let $(T,o,{\mathbf{f}})$ and $(T^{\prime},o^{\prime},{\mathbf{f}}^{\prime})$ be quasimodels for an ${\cal ALC}$-formula $\varphi$. Then, $lit({\mathbf{f}})=lit({\mathbf{f}}^{\prime})$ iff ${\mathbf{f}}={\mathbf{f}}^{\prime}$. Given ${\cal ALC}$-formulae $\varphi,\psi$, we say that $\psi$ is in the language of the literals of $\varphi$, written $\psi\in{\mathcal{L}_{lit}({\varphi})}$, if $\psi$ is a boolean combination of the atoms in $\varphi$. ###### Lemma 9 Let $M,M^{\prime}$ be models and $\varphi$ an ${\cal ALC}$-formula. Also let $(T,o,{\mathbf{f}})\coloneqq\operatorname{qm}({\varphi},{M})$ and $(T^{\prime},o^{\prime},{\mathbf{f}}^{\prime})\coloneqq\operatorname{qm}({\varphi},{M^{\prime}})$. Then, ${{[{M}]^{\varphi}}}\ =\ {{[{M^{\prime}}]^{\varphi}}}$ iff ${\mathbf{f}}={\mathbf{f}}^{\prime}$. ###### Proof First, assume that ${{[{M}]^{\varphi}}}={{[{M^{\prime}}]^{\varphi}}}$. Then we know that for every $\alpha\in{\mathcal{L}_{lit}({\varphi})}$, $M\models\alpha$ iff $M^{\prime}\models\alpha$. With Corollary 1 we can conclude that ${\mathbf{f}}={\mathbf{f}}^{\prime}$. 
Now, assume that ${\mathbf{f}}={\mathbf{f}}^{\prime}$. Corollary 1 implies that $lit({\mathbf{f}})=lit({\mathbf{f}}^{\prime})$. In other words, $M$ and $M^{\prime}$ agree on every atomic subformula of $\varphi$, and hence on every $\alpha\in{\mathcal{L}_{lit}({\varphi})}$, that is, ${{[{M}]^{\varphi}}}={{[{M^{\prime}}]^{\varphi}}}$. ###### Lemma 10 Let $M$ be a model and $\varphi$ an ${\cal ALC}$-formula. Then, the following holds: $\operatorname{Mod}({\varphi})\setminus{{[{M}]^{\varphi}}}=\operatorname{Mod}({\operatorname{con}(\varphi,M)})$. ###### Proof Let $(T,o,{\mathbf{f}})\coloneqq\operatorname{qm}({\varphi},{M})$ and $(T^{\prime},o^{\prime},{\mathbf{f}}^{\prime})\coloneqq\operatorname{qm}({\varphi},{M^{\prime}})$. First, suppose that $M^{\prime}\in{\operatorname{Mod}({\varphi})\setminus{{[{M}]^{\varphi}}}}$. We know that $M^{\prime}\models\varphi$ and by Lemma 7 we get that $\operatorname{qm}({\varphi},{M^{\prime}})$ is a quasimodel for $\varphi$. We also know that $M^{\prime}\not\in\ {{[{M}]^{\varphi}}}$. Thus, from Lemma 9, we obtain ${\mathbf{f}}\neq{\mathbf{f}}^{\prime}$. Therefore, ${\mathbf{f}}^{\prime}\in\operatorname{\mu}(\varphi,M)$. Hence, $M^{\prime}\in{\operatorname{Mod}({\operatorname{con}(\varphi,M)})}$ and so ${\operatorname{Mod}({\varphi})\setminus{{[{M}]^{\varphi}}}}\subseteq{\operatorname{Mod}({\operatorname{con}(\varphi,M)})}$. Now, let $M^{\prime}\in{\operatorname{Mod}({\operatorname{con}(\varphi,M)})}$. This means that there is at least one ${\mathbf{f}}^{\prime\prime}\in\operatorname{\mu}(\varphi,M)$ such that $M^{\prime}\models\bigwedge lit({\mathbf{f}}^{\prime\prime})$. But as a consequence of the definition of formula type, this implies that $M^{\prime}\in\operatorname{Mod}({\varphi})$ and thus $(T^{\prime},o^{\prime},{\mathbf{f}}^{\prime})\in{\sf S}(\varphi)$. We also know that $M^{\prime}\not\in\ {{[{M}]^{\varphi}}}$, otherwise ${\mathbf{f}}^{\prime}={\mathbf{f}}$ due to Lemma 9. Therefore, $M^{\prime}\in{\operatorname{Mod}({\varphi})\setminus{{[{M}]^{\varphi}}}}$ and we can conclude that ${\operatorname{Mod}({\operatorname{con}(\varphi,M)})}\subseteq{\operatorname{Mod}({\varphi})\setminus{{[{M}]^{\varphi}}}}$. Finally, we obtain: ${\operatorname{Mod}({\operatorname{con}(\varphi,M)})}={\operatorname{Mod}({\varphi})\setminus{{[{M}]^{\varphi}}}}$. See 3.3 ###### Proof Assume that $\operatorname{con}^{*}(\varphi,{M})\equiv\operatorname{con}(\varphi,{M})$. From Lemma 10 we have that $\operatorname{Mod}({\operatorname{con}^{*}(\varphi,{M})})=\operatorname{Mod}({\varphi})\setminus{{[{{M}}]^{\varphi}}}$, hence success and inclusion are immediately satisfied. To prove atomic retainment, assume that ${M}^{\prime}\not\in\operatorname{Mod}({\operatorname{con}^{*}(\varphi,{M})})$ and that there is a set of models $\mathbb{M}^{\prime}$ with ${M}^{\prime}\in\mathbb{M}^{\prime}$, $\operatorname{Mod}({\operatorname{con}^{*}(\varphi,{M})})\subset\mathbb{M}^{\prime}\subseteq\operatorname{Mod}({\varphi})\setminus{{[{{M}}]^{\varphi}}}$ and that is finitely representable in ${\cal ALC}$-formula. Lemma 10 implies that $\operatorname{Mod}({\operatorname{con}^{*}(\varphi,{M})})=\operatorname{Mod}({\varphi})\setminus{{[{M}]^{\varphi}}}$. Hence, ${M}^{\prime}\in{{[{{M}}]^{\varphi}}}$, a contradiction as we assumed that $\mathbb{M}^{\prime}\subseteq\operatorname{Mod}({\varphi})\setminus{{[{{M}}]^{\varphi}}}$. Therefore, no such $\mathbb{M}^{\prime}$ could exist, and thus, $\operatorname{con}^{*}$ satisfies atomic retainment. Let ${M}^{\prime}\equiv^{\varphi}{M}$. 
Since $\operatorname{Mod}({\operatorname{con}^{*}(\varphi,{M})})=\operatorname{Mod}({\varphi})\setminus{{[{{M}}]^{\varphi}}}$ and ${{[{M^{\prime}}]^{\varphi}}}={{[{M}]^{\varphi}}}$, we have that: $\operatorname{Mod}({\varphi})\setminus{{[{{M}}]^{\varphi}}}=\operatorname{Mod}({\varphi})\setminus{{[{{M}^{\prime}}]^{\varphi}}}=\operatorname{Mod}({\operatorname{con}^{*}(\varphi,{M}^{\prime})})$. Hence, atomic extensionality is also satisfied. On the other hand, suppose that $\operatorname{con}^{*}(\varphi,M)$ satisfies the postulates stated. Let $M^{\prime}\in\operatorname{Mod}({\varphi})\setminus{{[{M}]^{\varphi}}}$ and assume that $M^{\prime}\not\in\operatorname{Mod}({\operatorname{con}^{*}(\varphi,M)})$. Due to atomic retainment, this means that there is no set $\mathbb{M}^{\prime}$ finitely representable in ${\cal ALC}$-formula such that $\operatorname{Mod}({\operatorname{con}^{*}(\varphi,{M})})\subset\mathbb{M}^{\prime}\subseteq\operatorname{Mod}({\varphi})\setminus{{[{{M}}]^{\varphi}}}$ and ${M}^{\prime}\in\mathbb{M}^{\prime}$. But we know from Lemma 10 that $\operatorname{Mod}({\varphi})\setminus{{[{{M}}]^{\varphi}}}$ is finitely representable in ${\cal ALC}$-formula and includes ${M}^{\prime}$ by assumption, a contradiction. Thus, no such ${M}^{\prime}$ could exist and $\operatorname{Mod}({\varphi})\setminus{{[{M}]^{\varphi}}}\subseteq\operatorname{Mod}({\operatorname{con}^{*}(\varphi,M)})$. Now, let ${M}^{\prime}\in\operatorname{Mod}({\operatorname{con}^{*}(\varphi,{M})})$. By inclusion ${M}^{\prime}\in\operatorname{Mod}({\varphi})$ and by success ${M}^{\prime}\neq{M}$. We will show that ${M}^{\prime}\not\in{{[{M}]^{\varphi}}}$. By contradiction, suppose that ${M}^{\prime}\in\ {{[{M}]^{\varphi}}}$. Due to atomic extensionality $\operatorname{Mod}({\operatorname{con}^{*}(\varphi,M)})=\operatorname{Mod}({\operatorname{con}^{*}(\varphi,M^{\prime})})$, but success implies that $M^{\prime}\not\in\operatorname{Mod}({\operatorname{con}^{*}(\varphi,M^{\prime})})$. This contradicts our initial assumption that $M^{\prime}\in\operatorname{Mod}({\operatorname{con}^{*}(\varphi,M)})$. Therefore $M^{\prime}\in\operatorname{Mod}({\varphi})\setminus{{[{M}]^{\varphi}}}$ and we can conclude that $\operatorname{Mod}({\operatorname{con}^{*}(\varphi,M)})\subseteq\operatorname{Mod}({\varphi})\setminus{{[{M}]^{\varphi}}}$. Hence, Lemma 10 yields $\operatorname{con}(\varphi,M)\equiv\operatorname{con}^{*}(\varphi,M)$. ###### Proposition 1 Let ${\mathbf{f}}$ be a formula type. If $M\models\bigwedge lit({\mathbf{f}})$, $\psi\in{\mathcal{L}_{lit}({{\mathbf{f}}})}$ and $M\models\psi$ then $\bigwedge lit({\mathbf{f}})\models\psi$. ###### Proof Let ${\mathbf{f}}$ be a formula type, $M$ a model such that $M\models\bigwedge lit({\mathbf{f}})$, and $\psi\in{\mathcal{L}_{lit}({{\mathbf{f}}})}$ an ${\cal ALC}$-formula such that $M\models\psi$. The proof is by induction on the degree of $\psi$. * Base: $degree(\psi)=1$. Thus, from its definition, $\psi$ has to be an atomic formula. As ${\mathbf{f}}$ is a formula type, we have that $\varphi\in{\mathbf{f}}$ iff $\neg\varphi\not\in{\mathbf{f}}$. Let us suppose for contradiction that $\psi\not\in{\mathbf{f}}$. Thus, $\neg\psi\in{\mathbf{f}}$. This implies that $\bigwedge lit({\mathbf{f}})\models\neg\psi$. Thus, as $M\models\bigwedge lit({\mathbf{f}})$, we have that $M\models\neg\psi$. This contradicts the hypothesis that $M\models\psi$. Thus, we conclude that $\psi\in{\mathbf{f}}$. Therefore, $\bigwedge lit({\mathbf{f}})\models\psi$. 
Induction Hypothesis: For every formula $\varphi\in{\mathcal{L}_{lit}({{\mathbf{f}}})}$, if $degree(\varphi)<degree(\psi)$ and $M\models\varphi$ then $\bigwedge lit({\mathbf{f}})\models\varphi$. Induction Step: Let $degree(\psi)>1$. By construction, $\psi$ is of the form (1) $\varphi\land\varphi^{\prime}$ or (2) $\neg\varphi$, for some ${\cal ALC}$-formulae $\varphi$ and $\varphi^{\prime}$: 1. (1) $\psi=\varphi\land\varphi^{\prime}$. From the definition, $degree(\varphi\land\varphi^{\prime})=degree(\varphi)+degree(\varphi^{\prime})$. Recall from the definition of $degree$ that $degree(\beta)\geq 1$, for every formula $\beta$. Therefore, $degree(\varphi)<degree(\varphi\land\varphi^{\prime})$ and $degree(\varphi^{\prime})<degree(\varphi\land\varphi^{\prime})$. This means that $degree(\varphi)<degree(\psi)$ and $degree(\varphi^{\prime})<degree(\psi)$. From the hypothesis, $M\models\psi=\varphi\land\varphi^{\prime}$. Thus, $M\models\varphi$ and $M\models\varphi^{\prime}$. This implies from the IH that $\bigwedge lit({\mathbf{f}})\models\varphi\mbox{ and }\bigwedge lit({\mathbf{f}})\models\varphi^{\prime}.$ Therefore, $\bigwedge lit({\mathbf{f}})\models\varphi\land\varphi^{\prime}=\psi$. 2. (2) $\psi=\neg\varphi$. We have two cases: either (i) $\varphi$ is an atomic formula or (ii) $\varphi=(\beta\land\beta^{\prime})$. For the first case, analogously to the base case, we get that $\bigwedge lit({\mathbf{f}})\models\psi$. So we focus only on the second case. From the definition of $degree$, we get that $degree(\neg\beta)<degree(\psi)$ and $degree(\neg\beta^{\prime})<degree(\psi)$. As $M\models\psi=\neg(\beta\land\beta^{\prime})$, we get that either (a) $M\models\neg\beta$ or (b) $M\models\neg\beta^{\prime}$. * (a) $M\models\neg\beta$. From above, $degree(\neg\beta)<degree(\psi)$. Thus, from the IH, we get that $\bigwedge lit({\mathbf{f}})\models\neg\beta$. Thus, $\bigwedge lit({\mathbf{f}})\models\neg(\beta\land\beta^{\prime})=\psi$. * (b) $M\models\neg\beta^{\prime}$. Analogous to case (a). ###### Lemma 11 If $M\models\varphi$, ${\mathbf{f}}\in\operatorname{ftypes}({\varphi})$ and $M\models\bigwedge lit({\mathbf{f}})$ then $\operatorname{Mod}({\bigwedge lit({\mathbf{f}})})={{[{M}]^{\varphi}}}$. ###### Proof We need to show that $M^{\prime}\in\operatorname{Mod}({\bigwedge lit({\mathbf{f}})})$ iff $M^{\prime}\in{{[{M}]^{\varphi}}}$. * “$\Rightarrow$”. $M^{\prime}\in\operatorname{Mod}({\bigwedge lit({\mathbf{f}})})$. To show that $M^{\prime}\in{{[{M}]^{\varphi}}}$, it suffices to show that $M^{\prime}\equiv^{\varphi}M$. Let $\psi\in{\mathcal{L}_{lit}({\varphi})}$; we need to show that $M\models\psi$ iff $M^{\prime}\models\psi$. * (a) “$\Rightarrow$” $M\models\psi$. From Proposition 1, we have that $\bigwedge lit({\mathbf{f}})\models\psi$. This jointly with $M^{\prime}\models\bigwedge lit({\mathbf{f}})$ implies that $M^{\prime}\models\psi$. * (b) “$\Leftarrow$” $M^{\prime}\models\psi$. Analogous to item (a). * “$\Leftarrow$” $M^{\prime}\in{{[{M}]^{\varphi}}}$. Note that $\bigwedge lit({\mathbf{f}})\in{\mathcal{L}_{lit}({\varphi})}$. Thus, as $M^{\prime}\equiv^{\varphi}M$ and $M\models\bigwedge lit({\mathbf{f}})$, we get that $M^{\prime}\models\bigwedge lit({\mathbf{f}})$. See 1 ###### Proof We have two cases: either (i) $M\models\varphi$ or (ii) $M\not\models\varphi$. 1. (i) $M\models\varphi$. Then, from the definition of $\operatorname{ex}$, we have that $\operatorname{ex}(\varphi,M)=\varphi$, which implies that $\operatorname{Mod}({\operatorname{ex}(\varphi,M)})=\operatorname{Mod}({\varphi})$. As $M\models\varphi$, we get that ${{[{M}]^{\varphi}}}\subseteq\operatorname{Mod}({\varphi})$. 
Therefore, $\operatorname{Mod}({\varphi})\cup{{[{M}]^{\varphi}}}=\operatorname{Mod}({\varphi})$. This implies that $\operatorname{Mod}({\operatorname{ex}(\varphi,M)})=\operatorname{Mod}({\varphi})\cup{{[{M}]^{\varphi}}}.$ 2. (ii)$M\not\models\varphi$. Thus, from definition of $\operatorname{ex}$, we get that $\operatorname{ex}(\varphi,M)=\varphi\lor\bigwedge lit({\mathbf{f}}),\mbox{ where }qm(\neg\varphi,M)=(T,o,{\mathbf{f}}).$ This implies that $\operatorname{Mod}({\operatorname{ex}(\varphi,M)})=\operatorname{Mod}({\varphi\lor\bigwedge lit({\mathbf{f}})})$. Note that $\operatorname{Mod}({\varphi\lor\bigwedge lit({\mathbf{f}})})=\operatorname{Mod}({\varphi})\cup\operatorname{Mod}({\bigwedge lit({\mathbf{f}})}).$ As $qm(\neg\varphi,M)=(T,o,{\mathbf{f}})$, it follows from the definition of $qm$ that ${\mathbf{f}}\in\operatorname{ftypes}({\neg\varphi})$ and $M\models\bigwedge lit({\mathbf{f}})$. In summary, $M\models\neg\varphi$, ${\mathbf{f}}\in\operatorname{ftypes}({\neg\varphi})$ and $M\models\bigwedge lit({\mathbf{f}})$. Thus, from Lemma 11, we have that $\operatorname{Mod}({\bigwedge lit({\mathbf{f}})})={{[{M}]^{\varphi}}}.$ Therefore, $\operatorname{Mod}({\operatorname{ex}(\varphi,M)})=\operatorname{Mod}({\varphi}){\cup}{{[{M}]^{\varphi}}}.$ See 3.5 ###### Proof We consider each case separately. 1. (i) $M\models\varphi$. Thus, ${{[{M}]^{\varphi}}}{\subseteq}\ \operatorname{Mod}({\varphi})$ which implies that $\operatorname{Mod}({\varphi})\cup{{[{M}]^{\varphi}}}=\operatorname{Mod}({\varphi}).$ Therefore, $\operatorname{ex}^{*}(\varphi,M)\equiv\varphi$. 2. (ii) $M\not\models\varphi$. Let $qm(\neg\varphi,M)=(T,o,{\mathbf{f}})$. Note that ${\mathbf{f}}\in\operatorname{ftypes}({\varphi})$, and from definition of $qm$ that $M\models\bigwedge lit({\mathbf{f}})$. Thus, it follows from Lemma 11 that $\operatorname{Mod}({\bigwedge lit({\mathbf{f}})})={{[{M}]^{\varphi}}}$. Thus, $\operatorname{Mod}({\varphi\lor\bigwedge qm(\neg\varphi,M)})=\operatorname{Mod}({\varphi})\cup{{[{M}]^{\varphi}}}$. This means that $\operatorname{ex}^{*}(\varphi,M)\equiv\varphi\lor\bigwedge qm(\neg\varphi,M)$. See 3.6 ###### Proof First, assume that $\operatorname{ex}^{*}(\varphi,{M})\equiv\operatorname{ex}(\varphi,{M})$. From Lemma 1 we have that $\operatorname{Mod}({\operatorname{ex}^{*}(\varphi,{M})})=\operatorname{Mod}({\varphi})\cup{{[{{M}}]^{\varphi}}}$, hence success and persistence are immediately satisfied. To prove atomic temperance, assume that ${M}^{\prime}\in\operatorname{Mod}({\operatorname{ex}^{*}(\varphi,{M})})$ and that there is a set of models $\mathbb{M}^{\prime}$ with ${M}^{\prime}$ that is finitely representable in ${\cal ALC}$-formula and such that $\operatorname{Mod}({\varphi})\cup{{[{{M}}]^{\varphi}}}\subseteq\mathbb{M}^{\prime}\subset\operatorname{Mod}({\operatorname{ex}^{*}(\varphi,{M})})$. Lemma 1 implies that $\operatorname{Mod}({\operatorname{ex}^{*}(\varphi,{M})})=\operatorname{Mod}({\varphi})\cup{{[{M}]^{\varphi}}}$. Hence, ${M}^{\prime}\not\in{{[{{M}}]^{\varphi}}}$, a contradiction as we assumed that $\mathbb{M}^{\prime}\supseteq\operatorname{Mod}({\varphi})\cup{{[{{M}}]^{\varphi}}}$. Therefore, no such $\mathbb{M}^{\prime}$ could exist, and thus, $\operatorname{ex}^{*}$ satisfies atomic temperance. Let ${M}^{\prime}\equiv^{\varphi}{M}$. 
Since $\operatorname{Mod}({\operatorname{ex}^{*}(\varphi,{M})})=\operatorname{Mod}({\varphi})\cup{{[{{M}}]^{\varphi}}}$ and ${{[{M^{\prime}}]^{\varphi}}}={{[{M}]^{\varphi}}}$, we have that: $\operatorname{Mod}({\varphi})\cup{{[{{M}}]^{\varphi}}}=\operatorname{Mod}({\varphi})\cup{{[{{M}^{\prime}}]^{\varphi}}}=\operatorname{Mod}({\operatorname{ex}^{*}(\varphi,{M}^{\prime})})$. Hence, atomic extensionality is also satisfied. On the other hand, suppose that $\operatorname{ex}^{*}(\varphi,M)$ satisfies the postulates stated. Let ${M}^{\prime}\in\operatorname{Mod}({\varphi})\cup{{[{{M}}]^{\varphi}}}$. If ${M}^{\prime}\in\operatorname{Mod}({\varphi})$ then success ensures that ${M}^{\prime}\in\operatorname{Mod}({\operatorname{ex}^{*}(\varphi,{M})})$. Otherwise, we have ${M}^{\prime}\equiv^{\varphi}{M}$, and as consequence of success and atomic extensionality we also obtain ${M}^{\prime}\in\operatorname{Mod}({\operatorname{ex}^{*}(\varphi,{M})})$. Therefore, $\operatorname{Mod}({\varphi})\cup{{[{{M}}]^{\varphi}}}\subseteq\operatorname{Mod}({\operatorname{ex}^{*}(\varphi,{M})})$. Now, let ${M}^{\prime}\in\operatorname{Mod}({\operatorname{ex}^{*}(\varphi,{M})})$ and assume that ${M}^{\prime}\not\in\operatorname{Mod}({\varphi})\cup{{[{{M}}]^{\varphi}}}$. Success, persistence and atomic extensionality imply that $\operatorname{Mod}({\operatorname{ex}^{*}(\varphi,{M})})$. Atomic temperance states that there is no set of models $\mathbb{M}^{\prime}$ that is finitely representable in ${\cal ALC}$-formula with $\operatorname{Mod}({\varphi})\cup{{[{{M}}]^{\varphi}}}\subseteq\mathbb{M}^{\prime}\subset\operatorname{Mod}({\operatorname{ex}^{*}(\varphi,{M})})\cup\\{{M}\\}$. But we know from Lemma 1 that $\operatorname{Mod}({\varphi})\cup{{[{{M}}]^{\varphi}}}$ is finitely representable in ${\cal ALC}$-formula and does not include ${M}^{\prime}$ by assumption, a contradiction. Thus, no such ${M}^{\prime}$ could exist and $\operatorname{Mod}({\operatorname{ex}^{*}(\varphi,{M})})\subseteq\operatorname{Mod}({\varphi})\cup{{[{{M}}]^{\varphi}}}$. Hence, Lemma 1 yields $\operatorname{ex}^{*}(\varphi,{M})\equiv\operatorname{ex}(\varphi,{M})$.
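As a brief worked illustration of the notions used throughout this appendix (Definition 17, Lemma 6 and Lemma 8), and reusing the atoms $C(a)$ and $r(a,b)$ from the example above, consider $\phi=\neg(C(a)\land r(a,b))$. Its degree is $degree(\phi)=degree(C(a)\land r(a,b))+1=(1+1)+1=3$. Following Lemma 6, one formula type in $\tau(\phi)$ is ${\mathbf{f}}=\\{\neg(C(a)\land r(a,b))\\}\cup{\mathbf{f}}_{\psi}\cup{\mathbf{f}}_{\psi^{\prime}}$ with ${\mathbf{f}}_{\psi}=\\{\neg(C(a))\\}\in\tau(\neg C(a))$ and ${\mathbf{f}}_{\psi^{\prime}}=\\{r(a,b)\\}\in\tau(r(a,b))$, so that ${\mathbf{f}}=\\{\neg(C(a)\land r(a,b)),\neg(C(a)),r(a,b)\\}$ and $lit({\mathbf{f}})=\\{\neg(C(a)),r(a,b)\\}$. As Lemma 8 asserts, $\bigwedge lit({\mathbf{f}})=\neg(C(a))\land r(a,b)\models\neg(C(a)\land r(a,b))=\phi$.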
# Target-wavelength-trimmed second harmonic generation with gallium phosphide-on-nitride ring resonators Lillian Thiel1,*, Alan D. Logan1, Srivatsa Chakravarthi1, Shivangi Shree2, Karine Hestroffer3, Fariba Hatami3, and Kai-Mei C. Fu1,2 1Department of Electrical and Computing Engineering, University of Washington, Seattle WA 98195 2Department of Physics, University of Washington, Seattle WA 98195 3Department of Physics, Humboldt-Universität zu Berlin, 12489 Berlin, Germany <EMAIL_ADDRESS><EMAIL_ADDRESS> ###### Abstract We demonstrate post-fabrication target-wavelength trimming with a gallium phosphide on silicon nitride integrated photonic platform using controlled electron-beam exposure of hydrogen silsesquioxane cladding. A linear relationship between the electron-beam exposure dose and resonant wavelength red-shift enables deterministic, individual trimming of multiple devices on the same chip to within 30 pm of a single target wavelength. Second harmonic generation from telecom to near infrared at a target wavelength is shown in multiple devices with quality factors on the order of $10^{4}$. Post-fabrication tuning is an essential tool for targeted wavelength applications including quantum frequency conversion. ## 1 Introduction Nanophotonic devices have transformed frequency conversion processes, enabling conversion efficiencies orders of magnitude higher than in bulk materials by confining light and controlling mode dispersion. To fully leverage the potential of nanophotonic platforms for frequency conversion, multi-resonant devices are desirable: for example, in a $\chi^{(2)}$ nonlinear medium, doubly resonant devices for second-harmonic generation (SHG) and parametric down-conversion (PDC), and triply resonant devices for difference- and sum-frequency generation. Moreover, applications are emerging in which an absolute frequency of one or more interacting fields is required. Examples of this include PDC entangled photon sources for optical quantum computing [1, 2, 3, 4] and frequency conversion of single visible or NIR photons, emitted by ions or solid-state defects, for long-distance fiber transmission [5, 6, 7, 8, 9]. Toward this latter application, multi-resonant frequency conversion of photons to/from the telecom band has been demonstrated in nanophotonic structures fabricated from non-linear materials such as gallium phosphide (GaP) [10], aluminum nitride [11], lithium niobate [12, 13] and silicon nitride [14, 15]. However, tight fabrication tolerances for such devices typically cause resonant wavelengths to differ significantly from their designed values, presenting a challenge for practical device implementation. Here we introduce a promising technique for post-fabrication nanoscale control of photonic resonances, electron-beam modification of hydrogen silsesquioxane (HSQ) cladding, to address this challenge. Generally, photonic resonators can be tuned by altering the physical dimensions or optical properties of the structure. For a ring resonator, tuning variables include ring width and height, resonator refractive index, and cladding refractive index. Resonances may be either tuned via processes that require constant active control (e.g. temperature [16, 17], electric field [18, 19]) or trimmed via processes that permanently change device parameters (e.g. etching [20, 21, 22], introducing strain [23, 24], modification of cladding material [25, 26, 27, 28]). 
Resonant wavelength modification of ring resonators for frequency conversion processes presents a particularly complex challenge due to the simultaneous involvement of multiple wavelengths. Any tuning or trimming process will cause a shift in both absolute resonant wavelength and relative spacing between the two or more required resonances. Thus, realizing multiple independent tuning mechanisms and/or developing techniques which offer the spatial resolution to preferentially tune certain resonator modes are particularly interesting. Further, a tuning mechanism that can be readily integrated into a well established fabrication process is preferred. HSQ is heavily utilized for high-resolution electron-beam lithography [29, 30] of photonic devices and is fully compatible with most materials and fabrication processes. Prior studies have investigated the utility of HSQ for resonance tuning using thermal annealing [31] to convert the cage-like structure of uncured HSQ into a denser cross-linked silica structure, increasing the index of refraction from n = 1.45 to n = 1.51. Similarly, localized laser[26] and in situ[25] annealing of HSQ on individual silicon micro-ring resonators has been utilized to shift resonances by up to 2 nm. Both techniques involve heating the devices to high temperatures, thus limiting integration of III-V non-linear photonic materials such as GaAs and GaP. Additionally, long-term, the spatial resolution afforded by thermal processes will be inadequate for multi-resonance frequency conversion applications. High-energy electron-beam exposure allows high spatial and tuning resolution [28]. In this work we investigate the utility of HSQ electron-beam exposure as a method for high-resolution, post-fabrication, single device trimming, with the potential for mode-selective trimming. Electron-beam exposure of HSQ has been shown to induce cross-linking similar to thermal annealing, while high exposure intensity enables further refractive index tuning (up to $\mathrm{n=1.62}$) via the formation of silicon-rich SiO2 [32, 33]. Our results indicate that the induced wavelength shift is linearly dependent on the electron-beam exposure dose, with an observed trimming range of $\sim$ 5 nm for the applied doses. We demonstrate high spatial and tuning resolution, resonance stability, and minimal impact on device quality factor (Q), making HSQ electron-beam trimming a promising candidate for use in quantum frequency conversion applications. Finally, we demonstrate successful target-wavelength second harmonic generation with multiple devices and provide an outlook toward utilizing this technology for simultaneous multiple resonance tuning. Figure 1: (a) SEM image of the fabricated GaP-on-Si3N4 resonator with accompanying grating couplers. The device is excited with telecom band tunable laser (red) and the SHG signal is measured out of the vis/NIR output grating (yellow). (b) Simulated resonator $\vec{E}$-field profiles of the quasi-phase matched telecom and second-harmonic modes. (c) Pre and post trimming optical images of a device. The ring was exposed with a donut pattern. The waveguide and grating structures were utilized for exposure alignment. To reveal the exposed HSQ pattern during the trimming process, the post-trimmed optical image was acquired after developing the exposed HSQ in a 25% TMAH solution. Note that all optical characterizations were performed before the development. 
## 2 Resonator design, fabrication and testing The ring resonators were designed and fabricated in a 250 nm (100) GaP photonic layer on an epiwafer substrate with 220 nm LPCVD silicon nitride on 4 µm silicon oxide on silicon. A ring radius of 7.77 µm and width of 940 nm were selected to satisfy a quasi-phase matching condition for a TE00 fundamental mode and TM03 second harmonic (SH) mode at $\mathrm{\lambda_{0}\approx 1550\,nm}$ and $\mathrm{775\,nm}$, respectively [10]. Each ring mode is evanescently coupled to a different input/output waveguide leading to a pair of cross-polarized grating couplers. A device SEM image and schematic are shown in Fig. 1a, with the relevant mode profiles in Fig. 1b. Additional design details can be found in Appendix A1. Eigenmode simulations of the ring resonator predict that an increment in refractive index of $\mathrm{\Delta n_{HSQ}=+\,0.1}$ in a 100 nm thick conformal HSQ cladding will red-shift the resonant wavelength of the fundamental mode by $\sim$ 2.98 nm and SH mode by $\sim$ 1.07 nm. With changes in HSQ refractive index as high as 0.22 reported [32, 33], the predicted trimming range is capable of compensating for typical device-to-device fabrication variations. To fabricate the structure, the 250 nm GaP layer was released from a GaP substrate by etching an intermediate $\mathrm{Al_{0.8}Ga_{0.2}P}$ sacrificial layer with dilute HF. The GaP was then transferred to the Si3N4-on-SiO2/Si substrate using a wet transfer process [34, 35, 36, 10]. Electron-beam lithography using HSQ resist and subsequent plasma reactive-ion-etching (RIE) of the GaP layer forms the photonic devices. Two chips (A and B) were fabricated with identical device dimensions. After fabrication, a layer of developed HSQ mask remains. This HSQ layer is removed with a vapor-HF etch, enabling the spin-on of a new conformal coating of HSQ for the resonance trimming process. After HSQ spin-on, each device was trimmed by exposing a donut-shaped region extending 1 µm beyond the inner and outer device radii at a beam voltage of 5 keV (Fig. 1c). Further details of the fabrication are provided in Appendix A2. The fabrication process is compatible with diamond substrates for GaP-on-diamond or GaAs-on-diamond photonic circuits integrated with solid-state defects [35, 36, 37]. The fundamental resonances of individual devices were identified by sweeping a continuous-wave telecom laser (1530 to 1565 nm) and detecting transmission spectra using a InGaAs photodiode. In the visible band a supercontinuum laser (550 to 1350 nm) was used for excitation and the transmission spectra is detected with a grating spectrometer. The excitation polarization was set to excite TE (1550 nm) or TM (775 nm) modes, as depicted in Fig. 1a. Scattered excitation light was filtered out of the collection path via polarization and spatial filtering. Chip-A exhibited resonances with Q-factors of 7 $\pm$ 2.2 $\times$103. Chip-B exhibited narrower resonances with Q-factors of 3 $\pm$ 1.7 $\times$104. ## 3 HSQ target-wavelength trimming The first goal is to establish a relationship between exposure dose and resonant wavelength shift to deterministically tune the resonant wavelength of individual devices on-chip. Approximately 24 hours after HSQ spin-on, 13 devices (chip-A) were selected for a dose-test and exposed with electron-beam lithography. Three devices each received doses of 1000, 1750, and 2500 µC/cm2 respectively and four devices received no exposure. 
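To make the trimming recipe concrete, the short sketch below (Python; the shift values are hypothetical placeholders, not the measured data, and the actual tuning rates are reported below) illustrates how a dose test of this kind reduces to a single linear tuning rate, which is then inverted to choose the exposure dose that red-shifts a given device onto the target wavelength.

```python
import numpy as np

# Hypothetical dose-test data: exposure dose (uC/cm^2) vs. measured resonance
# red-shift (pm). The real rates for these devices are quoted in the text.
doses  = np.array([0.0, 1000.0, 1750.0, 2500.0])
shifts = np.array([0.0, 1400.0, 2450.0, 3500.0])

# Linear dose-shift calibration: shift ~ rate * dose
rate = np.polyfit(doses, shifts, 1)[0]   # pm per uC/cm^2

# Invert the calibration for one device (exposure can only red-shift,
# so the target must lie at a longer wavelength than the current resonance).
lambda_now    = 1539.20   # nm, hypothetical as-fabricated resonance
lambda_target = 1540.50   # nm, common target wavelength
dose_needed = (lambda_target - lambda_now) * 1e3 / rate   # uC/cm^2

print(f"tuning rate: {rate:.2f} pm per uC/cm^2")
print(f"required dose: {dose_needed:.0f} uC/cm^2")
```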
Transmission spectra of one specific fundamental mode taken before and after exposure show a linear relationship between resonance shift and exposure dose (Fig. 2a). On average, the exposed devices exhibit a tuning rate (red-shift) of 1.4 $\pm$0.1 pm/µC/cm2, with the maximum tested dose (2500 µC/cm2) giving an average shift of 3.08 nm. Figure 2: Chip-A: (a) Relationship between HSQ exposure dose and observed wavelength shift at the fundamental wavelength. (b), (c) A target trim-test showing the fundamental wavelength resonances for nine devices with identically designed resonator dimensions pre- and post-exposure, respectively. The target wavelength was selected to be 1540.5 nm, however a systematic overexposure resulted in resonances trimmed to 1540.91$\pm$0.19 nm. In the next step, a set of nine identically designed devices were identified (Fig. 2b) with initial resonances near $\mathrm{\lambda=1540.5}$ nm to trim to mutual resonance. Before trimming, the spread of resonant wavelengths was measured to be $\approx$ 3 nm. We used the above linear dose-shift relationship to determine the necessary exposure dose for targeted trimming of each device resonance to $\mathrm{\lambda=1540.5}$ nm. With a single exposure of each device, all resonances were trimmed to the target wavelength with an offset of 0.41$\pm$0.19 nm (Fig. 2c). Post-tuning analysis (Appendix A3) of the transmission spectra determined that the tuning rate is different for various families of modes supported by the ring. The highest observed shift was 4.92 nm (Fig. A1a, mode-1). The significant mode-sensitivity suggests deterministic inter-modal control could be developed within this platform. Further, we observed an average post-trim resonance drift of 10.6 pm/day (corresponding to 0.7% of average trimmed red- shift or 5% of the average target resonance linewidth) over 12 days. This minor drift followed a non-monotonic trend for all tested devices. The source of the post-exposure drift requires further investigation. ## 4 Target-wavelength-trimmed SHG In a second experiment (chip-B), we expanded the scope of our dose-test to incorporate both the fundamental and SH modes. Using the same dose-test procedure, a linear dose-shift relationship is found for the fundamental mode with a nearly identical tuning rate (1.2 $\pm$0.1 pm/µC/cm2) to the chip-A dose-test. The largest dose (2500 µC/cm2) corresponds to an average shift of 3.0 nm. This matches the simulated resonance shift for a cladding index change of $\mathrm{\Delta n_{HSQ}\approx 0.1}$, which is reasonable for the large electron-beam exposure dose [32]. A linear dose-shift relationship (0.5 $\pm$0.2 pm/µC/cm2) is also established for the SH mode (Fig. 3a). Due to the highly multi-mode nature of the ring at 775 nm, difficulty in consistently identifying radial modes prevented us from accurately determining the mode- specific trimming rate of the targeted SH mode. Thus, we focus on HSQ target trimming of the fundamental mode to demonstrate target-wavelength-trimmed SHG in multiple devices. We selected three nominally identical devices with telecom resonances that exhibited strong SH conversion. The initial spread of resonances was 1.11 nm. Each device was exposed to a calibrated HSQ electron-beam dose for trimming to a target wavelength of $\mathrm{\lambda=1533.75}$ nm. All three devices were trimmed to within $\mathrm{\approx 30\,pm}$ of the target wavelength (Fig. 3b). The strong temperature dependence of the SHG signal (Fig. 
3c), indicates that the devices were close to quasi-phase matched double resonance after the trimming process. The SH and fundamental resonances red-shift at different rates with increasing temperature, causing the SHG conversion efficiency to peak as the SH resonance tunes into and out of double resonance with the fundamental. All three devices gain some double-resonant SHG enhancement at the target wavelength (1533.75 nm), but due to remaining variations in the SH resonant wavelength after the HSQ exposure trimming, each resonator reaches its maximum SHG efficiency at a slightly different temperature and wavelength. Figure 3: Chip-B: (a) Linear dependence of resonant wavelength shifts with the doses for both fundamental and SH (blue) wavelengths with estimated slopes of 1.18 and 0.47 pm/µC/cm2 respectively. (b)The dose-test data is utilized to trim the fundamental wavelength resonance of three near-quasi-phase-matched devices. (c) Peak SHG intensity (blue) and corresponding input (fundamental resonance) wavelength (red) as a function of temperature for each trimmed device. The fundamental and SH resonance red-shift linearly with different rates with increasing temperature. The SHG intensity depends on how close the device is to double resonance (SH resonance at exactly half the input wavelength). (d) A simulated trimming scenario showing two ring resonators with different initial fundamental and SH resonant frequencies (points A and B). Blue and red regions show the range of resonant wavelengths that can be reached using a combination of temperature tuning (10 to 60∘C) and HSQ exposure (0 to 2500 µC/cm2), with the overlap (green) showing that each ring could be tuned to double resonance (dotted red line) over a range of wavelengths including the design wavelength (point C). Wavelength shift dependence on temperature and HSQ exposure are derived from experimental data for the fundamental and simulations for the SH resonances. SHG efficiency is also very sensitive to the Q-factors of the resonant modes. The narrow resonances observed on chip-B enable us to investigate the effect of trimming on the fundamental mode Q-factor. Initially the average Q-factor for 15 devices was 3 $\pm$1.7$\times$104. The Q-factor is observed to both decrease and increase with the average Q decreasing by 5.1 $\pm$6.1$\times$103 after exposure. It is likely that the majority of this variation in Q can be attributed to exposure of the coupling region, altering the coupling-Q as described in Appendix A1. An exposure pattern that avoids the coupling region could maintain the device Q through the trimming process. Further, selective exposure of the coupling regions could be utilized to tune devices to the critical-coupling regime in addition to resonant wavelength trimming. ## 5 Outlook and Conclusion In this work we demonstrated that HSQ exposure is a promising post-fabrication trimming method for photonic resonators. With a simple calibration process, we target-wavelength trim a large set of identically designed devices with typical post-fabrication variations. HSQ trimming will be an important tool in the larger effort towards target-wavelength frequency conversion. For example, combined with temperature tuning, HSQ trimming should enable dual target- wavelength trimming for doubly resonant quasi-phase matched frequency conversion processes. 
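A minimal sketch of how such a combined trim could be planned is given below: treat the HSQ dose and the operating temperature as two linear actuators on the fundamental and SH resonances and solve for the pair that brings both modes onto the doubly resonant target. The dose rates echo the calibration reported above, while the temperature coefficients and required shifts are placeholder values for illustration only.

```python
import numpy as np

# Linear response of the two resonances to the two actuators.
# Rows: [fundamental, second harmonic]; columns: [HSQ dose, temperature].
# Dose rates (pm per uC/cm^2) follow the calibration in the text;
# the temperature coefficients (pm/K) are placeholders for illustration.
response = np.array([[1.2, 10.0],
                     [0.5,  7.0]])

# Red-shifts (pm) required to bring this device's fundamental and SH
# resonances onto the doubly resonant target (hypothetical detunings).
needed = np.array([2000.0, 900.0])

dose, delta_T = np.linalg.solve(response, needed)
print(f"exposure dose ~ {dose:.0f} uC/cm^2, temperature offset ~ {delta_T:.1f} K")
```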
The devices in this work that were trimmed to correct variance at the telecom wavelength at room temperature still had some variance in the SH resonance, resulting in maximum SHG conversion at different temperatures and wavelengths. By taking the effect of temperature tuning into account when selecting exposure doses, Fig. 3d illustrates how multiple resonators could be trimmed to provide their maximum SHG conversion efficiency at the same target wavelength. Two simulated devices (A, B) with fundamental wavelengths differing by 1.3 nm at room temperature and $\mathrm{\sim 5.5\,nm}$ at their optimal SHG temperatures can be target wavelength tuned within the green dose-temperature area. A third independent tuning mechanism will be required to achieve optimal efficiency for a specific sum- or difference-frequency conversion process needed for quantum frequency conversion applications. The attractive properties of HSQ exposure trimming include tuning range, resolution and precision, low post-trim drift and minimal effect on quality factor. We expect that additional process development will further improve the accuracy and precision. Better mode identification will reduce uncertainty in mode-specific trimming rates enabling the implementation of multiple tuning mechanisms. Improved understanding of sources of drift will allow us to design for or mitigate post-trim resonant shifts. The spatial resolution afforded by electron-beam lithography for HSQ trimming creates potential for applications beyond resonance shifting. Selective exposure of coupling regions could allow post-fabrication fine tuning of coupling Q. Other works have shown that periodic nanometer-scale variations along the ring inner radius can target specific modes for mode-splitting [38]. Electron-beam lithography-based trimming makes target mode trimming and splitting a realistic opportunity. We have shown HSQ exposure to be a simple and effective post fabrication trimming technique. With further development it stands to become a process widely utilized by the photonics community. ## Appendix ## A1 Device design and simulation details The ring resonators were designed using bent-waveguide eigenmode simulations in Lumerical MODE. Using ring radius $r$ and width $w$ as design variables, the resonator was optimized to maximize nonlinear overlap $\beta$ while satisfying the mode polarization and quasi-phase matching (QPM) requirements imposed by the zincblende crystal symmetry and (100)-normal orientation of the GaP photonic layer [10, 39, 40]. At each design point, the target modes ($\mathrm{TE_{00}}$ at $\mathrm{\lambda_{1}\,=\,1550\,nm}$ and $\mathrm{TM_{03}}$ at $\mathrm{\lambda_{2}\,=\,775\,nm}$) were simulated in a ring cross section. The nonlinear overlap $\beta$ was calculated from the mode profiles, along with fractional azimuthal mode numbers $\mathrm{(m_{i}\,=\,2\pi\cdot r\cdot n_{eff,i}/\lambda_{i})}$ for each mode, which are used to check the QPM condition $\mathrm{(2m_{1}\,-\,m_{2}\,=\,\pm\,2)}$. Integer azimuthal mode numbers at the simulated wavelengths are preferred, but not required; a design with non- integer $m_{i}$ is expected to have QPM modes that are near double resonance at wavelengths that are not exactly 1550 nm and 775 nm in ambient conditions, which is acceptable if within the tolerance of our tuning capabilities. 
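A minimal sketch of this quasi-phase-matching bookkeeping is shown below; the effective indices are placeholder values (not the simulated ones), chosen only so that the example lands near the QPM condition.

```python
import numpy as np

def azimuthal_mode_number(radius_um, n_eff, wavelength_um):
    # m_i = 2*pi * r * n_eff,i / lambda_i (fractional in general)
    return 2.0 * np.pi * radius_um * n_eff / wavelength_um

r = 7.77  # ring radius in um (design value)

# Placeholder effective indices for the two target modes.
m1 = azimuthal_mode_number(r, n_eff=2.550, wavelength_um=1.550)  # TE00 fundamental
m2 = azimuthal_mode_number(r, n_eff=2.582, wavelength_um=0.775)  # TM03 second harmonic

mismatch = 2.0 * m1 - m2
print(f"m1 = {m1:.2f}, m2 = {m2:.2f}, 2*m1 - m2 = {mismatch:.2f}")
print("near QPM" if np.isclose(abs(mismatch), 2.0, atol=0.1) else "not quasi-phase matched")
```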
Based on previous experiments in similar material platforms (GaP-on-diamond, GaP-on-oxide), the intrinsic quality factor is expected to be dominated by fabrication-related factors such as sidewall roughness, so the simulated radiative quality factor was not considered in the design process. The ring resonator coupling regions were designed to provide specific coupling quality factors using a supermode analysis method. For a range of coupling waveguide widths and ring-to-waveguide distances, the target ring and waveguide modes were simulated with each structure alone, and then all guided modes were simulated for the combined structure. The mode overlaps between each coupling-region mode and the ring/waveguide modes were calculated and used to numerically find the total power transfer from the ring mode to the waveguide mode as a function of coupling region length. The coupling quality factor $Q_{c}$ was derived from the simulated free spectral range $\Delta\lambda_{FSR}$ of the resonator mode and the field coupling coefficient $\kappa$ from a single pass through the coupling region: $Q_{c}=\frac{2\pi\lambda}{\left|\kappa\right|^{2}\Delta\lambda_{FSR}}.$ (1) To provide a consistent interface to the rest of the photonic circuit, a constant waveguide width and propagation length were selected for each ring mode's coupling region, with only the separation distance varied to set $\mathrm{Q_{c}}$. The coupling quality factor depends significantly on how well the HSQ fills the space between the ring and the waveguide, as well as on the HSQ exposure dose; the coupling regions were simulated with a 100 nm conformal HSQ layer. The telecom ring mode is coupled to a 440 nm wide waveguide wrapped around a $\mathrm{45^{\circ}}$ arc of the ring, and the SH mode is coupled to a 120 nm waveguide with a $\mathrm{30^{\circ}}$ wrap. For the telecom mode, a ring-to-waveguide gap distance of 240 nm gives $\mathrm{Q_{c}\approx 24000-11000}$ with increasing e-beam exposure, and a distance of 360 nm gives $\mathrm{Q_{c}\approx 90000-55000}$. For the SH mode, a gap distance of 120 nm gives $\mathrm{Q_{c}\approx 4500-5500}$, and 160 nm gives $\mathrm{Q_{c}\approx 11000-13500}$. The grating couplers were designed to provide coupling to free space with minimal footprint and low back reflections. For each wavelength band and polarization, an aperiodic grating coupler design was found using a sampled hill-climbing optimization algorithm with a simplified model of the grating behavior as the objective function. The performance of the final design was verified with a 3D FDTD simulation. To suppress on-chip back reflections, which cause Fabry-Perot interference patterns in transmission spectra, the grating designs were implemented as elliptically shaped focusing grating couplers. The grating notches are shaped as sections of progressively larger ellipses with one focus centered on the end of the waveguide. Back reflections from the waveguide are directed toward the second focus of the ellipse instead of going back into the waveguide mode [41]. The design variables used for the elliptical grating were: the eccentricity, the angle between the waveguide and the major axis, the arc angle of the elliptical section, and the major axis length of the first grating notch. Designs were evaluated with 3D FDTD simulations to find a combination of parameters that sufficiently suppresses back reflection into the same mode while maintaining a reasonable spot shape for light scattered into free space.
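For orientation, the sketch below simply evaluates Eq. (1) above for round numbers (1550 nm wavelength, 14.7 nm FSR, a few percent single-pass power coupling); these inputs are illustrative, not the simulated design values.

```python
# Numerical evaluation of Eq. (1): Q_c = 2*pi*lambda / (|kappa|^2 * FSR),
# with all lengths in the same unit. Inputs here are round illustrative
# values, not the supermode-analysis design points.
import math

def coupling_q(wavelength_nm, kappa, fsr_nm):
    return 2.0 * math.pi * wavelength_nm / (abs(kappa) ** 2 * fsr_nm)

kappa = 0.03 ** 0.5            # field coefficient for 3% single-pass power coupling
print(coupling_q(1550.0, kappa, 14.7))   # ~2.2e4, comparable to the telecom Q_c range quoted above
```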
## A2 Fabrication details

The GaP material used for the photonics layer was grown by molecular beam epitaxy on a 300 nm $\mathrm{Al_{0.8}Ga_{0.2}P}$ sacrificial layer on a GaP substrate. This epitaxial GaP layer was released in 2% HF solution, followed by a H$_{2}$O rinse and wet transfer to the nitride substrate. To improve GaP adhesion, prior to transfer, a 5 nm layer of SiO$_{2}$ was thermally evaporated onto the substrate, which was then treated with hexamethyldisilazane vapor. The transferred GaP was allowed to dry overnight on a hotplate at 80 °C. Electron-beam lithography (JEOL-6300, 100 kV, 1 nA beam current) was utilized to pattern the designed photonic structures on a thin (150 nm) HSQ resist layer on top of the GaP-on-nitride stack. Multiple arrays of devices were patterned, with the resonator and coupling region dimensions varying across the pattern to compensate for fabrication variations. A reactive-ion etch (RIE) step (3.0 mTorr, 45 W RF, 60 W ICP, 235 V DC bias, 1.0/6.0/3.0 sccm Cl$_{2}$/Ar/N$_{2}$ flow) was used to etch the GaP layer. The residual HSQ mask then had to be removed. Initial attempts at HSQ removal using a dilute HF (0.5%) dip revealed low adhesion of the GaP devices to the substrate: stiction from the wet process damaged the waveguide coupling regions. Thus, an HF vapor-etch process was investigated. After the successful HSQ vapor etch, the chip was heated to 200 °C for 5 min to remove etch byproducts. This provided a clean surface for a $\sim$150 nm layer of HSQ electron-beam resist that was spun on and exposed for device trimming. The thickness of this HSQ layer is expected to shrink by up to 33% due to e-beam exposure [32].

## A3 Post-tuning analysis

Figure A1: (a) Resonance shift as a function of HSQ exposure dose shows a linear relation for two different modes, Mode-1 (red) and Mode-2 (blue), in the telecom wavelength range. The slope is different for the two modes, indicating that a different dose would be required to obtain the same shift for a specific mode. (b) Temporal stability of the trimming process for a subset of eight devices. Day-0 data shows the initial resonance wavelengths of all devices without HSQ exposure. Device 5 did not receive any e-beam exposure. The large shift at day 1 is due to coarse e-beam trimming. Devices 1, 2, 6 and 7 were re-exposed for fine target-wavelength trimming.

The dose-shift relationship depends on the overlap of the target mode with the surrounding cladding. Analysis of the chip-A trim-test data (Fig. A1a) reveals that two families of modes were observed in the transmission spectra (1530 to 1565 nm) with significantly different tuning rates. Mode-1 (FSR = 13.61 $\pm$ 0.14 nm) exhibits a sensitivity of 2.11 pm/(µC/cm²), and mode-2 (FSR = 12.91 $\pm$ 0.05 nm) exhibits a lower sensitivity of 1.45 pm/(µC/cm²). From simulations, the fundamental TE$_{00}$ mode is expected to have an FSR of $\approx$ 14.7 nm. Given the smaller observed FSR, we suspect both of these families of resonances are higher-order hybrid TE$_{01}$/TM$_{00}$ modes. Simulations show these hybrid modes tune about twice as fast as the fundamental mode, with a change in HSQ cladding index of 0.1 shifting the wavelength by about 6.25 nm and 7.32 nm, respectively, instead of 2.98 nm.
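A minimal sketch of how these calibration numbers are used follows: identify the mode family from its measured FSR, pick the corresponding dose-shift slope, and convert the desired red-shift into an exposure dose. The FSR threshold and the example inputs are our own illustrative choices, not part of the published procedure.

```python
# Dose selection from the chip-A calibration slopes quoted above
# (2.11 and 1.45 pm per uC/cm^2). The crude FSR-based classifier and the
# example numbers are illustrative assumptions.
SLOPES_PM_PER_DOSE = {"mode1": 2.11, "mode2": 1.45}   # pm per uC/cm^2

def mode_family(fsr_nm):
    """Mode-1 has FSR ~13.6 nm, mode-2 ~12.9 nm; split halfway between."""
    return "mode1" if fsr_nm > 13.25 else "mode2"

def exposure_dose(shift_nm, fsr_nm):
    """Dose (uC/cm^2) needed to red-shift the given mode by shift_nm."""
    slope = SLOPES_PM_PER_DOSE[mode_family(fsr_nm)]
    return 1e3 * shift_nm / slope        # nm -> pm, then divide by pm per dose

# Example: shift a mode-2 resonance (FSR ~12.9 nm) by +1.0 nm
print(exposure_dose(1.0, 12.9))          # ~690 uC/cm^2
```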
To monitor the temporal stability of the trimming process, after HSQ exposure, room-temperature transmission spectra of eight identically designed devices were periodically collected over a 12-day period. Devices 1, 2, 6 and 7 were re-exposed for fine target-wavelength trimming. Therefore, we only included the remaining devices in the detailed analysis of the resonance shift. One possible reason for the observed shift could be a change in the refractive index of the cladding over time (possibly from uptake of ambient moisture by the exposed HSQ [26, 25]); further investigation is required to identify the source of the drift.

## Funding

This material is based upon work supported by the National Science Foundation under Grant EFMA-1640986 (photonic design and fabrication) and by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage (C2QA), under contract number DE-SC0012704 (HSQ tuning and testing).

## Acknowledgments

The photonic devices were fabricated at the Washington Nanofabrication Facility, a National Nanotechnology Coordinated Infrastructure (NNCI) site at the University of Washington, which is supported in part by funds from the National Science Foundation (awards NNCI-1542101, 1337840 and 0335765).

## Disclosures

The authors declare that there are no conflicts of interest related to this article.

## Data Availability Statement

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

## References

* [1] P. G. Kwiat, E. Waks, A. G. White, I. Appelbaum, and P. H. Eberhard, “Ultrabright source of polarization-entangled photons,” Physical Review A 60, R773 (1999).
* [2] P. J. Mosley, J. S. Lundeen, B. J. Smith, P. Wasylczyk, A. B. U’Ren, C. Silberhorn, and I. A. Walmsley, “Heralded generation of ultrafast single photons in pure quantum states,” Physical Review Letters 100, 133601 (2008).
* [3] X.-C. Yao, T.-X. Wang, P. Xu, H. Lu, G.-S. Pan, X.-H. Bao, C.-Z. Peng, C.-Y. Lu, Y.-A. Chen, and J.-W. Pan, “Observation of eight-photon entanglement,” Nature photonics 6, 225–228 (2012).
* [4] Y. Wang, K. D. Jöns, and Z. Sun, “Integrated photon-pair sources with nonlinear optics,” Applied Physics Reviews 8, 011314 (2021).
* [5] J. D. Siverns, J. Hannegan, and Q. Quraishi, “Neutral-atom wavelength-compatible 780 nm single photons from a trapped ion via quantum frequency conversion,” Physical Review Applied 11, 014044 (2019).
* [6] X. Lu, Q. Li, D. A. Westly, G. Moille, A. Singh, V. Anant, and K. Srinivasan, “Chip-integrated visible–telecom entangled photon pair source for quantum communication,” Nature physics 15, 373–381 (2019).
* [7] P. Kumar, “Quantum frequency conversion,” Optics letters 15, 1476–1478 (1990).
* [8] M. Bock, P. Eich, S. Kucera, M. Kreis, A. Lenhard, C. Becher, and J. Eschner, “High-fidelity entanglement between a trapped ion and a telecom photon via quantum frequency conversion,” Nature communications 9, 1–7 (2018).
* [9] S. Zaske, A. Lenhard, C. A. Keßler, J. Kettler, C. Hepp, C. Arend, R. Albrecht, W.-M. Schulz, M. Jetter, P. Michler _et al._, “Visible-to-telecom quantum frequency conversion of light from a single quantum emitter,” Physical review letters 109, 147404 (2012).
* [10] A. D. Logan, M. Gould, E. R. Schmidgall, K. Hestroffer, Z. Lin, W. Jin, A. Majumdar, F. Hatami, A. W. Rodriguez, and K.-M. C. Fu, “400%/W second harmonic conversion efficiency in 14 $\mu$m-diameter gallium phosphide-on-oxide resonators,” Optics express 26, 33687–33699 (2018).
* [11] A. W. Bruch, X. Liu, X. Guo, J. B. Surya, Z. Gong, L. Zhang, J. Wang, J. Yan, and H. X.
Tang, “17 000%/w second-harmonic conversion efficiency in single-crystalline aluminum nitride microresonators,” Applied Physics Letters 113, 131102 (2018). * [12] A. Rao, K. Abdelsalam, T. Sjaardema, A. Honardoost, G. F. Camacho-Gonzalez, and S. Fathpour, “Actively-monitored periodic-poling in thin-film lithium niobate photonic waveguides with ultrahigh nonlinear conversion efficiency of 4600% w- 1 cm- 2,” Optics express 27, 25920–25930 (2019). * [13] J.-Y. Chen, Z.-H. Ma, Y. M. Sua, Z. Li, C. Tang, and Y.-P. Huang, “Ultra-efficient frequency conversion in quasi-phase-matched lithium niobate microrings,” Optica 6, 1244–1245 (2019). * [14] Y. Liu, M. Davanço, V. Aksyuk, and K. Srinivasan, “Electromagnetically induced transparency and wideband wavelength conversion in silicon nitride microdisk optomechanical resonators,” Physical review letters 110, 223603 (2013). * [15] X. Lu, G. Moille, Q. Li, D. A. Westly, A. Singh, A. Rao, S.-P. Yu, T. C. Briles, S. B. Papp, and K. Srinivasan, “Efficient telecom-to-visible spectral translation through ultralow power nonlinear nanophotonics,” Nature Photonics 13, 593–601 (2019). * [16] P. Dong, S. Liao, D. Feng, H. Liang, D. Zheng, R. Shafiiha, C.-C. Kung, W. Qian, G. Li, X. Zheng, A. V. Krishnamoorthy, and M. Asghari, “Low vpp, ultralow-energy, compact, high-speed silicon electro-optic modulator,” Optics Express 17 (2009). * [17] L. Koehler, P. Chevalier, E. Shim, B. Desiatov, A. Shams-Ansari, M. Piccardo, Y. Okawachi, M. Yu, M. Loncar, M. Lipson, A. L. Gaeta, and F. Capasso, “Direct thermo-optical tuning of silicon microresonators for the mid-infrared,” Optics Express 26 (2018). * [18] H. Jung, K. Y. Fong, C. Xiong, and H. X. Tang, “Electrical tuning and switching of an optical frequency comb generated in aluminum nitride microring resonators,” Optics Letters 39 (2014). * [19] H. Shen, M. H. Khan, L. Fan, L. Zhao, Y. Xuan, J. Ouyang, L. T. Varghese, and M. Qi, “Eight-channel reconfigurable microring filters with tunable frequency, extinction ratio and bandwidth,” Optics express 18, 18067–18076 (2010). * [20] K. Hennessy, A. Badolato, A. Tamboli, P. M. Petroff, E. Hu, M. Atatüre, J. Dreiser, and A. Imamoǧlu, “Tuning photonic crystal nanocavity modes by wet chemical digital etching,” Applied Physics Letters 87 (2005). * [21] X. Lu, G. Moille, Q. Li, D. A. Westly, A. Singh, A. Rao, S. P. Yu, T. C. Briles, S. B. Papp, and K. Srinivasan, “Efficient telecom-to-visible spectral translation through ultralow power nonlinear nanophotonics,” Nature Photonics 13 (2019). * [22] A. H. Atabaki, A. A. Eftekhar, M. Askari, and A. Adibi, “Accurate post-fabrication trimming of ultra-compact resonators on silicon,” Optics express 21, 14139–14145 (2013). * [23] M. M. Milosevic, X. Chen, W. Cao, A. F. Runge, Y. Franz, C. G. Littlejohns, S. Mailis, A. C. Peacock, D. J. Thomson, and G. T. Reed, “Ion implantation in silicon for trimming the operating wavelength of ring resonators,” IEEE Journal of Selected Topics in Quantum Electronics 24, 1–7 (2018). * [24] J. Schrauwen, D. Van Thourhout, and R. Baets, “Trimming of silicon ring resonator by electron beam induced compaction and strain,” Optics express 16, 3738–3743 (2008). * [25] S. Spector, J. M. Knecht, and P. W. Juodawlkis, “Localized in situ cladding annealing for post-fabrication trimming of silicon photonic integrated circuits,” Optics express 24, 5996–6003 (2016). * [26] V. Biryukova, G. J. Sharp, C. Klitis, and M. 
Sorel, “Trimming of silicon-on-insulator ring-resonators via localized laser annealing,” Optics express 28, 11156–11164 (2020). * [27] L. Zhou, K. Okamoto, and S. Yoo, “Athermalizing and trimming of slotted silicon microring resonators with uv-sensitive pmma upper-cladding,” IEEE Photonics Technology Letters 21, 1175–1177 (2009). * [28] S. Prorok, A. Y. Petrov, M. Eich, J. Luo, and A. K.-Y. Jen, “Trimming of high-q-factor silicon ring resonators by electron beam bleaching,” Optics letters 37, 3114–3116 (2012). * [29] A. Grigorescu and C. Hagen, “Resists for sub-20-nm electron beam lithography with a focus on hsq: state of the art,” Nanotechnology 20, 292001 (2009). * [30] V. R. Manfrinato, F. E. Camino, A. Stein, L. Zhang, M. Lu, E. A. Stach, and C. T. Black, “Patterning si at the 1 nm length scale with aberration-corrected electron-beam lithography: Tuning of plasmonic properties by design,” Advanced Functional Materials 29, 1903429 (2019). * [31] C.-C. Yang and W.-C. Chen, “The structures and properties of hydrogen silsesquioxane (hsq) films produced by thermal curing,” Journal of Materials Chemistry 12, 1138–1141 (2002). * [32] H.-J. Lee, J. Goo, S.-H. Kim, J.-G. Hong, H.-D. Lee, H.-K. Kang, S.-I. Lee, and M. Y. Lee, “A new, low-thermal-budget planarization scheme for pre-metal dielectric using electron-beam cured hydrogen silsesquioxane in device,” Japanese Journal of Applied Physics 39, 3924 (2000). * [33] S. Choi, M. J. Word, V. Kumar, and I. Adesida, “Comparative study of thermally cured and electron-beam-exposed hydrogen silsesquioxane resists,” Journal of Vacuum Science & Technology B: Microelectronics and Nanometer Structures Processing, Measurement, and Phenomena 26, 1654–1659 (2008). * [34] E. Yablonovitch, D. M. Hwang, T. J. Gmitter, L. T. Florez, and J. P. Harbison, “Van der Waals bonding of GaAs epitaxial liftoff films onto arbitrary substrates,” Applied Physics Letters 56, 2419–2421 (1990). * [35] M. Gould, E. R. Schmidgall, S. Dadgostar, F. Hatami, and K.-M. C. Fu, “Efficient Extraction of Zero-Phonon-Line Photons from Single Nitrogen-Vacancy Centers in an Integrated GaP-on-Diamond Platform,” Physical Review Applied 6, 011001 (2016). * [36] E. R. Schmidgall, S. Chakravarthi, M. Gould, I. R. Christen, K. Hestroffer, F. Hatami, and K.-M. C. Fu, “Frequency Control of Single Quantum Emitters in Integrated Photonic Circuits,” Nano Letters 18, 1175–1179 (2018). * [37] D. Huang, A. Abulnaga, S. Welinski, M. Raha, J. D. Thompson, and N. P. de Leon, “Hybrid iii-v diamond photonic platform for quantum nodes based on neutral silicon vacancy centers in diamond,” Optics Express 29, 9174–9189 (2021). * [38] X. Lu, A. Rao, G. Moille, D. A. Westly, and K. Srinivasan, “Universal frequency engineering tool for microcavity nonlinear optics: multiple selective mode splitting of whispering-gallery resonances,” Photonics Research 8 (2020). * [39] Z.-F. Bi, A. W. Rodriguez, H. Hashemi, D. Duchesne, M. Loncar, K.-M. Wang, and S. G. Johnson, “High-efficiency second-harmonic generation in doubly-resonant $\chi$ (2) microring resonators,” Opt. Express 20, 7526–7543 (2012). * [40] R. Boyd, _Nonlinear Optics_ (Academic, 2008). * [41] D. Vermeulen, Y. D. Koninck, Y. Li, E. Lambert, W. Bogaerts, R. Baets, and G. Roelkens, “Reflectionless grating couplers for silicon-on-insulator photonic integrated circuits,” Opt. Express 20, 22278–22283 (2012).
EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH (CERN)

CERN-EP-2017-135
LHCb-PAPER-2017-015
10th July 2017

Study of prompt ${{D}^{0}}$ meson production in $p\mathrm{Pb}$ collisions at ${\sqrt{s_{\mathrm{NN}}}}=5\,\mathrm{TeV}$

The LHCb collaboration†
†Authors are listed at the end of this paper.

Production of prompt ${{D}^{0}}$ mesons is studied in proton-lead and lead-proton collisions recorded with the LHCb detector at the LHC. The data sample corresponds to an integrated luminosity of $1.58\pm 0.02\mbox{\,nb}^{-1}$ recorded at a nucleon-nucleon centre-of-mass energy of ${\sqrt{s_{\mathrm{NN}}}}=5\mathrm{\,TeV}$. Measurements of the differential cross-section, the forward-backward production ratio and the nuclear modification factor are reported using ${{D}^{0}}$ candidates with transverse momenta less than $10{\mathrm{\,GeV\!/}c}$ and rapidities in the ranges $1.5<y^{*}<4.0$ and $-5.0<y^{*}<-2.5$ in the nucleon-nucleon centre-of-mass system.

Submitted to JHEP

© CERN on behalf of the LHCb collaboration, licence CC-BY-4.0.

## 1 Introduction

Charm hadrons produced in hadronic and nuclear collisions are excellent probes to study nuclear matter in extreme conditions. The differential cross-sections of $c$-quark production in $pp$ or $p\bar{p}$ collisions have been calculated based on perturbative quantum chromodynamics (QCD) and collinear or $k_{\mathrm{T}}$ factorisation [1, 2, 3, 4, 5, 6]. These phenomenological models [7] are also able to predict the differential cross-section of $c$-quark production in nuclear collisions, including most of the commonly assumed “cold nuclear matter” (CNM) effects, which are related to differences in the parton flux and to other effects. Since heavy quarks are produced at a time scale of approximately 0.1 fm/$c$ after the collision, they are ideally suited to examine hot nuclear matter, the so-called “quark-gluon plasma” (QGP), by studying how they traverse this medium and interact with it right after their formation. These studies require a thorough understanding of the CNM effects, which can be investigated in systems where the formation of QGP is not expected. In addition, a precise quantification of CNM effects would significantly improve the understanding of charmonium and open-charm production by confirming or discarding the possibility that the suppression pattern in the production of quarkonium states, like $J/\psi$, at the SPS, RHIC and LHC is due to QGP formation [7]. The study of CNM effects is best performed in collisions of protons with heavy nuclei like lead, where the most studied CNM effects, such as gluon saturation [8, 9] and in-medium energy loss [10] in initial- and final-state radiation [11, 12], are more evident. Phenomenologically, collinear parton distributions are often used to describe the nuclear modification of the parton flux in the nucleus. The modification with respect to the free nucleon depends on the parton fractional longitudinal momentum, $x$, and on the atomic mass number of the nucleus, $A$ [13, 14]. In the low-$x$ region, down to $x\approx 10^{-5}-10^{-6}$, which is accessible at LHC energies, a stronger onset of gluon saturation [15, 16, 17, 18] is expected to play a major role. Its effect can be quantified by studying the production of ${{D}^{0}}$ mesons at low transverse momentum $p_{\mathrm{T}}$ [19], ideally down to zero $p_{\mathrm{T}}$.
In-medium energy loss occurs when the partons lose energy in the cold medium through both initial- and final-state radiation. CNM effects have been investigated in detail at the RHIC collider in $pp$ and $d$Au collisions [7, 20] at a nucleon-nucleon centre-of-mass energy of ${\sqrt{s_{\mathrm{NN}}}}=200\mathrm{\,GeV}$. Most recently, CNM effects were measured in $p$Pb collisions at the LHC for quarkonium and heavy-flavour production [21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36]. The ALICE experiment [28] studied $D$ meson production in $p$Pb collisions at ${\sqrt{s_{\mathrm{NN}}}}=5\mathrm{\,TeV}$ in the region $-0.96<y^{*}<0.04$ for $p_{\mathrm{T}}>2{\mathrm{\,GeV\!/}c}$, where $y^{*}$ is the rapidity of the $D$ meson defined in the centre-of-mass system of the colliding nucleons. Their results suggest that the suppression observed in PbPb collisions is due to hot nuclear matter effects, i.e. QGP formation. Results on leptons from semileptonic heavy-flavour decays at various rapidities are also available [37, 38, 39].

In this paper the measurement of the cross-section and of the nuclear modification factors of “prompt” $D^{0}$ mesons, i.e. those directly produced in proton-lead collisions and not coming from decays of $b$-hadrons, is presented. The measurement is performed at ${\sqrt{s_{\mathrm{NN}}}}=5$ TeV with the LHCb detector [40] at the LHC. Depending on the direction of the proton and $^{208}$Pb beams, and due to the different energies per nucleon in the two beams, the LHCb detector covers two different acceptance regions in the nucleon-nucleon rest frame:

* $1.5<y^{\ast}<4.0$, denoted as the “forward” beam configuration,
* $-5.0<y^{\ast}<-2.5$, denoted as the “backward” beam configuration,

where the rapidity $y^{\ast}$ is defined with respect to the direction of the proton beam. The measurement is performed in the range of ${D}^{0}$ transverse momentum $p_{\mathrm{T}}<10{\mathrm{\,GeV\!/}c}$, in both backward and forward collisions.

## 2 Detector and data samples

The LHCb detector [40, 41] is a single-arm forward spectrometer covering the pseudorapidity range $2<\eta<5$, designed for the study of particles containing $b$ or $c$ quarks. The detector includes a high-precision tracking system consisting of a silicon-strip vertex detector surrounding the $pp$ interaction region (VELO), a large-area silicon-strip detector (TT) located upstream of a dipole magnet with a bending power of about $4{\mathrm{\,Tm}}$, and three stations of silicon-strip detectors and straw drift tubes (OT) placed downstream of the magnet. The tracking system provides a measurement of the momentum, $p$, of charged particles with a relative uncertainty that varies from 0.5% at low momentum to 1.0% at $200{\mathrm{\,GeV\!/}c}$. The minimum distance of a track to a primary vertex (PV), the impact parameter, is measured with a resolution of $(15+29/p_{\mathrm{T}}){\,\upmu\mathrm{m}}$, where $p_{\mathrm{T}}$ is the component of the momentum transverse to the beam, in ${\mathrm{\,GeV\!/}c}$. Different types of charged hadrons are distinguished using information from two ring-imaging Cherenkov detectors. Photons, electrons and hadrons are identified by a calorimeter system consisting of scintillating-pad and preshower detectors, an electromagnetic calorimeter and a hadronic calorimeter.
Muons are identified by a system composed of alternating layers of iron and multiwire proportional chambers. The online event selection is performed by a trigger [42], which consists of a hardware stage, based on information from the calorimeter and muon systems, followed by a software stage, which applies a full event reconstruction.

The data sample used in this analysis consists of $p$Pb collisions collected in early 2013, corresponding to integrated luminosities of ($1.06\pm 0.02$)$\mbox{\,nb}^{-1}$ and ($0.52\pm 0.01$)$\mbox{\,nb}^{-1}$ for the forward and backward colliding beam configurations, respectively. The luminosity has been determined using the same method as in the LHCb measurement of $J/\psi$ production in $p$Pb collisions [43], with a precision of about 2%. The instantaneous luminosity during the period of data taking was around $5\times 10^{27}\,\mathrm{cm^{-2}\,s^{-1}}$, which led to an event rate three orders of magnitude lower than in nominal LHCb $pp$ operation. Therefore, the hardware trigger simply rejected empty events, while the next-level software trigger accepted all events with at least one track in the VELO. For the analyses presented below, simulated samples of $pp$ collisions at 8 TeV are used to determine the geometrical acceptance and reconstruction efficiencies. Effects due to the different track multiplicity distributions in the $pp$ and $p$Pb collision data and the effects of the asymmetric beam energies in $p$Pb collisions are taken into account as described later. In the simulation, $pp$ collisions are generated using Pythia [44, 45] with a specific LHCb configuration [46]. Decays of hadronic particles are described by EvtGen [47], in which final-state radiation is generated using Photos [48]. The interaction of the generated particles with the detector, and its response, are implemented using the Geant4 toolkit [49, 50, 51].

## 3 Cross-section determination

The double-differential cross-section for prompt ${{D}^{0}}$ production in a given $(p_{\mathrm{T}},y^{*})$ kinematic bin is defined as

$\frac{\mathrm{d}^{2}\sigma}{\mathrm{d}p_{\mathrm{T}}\,\mathrm{d}y^{*}}=\frac{N({{D}^{0}}\rightarrow K^{\mp}\pi^{\pm})}{\mathcal{L}\times\varepsilon_{\mathrm{tot}}\times\mathcal{B}({{D}^{0}}\rightarrow K^{\mp}\pi^{\pm})\times\Delta p_{\mathrm{T}}\times\Delta y^{*}},$ (1)

where $N({{D}^{0}}\rightarrow K^{\mp}\pi^{\pm})$ is the number of prompt ${D}^{0}$ signal candidates reconstructed through the ${{D}^{0}}\rightarrow K^{\mp}\pi^{\pm}$ decay channels (charge conjugation is implied throughout this document unless otherwise specified), $\varepsilon_{\mathrm{tot}}$ is the total ${D}^{0}$ detection efficiency, $\mathcal{L}$ is the integrated luminosity, $\mathcal{B}({{D}^{0}}\rightarrow K^{\mp}\pi^{\pm})=(3.94\pm 0.04)\%$ is the sum of the branching fractions of the decays ${{D}^{0}}\rightarrow K^{-}\pi^{+}$ and ${{D}^{0}}\rightarrow K^{+}\pi^{-}$ [52], $\Delta p_{\mathrm{T}}=1{\mathrm{\,GeV\!/}c}$ is the bin width of the ${D}^{0}$ transverse momentum, and $\Delta y^{*}=0.5$ is the bin width of the ${D}^{0}$ rapidity. The rapidity $y^{*}$ is defined in the nucleon-nucleon centre-of-mass frame, where the positive direction is that of the proton beam.
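As a numerical illustration of Eq. (1), the sketch below converts a signal yield in one bin into a double-differential cross-section; the yield and total efficiency are invented placeholders, while the branching fraction, bin widths and forward-sample luminosity are the values quoted above.

```python
# Eq. (1) in numerical form: cross-section in mb/(GeV/c) per unit y* from the
# prompt yield in one (pT, y*) bin. The yield and efficiency are placeholders.
def d2sigma(n_signal, eff_tot, lumi_nb, br=0.0394, dpt=1.0, dy=0.5):
    """Luminosity in nb^-1; result converted from nb to mb (1 mb = 1e6 nb)."""
    sigma_nb = n_signal / (lumi_nb * eff_tot * br * dpt * dy)
    return sigma_nb * 1.0e-6

# Example: 20000 prompt D0 -> K pi candidates with 3% total efficiency in the
# forward sample (L = 1.06 nb^-1) gives ~30 mb/(GeV/c), the right order of
# magnitude for the low-pT bins of Table 2.
print(d2sigma(n_signal=2.0e4, eff_tot=0.03, lumi_nb=1.06))
```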
The measurement is performed in the ${D}^{0}$ kinematic region defined by $p_{\mathrm{T}}<10{\mathrm{\,GeV\!/}c}$ and rapidities $1.5<y^{*}<4.0$ for the forward sample and $-5.0<y^{*}<-2.5$ for the backward sample. The total cross-section over a specific kinematic range is determined by integration of the double-differential cross-section. The nuclear modification factor, $R_{p{\rm Pb}}$, is the ratio of the ${D}^{0}$ production cross-section in forward or backward collisions to that in $pp$ at the same nucleon-nucleon centre-of-mass energy ${\sqrt{s_{\mathrm{NN}}}}$

$R_{p{\rm Pb}}(p_{\mathrm{T}},y^{*})\equiv\frac{1}{A}\frac{{\rm d}^{2}\sigma_{p{\rm Pb}}(p_{\mathrm{T}},y^{*})/{\rm d}p_{\mathrm{T}}{\rm d}y^{*}}{{\rm d}^{2}\sigma_{pp}(p_{\mathrm{T}},y^{*})/{\rm d}p_{\mathrm{T}}{\rm d}y^{*}},$ (2)

where $A$=208 is the atomic mass number of the lead nucleus. The forward-backward production ratio, $R_{\rm FB}$, is defined as

$R_{\rm FB}(p_{\mathrm{T}},y^{*})\equiv\frac{{\rm d}^{2}\sigma_{p{\rm Pb}}(p_{\mathrm{T}},+|y^{*}|)/{\rm d}p_{\mathrm{T}}{\rm d}y^{*}}{{\rm d}^{2}\sigma_{{\rm Pb}p}(p_{\mathrm{T}},-|y^{*}|)/{\rm d}p_{\mathrm{T}}{\rm d}y^{*}},$ (3)

where $\sigma_{p{\rm Pb}}$ and $\sigma_{{\rm Pb}p}$ indicate the cross-sections in the forward and backward configurations respectively, measured in a common rapidity range.

The ${D}^{0}$ candidates are selected according to the same requirements as used in the ${{D}^{0}}$ production cross-section measurements in $pp$ collisions at $\sqrt{s}=7\mathrm{\,TeV}$ [53] and $\sqrt{s}=13\mathrm{\,TeV}$ [54]. The kaon and pion tracks from the ${D}^{0}$ candidate and the vertex they form are both required to be of good quality. The requirements set on particle identification (PID) criteria are tighter than in $pp$ collisions to increase the signal-over-background ratio given the high detector occupancy observed in $p$Pb collisions.

Figure 1: The (left) $M(K^{\mp}\pi^{\pm})$ and (right) ${\log_{10}(\chi^{2}_{\text{$\rm IP$}}({{D}^{0}}))}$ distributions and the fit result for the inclusive ${{D}^{0}}$ mesons in the forward data sample in the kinematic range of $2<p_{\mathrm{T}}<3{\mathrm{\,GeV\!/}c}$ and $2.5<y^{*}<3.0$.

Figure 2: The (left) $M(K^{\mp}\pi^{\pm})$ and (right) ${\log_{10}(\chi^{2}_{\text{$\rm IP$}}({{D}^{0}}))}$ distributions and the fit result for the inclusive ${{D}^{0}}$ mesons in the backward data sample in the kinematic range of $2<p_{\mathrm{T}}<3{\mathrm{\,GeV\!/}c}$ and $-4.0<y^{*}<-3.5$.

The signal yield is determined from an extended unbinned maximum likelihood fit to the distribution of the invariant mass $M(K^{\mp}\pi^{\pm})$. The fraction of nonprompt ${{D}^{0}}$ mesons originating from $b$-hadron decays, called ${{D}^{0}}$-from-$b$ in the following, is determined from the ${\log_{10}(\chi^{2}_{\text{$\rm IP$}}({{D}^{0}}))}$ distribution, where $\chi^{2}_{\text{$\rm IP$}}(D^{0})$ is defined as the difference in vertex-fit $\chi^{2}$ of a given PV computed with and without the ${D}^{0}$ meson candidate [53, 54]. On average, prompt ${D}^{0}$ mesons have much smaller $\chi^{2}_{\text{$\rm IP$}}(D^{0})$ values than ${{D}^{0}}$-from-$b$. The fit is performed in two steps.
First, the invariant mass distributions are fitted to determine the ${{D}^{0}}$ meson inclusive yield and the number of background candidates, then the ${\log_{10}(\chi^{2}_{\text{$\rm IP$}}({{D}^{0}}))}$ fit is performed for candidates with mass within $\pm 20{\mathrm{\,Me\kern-1.00006ptV\\!/}c^{2}}$ around the fitted value of the ${D}^{0}$ mass. In the ${\log_{10}(\chi^{2}_{\text{$\rm IP$}}({{D}^{0}}))}$ fit, the number of background candidates is constrained to the value obtained from the invariant mass fit, scaled to the selected mass range. The distribution of ${\log_{10}(\chi^{2}_{\text{$\rm IP$}}({{D}^{0}}))}$ is shown in the right-hand plots of Figs. 1 and 2 for the forward and backward samples, respectively. The signal shape in the $M(K^{\mp}\pi^{\pm})$ distributions is described by a Crystal Ball (CB) function [55] plus a Gaussian. The mean is the same for both functions, and the ratios of widths and tail parameters are fixed following simulation studies, as in previous LHCb analyses [53, 54]. The width, mean, and signal yields are left free to vary. The background is described by a linear function. The candidates are fitted in the range 1792–1942${\mathrm{\,Me\kern-1.00006ptV\\!/}c^{2}}$. The invariant mass distributions in the inclusive forward and backward samples are shown in the left-hand plots of Figs. 1 and 2 respectively. The fits to the invariant mass and ${\log_{10}(\chi^{2}_{\text{$\rm IP$}}({{D}^{0}}))}$ distributions are performed independently in each bin of $(\mbox{$p_{\mathrm{T}}$},y^{*})$ of the ${{D}^{0}}$ meson. The contribution of the ${{D}^{0}}$-from-$b$ component increases with transverse momentum up to 10%. The ${\log_{10}(\chi^{2}_{\text{$\rm IP$}}({{D}^{0}}))}$ shapes for the prompt ${D}^{0}$ meson signal candidates are estimated using the simulation and modelled with a modified Gaussian function $f_{\mathrm{}}(x;\mu,\sigma,\epsilon,\rho_{\mathrm{L}},\rho_{\mathrm{R}})=\begin{cases}e^{\frac{\rho_{\mathrm{L}}^{2}}{2}+\rho_{\mathrm{L}}\frac{x-\mu}{(1-\epsilon)\sigma}}&x<\mu-(\rho_{\mathrm{L}}\sigma(1-\epsilon)),\\\ e^{-\left(\frac{x-\mu}{\sqrt{2}\sigma(1-\epsilon)}\right)^{2}}&\mu-(\rho_{\mathrm{L}}\sigma(1-\epsilon))\leq x<\mu,\\\ e^{-\left(\frac{x-\mu}{\sqrt{2}\sigma(1+\epsilon)}\right)^{2}}&\mu\leq x<\mu+(\rho_{\mathrm{R}}\sigma(1+\epsilon)),\\\ e^{\frac{\rho_{\mathrm{R}}^{2}}{2}-\rho_{\mathrm{R}}\frac{x-\mu}{(1+\epsilon)\sigma}}&x\geq\mu+(\rho_{\mathrm{R}}\sigma(1+\epsilon)),\end{cases}$ (4) where the values of $\epsilon$, $\rho_{\mathrm{L}}$ and $\rho_{\mathrm{R}}$ are fixed to the values obtained in the simulation and $\mu$ and $\sigma$ are free parameters. The ${\log_{10}(\chi^{2}_{\text{$\rm IP$}}({{D}^{0}}))}$ distribution for the ${{D}^{0}}$-from-$b$ component is described by a Gaussian function. The shape of the combinatorial background is estimated using the distribution of candidates with mass in the ranges 1797–1827${\mathrm{\,Me\kern-1.00006ptV\\!/}c^{2}}$ and 1907–1937${\mathrm{\,Me\kern-1.00006ptV\\!/}c^{2}}$, i.e. between 40 and 70 ${\mathrm{\,Me\kern-1.00006ptV\\!/}c^{2}}$ away from the observed ${{D}^{0}}$ meson mass. The total efficiency ${\varepsilon_{\mathrm{tot}}}$ in Eq. 1 includes the effects of geometrical acceptance and the efficiencies of the trigger, of the reconstruction and of the PID criteria used in the analysis. The analysis uses a minimum activity trigger, whose efficiency for events containing a ${D}^{0}$ meson is found to be 100%. 
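For concreteness, Eq. (4) can be transcribed directly as a piecewise function; in the sketch below the parameter values in the example call are arbitrary, whereas in the analysis $\epsilon$, $\rho_{\mathrm{L}}$ and $\rho_{\mathrm{R}}$ are fixed from simulation and $\mu$ and $\sigma$ are left free in the fit.

```python
# Direct implementation of the asymmetric Gaussian with exponential tails of
# Eq. (4), used to model the prompt log10(chi2_IP) shape. Example parameter
# values below are arbitrary placeholders.
import math

def modified_gaussian(x, mu, sigma, eps, rho_l, rho_r):
    s_l = sigma * (1.0 - eps)            # width of the left (narrow) core
    s_r = sigma * (1.0 + eps)            # width of the right (wide) core
    if x < mu - rho_l * s_l:             # left exponential tail
        return math.exp(rho_l ** 2 / 2.0 + rho_l * (x - mu) / s_l)
    if x < mu:                           # left half of the Gaussian core
        return math.exp(-((x - mu) / (math.sqrt(2.0) * s_l)) ** 2)
    if x < mu + rho_r * s_r:             # right half of the Gaussian core
        return math.exp(-((x - mu) / (math.sqrt(2.0) * s_r)) ** 2)
    return math.exp(rho_r ** 2 / 2.0 - rho_r * (x - mu) / s_r)   # right tail

print(modified_gaussian(-1.0, mu=-1.5, sigma=0.5, eps=0.1, rho_l=1.2, rho_r=1.5))
```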
The geometrical acceptance and reconstruction efficiencies are estimated using $pp$ simulated samples, validated with data. The difference between the distributions of the track multiplicity in the $p$Pb and $pp$ collisions is accounted for by studying the efficiency in bins of the track multiplicity, and weighting the efficiency according to the multiplicity distributions seen in $p$Pb and Pb$p$ data. The related systematic uncertainties are discussed in Sec. 4. The PID efficiency is estimated using a calibration sample of ${D}^{0}$ meson decays selected in data without PID requirements [41], collected in the same period as the $p\mathrm{Pb}$ sample used for the analysis. The PID selection efficiency is calculated by using the $K^{\mp}$ and $\pi^{\pm}$ single-track efficiencies from the calibration data, and averaging them according to the kinematic distributions observed in the simulation in each ${{D}^{0}}$ $(p_{\mathrm{T}},y^{*})$ bin.

## 4 Systematic uncertainties

The systematic uncertainties affecting the cross-sections are listed in Table 1.

Table 1: Summary of systematic and statistical uncertainties on the cross-section. Values are relative uncertainties in percent; the ranges indicate the variations between bins, with the uncertainty on average increasing with rapidity and momentum.

| Source | Forward (%) | Backward (%) |
|---|---|---|
| *Correlated between bins* | | |
| Invariant mass fits | 0.0 $-$ 5.0 | 0.0 $-$ 5.0 |
| $\log_{10}(\chi^{2}_{\text{$\rm IP$}}({{D}^{0}}))$ fits | 0.0 $-$ 5.0 | 0.0 $-$ 5.0 |
| Tracking efficiency | 3.0 | 5.0 |
| PID efficiency | 0.6 $-$ 17.0 | 0.6 $-$ 30.0 |
| Luminosity | 1.9 | 2.1 |
| $\mathcal{B}({{{D}^{0}}\rightarrow K^{\mp}\pi^{\pm}})$ | 1.0 | 1.0 |
| *Uncorrelated between bins* | | |
| Simulation sample size | 1.0 $-$ 4.0 | 1.0 $-$ 5.0 |
| Statistical uncertainty | 0.5 $-$ 20.0 | 1.0 $-$ 20.0 |

They are evaluated separately for the backward and forward samples unless otherwise specified. The systematic uncertainty associated with the determination of the signal yield has contributions from the signal and background models. The uncertainty associated with the modelling of the signal is studied by using alternative models, a single Gaussian or a sum of two Gaussian functions, to fit the invariant mass in the forward and backward samples. A variation of the parameters which are fixed in the default model, within the ranges indicated by the simulation, is also explored. The largest difference between the nominal and the alternative fits is taken as the uncertainty on the method, which results in a bin-dependent uncertainty not exceeding 5%. The effect due to background modelling in the invariant mass fit is studied by using an exponential as an alternative to the linear function. This uncertainty is found to be negligible. For the fit to the ${\log_{10}(\chi^{2}_{\text{$\rm IP$}}({{D}^{0}}))}$ distribution, the $\rho_{\mathrm{L}}$ and $\rho_{\mathrm{R}}$ parameters of the prompt signal component are varied within the ranges studied in simulation. The distribution of the combinatorial background is studied with candidates in different background mass regions. The shape of the distribution for the ${{D}^{0}}$-from-$b$ component is fixed when studying the variation of its fraction. The same procedure is followed to estimate the uncertainty on the ${\log_{10}(\chi^{2}_{\text{$\rm IP$}}({{D}^{0}}))}$ fits. The systematic uncertainty on the prompt signal yields, determined by the ${\log_{10}(\chi^{2}_{\text{$\rm IP$}}({{D}^{0}}))}$ fit, depends on the kinematic bin and is estimated to be less than 5% in all cases.
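As a minimal sketch of the multiplicity reweighting of the efficiencies described above, the efficiency evaluated in bins of track multiplicity in simulation can be averaged with the multiplicity distribution observed in $p$Pb data; all numbers below are invented placeholders, not analysis values.

```python
# Multiplicity reweighting of a binned efficiency (illustrative numbers only).
eff_vs_mult = [0.045, 0.040, 0.034, 0.027]   # efficiency per multiplicity bin (simulation)
pp_frac     = [0.40, 0.30, 0.20, 0.10]       # multiplicity fractions in pp simulation
ppb_frac    = [0.15, 0.25, 0.35, 0.25]       # multiplicity fractions observed in pPb data

eff_pp  = sum(e * w for e, w in zip(eff_vs_mult, pp_frac))    # raw simulation average
eff_ppb = sum(e * w for e, w in zip(eff_vs_mult, ppb_frac))   # efficiency appropriate for pPb

print(eff_pp, eff_ppb)
```

The spread obtained when repeating this average with different occupancy variables is what enters the tracking-efficiency uncertainty discussed next.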
The systematic uncertainty associated with the tracking efficiency has the components described in the following. The efficiency measurement is affected by the imperfect modelling of the tracking efficiency in simulation, which is corrected using a data-driven method [56], and the uncertainty of the correction is propagated into an uncertainty on the ${D}^{0}$ yield. The limited sizes of the simulated samples affect the precision of the efficiency, especially in the high-multiplicity region. Another source of uncertainty is introduced by the choice of the variable representing the detector occupancy, used to weight the distributions. The number of tracks and the numbers of hits in the VELO, in the TT and in the OT are all considered separately. The largest difference between the efficiencies when weighted by each of these variables and their average, which is the default, is taken as the systematic uncertainty. An additional uncertainty comes from the detector occupancy distribution estimated in backward and forward data. The effects are summed in quadrature, yielding a total uncertainty on the tracking efficiency of $3\%$ and $5\%$ for the forward and backward collision samples respectively.

The limited size of the calibration sample, the binning scheme and the signal fit model used to determine the $\pi$ and $K$ PID efficiencies from the calibration sample all contribute to the systematic uncertainty. The first is evaluated by estimating new sets of efficiencies through the variation of the $\pi$ and $K$ PID efficiencies in the calibration sample within their statistical uncertainties, the second by using alternative binning schemes and the third by varying the signal function used to determine the signal. The uncertainty is taken to be the quadratic sum of the three components. The total PID systematic uncertainty ranges between 1% and 30% depending on the kinematic region and the collision sample. The relative uncertainty associated with the luminosity measurement is approximately $2\%$ for both forward and backward samples. The relative uncertainty of the branching fraction ${\mathcal{B}}({{{D}^{0}}\rightarrow K^{\mp}\pi^{\pm}})$ is $1\%$ [52]. The limited size of the simulation sample introduces uncertainties on the efficiencies which are then propagated to the cross-section measurements; this effect is negligible in the central rapidity region but increases in the regions close to the boundaries of $p_{\mathrm{T}}$ and $y^{*}$, ranging between 1% and 5%.

## 5 Results

### 5.1 Production cross-sections

The measured values of the double-differential cross-section of prompt ${{D}^{0}}$ mesons in proton-lead collisions in the forward and backward regions as a function of $p_{\mathrm{T}}$ and $y^{*}$ are given in Table 2 and shown in Fig. 3.

Table 2: Double-differential cross-section $\frac{\mathrm{d}^{2}\sigma}{\mathrm{d}p_{\mathrm{T}}\mathrm{d}y^{*}}$ (mb/(${\mathrm{\,GeV\!/}c}$)) for prompt ${{D}^{0}}$ meson production as functions of $p_{\mathrm{T}}$ and $y^{*}$ in ${p\mathrm{Pb}}$ forward and backward data, respectively. The first uncertainty is statistical, the second is the component of the systematic uncertainty that is uncorrelated between bins and the third is the correlated component. In the regions with no entries the signal is not statistically significant.
| Forward (mb/(${\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$)) ---|--- $\mbox{$p_{\mathrm{T}}$}[{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}]$ | $1.5<y^{*}<2.0$ | $2.0<y^{*}<2.5$ | $2.5<y^{*}<3.0$ | $3.0<y^{*}<3.5$ | $3.5<y^{*}<4.0$ $[0,1]$ | $24.67\pm 0.32\pm 0.50\pm 3.45$ | $23.48\pm 0.17\pm 0.25\pm 1.70$ | $22.01\pm 0.16\pm 0.20\pm 1.16$ | $20.19\pm 0.21\pm 0.23\pm 1.02$ | $18.41\pm 0.36\pm 0.33\pm 1.09$ $[1,2]$ | $40.79\pm 0.34\pm 0.61\pm 3.83$ | $38.45\pm 0.19\pm 0.35\pm 2.19$ | $33.79\pm 0.18\pm 0.26\pm 1.50$ | $29.89\pm 0.22\pm 0.28\pm 1.31$ | $24.17\pm 0.34\pm 0.40\pm 1.63$ $[2,3]$ | $25.50\pm 0.20\pm 0.39\pm 1.76$ | $23.73\pm 0.11\pm 0.20\pm 1.08$ | $20.34\pm 0.10\pm 0.16\pm 0.82$ | $16.84\pm 0.11\pm 0.17\pm 0.69$ | $13.03\pm 0.17\pm 0.23\pm 0.78$ $[3,4]$ | $12.46\pm 0.11\pm 0.21\pm 0.63$ | $11.09\pm 0.06\pm 0.10\pm 0.47$ | $\phantom{0}9.31\pm 0.05\pm 0.09\pm 0.38$ | $\phantom{0}7.73\pm 0.06\pm 0.09\pm 0.36$ | $\phantom{0}5.22\pm 0.09\pm 0.11\pm 0.46$ $[4,5]$ | $\phantom{0}5.79\pm 0.06\pm 0.11\pm 0.27$ | $\phantom{0}5.23\pm 0.04\pm 0.06\pm 0.21$ | $\phantom{0}4.36\pm 0.03\pm 0.05\pm 0.17$ | $\phantom{0}3.32\pm 0.04\pm 0.05\pm 0.14$ | $\phantom{0}2.17\pm 0.07\pm 0.07\pm 0.45$ $[5,6]$ | $\phantom{0}2.94\pm 0.04\pm 0.07\pm 0.14$ | $\phantom{0}2.53\pm 0.03\pm 0.04\pm 0.11$ | $\phantom{0}2.04\pm 0.02\pm 0.03\pm 0.09$ | $\phantom{0}1.47\pm 0.02\pm 0.03\pm 0.10$ | $\phantom{0}0.93\pm 0.07\pm 0.07\pm 0.37$ $[6,7]$ | $\phantom{0}1.42\pm 0.02\pm 0.04\pm 0.08$ | $\phantom{0}1.26\pm 0.02\pm 0.02\pm 0.05$ | $\phantom{0}1.04\pm 0.02\pm 0.02\pm 0.06$ | $\phantom{0}0.72\pm 0.02\pm 0.02\pm 0.10$ | $\phantom{0}0.31\pm 0.08\pm 0.06\pm 0.20$ $[7,8]$ | $\phantom{0}0.84\pm 0.02\pm 0.03\pm 0.04$ | $\phantom{0}0.66\pm 0.01\pm 0.02\pm 0.04$ | $\phantom{0}0.53\pm 0.01\pm 0.01\pm 0.03$ | $\phantom{0}0.36\pm 0.02\pm 0.02\pm 0.09$ | $-$ $[8,9]$ | $\phantom{0}0.47\pm 0.01\pm 0.02\pm 0.02$ | $\phantom{0}0.38\pm 0.01\pm 0.01\pm 0.03$ | $\phantom{0}0.32\pm 0.01\pm 0.01\pm 0.03$ | $\phantom{0}0.17\pm 0.02\pm 0.02\pm 0.06$ | $-$ $[9,10]$ | $\phantom{0}0.31\pm 0.01\pm 0.02\pm 0.02$ | $\phantom{0}0.24\pm 0.01\pm 0.01\pm 0.02$ | $\phantom{0}0.17\pm 0.01\pm 0.01\pm 0.02$ | $\phantom{0}0.07\pm 0.01\pm 0.01\pm 0.03$ | $-$ | Backward (mb/(${\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$)) $\mbox{$p_{\mathrm{T}}$}[{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}]$ | $-3.0<y^{*}<-2.5$ | $-3.5<y^{*}<-3.0$ | $-4.0<y^{*}<-3.5$ | $-4.5<y^{*}<-4.0$ | $-5.0<y^{*}<-4.5$ $[0,1]$ | $27.75\pm 0.48\pm 0.47\pm 5.78$ | $29.56\pm 0.33\pm 0.29\pm 2.98$ | $28.47\pm 0.38\pm 0.28\pm 1.98$ | $25.03\pm 0.58\pm 0.28\pm 1.78$ | $20.85\pm 1.08\pm 0.43\pm 2.21$ $[1,2]$ | $46.66\pm 0.51\pm 0.69\pm 6.13$ | $46.10\pm 0.35\pm 0.38\pm 3.40$ | $40.35\pm 0.38\pm 0.33\pm 2.61$ | $35.82\pm 0.56\pm 0.38\pm 2.54$ | $27.00\pm 1.01\pm 0.45\pm 2.81$ $[2,3]$ | $28.55\pm 0.29\pm 0.41\pm 2.41$ | $25.90\pm 0.19\pm 0.22\pm 1.62$ | $21.47\pm 0.18\pm 0.17\pm 1.26$ | $17.13\pm 0.23\pm 0.19\pm 1.09$ | $11.82\pm 0.45\pm 0.23\pm 0.97$ $[3,4]$ | $12.73\pm 0.15\pm 0.18\pm 0.93$ | $10.98\pm 0.10\pm 0.10\pm 0.64$ | $\phantom{0}8.75\pm 0.09\pm 0.08\pm 0.50$ | $\phantom{0}6.33\pm 0.10\pm 0.08\pm 0.45$ | $\phantom{0}3.61\pm 0.17\pm 0.09\pm 0.55$ $[4,5]$ | $\phantom{0}5.60\pm 0.08\pm 0.09\pm 0.38$ | $\phantom{0}4.59\pm 0.05\pm 0.05\pm 0.26$ | $\phantom{0}3.36\pm 0.05\pm 0.04\pm 0.19$ | $\phantom{0}2.21\pm 0.05\pm 0.03\pm 0.14$ | $\phantom{0}1.47\pm 0.13\pm 0.06\pm 0.43$ $[5,6]$ | $\phantom{0}2.53\pm 0.05\pm 0.05\pm 0.16$ | $\phantom{0}1.93\pm 0.03\pm 0.03\pm 0.11$ | $\phantom{0}1.38\pm 0.03\pm 0.02\pm 0.08$ | 
$\phantom{0}0.82\pm 0.03\pm 0.02\pm 0.10$ | $\phantom{0}0.57\pm 0.14\pm 0.06\pm 0.30$ $[6,7]$ | $\phantom{0}1.32\pm 0.03\pm 0.03\pm 0.08$ | $\phantom{0}0.92\pm 0.02\pm 0.02\pm 0.06$ | $\phantom{0}0.62\pm 0.02\pm 0.01\pm 0.04$ | $\phantom{0}0.28\pm 0.02\pm 0.01\pm 0.07$ | $-$ $[7,8]$ | $\phantom{0}0.65\pm 0.02\pm 0.02\pm 0.04$ | $\phantom{0}0.48\pm 0.02\pm 0.01\pm 0.04$ | $\phantom{0}0.31\pm 0.01\pm 0.01\pm 0.04$ | $\phantom{0}0.19\pm 0.03\pm 0.01\pm 0.08$ | $-$ $[8,9]$ | $\phantom{0}0.33\pm 0.02\pm 0.01\pm 0.02$ | $\phantom{0}0.24\pm 0.01\pm 0.01\pm 0.02$ | $\phantom{0}0.14\pm 0.01\pm 0.01\pm 0.03$ | $\phantom{0}0.11\pm 0.03\pm 0.01\pm 0.08$ | $-$ $[9,10]$ | $\phantom{0}0.22\pm 0.01\pm 0.01\pm 0.02$ | $\phantom{0}0.13\pm 0.01\pm 0.01\pm 0.01$ | $\phantom{0}0.08\pm 0.01\pm 0.00\pm 0.02$ | $-$ | $-$ The one-dimensional differential prompt ${{D}^{0}}$ meson cross-sections as a function of $p_{\mathrm{T}}$ or $y^{*}$ are reported in Tables 5.1 and 4, and are displayed in Fig. 4. The measurements are also shown as a function of $p_{\mathrm{T}}$ integrated222 The integration over $y^{\ast}$ is performed up to $|y^{\ast}|$=3.5 for $\mbox{$p_{\mathrm{T}}$}>6{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$, neglecting the bin $3.5<|y^{*}|<4.0$ since it is not populated in the forward sample. This applies for the integrated cross-sections presented in this subsection, in Tables 5.1, 5.1 and 5.3 and in Figs. 4, 5, 8 and 9. over $y^{*}$ in the common rapidity range $2.5<|y^{*}|<4.0$. Figure 3: Double-differential cross-section $\frac{\mathrm{d}^{2}\sigma}{\mathrm{d}p_{\mathrm{T}}\mathrm{d}y^{*}}$ (mb/(${\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$)) of prompt ${{D}^{0}}$ meson production in ${p\mathrm{Pb}}$ collisions in the (left) forward and (right) backward collision samples. The uncertainty is the quadratic sum of the statistical and systematic components. Figure 4: Differential cross-section of prompt ${{D}^{0}}$ meson production in ${p\mathrm{Pb}}$ collisions as a function of (left) $p_{\mathrm{T}}$ ($\frac{\mathrm{d}\sigma}{\mathrm{d}p_{\rm T}}$) and (right) $y^{*}$ ($\frac{\mathrm{d}\sigma}{\mathrm{d}y^{*}}$) in the forward and backward collision samples. The uncertainty is the quadratic sum of the statistical and systematic components. The measurements are compared with theoretical predictions including different nuclear parton distribution functions as explained in the text. Table 3: Measured differential cross-section $\frac{\mathrm{d}\sigma}{\mathrm{d}p_{\mathrm{T}}}$ (mb/(${\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$)) for prompt ${{D}^{0}}$ meson production as a function of $p_{\mathrm{T}}$ in ${p\mathrm{Pb}}$ forward and backward data, respectively. The first uncertainty is statistical, the second is the component of the systematic uncertainty that is uncorrelated between bins and the third is the correlated component. The results in the last two columns are integrated over the common rapidity range $2.5<|y^{*}|<4.0$ for $\mbox{$p_{\mathrm{T}}$}<6{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ and over $2.5<|y^{*}|<3.5$ for $6<\mbox{$p_{\mathrm{T}}$}<10{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$. 
| Forward (mb/(${\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$)) ---|--- $\mbox{$p_{\mathrm{T}}$}[{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}]$ | $1.5<y^{*}<4.0$ | $2.5<y^{*}<4.0$ | $2.5<y^{*}<3.5$ $[0,1]$ | $54.38\pm 0.29\pm 0.36\pm 3.96$ | $30.31\pm 0.22\pm 0.22\pm 1.59$ | $-$ $[1,2]$ | $83.54\pm 0.30\pm 0.45\pm 5.01$ | $43.92\pm 0.22\pm 0.28\pm 2.17$ | $-$ $[2,3]$ | $49.72\pm 0.16\pm 0.27\pm 2.45$ | $25.11\pm 0.11\pm 0.16\pm 1.11$ | $-$ $[3,4]$ | $22.91\pm 0.09\pm 0.14\pm 1.10$ | $11.13\pm 0.06\pm 0.08\pm 0.55$ | $-$ $[4,5]$ | $10.43\pm 0.06\pm 0.08\pm 0.54$ | $\phantom{0}4.92\pm 0.04\pm 0.05\pm 0.32$ | $-$ $[5,6]$ | $\phantom{0}4.95\pm 0.05\pm 0.06\pm 0.35$ | $\phantom{0}2.21\pm 0.04\pm 0.04\pm 0.26$ | $-$ $[6,7]$ | $\phantom{0}2.37\pm 0.05\pm 0.04\pm 0.21$ | $-$ | $\phantom{0}0.88\pm 0.01\pm 0.01\pm 0.07$ $[7,8]$ | $\phantom{0}1.20\pm 0.02\pm 0.02\pm 0.09$ | $-$ | $\phantom{0}0.45\pm 0.01\pm 0.01\pm 0.06$ $[8,9]$ | $\phantom{0}0.67\pm 0.01\pm 0.01\pm 0.06$ | $-$ | $\phantom{0}0.24\pm 0.01\pm 0.01\pm 0.04$ $[9,10]$ | $\phantom{0}0.39\pm 0.01\pm 0.01\pm 0.04$ | $-$ | $\phantom{0}0.08\pm 0.00\pm 0.00\pm 0.01$ | Backward (mb/(${\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$)) $\mbox{$p_{\mathrm{T}}$}[{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}]$ | $-5.0<y^{*}<-2.5$ | $-4.0<y^{*}<-2.5$ | $-3.5<y^{*}<-2.5$ $[0,1]$ | $65.83\pm 0.70\pm 0.40\pm 6.85$ | $42.89\pm 0.35\pm 0.31\pm 5.15$ | $-$ $[1,2]$ | $97.97\pm 0.68\pm 0.52\pm 8.30$ | $66.56\pm 0.36\pm 0.43\pm 5.80$ | $-$ $[2,3]$ | $52.43\pm 0.32\pm 0.29\pm 3.57$ | $37.96\pm 0.20\pm 0.25\pm 2.56$ | $-$ $[3,4]$ | $21.21\pm 0.14\pm 0.13\pm 1.45$ | $16.23\pm 0.10\pm 0.11\pm 1.01$ | $-$ $[4,5]$ | $\phantom{0}8.62\pm 0.09\pm 0.06\pm 0.62$ | $\phantom{0}6.78\pm 0.05\pm 0.05\pm 0.41$ | $-$ $[5,6]$ | $\phantom{0}3.61\pm 0.08\pm 0.04\pm 0.33$ | $\phantom{0}2.92\pm 0.03\pm 0.03\pm 0.18$ | $-$ $[6,7]$ | $\phantom{0}1.57\pm 0.03\pm 0.02\pm 0.12$ | $-$ | $\phantom{0}1.12\pm 0.02\pm 0.02\pm 0.07$ $[7,8]$ | $\phantom{0}0.81\pm 0.02\pm 0.01\pm 0.09$ | $-$ | $\phantom{0}0.57\pm 0.01\pm 0.01\pm 0.04$ $[8,9]$ | $\phantom{0}0.41\pm 0.02\pm 0.01\pm 0.07$ | $-$ | $\phantom{0}0.29\pm 0.01\pm 0.01\pm 0.02$ $[9,10]$ | $\phantom{0}0.22\pm 0.01\pm 0.01\pm 0.02$ | $-$ | $\phantom{0}0.11\pm 0.01\pm 0.01\pm 0.01$ Table 4: Differential cross-section $\frac{\mathrm{d}\sigma}{\mathrm{d}y^{*}}$ (mb) for prompt ${{D}^{0}}$ meson production as a function of $|y^{*}|$ in ${p\mathrm{Pb}}$ forward and backward data, respectively. The first uncertainty is statistical, the second is the component of the systematic uncertainty that is uncorrelated between bins and the third is the correlated component. 
Forward (mb) --- $y^{*}$ | $0<\mbox{$p_{\mathrm{T}}$}<10{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ $[1.5,2.0]$ | $115.19\pm 0.53\pm 0.91\pm 9.99$ $[2.0,2.5]$ | $107.05\pm 0.29\pm 0.50\pm 5.73$ $[2.5,3.0]$ | $\phantom{0}93.90\pm 0.27\pm 0.38\pm 4.14$ $[3.0,3.5]$ | $\phantom{0}80.76\pm 0.33\pm 0.42\pm 3.71$ $[3.5,4.0]$ | $\phantom{0}64.24\pm 0.55\pm 0.58\pm 4.79$ Backward (mb) $y^{*}$ | $0<\mbox{$p_{\mathrm{T}}$}<10{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ $[-3.0,-2.5]$ | $126.35\pm 0.78\pm 0.95\pm 15.54$ $[-3.5,-3.0]$ | $120.84\pm 0.53\pm 0.53\pm\phantom{0}8.89$ $[-4.0,-3.5]$ | $104.93\pm 0.58\pm 0.47\pm\phantom{0}6.66$ $[-4.5,-4.0]$ | $\phantom{0}87.92\pm 0.85\pm 0.52\pm\phantom{0}6.13$ $[-5.0,-4.5]$ | $\phantom{0}65.32\pm 1.57\pm 0.68\pm\phantom{0}7.07$ The integrated cross-sections of prompt ${{D}^{0}}$ meson production in ${p\mathrm{Pb}}$ forward data in the full and common fiducial regions are $\sigma_{\mathrm{forward}}(\mbox{$p_{\mathrm{T}}$}<10{\mathrm{\,Ge\kern-1.00006ptV\\!/}c},1.5<y^{*}<4.0)=230.6\pm 0.5\pm 13.0\mathrm{\,mb},$ $\sigma_{\mathrm{forward}}(\mbox{$p_{\mathrm{T}}$}<10{\mathrm{\,Ge\kern-1.00006ptV\\!/}c},2.5<y^{*}<4.0)=119.1\pm 0.3\pm\phantom{0}5.6\mathrm{\,mb}.$ The integrated cross-sections of prompt ${{D}^{0}}$ meson production in Pb$p$ backward data in the two fiducial regions are $\sigma_{\mathrm{backward}}(\mbox{$p_{\mathrm{T}}$}<10{\mathrm{\,Ge\kern-1.00006ptV\\!/}c},-2.5<y^{*}<-5.0)=252.7\pm 1.0\pm 20.0\mathrm{\,mb},$ $\sigma_{\mathrm{backward}}(\mbox{$p_{\mathrm{T}}$}<10{\mathrm{\,Ge\kern-1.00006ptV\\!/}c},-2.5<y^{*}<-4.0)=175.5\pm 0.6\pm 14.4\mathrm{\,mb},$ where the first uncertainties are statistical and the second systematic. The cross-sections as a function of $p_{\mathrm{T}}$ and $y^{*}$, shown in Fig. 4, are compared with calculations [57, 58, 59] validated with results of heavy-flavour production cross-section in $pp$ collisions. The nuclear effects are considered by using three different sets of nuclear parton distribution functions (nPDFs), the leading-order EPS09 (EPS09LO) [60], the next-to-leading order EPS09 (EPS09NLO) [60] and nCTEQ15 [61]. The free nucleon PDF CT10NLO [62] is also used as a reference for the cross-section predictions in $pp$ collisions. Within large theoretical uncertainties, all three sets of nPDFs can give descriptions consistent with the LHCb data, although a discrepancy is observed in the low $p_{\mathrm{T}}$ region between the measurements and the nCTEQ15 predictions. ### 5.2 Nuclear modification factors The value of the ${{D}^{0}}$ meson production cross-section in $pp$ collisions at $5\mathrm{\,Te\kern-1.00006ptV}$, needed for the measurement of the nuclear modification factor $R_{{p\mathrm{Pb}}}$, is taken from the LHCb measurement [63]. Correlations between the uncertainties of quantities that are common to both measurements are taken into account. The nuclear modification factor for prompt ${{D}^{0}}$ meson production is shown in Fig. 5 in bins of $p_{\mathrm{T}}$ and Fig. 6 in bins of $y^{*}$. The nuclear modification factors are calculated as a function of $p_{\mathrm{T}}$ integrated over $y^{*}$ in the ranges described in Fig. 5 for both forward and backward samples. The values of $R_{{p\mathrm{Pb}}}$, summarised in Tables 5.1 and 5.1, show a slight increase as a function of $p_{\mathrm{T}}$, suggesting that the suppression may decrease with increasing transverse momentum. 
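As a minimal numerical sketch of Eq. (2), the nuclear modification factor in one bin can be formed from the $p$Pb and $pp$ cross-sections with their uncorrelated uncertainties added in quadrature; the input values below are placeholders, and the full analysis additionally accounts for correlations with the $pp$ reference measurement.

```python
# Eq. (2) with placeholder inputs: R_pPb = sigma_pPb / (A * sigma_pp) in the
# same (pT, y*) bin, with uncorrelated relative uncertainties combined in
# quadrature. Not the measured values.
import math

A = 208  # atomic mass number of lead

def r_ppb(sigma_ppb, err_ppb, sigma_pp, err_pp):
    r = sigma_ppb / (A * sigma_pp)
    rel = math.sqrt((err_ppb / sigma_ppb) ** 2 + (err_pp / sigma_pp) ** 2)
    return r, r * rel

print(r_ppb(sigma_ppb=25.0, err_ppb=1.2, sigma_pp=0.18, err_pp=0.012))   # ~0.67 +- 0.05
```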
Figure 5: Nuclear modification factor $R_{{p\mathrm{Pb}}}$ as a function of $p_{\mathrm{T}}$ for prompt ${{D}^{0}}$ meson production in the (left) backward data and (right) forward data, integrated over the common rapidity range $2.5<|y^{*}|<4.0$ for $\mbox{$p_{\mathrm{T}}$}<6{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ and over $2.5<|y^{*}|<3.5$ for $6<\mbox{$p_{\mathrm{T}}$}<10{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$. The uncertainty is the quadratic sum of the statistical and systematic components. The CGC predictions are only available for the forward region. Figure 6: Nuclear modification factor $R_{{p\mathrm{Pb}}}$ as a function of $y^{*}$ for prompt ${{D}^{0}}$ meson production, integrated up to $\mbox{$p_{\mathrm{T}}$}=10{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$. The uncertainty is the quadratic sum of the statistical and systematic components. The measurements are compared with calculations using EPS09LO, EPSNLO and nCTEQ15 nPDFs [58, 57, 59]. For the results in the backward configuration, all three predictions show reasonable agreement with each other and with LHCb data. In the forward configuration, nCTEQ15 and EPS09LO show better agreement with the data than EPS09NLO. Calculations [64] using CTEQ6M [65] nucleon PDF and EPS09NLO nPDF give results for $R_{{p\mathrm{Pb}}}$ that are similar to a combination of CT10NLO and EPS09NLO. The nuclear modification factors for prompt $D^{0}$ are also compared with those for prompt ${{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}}$ [43] in Fig. 6 as a function of $p_{\mathrm{T}}$ integrated over rapidity, and they are found to be consistent. This is the first measurement of $R_{{p\mathrm{Pb}}}$ in this kinematic range. The ratios of the nuclear modification factors of ${{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}}$ and ${\psi{(2S)}}$ mesons to ${{D}^{0}}$ mesons as a function of rapidity are shown in Fig. 7 where a different suppression between the two charmonium states can be observed. In Figs. 5 and 6 the measurements are also compared with calculations in the colour glass condensate framework (CGC) [66], which includes the effect of the saturation of partons at small $x$. The CGC model is found to be able to describe the trend of prompt ${{D}^{0}}$ meson nuclear modifications as a function of $p_{\mathrm{T}}$ and of rapidity. The uncertainty band for this model is much smaller than for the nuclear PDF calculations, since it only contains the variation of charm quark masses and factorisation scale which largely cancel in this ratio of cross-sections. Another CGC framework calculation gives similar results for nuclear modifications of charm production [67]. In the context of $p$Pb collisions, recent measurements have shown that long-range collective effects, which have previously been observed in relatively large nucleus-nucleus collision systems, may also be present in smaller collision systems at large charged particle multiplicities [68, 69, 70, 71]. If these effects are due to the creation of a hydrodynamic system, momentum anisotropies at the quark level can arise, which may modify the final distribution of observed heavy-quark hadrons [72]. Since the measurements in this analysis do not consider a classification in charged particle multiplicity, potential modifications in high-multiplicity events are weakened as the presented observables are integrated over charged particle multiplicity. 
Table 6: Nuclear modification factor $R_{{p\mathrm{Pb}}}$ for prompt ${{D}^{0}}$ meson production in different $y^{*}$ ranges, integrated up to $p_{\mathrm{T}}=10\,\mathrm{GeV}/c$. The first uncertainty is statistical and the second systematic.

### 5.3 Forward-backward ratio

In the forward-backward production ratio $R_{\mathrm{FB}}$ the uncertainties common to the forward and backward measurements largely cancel. The uncertainties on the branching fraction, signal yield and tracking are considered fully correlated, while the PID uncertainty is considered 90% correlated, since it is a mixture of a statistical uncertainty (uncorrelated) and uncertainties due to the binning scheme and the yield determination (correlated). All other uncertainties are treated as uncorrelated. The measured $R_{\mathrm{FB}}$ values are shown in Fig. 8, as a function of $p_{\mathrm{T}}$ integrated over the range $2.5<|y^{*}|<4.0$, and as a function of $y^{*}$ integrated up to $p_{\mathrm{T}}=10\,\mathrm{GeV}/c$. The $R_{\mathrm{FB}}$ values in the different kinematic bins are also summarised in Table 7. Good agreement is found between the measurements and theoretical predictions using the EPS09LO and nCTEQ15 nPDFs.

Table 7: Forward-backward ratio $R_{\mathrm{FB}}$ for prompt ${{D}^{0}}$ meson production in different $p_{\mathrm{T}}$ ranges, integrated over the common rapidity range $2.5<|y^{*}|<4.0$ for $p_{\mathrm{T}}<6\,\mathrm{GeV}/c$ and over $2.5<|y^{*}|<3.5$ for $6<p_{\mathrm{T}}<10\,\mathrm{GeV}/c$, and in different $y^{*}$ ranges integrated up to $p_{\mathrm{T}}=10\,\mathrm{GeV}/c$. The first uncertainty is the statistical and the second the systematic component.

In the common kinematic range $p_{\mathrm{T}}<10\,\mathrm{GeV}/c$, $2.5<|y^{*}|<4.0$, the forward-backward ratio is $R_{\mathrm{FB}}=0.71\pm 0.01\,(\mathrm{stat})\pm 0.04\,(\mathrm{syst})$, indicating a significant asymmetry. The predictions for $R_{\mathrm{FB}}$ integrated over the same kinematic range are $0.71^{+0.21}_{-0.24}$ for EPS09 at leading order, $0.81^{+0.10}_{-0.09}$ for EPS09 at next-to-leading order and $0.69^{+0.07}_{-0.07}$ for the nCTEQ15 nPDF set, all in good agreement with the measured value. The forward-backward production ratio increases slightly with increasing $p_{\mathrm{T}}$ and decreases strongly with increasing rapidity $|y^{*}|$. This behaviour is consistent with the expectations from the QCD calculations. In order to compare the production of open charm and charmonium, the ratio of $R_{\mathrm{FB}}$ for prompt $J/\psi$ mesons to $R_{\mathrm{FB}}$ for prompt ${{D}^{0}}$ mesons is shown in Fig. 9. The measurement shows that $R_{\mathrm{FB}}$ is the same for prompt ${{D}^{0}}$ and prompt $J/\psi$ mesons within the uncertainties in the LHCb kinematic range.

Figure 7: Ratio of the nuclear modification factors $R_{{p\mathrm{Pb}}}$ of $J/\psi$ and $\psi(2S)$ mesons to those of ${{D}^{0}}$ mesons in bins of rapidity, integrated up to $p_{\mathrm{T}}=10\,\mathrm{GeV}/c$ in the common rapidity range $2.5<|y^{*}|<4.0$. The uncertainty is the quadratic sum of the statistical and systematic components.
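The correlation treatment described above can be made concrete with a schematic propagation for the ratio $R_{\mathrm{FB}}=\sigma_{\mathrm{forward}}/\sigma_{\mathrm{backward}}$: fully correlated relative uncertainties cancel in the ratio, while uncorrelated parts add in quadrature. The per-component sizes below are placeholders, not the values used in the paper; only the correlation coefficients follow the assumptions stated above.

```python
import math

# (relative syst. on forward, relative syst. on backward, correlation coefficient)
components = {
    "branching_fraction": (0.012, 0.012, 1.0),   # HYPOTHETICAL sizes, fully correlated
    "signal_yield":       (0.010, 0.010, 1.0),
    "tracking":           (0.030, 0.030, 1.0),
    "PID":                (0.020, 0.025, 0.9),   # 90% correlated, as stated above
    "other":              (0.015, 0.020, 0.0),   # uncorrelated
}

rel2 = 0.0
for a_f, a_b, rho in components.values():
    # relative variance of a ratio with partially correlated numerator/denominator
    rel2 += a_f ** 2 + a_b ** 2 - 2.0 * rho * a_f * a_b

r_fb = 0.71                                       # integrated value quoted above
print(r_fb, r_fb * math.sqrt(rel2))               # ratio and its systematic uncertainty
```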
Figure 8: Forward-backward ratio $R_{\mathrm{FB}}$ for prompt ${{D}^{0}}$ meson production (left) as a function of $p_{\mathrm{T}}$ integrated over the common rapidity range $2.5<|y^{*}|<4.0$ for $\mbox{$p_{\mathrm{T}}$}<6{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ and over $2.5<|y^{*}|<3.5$ for $6<\mbox{$p_{\mathrm{T}}$}<10{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$; (right) as a function of $y^{*}$ integrated up to $\mbox{$p_{\mathrm{T}}$}=10{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$. The uncertainty is the quadratic sum of the statistical and systematic components. Figure 9: Relative forward-backward production ratio $R_{\mathrm{FB}}$ for prompt ${{D}^{0}}$ mesons over that for prompt ${{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}}$ mesons (left) as a function of $p_{\mathrm{T}}$ integrated over the common rapidity range $2.5<|y^{*}|<4.0$ for $\mbox{$p_{\mathrm{T}}$}<6{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$ and over $2.5<|y^{*}|<3.5$ for $6<\mbox{$p_{\mathrm{T}}$}<10{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$; (right) as a function of $y^{*}$ integrated up to $\mbox{$p_{\mathrm{T}}$}=10{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$. The red inner bars in the uncertainty represent the statistical uncertainty and the black outer bars the quadratic sum of the statistical and systematic components. ## 6 Conclusion The prompt ${{D}^{0}}$ production cross-section has been measured with LHCb proton-lead collision data at ${\sqrt{s_{\mathrm{NN}}}}=5\mathrm{\,Te\kern-1.00006ptV}$. The measurement is performed in the range of ${D}^{0}$ transverse momentum $\mbox{$p_{\mathrm{T}}$}<10{\mathrm{\,Ge\kern-1.00006ptV\\!/}c}$, in both backward and forward collisions covering the ranges $1.5<y^{\ast}<4.0$ and $-5.0<y^{\ast}<-2.5$. This is the first measurement of this kind down to zero transverse momentum of the ${D}^{0}$ meson. Nuclear modification factors and forward-backward production ratios are also measured in the same kinematic range. Both observables are excellent probes to constrain the PDF uncertainties, which are currently significantly larger than the uncertainties on the experimental results. A large asymmetry in the forward-backward production is observed, which is consistent with the expectations from nuclear parton distribution functions, and colour glass condensate calculations for the forward rapidity part. The results are found to be consistent with the theoretical predictions considered. ## Acknowledgements We would like to thank Andrea Dainese, Bertrand Ducloué, Jean-Philippe Lansberg and Huasheng Shao for providing the theoretical predictions for our measurements. We express our gratitude to our colleagues in the CERN accelerator departments for the excellent performance of the LHC. We thank the technical and administrative staff at the LHCb institutes. We acknowledge support from CERN and from the national agencies: CAPES, CNPq, FAPERJ and FINEP (Brazil); MOST and NSFC (China); CNRS/IN2P3 (France); BMBF, DFG and MPG (Germany); INFN (Italy); NWO (The Netherlands); MNiSW and NCN (Poland); MEN/IFA (Romania); MinES and FASO (Russia); MinECo (Spain); SNSF and SER (Switzerland); NASU (Ukraine); STFC (United Kingdom); NSF (USA). We acknowledge the computing resources that are provided by CERN, IN2P3 (France), KIT and DESY (Germany), INFN (Italy), SURF (The Netherlands), PIC (Spain), GridPP (United Kingdom), RRCKI and Yandex LLC (Russia), CSCS (Switzerland), IFIN-HH (Romania), CBPF (Brazil), PL-GRID (Poland) and OSC (USA). 
We are indebted to the communities behind the multiple open source software packages on which we depend. Individual groups or members have received support from AvH Foundation (Germany), EPLANET, Marie Skłodowska-Curie Actions and ERC (European Union), Conseil Général de Haute-Savoie, Labex ENIGMASS and OCEVU, Région Auvergne (France), RFBR and Yandex LLC (Russia), GVA, XuntaGal and GENCAT (Spain), Herchel Smith Fund, The Royal Society, Royal Commission for the Exhibition of 1851 and the Leverhulme Trust (United Kingdom). ## References * [1] B. A. Kniehl, G. Kramer, I. Schienbein, and H. Spiesberger, _Reconciling open charm production at the Fermilab Tevatron with QCD_ , Phys. Rev. Lett. 96 (2006) 012001, arXiv:hep-ph/0508129 * [2] B. A. Kniehl, G. Kramer, I. Schienbein, and H. Spiesberger, _Inclusive charmed-meson production at the CERN LHC_ , Eur. Phys. J. C72 (2012) 2082, arXiv:1202.0439 * [3] M. Cacciari, M. Greco, and P. Nason, _The $p_{T}$ spectrum in heavy flavor hadroproduction_, JHEP 05 (1998) 007, arXiv:hep-ph/9803400 * [4] M. Cacciari and P. Nason, _Charm cross-sections for the Tevatron Run II_ , JHEP 09 (2003) 006, arXiv:hep-ph/0306212 * [5] M. Cacciari et al., _Theoretical predictions for charm and bottom production at the LHC_ , JHEP 10 (2012) 137, arXiv:1205.6344 * [6] R. Maciula and A. Szczurek, _Open charm production at the LHC: $k_{t}$-factorization approach_, Phys. Rev. D87 (2013) 094022, arXiv:1301.3033 * [7] A. Andronic et al., _Heavy-flavour and quarkonium production in the LHC era: from proton-proton to heavy-ion collisions_ , Eur. Phys. J. C76 (2016) 107, arXiv:1506.03981 * [8] D. Kharzeev and K. Tuchin, _Signatures of the color glass condensate in $J/\psi$ production off nuclear targets_, Nucl. Phys. A770 (2006) 40, arXiv:hep-ph/0510358 * [9] H. Fujii, F. Gelis, and R. Venugopalan, _Quark pair production in high energy pA collisions: General features_ , Nucl. Phys. A780 (2006) 146, arXiv:hep-ph/0603099 * [10] F. Arleo and S. Peigné, _Heavy-quarkonium suppression in p-A collisions from parton energy loss in cold QCD matter_ , JHEP 03 (2013) 122, arXiv:1212.0434 * [11] S. Gavin and J. Milana, _Energy loss at large $x_{F}$ in nuclear collisions_, Phys. Rev. Lett. 68 (1992) 1834 * [12] R. Vogt, _The $x_{F}$ dependence of $\psi$ and Drell-Yan production_, Phys. Rev. C61 (2000) 035203, arXiv:hep-ph/9907317 * [13] N. Armesto, _Nuclear shadowing_ , J. Phys. G32 (2006) R367, arXiv:hep-ph/0604108 * [14] S. Malace, D. Gaskell, D. W. Higinbotham, and I. Cloet, _The challenge of the EMC effect: Existing data and future directions_ , Int. J. Mod. Phys. E23 (2014) 1430013, arXiv:1405.1270 * [15] H. Fujii and K. Watanabe, _Heavy quark pair production in high energy pA collisions: Open heavy flavors_ , Nucl. Phys. A920 (2013) 78, arXiv:1308.1258 * [16] P. Tribedy and R. Venugopalan, _QCD saturation at the LHC: Comparisons of models to p + p and A + A data and predictions for p + Pb collisions_ , Phys. Lett. B710 (2012) 125, arXiv:1112.2445, [Erratum: Phys. Lett.B718 (2013) 1154] * [17] J. L. Albacete, A. Dumitru, H. Fujii, and Y. Nara, _CGC predictions for p + Pb collisions at the LHC_ , Nucl. Phys. A897 (2013) 1, arXiv:1209.2001 * [18] A. H. Rezaeian, _CGC predictions for p+A collisions at the LHC and signature of QCD saturation_ , Phys. Lett. B718 (2013) 1058, arXiv:1210.2385 * [19] ALICE collaboration, B. 
Abelev et al., _Suppression of high transverse momentum D mesons in central Pb-Pb collisions at ${\sqrt{s_{\mathrm{NN}}}}=2.76\mathrm{\,Te\kern-1.00006ptV}$_, JHEP 09 (2012) 112, arXiv:1203.2160 * [20] R. Averbeck, _Heavy-flavor production in heavy-ion collisions and implications for the properties of hot QCD matter_ , Prog. Part. Nucl. Phys. 70 (2013) 159, arXiv:1505.03828 * [21] ALICE collaboration, D. Adamova et al., _$J/\psi$ production as a function of charged-particle pseudorapidity density in pPb collisions at ${\sqrt{s_{\mathrm{NN}}}}=5.02\mathrm{\,Te\kern-1.00006ptV}$_, arXiv:1704.00274 * [22] ALICE collaboration, J. Adam et al., _$D$ -meson production in p-Pb collisions at ${\sqrt{s_{\mathrm{NN}}}}=5.02\mathrm{\,Te\kern-1.00006ptV}$ and in pp collisions at $\sqrt{s}=7\mathrm{\,Te\kern-1.00006ptV}$_, Phys. Rev. C94 (2016) 054908, arXiv:1605.07569 * [23] ALICE collaboration, J. Adam et al., _Centrality dependence of $\mathbf{\psi}$(2S) suppression in p-Pb collisions at ${\sqrt{s_{\mathrm{NN}}}}=5.02\mathrm{\,Te\kern-1.00006ptV}$_, JHEP 06 (2016) 050, arXiv:1603.02816 * [24] ALICE collaboration, J. Adam et al., _Measurement of D-meson production versus multiplicity in p-Pb collisions at ${\sqrt{s_{\mathrm{NN}}}}=5.02\mathrm{\,Te\kern-1.00006ptV}$_, JHEP 08 (2016) 078, arXiv:1602.07240 * [25] ALICE collaboration, J. Adam et al., _Centrality dependence of inclusive $J/\psi$ production in p-Pb collisions at ${\sqrt{s_{\mathrm{NN}}}}=5.02\mathrm{\,Te\kern-1.00006ptV}$_, JHEP 11 (2015) 127, arXiv:1506.08808 * [26] ALICE collaboration, J. Adam et al., _Rapidity and transverse-momentum dependence of the inclusive J/ $\psi$ nuclear modification factor in p-Pb collisions at ${\sqrt{s_{\mathrm{NN}}}}=5.02\,\mathrm{\,Te\kern-1.00006ptV}$_, JHEP 06 (2015) 055, arXiv:1503.07179 * [27] ALICE collaboration, B. B. Abelev et al., _Production of inclusive $\Upsilon$(1S) and $\Upsilon$(2S) in p-Pb collisions at ${\sqrt{s_{\mathrm{NN}}}}=5.02\mathrm{\,Te\kern-1.00006ptV}$_, Phys. Lett. B740 (2015) 105, arXiv:1410.2234 * [28] ALICE collaboration, B. B. Abelev et al., _Measurement of prompt $D$-meson production in p-Pb collisions at ${\sqrt{s_{\mathrm{NN}}}}=5.02\mathrm{\,Te\kern-1.00006ptV}$_, Phys. Rev. Lett. 113 (2014) 232301, arXiv:1405.3452 * [29] ALICE collaboration, B. B. Abelev et al., _$J/\psi$ production and nuclear effects in p-Pb collisions at ${\sqrt{s_{\mathrm{NN}}}}=5.02\mathrm{\,Te\kern-1.00006ptV}$_, JHEP 02 (2014) 073, arXiv:1308.6726 * [30] ALICE collaboration, B. B. Abelev et al., _Suppression of $\psi$(2S) production in p-Pb collisions at ${\sqrt{s_{\mathrm{NN}}}}=5.02\,\mathrm{\,Te\kern-1.00006ptV}$_, JHEP 12 (2014) 073, arXiv:1405.3796 * [31] ATLAS collaboration, G. Aad et al., _Measurement of differential $J/\psi$ production cross sections and forward-backward ratios in p Pb collisions with the ATLAS detector_, Phys. Rev. C92 (2015) 034904, arXiv:1505.08141 * [32] CMS collaboration, S. Chatrchyan et al., _Event activity dependence of Y(nS) production in ${\sqrt{s_{\mathrm{NN}}}}=5.02\mathrm{\,Te\kern-1.00006ptV}$ $p$Pb and $\sqrt{s}=2.76\mathrm{\,Te\kern-1.00006ptV}$ $pp$ collisions_, JHEP 04 (2014) 103, arXiv:1312.6300 * [33] CMS collaboration, A. M. Sirunyan et al., _Measurement of prompt and nonprompt $\mathrm{J}/{\psi}$ production in $\mathrm{p}\mathrm{p}$ and $\mathrm{p}\mathrm{Pb}$ collisions at ${\sqrt{s_{\mathrm{NN}}}}=5.02\mathrm{\,Te\kern-1.00006ptV}$_, Eur. Phys. J. C77 (2017) 269, arXiv:1702.01462 * [34] CMS collaboration, A. M. 
Sirunyan et al., _Measurements of the charm jet cross section and nuclear modification factor in pPb collisions at ${\sqrt{s_{\mathrm{NN}}}}=5.02\mathrm{\,Te\kern-1.00006ptV}$_, arXiv:1612.08972 * [35] CMS collaboration, V. Khachatryan et al., _Transverse momentum spectra of inclusive b jets in pPb collisions at ${\sqrt{s_{\mathrm{NN}}}}=5.02\mathrm{\,Te\kern-1.00006ptV}$_, Phys. Lett. B754 (2016) 59, arXiv:1510.03373 * [36] CMS collaboration, V. Khachatryan et al., _Study of B meson production in pPb collisions at ${\sqrt{s_{\mathrm{NN}}}}=5.02 \mathrm{\,Te\kern-1.00006ptV}$ using exclusive hadronic decays_, Phys. Rev. Lett. 116 (2016) 032301, arXiv:1508.06678 * [37] ALICE collaboration, J. Adam et al., _Measurement of electrons from heavy-flavour hadron decays in p-Pb collisions at ${\sqrt{s_{\mathrm{NN}}}}=5.02\mathrm{\,Te\kern-1.00006ptV}$_, Phys. Lett. B754 (2016) 81, arXiv:1509.07491 * [38] ALICE collaboration, J. Adam et al., _Measurement of electrons from beauty-hadron decays in p-Pb collisions at ${\sqrt{s_{\mathrm{NN}}}}=5.02\mathrm{\,Te\kern-1.00006ptV}$ and Pb-Pb collisions at ${\sqrt{s_{\mathrm{NN}}}}=2.76$ $\mathrm{\,Te\kern-1.00006ptV}$_, arXiv:1609.03898 * [39] ALICE collaboration, S. Acharya et al., _Production of muons from heavy-flavour hadron decays in p-Pb collisions at ${\sqrt{s_{\mathrm{NN}}}}=5.02\mathrm{\,Te\kern-1.00006ptV}$_, Phys. Lett. B770 (2017) 459, arXiv:1702.01479 * [40] LHCb collaboration, A. A. Alves Jr. et al., _The LHCb detector at the LHC_, JINST 3 (2008) S08005 * [41] LHCb collaboration, R. Aaij et al., _LHCb detector performance_ , Int. J. Mod. Phys. A30 (2015) 1530022, arXiv:1412.6352 * [42] R. Aaij et al., _The LHCb trigger and its performance in 2011_, JINST 8 (2013) P04022, arXiv:1211.3055 * [43] LHCb collaboration, R. Aaij et al., _Study of ${{J\mskip-3.0mu/\mskip-2.0mu\psi\mskip 2.0mu}}$ production and cold nuclear matter effects in ${p}$Pb collisions at ${\sqrt{s_{\mathrm{NN}}}}=5\,\mathrm{\,Te\kern-1.00006ptV}$_, JHEP 02 (2014) 072, arXiv:1308.6729 * [44] T. Sjöstrand, S. Mrenna, and P. Skands, _PYTHIA 6.4 physics and manual_ , JHEP 05 (2006) 026, arXiv:hep-ph/0603175 * [45] T. Sjöstrand, S. Mrenna, and P. Skands, _A brief introduction to PYTHIA 8.1_ , Comput. Phys. Commun. 178 (2008) 852, arXiv:0710.3820 * [46] I. Belyaev et al., _Handling of the generation of primary events in Gauss, the LHCb simulation framework_ , J. Phys. Conf. Ser. 331 (2011) 032047 * [47] D. J. Lange, _The EvtGen particle decay simulation package_ , Nucl. Instrum. Meth. A462 (2001) 152 * [48] P. Golonka and Z. Was, _PHOTOS Monte Carlo: A precision tool for QED corrections in $Z$ and $W$ decays_, Eur. Phys. J. C45 (2006) 97, arXiv:hep-ph/0506026 * [49] Geant4 collaboration, J. Allison et al., _Geant4 developments and applications_ , IEEE Trans. Nucl. Sci. 53 (2006) 270 * [50] Geant4 collaboration, S. Agostinelli et al., _Geant4: A simulation toolkit_ , Nucl. Instrum. Meth. A506 (2003) 250 * [51] M. Clemencic et al., _The LHCb simulation application, Gauss: Design, evolution and experience_, J. Phys. Conf. Ser. 331 (2011) 032023 * [52] Particle Data Group, C. Patrignani et al., _Review of particle physics_ , Chin. Phys. C40 (2016) 100001 * [53] LHCb collaboration, R. Aaij et al., _Prompt charm production in ${p}{p}$ collisions at $\sqrt{s}=7$$\mathrm{\,Te\kern-1.00006ptV}$_, Nucl. Phys. B871 (2013) 1, arXiv:1302.2864 * [54] LHCb collaboration, R. 
Aaij et al., _Measurements of prompt charm production cross-sections in ${p}{p}$ collisions at $\sqrt{s}=13\,$$\mathrm{\,Te\kern-1.00006ptV}$_, JHEP 03 (2016) 159, arXiv:1510.01707 * [55] T. Skwarnicki, A study of the radiative cascade transitions between the Upsilon-prime and Upsilon resonances, PhD thesis, Institute of Nuclear Physics, Krakow, 1986, DESY-F31-86-02 * [56] LHCb collaboration, R. Aaij et al., _Measurement of the track reconstruction efficiency at LHCb_ , JINST 10 (2015) P02007, arXiv:1408.1251 * [57] J.-P. Lansberg and H.-S. Shao, _Towards an automated tool to evaluate the impact of the nuclear modification of the gluon density on quarkonium, D and B meson production in proton-nucleus collisions_ , Eur. Phys. J. C77 (2017) 1, arXiv:1610.05382 * [58] H.-S. Shao, _HELAC-Onia: An automatic matrix element generator for heavy quarkonium physics_ , Comput. Phys. Commun. 184 (2013) 2562, arXiv:1212.5293 * [59] H.-S. Shao, _HELAC-Onia 2.0: an upgraded matrix-element and event generator for heavy quarkonium physics_ , Comput. Phys. Commun. 198 (2016) 238, arXiv:1507.03435 * [60] K. J. Eskola, H. Paukkunen, and C. A. Salgado, _EPS09: a new generation of NLO and LO nuclear parton distribution functions_ , JHEP 04 (2009) 065, arXiv:0902.4154 * [61] K. Kovarik et al., _nCTEQ15 - Global analysis of nuclear parton distributions with uncertainties in the CTEQ framework_ , Phys. Rev. D93 (2016) 085037, arXiv:1509.00792 * [62] H.-L. Lai et al., _New parton distributions for collider physics_ , Phys. Rev. D82 (2010) 074024, arXiv:1007.2241 * [63] LHCb collaboration, R. Aaij et al., _Measurements of prompt charm production cross-sections in ${p}{p}$ collisions at $\sqrt{s}=5$ $\mathrm{\,Te\kern-1.00006ptV}$_, arXiv:1610.02230, submitted to JHEP * [64] M. L. Mangano, P. Nason, and G. Ridolfi, _Heavy quark correlations in hadron collisions at next-to-leading order_ , Nucl. Phys. B373 (1992) 295 * [65] D. Stump et al., _Inclusive jet production, parton distributions, and the search for new physics_ , JHEP 10 (2003) 046, arXiv:hep-ph/0303013 * [66] B. Ducloué, T. Lappi, and H. Mäntysaari, _Forward $J/\psi$ production in proton-nucleus collisions at high energy_, Phys. Rev. D91 (2015) 114005, arXiv:1503.02789 * [67] H. Fujii and K. Watanabe, _Nuclear modification of forward $D$ production in pPb collisions at the LHC_, arXiv:1706.06728 * [68] CMS collaboration, S. Chatrchyan et al., _Observation of long-range near-side angular correlations in proton-lead collisions at the LHC_ , Phys. Lett. B718 (2013) 795, arXiv:1210.5482 * [69] ALICE collaboration, B. Abelev et al., _Long-range angular correlations on the near and away side in $p$-Pb collisions at ${\sqrt{s_{\mathrm{NN}}}}=5.02\mathrm{\,Te\kern-1.00006ptV}$_, Phys. Lett. B719 (2013) 29, arXiv:1212.2001 * [70] ATLAS collaboration, G. Aad et al., _Observation of associated near-side and away-side long-range correlations in ${\sqrt{s_{\mathrm{NN}}}}=5.02\mathrm{\,Te\kern-1.00006ptV}$ proton-lead collisions with the ATLAS detector_, Phys. Rev. Lett. 110 (2013) 182302, arXiv:1212.5198 * [71] PHENIX collaboration, A. Adare et al., _Quadrupole anisotropy in dihadron azimuthal correlations in central $d$$+$Au collisions at ${\sqrt{s_{\mathrm{NN}}}}$=200 GeV_, Phys. Rev. Lett. 111 (2013) 212301, arXiv:1303.1794 * [72] A. Beraudo et al., _Heavy-flavour production in high-energy d-Au and p-Pb collisions_ , JHEP 03 (2016) 123, arXiv:1512.05186 LHCb collaboration R. Aaij40, B. Adeva39, M. Adinolfi48, Z. Ajaltouni5, S. Akar59, J. Albrecht10, F. Alessio40, M. 
Alexander53, A. Alfonso Albero38, S. Ali43, G. Alkhazov31, P. Alvarez Cartelle55, A.A. Alves Jr59, S. Amato2, S. Amerio23, Y. Amhis7, L. An3, L. Anderlini18, G. Andreassi41, M. Andreotti17,g, J.E. Andrews60, R.B. Appleby56, F. Archilli43, P. d’Argent12, J. Arnau Romeu6, A. Artamonov37, M. Artuso61, E. Aslanides6, G. Auriemma26, M. Baalouch5, I. Babuschkin56, S. Bachmann12, J.J. Back50, A. Badalov38, C. Baesso62, S. Baker55, V. Balagura7,c, W. Baldini17, A. Baranov35, R.J. Barlow56, C. Barschel40, S. Barsuk7, W. Barter56, F. Baryshnikov32, M. Baszczyk27,l, V. Batozskaya29, V. Battista41, A. Bay41, L. Beaucourt4, J. Beddow53, F. Bedeschi24, I. Bediaga1, A. Beiter61, L.J. Bel43, N. Beliy63, V. Bellee41, N. Belloli21,i, K. Belous37, I. Belyaev32, E. Ben-Haim8, G. Bencivenni19, S. Benson43, S. Beranek9, A. Berezhnoy33, R. Bernet42, D. Berninghoff12, E. Bertholet8, A. Bertolin23, C. Betancourt42, F. Betti15, M.-O. Bettler40, M. van Beuzekom43, Ia. Bezshyiko42, S. Bifani47, P. Billoir8, A. Birnkraut10, A. Bitadze56, A. Bizzeti18,u, M.B. Bjoern57, T. Blake50, F. Blanc41, J. Blouw11,†, S. Blusk61, V. Bocci26, T. Boettcher58, A. Bondar36,w, N. Bondar31, W. Bonivento16, I. Bordyuzhin32, A. Borgheresi21,i, S. Borghi56, M. Borisyak35, M. Borsato39, M. Borysova46, F. Bossu7, M. Boubdir9, T.J.V. Bowcock54, E. Bowen42, C. Bozzi17,40, S. Braun12, T. Britton61, J. Brodzicka56, D. Brundu16, E. Buchanan48, C. Burr56, A. Bursche16,f, J. Buytaert40, W. Byczynski40, S. Cadeddu16, H. Cai64, R. Calabrese17,g, R. Calladine47, M. Calvi21,i, M. Calvo Gomez38,m, A. Camboni38, P. Campana19, D.H. Campora Perez40, L. Capriotti56, A. Carbone15,e, G. Carboni25,j, R. Cardinale20,h, A. Cardini16, P. Carniti21,i, L. Carson52, K. Carvalho Akiba2, G. Casse54, L. Cassina21,i, L. Castillo Garcia41, M. Cattaneo40, G. Cavallero20,40,h, R. Cenci24,t, D. Chamont7, M. Charles8, Ph. Charpentier40, G. Chatzikonstantinidis47, M. Chefdeville4, S. Chen56, S.F. Cheung57, S.-G. Chitic40, V. Chobanova39, M. Chrzaszcz42,27, A. Chubykin31, X. Cid Vidal39, G. Ciezarek43, P.E.L. Clarke52, M. Clemencic40, H.V. Cliff49, J. Closier40, V. Coco59, J. Cogan6, E. Cogneras5, V. Cogoni16,f, L. Cojocariu30, P. Collins40, T. Colombo40, A. Comerma-Montells12, A. Contu40, A. Cook48, G. Coombs40, S. Coquereau38, G. Corti40, M. Corvo17,g, C.M. Costa Sobral50, B. Couturier40, G.A. Cowan52, D.C. Craik52, A. Crocombe50, M. Cruz Torres62, R. Currie52, C. D’Ambrosio40, F. Da Cunha Marinho2, E. Dall’Occo43, J. Dalseno48, A. Davis3, O. De Aguiar Francisco54, K. De Bruyn6, S. De Capua56, M. De Cian12, J.M. De Miranda1, L. De Paula2, M. De Serio14,d, P. De Simone19, C.T. Dean53, D. Decamp4, L. Del Buono8, H.-P. Dembinski11, M. Demmer10, A. Dendek28, D. Derkach35, O. Deschamps5, F. Dettori54, B. Dey65, A. Di Canto40, P. Di Nezza19, H. Dijkstra40, F. Dordei40, M. Dorigo41, A. Dosil Suárez39, L. Douglas53, A. Dovbnya45, K. Dreimanis54, L. Dufour43, G. Dujany8, K. Dungs40, P. Durante40, R. Dzhelyadin37, M. Dziewiecki12, A. Dziurda40, A. Dzyuba31, N. Déléage4, S. Easo51, M. Ebert52, U. Egede55, V. Egorychev32, S. Eidelman36,w, S. Eisenhardt52, U. Eitschberger10, R. Ekelhof10, L. Eklund53, S. Ely61, S. Esen12, H.M. Evans49, T. Evans57, A. Falabella15, N. Farley47, S. Farry54, R. Fay54, D. Fazzini21,i, L. Federici25, D. Ferguson52, G. Fernandez38, P. Fernandez Declara40, A. Fernandez Prieto39, F. Ferrari15, F. Ferreira Rodrigues2, M. Ferro-Luzzi40, S. Filippov34, R.A. Fini14, M. Fiore17,g, M. Fiorini17,g, M. Firlej28, C. Fitzpatrick41, T. Fiutowski28, F. Fleuret7,b, K. 
Fohl40, M. Fontana16,40, F. Fontanelli20,h, D.C. Forshaw61, R. Forty40, V. Franco Lima54, M. Frank40, C. Frei40, J. Fu22,q, W. Funk40, E. Furfaro25,j, C. Färber40, E. Gabriel52, A. Gallas Torreira39, D. Galli15,e, S. Gallorini23, S. Gambetta52, M. Gandelman2, P. Gandini57, Y. Gao3, L.M. Garcia Martin70, J. García Pardiñas39, J. Garra Tico49, L. Garrido38, P.J. Garsed49, D. Gascon38, C. Gaspar40, L. Gavardi10, G. Gazzoni5, D. Gerick12, E. Gersabeck12, M. Gersabeck56, T. Gershon50, Ph. Ghez4, S. Gianì41, V. Gibson49, O.G. Girard41, L. Giubega30, K. Gizdov52, V.V. Gligorov8, D. Golubkov32, A. Golutvin55,40, A. Gomes1,a, I.V. Gorelov33, C. Gotti21,i, E. Govorkova43, J.P. Grabowski12, R. Graciani Diaz38, L.A. Granado Cardoso40, E. Graugés38, E. Graverini42, G. Graziani18, A. Grecu30, R. Greim9, P. Griffith16, L. Grillo21,40,i, L. Gruber40, B.R. Gruberg Cazon57, O. Grünberg67, E. Gushchin34, Yu. Guz37, T. Gys40, C. Göbel62, T. Hadavizadeh57, C. Hadjivasiliou5, G. Haefeli41, C. Haen40, S.C. Haines49, B. Hamilton60, X. Han12, T. Hancock57, S. Hansmann- Menzemer12, N. Harnew57, S.T. Harnew48, J. Harrison56, C. Hasse40, M. Hatch40, J. He63, M. Hecker55, K. Heinicke10, A. Heister9, K. Hennessy54, P. Henrard5, L. Henry70, E. van Herwijnen40, M. Heß67, A. Hicheur2, D. Hill57, C. Hombach56, P.H. Hopchev41, Z.-C. Huard59, W. Hulsbergen43, T. Humair55, M. Hushchyn35, D. Hutchcroft54, P. Ibis10, M. Idzik28, P. Ilten58, R. Jacobsson40, J. Jalocha57, E. Jans43, A. Jawahery60, F. Jiang3, M. John57, D. Johnson40, C.R. Jones49, C. Joram40, B. Jost40, N. Jurik57, S. Kandybei45, M. Karacson40, J.M. Kariuki48, S. Karodia53, M. Kecke12, M. Kelsey61, M. Kenzie49, T. Ketel44, E. Khairullin35, B. Khanji12, C. Khurewathanakul41, T. Kirn9, S. Klaver56, K. Klimaszewski29, T. Klimkovich11, S. Koliiev46, M. Kolpin12, I. Komarov41, R. Kopecna12, P. Koppenburg43, A. Kosmyntseva32, S. Kotriakhova31, M. Kozeiha5, L. Kravchuk34, M. Kreps50, P. Krokovny36,w, F. Kruse10, W. Krzemien29, W. Kucewicz27,l, M. Kucharczyk27, V. Kudryavtsev36,w, A.K. Kuonen41, K. Kurek29, T. Kvaratskheliya32,40, D. Lacarrere40, G. Lafferty56, A. Lai16, G. Lanfranchi19, C. Langenbruch9, T. Latham50, C. Lazzeroni47, R. Le Gac6, J. van Leerdam43, A. Leflat33,40, J. Lefrançois7, R. Lefèvre5, F. Lemaitre40, E. Lemos Cid39, O. Leroy6, T. Lesiak27, B. Leverington12, T. Li3, Y. Li7, Z. Li61, T. Likhomanenko35,68, R. Lindner40, F. Lionetto42, X. Liu3, D. Loh50, A. Loi16, I. Longstaff53, J.H. Lopes2, D. Lucchesi23,o, M. Lucio Martinez39, H. Luo52, A. Lupato23, E. Luppi17,g, O. Lupton40, A. Lusiani24, X. Lyu63, F. Machefert7, F. Maciuc30, V. Macko41, P. Mackowiak10, B. Maddock59, S. Maddrell-Mander48, O. Maev31, K. Maguire56, D. Maisuzenko31, M.W. Majewski28, S. Malde57, A. Malinin68, T. Maltsev36, G. Manca16,f, G. Mancinelli6, P. Manning61, D. Marangotto22,q, J. Maratas5,v, J.F. Marchand4, U. Marconi15, C. Marin Benito38, M. Marinangeli41, P. Marino24,t, J. Marks12, G. Martellotti26, M. Martin6, M. Martinelli41, D. Martinez Santos39, F. Martinez Vidal70, D. Martins Tostes2, L.M. Massacrier7, A. Massafferri1, R. Matev40, A. Mathad50, Z. Mathe40, C. Matteuzzi21, A. Mauri42, E. Maurice7,b, B. Maurin41, A. Mazurov47, M. McCann55,40, A. McNab56, R. McNulty13, J.V. Mead54, B. Meadows59, C. Meaux6, F. Meier10, N. Meinert67, D. Melnychuk29, M. Merk43, A. Merli22,40,q, E. Michielin23, D.A. Milanes66, E. Millard50, M.-N. Minard4, L. Minzoni17, D.S. Mitzel12, A. Mogini8, J. Molina Rodriguez1, T. Mombacher10, I.A. Monroy66, S. Monteil5, M. Morandin23, M.J. Morello24,t, O. 
Morgunova68, J. Moron28, A.B. Morris52, R. Mountain61, F. Muheim52, M. Mulder43, M. Mussini15, D. Müller56, J. Müller10, K. Müller42, V. Müller10, P. Naik48, T. Nakada41, R. Nandakumar51, A. Nandi57, I. Nasteva2, M. Needham52, N. Neri22,40, S. Neubert12, N. Neufeld40, M. Neuner12, T.D. Nguyen41, C. Nguyen-Mau41,n, S. Nieswand9, R. Niet10, N. Nikitin33, T. Nikodem12, A. Nogay68, D.P. O’Hanlon50, A. Oblakowska-Mucha28, V. Obraztsov37, S. Ogilvy19, R. Oldeman16,f, C.J.G. Onderwater71, A. Ossowska27, J.M. Otalora Goicochea2, P. Owen42, A. Oyanguren70, P.R. Pais41, A. Palano14,d, M. Palutan19,40, A. Papanestis51, M. Pappagallo14,d, L.L. Pappalardo17,g, C. Pappenheimer59, W. Parker60, C. Parkes56, G. Passaleva18, A. Pastore14,d, M. Patel55, C. Patrignani15,e, A. Pearce40, A. Pellegrino43, G. Penso26, M. Pepe Altarelli40, S. Perazzini40, P. Perret5, L. Pescatore41, K. Petridis48, A. Petrolini20,h, A. Petrov68, M. Petruzzo22,q, E. Picatoste Olloqui38, B. Pietrzyk4, M. Pikies27, D. Pinci26, A. Pistone20,h, A. Piucci12, V. Placinta30, S. Playfer52, M. Plo Casasus39, T. Poikela40, F. Polci8, M. Poli Lener19, A. Poluektov50,36, I. Polyakov61, E. Polycarpo2, G.J. Pomery48, S. Ponce40, A. Popov37, D. Popov11,40, S. Poslavskii37, C. Potterat2, E. Price48, J. Prisciandaro39, C. Prouve48, V. Pugatch46, A. Puig Navarro42, H. Pullen57, G. Punzi24,p, W. Qian50, R. Quagliani7,48, B. Quintana5, B. Rachwal28, J.H. Rademacker48, M. Rama24, M. Ramos Pernas39, M.S. Rangel2, I. Raniuk45,†, F. Ratnikov35, G. Raven44, M. Ravonel Salzgeber40, M. Reboud4, F. Redi55, S. Reichert10, A.C. dos Reis1, C. Remon Alepuz70, V. Renaudin7, S. Ricciardi51, S. Richards48, M. Rihl40, K. Rinnert54, V. Rives Molina38, P. Robbe7, A.B. Rodrigues1, E. Rodrigues59, J.A. Rodriguez Lopez66, P. Rodriguez Perez56,†, A. Rogozhnikov35, S. Roiser40, A. Rollings57, V. Romanovskiy37, A. Romero Vidal39, J.W. Ronayne13, M. Rotondo19, M.S. Rudolph61, T. Ruf40, P. Ruiz Valls70, J. Ruiz Vidal70, J.J. Saborido Silva39, E. Sadykhov32, N. Sagidova31, B. Saitta16,f, V. Salustino Guimaraes1, D. Sanchez Gonzalo38, C. Sanchez Mayordomo70, B. Sanmartin Sedes39, R. Santacesaria26, C. Santamarina Rios39, M. Santimaria19, E. Santovetti25,j, G. Sarpis56, A. Sarti26, C. Satriano26,s, A. Satta25, D.M. Saunders48, D. Savrina32,33, S. Schael9, M. Schellenberg10, M. Schiller53, H. Schindler40, M. Schlupp10, M. Schmelling11, T. Schmelzer10, B. Schmidt40, O. Schneider41, A. Schopper40, H.F. Schreiner59, K. Schubert10, M. Schubiger41, M.-H. Schune7, R. Schwemmer40, B. Sciascia19, A. Sciubba26,k, A. Semennikov32, A. Sergi47, N. Serra42, J. Serrano6, L. Sestini23, P. Seyfert40, M. Shapkin37, I. Shapoval45, Y. Shcheglov31, T. Shears54, L. Shekhtman36,w, V. Shevchenko68, B.G. Siddi17,40, R. Silva Coutinho42, L. Silva de Oliveira2, G. Simi23,o, S. Simone14,d, M. Sirendi49, N. Skidmore48, T. Skwarnicki61, E. Smith55, I.T. Smith52, J. Smith49, M. Smith55, l. Soares Lavra1, M.D. Sokoloff59, F.J.P. Soler53, B. Souza De Paula2, B. Spaan10, P. Spradlin53, S. Sridharan40, F. Stagni40, M. Stahl12, S. Stahl40, P. Stefko41, S. Stefkova55, O. Steinkamp42, S. Stemmle12, O. Stenyakin37, H. Stevens10, S. Stone61, B. Storaci42, S. Stracka24,p, M.E. Stramaglia41, M. Straticiuc30, U. Straumann42, L. Sun64, W. Sutcliffe55, K. Swientek28, V. Syropoulos44, M. Szczekowski29, T. Szumlak28, M. Szymanski63, S. T’Jampens4, A. Tayduganov6, T. Tekampe10, G. Tellarini17,g, F. Teubert40, E. Thomas40, J. van Tilburg43, M.J. Tilley55, V. Tisserand4, M. Tobin41, S. Tolk49, L. Tomassetti17,g, D. Tonelli24, S. 
Topp-Joergensen57, F. Toriello61, R. Tourinho Jadallah Aoude1, E. Tournefier4, M. Traill53, M.T. Tran41, M. Tresch42, A. Trisovic40, A. Tsaregorodtsev6, P. Tsopelas43, A. Tully49, N. Tuning43, A. Ukleja29, A. Ustyuzhanin35, U. Uwer12, C. Vacca16,f, A. Vagner69, V. Vagnoni15,40, A. Valassi40, S. Valat40, G. Valenti15, R. Vazquez Gomez19, P. Vazquez Regueiro39, S. Vecchi17, M. van Veghel43, J.J. Velthuis48, M. Veltri18,r, G. Veneziano57, A. Venkateswaran61, T.A. Verlage9, M. Vernet5, M. Vesterinen57, J.V. Viana Barbosa40, B. Viaud7, D. Vieira63, M. Vieites Diaz39, H. Viemann67, X. Vilasis-Cardona38,m, M. Vitti49, V. Volkov33, A. Vollhardt42, B. Voneki40, A. Vorobyev31, V. Vorobyev36,w, C. Voß9, J.A. de Vries43, C. Vázquez Sierra39, R. Waldi67, C. Wallace50, R. Wallace13, J. Walsh24, J. Wang61, D.R. Ward49, H.M. Wark54, N.K. Watson47, D. Websdale55, A. Weiden42, M. Whitehead40, J. Wicht50, G. Wilkinson57,40, M. Wilkinson61, M. Williams56, M.P. Williams47, M. Williams58, T. Williams47, F.F. Wilson51, J. Wimberley60, M.A. Winn7, J. Wishahi10, W. Wislicki29, M. Witek27, G. Wormser7, S.A. Wotton49, K. Wraight53, K. Wyllie40, Y. Xie65, Z. Xu4, Z. Yang3, Z. Yang60, Y. Yao61, H. Yin65, J. Yu65, X. Yuan61, O. Yushchenko37, K.A. Zarebski47, M. Zavertyaev11,c, L. Zhang3, Y. Zhang7, A. Zhelezov12, Y. Zheng63, X. Zhu3, V. Zhukov33, J.B. Zonneveld52, S. Zucchelli15. 2Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil 3Center for High Energy Physics, Tsinghua University, Beijing, China 4LAPP, Université Savoie Mont-Blanc, CNRS/IN2P3, Annecy-Le-Vieux, France 5Clermont Université, Université Blaise Pascal, CNRS/IN2P3, LPC, Clermont-Ferrand, France 6CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille, France 7LAL, Université Paris-Sud, CNRS/IN2P3, Orsay, France 8LPNHE, Université Pierre et Marie Curie, Université Paris Diderot, CNRS/IN2P3, Paris, France 9I. 
Physikalisches Institut, RWTH Aachen University, Aachen, Germany 10Fakultät Physik, Technische Universität Dortmund, Dortmund, Germany 11Max-Planck-Institut für Kernphysik (MPIK), Heidelberg, Germany 12Physikalisches Institut, Ruprecht- Karls-Universität Heidelberg, Heidelberg, Germany 13School of Physics, University College Dublin, Dublin, Ireland 14Sezione INFN di Bari, Bari, Italy 15Sezione INFN di Bologna, Bologna, Italy 16Sezione INFN di Cagliari, Cagliari, Italy 17Universita e INFN, Ferrara, Ferrara, Italy 18Sezione INFN di Firenze, Firenze, Italy 19Laboratori Nazionali dell’INFN di Frascati, Frascati, Italy 20Sezione INFN di Genova, Genova, Italy 21Universita & INFN, Milano-Bicocca, Milano, Italy 22Sezione di Milano, Milano, Italy 23Sezione INFN di Padova, Padova, Italy 24Sezione INFN di Pisa, Pisa, Italy 25Sezione INFN di Roma Tor Vergata, Roma, Italy 26Sezione INFN di Roma La Sapienza, Roma, Italy 27Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences, Kraków, Poland 28AGH - University of Science and Technology, Faculty of Physics and Applied Computer Science, Kraków, Poland 29National Center for Nuclear Research (NCBJ), Warsaw, Poland 30Horia Hulubei National Institute of Physics and Nuclear Engineering, Bucharest-Magurele, Romania 31Petersburg Nuclear Physics Institute (PNPI), Gatchina, Russia 32Institute of Theoretical and Experimental Physics (ITEP), Moscow, Russia 33Institute of Nuclear Physics, Moscow State University (SINP MSU), Moscow, Russia 34Institute for Nuclear Research of the Russian Academy of Sciences (INR RAN), Moscow, Russia 35Yandex School of Data Analysis, Moscow, Russia 36Budker Institute of Nuclear Physics (SB RAS), Novosibirsk, Russia 37Institute for High Energy Physics (IHEP), Protvino, Russia 38ICCUB, Universitat de Barcelona, Barcelona, Spain 39Universidad de Santiago de Compostela, Santiago de Compostela, Spain 40European Organization for Nuclear Research (CERN), Geneva, Switzerland 41Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland 42Physik- Institut, Universität Zürich, Zürich, Switzerland 43Nikhef National Institute for Subatomic Physics, Amsterdam, The Netherlands 44Nikhef National Institute for Subatomic Physics and VU University Amsterdam, Amsterdam, The Netherlands 45NSC Kharkiv Institute of Physics and Technology (NSC KIPT), Kharkiv, Ukraine 46Institute for Nuclear Research of the National Academy of Sciences (KINR), Kyiv, Ukraine 47University of Birmingham, Birmingham, United Kingdom 48H.H. 
Wills Physics Laboratory, University of Bristol, Bristol, United Kingdom 49Cavendish Laboratory, University of Cambridge, Cambridge, United Kingdom 50Department of Physics, University of Warwick, Coventry, United Kingdom 51STFC Rutherford Appleton Laboratory, Didcot, United Kingdom 52School of Physics and Astronomy, University of Edinburgh, Edinburgh, United Kingdom 53School of Physics and Astronomy, University of Glasgow, Glasgow, United Kingdom 54Oliver Lodge Laboratory, University of Liverpool, Liverpool, United Kingdom 55Imperial College London, London, United Kingdom 56School of Physics and Astronomy, University of Manchester, Manchester, United Kingdom 57Department of Physics, University of Oxford, Oxford, United Kingdom 58Massachusetts Institute of Technology, Cambridge, MA, United States 59University of Cincinnati, Cincinnati, OH, United States 60University of Maryland, College Park, MD, United States 61Syracuse University, Syracuse, NY, United States 62Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio), Rio de Janeiro, Brazil, associated to 2 63University of Chinese Academy of Sciences, Beijing, China, associated to 3 64School of Physics and Technology, Wuhan University, Wuhan, China, associated to 3 65Institute of Particle Physics, Central China Normal University, Wuhan, Hubei, China, associated to 3 66Departamento de Fisica , Universidad Nacional de Colombia, Bogota, Colombia, associated to 8 67Institut für Physik, Universität Rostock, Rostock, Germany, associated to 12 68National Research Centre Kurchatov Institute, Moscow, Russia, associated to 32 69National Research Tomsk Polytechnic University, Tomsk, Russia, associated to 32 70Instituto de Fisica Corpuscular, Centro Mixto Universidad de Valencia - CSIC, Valencia, Spain, associated to 38 71Van Swinderen Institute, University of Groningen, Groningen, The Netherlands, associated to 43 aUniversidade Federal do Triângulo Mineiro (UFTM), Uberaba-MG, Brazil bLaboratoire Leprince-Ringuet, Palaiseau, France cP.N. Lebedev Physical Institute, Russian Academy of Science (LPI RAS), Moscow, Russia dUniversità di Bari, Bari, Italy eUniversità di Bologna, Bologna, Italy fUniversità di Cagliari, Cagliari, Italy gUniversità di Ferrara, Ferrara, Italy hUniversità di Genova, Genova, Italy iUniversità di Milano Bicocca, Milano, Italy jUniversità di Roma Tor Vergata, Roma, Italy kUniversità di Roma La Sapienza, Roma, Italy lAGH - University of Science and Technology, Faculty of Computer Science, Electronics and Telecommunications, Kraków, Poland mLIFAELS, La Salle, Universitat Ramon Llull, Barcelona, Spain nHanoi University of Science, Hanoi, Viet Nam oUniversità di Padova, Padova, Italy pUniversità di Pisa, Pisa, Italy qUniversità degli Studi di Milano, Milano, Italy rUniversità di Urbino, Urbino, Italy sUniversità della Basilicata, Potenza, Italy tScuola Normale Superiore, Pisa, Italy uUniversità di Modena e Reggio Emilia, Modena, Italy vIligan Institute of Technology (IIT), Iligan, Philippines wNovosibirsk State University, Novosibirsk, Russia †Deceased
# Cross-Domain Grouping and Alignment for Domain Adaptive Semantic Segmentation

Minsu Kim1, Sunghun Joung1, Seungryong Kim2, Jungin Park1, Ig-Jae Kim3, Kwanghoon Sohn1,* ∗Corresponding author

###### Abstract

Existing techniques for adapting semantic segmentation networks across source and target domains within deep convolutional neural networks (CNNs) treat all samples from the two domains in a global or category-aware manner. They do not consider the inter-class variation within the target domain itself or within the estimated categories, which limits their ability to encode domains with multi-modal data distributions. To overcome this limitation, we introduce a learnable clustering module and a novel domain adaptation framework, called cross-domain grouping and alignment. To cluster the samples across domains so as to maximize the domain alignment without forgetting precise segmentation ability on the source domain, we present two loss functions that encourage semantic consistency and orthogonality among the clusters. We also present a loss that addresses the class imbalance problem, another limitation of previous methods. Our experiments show that our method consistently boosts the adaptation performance in semantic segmentation, outperforming the state of the art on various domain adaptation settings.

## 1 Introduction

Semantic segmentation aims at densely assigning a semantic category label to each pixel of a given image. Although remarkable progress has been achieved by deep neural networks trained on large-scale labeled datasets (Chen et al. 2017a), a segmentation model trained on labeled data in a source domain usually does not generalize well to unseen data in a target domain. For example, a model trained on data from one city or a computer-generated scene (Richter et al. 2016; Ros et al. 2016) may fail to yield accurate pixel-level predictions for the scenes of another city or for real scenes. The main reason lies in the different data distributions of the source and target domains, typically known as domain discrepancy (Shimodaira 2000).

Figure 1: Illustration of cross-domain grouping and alignment: Conventional methods aim to reduce the domain discrepancy between source and target domains through (a) global and (b) category-level domain alignment, without taking into account the _inter-class_ variation, or they rely solely on the category classifier. (c) We propose to replace this category classifier with an intermediate cross-domain grouping module and to align each group separately (best viewed in color).

To address this issue, domain adaptive semantic segmentation methods have been proposed that align the data distributions of the source and target domains by adopting a domain discriminator (Hoffman et al. 2016; Tsai et al. 2018). Formally, these methods minimize an adversarial loss (Goodfellow et al. 2014) to reduce the domain discrepancy of the image-level (Wu et al. 2018; Hoffman et al. 2018; Chang et al. 2019), feature-level (Hoffman et al. 2016), and category probability-level (Zou et al. 2018; Li, Yuan, and Vasconcelos 2019; Tsai et al. 2018) distributions without forgetting the semantic segmentation ability on the source domain. However, their accuracy is still limited when aligning multi-modal data distributions (Arora et al. 2017), as they cannot guarantee that target samples from different categories are properly separated, as in Fig. 1 (a). To tackle this limitation, category-level domain adaptation methods (Chen et al.
2017b; Du et al. 2019) have been proposed for semantic segmentation, which minimize the class-specific domain discrepancy across the source and target domains. Together with supervision from the source domain, this enforces the segmentation network to learn discriminative representations for the different classes on both domains. These methods utilize a category classifier trained on the source domain to generate _pseudo_ class labels on the target domain. This results in inaccurate labels for domain adaptation, which mislead the domain alignment and accumulate errors, as in Fig. 1 (b). It also suffers from a class imbalance problem (Zou et al. 2018): the network works well for majority categories with a large number of pixels (e.g. road and building), while it is not suitable for minority categories with a small number of pixels (e.g. traffic sign).

To overcome this limitation, we present cross-domain grouping and alignment for domain adaptive semantic segmentation. As illustrated in Fig. 1 (c), the key idea of our method is to apply an intermediate grouping module that replaces the category classifier, allowing samples of the source and target domains within each group to be aligned without relying on an error-prone category classifier. To make the grouping module help with domain adaptation, we propose several losses such that the category distribution of each group should be consistent between the two domains, while the category distributions of different groups in the same domain should be orthogonal. Furthermore, we present a group-level class equivalence scheme in order to align all categories regardless of their number of pixels. The proposed method is extensively evaluated through an ablation study and comparisons with state-of-the-art methods on various domain adaptive semantic segmentation benchmarks, including GTA5 $\rightarrow$ Cityscapes and SYNTHIA $\rightarrow$ Cityscapes.

## 2 Related Work

### 2.1 Semantic Segmentation

Numerous methods have been proposed to assign class labels at the pixel level for input images. Long et al. (2015) first transformed a classification convolutional neural network (CNN) (Krizhevsky, Sutskever, and Hinton 2012; karen Simonyan and Zisserman 2015; He et al. 2016) into a fully-convolutional network (FCN) for semantic segmentation. Following the line of FCN-based methods, several works utilized dilated convolutions to enlarge the receptive field (Yu and Koltun 2015) and to reason about spatial relationships (Chen et al. 2017a). Recently, Zhao et al. (2017) presented a pyramid pooling module to encode global and local context. Although these methods yield impressive results in semantic segmentation, they still rely on large datasets with dense pixel-level class labels, which are expensive and laborious to obtain. An alternative is to utilize synthetic data (Richter et al. 2016; Ros et al. 2016), which can make unlimited amounts of labels available. Nevertheless, synthetic data still exhibit a substantially different data distribution from real data, which results in a dramatic performance drop when applying the trained model to real scenes.

### 2.2 Domain Adaptive Semantic Segmentation

Due to the obvious mismatch between synthetic and real data, unsupervised domain adaptation (UDA) has been studied to minimize the domain discrepancy by aligning the feature distributions of source and target data. As a pioneering work, Ganin et al. (2015) introduced the domain adversarial network to transfer the feature distribution, and Tzeng et al.
(2017) proposed adversarial discriminative alignment. For pixel-level classification, numerous approaches (Wu et al. 2018; Hoffman et al. 2018; Chang et al. 2019) utilized image-level adaptation methods which translate source image to have the texture appearance of target image, while preserving the structure information of the source image for adapting cross- domain knowledge. In contrast, several methods (Zou et al. 2018; Li, Yuan, and Vasconcelos 2019; Li et al. 2020) adopted the iterative self-training approach to alternatively select unlabelled target samples with higher class probability and utilized them as a pseudo ground-truth. The feature-level adaptation methods align the intermediate feature distribution via adversarial framework. Hoffman et al. (2016) introduced a feature-level adaptation method to align the intermediate feature distribution for the global and local alignment. Tsai et al. (2018) adopted output-level adaptation for structured output space, since it contains similar spatial structure with semantic segmentation. However, these methods aim to align overall data distribution without taking into account the inter-class variation. To solve this problem, several methods (Chen et al. 2017b; Du et al. 2019) introduced category-level adversarial learning to align the data distributions independently for each class. Similarly, other works (Tsai et al. 2019; Huang et al. 2020) discovered patch-level adaptation methods by using multiple modes of patch-wise output distribution to differentiate the feature representation of patches. However, inaccurate domain alignment occurs because these methods rely heavily on category or patch classifiers trained in the source domain. The most similar to our work is Wang et al. (2020), which group the category classes into several groups for domain adaptive semantic segmentation. While they divide stuff and things (i.e. disconnected regions), our cross-domain grouping module divides the categories into multiple groups that the grouping network and segmentation network can be trained in a joint and boosting manner. ### 2.3 Unsupervised Deep Clustering Figure 2: Overview of our method. Images from the source and target domains are passed through segmentation network $G$. We decompose the data distribution of source and target domains into a set of $K$ sub-spaces with cross-domain grouping network $C$. Then discriminator $D$ distinguishes whether the data distribution for each sub-space is from the source or target domain. A variety of approaches have applied deep clustering algorithms that simultaneously discover groups in training data and perform representation learning. Chang et al. (2017) proposed to cast the clustering problem into pairwise classification using CNN. Caron et al. (2018) proposed a learning procedure that alternates between clustered images in the representation space and trains a model that assigns images to their clusters. Other approaches (Joulin, Bach, and Ponce 2012; Tao et al. 2017) localized the salient and common objects by clustering pixels in multiple images. Similarly, Collins et al. (2018) proposed deep feature factorization (DFF) to group the common part segments between images through non-negative matrix factorization (NMF) (Ding, He, and Simon 2005) on CNN features. This paper follows such a strategy to group semantic consistent data representation across the source and target domains. 
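The deep-feature-factorization strategy mentioned above can be summarised in a few lines. The sketch below is only an illustration of that prior technique (not the proposed grouping module), assuming non-negative CNN activations (e.g. after a ReLU) and using scikit-learn's NMF; stacking a source and a target feature map along the batch axis makes the factorization shared across domains.

```python
import numpy as np
from sklearn.decomposition import NMF

def dff_group_maps(feats, k=8):
    """feats: non-negative CNN activations of shape (N, C, H, W).
    Returns per-pixel group maps of shape (N, k, H, W)."""
    n, c, h, w = feats.shape
    flat = feats.transpose(0, 2, 3, 1).reshape(-1, c)       # (N*H*W, C): one row per pixel
    nmf = NMF(n_components=k, init="nndsvda", max_iter=200)
    w_pix = nmf.fit_transform(flat)                          # (N*H*W, k) pixel-to-group weights
    return w_pix.reshape(n, h, w, k).transpose(0, 3, 1, 2)   # (N, k, H, W) heatmaps
```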
## 3 Proposed Method

### 3.1 Problem Statement and Overview

Let us denote the source and target images as ${I}_{S},{I}_{T}$, where only the source data is annotated with per-pixel semantic categories ${Y}_{S}$. We seek to train a semantic segmentation network $G$ that outputs reliable pixel-wise class probabilities $P_{S},P_{T}$ on both the source and target domains, with height $h$, width $w$, and number of classes $cls$. Our goal is to train the segmentation network such that the probability distributions of the source and target domains, $P_{S}$ and $P_{T}$, are aligned, so that the network $G$ can correctly predict the pixel-level labels even for the target data ${I}_{T}$. We follow the recent study (Tsai et al. 2018) of adaptation in the output probability space, which shows better performance than adaptation in the intermediate feature space.

Conventionally, two types of domain adaptation approaches have been proposed: _global_ domain adaptation and _category-level_ domain adaptation. The former aims to align the global domain differences, while the latter aims to minimize the class-specific domain discrepancy for each category. However, _global_ domain adaptation does not take into account the inter-class variations, and _category-level_ domain adaptation relies solely on a category classifier. We therefore propose a novel method that clusters the samples into $K$ groups across the source and target domains. Concretely, we cluster the probability distribution into $K$ groups using a cross-domain grouping module, followed by _group-level_ domain alignment. By setting $K$ greater than 1, the alignment of a complicated data distribution, which is the challenge in _global_ domain adaptation, can be solved by aligning $K$ simpler data distributions. By setting $K$ less than $cls$, the domain misalignment of _category-level_ methods can be mitigated without using a category classifier trained on the source domain. In the following, we introduce our overall network architecture (Section 3.2), several constraints for cross-domain grouping (Section 3.3), and cross-domain alignment (Section 3.4).

### 3.2 Network Architecture

Fig. 2 illustrates our overall framework. Our network consists of three major components: 1) the semantic segmentation network $G$, 2) the cross-domain grouping module $C$ that clusters sub-spaces based on the output probability distribution, and 3) the discriminator $D$ for group-level domain adaptation. In the following sections, we denote the source and target domains as $l\in\{S,T\}$ unless otherwise stated.

Figure 3: Visualization of cross-domain grouping on the source (first row) and target (second row) image with $K=8$. (From left to right) Input image and clustering results. Note that color represents the $K$ different sub-spaces.

#### Segmentation network.

Following previous works (Tsai et al. 2018; Li, Yuan, and Vasconcelos 2019; Wang et al. 2020), we exploit DeepLab-V2 (2017a) with ResNet-101 (2016) pre-trained on the ImageNet (2009) dataset. The source and target images ${I}_{l}$ are fed into the segmentation network $G$, which outputs the pixel-wise class probability distribution ${P}_{l}=G({I}_{l})$. Note that ${P}_{l}$ is extracted from the segmentation network before applying a softmax layer and is upsampled to the input resolution using bilinear interpolation, similar to Tsai et al. (2018).

#### Cross-Domain grouping network.

Our cross-domain grouping network $C$ is formulated as two convolutions. We design each convolution with a $1\times 1$ kernel and a group mapping function.
The first convolution produces a 64-channel feature, followed by ReLU and batch normalization. The second convolution produces $K$ grouping scores, followed by a softmax function to output the group probability ${H}^{k}_{l}=C({P}_{l})$. We then apply element-wise multiplication between ${H}^{k}_{l}$ and each channel of ${P}_{l}$, obtaining the group-specific feature ${F}^{k}_{l}$. The cross-domain grouping network can easily be replaced with other learnable clustering methods.

#### Discriminator.

For group-level domain alignment, we feed ${F}^{k}_{l}$ into the discriminator $D$. Following Li et al. (2019), we build the discriminator from five $4\times 4$ convolutional layers of stride 2, where the numbers of channels are $\{64,128,256,512,1\}$. A leaky ReLU (2013) with negative slope 0.2 is applied after each convolutional layer except the last one.

### 3.3 Losses for Cross-Domain Grouping

Perhaps one of the most straightforward ways of grouping is to utilize existing clustering methods, _e.g_. k-means (Coates and Ng 2012) or non-negative matrix factorization (NMF) (Collins, Achanta, and Susstrunk 2018). These strategies, however, are not learnable, and thus they cannot exploit the advantages of category-level domain information. Unlike these, we present a learnable clustering module with two loss functions that take advantage of category-level domain adaptation methods. We discuss the effectiveness of our grouping compared to non-learnable models (Coates and Ng 2012; Collins, Achanta, and Susstrunk 2018) in more detail in Section 4.2. In the following, we present each loss function in detail.

#### Semantic consistency.

Our first insight about grouping is that the category distribution of each group has to be consistent between the source and target domains, so that the clustered groups can benefit from the category-level domain adaptation method. To this end, we first estimate the class distribution ${Q}^{k}_{l}$ by applying an average pooling layer to each group-level feature ${F}^{k}_{l}$, where each element of ${Q}^{k}_{l}=[q_{1}^{k},...,q_{cls}^{k}]$ indicates the probability of a particular category being contained in the $k^{th}$ group. We then encourage semantic consistency among the class distributions by utilizing the l2-norm $\begin{Vmatrix}\cdot\end{Vmatrix}_{2}$ as follows:

$\mathcal{L}_{co}(G,C)=\sum_{k\in\{1,...,K\}}\;\begin{Vmatrix}{Q}_{S}^{k}-{Q}_{T}^{k}\end{Vmatrix}^{2}.$ (1)

Minimizing loss (1) has two desirable effects. First, it encourages the class distributions of each group in the two domains to be similar, and second, it provides supervisory signals for aligning the probability distributions of the group-level features.

Figure 4: Visualization of cross-domain grouping results of (a) source and (h) target image with GT classes as corresponding colors (b,i), via (c,j) K-means, (d,k) DFF, (e,l) Ours (Iter 0k), (f,m) Ours (Iter 80k) and (g,n) Ours (Iter 120k). Compared to non-learnable models, our method can better capture semantically consistent objects across the source and target domains.

#### Orthogonality.

The semantic consistency constraint in (1) encourages the class distribution of a group to be consistent across the source and target domains. This, however, does not guarantee that the class distributions of different groups are distinct. In other words, we cannot divide the multi-modal complex distribution into several simple distributions.
To this end, we draw our second insight by introducing an orthogonality constraint such that any two class distributions ${Q}^{j_{1}}_{l}$ and ${Q}^{j_{2}}_{l}$ should be orthogonal to each other. Since the ${Q}^{k}_{l}$ are non-negative, orthogonality is realized when their cosine similarity (2) is 0. We define the cosine similarity with the l2-norm $\begin{Vmatrix}\cdot\end{Vmatrix}_{2}$ as follows:

$\cos(Q_{l}^{j_{1}},Q_{l}^{j_{2}})={Q_{l}^{j_{1}}\cdot Q_{l}^{j_{2}}\over\begin{Vmatrix}Q_{l}^{j_{1}}\end{Vmatrix}_{2}\begin{Vmatrix}Q_{l}^{j_{2}}\end{Vmatrix}_{2}},\;\;\;j_{1},j_{2}\in\{1,...,K\}.$ (2)

We then formulate an orthogonality loss for training such that

$\mathcal{L}_{orth}(G,C)=\sum_{l}\sum_{{j}_{1},{j}_{2}}\;\cos({Q}_{l}^{j_{1}},{Q}_{l}^{j_{2}}),$ (3)

where the loss is applied on each domain $l\in\{S,T\}$. By forcing the cross-domain grouping module $C$ to make the groups orthogonal, it can divide a multi-modal complex distribution into $K$ simple class distributions.

### 3.4 Losses for Cross-Domain Alignment

In this section, we present a group-level adversarial learning framework as an alternative to _global_ domain adaptation and _category-level_ domain adaptation.

#### Group-level alignment.

To achieve group-level domain alignment, a straightforward method would be to use $K$ independent discriminators, similar to conventional category-level domain alignment methods (Chen et al. 2017b; Du et al. 2019). However, since we update the grouping module $C$ while training the overall network, the cluster assignment may not be consistent across training iterations. We therefore adopt a conditional adversarial learning framework following (Long et al. 2018), combining the group-level feature ${F}^{k}_{l}$ with ${Q}^{k}_{l}$ as a condition:

$\displaystyle\begin{split}\mathcal{L}_{cadv}(G,C,D)=-\sum_{k}\;[\log(D(F_{S}^{k}\otimes Q^{k}_{S}))]\\\ \;\;-\sum_{k}\;[\log(1-D(F_{T}^{k}\otimes Q^{k}_{T}))],\end{split}$ (4)

where ${\otimes}$ represents the outer product. Note that using the group-level feature $F^{k}_{l}$ alone as input to the discriminator would be equivalent to global alignment, whereas we provide a condition by using the cross-covariance between $F^{k}_{l}$ and $Q^{k}_{l}$ as input. This leads to discriminative domain alignment for the different groups.

#### Group-level class equivalence.

For group-level adversarial learning, the existence of the same classes in both domains is desirable. However, since the pixels of particular classes dominate each image, a class imbalance problem arises: the adaptation model tends to be biased towards majority classes and to ignore minority classes (Zou et al. 2018). To alleviate this, we propose group-level class equivalence following Zhao et al. (2018). We first apply a max pooling layer to each group-level feature ${F}^{k}_{l}$ to obtain $M_{l}^{k}$, such that each element of $M_{l}^{k}=[m_{l,1}^{k},...,m_{l,cls}^{k}]$ is the maximum score of the corresponding category in group $k$. We then utilize the maximum classification scores in the source domain, ${m}_{S}^{k}$, as pseudo-labels, and train the maximum classification scores in the target domain, ${m}_{T}^{k}$, to be similar. To this end, we apply a multi-class binary cross-entropy loss for each class as follows:

$\begin{split}\mathcal{L}_{cl}(G,C)=-\sum_{k}\;\sum_{u}[m\,_{S,u}^{k}\geq\tau]\;\log(m\,_{T,u}^{k}),\end{split}$ (5)

where the Iverson bracket indicator function $[\cdot]$ evaluates to 1 when its argument is true and 0 otherwise, and $u\in\{1,...,cls\}$ denotes the category.
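To make the grouping network of Section 3.2 and the losses (1), (3) and (5) concrete, the following is a minimal PyTorch sketch. It is an illustration under stated assumptions rather than the authors' implementation: class probabilities (after softmax) are used as input so that the $Q^{k}_{l}$ are non-negative, and all module and function names are our own.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossDomainGrouping(nn.Module):
    """Two 1x1 convolutions mapping class probabilities P (B, cls, h, w)
    to K group probability maps H (B, K, h, w)."""
    def __init__(self, num_classes: int, k: int = 8):
        super().__init__()
        self.conv1 = nn.Conv2d(num_classes, 64, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(64)
        self.conv2 = nn.Conv2d(64, k, kernel_size=1)

    def forward(self, p):
        h = self.bn1(F.relu(self.conv1(p)))       # conv -> ReLU -> BN, as described above
        return torch.softmax(self.conv2(h), dim=1)


def group_features(p, h):
    """F^k = H^k * P (element-wise), one (B, cls, h, w) tensor per group k."""
    return [h[:, k:k + 1] * p for k in range(h.shape[1])]


def class_distributions(f_groups):
    """Q^k: spatial average of each group feature -> (B, cls) per group."""
    return [f.mean(dim=(2, 3)) for f in f_groups]


def semantic_consistency_loss(q_src, q_tgt):                  # Eq. (1)
    return sum(((qs - qt) ** 2).sum(dim=1).mean() for qs, qt in zip(q_src, q_tgt))


def orthogonality_loss(q_groups):                              # Eqs. (2)-(3), one domain
    loss = 0.0
    for j1 in range(len(q_groups)):
        for j2 in range(j1 + 1, len(q_groups)):
            loss = loss + F.cosine_similarity(q_groups[j1], q_groups[j2], dim=1).mean()
    return loss


def class_equivalence_loss(f_src_groups, f_tgt_groups, tau=0.05):   # Eq. (5)
    loss = 0.0
    for f_s, f_t in zip(f_src_groups, f_tgt_groups):
        m_s = f_s.amax(dim=(2, 3))                             # per-class max score, source
        m_t = f_t.amax(dim=(2, 3)).clamp_min(1e-7)             # per-class max score, target
        mask = (m_s >= tau).float()                            # Iverson bracket [m_S >= tau]
        loss = loss - (mask * torch.log(m_t)).sum(dim=1).mean()
    return loss
```

The orthogonality term of (3) is then the sum of `orthogonality_loss` over both domains; the sum is restricted to distinct group pairs, since $\cos(Q,Q)=1$ would only add a constant.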
Note that the threshold parameter $\tau$ merely excludes scores with very low probability.

### 3.5 Training

The overall loss function of our approach can be written as $\displaystyle\begin{split}L(G,C,D)&=L_{seg}(G)+\lambda_{co}L_{co}(G,C)+\lambda_{cl}L_{cl}(G,C)\\\ &+\lambda_{orth}L_{orth}(G,C)+\lambda_{cadv}L_{cadv}(G,C,D),\end{split}$ (6) where $L_{seg}$ is the supervised cross-entropy loss for the semantic segmentation network on the source data, and $\lambda_{co},\lambda_{orth},\lambda_{cadv}$ and $\lambda_{cl}$ are balancing parameters for the different losses. We then solve the following min-max problem to optimize $G,C$ and $D$: $\begin{split}\min_{G,C}\max_{D}L(G,C,D).\end{split}$ (7)

## 4 Experiments

### 4.1 Experimental Setting

#### Implementation details.

The proposed method was implemented with the PyTorch library (Paszke et al. 2017) and run on a PC with a single RTX Titan GPU. We utilize BDL (Li, Yuan, and Vasconcelos 2019) as our baseline model following conventional work (Wang et al. 2020), including its self-supervised learning and image translation framework. To train the segmentation network, we utilize stochastic gradient descent (SGD) (LeCun et al. 1998), with the learning rate set to $2.5\times{10}^{-4}$. For the grouping network, we utilize SGD with a learning rate of $1\times{10}^{-3}$. Both learning rates are decayed with the “poly” learning rate policy with the power fixed to 0.9, and the momentum is set to 0.9. For discriminator training, we use the Adam optimizer (Kingma and Ba 2014) with an initial learning rate of $1\times{10}^{-4}$. We jointly train our segmentation network, grouping network, and discriminator using (7) for a total of $120k$ iterations. We randomly pair source and target images in each iteration. Through cross-validation using a grid search in log scale, we set the hyper-parameters $\lambda_{co},\lambda_{orth},\lambda_{cadv},\lambda_{cl}$ and $\tau$ to 0.001, 0.001, 0.001, 0.0001 and 0.05, respectively.

#### Datasets.

For our experiments, we use GTA5 (Richter et al. 2016) and SYNTHIA (Ros et al. 2016) as source datasets. The GTA5 dataset (Richter et al. 2016) contains 24,966 images at 1914$\times$1052 resolution. We resize the images to 1280 $\times$ 760 following other work (Tsai et al. 2018). For SYNTHIA (Ros et al. 2016), we use the SYNTHIA-RAND-CITYSCAPES set of 9,400 images at 1280$\times$760 resolution. We use Cityscapes (Cordts et al. 2016) as the target dataset, which consists of 2,975 training, 500 validation and 1,525 test images. We train our network on the training set and evaluate on the validation set. We resize the images to 1024 $\times$ 512 for both training and testing as in (Li, Yuan, and Vasconcelos 2019). We evaluate the class-level intersection over union (IoU) and mean IoU (mIoU) (Everingham et al. 2015).

### 4.2 Analysis

We first visualize each group obtained through cross-domain grouping in Fig. 3. The clustered groups for various $k$ show that our network clusters semantically consistent regions across the source and target domains through (1). Also, the clustered regions for different $k$ indicate that our network effectively divides the regions into different groups using (2).

Figure 5: Visualization of the output probability distribution of (a) a source and (c) a target image with GT classes as corresponding colors (b,d) via t-SNE using (e) the non-adapted model, (f) the baseline (Li, Yuan, and Vasconcelos 2019) and (g) ours. Our method effectively reduces the discrepancy between the two domains, while the others fail (highlighted with a circle).
Figure 6: Ablation study for domain alignment with different numbers of clusters $K$ on GTA5 $\rightarrow$ Cityscapes.

We further compare our grouping network with the k-means clustering algorithm (Coates and Ng 2012) and deep feature factorization (Collins, Achanta, and Susstrunk 2018), which are non-trainable methods. As shown in Fig. 4, our method better captures object boundaries and semantically consistent objects across the source and target domains compared to these non-learnable methods. We further visualize each clustered group obtained through cross-domain grouping over an increasing number of iterations. As the number of iterations increases, cross-domain grouping and group-level domain alignment share complementary information, which decomposes the data distribution and aligns the domains within each grouped sub-space in a joint and mutually boosting manner. In Fig. 5, we show the t-SNE visualization (van der Maaten and Hinton 2008) of the output probability distribution of our method compared to the non-adapted method and the baseline (Li, Yuan, and Vasconcelos 2019). The result shows that our method effectively aligns the distributions of the source and target domains, while the others fail to reduce the domain discrepancy. Furthermore, we observe that our model successfully groups minority categories (e.g., traffic signs in yellow) while the others fail, which indicates that the loss (5) can alleviate the class imbalance problem.

### 4.3 Ablation Study

Fig. 6 shows the results of ablation experiments with different numbers of groups $K$. Note that the results with $K=1$ are equivalent to global domain adaptation and serve as a baseline. The results show that our method with various values of $K$ consistently outperforms the baseline, which shows the effectiveness of our group-level domain alignment. The performance improves as $K$ increases from $1$; after the best performance is reached at $K=8$, larger values show no significant difference. The lower performance with larger $K$ indicates that over-clustering can actually degrade performance, as in conventional category-level adaptation methods. Since $K=8$ gives the best performance on both GTA5 $\rightarrow$ Cityscapes and SYNTHIA $\rightarrow$ Cityscapes, we set $K$ to $8$ for all experiments. Table 1 shows the results of ablation experiments that validate the effects of the proposed loss functions. It verifies the effectiveness of each loss function, including group-level domain adaptation, group-level semantic consistency, group-level orthogonality, and group-level class equivalence. Using all of the proposed loss functions yields the best results. We also find that adding group-level orthogonality leads to a large improvement in performance, which demonstrates that we effectively divide the multi-modal complex distribution into $K$ simple distributions for group-level domain alignment.

Method | ${L}_{seg}$ | ${L}_{cadv}$ | ${L}_{co}$ | ${L}_{orth}$ | ${L}_{cl}$ | mIoU
---|---|---|---|---|---|---
Source only | ✓ | | | | | 36.6
Ours | ✓ | ✓ | | | | 48.8
Ours | ✓ | ✓ | ✓ | | | 49.1
Ours | ✓ | ✓ | ✓ | ✓ | | 50.8
Ours | ✓ | ✓ | ✓ | ✓ | ✓ | 51.5

Table 1: Ablation study for domain alignment with different loss functions on GTA5 $\rightarrow$ Cityscapes.

Figure 7: Qualitative results of domain adaptation on GTA5 $\rightarrow$ Cityscapes. (From left to right) Input image, ground truth, non-adapted result, baseline result, and our result.
GTA5 $\rightarrow$ Cityscapes | Road | SW | Build | Wall | Fence | Pole | TL | TS | Veg. | Terrain | Sky | PR | Rider | Car | Truck | Bus | Train | Motor | Bike | mIoU
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Without Adaptation | 75.8 | 16.8 | 77.2 | 12.5 | 21.0 | 25.5 | 30.1 | 20.1 | 81.3 | 24.6 | 70.3 | 53.8 | 26.4 | 49.9 | 17.2 | 25.9 | 6.5 | 25.3 | 36.0 | 36.6
Tsai et al. (2018) | 86.5 | 36.0 | 79.9 | 23.4 | 23.3 | 23.9 | 35.2 | 14.8 | 83.4 | 33.3 | 75.6 | 58.5 | 27.6 | 73.7 | 32.5 | 35.4 | 3.9 | 30.1 | 28.1 | 42.4
Wu et al. (2018) | 85.0 | 30.8 | 81.3 | 25.8 | 21.2 | 22.2 | 25.4 | 26.6 | 83.4 | 36.7 | 76.2 | 58.9 | 24.9 | 80.7 | 29.5 | 42.9 | 2.5 | 26.9 | 11.6 | 41.7
Chang et al. (2019) | 91.5 | 47.5 | 82.5 | 31.3 | 25.6 | 33.0 | 33.7 | 25.8 | 82.7 | 28.8 | 82.7 | 62.4 | 30.8 | 85.2 | 27.7 | 34.5 | 6.4 | 25.2 | 24.4 | 45.4
Li et al. (2019) | 91.0 | 44.7 | 84.2 | 34.6 | 27.6 | 30.2 | 36.0 | 36.0 | 85.0 | 43.6 | 83.0 | 58.6 | 31.6 | 83.3 | 35.3 | 49.7 | 3.3 | 28.8 | 35.6 | 48.5
Luo et al. (2019) | 87.0 | 27.1 | 79.6 | 27.3 | 23.3 | 28.3 | 35.5 | 24.2 | 83.6 | 27.4 | 74.2 | 58.6 | 28.0 | 76.2 | 33.1 | 36.7 | 6.7 | 31.9 | 31.4 | 43.2
Du et al. (2019) | 90.3 | 38.9 | 81.7 | 24.8 | 22.9 | 30.5 | 37.0 | 21.2 | 84.8 | 38.8 | 76.9 | 58.8 | 30.7 | 85.7 | 30.6 | 38.1 | 5.9 | 28.3 | 36.9 | 45.4
Vu et al. (2019) | 90.3 | 38.9 | 81.7 | 24.8 | 22.9 | 30.5 | 37.0 | 21.2 | 84.8 | 38.8 | 76.9 | 58.8 | 30.7 | 85.7 | 30.6 | 38.1 | 5.9 | 28.3 | 36.9 | 45.4
Tsai et al. (2019) | 92.3 | 51.9 | 82.1 | 29.2 | 25.1 | 24.5 | 33.8 | 33.0 | 82.4 | 32.8 | 82.2 | 58.6 | 27.2 | 84.3 | 33.4 | 46.3 | 2.2 | 29.5 | 32.3 | 46.5
Huang et al. (2020) | 92.4 | 55.3 | 82.3 | 31.2 | 29.1 | 32.5 | 33.2 | 35.6 | 83.5 | 34.8 | 84.2 | 58.9 | 32.2 | 84.7 | 40.6 | 46.1 | 2.1 | 31.1 | 32.7 | 48.6
Wang et al. (2020) | 90.6 | 44.7 | 84.8 | 34.3 | 28.7 | 31.6 | 35.0 | 37.6 | 84.7 | 43.3 | 85.3 | 57.0 | 31.5 | 83.8 | 42.6 | 48.5 | 1.9 | 30.4 | 39.0 | 49.2
Ours | 91.1 | 52.8 | 84.6 | 32.0 | 27.1 | 33.8 | 38.4 | 40.3 | 84.6 | 42.8 | 85.0 | 64.2 | 36.5 | 87.3 | 44.4 | 51.0 | 0.0 | 37.3 | 44.9 | 51.5

Table 2: Quantitative results of domain adaptation on GTA5 $\rightarrow$ Cityscapes.

SYNTHIA $\rightarrow$ Cityscapes | Road | SW | Build | TL | TS | Veg. | Sky | PR | Rider | Car | Bus | Motor | Bike | mIoU
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Tsai et al. (2018) | 84.3 | 42.7 | 77.5 | 4.7 | 7.0 | 77.9 | 82.5 | 54.3 | 21.0 | 72.3 | 32.2 | 18.9 | 32.3 | 46.7
Li et al. (2019) | 86.0 | 46.7 | 80.3 | 14.1 | 11.6 | 79.2 | 81.3 | 54.1 | 27.9 | 73.7 | 42.2 | 25.7 | 45.3 | 51.4
Luo et al. (2019) | 82.5 | 24.0 | 79.4 | 16.5 | 12.7 | 79.2 | 82.8 | 58.3 | 18.0 | 79.3 | 25.3 | 17.6 | 25.9 | 46.3
Du et al. (2019) | 84.6 | 41.7 | 80.8 | 11.5 | 14.7 | 80.8 | 85.3 | 57.5 | 21.6 | 82.0 | 36.0 | 19.3 | 34.5 | 50.0
Huang et al. (2020) | 86.2 | 44.9 | 79.5 | 9.4 | 11.8 | 78.6 | 86.5 | 57.2 | 26.1 | 76.8 | 39.9 | 21.5 | 32.1 | 50.0
Wang et al. (2020) | 83.0 | 44.0 | 80.3 | 17.1 | 15.8 | 80.5 | 81.8 | 59.9 | 33.1 | 70.2 | 37.3 | 28.5 | 45.8 | 52.1
Ours | 90.7 | 49.5 | 84.5 | 33.6 | 38.9 | 84.6 | 84.6 | 59.8 | 33.3 | 80.8 | 51.5 | 37.6 | 45.9 | 54.1

Table 3: Quantitative results of domain adaptation on SYNTHIA $\rightarrow$ Cityscapes.

### 4.4 Comparison with state-of-the-art methods

#### GTA5 $\rightarrow$ Cityscapes.

In the following, we evaluate our method on GTA5 $\rightarrow$ Cityscapes in comparison to state-of-the-art methods, including no adaptation, global adaptation (Tsai et al. 2018), image-level adaptation (Wu et al. 2018; Chang et al. 2019; Li, Yuan, and Vasconcelos 2019) and category-level domain alignment (Luo et al. 2019; Du et al. 2019; Vu et al. 2019; Tsai et al. 2019; Huang et al. 2020; Wang et al. 2020). As shown in Table 2, our method outperforms all other models on the categories “car, truck, bus, motor, and bike”, which share similar appearances. We also observe that our model achieves performance improvements on the categories “pole and traffic sign”. This demonstrates that our group-level class equivalence effectively addresses the class imbalance problem.

#### SYNTHIA $\rightarrow$ Cityscapes.

We further compare our method to the state-of-the-art methods (Luo et al. 2019; Tsai et al. 2018; Du et al. 2019; Li, Yuan, and Vasconcelos 2019; Huang et al. 2020; Wang et al. 2020) on SYNTHIA $\rightarrow$ Cityscapes, where the 13 common classes between SYNTHIA (Ros et al. 2016) and Cityscapes (Cordts et al.
2016) datasets are evaluated. As shown in Table 3, our method outperforms the conventional methods. We observe that our model achieves improvements similar to those in the GTA5 $\rightarrow$ Cityscapes scenario. Compared to the baseline (Li, Yuan, and Vasconcelos 2019), our model achieves a large performance improvement on “traffic sign” and “traffic light”.

## 5 Conclusion

We have introduced cross-domain grouping and alignment for domain adaptive semantic segmentation. The key idea is to apply an intermediate grouping module such that a multi-modal data distribution can be divided into several simple distributions. We then apply group-level domain alignment across the source and target domains, where the grouping network and segmentation network are trained in a joint and mutually boosting manner using semantic consistency and orthogonality constraints. To solve the class imbalance problem, we have further introduced a group-level class equivalence constraint, resulting in state-of-the-art performance on domain adaptive semantic segmentation. We believe our approach will facilitate further advances in unsupervised domain adaptation on various computer vision tasks.

#### Acknowledgements.

This research was supported by the R&D program for Advanced Integrated-intelligence for Identification (AIID) through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (NRF-2018M3E3A1057289).

## References

* Arora et al. (2017) Arora, S.; Ge, R.; Liang, Y.; Ma, T.; and Zhang, Y. 2017. Generalization and equilibrium in generative adversarial nets (gans). In _International conference on machine learning_ , 224–232. * Caron et al. (2018) Caron, M.; Bojanowski, P.; Joulin, A.; and Douze, M. 2018. Deep clustering for unsupervised learning of visual features. In _Proceedings of the European Conference on Computer Vision_ , 132–149. * Chang et al. (2017) Chang, J.; Wang, L.; Meng, G.; Xiang, S.; and Pan, C. 2017. Deep adaptive image clustering. In _Proceedings of the IEEE International Conference on Computer Vision_ , 5879–5887. * Chang et al. (2019) Chang, W.-L.; Wang, H.-P.; Peng, W.-H.; and Chiu, W. 2019. All about structure: Adapting structural information across domains for boosting semantic segmentation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 1900–1909. * Chen et al. (2017a) Chen, L. C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; and Yuille, A. L. 2017a. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. _IEEE transactions on Pattern Analysis and Machine Intelligence_ 40(4): 834–848. * Chen et al. (2017b) Chen, Y. H.; Chen, W. Y.; Chen, Y. T.; Tsai, B. C.; Wang, Y. C. F.; and Sun, M. 2017b. No more discrimination: Cross city adaptation of road scene segmenters. In _Proceedings of the IEEE International Conference on Computer Vision_ , 1992–2001. * Coates and Ng (2012) Coates, A.; and Ng, A. Y. 2012. Learning feature representations with k-means. In _Neural networks: Tricks of the trade_ , 561–580. * Collins, Achanta, and Susstrunk (2018) Collins, E.; Achanta, R.; and Susstrunk, S. 2018. Deep feature factorization for concept discovery. In _Proceedings of the European Conference on Computer Vision_ , 336–352. * Cordts et al. (2016) Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; and Schiele, B. 2016. The cityscapes dataset for semantic urban scene understanding.
In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 3213–3223. * Deng et al. (2009) Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and Fei-Fei, L. 2009. Imagenet: A large-scale hierarchical image database. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 248–255. * Ding, He, and Simon (2005) Ding, C.; He, X.; and Simon, H. D. 2005. On the equivalence of nonnegative matrix factorization and spectral clustering. In _Proceedings of the SIAM International Conference on Data Mining_ , 606–610. * Du et al. (2019) Du, L.; Tan, J.; Yang, H.; Feng, J.; Xue, X.; Zheng, Q.; Ye, X.; and Zhang, X. 2019\. SSF-DAN: Separated semantic feature based domain adaptation network for semantic segmentation. In _Proceedings of the IEEE International Conference on Computer Vision_ , 982–991. * Everingham et al. (2015) Everingham, M.; Eslami, S. M. A.; Gool, L. J. V.; Williams, C. K. I.; Winn, J. M.; and Zisserman, A. 2015. The pascal visual object classes challenge: A retrospective. _International Journal of Computer Vision_ 111(1): 98–136. * Ganin and Lempitsky (2015) Ganin, Y.; and Lempitsky, V. 2015. Unsupervised domain adaptation by backpropagation. In _International conference on machine learning_ , 1180–1189. * Goodfellow et al. (2014) Goodfellow, I. J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative adversarial nets. In _Advances in Neural Information Processing Systems_ , 2672–2680. * He et al. (2016) He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 770–778. * Hoffman et al. (2018) Hoffman, J.; Tzeng, E.; Park, T.; Zhu, J. Y.; Isola, P.; Saenko, K.; Efros, A. A.; and Darrell, T. 2018. Cycada:Cycle-consistent adversarial domain adaptation. In _International conference on machine learning_ , 1989–1998. * Hoffman et al. (2016) Hoffman, J.; Wang, D.; Yu, F.; and Darrell, T. 2016. Fcns in the wild:Pixel-level adversarial and constraint-based adaptation. In _arXiv preprint arXiv:1612.02649_. * Huang et al. (2020) Huang, J.; Lu, S.; Guan, D.; and Zhang, X. 2020. Contextual-relation consistent domain adaptation for semantic segmentation. In _Proceedings of the European Conference on Computer Vision_ , 705–722. * Joulin, Bach, and Ponce (2012) Joulin, A.; Bach, F.; and Ponce, J. 2012. Multi-class cosegmentation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 542–549. * karen Simonyan and Zisserman (2015) karen Simonyan; and Zisserman, A. 2015. Very deep convolutional networks for large-scale image recognition. In _arXiv preprint arXiv:1409.1556_ , –. * Kingma and Ba (2014) Kingma, D. P.; and Ba, J. L. 2014. Adam: A method for stochastic optimization. In _arXiv preprint arXiv:1412.6980_ , –. * Krizhevsky, Sutskever, and Hinton (2012) Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. Imagenet classification with deep convolutional neural networks. In _Advances in Neural Information Processing Systems_ , 84–90. * LeCun et al. (1998) LeCun, Y.; Bottou, L.; Bengio, Y.; and Haffner, P. 1998. Gradient-based learning applied to document recognition. _Proceedings of the IEEE_ 86(11): 2278–2324. * Li et al. (2020) Li, G.; Kang, G.; Liu, W.; Wei, Y.; and Yang, Y. 2020. Content-consistent matching for domain adaptive semantic segmentation. In _Proceedings of the European Conference on Computer Vision_ , 440–456. 
* Li, Yuan, and Vasconcelos (2019) Li, Y.; Yuan, L.; and Vasconcelos, N. 2019. Bidirectional learning for domain adaptation of semantic segmentation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 6936–6945. * Long and Darrell (2015) Long, J.; and Darrell, E. S. T. 2015. Fully convolutional networks for semantic segmentation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 3431–3440. * Long et al. (2018) Long, M.; Cao, Z.; Wang, J.; and Jordan, M. I. 2018. Conditional adversarial domain adaptation. In _Advances in Neural Information Processing Systems_ , 1640–1650. * Luo et al. (2019) Luo, Y.; Zheng, L.; Guan, T.; Yu, J.; and Yang, Y. 2019. Taking a closer look at domain shift: Category-level adversaries for semantics consistent domain adaptation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2507–2516. * Maas, Hannun, and Ng (2013) Maas, A. L.; Hannun, A. Y.; and Ng, A. Y. 2013. Rectifier nonlinearities improve neural network acoustic models. In _International conference on machine learning_ , 1–3. * Paszke et al. (2017) Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; and Lerer, A. 2017. Automatic differentiation in pytorch. –. * Richter et al. (2016) Richter, S. R.; Vineet, V.; Roth, S.; and Koltun, V. 2016. Playing for data: Ground truth from computer games. In _Proceedings of the European Conference on Computer Vision_ , 102–118. * Ros et al. (2016) Ros, G.; Sellart, L.; Materzynska, J.; Vazquez, D.; and Lopez, A. M. 2016. The SYNTHIA dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 3234–3243. * Shimodaira (2000) Shimodaira, H. 2000. Improving predictive inference under covariate shift by weighting the log-likelihood function. _Journal of statistical planning and inference_ 90(2): 227–244. * Tao et al. (2017) Tao, Z.; Liu, H.; Fu, H.; and Fu, Y. 2017. Image cosegmentation via saliency-guided constrained clustering with cosine similarity. In _Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence_ , 4285–4291. * Tsai et al. (2018) Tsai, Y.-H.; Hung, W.-C.; Schulter, S.; Sohn, K.; Yang, M.-H.; and Chandraker, M. 2018. Learning to adapt structured output space for semantic segmentation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 7472–7481. * Tsai et al. (2019) Tsai, Y.-H.; Sohn, K.; Schulter, S.; and Chandraker, M. 2019. Domain adaptation for structured Output via discriminative Patch representations. In _Proceedings of the IEEE International Conference on Computer Vision_ , 1456–1465. * Tzeng et al. (2017) Tzeng, E.; Hoffman, J.; Saenko, K.; and Darrell, T. 2017. Adversarial discriminative domain adaptation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 7167–7176. * van der Maaten and Hinton (2008) van der Maaten, L.; and Hinton, G. 2008. Visualizing data using t-SNE. _Journal of Machine Learning Research_ 9(Nov): 2579–2605. * Vu et al. (2019) Vu, T.-H.; Jain, H.; Bucher, M.; Cord, M.; and Perez, P. 2019. Advent: Adversarial entropy minimization for domain adaptation in semantic segmentation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2517–2526. * Wang et al. (2020) Wang, Z.; Yu, M.; Wei, Y.; Feris, R.; Xiong, J.; Hwu, W. M.; Huang, T. S.; and Shi, H. 
2020. Differential treatment for stuff and things: A simple unsupervised domain adaptation method for semantic segmentation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 12635–12644. * Wu et al. (2018) Wu, Z.; Han, X.; Lin, Y.-L.; Uzunbas, M. G.; Goldstein, T.; Lim, S. N.; and Davis, L. S. 2018. Dcan: Dual channel-wise alignment networks for unsupervised scene adaptation. In _Proceedings of the European Conference on Computer Vision_ , 518–534. * Yu and Koltun (2015) Yu, F.; and Koltun, V. 2015. Multi-scale context aggregation by dilated convolutions. In _arXiv preprint arXiv:1511.07122_ , –. * Zhao et al. (2017) Zhao, H.; Shi, J.; Qi, X.; Wang, X.; and Jia, J. 2017. Pyramid scene parsing network. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2881–2890. * Zhao, Liang, and Wei (2018) Zhao, X.; Liang, S.; and Wei, Y. 2018. Pseudo mask augmented object detection. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 4061–4070. * Zou et al. (2018) Zou, Y.; Yu, Z.; Kumar, B. V.; and Wang, J. 2018. Unsupervised domain adaptation for semantic segmentation via class-balanced self-training. In _Proceedings of the European Conference on Computer Vision_ , 289–305.
# An Evaluation of Large Language Models in Bioinformatics Research

Hengchuang Yin, Zhonghui Gu, Fanhao Wang, Yiparemu, Yanqiao Zhu, Xinming Tu, Xian-Sheng Hua, Xiao Luo, and Yizhou Sun

Hengchuang Yin, Zhonghui Gu and Fanhao Wang are with Peking University, Beijing, 100871, China. (e-mail: <EMAIL_ADDRESS>; <EMAIL_ADDRESS>; <EMAIL_ADDRESS>). Yiparemu is with Henan University School of Stomatology, Kaifeng, China. (e-mail: <EMAIL_ADDRESS>). Yanqiao Zhu, Xiao Luo, and Yizhou Sun are with the Department of Computer Science, University of California, Los Angeles, USA. (e-mail: <EMAIL_ADDRESS>; <EMAIL_ADDRESS>; <EMAIL_ADDRESS>). Xinming Tu is with the University of Washington, USA. (e-mail: <EMAIL_ADDRESS>). Xian-Sheng Hua is with Terminus Group, Beijing 100027, China. (e-mail: <EMAIL_ADDRESS>).

###### Abstract

Large language models (LLMs) such as ChatGPT have gained considerable interest across diverse research communities. Their notable ability for text completion and generation has inaugurated a novel paradigm for language-interfaced problem solving. However, the potential and efficacy of these models in bioinformatics remain incompletely explored. In this work, we study the performance of LLMs on a wide spectrum of crucial bioinformatics tasks. These tasks include the identification of potential coding regions, extraction of named entities for genes and proteins, detection of antimicrobial and anti-cancer peptides, molecular optimization, and resolution of educational bioinformatics problems. Our findings indicate that, given appropriate prompts, LLMs like GPT variants can successfully handle most of these tasks. In addition, we provide a thorough analysis of their limitations in the context of complicated bioinformatics tasks. In conclusion, we believe that this work can provide new perspectives and motivate future research in the fields of LLM applications, AI for Science, and bioinformatics.

###### Index Terms: Large Language Models, Bioinformatics, ChatGPT, Named-entity recognition, Molecular optimization

## I Introduction

Large language models (LLMs) [1, 2, 3] such as GPT variants, which are neural network models trained on large amounts of unlabeled data, have recently attracted significant attention across a variety of research communities. Trained with a combination of unsupervised pre-training, supervised fine-tuning, and human feedback, LLMs can generate fluid and reasonable contextual conversations with text-based input queries, i.e., prompts. Such a natural language interface offers a versatile problem-solving platform, addressing tasks from text drafting to mathematical problem solving [4, 5], which exceeds the capabilities of traditional single-purpose natural language processing models [6, 7, 8]. It is particularly noted that LLMs have demonstrated remarkable proficiency in human-like language generation and exhibited a discernible level of reasoning ability [9, 10]. In an endeavor to comprehensively understand the capabilities of LLMs, numerous studies have assessed their performance across a variety of language tasks [11, 12]. These tasks include reasoning [13, 5], machine translation [14], and question-answering [15]. Furthermore, the scope of research has been expanded to encompass broader domains. For instance, the applicability of LLMs in AI-assisted medical education has been explored through their ability to answer questions from medical licensing exams [16, 17, 18].
Collectively, these studies suggest that LLMs have the potential to achieve new state-of-the-art performance in traditional tasks and can even establish a new paradigm of research based on interactions with a language model. So far, a wide range of language models have achieved great success on bioinformatics tasks [19], such as evolutionary scale modeling (ESM) [20] and pre-trained models for proteins [21]. These pre-trained models can be used to predict structure, functionality, and other protein properties, or to convert proteins into embeddings for downstream tasks. For example, AMP-BERT [22] is a fine-tuned model built on a protein language model [21], which achieves remarkable performance in antimicrobial peptide function prediction. However, these previous studies often utilize pre-trained language models specific to their domain, and hence they may not be as powerful as modern LLMs that are trained using a wide-ranging corpus of text. Moreover, their research typically concentrates on a limited set of tasks, resulting in a lack of systematic and comprehensive investigations into the potential of LLMs for broader bioinformatics research. Evaluating bioinformatics tasks using LLMs can offer a new, effective approach to understanding and solving complex bioinformatics problems, and thus is a research direction of great significance. In this work, we investigate the potential applications of LLMs on several popular bioinformatics tasks. Our investigation includes a diverse set of tasks that provide a comprehensive evaluation of LLMs within the context of bioinformatics. These tasks comprise the identification of potential coding regions, extraction of named entities for genes and proteins, detection of antimicrobial and anti-cancer peptides, molecular optimization, and addressing educational problems within bioinformatics. To conduct our experiments, we represent chemical compounds, DNA and protein sequences in text format and cast each problem as a natural language processing task. Then, we feed them into LLMs to generate predictions. Our experiments indicate that, given appropriate prompts, LLMs can partially solve these tasks, underscoring the potential utility of LLMs in bioinformatics research. Further analysis of the extensive evaluation leads to three observations. Firstly, with appropriate prompts, LLMs can achieve performance on par with competitive baselines for simple bioinformatics tasks. Secondly, the models have difficulties when faced with more complex tasks; for instance, they may generate non-existing gene name mentions for gene and protein named entity recognition. Lastly, some prompts and model variants can lead to fluctuating performance, which indicates that their choice requires further investigation. By shedding light on the strengths and limitations of LLMs in bioinformatics, we hope this work can enhance the utility of LLMs in supporting data-driven research and problem solving within bioinformatics and pave the way for future research directions.

## II Related Work

### II-A Large Language Models (LLMs)

With the development of ChatGPT, LLMs, which involve billions of parameters, have become popular in the artificial intelligence community. Among various LLMs, ChatGPT has made an impressive impact by showing remarkable performance in zero-shot human-machine interaction [23]. GPT-3.5 and GPT-4 are the two mainstream models that ChatGPT currently offers. The GPT-3.5 models can understand and produce both natural language and code.
There may not be much of a difference between GPT-3.5 and GPT-4 on normal tasks. When a task's complexity reaches a certain level, though, GPT-4 distinguishes itself from the earlier GPT-3.5 by being much more dependable, inventive, and able to handle highly complicated instructions111https://openai.com/research/gpt-4,222https://platform.openai.com/docs/guides/fine-tuning. Some new models, such as Llama 2 (70B) [24] and Google Bard [25], have been proposed, and their performance remains to be tested. To understand the capacity and limitations of LLMs, the evaluation of these models has received extensive interest, especially on NLP tasks such as reasoning [13], machine translation [14] and question answering [15]. For example, ChatGPT has shown extraordinary performance on zero-shot dialogue understanding with the help of proper prompts [26, 27]. Although ChatGPT suffers from limitations on several tasks, such evaluations can also help improve ChatGPT as new versions are released. However, existing works on the evaluation of LLMs mostly focus on NLP tasks [28, 29, 30, 31], while their evaluation for bioinformatics is still underexplored. As a result, we aim to evaluate LLMs on a range of bioinformatics tasks to give insights to the AI for science community.

### II-B Language Models for Bioinformatics

Bioinformatics has been an important field involving the collection and analysis of biological data such as DNA sequences and protein structures. Language models have been applied to solve various bioinformatics tasks, such as transforming amino acid sequences into embeddings using protein language models, which can then be used for downstream protein understanding tasks. Recently, one of the most impressive applications is AlphaFold2 [32], which employs a transformer architecture to predict protein structures from amino acid sequences. It has demonstrated remarkable accuracy in predicting protein folding, outperforming traditional methods and highlighting the potential of large-scale models in this domain. Language models have inspired the development of drug discovery and design tasks as well [33]. For instance, inspired by the success of BERT [34], MolBERT [35] was developed for predicting molecular properties and generating novel molecular structures. MolBERT demonstrates the capacity of language models to generate de novo molecules and predict their physicochemical properties, which can be invaluable in the drug discovery process. In this paper, we study the power of LLMs to directly solve various bioinformatics problems, which can inspire the development of AI for science. We also notice that there are several works on the evaluation of LLMs related to bioinformatics [36, 37]. [37] investigates the performance of ChatGPT on bioinformatics programming tasks in educational settings. [36] produces a tool based on ChatGPT for bioinformatics education. However, these works are preliminary explorations of ChatGPT for bioinformatics beginners, while our work goes deeper and explores more complicated and typical tasks. [38] primarily focuses on the evolution of biomedical text processing problems in large language models. In contrast, our work involves a broad bioinformatics field, including identifying potential coding regions of viruses, detecting antimicrobial and anticancer peptides, gene and protein named entity extraction, molecular modification, and educational bioinformatics problem-solving sets.
[39] introduces a large language model for scientific knowledge mining, while our work focuses on the evaluation of GPT models on various bioinformatics problems. [40] designs a domain-specific language model for biomedical problems, while our work explores more areas in bioinformatics. We believe our work can help gain a more profound understanding of LLMs' potential in advancing the field of bioinformatics, opening new avenues for data analysis, hypothesis generation, and further automation of complex computational tasks in this area.

## III Evaluated Tasks

### III-A Identifying Potential Coding Regions from DNA Sequences

A coding sequence (CDS) [41, 42] is a region of a DNA or RNA sequence that contains the information required to encode a protein. Identifying potential coding regions within viral sequences is a crucial task, as it aids in understanding the biological characteristics of the corresponding virus as well as the expression of DNA sequences and genes [43]. We formalize the task as follows:

Task 1. (Identifying Coding Regions) Our objective is to leverage a machine learning approach to identify as many potential coding regions as possible in the given DNA sequence. Prompts: Based on your knowledge, describe as many possible potential coding regions if they exist in frame 1.

### III-B Identifying Antimicrobial Peptide

Antimicrobial resistance poses a significant threat to public health [44]. Antimicrobial peptides (AMPs) [45] have emerged as a promising solution, owing to their broad-spectrum mechanism of action. Typically, AMPs exterminate bacteria and other hazardous organisms by interfering with vital biological components, such as the cell membrane and DNA replication machinery [46]. Therefore, the identification of candidate peptides with antimicrobial functions is crucial for developing novel therapeutics. The task is formalized as follows:

Task 2. (Identifying Antimicrobial Peptide) Given the training set, we train a machine learning model which can identify antimicrobial peptides among massive sets of protein sequences. Prompts: You are a peptide design researcher. Please tell me if the given peptide sequence has antimicrobial properties.

### III-C Identifying Anti-cancer Peptide

The majority of anti-cancer drugs have inadequate selectivity, killing both normal and cancer cells without discrimination [47]. However, anti-cancer peptides (ACPs) function as molecularly targeted peptides that can directly bind to specific cancer cells or organelle membranes, or as binding peptides associated with anti-cancer drugs [48]. As small peptides composed of amino acid sequences, ACPs are cancer-selective and cytotoxic [49]. Therefore, they have emerged as a novel therapeutic strategy that specifically targets cancer cells [50]. Considering the extensive time and high costs associated with identifying ACPs through biochemical experimentation, the development of deep learning algorithms for ACP identification is vital. We formalize the task [51] as follows:

Task 3. (Identifying Anti-cancer Peptide) Given a training set, we train a machine learning model which can identify anti-cancer peptides among massive sets of protein sequences. Prompts: You are a peptide design researcher. Please tell me whether a peptide with the sequence N could be an anti-cancer peptide.
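Before the remaining tasks are described, the following minimal sketch shows how a prompt of this form can be sent to a GPT model through the openai Python client. The model identifier, the exact message wording, and the answer handling are illustrative assumptions, not the scripts used in this study.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

def classify_peptide(sequence: str, task: str = "anti-cancer") -> str:
    """Send one classification prompt for a peptide sequence and return the raw reply.
    The prompt wording mirrors the task descriptions above; the model id is a placeholder."""
    prompt = (
        "You are a peptide design researcher. Please tell me whether a peptide "
        f"with the sequence {sequence} could be an {task} peptide. "
        "Answer with yes or no."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model id
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

# Example usage with a hypothetical sequence:
# print(classify_peptide("FLPIVGKLLSGLL"))
```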
### III-D Molecule Optimization

To evaluate the extent of knowledge that the GPTs possess in the realms of chemistry and pharmacology, we study their proficiency in optimizing molecular properties. The optimization of molecules represents a pivotal phase in the process of drug discovery [52], allowing the desired characteristics (e.g., octanol-water partition coefficient [53], synthetic accessibility [54] and drug-likeness [55]) of drug candidates to be enhanced via targeted chemical modifications. The task is formalized as follows:

Task 4. (Molecule Optimization) Our objective is to modify a given molecule while preserving the primary molecular scaffold, such that certain properties can be enhanced. Prompts: Assume that you were a medicinal chemist, please make big modifications that go beyond just changing the charge to the following molecule to optimize the octanol–water partition coefficient penalized by synthetic accessibility and ring size. Here’s the SMILE string for the molecule, $SMILES$, and output the optimized SMILE string, please.

### III-E Gene and Protein Named Entities Extraction

For genes and proteins, a wide variety of alternative names are used in the abundant scientific literature and in public biological databases, which poses a significant challenge to the task of finding gene and protein name mentions. Meanwhile, as new biological entities are continuously discovered [56, 57, 58, 59], the diversity of gene and protein nomenclatures also complicates the extraction of gene and protein name mentions. For instance, gene names in Drosophila [60], a.k.a. the fruit fly [61], can be common English words such as white, dumpy and forked. This can mislead recognition models, making it challenging to accurately extract gene and protein names. Here, we formalize the task as follows:

Task 5. (Gene and Protein Named Entities Extraction) Our objective is to identify potential gene and protein named entities from the given sentences in life science literature. Prompts: You are an expert in the Named Entity Recognition field. Given a token and a sentence that contains gene mentions, you are to generate an ASCII list of identified gene names. Each gene mention will be formatted as follows: sentence-identifier | start-offset end-offset | optional text. Each gene mention from the same sentence will be listed on a separate line. If a sentence doesn’t have any gene mentions, it won’t be included in the list. Counting numbers need to exclude spaces, sentence-identifiers, and start from 1.

### III-F Evaluation on Educational Bioinformatics Problem-Solving Set

In addition to using various datasets and different case scenarios to demonstrate the performance of the GPTs, we also employ a set containing 105 bioinformatics questions for evaluation. These problems originate from Rosalind333https://rosalind.info/problems/list-view/, an educational platform dedicated to learning bioinformatics and programming through problem solving. The task is formalized as follows:

Task 6. (Educational Bioinformatics Problem Solving) Our objective is to generate corresponding answers to a bioinformatics problem set. These problems primarily encompass seven topics, i.e., String Algorithms, Combinatorics, Dynamic Programming, Alignment, Phylogeny, Probability, and Graph Algorithms. Prompts: Each question will be provided as a prompt.

TABLE I: Summary of bioinformatics tasks and corresponding datasets.
Task | Description | Dataset Source | Dataset Description | Evaluation Model
---|---|---|---|---
Task 1 | Identifying Coding Regions | [62] | Coding regions of the Vaccinia virus | GPT-3.5, GPT-4, Llama 2 (70B), Google Bard
Task 2 | Identifying Antimicrobial Peptide | [22] | The training set is composed of 1,778 antimicrobial peptides (AMPs) and 1,778 non-AMPs, each with an average length of 34 amino acids. The test set contains 2,065 AMPs and 1,908 non-AMPs (each with an average amino acid length of 39), which have a sequence similarity of less than 90% compared to the training set as determined by CD-HIT. | GPT-3.5 (Davinci-ft), ESM, AMP-BERT
Task 3 | Identifying Anti-cancer Peptide | [51] | The dataset comprises 206 non-anticancer peptides and 138 anticancer peptides, each with an average amino acid length of 25. Peptides exhibiting more than 90% similarity were removed from the dataset using CD-HIT. | GPT-3.5 (Davinci-ft), ESM
Task 4 | Molecule Optimization | [63] | The test set derived from the Modof dataset contains 800 molecules, with an average molecular weight of approximately 294.27 g/mol as determined through computational analysis of their SMILES representations. | GPT-4, Modof
Task 5 | Gene and Protein Named Entities Extraction | [60] [64] | The whole dataset contains 20,000 sentences (a total of 24,583 gene and protein entities). The test set contains 5,000 sentences. | GPT-4, GPT-3.5 (gpt-3.5-turbo-0613), BioBERT, MT-BERT, MT-BioBERT
Task 6 | Educational Bioinformatics Problem Solving | Online website444https://rosalind.info/problems/list-view/ | 105 questions were collected from the ROSALIND bioinformatics website. | GPT-4, GPT-3.5

Figure 1: Our work aims to validate how LLMs can benefit bioinformatics research.

### III-G How our tasks are beneficial to bioinformatics research.

As shown in Figure 1, in this work, we analyze the significance of studying these tasks for bioinformatics research. We introduce six different bioinformatics tasks, which together comprehensively explore the capabilities of large language models in bioinformatics. In particular, Task 1 focuses on the analysis of genome sequences to identify potential coding regions, indicating that LLMs are promising for functional genomic analysis. Tasks 2 and 3 concern AMPs and ACPs and validate whether LLMs can contribute to drug screening. Task 4 introduces LLMs for molecular engineering, indicating that understanding the complex interplay between sequence patterns and biological functions allows LLMs to contribute to the design of optimized proteins and molecules. Furthermore, as showcased in Task 5, LLMs can perform entity recognition via specific prompts, aiding in biomedical text mining. Lastly, as illustrated by Task 6, LLMs can provide assistance with bioinformatics challenges, exemplifying their practical value in the field. We also show the task details and datasets in Table I. For Task 1, we utilize GPT-3.5 and GPT-4 to validate the basic biological knowledge of the language models. We additionally introduce Llama 2 (70B) and Google Bard for validation. For Tasks 2 and 3, we use GPT-3.5 (Davinci-ft), ESM, and AMP-BERT to test how well they can predict antimicrobial and anticancer peptides. GPT-4 does not support fine-tuning and is therefore skipped for these tasks. For Task 4, we use GPT-4 to verify the model’s capability for molecular modification. The baseline Modof is introduced for molecule optimization. GPT-3.5 is skipped because of its worse performance.
For Task 5, we employ GPT-4 and GPT-3.5 (gpt-3.5-turbo-0613) to assess the models' recognition ability for gene and protein names. BioBERT, MT-BERT, and MT-BioBERT are additionally used for named-entity recognition. For Task 6, we verify the ability of GPT-3.5 and GPT-4 to answer biological questions involving probability, logic, and string processing. Then, we conduct extensive experiments to explore the effectiveness of LLMs in bioinformatics research.

Figure 2: Illustration of the identification of coding regions utilizing GPT-4. (The dialogue on the right depicts a comparison using chain of thought.)

Figure 3: Comparison of LLMs for identifying potential coding regions from DNA sequences.

## IV Results and Discussions

### IV-A Performance on Identifying Potential Coding Regions

As for identifying potential coding regions (CDS) [65] from DNA sequences, we utilize the understanding abilities of GPT-4, GPT-3.5, Llama 2 (70B) [24] and Google Bard [25] with a corresponding prompt. Since LLMs have been trained on a wide variety of internet text, they can generate meaningful and contextually appropriate responses. Our test subject is a partial DNA sequence of the Vaccinia virus [62] (GeneIDs: 3707616, 3707624, and 3707625; accession: NC_006998.1), and we ask the LLMs to report potential CDS in the first frame. The results are shown in Figure 2 and Figure 3. GPT-4 can successfully deliver the definitions of CDS and Open Reading Frames (ORFs), accurately pinpointing the start codon (usually AUG in RNA or ATG in DNA) and stop codons (UAA, UAG, UGA for RNA or TAA, TAG, TGA for DNA). It generates a list of potential coding regions, specifying their nucleotide lengths and corresponding start and stop codons. From the results, we have the following observations: * • The performance of GPT-4 can be enhanced by adopting a chain-of-thought approach. Despite its advantages, GPT-4 still overlooks some potential coding regions. When tasked with identifying the longest coding region in the first frame, it begins by translating the DNA sequence into an amino acid sequence using the standard genetic code, followed by a detailed explanation of the code. However, it incorrectly identifies an open reading frame (ORF) with 83 codons as the longest CDS. Therefore, we attempt to provide step-by-step prompts to GPT-4: “Begin with a sequential search from the start, initially selecting a start codon, followed by the identification of different stop codons that delineate potential coding sequences. Proceed in this manner before choosing the subsequent start codon.” With this approach, GPT-4 is able to present all potential CDS, including the longest sequence. * • GPT-4 is capable not only of directly listing potential coding sequences but also of delivering an effective algorithm to find all potential coding sequences programmatically (a minimal sketch of such a frame-1 ORF search is given after this list). We observe that GPT-4 can successfully report all potential coding regions in the given examples when allowed to write code. * • The performance of Llama 2 (70B) is the least satisfactory, failing to yield any potential coding sequences or explanations. In contrast, Google Bard only offers suggestions for identifying potential coding sequences using other tools and points out the start and stop codons in the DNA sequence. When we request a program from Google Bard to find coding sequences, it is able to provide an effective algorithm.
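As referenced in the observations above, the following is a minimal Python sketch of a frame-1 ORF search of the kind the models are asked to produce; the function name and the minimum-length parameter are illustrative choices, not the exact algorithm returned by GPT-4.

```python
def frame1_orfs(dna, min_codons=1):
    """Enumerate potential coding regions in reading frame 1: for each in-frame
    ATG, take the sequence up to the first in-frame stop codon.
    Returns (start_1based, end_1based, subsequence) tuples."""
    dna = dna.upper()
    stops = {"TAA", "TAG", "TGA"}
    orfs = []
    i = 0
    while i + 3 <= len(dna):
        if dna[i:i + 3] == "ATG":                      # in-frame start codon
            for j in range(i + 3, len(dna) - 2, 3):    # scan in-frame codons
                if dna[j:j + 3] in stops:
                    if (j + 3 - i) // 3 >= min_codons:
                        orfs.append((i + 1, j + 3, dna[i:j + 3]))
                    break
        i += 3                                         # stay in frame 1
    return orfs

# Example on a toy sequence (hypothetical, not the Vaccinia fragment used above):
# print(frame1_orfs("ATGAAATAGATGTAA"))
```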
TABLE II: Cross-validation results of different models for identifying antimicrobial peptides on the training set.

MODEL | SN | SP | F1 | ACC | AUROC | AUPR
---|---|---|---|---|---|---
XGB | 0.702 | 0.566 | 0.630 | 0.641 | 0.734 | 0.783
MNB | 0.815 | 0.739 | 0.780 | 0.800 | 0.870 | 0.912
SVM | 0.872 | 0.717 | 0.790 | 0.796 | 0.843 | 0.897
KNN | 0.709 | 0.622 | 0.670 | 0.703 | 0.674 | 0.722
LR | 0.843 | 0.735 | 0.790 | 0.804 | 0.836 | 0.889
MLP | 0.792 | 0.654 | 0.720 | 0.731 | 0.776 | 0.802
RF | 0.867 | 0.691 | 0.770 | 0.772 | 0.834 | 0.870
GB | 0.775 | 0.583 | 0.660 | 0.646 | 0.708 | 0.789
ESM | 0.912 | 0.928 | 0.920 | 0.920 | 0.974 | 0.977
AMP-BERT | 0.926 | 0.930 | 0.928 | 0.928 | 0.966 | 0.965
GPT-3.5 (Davinci-ft) | 0.979 | 0.962 | 0.970 | 0.968 | 0.968 | 0.978

TABLE III: Results of different models for identifying antimicrobial peptides on the test set.

MODEL | SN | SP | ACC | F1 | AUC | AUPR
---|---|---|---|---|---|---
XGB | 0.695 | 0.630 | 0.660 | 0.654 | 0.714 | 0.700
MNB | 0.687 | 0.750 | 0.711 | 0.746 | 0.757 | 0.720
SVM | 0.740 | 0.675 | 0.706 | 0.702 | 0.749 | 0.697
KNN | 0.608 | 0.687 | 0.632 | 0.698 | 0.691 | 0.738
LR | 0.724 | 0.676 | 0.699 | 0.702 | 0.746 | 0.711
MLP | 0.701 | 0.715 | 0.707 | 0.730 | 0.749 | 0.715
RF | 0.714 | 0.692 | 0.703 | 0.715 | 0.739 | 0.692
GB | 0.708 | 0.606 | 0.646 | 0.616 | 0.699 | 0.691
ESM | 0.865 | 0.496 | 0.742 | 0.688 | 0.779 | 0.758
AMP-BERT | 0.876 | 0.635 | 0.792 | 0.760 | 0.818 | 0.787
GPT-3.5 (Davinci-ft) | 0.844 | 0.745 | 0.718 | 0.759 | 0.782 | 0.810

### IV-B Performance on Identifying Antimicrobial Peptide

As for identifying antimicrobial peptides, in this subsection, we fine-tune the GPT-3.5 (Davinci) model665https://platform.openai.com/docs/models/gpt-3 to distinguish between AMPs and non-AMPs. We set the number of epochs, batch size and learning rate multiplier during training to 20, 3 and 0.3, respectively. Our selected prompt is as follows: Assuming the role of a peptide design researcher, please evaluate if a peptide with this particular sequence could qualify as an antimicrobial peptide. To assess the performance of our fine-tuned model GPT-3.5 (Davinci-ft), we conduct a comparison with the advanced protein large language model ESM (esm_msa1b_t12_100M_UR50S), several machine learning-based methods, i.e., XGBoost (XGB) [66], Multinomial Naive Bayes (MNB) [67], Support Vector Machines (SVM) [68], K-Nearest Neighbor (KNN) [69], Logistic Regression (LR) [70], MultiLayer Perceptron (MLP) [71], Random Forest (RF) [72], GBoost (GB) [73], and AMP-BERT [22], using two datasets from [22]. We utilize six widely accepted metrics to assess the performance, namely, sensitivity (SN), specificity (SP), F1-score (F1), accuracy (ACC), area under the Receiver Operating Characteristic curve (AUROC), and area under the Precision-Recall curve (AUPR). The training set [22] comprises 1,778 AMPs paired with an equal number of non-AMPs. The test set constitutes 2,065 AMPs and 1,908 non-AMPs. Importantly, these two datasets have low overlap. In alignment with the comparison methodology outlined by [22], we initially perform 5-repeated 10-fold cross-validation during the fine-tuning stage on the training set. Subsequently, the GPT-3.5 (Davinci-ft) model is evaluated on the test set. The results for the different models on the training and test sets are shown in Table II and Table III, respectively.
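For concreteness, the sketch below shows how prompt/completion pairs for such a fine-tune might be assembled in the JSONL layout of the legacy OpenAI fine-tuning workflow; the file name, completion labels, and prompt/completion separator are hypothetical choices rather than the exact formatting used in this study.

```python
import json

PROMPT_TEMPLATE = (
    "Assuming the role of a peptide design researcher, please evaluate if a "
    "peptide with this particular sequence could qualify as an antimicrobial "
    "peptide.\nSequence: {seq}\n\n###\n\n"
)

def write_finetune_jsonl(records, path="amp_train.jsonl"):
    """records: iterable of (peptide_sequence, is_amp) pairs from the training split.
    Writes one JSON object per line with 'prompt' and 'completion' keys, as used by
    the legacy prompt/completion fine-tuning format."""
    with open(path, "w") as fh:
        for seq, is_amp in records:
            row = {
                "prompt": PROMPT_TEMPLATE.format(seq=seq),
                "completion": " AMP" if is_amp else " non-AMP",  # leading space by convention
            }
            fh.write(json.dumps(row) + "\n")

# Example usage with two hypothetical peptides:
# write_finetune_jsonl([("GIGKFLHSAKKFGKAFVGEIMNS", True), ("MKTAYIAKQR", False)])
```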
From the results, we have the following observations: * • GPT-3.5 (Davinci-ft) demonstrates the best performance on most metrics during the 5-repeated 10-fold cross-validation process. Unlike AMP-BERT, which is based on ProtTrans [21] and specifically trained on proteins, GPT-3.5 (Davinci-ft) is not a protein-specific model. Nevertheless, after the fine-tuning procedure, GPT-3.5 (Davinci-ft) outperforms the AMP-BERT model across a variety of metrics. In terms of F1-score, GPT-3.5 (Davinci-ft) also significantly surpasses the other models. Specifically, it outperforms XGB, MNB, SVM, KNN, LR, MLP, RF, GB, and AMP-BERT by margins of 0.340, 0.190, 0.180, 0.300, 0.180, 0.250, 0.200, 0.310, and 0.042, respectively, which validates the strong capacity of LLMs. * • GPT-3.5 (Davinci-ft) has the potential to tackle the imbalanced test set. As shown in Table III, for the metrics of SN, ACC, F1, and AUC, AMP-BERT achieves the best performance with scores of 0.876, 0.792, 0.760, and 0.818, respectively. On the SP metric, MNB demonstrates the best performance with a score of 0.750. * • ESM demonstrates strong performance on the antimicrobial peptide training set. Specifically, it achieves the best AUROC of 0.974 on the training set, surpassing the other models. On the test set, however, GPT-3.5 (Davinci-ft) achieves the best AUPR with a score of 0.810. This suggests that it still has the potential to handle the imbalance between positive and negative instances in the test set.

### IV-C Performance on Identifying Anti-cancer Peptide

TABLE IV: Results of different methods for identifying anti-cancer peptides.

MODEL | SN | SP | ACC | MCC | AUC | AUPRC
---|---|---|---|---|---|---
XGB | 0.846 | 0.864 | 0.857 | 0.700 | 0.806 | 0.845
MNB | 1.000 | 0.913 | 0.943 | 0.885 | 0.973 | 0.973
SVM | 1.000 | 0.875 | 0.914 | 0.829 | 0.966 | 0.964
KNN | 0.929 | 0.952 | 0.943 | 0.881 | 0.930 | 0.943
LR | 1.000 | 0.840 | 0.886 | 0.775 | 0.963 | 0.962
MLP | 1.000 | 0.913 | 0.943 | 0.885 | 0.949 | 0.958
RF | 1.000 | 0.875 | 0.914 | 0.829 | 0.990 | 0.984
GB | 1.000 | 0.778 | 0.829 | 0.667 | 0.857 | 0.875
ESM | 0.933 | 0.900 | 0.903 | 0.914 | 0.923 | 0.920
GPT-3.5 (Davinci-ft) | 1.000 | 0.875 | 0.914 | 0.892 | 0.993 | 0.991

As for identifying anti-cancer peptides (ACPs), we also fine-tune the Davinci model, denoted Davinci-ft, to distinguish between ACPs and non-ACPs. We use the same setting during the training process. Our selected prompt is as follows: Assuming the role of a peptide design researcher, please evaluate if a peptide with this particular sequence could qualify as an anti-cancer peptide. Our dataset originates from [51], encompassing a total of 138 ACPs and 206 non-ACPs. To evaluate the performance of the GPT-3.5 (Davinci-ft) model, we compare it with several machine learning-based methods utilizing widely accepted metrics as detailed in Section IV-B. The compared results are shown in Table IV. We observe that for the SN metric, MNB, SVM, LR, MLP, RF, GB and GPT-3.5 (Davinci-ft) all achieve the top performance. In terms of SP, KNN achieves the highest performance, indicating its superior ability to identify non-ACPs. More importantly, GPT-3.5 (Davinci-ft) achieves the best performance in terms of most metrics, including MCC, AUC and AUPRC. In particular, GPT-3.5 (Davinci-ft) displays exceptional performance on the crucial AUC metric, outperforming XGB, MNB, SVM, KNN, LR, MLP, RF, and GB by respective margins of 0.187, 0.020, 0.027, 0.063, 0.030, 0.044, 0.003, and 0.136.
These results collectively indicate that LLMs can achieve superior performance in identifying anti-cancer peptides.

### IV-D Performance on Molecule Optimization

TABLE V: Results of GPT-4 and Modof in the modification of selected molecules (the optimized-molecule structure drawings from the original table are omitted here).

Molecule | GPT-4 $\Delta$logP ($\uparrow$) | GPT-4 $\Delta$SA ($\uparrow$) | GPT-4 $\Delta$QED ($\uparrow$) | Modof $\Delta$logP | Modof $\Delta$SA | Modof $\Delta$QED
---|---|---|---|---|---|---
1 | 1.19 | 1.72 | 0.0 | 1.90 | 0.58 | -0.38
2 | 3.1 | 1.24 | 0.15 | 3.65 | 0.79 | 0.07
3 | 1.68 | 0.28 | 0.05 | 7.81 | -0.54 | -0.57
4 | 1.61 | 1.20 | 0.06 | - | - | -
5 | - | - | - | 3.79 | 0.35 | -0.26
6 | 2.49 | 1.34 | 0.07 | - | - | -
7 | 1.56 | 1.14 | 0.05 | 1.58 | 2.30 | -0.18
8 | 2.04 | 1.63 | 0.05 | 2.74 | 0.43 | 0.07
9 | 0.12 | 1.47 | 0.0 | -0.04 | 1.29 | -0.11
10 | - | - | - | 4.70 | 1.38 | -0.14
11 | 0.54 | 0.17 | 0.25 | 2.74 | 0.08 | -0.03
12 | -0.46 | 0.17 | 0.28 | 3.98 | 0.02 | -0.04

As for molecule optimization, we target the enhancement of the octanol-water partition coefficient (logP), which can be quantified by Crippen’s logP methodology [74]. Meanwhile, we also consider penalties incurred by synthetic accessibility (SA) [75]. Following [55], we also evaluate quantitative results in terms of drug-likeness (QED) scores, which reflect whether a molecule can be a drug candidate or not. In particular, logP is a measure of a compound’s lipophilicity, indicating its distribution between a hydrophilic (water) and a lipophilic (fat) phase. This property is critical in predicting the absorption, distribution, metabolism, and excretion (ADME) of potential drug candidates. A suitable logP value suggests a balance between solubility (necessary for bioavailability) and permeability (for cellular access). SA assesses the ease with which a compound can be synthesized. This is crucial for practical drug development, as compounds that are difficult or expensive to synthesize may not be viable for large-scale production, regardless of their therapeutic potential. QED measures how closely a compound resembles known drugs based on several physicochemical properties like molecular weight, hydrogen bond donors and acceptors, and logP. This metric helps in prioritizing compounds that have higher chances of success in clinical trials based on historical data. These three metrics are all important for measuring drug properties. To evaluate the performance of GPT-4, we present a comparative analysis with Modof [63], a sophisticated deep generative model. Modof harnesses the junction tree methodology for molecular representation, modifying discrete fragments of the molecule using a variational autoencoder (VAE) [76]. The dataset we use originates from the ZINC database [77].

TABLE VI: Average property improvements over the whole molecule dataset for GPT-4 and Modof.

Method | $\Delta$logP | $\Delta$SA | $\Delta$QED
---|---|---|---
Modof | 3.76 | 0.20 | -0.19
GPT-4 | 1.87 | 0.84 | 0.03

Figure 4: Comparison of distributions for different metrics.

We summarize the quantitative results in Table VI and Figure 4, and some examples can be found in Table V. We observe significant differences between the metric changes exhibited by molecules modified with GPT-4 and with Modof. The p-values (t-test) for $\Delta logP$, $\Delta QED$, and $\Delta SA$ are $4.86\times 10^{-65}$, $1.89\times 10^{-72}$, and $1.34\times 10^{-38}$, respectively.
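For reference, property changes of this kind can be recomputed from pairs of SMILES strings with RDKit. The sketch below is only an outline: it assumes the sascorer module from RDKit's Contrib/SA_Score directory is importable, uses plain Crippen logP rather than the penalized variant discussed above, and is not the evaluation script used in this study.

```python
from rdkit import Chem
from rdkit.Chem import Crippen, QED
import sascorer  # from RDKit Contrib/SA_Score; assumed to be on the import path

def property_deltas(smiles_before, smiles_after):
    """Return (dlogP, dSA, dQED, dHeavyAtoms) between an original and an optimized
    molecule, or None if either SMILES string is invalid."""
    m0 = Chem.MolFromSmiles(smiles_before)
    m1 = Chem.MolFromSmiles(smiles_after)
    if m0 is None or m1 is None:
        return None
    d_logp = Crippen.MolLogP(m1) - Crippen.MolLogP(m0)
    # Lower SA score means easier synthesis, so the improvement is reported as the drop.
    d_sa = sascorer.calculateScore(m0) - sascorer.calculateScore(m1)
    d_qed = QED.qed(m1) - QED.qed(m0)
    d_heavy = m1.GetNumHeavyAtoms() - m0.GetNumHeavyAtoms()
    return d_logp, d_sa, d_qed, d_heavy
```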
The mean value of $\Delta$logP for Modof is 3.76, with a 95% confidence interval ranging from 3.58 to 3.93. Similarly for Modof, the mean $\Delta$SA is 0.198, with a confidence interval from 0.136 to 0.257, and the mean $\Delta$QED is -0.194, with its confidence interval lying between -0.213 and -0.175. From the results, we have the following observations:

* GPT-4 can successfully generate valid SMILES in most cases. It becomes evident that it has assimilated fundamental principles of physical chemistry. GPT-4 provides valid optimized molecules for 661 cases, which is comparable to the strong baseline Modof. In particular, the validity rates of Modof and GPT-4 are 0.800 and 0.830, respectively.
* GPT-4 has an advantage in improving both SA and QED. GPT-4 achieves higher scores in terms of both SA and QED in most cases. Remarkably, owing to its conversational paradigm, GPT-4 is capable of articulating the executed modifications, providing rudimentary rationales behind the alterations.
* GPT-4 still falls short in improving logP. The average $\Delta$logP achieved by GPT-4 is about 50.3% of that achieved by Modof. We have computed the deviation of the heavy atom number before and after molecular optimization. The average deviation of the heavy atom number for Modof-optimized molecules is +10.26, while for GPT-4-optimized molecules it is only +0.65, which is much lower. A potential reason is that GPT-4 tends to remove atomic charges in an attempt to improve the octanol-water partition coefficient, instead of modifying hydrophilic fragments into hydrophobic ones. Such a practice could potentially obliterate significant pharmacophores of a drug. Furthermore, GPT-4 often adopts a more conservative approach to modifying the original molecule, primarily excising some fragments to facilitate synthesis, whereas Modof typically opts to append larger new fragments to the molecule. As a consequence, the simple and direct modifications learned by GPT-4 can be insufficient for improving logP as significantly as Modof does.

### IV-E Performance on Gene and Protein Named Entity Recognition

As for gene and protein named entity recognition (NER), we evaluate the performance of GPT-3.5 on the BioCreative II gene mention (GM) corpus [60]. This dataset comprises an extensive set of annotated sentences from MEDLINE [64], with the primary goal of extracting gene and protein named entities. The test set contains 5,000 sequences. We use a prompt to query the GPT-3.5 API (gpt-3.5-turbo-0613) and GPT-4, evaluating performance on the test set without utilizing the training set. We compare the API models with the baselines BiLSTM [78], MT-BiLSTM-CRF [79], BioBERT [80], MT-BERT [78], and MT-BioBERT [81] using three metrics. The compared results are shown in Table VII, and we observe that the API model achieves poor performance for extracting genes or proteins from sequences in terms of both partial and strict matching criteria; the strict criterion requires the prediction to exactly match the ground truth, while the partial criterion allows partial overlaps [82]. Here, we analyze the limitations of the API models and compare them with the baselines:

* The GPT-3.5 model can miss gene mention entities in sentences. As shown in Figure 5, in the cases beginning with BC2GM001536665 and BC2GM002436660, the ground truth is “ERCC3Dm protein; ‘helicase’ domains” and “Htf9-a gene; RanBP1 protein; Ran GTPase”, respectively.
The GPT-3.5 model misses “helicase domains” and reports “Htf9-a gene; RanBP1; Ran GTPase”, which may result from confounding gene names in the large corpus.
* The GPT-3.5 model can misunderstand gene names. In the case beginning with BC2GM062242948, the ground truth is supposed to be “AD1 Ag”, while the GPT-3.5 model reports “AD1; Ag; 2H3”, which shows that the API model does not understand that “AD1 Ag” should be considered as a whole rather than as separate entities. Overall, GPT-3.5 has very low scores, with an F1-score of 11.17%.
* GPT-4 achieves better performance than GPT-3.5, but still lower than the other models, with an F1-score of 62.10%.
* The BiLSTM model outperforms the others with the highest F1-score, while MT-BiLSTM-CRF scores lower, and the BioBERT variants show comparable results. The BiLSTM model has the highest scores in all three categories, with an F1-score of 88.11%. The MT-BiLSTM-CRF model has lower scores than BiLSTM, with an F1-score of 80.74%. The BioBERT, MT-BERT, and MT-BioBERT models have similar results, with F1-scores around 85%.

TABLE VII: The compared results of different models for extracting gene and protein name mentions on the GM test set.

Model | P | R | F
---|---|---|---
BiLSTM | 87.98 | 88.25 | 88.11
MT-BiLSTM-CRF | 82.10 | 79.42 | 80.74
BioBERT | 84.32 | 85.12 | 84.72
MT-BERT | 84.12 | 84.98 | 84.53
MT-BioBERT | 84.53 | 85.27 | 84.82
GPT-3.5 (gpt-3.5-turbo-0613) | 24.07 | 7.26 | 11.17
GPT-4 | 51.72 | 77.7 | 62.10

Figure 5: Illustration of gene and protein NER using GPT-3.5 (gpt-3.5-turbo-0613) and GPT-4.

### IV-F Performance on Educational Bioinformatics Problem Solving

For the evaluation of educational bioinformatics problem solving, we check the performance of two GPT models, GPT-3.5 and GPT-4, across a set of 105 varied problems within the Bioinformatics Stronghold. This collection primarily encompasses seven basic topics:

(1) String Algorithms [83]: This topic focuses on the manipulation and exploration of properties inherent to symbol chains.
(2) Combinatorics [84]: This scope of problems quantifies distinct objects mathematically.
(3) Dynamic Programming [85]: This topic involves progressively building up solutions to complex problems.
(4) Alignment [86]: This process superimposes symbols of one string over another, inserting gap symbols into the strings as necessary to represent insertions, deletions, and substitutions.
(5) Graph Algorithms [87]: This field involves interpreting and manipulating network structures or graphs.
(6) Phylogeny [88]: This topic models the evolutionary trajectories of taxa.
(7) Probability [89]: This branch of mathematics studies the likelihood of random event occurrences.

Figure 6: Comparative performance of GPT-3.5 and GPT-4 in various bioinformatics problem-solving tasks.

Figure 7: Comparison of GPT-3.5 and GPT-4 in dealing with different tasks.

We manually collect these 105 problems, feed each one individually to GPT-3.5 and GPT-4 as a chat dialogue, and use each problem's example dataset for querying and for assessing accuracy. The results for the seven distinct topics are presented in Figure 6. For a more intuitive illustration, we also select several examples in Figure 7 to demonstrate incorrect and correct responses from GPT-3.5 and GPT-4, as well as their differing performances on the same problems. From the results, we can make the following four observations:

* GPT-4 demonstrates an overall improvement compared to GPT-3.5 across various types of problems.
This is especially evident in combinatorics and graph algorithms, where GPT-4 achieves success rates of $81.82\%$ and $72.73\%$ respectively, substantially higher than those achieved by GPT-3.5. Even in Phylogeny and Probability, where both models show relatively lower performance, GPT-4 again leads with $52.63\%$ and $46.15\%$ success rates, outperforming GPT-3.5’s $21.05\%$ and $38.46\%$. These findings, collated from a total of $105$ problems, highlight the superior capabilities of GPT-4 across a broad spectrum of bioinformatics challenges: it answers $71$ questions correctly, compared to GPT-3.5, which correctly solves $53$ questions.
* The performance pattern of GPT-3.5 and GPT-4 is largely consistent across different topics. For probability-related problems, GPT-3.5 and GPT-4 only achieve accuracy rates of $36.4\%$ and $46.2\%$, respectively. In contrast, both models are good at solving combinatorics-related problems, achieving higher accuracy rates of $63.3\%$ for GPT-3.5 and $81.8\%$ for GPT-4.
* GPT-3.5 exhibits excellent performance in handling relatively simple problems. As illustrated in Figure 7, when tasked with finding common segments between two DNA sequences, GPT-3.5 is able to respond accurately, providing appropriate program solutions and methods. However, when faced with more complex problems, such as determining the maximum local alignment score of two protein strings, GPT-3.5 proposes results that appear correct but are not.
* GPT-4 demonstrates a limited capacity for tackling complex problems. For instance, as illustrated in Figure 7, when given the exons and introns of a DNA string, we test GPT-4 by asking it to delete the introns, concatenate the exons to form a new string, and then transcribe and translate this newly formed string. GPT-4 successfully manages to remove the introns and translate the DNA into an amino acid sequence. However, for complex logical problems, such as calculating the number of basepair edges in a bonding graph that can be exactly matched, GPT-4 manages to outline a correct approach but ultimately provides an incorrect answer. Perhaps multi-turn interactions are needed to thoroughly solve these complicated problems.

## V Limitations

A limitation of our GPT evaluation is that we cannot access the training data of the GPT models, and thus cannot guarantee that these datasets were not included in pretraining. We will use more up-to-date test data in future work to ensure that the evaluation material is not part of the training corpus. Another limitation is that some of the models could be deprecated in the future. However, we expect performance to improve with updated models, and our study focuses on bridging GPT and bioinformatics rather than on a specific LLM, which provides guidance for this field. We will also update the results with more advanced LLMs in our future work. Our work is committed to the executive order regarding the use of LLMs on biological sequences, and we will always ensure the safety of AI work. LLMs have achieved significant progress in many fields, and it is important to explore whether bioinformatics can benefit from this as well, which can provide guidance for bioinformatics researchers. Moreover, we can also provide insights for researchers using AI in science and encourage more AI researchers to contribute to natural science.
Although this work makes the evaluation on six basic bioinformatics tasks, a wide range of sub-regions in bioinformatics have not been considered. In the future, we will further test and develop the relevant applications of the GPT model through more enriched text scenarios. This will specifically manifest in the generative functionalization of sequences of large biomolecules, predicting the interactions between large biomolecules or between drugs and their corresponding receptors, designing and functionalizing large biomolecules from scratch based on original wet-lab data, and establishing a bioinformatics application ecosystem through the GPT model. ## VI Conclusion This paper explores the applications of the GPTs in bioinformatics research and evaluates them in six basic tasks, including gene and protein named entities extraction, solving educational bioinformatics problems and identifying potential coding regions, antimicrobial and anti-cancer peptides. Extensive experiments demonstrate that LLMs like the GPTs can achieve remarkable performance on the majority of these tasks with proper prompts and models. We hope this work can facilitate researchers in bioinformatics about using advanced LLMs and thus promote the development of AI for science. ## Acknowledgement The authors are grateful to the anonymous reviewers for critically reading the manuscript and for giving important suggestions to improve their paper. ## References * [1] A. Birhane, A. Kasirzadeh, D. Leslie, and S. Wachter, “Science in the age of large language models,” _Nature Reviews Physics_ , pp. 1–4, 2023. * [2] U. Katz, M. Geva, and J. Berant, “Inferring implicit relations in complex questions with language models,” in _Findings of the Association for Computational Linguistics: EMNLP 2022_ , 2022, pp. 2548–2566. * [3] X. L. Li, A. Kuncoro, J. Hoffmann, C. de Masson d’Autume, P. Blunsom, and A. Nematzadeh, “A systematic investigation of commonsense knowledge in large language models,” in _Proceedings of the Conference on Empirical Methods in Natural Language Processing_ , 2022, pp. 11 838–11 855. * [4] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray _et al._ , “Training language models to follow instructions with human feedback,” in _Proceedings of the Conference on Neural Information Processing Systems_ , 2022, pp. 27 730–27 744. * [5] F. Xu, Q. Lin, J. Han, T. Zhao, J. Liu, and E. Cambria, “Are large language models really good logical reasoners? a comprehensive evaluation from deductive, inductive and abductive views,” _arXiv preprint arXiv:2306.09841_ , 2023. * [6] B. Pang, L. Lee, and S. Vaithyanathan, “Thumbs up? sentiment classification using machine learning techniques,” _arXiv preprint cs/0205070_ , 2002. * [7] M. Marrero, J. Urbano, S. Sánchez-Cuadrado, J. Morato, and J. M. Gómez-Berbís, “Named entity recognition: fallacies, challenges and opportunities,” _Computer Standards & Interfaces_, vol. 35, no. 5, pp. 482–489, 2013. * [8] K. Lee, D. Palsetia, R. Narayanan, M. M. A. Patwary, A. Agrawal, and A. Choudhary, “Twitter trending topic classification,” in _Proceedings of the International Conference on Data Mining Workshops_ , 2011, pp. 251–258. * [9] Y. Liu, T. Han, S. Ma, J. Zhang, Y. Yang, J. Tian, H. He, A. Li, M. He, Z. Liu _et al._ , “Summary of chatgpt/gpt-4 research and perspective towards the future of large language models,” _arXiv preprint arXiv:2304.01852_ , 2023. * [10] T. Sun, Z. He, H. Qian, Y. Zhou, X.-J. Huang, and X. 
Qiu, “Bbtv2: towards a gradient-free future with large language models,” in _Proceedings of the Conference on Empirical Methods in Natural Language Processing_ , 2022, pp. 3916–3930. * [11] T. Zhang, F. Ladhak, E. Durmus, P. Liang, K. McKeown, and T. B. Hashimoto, “Benchmarking large language models for news summarization,” _arXiv preprint arXiv:2301.13848_ , 2023. * [12] I. Beltagy, A. Cohan, R. Logan IV, S. Min, and S. Singh, “Zero-and few-shot nlp with pretrained language models,” in _Proceedings of the Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts_ , 2022, pp. 32–37. * [13] Y. Bang, S. Cahyawijaya, N. Lee, W. Dai, D. Su, B. Wilie, H. Lovenia, Z. Ji, T. Yu, W. Chung _et al._ , “A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity,” _arXiv preprint arXiv:2302.04023_ , 2023. * [14] W. Jiao, W. Wang, J.-t. Huang, X. Wang, and Z. Tu, “Is chatgpt a good translator? a preliminary study,” _arXiv preprint arXiv:2301.08745_ , 2023. * [15] Y. Tan, D. Min, Y. Li, W. Li, N. Hu, Y. Chen, and G. Qi, “Evaluation of chatgpt as a question answering system for answering complex questions,” _arXiv preprint arXiv:2303.07992_ , 2023. * [16] T. H. Kung, M. Cheatham, A. Medenilla, C. Sillos, L. De Leon, C. Elepaño, M. Madriaga, R. Aggabao, G. Diaz-Candido, J. Maningo _et al._ , “Performance of chatgpt on usmle: Potential for ai-assisted medical education using large language models,” _PLoS Digital Health_ , vol. 2, no. 2, p. e0000198, 2023. * [17] S. B. Patel and K. Lam, “Chatgpt: the future of discharge summaries?” _The Lancet Digital Health_ , vol. 5, no. 3, pp. e107–e108, 2023. * [18] Q. Lu, D. Dou, and T. Nguyen, “Clinicalt5: A generative language model for clinical text,” in _Findings of the Association for Computational Linguistics: EMNLP 2022_ , 2022, pp. 5436–5443. * [19] J. Otmakhova, K. Verspoor, T. Baldwin, A. J. Yepes, and J. H. Lau, “M3: Multi-level dataset for multi-document summarisation of medical studies,” in _Findings of the Association for Computational Linguistics: EMNLP 2022_ , 2022, pp. 3887–3901. * [20] Z. Lin, H. Akin, R. Rao, B. Hie, Z. Zhu, W. Lu, N. Smetanin, R. Verkuil, O. Kabeli, Y. Shmueli _et al._ , “Evolutionary-scale prediction of atomic-level protein structure with a language model,” _Science_ , vol. 379, no. 6637, pp. 1123–1130, 2023. * [21] A. Elnaggar, M. Heinzinger, C. Dallago, G. Rehawi, Y. Wang, L. Jones, T. Gibbs, T. Feher, C. Angerer, M. Steinegger _et al._ , “Prottrans: Toward understanding the language of life through self-supervised learning,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 44, no. 10, pp. 7112–7127, 2021. * [22] H. Lee, S. Lee, I. Lee, and H. Nam, “Amp-bert: Prediction of antimicrobial peptide function based on a bert model,” _Protein Science_ , vol. 32, no. 1, p. e4529, 2023. * [23] I. Jahan, M. T. R. Laskar, C. Peng, and J. Huang, “Evaluation of chatgpt on biomedical tasks: A zero-shot comparison with fine-tuned generative transformers,” _arXiv preprint arXiv:2306.04504_ , 2023. * [24] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale _et al._ , “Llama 2: Open foundation and fine-tuned chat models,” _arXiv preprint arXiv:2307.09288_ , 2023. * [25] Ö. AYDIN, “Google bard generated literature review: metaverse,” _Journal of AI_ , vol. 7, no. 1, pp. 1–14, 2023. * [26] M. Wysocka, O. Wysocki, M. Delmas, V. Mutel, and A. 
Freitas, “Large language models, scientific knowledge and factuality: A systematic analysis in antibiotic discovery,” _arXiv preprint arXiv:2305.17819_ , 2023. * [27] Y. Liu, S. Feng, D. Wang, Y. Zhang, and H. Schütze, “Evaluate what you can’t evaluate: Unassessable generated responses quality,” _arXiv preprint arXiv:2305.14658_ , 2023. * [28] S. S. Biswas, “Role of chat gpt in public health,” _Annals of Biomedical Engineering_ , vol. 51, no. 5, pp. 868–869, 2023. * [29] C. Shyr, Y. Hu, P. A. Harris, and H. Xu, “Identifying and extracting rare disease phenotypes with large language models,” _arXiv preprint arXiv:2306.12656_ , 2023. * [30] X. Li, Y. Zhang, and E. C. Malthouse, “A preliminary study of chatgpt on news recommendation: Personalization, provider fairness, fake news,” _arXiv preprint arXiv:2306.10702_ , 2023. * [31] D. Pu and V. Demberg, “Chatgpt vs human-authored text: Insights into controllable text summarization and sentence style transfer,” _arXiv preprint arXiv:2306.07799_ , 2023. * [32] P. Cramer, “Alphafold2 and the future of structural biology,” _Nature Structural & Molecular Biology_, vol. 28, no. 9, pp. 704–705, 2021. * [33] J. Deng, Z. Yang, I. Ojima, D. Samaras, and F. Wang, “Artificial intelligence in drug discovery: applications and techniques,” _Briefings in Bioinformatics_ , vol. 23, no. 1, 2022. * [34] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” _arXiv preprint arXiv:1810.04805_ , 2018. * [35] B. Fabian, T. Edlich, H. Gaspar, M. Segler, J. Meyers, M. Fiscato, and M. Ahmed, “Molecular representation learning with language models and domain-relevant auxiliary tasks,” _arXiv preprint arXiv:2011.13230_ , 2020. * [36] E. Shue, L. Liu, B. Li, Z. Feng, X. Li, and G. Hu, “Empowering beginners in bioinformatics with chatgpt,” _bioRxiv_ , pp. 2023–03, 2023. * [37] S. R. Piccolo, P. Denny, A. Luxton-Reilly, S. Payne, and P. G. Ridge, “Many bioinformatics programming tasks can be automated with chatgpt,” _arXiv preprint arXiv:2303.13528_ , 2023. * [38] I. Jahan, M. T. R. Laskar, C. Peng, and J. Huang, “A comprehensive evaluation of large language models on benchmark biomedical text processing tasks,” 2023. * [39] R. Taylor, M. Kardas, G. Cucurull, T. Scialom, A. Hartshorn, E. Saravia, A. Poulton, V. Kerkez, and R. Stojnic, “Galactica: A large language model for science,” 2022. * [40] R. Luo, L. Sun, Y. Xia, T. Qin, S. Zhang, H. Poon, and T.-Y. Liu, “Biogpt: generative pre-trained transformer for biomedical text generation and mining,” _Briefings in Bioinformatics_ , vol. 23, no. 6, Sep. 2022. [Online]. Available: http://dx.doi.org/10.1093/bib/bbac409 * [41] D. B. Lubahn, T. R. Brown, J. A. Simental, H. N. Higgs, C. J. Migeon, E. M. Wilson, and F. S. French, “Sequence of the intron/exon junctions of the coding region of the human androgen receptor gene and identification of a point mutation in a family with complete androgen insensitivity.” _Proceedings of the National Academy of Sciences_ , vol. 86, no. 23, pp. 9534–9538, 1989. * [42] W. Zhu, A. Lomsadze, and M. Borodovsky, “Ab initio gene identification in metagenomic sequences ,” _Nucleic Acids Research_ , vol. 38, no. 12, pp. e132–e132, 2010. * [43] J. H. Badger and G. J. Olsen, “Critica: coding region identification tool invoking comparative analysis.” _Molecular Biology and Evolution_ , vol. 16, no. 4, pp. 512–524, 1999. * [44] R. Wise, T. Hart, O. Cars, M. Streulens, R. Helmuth, P. Huovinen, and M. 
Sprenger, “Antimicrobial resistance,” pp. 609–610, 1998. * [45] A. A. Bahar and D. Ren, “Antimicrobial peptides,” _Pharmaceuticals_ , vol. 6, no. 12, pp. 1543–1575, 2013. * [46] M. Zasloff, “Antimicrobial peptides of multicellular organisms,” _Nature_ , vol. 415, no. 6870, pp. 389–395, 2002. * [47] B. Liu, L. Ezeogu, L. Zellmer, B. Yu, N. Xu, and D. Joshua Liao, “Protecting the normal in order to better kill the cancer,” _Cancer Medicine_ , vol. 4, no. 9, pp. 1394–1403, 2015. * [48] J. Li, S. Tan, X. Chen, C.-Y. Zhang, and Y. Zhang, “Peptide aptamers with biological and therapeutic applications,” _Current Medicinal Chemistry_ , vol. 18, no. 27, pp. 4215–4222, 2011. * [49] A. Tyagi, A. Tuknait, P. Anand, S. Gupta, M. Sharma, D. Mathur, A. Joshi, S. Singh, A. Gautam, and G. P. Raghava, “Cancerppd: a database of anticancer peptides and proteins,” _Nucleic Acids Research_ , vol. 43, no. D1, pp. D837–D843, 2015. * [50] W. Chiangjong, S. Chutipongtanate, and S. Hongeng, “Anticancer peptide: Physicochemical property, functional aspect and trend in clinical application,” _International Journal of Oncology_ , vol. 57, no. 3, pp. 678–696, 2020. * [51] Q. Li, W. Zhou, D. Wang, S. Wang, and Q. Li, “Prediction of anticancer peptides using a low-dimensional feature model,” _Frontiers in Bioengineering and Biotechnology_ , vol. 8, p. 892, 2020. * [52] M. L. Verdonk and M. J. Hartshorn, “Structure-guided fragment screening for lead discovery.” _Current Opinion in Drug Discovery & Development_, vol. 7, no. 4, pp. 404–410, 2004. * [53] J. M. Sangster, _Octanol-water partition coefficients: fundamentals and physical chemistry_. John Wiley & Sons, 1997, vol. 1. * [54] B. Ouyang, J. Wang, T. He, C. J. Bartel, H. Huo, Y. Wang, V. Lacivita, H. Kim, and G. Ceder, “Synthetic accessibility and stability rules of nasicons,” _Nature Communications_ , vol. 12, no. 1, p. 5752, 2021. * [55] G. R. Bickerton, G. V. Paolini, J. Besnard, S. Muresan, and A. L. Hopkins, “Quantifying the chemical beauty of drugs,” _Nature Chemistry_ , vol. 4, no. 2, pp. 90–98, 2012. * [56] E. A. Bruford, B. Braschi, P. Denny, T. E. Jones, R. L. Seal, and S. Tweedie, “Guidelines for human gene nomenclature,” _Nature Genetics_ , vol. 52, no. 8, pp. 754–758, 2020. * [57] K. Fundel and R. Zimmer, “Gene and protein nomenclature in public databases,” _BMC Bioinformatics_ , vol. 7, no. 1, pp. 1–13, 2006. * [58] T. C. Rindflesch, L. Tanabe, J. N. Weinstein, and L. Hunter, “Edgar: extraction of drugs, genes and relations from the biomedical literature,” in _Biocomputing 2000_. World Scientific, 1999, pp. 517–528. * [59] C. Blaschke, L. Hirschman, and A. Valencia, “Information extraction in molecular biology,” _Briefings in Bioinformatics_ , vol. 3, no. 2, pp. 154–165, 2002. * [60] A. Yeh, A. Morgan, M. Colosimo, and L. Hirschman, “Biocreative task 1a: gene mention finding evaluation,” _BMC Bioinformatics_ , vol. 6, pp. 1–10, 2005. * [61] M. F. Wangler, S. Yamamoto, and H. J. Bellen, “Fruit flies in biomedical research,” _Genetics_ , vol. 199, no. 3, pp. 639–653, 2015. * [62] S. J. Goebel, G. P. Johnson, M. E. Perkus, S. W. Davis, J. P. Winslow, and E. Paoletti, “The complete dna sequence of vaccinia virus,” _Virology_ , vol. 179, no. 1, pp. 247–266, 1990. * [63] Z. Chen, M. R. Min, S. Parthasarathy, and X. Ning, “A deep generative model for molecule optimization via one fragment modification,” _Nature Machine Intelligence_ , vol. 3, no. 12, pp. 1040–1049, 2021. * [64] T. Greenhalgh, “How to read a paper: the medline database,” _Bmj_ , vol. 315, no. 
7101, pp. 180–183, 1997. * [65] M. Furuno, T. Kasukawa, R. Saito, J. Adachi, H. Suzuki, R. Baldarelli, Y. Hayashizaki, and Y. Okazaki, “Cds annotation in full-length cdna sequence,” _Genome Research_ , vol. 13, no. 6b, pp. 1478–1487, 2003. * [66] T. Chen, T. He, M. Benesty, V. Khotilovich, Y. Tang, H. Cho, K. Chen, R. Mitchell, I. Cano, T. Zhou _et al._ , “Xgboost: extreme gradient boosting,” _R package version 0.4-2_ , vol. 1, no. 4, pp. 1–4, 2015. * [67] A. M. Kibriya, E. Frank, B. Pfahringer, and G. Holmes, “Multinomial naive bayes for text categorization revisited,” in _AI 2004: Advances in Artificial Intelligence: 17th Australian Joint Conference on Artificial Intelligence, Cairns, Australia, December 4-6, 2004. Proceedings 17_ , 2005, pp. 488–499. * [68] V. Jakkula, “Tutorial on support vector machine (svm),” _School of EECS, Washington State University_ , vol. 37, no. 2.5, p. 3, 2006. * [69] G. Guo, H. Wang, D. Bell, Y. Bi, and K. Greer, “Knn model-based approach in classification,” in _On The Move to Meaningful Internet Systems 2003: CoopIS, DOA, and ODBASE: OTM Confederated International Conferences, CoopIS, DOA, and ODBASE 2003, Catania, Sicily, Italy, November 3-7, 2003. Proceedings_ , 2003, pp. 986–996. * [70] M. Maalouf, “Logistic regression in data analysis: an overview,” _International Journal of Data Analysis Techniques and Strategies_ , vol. 3, no. 3, pp. 281–299, 2011. * [71] A. Pinkus, “Approximation theory of the mlp model in neural networks,” _Acta Numerica_ , vol. 8, pp. 143–195, 1999. * [72] G. Biau and E. Scornet, “A random forest guided tour,” _TEST_ , vol. 25, pp. 197–227, 2016. * [73] H. Saigo, S. Nowozin, T. Kadowaki, T. Kudo, and K. Tsuda, “gboost: a mathematical programming approach to graph classification and regression,” _Machine Learning_ , vol. 75, pp. 69–89, 2009. * [74] S. A. Wildman and G. M. Crippen, “Prediction of physicochemical parameters by atomic contributions,” _Journal of Chemical Information and Computer Sciences_ , vol. 39, no. 5, pp. 868–873, 1999. * [75] P. Ertl and A. Schuffenhauer, “Estimation of synthetic accessibility score of drug-like molecules based on molecular complexity and fragment contributions,” _Journal of Cheminformatics_ , vol. 1, pp. 1–11, 2009. * [76] W. Jin, K. Yang, R. Barzilay, and T. Jaakkola, “Learning multimodal graph-to-graph translation for molecular optimization,” _arXiv preprint arXiv:1812.01070_ , 2018. * [77] T. Sterling and J. J. Irwin, “Zinc 15–ligand discovery for everyone,” _Journal of Chemical Information and Modeling_ , vol. 55, no. 11, pp. 2324–2337, 2015. * [78] H. Cho and H. Lee, “Biomedical named entity recognition using deep neural networks with contextual information,” _BMC bioinformatics_ , vol. 20, pp. 1–11, 2019. * [79] H. Zhao, Y. Yang, Q. Zhang, and L. Si, “Improve neural entity recognition via multi-task data selection and constrained decoding,” in _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_ , 2018, pp. 346–351. * [80] J. Lee, W. Yoon, S. Kim, D. Kim, S. Kim, C. H. So, and J. Kang, “Biobert: a pre-trained biomedical language representation model for biomedical text mining,” _Bioinformatics_ , vol. 36, no. 4, pp. 1234–1240, 2020. * [81] T. Bansal, R. Jha, and A. McCallum, “Learning to few-shot learn across diverse natural language classification tasks,” 2020. * [82] T. Zhang, C. Xia, P. S. Yu, Z. Liu, and S. 
Zhao, “Pdaln: Progressive domain adaptation over a pre-trained model for low-resource cross-domain named entity recognition,” in _Proceedings of the Conference on Empirical Methods in Natural Language Processing_ , 2021. * [83] R. A. Baeza-Yates, “Algorithms for string searching,” in _ACM SIGIR Forum_ , vol. 23, no. 3-4, 1989, pp. 34–58. * [84] L. Lovász and H. J. Prömel, “Combinatorics,” _Oberwolfach Reports_ , vol. 1, no. 1, pp. 5–110, 2004. * [85] R. E. Bellman and S. E. Dreyfus, _Applied dynamic programming_. Princeton University Press, 2015, vol. 2050. * [86] R. C. Edgar and S. Batzoglou, “Multiple sequence alignment,” _Current Opinion in Structural Biology_ , vol. 16, no. 3, pp. 368–373, 2006. * [87] S. Even, _Graph algorithms_. Cambridge University Press, 2011. * [88] K. G. Field, G. J. Olsen, D. J. Lane, S. J. Giovannoni, M. T. Ghiselin, E. C. Raff, N. R. Pace, and R. A. Raff, “Molecular phylogeny of the animal kingdom,” _Science_ , vol. 239, no. 4841, pp. 748–753, 1988. * [89] C. M. Grinstead and J. L. Snell, _Introduction to probability_. American Mathematical Soc., 1997.
# Simulated Adversarial Testing of Face Recognition Models

Nataniel Ruiz, Boston University <EMAIL_ADDRESS>; Adam Kortylewski, Johns Hopkins University <EMAIL_ADDRESS>; Weichao Qiu, Johns Hopkins University <EMAIL_ADDRESS>; Cihang Xie, UC Santa Cruz <EMAIL_ADDRESS>; Sarah Adel Bargal, Boston University <EMAIL_ADDRESS>; Alan Yuille*, Johns Hopkins University <EMAIL_ADDRESS>; Stan Sclaroff*, Boston University <EMAIL_ADDRESS> (*Equal senior contribution.)

###### Abstract

Most machine learning models are validated and tested on fixed datasets. This can give an incomplete picture of the capabilities and weaknesses of the model. Such weaknesses can be revealed at test time in the real world. The risks involved in such failures can be loss of profits, loss of time or even loss of life in certain critical applications. To alleviate this issue, simulators can be controlled in a fine-grained manner using interpretable parameters to explore the semantic image manifold. In this work, we propose a framework for learning how to test machine learning algorithms using simulators in an adversarial manner in order to find weaknesses in the model before deploying it in critical scenarios. We apply this method in a face recognition setup. We show that certain weaknesses of models trained on real data can be discovered using simulated samples. Using our proposed method, we can find adversarial synthetic faces that fool contemporary face recognition models. This demonstrates that these models have weaknesses that are not measured by commonly used validation datasets. We hypothesize that this type of adversarial example is not isolated, but usually lies in connected regions of the latent space of the simulator. We present a method to find these adversarial regions, as opposed to the typical adversarial points found in the adversarial example literature.

## 1 Introduction

Evaluating a machine learning model can have many pitfalls. Ideally, we would like to know (1) when the model will fail, (2) in which way it will fail, and (3) how badly it will fail. In other words, we would like to be able to accurately estimate the model’s risk on the true test data distribution, as well as know which specific factors induce the model to fail. We would like to know how these failures will manifest themselves: for example, whether a face verification model will generate a false-positive or a false-negative error. And finally, when such a failure happens, we would like to know how confident the model was in its incorrect decision. Testing models is no longer a purely academic endeavour [62], with several high-profile harmful societal consequences revealed in recent years due to insufficient testing, particularly with respect to racial and gender bias in face analysis systems [5, 15, 19].

These three desiderata are very hard to achieve in practice. There are major philosophical and theoretical obstacles to achieving perfect knowledge of model failures a priori. Nevertheless, partial knowledge of model weaknesses and predictions of model failures are possible. Yet, there are still major hurdles that stand in our way. One such hurdle is the fact that testing data is limited, since it is expensive to gather and label. It is not uncommon for a model to perform well on an assigned test set and fail to generalize to specific obscure examples when it is deployed. A second important hurdle is the fact that testing data is unruly.
There are latent factors that generate the testing data, which are hard to control or even to fully understand. For example, a known factor that is hard to control is the lighting of a scene. Most datasets have been captured without controlling for this variable, and thus present an insufficient amount of variability in this respect. Testing a model in one environment could yield perfect performance, yet fail on an environment with more lighting variability. Even if a test dataset with carefully controlled lighting were assembled, the dataset would be very expensive and time- consuming to collect and there is no guarantee that the full variability would be explored. A way to tackle these problems is to use simulators to generate test data. Such an approach can cheaply generate a large quantity of data spanning a large spectrum. Also, simulators are fully controllable and the generative parameters are known. This allows for careful exploration of situations where models fail. This includes the possibility to find intepretable factors that generate failures, to study the way these failures manifest themselves (is the model classifying a cat as a jaguar when there is green in the background?) and to examine the degrees of certainty of the model in these failure modes. When simulating test data, we have full control over simulator parameters. Thus, we are able to explore the manifold generated by the simulator in the space of the simulator parameters. We call this manifold the semantic image manifold, in contrast to the adversarial image manifold that is explored in the traditional adversarial attack literature. A random exploration of this manifold is both inefficient and not the most informative approach. In this work we propose to test machine learning models using simulation in an adversarial manner by finding simulator parameters that generate samples that fool the model. We are inspired by the literature on adversarial examples that fool machine learning models, yet in contrast to this body of work, the adversarial examples that our simulator generates are semantically realistic in the sense that we are not adding low magnitude noise to an image in order to fool the model but finding semantically sensible image configurations that generate model failure. In this way, we are not investigating the well-known weakness of gradient-based models to unrealistic targeted noise but to plausible scenes that might be rare, yet mislead the model. We present a method that finds adversarial samples efficiently using a continuous policy that searches the high-dimensional space of possibilities. A limitation of this type of work is that, in general there exists domain shift between the distribution described by the simulator and the real world distribution [14, 7, 57, 56, 20, 40]. Nevertheless, in our work we are able to show that in some situations, real model weaknesses can be found using simulated data. This gives credence to the hypothesis that, even though there is domain shift, simulated samples can be informative. Also, simulators are rapidly improving in terms of realism [49, 36, 11, 30]. This allows for greater opportunities to use these ideas in the future as simulated and real data distributions become more and more aligned. We hypothesize that these adversarial examples are not isolated points in space, but instead are regions of this manifold. 
In prior work on traditional adversarial examples, optimization procedures find adversarial samples that are points in image space [55, 37, 6, 18, 33, 50]. In contrast to this body of work we propose a method to find these adversarial regions instead. This is valuable because ideally we would like to be able to fully describe the machine learning model’s regions of reliability, where model predictions will tend to be correct. With this knowledge a user would be able to avoid performing inference on a model outside of its scope in order to minimize failures. Contributions of this work are three-fold. We summarize them as follows: * • We show that weaknesses of models trained on real data can be discovered using simulated samples. We perform experiments on face recognition networks showing that we can diagnose the weakness of a model trained on biased data. * • We present a method to find adversarial simulated samples in the semantic image manifold by finding adversarial simulator parameters that generate such samples. We present experiments on contemporary face recognition networks showing that we can efficiently find faces that are incorrectly recognized by the network. * • We present a method to find regions that are adversarial, in order to locate danger zones where a model’s predictions are more liable to be incorrect. To the best of our knowledge, we are the first to explore the existence of these adversarial regions in the interpretable latent space of a simulator. Figure 1: Our method applied to the face verification scenario. The simulator is conditioned on parameters generated by the policy. An image pair of the same identity is generated. Face verification is run on this image pair using the face recognition network that is to be diagnosed. A reward is computed based on the correct or incorrect prediction of the network and policy parameters are updated accordingly. ## 2 A Framework for Simulated Adversarial Testing Here we formalize adversarial testing using a simulator. We postulate some assumptions on the data generation process in the real and simulator world. Then we give the risks for a machine learning model and the mathematical formulation to find adversarial parameters that yield samples that fool machine learning models. We then present some parallels between our scenario and the literature on learning across domains. Finally, we describe our proposed algorithm to find such adversarial simulator parameters and adversarial samples. Let us assume the real world data $(x,y)$ (where $x$ is the data and $y$ is the label) is generated by the distribution $p(x,y|\psi)$ where $\psi$ is a latent variable that causally controls the data generation process. For example, $\psi$ includes the object type in the image and the angle of view of such an object, as well as all other parameters that generate the scene and image. The risk for a discriminative model $f$ is: $\mathbb{E}_{\psi\sim a}[\mathbb{E}_{(x,y)\sim p(x,y|\psi)}[L(f(x),y)]],$ (1) where $a$ is the distribution of $\psi$ and $L$ is the loss. We can search for $\psi^{*}$ that maximizes this risk: $\max_{\psi\in A}[\mathbb{E}_{(x,y)\sim p(x,y|\psi)}[L(f(x),y)]]$ (2) where $A$ is the set of all possible $\psi$. Let us assume that we have $\psi=(\psi_{u},\psi_{k})$, a decomposition of $\psi$ into two latent variables $\psi_{u}$ and $\psi_{k}$. 
Furthermore, let us assume that $\psi_{u}$ controls for unknown features of the image, and $\psi_{k}$ controls for known features of the image such as the camera pose, or the object position with respect to the camera. We can write the average risk as: $\mathbb{E}_{\psi_{u}\sim a}[\mathbb{E}_{\psi_{k}\sim b}[\mathbb{E}_{(x,y)\sim p(x,y|\psi_{u},\psi_{k})}[L(f(x),y)]]],$ (3) where $b$ is the distribution of $\psi_{k}$. In most scenarios, we do not have access to the real data distribution $p$ and cannot sample from it at will. Additionally, it is very difficult to control the known latent variable $\psi_{k}$ when generating data, and we do not even know what factors are hidden in the variable $\psi_{u}$, much less how to control it. Using simulated data we are able to fully control the generative process. A simulator samples data $(x,y)\sim q(x,y|\rho)$, where $q$ is the simulated data distribution and we have complete knowledge over the latent variable $\rho$. We are able to search for adversarial examples and compute estimates of the mean and worst-case risks using this simulator. For example, the parameter $\rho^{*}$ that maximizes the risk is written as follows: $\max_{\rho\in C}[\mathbb{E}_{(x,y)\sim q(x,y|\rho)}[L(f(x),y)]]$ (4) where $C$ is the set of all possible $\rho$. We can find $\hat{\rho^{*}}$, an estimate of $\rho^{*}$, by sampling (albeit inefficiently). In our case we are working in a less restrictive scenario since we do not try to find the global maximum $\rho^{*}$, instead we try to find any $\rho$ where $\mathbb{E}_{(x,y)\sim q(x,y|\rho)}[L(f(x),y)]$ is above the misclassification threshold. If we assume that the distributions $p$ and $q$ are similar enough we can use the knowledge gathered in simulation to understand the possibilities of failure in the real world. Essentially, this is a different kind of domain shift problem. In a traditional setting of transfer learning between domains, we are concerned about minimizing the risk on a target domain by training on a source domain. In the binary classification case, let us define a domain as a pair consisting of a distribution $p$ on inputs $\mathcal{X}$ and a labeling function $g_{p}:\mathcal{X}\rightarrow[0,1]$. We consider the real domain and the simulated domain denoted by $(p,g_{p})$ and $(q,g_{q})$ respectively. We also introduce a hypothesis that is a function $h:\mathcal{X}\rightarrow\\{0,1\\}$. We can write the risk of this hypothesis on $p$ as: $\epsilon_{p}(h,g_{p})=\mathbb{E}_{x\sim p}[|h(x)-g_{p}(x)|]$ (5) In traditional domain adaptation from simulation to reality, we seek to learn on distribution $q$ and generalize to distribution $p$. We want to find a hypothesis that minimizes the risk on the target real world distribution $\epsilon_{p}(h,g_{p})$ by training on samples from $q$. In our setting, we do not train on synthetic samples. Instead we want to find a relationship between testing a hypothesis $h$ on samples from distribution $q$ and testing $h$ on samples from $p$. There exist bound results for the risks $\epsilon_{p}(h,g_{p})$ and $\epsilon_{q}(h,g_{q})$ in the work of Ben- David et al. [4]: $\epsilon_{p}(h,g_{p})<\epsilon_{q}(h,g_{q})+d_{1}(q,p)+\\\ \min\\{\mathbb{E}_{p}[|g_{q}(x)-g_{p}(x)|],\mathbb{E}_{q}[|g_{q}(x)-g_{p}(x)|]\\},$ (6) where $d_{1}$ is the variation divergence. 
The second term on the right-hand side quantifies the difference between the distributions $q$ and $p$, and the third term is the difference between the labeling functions across domains, which is expected to be small. Since this bound characterizes the cross-domain generalization error and $\epsilon_{q}(h,g_{q})$ will usually be minimized by the learning algorithm, it is useful for studying transfer learning between domains. There are some differences in our scenario, since for us $h$ is a fixed function that has been trained on the target domain and we would like to reason about individual examples instead of the overall risk over distributions. Also, the bound is proven for a binary classification problem, whereas our target scenario can be multi-class classification or regression.

Assume there exists a mapping $\tau:C\rightarrow A$ that maps the simulated latent variables to real latent variables $\psi=\tau(\rho)$. In order for adversarial examples in the simulator domain to be informative in the real domain, we want to have a simulator such that:

$\mathbb{P}_{(x_{s},y_{s})\sim q,(x_{r},y_{r})\sim p}[|L(x_{s},y_{s})-L(x_{r},y_{r})|<\epsilon]>\theta.$ (7)

We denote $p(x_{r},y_{r}|\tau(\rho))$ as $p$ and $q(x_{s},y_{s}|\rho)$ as $q$ in the equation above for succinctness. Here $\epsilon$ is small and $\theta\in[0,1]$ is large. This way, high-loss examples found in the semantic image manifold using simulation have a high probability of transferring to the real world. Since the simulator and real domain are different, this is a moderately strong assumption. Nevertheless, we show cases where this assumption holds in our experimental evaluations in Section 4.3.

#### Finding Adversarial Parameters

Our task is then to find $\rho$ such that the loss over samples generated with this latent variable is above the misclassification threshold $T$. One main difficulty in searching for latent variables that fulfill this condition is that, in general, the simulator $q$ is non-differentiable. Thus, we turn to black-box optimization methods to search for adversarial parameters. Specifically, we use a policy gradient method [59]. We define a policy $\pi_{\omega}$ parameterized by $\omega$ that can sample simulator parameters $\rho\sim\pi_{\omega}(\rho)$. We train this policy to produce simulator parameters whose samples obtain high loss when fed to the machine learning model $f$. For this we define a reward $R$ that is equal to the negative loss $L$ and we want to find the parameters $\omega$ that maximize $J(\omega)=\mathbb{E}_{\rho\sim\pi_{\omega}}[R]$. Following the REINFORCE rule we obtain gradients for updating $\omega$ as

$\nabla_{\omega}J(\omega)=\mathbb{E}_{\rho\sim\pi_{\omega}}\big[\nabla_{\omega}\log(\pi_{\omega})R(\rho)\big]\;.$ (8)

An unbiased, empirical estimate of the above quantity is

$\mathcal{L}(\omega)=\frac{1}{K}\sum_{k=1}^{K}\nabla_{\omega}\log(\pi_{\omega})\hat{A}_{k}\;,$ (9)

where $\hat{A}_{k}=R(\rho_{k})-\beta$ is the advantage estimate, $\beta$ is a baseline, $K$ is the number of different parameters $\rho$ sampled in one policy forward pass, and $R(\rho_{k})$ designates the reward obtained by evaluating $f$ on $(x_{k},y_{k})\sim q(x_{k},y_{k}|\rho_{k})$. We show all of the steps of our method in Algorithm 1, and we show an illustration of our method applied to the face verification scenario in Figure 1.
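Before the full algorithm listing, a minimal numerical sketch of this policy-gradient search is given below. The Gaussian policy with a fixed variance, the learning rate, and the toy `evaluate_loss` surface are hypothetical placeholders, and the sign convention (the reward grows with the model loss, so ascending it drives the policy toward failures) is an assumption about how the reward is wired to the loss in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate_loss(rho):
    """Hypothetical stand-in for rendering (x, y) ~ q(.|rho) and scoring L(f(x), y).
    Here: a smooth synthetic loss bump peaked at rho = 0.8, purely for illustration."""
    return float(np.exp(-np.sum((rho - 0.8) ** 2)))

def adversarial_search(dim=2, K=32, iters=500, lr=0.05, sigma=0.05, T=0.9):
    mu = np.zeros(dim)        # learned policy mean (mu_pi); sigma is kept fixed
    baseline = 0.0            # running reward baseline (beta)
    for _ in range(iters):
        rhos = mu + sigma * rng.standard_normal((K, dim))   # rho_k ~ N(mu, sigma^2 I)
        losses = np.array([evaluate_loss(r) for r in rhos])
        if losses.max() > T:                  # misclassification threshold reached
            return rhos[losses.argmax()]      # adversarial simulator parameters
        # assumed sign convention: reward increases with the model loss,
        # so ascending the reward pushes the policy toward failure cases
        rewards = losses
        advantages = rewards - baseline
        # grad_mu log N(rho; mu, sigma^2 I) = (rho - mu) / sigma^2   (cf. equation 9)
        grad = ((rhos - mu) / sigma**2 * advantages[:, None]).mean(axis=0)
        mu += lr * grad
        baseline = 0.9 * baseline + 0.1 * rewards.mean()
    return None

print(adversarial_search())
```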
Algorithm 1: Our adversarial testing approach using a policy gradient method.

Result: adversarial simulator parameters $\rho_{k}$ and adversarial sample $x_{k}$
for iteration = 1, 2, … do
  Generate $K$ simulator parameters $\rho_{k}\sim\pi_{\omega}(\rho_{k})$
  Generate $K$ samples $(x_{k},y_{k})\sim q(x_{k},y_{k}|\rho_{k})$
  Test the discriminative model and obtain $K$ losses $L(f(x_{k}),y_{k})$
  if $\exists k\in\{1,...,K\}:L(f(x_{k}),y_{k})>T$ then
    Terminate and yield adversarial sample $x_{k}$ and adversarial simulator parameters $\rho_{k}$
  end if
  Compute rewards $R(\rho_{k})$
  Compute the advantage estimates $\hat{A}_{k}=R(\rho_{k})-\beta$
  Update $\omega$ via Equation 9
end for

## 3 Finding Adversarial Regions

Here we describe our method to find adversarial regions. Once an adversarial simulator latent vector $\rho_{\text{adv}}\in\mathbb{R}^{n}$ has been found using Algorithm 1, we define a graph $G=(V,E)$. $V$ are the vertices of the graph, obtained by discretizing the space around the adversarial point into a grid with spacing $\nu$ between vertices. The edges $E$ of the graph connect neighboring vectors, with each vector having $2n$ neighbors. We find the connected space of adversarial examples $\mathcal{R}_{\text{adv}}$ that is seeded by $\rho_{\text{adv}}$ by following Algorithm 2. In essence, our method follows the general idea of an area flooding algorithm [31, 54], with two main differences: first, we discretize a continuous $n$-dimensional space instead of working on a binary 2-dimensional image, and second, we check for membership of $\mathcal{R}_{\text{adv}}$ by testing whether the model loss is higher than the adversarial threshold, $L(f(x),y)>T$.

Algorithm 2: Finding connected spaces of adversarial examples.

Result: connected space of adversarial examples $\mathcal{R}_{\text{adv}}$
Data: seed adversarial simulator parameters $\rho_{\text{adv}}$
$\mathcal{R}_{\text{adv}}=\{\rho_{\text{adv}}\}$
Initialize a stack $\chi$. Push the $2n$ neighbors of $\rho_{\text{adv}}$ to $\chi$.
for $i=1,2,…$ do
  Pop $\rho_{i}$ from $\chi$
  Sample $(x_{i},y_{i})\sim q(x_{i},y_{i}|\rho_{i})$
  Test the discriminative model and obtain the loss $L(f(x_{i}),y_{i})$
  if $L(f(x_{i}),y_{i})>T$ then
    $\mathcal{R}_{\text{adv}}=\mathcal{R}_{\text{adv}}\cup\{\rho_{i}\}$
    Push all neighbors of $\rho_{i}$ that have not been visited to $\chi$
  end if
end for

## 4 Experimental Results

### 4.1 Controllable Face Simulation

We use the FLAME face model [29] as a controllable face simulator with the Basel texture model [39]. FLAME uses a linear shape space trained from 3,800 3D scans of human heads and combines this linear shape space with an articulated jaw, neck, and eyeballs, pose-dependent corrective blendshapes, and additional global expression blendshapes. In this way, using shape and texture components we can generate faces with different identities. The synthetic faces that are generated in our work are new and do not mimic any existing person’s features. By changing the pose and expression components we can add variability to these faces. Moreover, we have full control over the scene lighting and the head and camera pose and position. In order to render our scene we use the PyTorch3D rendering framework [42]. We extract the corresponding shape, texture and expression components from the real faces of the CASIA WebFace dataset using DECA [10].
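Referring back to Algorithm 2, the region-growing search over the discretized latent grid can be sketched as follows; the `evaluate_loss` oracle standing in for rendering a sample and scoring it with the face recognition model, the grid spacing, and the toy 2-D loss bump are all hypothetical placeholders.

```python
import itertools
import numpy as np

def adversarial_region(rho_adv, evaluate_loss, T, nu=0.05, max_checks=10000):
    """Sketch of Algorithm 2: grow the connected adversarial region around a seed.

    The n-dimensional latent space is discretized with grid spacing nu; every vertex
    has 2n axis-aligned neighbours, and a vertex joins the region when the model loss
    returned by the (hypothetical) evaluate_loss oracle exceeds the threshold T."""
    rho_adv = np.asarray(rho_adv, dtype=float)
    n = rho_adv.size
    seed_key = tuple(np.round(rho_adv, 6))
    region, visited, stack = {seed_key}, {seed_key}, []
    for i, step in itertools.product(range(n), (+nu, -nu)):   # push the 2n seed neighbours
        neighbour = rho_adv.copy()
        neighbour[i] += step
        visited.add(tuple(np.round(neighbour, 6)))
        stack.append(neighbour)
    checks = 0
    while stack and checks < max_checks:
        rho = stack.pop()
        checks += 1
        if evaluate_loss(rho) <= T:           # not adversarial: do not expand further
            continue
        region.add(tuple(np.round(rho, 6)))
        for i, step in itertools.product(range(n), (+nu, -nu)):
            neighbour = rho.copy()
            neighbour[i] += step
            key = tuple(np.round(neighbour, 6))
            if key not in visited:
                visited.add(key)
                stack.append(neighbour)
    return region

# toy usage: a 2-D loss bump around (0.8, 0.8) with adversarial threshold 0.9
toy_loss = lambda r: float(np.exp(-np.sum((np.asarray(r) - 0.8) ** 2)))
print(len(adversarial_region([0.8, 0.8], toy_loss, T=0.9)))
```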
### 4.2 Models, Datasets and Infrastructure In our experiments we use the CASIA WebFace [61] dataset for training the face recognition models and the LFW [23] dataset for real-world data testing. We use a Convolutional Block Attention Module (CBAM) [60] ResNet50 with the ArcFace [8] loss as our base face recognition model. We also test our method on MobileNet [21] and CBAM-Squeeze-Excitation-ResNet [22] architectures and the CosFace [58] loss. We use a multivariate Gaussian policy $\pi(\rho)=\mathcal{N}(\mu_{\pi},\sigma_{\pi}^{2})$ where the variance is fixed $\sigma_{\pi}^{2}=0.05\times I$ and $\mu_{\pi}$ is learned. For the random optimization baseline we use one Gaussian for each parameter type with standard deviation $\sigma_{\text{rs}}=\frac{w_{p}}{10}\times I$, where $w_{p}$ is the width of the parameter domain. For the Gaussian random sampling baseline we use a standard deviation $\sigma_{\text{g}}=\frac{w_{p}}{2}$. We use a GeForce RTX 2080 GPU with 11GB of memory to perform all of our experiments. ### 4.3 Testing Weakened Models We present a way to verify that knowledge from simulated weaknesses translates to real-world weaknesses. We weaken two networks by training on the CASIA WebFace dataset with images that exhibit a yaw parameter $[-\infty,-0.5]$ and $[0.5,+\infty]$ filtered out. We extract the yaw parameter using DECA. We call these the Negative Yaw Filtered (NYF) and Positive Yaw Filtered (PYF) datasets/networks, respectively. Both datasets have roughly the same number of samples: the Negative Yaw Filtered dataset has $\sim$ 440k training samples and the Positive Yaw Filtered dataset has $\sim$ 449k samples. We also train a Normal network on all of the $\sim$ 491k samples of the unfiltered CASIA WebFace dataset. We then test both the normal network and the yaw-weakened networks on simulated samples. We do this by generating two images of a same person, by fixing the shape, texture and expression parameters. The first image is a frontal image of the person. We vary the yaw component of the second image in the $[-1,1]$ range, where $-1$ and $1$ in the yaw component indicate a fully-profile face on the negative and positive sides, and compute the cosine similarity between the embeddings of the two images. This cosine similarity should be large given that the two images presented are of the same identity. A low cosine similarity means that the network has less confidence that the images show the same person. We plot this in Figure 2, and observe that each yaw-weakened network makes less accurate predictions for images presenting high yaw in their respective weakness intervals. Note that all networks perform almost identically with frontal samples. Also, note that the normal network is almost always superior to the two weakened networks. This is a natural result of having 10% more training data. This plot is an average over 25 different identities that we obtain by grid-sampling the first texture and shape components over the range $[-\sigma,\sigma]$. We compute the area between the curves for the $[-1.0,-0.5]$, $[-0.5,0.5]$ and $[0.5,1.0]$ intervals. We observe in Table 1 (left) that in the $[-1.0,-0.5]$ yaw range, precisely where the NYF network has been weakened, the area between the Normal-NYF curves is large and the area between the Normal-PYF curves is small. Conversely, in the $[0.5,1.0]$ range, where PYF has been weakened, we see that the difference between the Normal-PYF curves is large and the Normal- NYF difference is smaller. 
Also, we observe near-identical differences between Normal-NYF and Normal-PYF in the $[-0.5,0.5]$ interval, which is a consequence of the smaller amount of training data of the NYF and PYF networks. We also compute pairwise mean differences for the different populations of Normal, NYF and PYF networks and present them in Table 1 (right). We highlight in blue the statistically significant differences. We obtain similar results as in Table 1 (left). This evidence indicates that when a weakness is purposefully created in a network by filtering out key samples in the real training dataset, we can retrieve this weakness using our face simulator. This gives credence to the idea that we are able to find simulated adversarial examples in the semantic image manifold that will give us knowledge about adversarial examples in the real world.

Figure 2: Recognition cosine similarity between two simulated pairs (frontal and variable yaw) of the same identity (avg. over 25 different identities). The Negative Yaw Filtered network exhibits less accurate predictions for highly negative yaw images than both the Positive Yaw Filtered and Normal networks. The Positive Yaw Filtered network exhibits less accurate predictions for highly positive yaw images than both other networks.

Table 1: Quantitative differences between the evaluation of the purposefully weakened Negative Yaw Filtered (NYF) and Positive Yaw Filtered (PYF) networks and the Normal network on synthetic faces (bold values for emphasis). Blue values in the right-hand table mean the differences are statistically significant with $p<0.01$.

Area Between Curves:

Models / Yaw Interval | [-1.0, -0.5] | [-0.5, 0.5] | [0.5, 1.0]
---|---|---|---
Normal:NYF | 8.69 | 2.83 | 4.68
Normal:PYF | 2.71 | 2.76 | 8.46

Mean Difference:

Models / Yaw | -1.0 | 0.0 | 1.0
---|---|---|---
Normal-NYF | 0.18 | 0.01 | 0.10
Normal-PYF | 0.01 | 0.00 | 0.16
NYF-PYF | -0.17 | -0.01 | 0.06

### 4.4 Simulated Adversarial Testing of Face Recognition Models

In this section we evaluate adversarial testing of face recognition models for face verification. Specifically, we generate samples using the FLAME face model and use our proposed search algorithm to fool face recognition models. We train an ArcFace CBAM-ResNet50 on CASIA WebFace for 20 epochs. This network achieves 99.1% accuracy on the LFW test set for the face verification task. The evaluation task is face verification between two synthetic images of the same person’s face, one frontal and one profile image. We vary the first 15 shape parameters as well as the first 15 texture parameters for our generated identities, ranging from $-2\sigma$ to $2\sigma$, where $\sigma$ is the standard deviation of each parameter in question. We test the network using 100 identities obtained by randomly sampling these parameters from a uniform distribution. We also test the network using 100 runs of our adversarial testing algorithm (200 maximum iterations). In Table 2, we show that the random sampling testing regime achieves an accuracy of 99%, which is very close to the 99.1% real-world accuracy of the network on the LFW test set. Using adversarial testing, the network exhibits an accuracy of 36%, which is a marked drop in verification performance. We also compute the average cosine similarity between pairs, showing that adversarial testing generates highly adversarial samples (success threshold $T=0.298$) whereas random samples are highly non-adversarial on average.
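For reference, the verification decision underlying these accuracy numbers reduces to thresholding the cosine similarity between the two face embeddings. A minimal sketch is shown below; `embed_net`, the 112x112 input size, and the toy random projection are hypothetical placeholders, while the 0.298 threshold is the success threshold quoted above.

```python
import torch
import torch.nn.functional as F

def verify_pair(embed_net, img_a, img_b, threshold=0.298):
    """Face verification between two images of the (supposedly) same synthetic identity.

    embed_net is a hypothetical stand-in for the trained face recognition backbone;
    a pair counts as an adversarial sample when the cosine similarity between its
    embeddings falls below the decision threshold."""
    with torch.no_grad():
        e_a = embed_net(img_a.unsqueeze(0)).squeeze(0)
        e_b = embed_net(img_b.unsqueeze(0)).squeeze(0)
    cos_sim = F.cosine_similarity(e_a, e_b, dim=0).item()
    return cos_sim, cos_sim >= threshold   # (similarity, verified as same identity?)

# toy usage: a random linear projection stands in for the trained network,
# and 112x112 RGB crops are an assumed input size
toy_net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 112 * 112, 512))
sim, same = verify_pair(toy_net, torch.rand(3, 112, 112), torch.rand(3, 112, 112))
print(f"cosine similarity {sim:.3f}, verified as same identity: {same}")
```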
In Figure 3 we show a subset of the generated samples for both adversarial testing (above) and random sampling (below). We perform further simulated adversarial testing experiments on several combinations of network backbones (CBAM-ResNet50, CBAM-SE-ResNet50 [22], MobileNet) and face recognition losses (ArcFace, CosFace) trained on CASIA WebFace for 20 epochs. All networks achieve accuracies in the (98.85%, 99.1%) range on the LFW test set. We vary 30 shape parameters and 30 texture parameters, each ranging from $-2\sigma$ to $2\sigma$, where $\sigma$ is the standard deviation of each parameter. We also vary the yaw pose parameter within $[-1,+1]$, corresponding to variations within $[-\pi/2,+\pi/2]$, and the pitch pose parameter within $[-1/4,+1/4]$, corresponding to variations within $[-\pi/8,+\pi/8]$. Thus, in this case our algorithm has to learn 62 parameters. This is a more challenging scenario due to the larger dimensionality of the policy output. We perform 100 runs of our adversarial testing algorithm (200 maximum iterations), 100 runs of Random Optimization using a Gaussian sampling distribution, and 1,000 iterations of uniform random sampling and Gaussian random sampling. We compare these testing methods in Table 3 and show that the networks achieve very high accuracies for both random sampling regimes and for testing using random optimization. Using adversarial testing, all networks exhibit a marked drop in verification performance. There is also a large decrease in the average cosine similarity between pairs, showing that adversarial testing generates highly adversarial samples (below the respective success thresholds $T=(0.298,0.237,0.292,0.294)$), whereas the other methods generate “easy” samples on average. Further, for example, for ArcFace CBAM-ResNet50, adversarial testing finds 51 adversarial samples over 12,587 iterations while random sampling finds only one adversarial sample over 1,000 iterations. This makes adversarial testing roughly 400% more sample efficient than random sampling in this specific scenario. In some of our tested scenarios, and depending on the number of iterations, random sampling was not able to find any adversarial samples; this is reflected by a 100% face verification accuracy. In Figure 4, we show several successful adversarial testing runs (orange/red) and one random sampling run (green). Unsuccessful optimization attempts usually converge to low cosine similarity without becoming adversarial and remain in high-dimensional local minima. Finally, we show an example of adversarial testing in action where all 30 shape, 30 texture and 2 pose parameters are being learned jointly in Figure 5. The algorithm finds an adversarial sample that reveals model weaknesses such as vulnerability to unusual poses, exaggerated facial features and distinct skin color.

Table 2: CBAM-ResNet50 face verification accuracy over synthetic datasets generated by uniform random sampling or by adversarial testing (Adv. Testing). We vary the identity by varying 15 shape parameters and 15 texture parameters.

Method | Accuracy $\downarrow$ | Avg. Cosine Similarity $\downarrow$
---|---|---
Uniform Random | 99% | 0.518
Adv. Testing | 36% | 0.263

Figure 3: Face models obtained using adversarial testing (above) and random parameter sampling (below). A green border denotes pairs that are successfully verified as the same identity, whereas a red border denotes failed verification (model failure).
We obtain adversarial samples using our adversarial testing method more consistently than with random parameter sampling. Some recurring features of adversarial faces are ambiguous frontal/profile features (e.g. long nose, tucked jaw), pale/dark skin colors and left/right asymmetries.

| Loss + Backbone | Uniform Random Acc. $\downarrow$ | Uniform Random Avg. CS $\downarrow$ | Gaussian Random Acc. $\downarrow$ | Gaussian Random Avg. CS $\downarrow$ | Random Opt. Acc. $\downarrow$ | Random Opt. Avg. CS $\downarrow$ | Adv. Testing Acc. $\downarrow$ | Adv. Testing Avg. CS $\downarrow$ |
|---|---|---|---|---|---|---|---|---|
| ArcFace CBAM-ResNet50 | 99.9% | 0.766 | 99.3% | 0.695 | 93% | 0.414 | 49% | 0.282 |
| CosFace CBAM-ResNet50 | 99.9% | 0.696 | 99.6% | 0.637 | 86% | 0.318 | 57% | 0.281 |
| ArcFace SE-CBAM-ResNet50 | 99.8% | 0.738 | 97.7% | 0.663 | 73% | 0.348 | 34% | 0.305 |
| ArcFace MobileNet | 100% | 0.825 | 99.8% | 0.751 | 96% | 0.454 | 58% | 0.372 |

Table 3: Comparison of different synthetic sampling techniques on different combinations of network backbones and face recognition losses. We vary 30 shape, 30 texture and 2 pose parameters.

### 4.5 Finding Adversarial Regions of Face Recognition Models

We use our method described in Algorithm 2 to find adversarial regions in the simulator latent space for face recognition models. We do this in the face verification scenario between a frontal image with a neutral expression and a profile image with an open jaw. We vary the first shape parameter and the first texture parameter to find an adversarial sample, and then find the connected space around those seed parameters. We also grid sample both parameters in order to plot the synthetic sample surface. We show the surface of all synthetic samples (blue), along with the adversarial region (red) and the adversarial threshold plane (orange) in Figure 6. We are successful in finding the adversarial regions when they exist. We discover a surprising fact when plotting the synthetic loss landscape (Figure 7) of all the tested networks. In this configuration with only 2 variable parameters, the only network with an adversarial region is ArcFace CBAM-ResNet50. Even though all networks have been trained in the same manner on the same dataset, the network backbone and the loss function change the loss landscape substantially. Some networks have a similar downward slope from negative shape towards positive shape, but particularities arise in others. Strikingly, ArcFace MobileNet is the most robust of all the networks in this scenario, with a landscape far above the misclassification threshold plane. Its landscape shape is also completely different from that of the other networks.

Figure 4: Cosine similarity for successful adversarial testing (red) and random parameter sampling (green).

Figure 5: A sequence of generated synthetic samples undergoing adversarial testing (left to right, top to bottom). Our method searches through all 30 shape, 30 texture and 2 pose parameters jointly to find an adversarial face. The border line colors denote whether the face recognition network can successfully verify the pairs, with red denoting a failed verification and green denoting a successful verification.
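The region-finding idea described above can be illustrated as a flood fill over a grid of cosine similarities: starting from an adversarial seed, neighbouring parameter settings are added to the region while their similarity stays below the threshold. The similarity landscape below is synthetic, and the seed search, grid resolution and connectivity criterion are illustrative assumptions; the actual Algorithm 2 may differ.

```python
from collections import deque
import numpy as np

def adversarial_region(sim_grid: np.ndarray, seed: tuple, T: float) -> np.ndarray:
    """Flood fill over a grid of cosine similarities: starting from an adversarial
    seed cell, return a boolean mask of the connected region where similarity < T."""
    rows, cols = sim_grid.shape
    region = np.zeros_like(sim_grid, dtype=bool)
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < rows and 0 <= c < cols) or region[r, c] or sim_grid[r, c] >= T:
            continue
        region[r, c] = True
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])  # 4-connectivity
    return region

# Toy example: a synthetic similarity landscape over a 2D (shape, texture) grid.
xs = np.linspace(-3, 3, 61)
shape, texture = np.meshgrid(xs, xs, indexing="ij")
sim_grid = 0.5 + 0.3 * np.tanh(shape) - 0.25 * np.exp(-(shape + 2) ** 2 - texture ** 2)
seed = np.unravel_index(np.argmin(sim_grid), sim_grid.shape)   # seed at the lowest-similarity cell
mask = adversarial_region(sim_grid, seed, T=0.298)
print(f"adversarial cells: {mask.sum()} of {mask.size}")
```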
Recent works even learn to adapt the generative distribution of synthetic data in order for the model to learn better representations [48, 32, 13, 3, 25, 2] or adapt the pixels or features of the synthetic data to bridge the synthetic-to-real domain gap [14, 7, 57, 56, 20, 40]. In contrast to this body of work, we propose to search the parameter space of a simulator in order to test the model in an adversarial manner. There is very interesting work that adapts generative distributions in order to test models [1, 63, 53]. In contrast to [63, 53] we test computer vision models that are trained on real data, which is a more challenging scenario since the domain shift problem has to be addressed and overcome. Different from [63, 1, 53] we work on the domain of face recognition instead of object classification or VQA, where we have a higher number of simulator parameters including shape, expression, texture, lighting and pose parameters. We search the parameter landscape using a continuous policy that explores all parameters simultaneously, which is important since model performance does not vary independently with each parameter (as Figure 6 shows), and discrete changes in parameter space can yield high loss changes due to gradient sharpness. A final difference from these works and from traditional adversarial attacks [55, 37, 6, 18, 33, 45, 46] is that we present a method that not only finds one isolated adversarial example, but locates regions of them. There exist methods that propose objectives that locate regions of adversarial examples [51]. In contrast, we explore the adversarial regions that lie in the latent space of a simulator instead of pixel space.

Figure 6: Our algorithm finds the adversarial region (red) in the shape-texture landscape (blue). We plot the initial learning trajectory (lighter red) that yields the seed adversarial simulator parameters. We also plot the adversarial threshold plane (orange).

Figure 7: Landscape comparisons for different network backbones and losses. Networks trained on CASIA WebFace.

## 6 Conclusion

In this work we propose to test machine learning models by searching for semantically realistic adversarial examples using a simulator. We present a framework for simulated adversarial testing, as well as a method to find simulated adversarial examples. Finally, we present a method to find connected spaces of adversarial examples in the semantic space of latent variables and evaluate our methods on contemporary face recognition networks using a face simulator. We find that face recognition networks that have real-world weaknesses due to training sets biased with respect to pose can be analyzed using controllable simulated faces, and these weaknesses can be discerned. We also find that contemporary face recognition networks are fooled by specific combinations of simulated face shapes and textures. Some recurring features of adversarial faces are ambiguous frontal/profile features (e.g. long nose, tucked jaw), pale/dark skin colors and left/right asymmetries. When such a network is tested using adversarial testing, its accuracy plummets compared to random testing or testing on a real-world test set such as LFW. We show evidence that these adversarial examples are not isolated, but part of connected spaces of adversarial examples in the manifold of semantically plausible images. We also show that network loss landscapes can vary significantly depending on the network architecture and loss used, even though the training dataset is fixed.
Even so, adversarial testing finds adversarial samples for all networks effectively. We will investigate this phenomenon in future work. Finally, we have an in-depth discussion of the limitations and potential negative impact of our work in the supplementary material. #### Acknowledgments This work was supported in part by grants ONR N00014-21-1-2812 and NIH R01 EY029700 to Alan Yuille and a gift grant from Open Philanthropy to Cihang Xie. ## References * [1] Michael A Alcorn, Qi Li, Zhitao Gong, Chengfei Wang, Long Mai, Wei-Shinn Ku, and Anh Nguyen. Strike (with) a pose: Neural networks are easily fooled by strange poses of familiar objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4845–4854, 2019. * [2] OpenAI: Marcin Andrychowicz, Bowen Baker, Maciek Chociej, Rafal Jozefowicz, Bob McGrew, Jakub Pachocki, Arthur Petron, Matthias Plappert, Glenn Powell, Alex Ray, et al. Learning dexterous in-hand manipulation. The International Journal of Robotics Research, 39(1):3–20, 2020\. * [3] Sara Beery, Yang Liu, Dan Morris, Jim Piavis, Ashish Kapoor, Neel Joshi, Markus Meister, and Pietro Perona. Synthetic examples improve generalization for rare classes. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), March 2020. * [4] Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. Machine learning, 79(1):151–175, 2010. * [5] Joy Buolamwini and Timnit Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency, pages 77–91, 2018. * [6] Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39–57. IEEE, 2017. * [7] Yi-Hsin Chen, Wei-Yu Chen, Yu-Ting Chen, Bo-Cheng Tsai, Yu-Chiang Frank Wang, and Min Sun. No more discrimination: Cross city adaptation of road scene segmenters. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017. * [8] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4690–4699, 2019. * [9] Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. Carla: An open urban driving simulator. In Conference on robot learning, pages 1–16. PMLR, 2017. * [10] Yao Feng, Haiwen Feng, Michael J. Black, and Timo Bolkart. Learning an animatable detailed 3D face model from in-the-wild images. ACM Transactions on Graphics (ToG), Proc. SIGGRAPH, 40(4):88:1–88:13, Aug. 2021. * [11] Guy Gafni, Justus Thies, Michael Zollhöfer, and Matthias Nießner. Dynamic neural radiance fields for monocular 4d facial avatar reconstruction. arXiv preprint arXiv:2012.03065, 2020. * [12] Adrien Gaidon, Qiao Wang, Yohann Cabon, and Eleonora Vig. Virtual worlds as proxy for multi-object tracking analysis. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4340–4349, 2016. * [13] Yaroslav Ganin, Tejas Kulkarni, Igor Babuschkin, S. M. Ali Eslami, and Oriol Vinyals. Synthesizing programs for images using reinforced adversarial learning. In ICML, 2018. * [14] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. 
Domain-adversarial training of neural networks. J. Mach. Learn. Res., 17(1):2096–2030, Jan. 2016. * [15] R. V. Garcia, L. Wandzik, L. Grabner, and J. Krueger. The harms of demographic bias in deep face recognition research. In 2019 International Conference on Biometrics (ICB), pages 1–6, 2019. * [16] Baris Gecer, Binod Bhattarai, Josef Kittler, and Tae-Kyun Kim. Semi-supervised adversarial learning to generate photorealistic face images of new identities from 3d morphable model. In Proceedings of the European conference on computer vision (ECCV), pages 217–234, 2018. * [17] Baris Gecer, Alexandros Lattas, Stylianos Ploumpis, Jiankang Deng, Athanasios Papaioannou, Stylianos Moschoglou, and Stefanos Zafeiriou. Synthesizing coupled 3d face modalities by trunk-branch generative adversarial networks. In European conference on computer vision, pages 415–433. Springer, 2020. * [18] Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In Proc. ICLR, 2015. * [19] Patrick J Grother, Mei L Ngan, and Kayee K Hanaoka. Face recognition vendor test part 3: Demographic effects. 2019\. * [20] Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei Efros, and Trevor Darrell. Cycada: Cycle-consistent adversarial domain adaptation. In International conference on machine learning, pages 1989–1998. PMLR, 2018. * [21] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017. * [22] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7132–7141, 2018. * [23] Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst, October 2007\. * [24] Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2901–2910, 2017. * [25] Amlan Kar, Aayush Prakash, Ming-Yu Liu, Eric Cameracci, Justin Yuan, Matt Rusiniak, David Acuna, Antonio Torralba, and Sanja Fidler. Meta-sim: Learning to generate synthetic datasets. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019. * [26] Adam Kortylewski, Bernhard Egger, Andreas Schneider, Thomas Gerig, Andreas Morel-Forster, and Thomas Vetter. Empirically analyzing the effect of dataset biases on deep face recognition systems. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 2093–2102, 2018. * [27] A. Kortylewski, B. Egger, A. Schneider, T. Gerig, A. Morel-Forster, and T. Vetter. Analyzing and reducing the damage of dataset bias to face recognition with synthetic data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 2261–2268, 2019. * [28] Adam Kortylewski, Andreas Schneider, Thomas Gerig, Bernhard Egger, Andreas Morel-Forster, and Thomas Vetter. Training deep face recognition systems with synthetic data. arXiv preprint arXiv:1802.05891, 2018. * [29] Tianye Li, Timo Bolkart, Michael. J. 
Black, Hao Li, and Javier Romero. Learning a model of facial shape and expression from 4D scans. ACM Transactions on Graphics, (Proc. SIGGRAPH Asia), 36(6):194:1–194:17, 2017. * [30] Tianye Li, Mira Slavcheva, Michael Zollhoefer, Simon Green, Christoph Lassner, Changil Kim, Tanner Schmidt, Steven Lovegrove, Michael Goesele, and Zhaoyang Lv. Neural 3d video synthesis. arXiv preprint arXiv:2103.02597, 2021. * [31] Henry Lieberman. How to color in a coloring book. SIGGRAPH Comput. Graph., 12(3):111–116, Aug. 1978. * [32] Gilles Louppe and Kyle Cranmer. Adversarial variational optimization of non-differentiable simulators. arXiv preprint arXiv:1707.07113, 2017. * [33] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018. * [34] Richard T Marriott, Sami Romdhani, and Liming Chen. A 3d gan for improved large-pose facial recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13445–13455, 2021. * [35] Nikolaus Mayer, Eddy Ilg, Philip Hausser, Philipp Fischer, Daniel Cremers, Alexey Dosovitskiy, and Thomas Brox. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4040–4048, 2016. * [36] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In European Conference on Computer Vision, pages 405–421. Springer, 2020. * [37] Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference on computer and communications security, pages 506–519. ACM, 2017. * [38] Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed A. A. Osman, Dimitrios Tzionas, and Michael J. Black. Expressive body capture: 3D hands, face, and body from a single image. In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages 10975–10985, 2019. * [39] Pascal Paysan, Reinhard Knothe, Brian Amberg, Sami Romdhani, and Thomas Vetter. A 3d face model for pose and illumination invariant face recognition. In 2009 sixth IEEE international conference on advanced video and signal based surveillance, pages 296–301. Ieee, 2009. * [40] Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1406–1415, 2019. * [41] Nicolas Pinto, James J DiCarlo, and David D Cox. Establishing good benchmarks and baselines for face recognition. In Workshop on Faces In’Real-Life’Images: Detection, Alignment, and Recognition, 2008. * [42] Nikhila Ravi, Jeremy Reizenstein, David Novotny, Taylor Gordon, Wan-Yen Lo, Justin Johnson, and Georgia Gkioxari. Accelerating 3d deep learning with pytorch3d. arXiv:2007.08501, 2020. * [43] Stephan R Richter, Vibhav Vineet, Stefan Roth, and Vladlen Koltun. Playing for data: Ground truth from computer games. In European Conference on Computer Vision, pages 102–118. Springer, 2016. * [44] German Ros, Laura Sellart, Joanna Materzynska, David Vazquez, and Antonio M Lopez. 
The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3234–3243, 2016. * [45] Nataniel Ruiz, Sarah Adel Bargal, and Stan Sclaroff. Disrupting deepfakes: Adversarial attacks against conditional image translation networks and facial manipulation systems. In European Conference on Computer Vision, pages 236–251. Springer, 2020. * [46] Nataniel Ruiz, Sarah Adel Bargal, and Stan Sclaroff. Protecting against image translation deepfakes by leaking universal perturbations from black-box neural networks. arXiv preprint arXiv:2006.06493, 2020. * [47] Nataniel Ruiz, Eunji Chong, and James M. Rehg. Fine-grained head pose estimation without keypoints. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2018. * [48] Nataniel Ruiz, Samuel Schulter, and Manmohan Chandraker. Learning to simulate. In International Conference on Learning Representations, 2018. * [49] Nataniel Ruiz, Barry-John Theobald, Anurag Ranjan, Ahmed Hussein Abdelaziz, and Nicholas Apostoloff. Morphgan: One-shot face synthesis gan for detecting recognition bias. In 32nd British Machine Vision Conference 2021, BMVC 2021, Virtual Event, UK, 2021. * [50] Hadi Salman, Andrew Ilyas, Logan Engstrom, Sai Vemprala, Aleksander Madry, and Ashish Kapoor. Unadversarial examples: Designing objects for robust vision. arXiv preprint arXiv:2012.12235, 2020. * [51] Hadi Salman, Jerry Li, Ilya P Razenshteyn, Pengchuan Zhang, Huan Zhang, Sébastien Bubeck, and Greg Yang. Provably robust deep learning via adversarially trained smoothed classifiers. In NeurIPS, 2019. * [52] Shital Shah, Debadeepta Dey, Chris Lovett, and Ashish Kapoor. Airsim: High-fidelity visual and physical simulation for autonomous vehicles. In Field and service robotics, pages 621–635. Springer, 2018. * [53] Michelle Shu, Chenxi Liu, Weichao Qiu, and Alan Yuille. Identifying model weakness with adversarial examiner. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 11998–12006, 2020. * [54] Alvy Ray Smith. Tint fill. SIGGRAPH Comput. Graph., 13(2):276–283, Aug. 1979. * [55] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In In Proc. ICLR, 2014. * [56] Yi-Hsuan Tsai, Wei-Chih Hung, Samuel Schulter, Kihyuk Sohn, Ming-Hsuan Yang, and Manmohan Chandraker. Learning to adapt structured output space for semantic segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. * [57] Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7167–7176, 2017. * [58] Hao Wang, Yitong Wang, Zheng Zhou, Xing Ji, Dihong Gong, Jingchao Zhou, Zhifeng Li, and Wei Liu. Cosface: Large margin cosine loss for deep face recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5265–5274, 2018. * [59] Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992. * [60] Sanghyun Woo, Jongchan Park, Joon-Young Lee, and In So Kweon. Cbam: Convolutional block attention module. In Proceedings of the European conference on computer vision (ECCV), pages 3–19, 2018. 
* [61] Dong Yi, Zhen Lei, Shengcai Liao, and Stan Z Li. Learning face representation from scratch. arXiv preprint arXiv:1411.7923, 2014. * [62] Alan L Yuille and Chenxi Liu. Deep nets: What have they ever done for vision? International Journal of Computer Vision, 129(3):781–802, 2021\. * [63] Xiaohui Zeng, Chenxi Liu, Yu-Siang Wang, Weichao Qiu, Lingxi Xie, Yu-Wing Tai, Chi-Keung Tang, and Alan L Yuille. Adversarial attacks beyond the image space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4302–4311, 2019. ## Supplementary Material: Simulated Adversarial Testing of Face Recognition Models ## Simulated Adversarial Testing Exploration In order to explore samples generated by simulated adversarial testing and other simulated testing techniques, we are able to project the shape and texture components of our samples onto a plane of two components. We do so for the first two shape components (roughly controlling for height and width of the face). We show the results for adversarial testing, random optimization, Gaussian random testing and uniform random testing in Figure 8. First we observe the relative abundance of adversarial examples found using our method compared to other methods. Next, we can observe that adversarial testing not only finds adversarial examples, but that these examples are also very varied. We note that most unsuccessful runs of adversarial testing occur when the algorithm converges to local maxima that are located at the edges of the feasibility domain of $[-3,3]$. All plots are drawn from samples tested on an ArcFace IR-SE-CBAM-ResNet50. We now show two plots for adversarial testing on this network where we project the samples on the plane generated by the 1st and 2nd shape components, and the 3rd and 4th shape components. Here we discover an interesting phenomenon. In the 1st-2nd shape component plane, samples are varied and seem roughly uniformly distributed in the space. This means that although the 1st and 2nd shape component clearly have a role in finding adversarial samples, adversarial samples can be found with many different 1st and 2nd shape component values. The second plot shows that for the 3rd and 4th shape components, our adversarial sampling method clearly favors/disfavors some pockets in the space. For example we find that samples with average 3rd and 4th shape components tend not to be found by adversarial testing. We can test the hypothesis of whether these values are non adversarial by higher- dimensional grid search, although this would be time consuming. Another idea is to limit the feasibility domain to these average 3rd and 4th shape components, and run many instances of adversarial testing. If few or no adversarial samples are found then there is a chance that this is a space that is highly non-adversarial. We believe these types of projections can give a strong intuition over what features affect network performance. In Figure 10 and 11 we show the first four shape component variations for the FLAME model in frontal and profile poses. We can see that the 3rd and 4th shape component variations, while not overly noticeable in the frontal pose, introduce features that are only visible in the profile pose. For example when the 4th shape component is varied in the positive direction it tucks the subject’s jaw in. 
This introduces a frontal/profile ambiguity, and the face verification network has a harder time correctly verifying pairs of these faces since it takes as input both the frontal and profile face images. Similarly, the 3rd shape component introduces a protuberance of the subject’s head when varied in the positive direction, which is much more apparent in the profile image. This is congruent to the adversarial faces that we obtain in Figure 3 of the main paper that show frontal/profile ambiguous features. Figure 8: Comparison of adversarial/non-adversarial samples for different testing methods, projected onto the 1st and 2nd shape component plane. Figure 9: Comparison of adversarial/non-adversarial samples for simulated adversarial testing, projected onto the 1st/2nd and 3rd/4th shape component planes. Figure 10: Shape variations along different shape components for the FLAME model in a frontal pose. Figure 11: Shape variations along different shape components for the FLAME model in a profile pose. ## Limitations and Future Work Even though our adversarial testing algorithm using reinforcement learning is much more effective than random optimization and other sampling methods, it does not have a perfect rate of finding adversarial examples. It sometimes converges to local maxima that are hard to classify but nonetheless non- adversarial. We believe that one of the weaknesses is that it can get stuck in the boundary limits for the parameters that are being varied, and it has a hard time getting out of that space. This is especially a large problem in higher dimensional spaces. We will investigate this weakness in future work. While we do exhibit for the first time the fact that two face recognition networks trained on the same data with different architectures and losses have vastly different loss landscapes when face shape and texture are varied, and thus are learning different things, we have not yet found suitable hypotheses that are verified by data that explain why this is the case. We believe this phenomenon requires more in-depth research and are working on verifying some of our hypotheses for our next work. One of the major limitations of our work is the fact that we find adversarial examples that are simulated. We believe that the most challenging aspect of this research direction is verification of simulated adversarial samples using real data. This aspect is so challenging that it has been neglected by prior research. Nevertheless, many works have been accepted to top conferences without treating this specific problem due to the potential they have in furthering our knowledge about robustness issues of neural networks with realistic variation of stimuli [1, 26, 53, 49, 63]. Even the best simulators that are currently available to the computer vision research community exhibit a substantial domain gap with real data [52, 38, 9, 29]. For this reason, it is difficult to verify the transfer capability of certain features. Of the different attributes that were available to us, head pose is one of the most reliable in terms of transfer due to several reasons: the ability to easily extract it from real images using a head pose estimation network, the 1-to-1 correspondence between head pose in simulated and real situations, and the low-dimensional nature of the attribute that can be more easily analyzed and plotted in a curve as shown in Figure 2. 
For all of these reasons, we present the first link between adversarial samples in the simulated and real worlds using this attribute. In the near future, with more advanced simulators, we expect work to be able to confirm many more strong links between simulated and real samples. Just as [1] found that camera pose influences the predictions of a neural network in simulated data, we show that pose, shape and texture jointly influence predictions of a face recognition network, but we go one step further and show that pose similarly impacts performance in the real and simulated world. Finally, in principle, it is almost impossible to find a real face that is arbitrarily close to any face we simulate. This is simply due to the fact that shape and texture are very high-dimensional, such that a point has very few close neighbors given a fixed-size real dataset. To find a very close sample we would have to collect a dataset that is extremely large.

## Societal Impact

The plausible negative social consequences of this work are tightly linked with the overall negative consequences of facial analysis systems. An approach that improves testing for face recognition systems such as the one we propose can be used to improve recognition rates on minorities, persecuted groups and oppressed individuals. This is a broader problem affecting any work that can potentially impact facial analysis, and we argue that our work has an asymmetric potential for applications that have a positive social impact. Given that researchers have shown that there exist gender and racial biases in beneficial face analysis systems [5, 15, 19], by better testing such systems these biases can be diagnosed and mitigated, meaning that minorities can more readily benefit from these technologies. Another important point is that a major bottleneck for our work is a simulator that is expressive and realistic. Bias and lack of expressiveness of a simulator might mean that bias in the face recognition network is not correctly detected. We urge developers of future simulators to take into account the bias of their training population in order to increase the expressiveness of their simulator and decrease the bias. We also urge them to understand the power of such a tool for robustness and bias analysis and to distribute the model responsibly, similar to the FLAME head model [29] team.
DCSAU-Net: A Deeper and More Compact Split-Attention U-Net for Medical Image Segmentation

Qing Xu, Zhicheng Ma, Na HE, Wenting Duan

Deep learning architectures based on convolutional neural networks (CNNs) have achieved outstanding success in the field of computer vision. U-Net, a CNN-based encoder-decoder architecture, has made a great breakthrough in biomedical image segmentation and has been applied in a wide range of practical scenarios. However, the identical design of every downsampling layer in the encoder part and the simply stacked convolutions do not allow U-Net to extract sufficient feature information from different depths. The increasing complexity of medical images brings new challenges to the existing methods. In this paper, we propose a deeper and more compact split-attention U-shaped network (DCSAU-Net), which efficiently utilises low-level and high-level semantic information based on two novel frameworks: primary feature conservation and the compact split-attention block. We evaluate the proposed model on the CVC-ClinicDB, 2018 Data Science Bowl, ISIC-2018 and SegPC-2021 datasets. As a result, DCSAU-Net displays better performance than other state-of-the-art (SOTA) methods in terms of the mean Intersection over Union (mIoU) and F1-score. More significantly, the proposed model demonstrates excellent segmentation performance on challenging images. The code for our work and more technical details can be found at https://github.com/xq141839/DCSAU-Net.

Keywords: Medical image segmentation; Multi-scale fusion attention; Depthwise separable convolution; Computer-aided diagnosis

§ INTRODUCTION

Common types of cancer, such as colon cancer, multiple myeloma and melanoma, are still among the major causes of human suffering and death globally. Medical image analysis plays an essential role in diagnosing and treating these diseases [Ma et al., 2021]. For example, the numerous cells in a microscopy image can illustrate the stage of a disease, assist in discriminating tumour types, support insight into cellular and molecular genetic mechanisms, and provide valuable information for many other applications, such as the study of cancer and chronic obstructive pulmonary disease [Coates et al., 2015]. Traditionally, medical images are analysed by pathologists manually. In other words, the result of diagnosis is usually dominated by the experience of medical experts, which can be time-consuming, subjective, and error-prone [Chen et al., 2019]. Computer-aided diagnosis (CAD) has received significant attention from both pathological researchers and clinical practice, and it mainly depends on the result of medical image segmentation [He et al., 2021]. Different from classification and detection tasks, the target of biomedical image segmentation is to separate a specified object from the background in an image, which can provide patients with more detailed disease analysis [Zhou et al., 2019]. Existing classic segmentation algorithms are based on edge detection, thresholding, morphology, distances between objects and pixel energy, such as Otsu thresholding [Otsu, 1979], Snake [Kass et al., 1988] and Fuzzy algorithms [Tizhoosh, 2005]. Each algorithm has its own parameters to accommodate different requirements. However, these algorithms often show limited performance on the generalization of complex datasets [Riccio et al., 2018].
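As an illustration of such a classical, parameter-light baseline (not part of DCSAU-Net itself), Otsu thresholding can be applied to a grayscale image in a few lines; the sketch below uses scikit-image on a synthetic image standing in for a microscopy slide.

```python
# Minimal sketch of a classical thresholding baseline (Otsu), shown only to
# contrast with learned segmentation models; the "image" here is synthetic.
import numpy as np
from skimage import filters

rng = np.random.default_rng(0)
image = rng.normal(0.2, 0.05, size=(128, 128))   # dim, noisy background
image[40:80, 40:80] += 0.5                       # bright square standing in for a cell

threshold = filters.threshold_otsu(image)        # single global threshold
mask = image > threshold                         # binary foreground/background mask
print(f"Otsu threshold: {threshold:.3f}, foreground fraction: {mask.mean():.3f}")
```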
The segmentation performance of these methods is also affected by image acquisition quality. For example, some pathological images may be blurred or contain noise. Other conditions can have negative influences too, including uneven illumination, low image contrast between foreground and background, and complex tissue backgrounds [Feng et al., 2020]. Therefore, it is essential to construct a powerful and generic model that achieves adequate robustness on challenging images and works for different biomedical applications.

CNN-based encoder-decoder architectures have outperformed traditional image processing methods in various medical image segmentation tasks [Litjens et al., 2017]. The success of these models is largely due to the skip connection strategy that combines low-level semantic information with high-level semantic information to generate the final mask [Chen et al., 2019]. However, many improved architectures only focus on optimising algorithms for in-depth feature extraction, which ignores the loss of high-resolution information in the head of the encoder. Sufficient feature maps extracted from this layer can help compensate for the spatial information lost during the pooling operations [Wang et al., 2022].

In this paper, we propose a novel encoder-decoder architecture for medical image segmentation, called DCSAU-Net. In the encoder part, our model first adopts a novel primary feature conservation (PFC) strategy that reduces the number of parameters and the amount of computation, and integrates long-range spatial information in the shallow layer of the network. The rich primary features obtained from this layer are delivered to our newly constructed module: the compact split-attention (CSA) block. The CSA module strengthens the feature representation of different channels using a multi-path attention structure. Each path contains a different number of convolutions so that the CSA module can output mixed feature maps with different receptive field scales. Both new frameworks are designed in a residual style in order to alleviate the gradient vanishing problem as the number of layers increases. For the decoder, encoded features in every downsampling layer are concatenated with the corresponding upsampled features by skip connections. We apply the same CSA block to perform efficient feature extraction from the combined features. The proposed DCSAU-Net is easy to train without any extra support samples (e.g. an initialised mask or edge map). The main contributions of this work can be summarized as follows:

* A novel mechanism, PFC, is embedded in our DCSAU-Net to capture sufficient primary features from the input images. Compared with other common designs, PFC not only improves computational efficiency but also extends the receptive field of the network.

* To enhance the multi-scale representation of DCSAU-Net, we build a CSA block that adopts multi-branch feature groups with an attention mechanism. Each group comprises a different number of convolutions in order to output feature maps that combine different receptive field sizes.

* Experimental analysis is conducted with four different medical image segmentation datasets, including 2018 Data Science Bowl [Caicedo et al., 2019], ISIC-2018 Lesion Boundary Segmentation [Codella et al., 2018, Tschandl et al., 2018], CVC-ClinicDB [Bernal et al., 2015], and a multi-class segmentation dataset: SegPC-2021 [Gupta et al., 2021].
Evaluation results demonstrate that our proposed DCSAU-Net shows better performance than other SOTA segmentation methods in terms of standard computer vision metrics – mIoU and F1-score – suggesting that it can serve as a new SOTA method for medical image segmentation.

§ RELATED WORK

§.§ Medical Image Segmentation

Deep learning methods based on convolutional neural networks (CNNs) have shown outstanding performance in medical image segmentation. U-Net, proposed by Ronneberger et al. [Ronneberger et al., 2015], is comprised of two components: an encoder and a decoder. Upsampling operators are added in the decoder and are used to recover the resolution of the input images. Also, features extracted from the encoder are combined with the upsampled results to achieve precise localisation. U-Net shows favourable segmentation performance on different kinds of medical images. Inspired by this architecture, Zhou et al. [Zhou et al., 2018] presented a nested U-Net (Unet++) for medical image segmentation. To reduce the semantic information loss in the feature fusion between encoder and decoder, a series of nested and skip pathways are added to the model. Huang et al. [Huang et al., 2020] designed another full-scale skip connection method that combines low-resolution and high-resolution information at different scales. Jha et al. [Jha et al., 2020] constructed a DoubleU-Net network that organises two U-Net architectures sequentially. In the encoder part, Atrous Spatial Pyramid Pooling (ASPP) is constructed at the end of each downsampling layer to obtain contextual information. The evaluation results demonstrate that DoubleU-Net performs well in polyp, lesion boundary and nuclei segmentation. The gradient vanishing issue has been observed when trying to converge deeper networks. To address this problem, He et al. [He et al., 2016] introduced a deep residual architecture (ResNet) that has been widely applied in different segmentation networks. For medical image segmentation, Jha et al. [Jha et al., 2019] constructed an advanced U-shaped architecture for polyp segmentation, called ResUNet++. This model combines residual blocks, squeeze-and-excitation modules, ASPP, and an attention mechanism.

§.§ Attention Mechanisms

In recent years, attention mechanisms have rapidly gained traction in computer vision. SENet [Hu et al., 2018], a form of channel attention, has been widely applied in medical image segmentation [Kaul et al., 2019, Liu et al., 2022]. It uses a squeeze module, with global average pooling, to collect global spatial information, and an excitation module to obtain the relationship between the channels of the feature maps. Spatial attention can be referred to as an adaptive spatial location selection mechanism. For instance, Oktay et al. [Oktay et al., 2018] introduced an attention U-Net using a novel bottom-up attention gate, which can precisely focus on a specific region that highlights useful features without extra computational costs and model parameters. Furthermore, Transformers [Vaswani et al., 2017] have received a lot of attention recently because of their success in natural language processing (NLP). Dosovitskiy et al. [Yuan et al., 2019] developed a vision transformer (ViT) architecture for computer vision tasks and demonstrated performance comparable to CNNs. Also, a series of upgraded ViT variants has been applied in a wider range of fields. Xu et al. [Xu et al., 2021] proposed LeViT-UNet to collect distant spatial information from features. In addition, Transformers have demonstrated strong performance when incorporated with CNNs. Chen et al.
[Chen et al., 2021] provided a novel TransUNet that uses a CNN as the first half of the encoder to obtain image patches and uses the Transformer model to extract global contexts. The final mixed features in the decoder enable more accurate localisation.

§.§ Depthwise Separable Convolution

Depthwise separable convolution is an efficient neural network architecture proposed by Howard et al. [Howard et al., 2017]. Each convolution filter in this architecture is responsible for one input channel. Compared with a standard convolution, depthwise convolution can achieve the same effect while requiring fewer parameters and less computation. However, it only extracts features from each input channel independently. To combine the information between the channels and create new feature maps, a 1x1 convolution, called pointwise convolution, follows the depthwise convolution. The resulting MobileNets model was established and is considered a new backbone in deep learning. For the image classification task, Chollet [Chollet, 2017] used depthwise separable convolution to construct the Xception model, which outperformed previous SOTA methods and showed lower complexity. However, Sandler et al. [Sandler et al., 2018] observed that depthwise convolution performs poorly on low-channel feature maps. To tackle this issue, they proposed a new MobileNetV2 model that adds a 1x1 convolution in front of the depthwise convolution in order to increase the dimension of the features in advance. Compared with MobileNets, MobileNetV2 does not raise the number of parameters but decreases the degradation in performance. In medical image segmentation, Qi et al. [Qi et al., 2019] introduced an X-net model for 3D brain stroke lesion segmentation. A novel feature similarity module (FSM) was created to capture distant spatial information in feature maps using depthwise separable convolution. The experimental results demonstrate that the X-net model requires only half the parameters of other SOTA models while achieving higher performance.

Comparison of our PFC strategy with the U-Net [Ronneberger et al., 2015], stem block [Chen et al., 2019] and ResUNet++ [Jha et al., 2019] designs used to extract low-level semantic information from the input images.

§ METHOD

§.§ Primary Feature Conservation

For most medical image segmentation networks, the convolutions in the first downsampling block are used to extract low-level semantic information from the images. The U-Net architecture [Ronneberger et al., 2015] in Fig.<ref> (a) has been widely used in different models [Jha et al., 2020, Oktay et al., 2018]. The stem block [Chen et al., 2019] in Fig.<ref> (b) is usually designed to obtain the same receptive field as a 7x7 convolution while reducing the number of parameters. The first feature-scale downsampling layer in ResUNet++ [Jha et al., 2019] adds a skip connection to mitigate the potential impact of gradient vanishing, as shown in Fig.<ref> (c). Although stacking more convolutional blocks can extend the receptive field of the network, the number of parameters and the amount of computation increase rapidly, and the stability of the model may suffer. Also, recent research suggests that the effective receptive field can decrease to some extent when the number of stacked 3×3 convolutions keeps increasing [Ding et al., 2022]. To address this issue, we introduce a new primary feature conservation (PFC) strategy in the first downsampling block, which is shown in Fig.<ref> (d).
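For concreteness, a minimal PyTorch-style sketch of the PFC idea in Fig.<ref> (d) is given below; the 7x7 depthwise and 1x1 pointwise convolutions it relies on are described in detail in the next paragraph. The channel sizes, the placement of the residual connection and the normalisation/activation ordering are illustrative assumptions rather than the exact DCSAU-Net configuration.

```python
import torch
import torch.nn as nn

class PFCSketch(nn.Module):
    """Hedged sketch of the Primary Feature Conservation (PFC) idea: a 3x3
    convolution raises the channel count, then a 7x7 depthwise convolution
    followed by a 1x1 pointwise convolution merges long-range spatial and
    cross-channel information, with a residual connection around the
    separable part."""

    def __init__(self, in_channels: int = 3, out_channels: int = 64, kernel_size: int = 7):
        super().__init__()
        self.head = nn.Sequential(               # raises channel dimension first
            nn.Conv2d(in_channels, out_channels, 3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )
        self.depthwise = nn.Sequential(          # large-kernel depthwise convolution
            nn.Conv2d(out_channels, out_channels, kernel_size,
                      padding=kernel_size // 2, groups=out_channels),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )
        self.pointwise = nn.Sequential(          # 1x1 pointwise convolution mixes channels
            nn.Conv2d(out_channels, out_channels, 1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.head(x)
        return x + self.pointwise(self.depthwise(x))   # residual over the separable conv

features = PFCSketch()(torch.randn(1, 3, 256, 256))    # -> shape (1, 64, 256, 256)
```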
The main refinement of our module adopts depthwise separable convolution, consisting of a 7x7 depthwise convolution followed by a 1x1 pointwise convolution. As depthwise separable convolution decreases the computational cost and the number of parameters compared to a standard convolution [Howard et al., 2017], we have the opportunity to apply large kernel sizes for the depthwise convolution in order to merge distant spatial information and preserve primary features as much as possible in the low-level semantic layer. The 1x1 pointwise convolution is used to combine channel information. Also, a 3x3 convolution is added to the head of this module to downsample the input image and raise the channel dimension, because depthwise separable convolution shows a degradation of performance on low-dimensional features [Sandler et al., 2018]. Every convolution is followed by a ReLU activation and BatchNorm. To avoid gradient vanishing, the PFC block is constructed in a residual style. To this end, our proposed PFC module can improve performance without increasing the number of parameters or the computational cost. In addition, the reason for using a depthwise convolution with a 7x7 kernel size will be explained in section <ref>.

§.§ Compact Split-Attention block

VGGNet [Simonyan and Zisserman, 2014] and typical residual structures [He et al., 2016] have been applied in many previous semantic segmentation networks, such as DoubleUnet [Jha et al., 2020] and ResUnet [Zhang et al., 2018]. However, convolutional layers in VGGNet are stacked directly, which means every feature layer has a comparatively constant receptive field [Gao et al., 2019]. In medical image segmentation, different lesions may have different sizes. A sufficient representation of multi-scale features is beneficial for the model to perceive the data features. Recently, various models that learn representations via cross-channel features have been proposed, such as ResNeSt [Zhang et al., 2022]. Inspired by these methods, we develop a new compact split-attention (CSA) architecture.

The framework of the CSA block

An overview of the CSA block is illustrated in Fig. <ref>. ResNeSt utilises a large number of channel-split groups for feature extraction, which is effective for general computer vision tasks with adequate data but costs a massive number of parameters. Furthermore, each group of this model adopts the same convolutional operations and therefore receives an equal receptive field size. To optimise the structure and make it more suitable for medical image segmentation, our proposed block maintains two feature groups ($N=2$) to reduce the number of parameters of the entire network. The two groups, split from the input features, are fed into different transformations $F_i$. Both groups involve one 1×1 convolution followed by one 3×3 convolution. To improve the representation across channels, the output feature maps of the second group ($F_2$) are combined with the result of the first group ($F_1$) and passed through another 3×3 convolution, which receives semantic information from both split groups and expands the receptive field of the network. Therefore, the CSA block presents a stronger ability to extract both global and local information from feature maps. Mathematically, the fused feature maps can be defined as:

\begin{equation}
\hat{U}=\sum_{i=1}^{N}F_i(X_i),\ \ \hat{U}\in\mathbb{R}^{H\times W\times C}
\end{equation}

where $H$, $W$ and $C$ are the height, width and number of channels of the output feature maps.
The channel-wise statistics generated by global average pooling collect global spatial information. They are produced by compressing the transformation outputs along the spatial dimensions, with the $c$-th component calculated by:

\begin{equation}
S_c=\frac{1}{H\times W}\sum_{\alpha=1}^{H}\sum_{\beta=1}^{W}\hat{U}_c\left(\alpha,\beta\right),\ \ S\in\mathbb{R}^C
\end{equation}

Channel-wise soft attention is then used to aggregate a weighted fusion over the cardinal group representations, where a split-weighted combination can capture crucial information in the feature maps. The $c$-th channel of the fused feature maps is calculated as:

\begin{equation}
V_c=\sum_{i=1}^{N}a_i\left(c\right)F_i(X_i)_c
\end{equation}

where $a_i$ is a (soft) assignment weight given by:

\begin{equation}
a_i\left(c\right)=\frac{\exp(\mathcal{G}_i^c(S))}{\sum_{j=1}^N \exp(\mathcal{G}_j^c(S))}
\end{equation}

Here $\mathcal{G}_i^c$ indicates the weight of the global spatial information $S$ for the $c$-th channel and is quantified using two 1x1 convolutions with BatchNorm and ReLU activation. As a result, the full CSA block is designed as a standard residual architecture in which the output $Y$ is calculated using a skip connection, $Y = V + X$, when the output feature maps have the same shape as the input feature maps. Otherwise, an extra transformation $T$ is applied on the skip connection to obtain the same shape. For instance, $T$ can be a strided convolution or a combination of convolution and pooling.

§.§ DCSAU-Net Architecture

For medical image segmentation, we establish a novel model using the proposed PFC strategy and CSA block following the encoder-decoder architecture; it is referred to as DCSAU-Net and shown in Fig. <ref>. The encoder of DCSAU-Net first uses the PFC strategy to extract low-level semantic information from the input images. The depthwise separable convolution with a large 7x7 kernel size is able to broaden the receptive field of the network and preserve primary features without increasing the number of parameters. The CSA block applies multi-path feature groups with different numbers of convolutions and an attention mechanism, which incorporates channel information across different receptive field scales and highlights meaningful semantic features. Each block is followed by a 2×2 max pooling with stride 2 to perform a downsampling operation. Every decoder sub-network starts with an upsampling operator to recover the original size of the input image step by step. The skip connections are used to concatenate these feature maps with the feature maps from the corresponding encoder layer, which mixes low-level and high-level semantic information to generate a precise mask. The skip connections are followed by CSA blocks to alleviate the gradient vanishing problem and capture efficient features. Finally, a 1 × 1 convolution followed by a sigmoid or softmax layer is used to output the binary or multi-class segmentation mask.

The presentation of DCSAU-Net with the PFC strategy and CSA block

§ EXPERIMENTS AND RESULTS

§.§ Datasets

To evaluate the effectiveness of DCSAU-Net, we test it on four publicly available medical image datasets.

* CVC-ClinicDB [Bernal et al., 2015] is a frequently used dataset for the polyp segmentation task. It is also the training database for the MICCAI 2015 Sub-Challenge on Automatic Polyp Detection.

* The second dataset used in this study is from the 2018 Data Science Bowl challenge [Caicedo et al., 2019] and is used for the nuclei segmentation task. The dataset labels every cell in microscopic images.
* Another dataset used in our experiment is from a sub-task of the ISIC-2018 challenge [Codella et al., 2018, Tschandl et al., 2018]. The goal of training on this dataset is to develop a model for lesion boundary segmentation.

* In order to assess the performance of the proposed architecture on a multi-class segmentation task, we include the SegPC-2021 dataset [Gupta et al., 2021] in our experiment. Each image in the dataset contains two different kinds of myeloma plasma cells.

More details about the data splits are presented in Table <ref>. All of these datasets are related to clinical diagnosis, so their segmentation results can be significant for patients.

Details of the medical segmentation datasets used in our experiments.

| Dataset | Images | Input size | Train | Valid | Test |
|---|---|---|---|---|---|
| CVC-ClinicDB | 612 | 384×288 | 441 | 110 | 61 |
| 2018 Data Science Bowl | 670 | Variable | 483 | 120 | 67 |
| ISIC 2018 | 2594 | Variable | 1868 | 467 | 259 |
| SegPC 2021 | 498 | Variable | 360 | 89 | 49 |

§.§ Evaluation Metrics

Mean intersection over union (mIoU), Accuracy, Recall, Precision and F1-score are standard metrics for medical image segmentation, where mIoU is a common metric used in competitions to compare the performance of different models. For a more exhaustive comparison between DCSAU-Net and other popular models, we calculate each of these metrics in our experiments.

§.§ Data Augmentation

Medical image datasets usually have a limited number of samples available in the training phase because obtaining and annotating images is expensive and time-consuming [Chen et al., 2021]. Therefore, models are prone to overfitting. To mitigate this issue, data augmentation methods are generally used in the training stage to extend the diversity of samples and enhance model generalisation. In our experiment, we randomly apply horizontal flip, rotation and cutout with a probability of 0.25 to the training set of each dataset.

§.§ Implementation Details

All experiments are implemented using the PyTorch 1.10.0 framework on a single NVIDIA V100 Tensor Core GPU, an 8-core CPU and 16GB RAM. We use a common segmentation loss function, Dice loss, and an Adam optimizer with a learning rate of 1e-4 to train all models. The batch size and number of epochs are set to 16 and 200, respectively. During training, we resize the images to 256×256 for the CVC-ClinicDB and 2018 Data Science Bowl datasets. For the ISIC-2018 and SegPC-2021 datasets, the input images are resized to 512×512. Also, we apply ReduceLROnPlateau to adapt the learning rate. All experiments on the four datasets are conducted with the same train, validation, and test splits. In addition, we train the other SOTA models with default parameters; a pretrained ViT model is loaded when training TransUNet and LeViT-UNet, and the rest of the models are trained from scratch.

§.§ Results

In this section, we present quantitative results on the four biomedical image datasets and compare our proposed architecture with other SOTA methods.

§.§.§ Comparison on CVC-ClinicDB Dataset

The quantitative results on the CVC-ClinicDB dataset are shown in Table <ref>. For medical image segmentation tasks, the performance of a network on the mIoU and F1-score metrics usually receives more attention.
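Since mIoU and F1-score drive most of the comparisons below, a generic sketch of how both can be computed from a predicted and a ground-truth binary mask is given here; this is the standard formulation (mIoU additionally averages the IoU over classes or images) and not necessarily the authors' exact implementation.

```python
import numpy as np

def f1_and_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Compute F1-score (Dice) and IoU for two binary masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()        # true positive pixels
    fp = np.logical_and(pred, ~target).sum()       # false positive pixels
    fn = np.logical_and(~pred, target).sum()       # false negative pixels
    f1 = (2 * tp) / (2 * tp + fp + fn + eps)       # equivalent to the Dice coefficient
    iou = tp / (tp + fp + fn + eps)
    return f1, iou

# Toy example with 4x4 masks.
pred = np.array([[1, 1, 0, 0]] * 4)
target = np.array([[1, 0, 0, 0]] * 4)
print(f1_and_iou(pred, target))   # -> (~0.667, ~0.5)
```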
Results on the CVC-ClinicDB Method Accuracy Precision Recall F1-score mIoU U-Net [Ronneberger et al., 2015] 0.984±0.019 0.882±0.195 0.893±0.176 0.872±0.189 0.809±0.213 Unet++ [Zhou et al., 2018] 0.984±0.022 0.919±0.139 0.859±0.197 0.876±0.184 0.811±0.196 Attention-UNet [Oktay et al., 2018] 0.986±0.016 0.904±0.170 0.901±0.185 0.895±0.168 0.835±0.179 ResUNet++ [Jha et al., 2019] 0.982±0.021 0.870±0.191 0.853±0.213 0.854±0.196 0.781±0.213 R2U-Net [Alom et al., 2018] 0.978±0.028 0.880±0.185 0.847±0.223 0.841±0.205 0.765±0.224 DoubleU-Net [Jha et al., 2020] 0.986±0.017 0.892±0.179 0.912±0.197 0.896±0.173 0.836±0.196 UNet3+ [Huang et al., 2020] 0.984±0.022 0.907±0.152 0.885±0.155 0.892±0.171 0.827±0.191 TransUNet [Chen et al., 2021] 0.982±0.209 0.876±0.199 0.873±0.191 0.867±0.188 0.799±0.201 LeViT-UNet [45] 0.980±0.023 0.849±0.241 0.826±0.232 0.828±0.233 0.754±0.244 DCSAU-Net 0.990±0.015 0.917±0.148 0.920±0.143 0.916±0.141 0.861±0.156 From Table 2, DCSAU-Net achieves a F1-score of 0.916 and a mIoU of 0.861, which outperforms DoubleU-Net by 2.0% in terms of F1-score and 2.5% in mIoU. Particularly, our proposed model provides a significant improvement over the two recent transformer-based architectures, where the mIoU of DCSAU-Net is 6.2% and 10.7% higher than TransUNet and LeViT-UNet, and the F1-score of DCSAU-Net is 4.9% and 8.8% higher than these two models respectively. §.§.§ Comparison on SegPC-2021 Dataset For medical image analysis, some of medical images may have multi-class objects that need to be segmented out. To satisfy this demand, we evaluate all models on SegPC-2021 dataset with two different kinds of cells. The quantitative results are provided in Table <ref>. Results on the SegPC 2021 (Multiple Myeloma Plasma Cells Segmentation challenge) Method Accuracy Precision Recall F1-score mIoU U-Net [Ronneberger et al., 2015] 0.939±0.053 0.842±0.142 0.879±0.118 0.855±0.119 0.766±0.148 Unet++ [Zhou et al., 2018] 0.942±0.058 0.855±0.142 0.876±0.141 0.857±0.127 0.770±0.163 Attention-UNet [Oktay et al., 2018] 0.940±0.048 0.845±0.143 0.866±0.125 0.849±0.117 0.757±0.147 ResUNet++ [Jha et al., 2019] 0.934±0.051 0.838±0.118 0.858±0.101 0.840±0.086 0.736±0.121 R2U-Net [Alom et al., 2018] 0.933±0.056 0.852±0.122 0.831±0.136 0.834±0.112 0.744±0.128 DoubleU-Net [Jha et al., 2020] 0.937±0.052 0.833±0.120 0.896±0.084 0.858±0.089 0.763±0.130 UNet3+ [Huang et al., 2020] 0.939±0.051 0.848±0.119 0.866±0.078 0.852±0.083 0.766±0.131 TransUNet [Chen et al., 2021] 0.939±0.047 0.822±0.130 0.869±0.121 0.838±0.113 0.741±0.146 LeViT-UNet [45] 0.939±0.049 0.850±0.120 0.837±0.115 0.837±0.101 0.738±0.137 DCSAU-Net 0.950±0.045 0.871±0.113 0.910±0.067 0.886±0.078 0.806±0.121 Compared with other SOTA models, DCSAU-Net displays the best performance in all defined metrics. Specifically, our proposed method produces a mIoU score of 0.8048 with a more significant rise of 3.6% over Unet++ and 2.8% in F1-score compared to the DoubleU-Net architecture. §.§.§ Comparison on 2018 Data Science Bowl Dataset Nuclei segmentation plays an important role in the biomedical image analysis. We use an open-access dataset from 2018 Data Science Bowl challenge to evaluate the performance of DSAU-Net and other SOTA networks. 
A comparison between the models is presented in Table <ref>.

Results on the 2018 Data Science Bowl
Method | Accuracy | Precision | Recall | F1-score | mIoU
U-Net [Ronneberger et al., 2015] | 0.955±0.047 | 0.872±0.105 | 0.920±0.111 | 0.887±0.090 | 0.808±0.126
Unet++ [Zhou et al., 2018] | 0.955±0.047 | 0.874±0.122 | 0.918±0.141 | 0.886±0.132 | 0.814±0.150
Attention-UNet [Oktay et al., 2018] | 0.953±0.046 | 0.870±0.151 | 0.918±0.136 | 0.887±0.134 | 0.816±0.152
ResUNet++ [Jha et al., 2019] | 0.954±0.048 | 0.900±0.120 | 0.903±0.104 | 0.894±0.104 | 0.822±0.138
R2U-Net [Alom et al., 2018] | 0.956±0.047 | 0.884±0.135 | 0.911±0.140 | 0.891±0.135 | 0.822±0.156
DoubleU-Net [Jha et al., 2020] | 0.955±0.045 | 0.876±0.111 | 0.927±0.131 | 0.889±0.133 | 0.817±0.150
UNet3+ [Huang et al., 2020] | 0.957±0.044 | 0.889±0.149 | 0.909±0.135 | 0.893±0.133 | 0.825±0.150
TransUNet [Chen et al., 2021] | 0.954±0.047 | 0.900±0.101 | 0.906±0.121 | 0.895±0.099 | 0.821±0.136
LeViT-UNet [45] | 0.953±0.049 | 0.889±0.150 | 0.888±0.147 | 0.882±0.136 | 0.808±0.157
DCSAU-Net | 0.959±0.045 | 0.914±0.098 | 0.924±0.077 | 0.914±0.077 | 0.850±0.114

The results show that DCSAU-Net achieves an F1-score of 0.914, which is 1.9% higher than TransUNet, and an mIoU of 0.850, which is 2.5% higher than UNet3+. Overall, our proposed model obtains the highest score on most of the evaluation metrics, including precision and accuracy.

§.§.§ Comparison on ISIC-2018 Dataset

Table <ref> shows the quantitative results on the ISIC-2018 dataset for the lesion boundary segmentation task.

Results on the ISIC 2018 (Skin Lesion Segmentation challenge)
Method | Accuracy | Precision | Recall | F1-score | mIoU
U-Net [Ronneberger et al., 2015] | 0.952±0.079 | 0.883±0.152 | 0.906±0.180 | 0.874±0.158 | 0.802±0.182
Unet++ [Zhou et al., 2018] | 0.954±0.077 | 0.899±0.136 | 0.906±0.155 | 0.883±0.138 | 0.812±0.171
Attention-UNet [Oktay et al., 2018] | 0.954±0.078 | 0.915±0.140 | 0.890±0.171 | 0.883±0.149 | 0.814±0.180
ResUNet++ [Jha et al., 2019] | 0.954±0.082 | 0.905±0.139 | 0.889±0.183 | 0.879±0.153 | 0.810±0.181
R2U-Net [Alom et al., 2018] | 0.945±0.078 | 0.834±0.189 | 0.912±0.163 | 0.848±0.160 | 0.762±0.189
DoubleU-Net [Jha et al., 2020] | 0.953±0.092 | 0.903±0.149 | 0.897±0.186 | 0.879±0.167 | 0.813±0.191
UNet3+ [Huang et al., 2020] | 0.956±0.068 | 0.889±0.151 | 0.916±0.130 | 0.886±0.132 | 0.816±0.165
TransUNet [Chen et al., 2021] | 0.945±0.085 | 0.847±0.186 | 0.898±0.185 | 0.849±0.178 | 0.770±0.203
LeViT-UNet [45] | 0.954±0.089 | 0.896±0.152 | 0.908±0.176 | 0.883±0.161 | 0.817±0.185
DCSAU-Net | 0.960±0.075 | 0.917±0.127 | 0.922±0.139 | 0.904±0.128 | 0.841±0.158

mIoU is the official evaluation metric of this challenge. According to Table 4, DCSAU-Net improves on LeViT-UNet by 2.4% in this metric and on UNet3+ by 1.8% in F1-score. Among the remaining metrics, our model achieves a recall of 0.922 and an accuracy of 0.960, both better than the other baseline methods. A high recall score is also more desirable in clinical applications [Oreiller et al., 2022].

Detailed ablation study of the DCSAU-Net architecture.
Dataset | Method | Accuracy | Precision | Recall | F1-score | mIoU | Parameters | FLOPs | FPS
CVC-ClinicDB | U-Net [Ronneberger et al., 2015] | 0.984±0.019 | 0.882±0.195 | 0.893±0.176 | 0.872±0.189 | 0.809±0.213 | 13.40M | 31.11 | 109.95
CVC-ClinicDB | U-Net + PFC | 0.987±0.014 | 0.901±0.191 | 0.885±0.214 | 0.881±0.211 | 0.828±0.216 | 13.37M | 29.70 | 103.49
CVC-ClinicDB | U-Net + CSA | 0.987±0.015 | 0.890±0.211 | 0.903±0.179 | 0.890±0.193 | 0.840±0.204 | 2.62M | 8.33 | 44.26
CVC-ClinicDB | U-Net + PFC + CSA (ours) | 0.990±0.015 | 0.917±0.148 | 0.920±0.143 | 0.916±0.141 | 0.861±0.156 | 2.60M | 6.91 | 43.37
SegPC-2021 | U-Net [Ronneberger et al., 2015] | 0.939±0.053 | 0.842±0.142 | 0.879±0.118 | 0.855±0.119 | 0.766±0.148 | 13.40M | 124.58 | 48.46
SegPC-2021 | U-Net + PFC | 0.946±0.046 | 0.866±0.123 | 0.874±0.086 | 0.864±0.085 | 0.780±0.144 | 13.37M | 119.79 | 47.63
SegPC-2021 | U-Net + CSA | 0.946±0.046 | 0.855±0.135 | 0.896±0.071 | 0.870±0.080 | 0.781±0.146 | 2.62M | 33.35 | 33.22
SegPC-2021 | U-Net + PFC + CSA (ours) | 0.950±0.045 | 0.871±0.113 | 0.910±0.067 | 0.886±0.078 | 0.806±0.121 | 2.60M | 27.66 | 32.08
2018 Data Science Bowl | U-Net [Ronneberger et al., 2015] | 0.955±0.047 | 0.872±0.105 | 0.920±0.111 | 0.887±0.090 | 0.808±0.126 | 13.40M | 31.11 | 125.30
2018 Data Science Bowl | U-Net + PFC | 0.955±0.046 | 0.905±0.105 | 0.910±0.096 | 0.901±0.084 | 0.830±0.123 | 13.37M | 29.70 | 117.09
2018 Data Science Bowl | U-Net + CSA | 0.957±0.045 | 0.903±0.105 | 0.925±0.090 | 0.908±0.082 | 0.839±0.122 | 2.62M | 8.33 | 43.87
2018 Data Science Bowl | U-Net + PFC + CSA (ours) | 0.959±0.045 | 0.914±0.098 | 0.924±0.077 | 0.914±0.077 | 0.850±0.114 | 2.60M | 6.91 | 43.42
ISIC-2018 | U-Net [Ronneberger et al., 2015] | 0.952±0.079 | 0.883±0.152 | 0.906±0.180 | 0.874±0.158 | 0.802±0.182 | 13.40M | 31.11 | 115.85
ISIC-2018 | U-Net + PFC | 0.955±0.076 | 0.915±0.129 | 0.901±0.148 | 0.890±0.128 | 0.821±0.161 | 13.37M | 29.70 | 113.36
ISIC-2018 | U-Net + CSA | 0.955±0.078 | 0.915±0.123 | 0.909±0.140 | 0.893±0.127 | 0.830±0.160 | 2.62M | 8.33 | 43.19
ISIC-2018 | U-Net + PFC + CSA (ours) | 0.960±0.075 | 0.917±0.127 | 0.922±0.139 | 0.904±0.128 | 0.841±0.158 | 2.60M | 6.91 | 41.91

An investigation of different kernel sizes in the PFC block of the DCSAU-Net architecture.
Dataset | Kernel Size | Accuracy | Precision | Recall | F1-score | mIoU | Parameters | FLOPs | FPS
CVC-ClinicDB | 3x3 | 0.989±0.014 | 0.892±0.196 | 0.922±0.176 | 0.903±0.188 | 0.857±0.194 | 2.58M | 6.24 | 43.02
CVC-ClinicDB | 5x5 | 0.987±0.010 | 0.898±0.172 | 0.916±0.136 | 0.904±0.159 | 0.858±0.174 | 2.59M | 6.50 | 42.89
CVC-ClinicDB | 7x7 | 0.990±0.015 | 0.917±0.148 | 0.920±0.143 | 0.916±0.141 | 0.861±0.156 | 2.60M | 6.91 | 43.37
CVC-ClinicDB | 9x9 | 0.988±0.017 | 0.908±0.160 | 0.902±0.180 | 0.894±0.177 | 0.841±0.198 | 2.61M | 7.44 | 43.39
SegPC-2021 | 3x3 | 0.946±0.058 | 0.866±0.118 | 0.882±0.091 | 0.869±0.075 | 0.790±0.145 | 2.58M | 39.42 | 32.09
SegPC-2021 | 5x5 | 0.948±0.048 | 0.863±0.122 | 0.901±0.070 | 0.877±0.080 | 0.800±0.131 | 2.59M | 40.49 | 32.02
SegPC-2021 | 7x7 | 0.950±0.045 | 0.871±0.113 | 0.910±0.067 | 0.886±0.078 | 0.806±0.121 | 2.60M | 42.10 | 32.08
SegPC-2021 | 9x9 | 0.946±0.050 | 0.851±0.134 | 0.896±0.078 | 0.868±0.104 | 0.786±0.153 | 2.61M | 44.25 | 31.45
2018 Data Science Bowl | 3x3 | 0.958±0.045 | 0.911±0.101 | 0.920±0.076 | 0.911±0.077 | 0.845±0.115 | 2.58M | 6.24 | 43.31
2018 Data Science Bowl | 5x5 | 0.958±0.044 | 0.915±0.096 | 0.918±0.077 | 0.912±0.077 | 0.847±0.114 | 2.59M | 6.50 | 43.12
2018 Data Science Bowl | 7x7 | 0.959±0.045 | 0.914±0.098 | 0.924±0.077 | 0.914±0.077 | 0.850±0.114 | 2.60M | 6.91 | 43.42
2018 Data Science Bowl | 9x9 | 0.957±0.045 | 0.908±0.106 | 0.921±0.083 | 0.908±0.081 | 0.841±0.119 | 2.61M | 7.44 | 43.08
ISIC-2018 | 3x3 | 0.958±0.080 | 0.921±0.112 | 0.904±0.171 | 0.893±0.144 | 0.829±0.173 | 2.58M | 6.24 | 42.17
ISIC-2018 | 5x5 | 0.959±0.077 | 0.919±0.127 | 0.913±0.149 | 0.898±0.139 | 0.836±0.165 | 2.59M | 6.50 | 42.12
ISIC-2018 | 7x7 | 0.960±0.075 | 0.917±0.127 | 0.922±0.139 | 0.904±0.128 | 0.841±0.158 | 2.60M | 6.91 | 41.91
ISIC-2018 | 9x9 | 0.958±0.080 | 0.922±0.117 | 0.903±0.164 | 0.893±0.146 | 0.830±0.172 | 2.61M | 7.44 | 42.63

§.§ Ablation Study

In this section, we conduct an extensive ablation study of DCSAU-Net. The number of parameters, floating-point operations (FLOPs) and frames per second (FPS) are reported to investigate the effectiveness of each module in more detail.
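For reference, below is a minimal sketch of how the parameter counts and FPS figures in the tables above could be measured in PyTorch. The function names are ours, and FLOPs are usually obtained with a dedicated profiler (for example thop or fvcore), which is omitted here; this is therefore an illustration rather than the benchmarking code used for the paper.

```python
import time
import torch

def count_parameters(model: torch.nn.Module) -> float:
    """Number of trainable parameters, in millions."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

@torch.no_grad()
def measure_fps(model, input_size=(1, 3, 256, 256), runs=100, device="cuda"):
    """Average frames per second over `runs` forward passes at the given input size."""
    model = model.eval().to(device)
    x = torch.randn(*input_size, device=device)
    for _ in range(10):                  # warm-up iterations
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.time()
    for _ in range(runs):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    return runs / (time.time() - start)
```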
Table <ref> provides the ablation results of the four configurations on all four datasets.

§.§.§ Significance of PFC Strategy

The PFC strategy is an essential part of the proposed DCSAU-Net model. It uses a residual depthwise separable convolution architecture with a large kernel size to enrich low-level semantic information in the initial downsampling block, and thus helps generate a more accurate segmentation mask. We compare the U-Net and U-Net + PFC configurations to evaluate the effectiveness of the PFC strategy. In terms of mIoU in Table <ref>, PFC yields an improvement of 1.9% on the CVC-ClinicDB dataset, 1.4% on SegPC-2021, 2.2% on the 2018 Data Science Bowl dataset and 1.9% on the ISIC 2018 dataset. It can therefore be concluded that the PFC strategy enhances the performance of the original U-Net.

§.§.§ Effectiveness of CSA Block

The DCSAU-Net model uses the CSA block to combine multi-scale feature maps, which allows it to perceive lesions of different sizes in medical images. The effectiveness of the CSA block can be evaluated by comparing the U-Net and U-Net + CSA configurations in Table <ref>. In terms of mIoU, the CSA block achieves an improvement of 3.1% on the CVC-ClinicDB dataset, 1.5% on SegPC-2021, 3.1% on the 2018 Data Science Bowl dataset and 2.8% on the ISIC 2018 dataset. The CSA block therefore clearly improves on the original U-Net and has a more significant impact than the PFC strategy. By taking advantage of both modules, the full DCSAU-Net model (U-Net + PFC + CSA) further improves the F1-score by 0.6% to 3.5% and the mIoU by 1.1% to 3.3% compared to the U-Net with only a single PFC or CSA module.

Qualitative comparison results between DCSAU-Net and other SOTA models on challenging images of the four medical segmentation datasets.

Results of the first 20 epochs on the test dataset of the four medical image segmentation tasks.

§ DISCUSSION

Semantic segmentation has been widely adopted in the field of medical image analysis. Many deep learning models construct encoder-decoder architectures and fuse low-level and high-level semantic information through skip connections. These methods usually use the plain U-Net [Ronneberger et al., 2015] block as the head of the encoder to extract low-level semantic information, which probably misses some important features in the images. Our approach adopts depthwise separable convolutions with a larger kernel size to build a novel PFC strategy that retains these primary features as much as possible. In addition, we explore the impact of different kernel sizes in the depthwise convolution on performance, which is presented in Table <ref>. From the experimental results, we observe that the DCSAU-Net model achieves similar performance with 3x3, 5x5 and 7x7 kernel sizes. In practice, one may select a small kernel size to reduce the number of parameters and the computational cost; however, to demonstrate the best performance of our proposed architecture in this study, we use a 7x7 kernel size to train the model. Given the efficiency of depthwise separable convolution, adding more such layers may further improve the information-capturing capability of the PFC module in the low-level semantic layer, which is worth exploring in future work.
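As an illustration of the kind of building block discussed above, the following is a minimal PyTorch sketch of a residual, large-kernel depthwise separable convolution block. The class names, layer ordering, normalisation and residual wiring are our assumptions for illustration only and are not taken from the authors' released implementation of the PFC module.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise convolution with a large kernel followed by a pointwise (1x1) convolution."""
    def __init__(self, in_ch, out_ch, kernel_size=7):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class PFCBlock(nn.Module):
    """Sketch of a primary-feature block: a residual large-kernel depthwise separable convolution."""
    def __init__(self, in_ch, out_ch, kernel_size=7):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, out_ch, 1, bias=False)  # match channels for the residual
        self.conv = DepthwiseSeparableConv(out_ch, out_ch, kernel_size)

    def forward(self, x):
        x = self.proj(x)
        return x + self.conv(x)                               # residual connection
```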
We next introduce the CSA block, which not only enhances the connectivity across different channels but also strengthens the feature representation at different scales through the attention mechanism, and finally completes the multi-scale combination. The effectiveness of both modules is shown in Table <ref> and confirmed by the ablation study. Although U-Net has a shorter inference time than the DCSAU-Net model, our approach uses far fewer parameters for the same number of output feature channels while keeping the inference time acceptable, which makes it more suitable for deployment on machines with limited memory. To further demonstrate the improvement brought by the DCSAU-Net model for the medical image segmentation task, we visualise some segmentation results of all models on challenging images in Fig. <ref>. From the qualitative results, the segmentation masks generated by our proposed model capture more of the true foreground in low-quality images, such as those with incomplete staining or blur, compared to the other SOTA methods. Although the segmentation result of DCSAU-Net is not completely correct, this imperfect mask preserves more shape information and can potentially be refined by image post-processing algorithms such as conditional random fields. In our experiments, we train all models with a standard Dice loss function. We also compared the convergence speed of each model on all four datasets, which is shown in Fig. <ref>. It can be observed that our proposed model converges noticeably faster than the other SOTA methods within the first 20 epochs, which means that the DCSAU-Net model can reach reliable performance with fewer training epochs. Furthermore, using other advanced training methods, such as deep supervision or combined loss functions, may yield even higher performance in medical image segmentation. In summary, DCSAU-Net shows robustness and superior performance on various medical segmentation tasks, and we believe it can serve as a new SOTA model for medical image segmentation.

§ CONCLUSION

In this paper, we propose a novel encoder-decoder architecture for medical image segmentation, called DCSAU-Net. The presented model comprises the PFC strategy and the CSA block. The former enhances the ability to preserve primary features from images. The latter splits the input feature maps into two feature groups, where each group contains a different number of convolutions and highlights meaningful features using the attention mechanism; the CSA block can therefore combine feature maps with different receptive fields. We evaluate our model on four different medical image segmentation datasets. The results show that the DCSAU-Net architecture achieves higher scores than other SOTA models on the F1-score and mIoU metrics. In particular, our model performs better on the multi-class segmentation task and on complex images. In the future, we will focus on optimising the DCSAU-Net architecture to improve its performance and make it suitable for more medical image segmentation tasks.

§ REFERENCES

[Ma et al., 2021] X. Ma, Y. Niu, L. Gu, Y. Wang, Y. Zhao, J. Bailey, F. Lu, Understanding adversarial attacks on deep learning based medical image analysis systems, Pattern Recognition 110 (2021) 107332.
[Coates et al., 2015] A. S. Coates, E. P. Winer, A. Goldhirsch, R. D. Gelber, M. Gnant, M. Piccart-Gebhart, B. Thürlimann, H.-J. Senn, P. Members, F. André, et al., Tailoring therapies—improving the management of early breast cancer: St Gallen international expert consensus on the primary therapy of early breast cancer 2015, Annals of Oncology 26 (2015) 1533–1546.
[Chen et al., 2019] X. Chen, B. M. Williams, S. R. Vallabhaneni, G. Czanner, R. Williams, Y. Zheng, Learning active contour models for medical image segmentation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019.
[He et al., 2021] S. He, K. T. Minn, L. Solnica-Krezel, M. A. Anastasio, H. Li, Deeply-supervised density regression for automatic cell counting in microscopy images, Medical Image Analysis 68 (2021) 101892.
[Zhou et al., 2019] S. Zhou, D. Nie, E. Adeli, J. Yin, J. Lian, D. Shen, High-resolution encoder–decoder networks for low-contrast medical image segmentation, IEEE Transactions on Image Processing 29 (2019) 461–475.
[Otsu, 1979] N. Otsu, A threshold selection method from gray-level histograms, IEEE Transactions on Systems, Man, and Cybernetics 9 (1979) 62–66.
[Kass et al., 1988] M. Kass, A. Witkin, D. Terzopoulos, Snakes: Active contour models, International Journal of Computer Vision 1 (1988) 321–331.
[Tizhoosh, 2005] H. R. Tizhoosh, Image thresholding using type II fuzzy sets, Pattern Recognition 38 (2005) 2363–2372.
[Riccio et al., 2018] D. Riccio, N. Brancati, M. Frucci, D. Gragnaniello, A new unsupervised approach for segmenting and counting cells in high-throughput microscopy image sets, IEEE Journal of Biomedical and Health Informatics 23 (2018) 437–448.
[Feng et al., 2020] S. Feng, H. Zhao, F. Shi, X. Cheng, M. Wang, Y. Ma, D. Xiang, W. Zhu, X. Chen, CPFNet: Context pyramid fusion network for medical image segmentation, IEEE Transactions on Medical Imaging 39 (2020) 3008–3018.
[Litjens et al., 2017] G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. Van Der Laak, B. Van Ginneken, C. I. Sánchez, A survey on deep learning in medical image analysis, Medical Image Analysis 42 (2017) 60–88.
[Chen et al., 2019] S. Chen, G. Bortsova, A. García-Uceda Juárez, G. v. Tulder, M. d. Bruijne, Multi-task attention-based semi-supervised learning for medical image segmentation, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2019.
[Wang et al., 2022] H. Wang, P. Cao, J. Wang, O. R. Zaiane, UCTransNet: Rethinking the skip connections in U-Net from a channel-wise perspective with transformer, in: Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 2022, pp. 2441–2449.
[Caicedo et al., 2019] J. C. Caicedo, A. Goodman, K. W. Karhohs, B. A. Cimini, J. Ackerman, M. Haghighi, C. Heng, T. Becker, M. Doan, C. McQuin, et al., Nucleus segmentation across imaging experiments: the 2018 Data Science Bowl, Nature Methods 16 (2019) 1247–1253.
[Codella et al., 2018] N. C. Codella, D. Gutman, M. E. Celebi, B. Helba, M. A. Marchetti, S. W. Dusza, A. Kalloo, K. Liopyris, N. Mishra, H. Kittler, et al., Skin lesion analysis toward melanoma detection: A challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), hosted by the International Skin Imaging Collaboration (ISIC), in: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), IEEE, 2018, pp. 168–172.
[Tschandl et al., 2018] P. Tschandl, C. Rosendahl, H. Kittler, The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions, Scientific Data 5 (2018) 1–9.
[Bernal et al., 2015] J. Bernal, F. J. Sánchez, G. Fernández-Esparrach, D. Gil, C. Rodríguez, F. Vilariño, WM-DOVA maps for accurate polyp highlighting in colonoscopy: Validation vs. saliency maps from physicians, Computerized Medical Imaging and Graphics 43 (2015) 99–111.
[Gupta et al., 2021] A. Gupta, R. Gupta, S. Gehlot, S. Goswami, SegPC-2021: Segmentation of multiple myeloma plasma cells in microscopic images, 2021. <https://dx.doi.org/10.21227/7np1-2q42>.
[Ronneberger et al., 2015] O. Ronneberger, P. Fischer, T. Brox, U-Net: Convolutional networks for biomedical image segmentation, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2015.
[Zhou et al., 2018] Z. Zhou, M. M. Rahman Siddiquee, N. Tajbakhsh, J. Liang, UNet++: A nested U-Net architecture for medical image segmentation, in: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, Springer, 2018.
[Huang et al., 2020] H. Huang, L. Lin, R. Tong, H. Hu, Q. Zhang, Y. Iwamoto, X. Han, Y.-W. Chen, J. Wu, UNet 3+: A full-scale connected UNet for medical image segmentation, in: ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2020.
[Jha et al., 2020] D. Jha, M. A. Riegler, D. Johansen, P. Halvorsen, H. D. Johansen, DoubleU-Net: A deep convolutional neural network for medical image segmentation, in: 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS), IEEE, 2020, pp. 558–564.
[He et al., 2016] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[Jha et al., 2019] D. Jha, P. H. Smedsrud, M. A. Riegler, D. Johansen, T. De Lange, P. Halvorsen, H. D. Johansen, ResUNet++: An advanced architecture for medical image segmentation, in: 2019 IEEE International Symposium on Multimedia (ISM), IEEE, 2019.
[Hu et al., 2018] J. Hu, L. Shen, G. Sun, Squeeze-and-excitation networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
[Kaul et al., 2019] C. Kaul, S. Manandhar, N. Pears, FocusNet: An attention-based fully convolutional network for medical image segmentation, in: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), IEEE, 2019, pp. 455–458.
[Liu et al., 2022] A. Liu, X. Huang, T. Li, P. Ma, Co-Net: A collaborative region-contour-driven network for fine-to-finer medical image segmentation, in: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2022.
[Oktay et al., 2018] O. Oktay, J. Schlemper, L. L. Folgoc, M. Lee, M. Heinrich, K. Misawa, K. Mori, S. McDonagh, N. Y. Hammerla, B. Kainz, et al., Attention U-Net: Learning where to look for the pancreas, arXiv preprint arXiv:1804.03999.
[Vaswani et al., 2017] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, I. Polosukhin, Attention is all you need, Advances in Neural Information Processing Systems 30 (2017).
[Yuan et al., 2019] Y. Yuan, X. Chen, X. Chen, J. Wang, Segmentation transformer: Object-contextual representations for semantic segmentation, arXiv preprint arXiv:1909.11065.
[Xu et al., 2021] G. Xu, X. Wu, X. Zhang, X. He, LeViT-UNet: Make faster encoders with transformer for medical image segmentation, arXiv preprint arXiv:2107.08623.
[Chen et al., 2021] J. Chen, Y. Lu, Q. Yu, X. Luo, E. Adeli, Y. Wang, L. Lu, A. L. Yuille, Y. Zhou, TransUNet: Transformers make strong encoders for medical image segmentation, arXiv preprint arXiv:2102.04306.
[Howard et al., 2017] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, H. Adam, MobileNets: Efficient convolutional neural networks for mobile vision applications, arXiv preprint arXiv:1704.04861.
[Chollet, 2017] F. Chollet, Xception: Deep learning with depthwise separable convolutions, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[Sandler et al., 2018] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, L.-C. Chen, MobileNetV2: Inverted residuals and linear bottlenecks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
[Qi et al., 2019] K. Qi, H. Yang, C. Li, Z. Liu, M. Wang, Q. Liu, S. Wang, X-Net: Brain stroke lesion segmentation based on depthwise separable convolution and long-range dependencies, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2019.
[Chen et al., 2019] K. Chen, J. Wang, J. Pang, Y. Cao, Y. Xiong, X. Li, S. Sun, W. Feng, Z. Liu, J. Xu, et al., MMDetection: Open MMLab detection toolbox and benchmark, arXiv preprint arXiv:1906.07155.
[Ding et al., 2022] X. Ding, X. Zhang, J. Han, G. Ding, Scaling up your kernels to 31x31: Revisiting large kernel design in CNNs, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.
[Simonyan and Zisserman, 2014] K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition, arXiv preprint arXiv:1409.1556.
[Zhang et al., 2018] Z. Zhang, Q. Liu, Y. Wang, Road extraction by deep residual U-Net, IEEE Geoscience and Remote Sensing Letters 15 (2018) 749–753.
[Gao et al., 2019] S.-H. Gao, M.-M. Cheng, K. Zhao, X.-Y. Zhang, M.-H. Yang, P. Torr, Res2Net: A new multi-scale backbone architecture, IEEE Transactions on Pattern Analysis and Machine Intelligence 43 (2019).
[Zhang et al., 2022] H. Zhang, C. Wu, Z. Zhang, Y. Zhu, H. Lin, Z. Zhang, Y. Sun, T. He, J. Mueller, R. Manmatha, et al., ResNeSt: Split-attention networks, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.
[Chen et al., 2021] J. Chen, E. Asma, C. Chan, Targeted gradient descent: A novel method for convolutional neural networks fine-tuning and online-learning, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2021.
[Alom et al., 2018] M. Z. Alom, M. Hasan, C. Yakopcic, T. M. Taha, V. K. Asari, Recurrent residual convolutional neural network based on U-Net (R2U-Net) for medical image segmentation, arXiv preprint arXiv:1802.06955.
[45] G. Xu, X. Zhang, Y. Fang, X. Cao, W. Liao, X. He, X. Wu, LeViT-UNet: Make faster encoders with transformer for biomedical image segmentation (????).
[Oreiller et al., 2022] V. Oreiller, V. Andrearczyk, M. Jreige, S. Boughdad, H. Elhalawani, J. Castelli, M. Vallières, S. Zhu, J. Xie, Y. Peng, et al., Head and neck tumor segmentation in PET/CT: the HECKTOR challenge, Medical Image Analysis 77 (2022) 102336.
# On antisymmetric infinitesimal conformal bialgebras Yanyong Hong Department of Mathematics, Hangzhou Normal University, Hangzhou 311121, PR China<EMAIL_ADDRESS>and Chengming Bai Chern Institute of Mathematics & LPMC, Nankai University, Tianjin 300071, PR China <EMAIL_ADDRESS> ###### Abstract. In this paper, we construct a bialgebra theory for associative conformal algebras, namely antisymmetric infinitesimal conformal bialgebras. On the one hand, it is an attempt to give conformal structures for antisymmetric infinitesimal bialgebras. On the other hand, under certain conditions, such structures are equivalent to double constructions of Frobenius conformal algebras, which are associative conformal algebras that are decomposed into the direct sum of another associative conformal algebra and its conformal dual as $\mathbb{C}[\partial]$-modules such that both of them are subalgebras and the natural conformal bilinear form is invariant. The coboundary case leads to the introduction of associative conformal Yang-Baxter equation whose antisymmetric solutions give antisymmetric infinitesimal conformal bialgebras. Moreover, the construction of antisymmetric solutions of associative conformal Yang-Baxter equation is given from $\mathcal{O}$-operators of associative conformal algebras as well as dendriform conformal algebras. ###### Key words and phrases: associative conformal algebra, dendriform conformal algebra, associative conformal Yang-Baxter equation, $\mathcal{O}$-operator, Rota-Baxter operator ###### 2010 Mathematics Subject Classification: 16D20, 16D70, 17A30, 17B38 ## 1\. Introduction The theory of Lie conformal algebras appeared as a formal language describing the algebraic properties of the operator product expansion in two-dimensional conformal field theory ([16]). In particular, Lie conformal algebras turn out to be valuable tools in studying vertex algebras and Hamiltonian formalism in the theory of nonlinear evolution equations ([3]). Moreover, Lie conformal algebras have close connections to infinite-dimensional Lie algebras satisfying the locality property ([17]). The conformal analogues of associative algebras, namely, associative conformal algebras naturally appeared in the representation theory of Lie conformal algebras ([9]). They were studied widely on the structure theory ([4, 5, 6, 8, 10, 11, 18, 20, 22, 26, 27, 28, 29, 30, 31, 32]) as well as representation theory ([7, 21, 23]). We would like to point that there are the “conformal analogues” for certain algebras besides Lie and associative algebras or the “conformal structures” of these algebras such as left-symmetric conformal algebras ([13]) and Jordan conformal algebras ([19]). It is natural to extend such structures to the bialgebras, that is, consider the conformal analogues of bialgebras. In the case of Lie bialgebras, Liberati in [24] developed a theory of Lie conformal bialgebras including the introduction of the notions of conformal classical Yang-Baxter equation, conformal Manin triples and conformal Drinfeld’s doubles. Similarly, a theory of left-symmetric conformal bialgebras was developed in [14], which are equivalent to a class of special Lie conformal algebras named parakähler Lie conformal algebras and the notion of conformal $S$-equation was introduced in the coboundary case. 
Moreover, the operator forms of the conformal classical Yang-Baxter equation and the conformal $S$-equation were studied in [12], which shows that the antisymmetric solutions of the conformal classical Yang-Baxter equation and the symmetric solutions of the conformal $S$-equation can be interpreted in terms of a kind of operators called $\mathcal{O}$-operators in the conformal sense. However, as far as we know, there is no conformal analogue of “associative bialgebras” yet. In fact, there are two kinds of “associative bialgebras”. One is the usual bialgebras in the theory of Hopf algebras, in which the coproducts are homomorphisms of the products. The other is the infinitesimal bialgebras, in which the coproducts are “derivations” of the products in a certain sense; they were introduced by Joni and Rota in [15] in order to provide an algebraic framework for the calculus of divided differences. In particular, for the latter, in the case of antisymmetric infinitesimal (ASI) bialgebras, which are called “associative D-algebras” in [33] or “balanced infinitesimal bialgebras” in the sense of the opposite algebras in [1], there is a systematic study in terms of their equivalences with double constructions of Frobenius algebras as well as their relationships with the associative Yang-Baxter equation ([2]).

In this paper, we develop a “conformal” theory for antisymmetric infinitesimal bialgebras, namely antisymmetric infinitesimal (ASI) conformal bialgebras. It is also a bialgebra theory for associative conformal algebras. That is, the following diagram is commutative:

    associative algebras  --(bialgebra structures)-->  ASI bialgebras
            |                                                 |
    (conformal structures)                        (conformal structures)
            v                                                 v
    associative conformal algebras  --(bialgebra structures)-->  ASI conformal bialgebras

We would like to point out that such an approach might not be available for considering the conformal analogues of the usual (associative) bialgebras and even Hopf algebras, but it can help to shed light on further studies of the latter. The main idea is to extend the study of ASI bialgebras given in [2] to the “conformal case”. Explicitly, we first introduce the notion of double constructions of Frobenius conformal algebras as a conformal analogue of double constructions of Frobenius algebras; these are associative conformal algebras that decompose into the direct sum of another associative conformal algebra and its conformal dual as a $\mathbb{C}[\partial]$-module such that both of them are subalgebras and the natural conformal bilinear form is invariant. Such structures are interpreted equivalently in terms of matched pairs of associative conformal algebras, which were introduced in [11]. Finally, the notion of antisymmetric infinitesimal (ASI) conformal bialgebras is introduced as an equivalent structure of the aforementioned matched pairs of associative conformal algebras as well as of the double constructions of Frobenius conformal algebras. Note that the notion of ASI conformal bialgebras is available for any associative conformal algebra, whereas the equivalence with double constructions of Frobenius conformal algebras is available for associative conformal algebras which are finitely generated and free as $\mathbb{C}[\partial]$-modules. As in the case of ASI bialgebras, the definition of coboundary ASI conformal bialgebras is introduced and its study is also meaningful. It leads to the introduction of the associative conformal Yang-Baxter equation as a conformal analogue of the associative Yang-Baxter equation. In particular, its antisymmetric solutions give ASI conformal bialgebras.
The associative conformal Yang-Baxter equation is interpreted in terms of its operator forms by introducing the notion of $\mathcal{O}$-operators of associative conformal algebras, especially an antisymmetric solution of the associative conformal Yang-Baxter equation corresponds to the skew-symmetric part of a conformal linear map $T$, where $T_{0}=T_{\lambda}\mid_{\lambda=0}$ is an $\mathcal{O}$-operator in the conformal sense and moreover, an $\mathcal{O}$-operator of an associative conformal algebra gives an antisymmetric solution of associative conformal Yang-Baxter equation in a semi-direct product associative conformal algebra. Furthermore, we introduce the notion of dendriform conformal algebras and show that for a dendriform conformal algebra which is finite and free as a $\mathbb{C}[\partial]$-module, there is a natural $\mathcal{O}$-operator on the associated conformal associative algebra. Therefore there are constructions of antisymmetric solutions of associative conformal Yang-Baxter equation and hence ASI conformal bialgebras from $\mathcal{O}$-operators of associative conformal algebras as well as dendriform conformal algebras. This paper is organized as follows. In Section 2, the notions of an associative conformal algebra, its bimodule and a matched pair of associative conformal algebras are recalled. In Section 3, we introduce the notion of double constructions of Frobenius conformal algebras and study their relationship with matched pairs of associative conformal algebras. In Section 4, the notion of ASI conformal bialgebras is introduced as (under certain conditions) equivalent structures of the aforementioned matched pair of associative conformal algebras as well as the double constructions of Frobenius conformal algebras. Section 5 is devoted to studying the coboundary case of ASI conformal bialgebras and the associative conformal Yang-Baxter equation is introduced. In Section 6, we introduce the notions of $\mathcal{O}$-operators of associative conformal algebras and dendriform conformal algebras to construct (antisymmetric) solutions of associative conformal Yang-Baxter equation and hence give ASI conformal bialgebras. Throughout this paper, we denote by $\mathbb{C}$ the field of complex numbers. All tensors over $\mathbb{C}$ are denoted by $\otimes$. We denote the identity map by $I$. Moreover, if $A$ is a vector space, then the space of polynomials of $\lambda$ with coefficients in $A$ is denoted by $A[\lambda]$. ## 2\. Preliminaries on associative conformal algebras We recall the notions of an associative conformal algebra, its bimodule and a matched pair of associative conformal algebras. The interested readers may consult [16] and [11] for more details. ###### Definition 2.1. A conformal algebra $R$ is a $\mathbb{C}[\partial]$-module endowed with a $\mathbb{C}$-bilinear map $\cdot_{\lambda}\cdot:R\times R\rightarrow R[\lambda]$, $(a,b)\mapsto a_{\lambda}b$ satisfying $\displaystyle(\partial a)_{\lambda}b=-\lambda a_{\lambda}b,\quad a_{\lambda}(\partial b)=(\partial+\lambda)a_{\lambda}b,\quad\forall\ a,b\in R.~{}~{}~{}~{}~{}~{}\quad\text{(conformal sesquilinearity)}$ (1) An associative conformal algebra $R$ is a conformal algebra with the $\mathbb{C}$-bilinear map $\cdot_{\lambda}\cdot:R\times R\rightarrow R[\lambda]$ satisfying $\displaystyle(a_{\lambda}b)_{\lambda+\mu}c=a_{\lambda}(b_{\mu}c),\quad\forall\ a,b,c\in R.$ (2) A conformal algebra is called finite if it is finitely generated as a $\mathbb{C}[\partial]$-module. 
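As a small illustration (this example is ours, added for clarity rather than taken from the original text), consider the free $\mathbb{C}[\partial]$-module of rank one, $R=\mathbb{C}[\partial]a$, with the $\lambda$-product defined on the generator by $a_{\lambda}a=ka$ for some $k\in\mathbb{C}$ and extended to all of $R$ by conformal sesquilinearity (1). Associativity (2) then only needs to be checked on the generator:

$\displaystyle(a_{\lambda}a)_{\lambda+\mu}a=k\,a_{\lambda+\mu}a=k^{2}a,\qquad a_{\lambda}(a_{\mu}a)=k\,a_{\lambda}a=k^{2}a,$

so $R$ is an associative conformal algebra. For $k=1$ this recovers $\text{Cur}(\mathbb{C})$ in the sense of Example 2.2 below, and the proof of Corollary 4.4 below shows that every associative conformal algebra which is free of rank one as a $\mathbb{C}[\partial]$-module is of this form.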
The rank of a conformal algebra $R$ is its rank as a $\mathbb{C}[\partial]$-module. The notions of a homomorphism, an ideal and a subalgebra of an associative conformal algebra are defined as usual. ###### Example 2.2. Let $(A,\cdot)$ be an associative algebra. Then $\text{Cur}(A)=\mathbb{C}[\partial]\otimes A$ is an associative conformal algebra with the following $\lambda$-product: $\displaystyle(p(\partial)a)_{\lambda}(q(\partial)b)=p(-\lambda)q(\lambda+\partial)(a\cdot b),\;\;\forall\ \text{$p(\partial)$, $q(\partial)\in\mathbb{C}[\partial]$, $a$, $b\in A$.}$ ###### Definition 2.3. A left module $M$ over an associative conformal algebra $A$ is a $\mathbb{C}[\partial]$-module endowed with a $\mathbb{C}$-bilinear map $A\times M\longrightarrow M[\lambda]$, $(a,v)\mapsto a\rightharpoonup_{\lambda}v$, satisfying the following axioms $(\forall\ a,b\in A,v\in M)$: (LM1)$\qquad\qquad(\partial a)\rightharpoonup_{\lambda}v=-\lambda a\rightharpoonup_{\lambda}v,~{}~{}~{}a\rightharpoonup_{\lambda}(\partial v)=(\partial+\lambda)(a\rightharpoonup_{\lambda}v),$ (LM2)$\qquad\qquad(a_{\lambda}b)\rightharpoonup_{\lambda+\mu}v=a\rightharpoonup_{\lambda}(b\rightharpoonup_{\mu}v).$ We denote it by $(M,\rightharpoonup_{\lambda})$. A right module $M$ over an associative conformal algebra $A$ is a $\mathbb{C}[\partial]$-module endowed with a $\mathbb{C}$-bilinear map $M\times A\longrightarrow M[\lambda]$, $(v,a)\mapsto v\leftharpoonup_{\lambda}a$, satisfying the following axioms $(\forall\ a,b\in A,v\in M)$: (RM1)$\qquad\qquad(\partial v)\leftharpoonup_{\lambda}a=-\lambda v\leftharpoonup_{\lambda}a,~{}~{}~{}v\leftharpoonup_{\lambda}(\partial a)=(\partial+\lambda)(v\leftharpoonup_{\lambda}a),$ (RM2)$\qquad\qquad(v\leftharpoonup_{\lambda}a)\leftharpoonup_{\lambda+\mu}b=v\leftharpoonup_{\lambda}(a_{\mu}b).$ We denote it by $(M,\leftharpoonup_{\lambda})$. An $A$-bimodule is a triple $(M,\rightharpoonup_{\lambda},\leftharpoonup_{\lambda})$ such that $(M,\rightharpoonup_{\lambda})$ is a left $A$-module, $(M,\leftharpoonup_{\lambda})$ is a right $A$-module, and they satisfy the following condition $\displaystyle(a\rightharpoonup_{\lambda}v)\leftharpoonup_{\lambda+\mu}b=a\rightharpoonup_{\lambda}(v\leftharpoonup_{\mu}b),\;\;\forall\ a,b\in A,v\in M.$ (3) ###### Definition 2.4. Let $U$ and $V$ be two $\mathbb{C}[\partial]$-modules. A conformal linear map from $U$ to $V$ is a $\mathbb{C}$-linear map $a:U\rightarrow V[\lambda]$, denoted by $a_{\lambda}:U\rightarrow V$, such that $[\partial,a_{\lambda}]=-\lambda a_{\lambda}$. Denote the vector space of all such maps by $Chom(U,V)$. It has a canonical structure of a $\mathbb{C}[\partial]$-module $(\partial a)_{\lambda}=-\lambda a_{\lambda}.$ Define the conformal dual of a $\mathbb{C}[\partial]$-module $U$ as $U^{\ast c}=Chom(U,\mathbb{C})$ where $\mathbb{C}$ is viewed as the trivial $\mathbb{C}[\partial]$-module, that is, $U^{\ast c}=\\{a:U\rightarrow\mathbb{C}[\lambda]|\mathbb{C}\mbox{-linear~{}~{}and~{}~{}}a_{\lambda}(\partial b)=\lambda a_{\lambda}b,\forall\ b\in U\\}.$ In the special case $U=V$, set $Cend(V)=Chom(V,V)$. If $V$ is a finite $\mathbb{C}[\partial]$-module, then the $\mathbb{C}[\partial]$-module $Cend(V)$ has a canonical structure of an associative conformal algebra defined by $\displaystyle(a_{\lambda}b)_{\mu}v=a_{\lambda}(b_{\mu-\lambda}v),\quad\forall\ a,b\in Cend(V),v\in V.$ Set $a\rightharpoonup_{\lambda}v=l_{A}(a)_{\lambda}v$ and $v\leftharpoonup_{\lambda}a=r_{A}(a)_{-\lambda-\partial}v$. 
Then a structure of a bimodule $M$ over an associative conformal algebra $A$ is the same as two $\mathbb{C}[\partial]$-module homomorphisms $l_{A}$ and $r_{A}$ from $A$ to $Cend(M)$ such that the following conditions hold: $\displaystyle l_{A}(a_{\lambda}b)_{\lambda+\mu}v=l_{A}(a)_{\lambda}(l_{A}(b)_{\mu}v),$ (4) $\displaystyle r_{A}(b)_{-\lambda-\mu-\partial}(r_{A}(a)_{-\lambda-\partial}v)=r_{A}(a_{\mu}b)_{-\lambda-\partial}v,$ (5) $\displaystyle l_{A}(a)_{\lambda}(r_{A}(b)_{-\mu-\partial}v)=r_{A}(b)_{-\lambda-\mu-\partial}(l_{A}(a)_{\lambda}v),$ (6) for all $a$, $b\in A$ and $v\in M$. We denote this bimodule by $(M,l_{A},r_{A})$. ###### Proposition 2.5. Let $(M,l_{A},r_{A})$ be a finite bimodule of an associative conformal algebra $A$. Let $l_{A}^{\ast}$ and $r_{A}^{\ast}$ be two $\mathbb{C}[\partial]$-module homomorphisms from $A$ to $Cend(M^{\ast c})$ defined by $\displaystyle(l_{A}^{\ast}(a)_{\lambda}f)_{\mu}u=f_{\mu-\lambda}(l_{A}(a)_{\lambda}u),~{}(r_{A}^{\ast}(a)_{\lambda}f)_{\mu}u=f_{\mu-\lambda}(r_{A}(a)_{\lambda}u),\;\;\forall\ a\in A,f\in M^{\ast c},u\in M.$ (7) Then $(M^{\ast c},r_{A}^{\ast},l_{A}^{\ast})$ is a bimodule of $A$. ###### Proof. Let $a$, $b\in A$, $f\in M^{\ast c}$ and $u\in M$. Since $\displaystyle(r_{A}^{\ast}(a)_{\lambda}(r_{A}^{\ast}(b)_{\mu}f))_{\nu}u$ $\displaystyle=$ $\displaystyle(r_{A}^{\ast}(b)_{\mu}f)_{\nu-\lambda}(r_{A}(a)_{\lambda}u)=f_{\nu-\lambda-\mu}(r_{A}(b)_{\mu}(r_{A}(a)_{\lambda}u))$ $\displaystyle=$ $\displaystyle f_{\nu-\lambda-\mu}(r_{A}(a_{\lambda}b)_{\lambda+\mu}u)=(r_{A}^{\ast}(a_{\lambda}b)_{\lambda+\mu}f)_{\nu}u;$ $\displaystyle(l_{A}^{\ast}(b)_{-\lambda-\mu-\partial}(l_{A}^{\ast}(a)_{-\lambda-\partial}f))_{\nu}u$ $\displaystyle=$ $\displaystyle(l_{A}^{\ast}(a)_{\mu}f)_{\lambda+\mu}(l_{A}(b)_{\nu-\lambda-\mu}u)=f_{\lambda}(l_{A}(a)_{\mu}(l_{A}(b)_{\nu-\lambda-\mu}u))$ $\displaystyle=$ $\displaystyle f_{\lambda}(l_{A}(a_{\mu}b)_{\nu-\lambda}u)=(l_{A}^{\ast}(a_{\mu}b)_{\nu-\lambda}f)_{\nu}u=(l_{A}^{\ast}(a_{\mu}b)_{-\lambda-\partial}f)_{\nu}u;$ $\displaystyle(r_{A}^{\ast}(a)_{\lambda}(l_{A}^{\ast}(b)_{-\mu-\partial}f))_{\nu}u$ $\displaystyle=$ $\displaystyle(l_{A}^{\ast}(b)_{-\lambda-\mu+\nu}f)_{\nu-\lambda}(r_{A}(a)_{\lambda}u)=f_{\mu}(l_{A}(b)_{\nu-\lambda-\mu}(r_{A}(a)_{\lambda}u))$ $\displaystyle=$ $\displaystyle f_{\mu}(r_{A}(a)_{\lambda}(l_{A}(b)_{\nu-\lambda-\mu}u))=(r_{A}^{\ast}(a)_{\lambda}f)_{\lambda+\mu}(l_{A}(b)_{\nu-\lambda-\mu}u)$ $\displaystyle=$ $\displaystyle(l_{A}^{\ast}(b)_{-\lambda-\mu-\partial}(r_{A}^{\ast}(a)_{\lambda}f))_{\nu}u,$ we have $\displaystyle r_{A}^{\ast}(a)_{\lambda}(r_{A}^{\ast}(b)_{\mu}f)=r_{A}^{\ast}(a_{\lambda}b)_{\lambda+\mu}f,\;\;l_{A}^{\ast}(b)_{-\lambda-\mu-\partial}(l_{A}^{\ast}(a)_{-\lambda-\partial}f)=l_{A}^{\ast}(a_{\mu}b)_{-\lambda-\partial}f,$ $\displaystyle r_{A}^{\ast}(a)_{\lambda}(l_{A}^{\ast}(b)_{-\mu-\partial}f)=l_{A}^{\ast}(b)_{-\lambda-\mu-\partial}(r_{A}^{\ast}(a)_{\lambda}f).$ Hence $(M^{\ast c},r_{A}^{\ast},l_{A}^{\ast})$ is a bimodule of $A$. ∎ ###### Example 2.6. Let $A$ be a finite associative conformal algebra. Define two $\mathbb{C}[\partial]$-module homomorphisms $L_{A}$ and $R_{A}$ from $A$ to $Cend(A)$ by $L_{A}(a)_{\lambda}b=a_{\lambda}b$ and $R_{A}(a)_{\lambda}b=b_{-\lambda-\partial}a$ for all $a$, $b\in A$. Then $(A,L_{A},R_{A})$ is a bimodule of $A$. Moreover, $(A^{\ast c},R_{A}^{\ast},L_{A}^{\ast})$ is a bimodule of $A$. ###### Proposition 2.7. ([11, Proposition 4.4]) Let $A$ and $B$ be two associative conformal algebras. 
Suppose that there are $\mathbb{C}[\partial]$-module homomorphisms $l_{A}$, $r_{A}:A\rightarrow Cend(B)$ and $l_{B}$, $r_{B}:B\rightarrow Cend(A)$ such that $(B,l_{A},r_{A})$ is a bimodule of $A$ and $(A,l_{B},r_{B})$ is a bimodule of $B$ and they satisfy the following relations: $\displaystyle l_{A}(a)_{\lambda}(x_{\mu}y)=(l_{A}(a)_{\lambda}x)_{\lambda+\mu}y+l_{A}(r_{B}(x)_{-\lambda-\partial}a)_{\lambda+\mu}y,$ (8) $\displaystyle r_{B}(x)_{-\lambda-\mu-\partial}(a_{\lambda}b)=a_{\lambda}(r_{B}(x)_{-\mu-\partial}b)+r_{B}(l_{A}(b)_{\mu}(x))_{-\lambda-\partial}a,$ (9) $\displaystyle l_{B}(x)_{\lambda}(a_{\mu}b)=(l_{B}(x)_{\lambda}a)_{\lambda+\mu}b+l_{B}(r_{A}(a)_{-\lambda-\partial}x)_{\lambda+\mu}b,$ (10) $\displaystyle r_{A}(l_{B}(y)_{\mu}a)_{-\lambda-\partial}x+x_{\lambda}(r_{A}(a)_{-\mu-\partial}y)=r_{A}(a)_{-\lambda-\mu-\partial}(x_{\lambda}y),$ (11) $\displaystyle r_{A}(r_{B}(y)_{-\mu-\partial}(a))_{-\lambda-\partial}x+x_{\lambda}(l_{A}(a)_{\mu}y)=l_{A}(l_{B}(x)_{\lambda}a)_{\lambda+\mu}y+(r_{A}(a)_{-\lambda-\partial}x)_{\lambda+\mu}y,$ (12) $\displaystyle a_{\lambda}(l_{B}(x)_{\mu}b)+r_{B}(r_{A}(b)_{-\mu-\partial}x)_{-\lambda-\partial}(a)=(r_{B}(x)_{-\lambda-\partial}a)_{\lambda+\mu}b+l_{B}(l_{A}(a)_{\lambda}x)_{\lambda+\mu}b,$ (13) for all $a$, $b\in A$ and $x$, $y\in B$. Then there is an associative conformal algebra structure on the $\mathbb{C}[\partial]$-module $A\oplus B$ given by $\displaystyle(a+x)_{\lambda}(b+y)=(a_{\lambda}b+l_{B}(x)_{\lambda}b+r_{B}(y)_{-\lambda-\partial}a)+(x_{\lambda}y+l_{A}(a)_{\lambda}y+r_{A}(b)_{-\lambda-\partial}x),$ (14) for all $a,b\in A$ and $x,y\in B$. We denote this associative conformal algebra by $A\bowtie B$. $(A,B,l_{A},r_{A},l_{B},r_{B})$ satisfying the above relations is called a matched pair of associative conformal algebras. Moreover, any associative conformal algebra $E=A\oplus B$ where the sum is the direct sum of $\mathbb{C}[\partial]$-modules and $A$, $B$ are two associative conformal subalgebras of $E$, is $A\bowtie B$ associated to some matched pair of associative conformal algebras. ###### Remark 2.8. If $l_{B}$, $r_{B}$ and the $\lambda$-product on $B$ are trivial in Proposition 2.7, that is, $B$ is exactly a bimodule of $A$, then $A\bowtie B$ is the semi-direct product of $A$ and its bimodule $B$, which is denoted by $A\ltimes_{l_{A},r_{A}}B$. ## 3\. Double constructions of Frobenius conformal algebras We introduce the notion of double constructions of Frobenius conformal algebras. The relationship between double constructions of Frobenius conformal algebras and matched pairs of associative conformal algebras is investigated. ###### Definition 3.1. Let $V$ be a $\mathbb{C}[\partial]$-module. A conformal bilinear form on $V$ is a $\mathbb{C}$-bilinear map $\langle\cdot,\cdot\rangle_{\lambda}:V\times V\rightarrow\mathbb{C}[\lambda]$ satisfying $\displaystyle\langle\partial a,b\rangle_{\lambda}=-\lambda\langle a,b\rangle_{\lambda},~{}~{}~{}\langle a,\partial b\rangle_{\lambda}=\lambda\langle a,b\rangle_{\lambda},\;\;\forall\ a,b\in V.$ (15) A conformal bilinear form is called symmetric if $\langle a,b\rangle_{\lambda}=\langle b,a\rangle_{-\lambda}$ for any $a$, $b\in V$. Suppose that there is a conformal bilinear form on a $\mathbb{C}[\partial]$-module $V$. 
Then we have a $\mathbb{C}[\partial]$-module homomorphism $\varphi:V\longrightarrow V^{\ast c},~{}~{}v\mapsto\varphi_{v}$ defined by $(\varphi_{v})_{\lambda}w=\langle v,w\rangle_{\lambda},\quad\forall\ v,w\in V.$ A conformal bilinear form is called non-degenerate if $\varphi$ gives an isomorphism of $\mathbb{C}[\partial]$-modules between $V$ and $V^{\ast c}$. ###### Definition 3.2. An associative conformal algebra $A$ is called a Frobenius conformal algebra if there is a non-degenerate conformal bilinear form on $A$ such that $\displaystyle\langle a_{\lambda}b,c\rangle_{\mu}=\langle a,b_{\mu-\partial}c\rangle_{\lambda},\;\;\forall\ a,b,c\in A.$ (16) A Frobenius conformal algebra $A$ is called symmetric if the conformal bilinear form on $A$ is symmetric. ###### Example 3.3. Let $A=\mathbb{C}[\partial]a\oplus\mathbb{C}[\partial]b$. Suppose that $A$ is an associative conformal algebra with the following $\lambda$-product: $\displaystyle a_{\lambda}a=(\partial^{2}+\lambda\partial+\lambda^{2})b,~{}~{}a_{\lambda}b=b_{\lambda}a=b_{\lambda}b=0.$ Then $A$ is a Frobenius conformal algebra with the following conformal bilinear form: $\displaystyle\langle a,a\rangle_{\lambda}=\langle b,b\rangle_{\lambda}=0,~{}~{}~{}\langle a,b\rangle_{\lambda}=\langle b,a\rangle_{\lambda}=1.$ ###### Example 3.4. Let $(A,\langle\cdot,\cdot\rangle)$ be a Frobenius algebra, that is, $A$ is an associative algebra with a non-degenerate bilinear form $\langle\cdot,\cdot\rangle$ satisfying $\langle ab,c\rangle=\langle a,bc\rangle,\;\;\forall\ a,b\in A.$ Then $(\text{Cur}(A),\langle\cdot,\cdot\rangle_{\lambda})$ is a Frobenius conformal algebra with $\langle\cdot,\cdot\rangle_{\lambda}$ defined by $\displaystyle\langle p(\partial)a,q(\partial)b\rangle_{\lambda}=p(-\lambda)q(\lambda)\langle a,b\rangle,~{}~{}~{}\forall\ \text{$p(\partial)$, $q(\partial)\in\mathbb{C}[\partial]$, $a$, $b\in A$.}$ ###### Definition 3.5. If a Frobenius conformal algebra $A$ satisfies the following conditions: 1. (1) $A=B\oplus B^{\ast c}$ where the sum is the direct sum of $\mathbb{C}[\partial]$-modules; 2. (2) $B$ and $B^{\ast c}$ are two associative conformal subalgebras of $A$; 3. (3) the conformal bilinear form on $A$ is naturally given by $\displaystyle\langle a+f,b+g\rangle_{\lambda}=f_{\lambda}(b)+g_{-\lambda}(a),~{}~{}~{}~{}\forall\ a,b\in B,f,g\in B^{\ast c},$ (17) then $A$ is called a double construction of Frobenius conformal algebra associated to $B$ and $B^{\ast c}$. ###### Theorem 3.6. Let $A$ be a finite associative conformal algebra which is free as a $\mathbb{C}[\partial]$-module. Suppose that there is an associative conformal algebra structure on $A^{\ast c}$. Then there is a double construction of Frobenius conformal algebra associated to $A$ and $A^{\ast c}$ if and only if $(A,A^{\ast c},R_{A}^{\ast},L_{A}^{\ast},R_{A^{\ast c}}^{\ast},L_{A^{\ast c}}^{\ast})$ is a matched pair of associative conformal algebras. ###### Proof. Suppose that $(A,A^{\ast c},R_{A}^{\ast},L_{A}^{\ast},R_{A^{\ast c}}^{\ast},L_{A^{\ast c}}^{\ast})$ is a matched pair of associative conformal algebras. Then $A\bowtie A^{\ast c}$ is endowed with an associative conformal algebra structure as follows. $\displaystyle(a+f)_{\lambda}(b+g)=(a_{\lambda}b+R_{A^{\ast c}}^{\ast}(f)_{\lambda}b+L_{A^{\ast c}}^{\ast}(g)_{-\lambda-\partial}a)+(f_{\lambda}g+R_{A}^{\ast}(a)_{\lambda}g+L_{A}^{\ast}(b)_{-\lambda-\partial}f),$ (18) for all $a,b\in A$ and $f,g\in A^{\ast c}$. By this $\lambda$-product, $A$ and $A^{\ast c}$ are two subalgebras of $A\bowtie A^{\ast c}$. 
Obviously, the conformal bilinear form on $A\bowtie A^{\ast c}$ given by (17) is symmetric and non-degenerate. For all $a$, $b$, $c\in A$, and $f$, $g$, $h\in A^{\ast c}$, we have $\displaystyle\langle(a+f)_{\lambda}(b+g),c+h\rangle_{\mu}$ $\displaystyle=$ $\displaystyle\langle a_{\lambda}b+R_{A^{\ast c}}^{\ast}(f)_{\lambda}b+L_{A^{\ast c}}^{\ast}(g)_{-\lambda-\partial}a+f_{\lambda}g+R_{A}^{\ast}(a)_{\lambda}g+L_{A}^{\ast}(b)_{-\lambda-\partial}f,c+h\rangle_{\mu}$ $\displaystyle=$ $\displaystyle(f_{\lambda}g+R_{A}^{\ast}(a)_{\lambda}g+L_{A}^{\ast}(b)_{-\lambda-\partial}f)_{\mu}(c)+h_{-\mu}(a_{\lambda}b+R_{A^{\ast c}}^{\ast}(f)_{\lambda}b+L_{A^{\ast c}}^{\ast}(g)_{-\lambda-\partial}a)$ $\displaystyle=$ $\displaystyle(f_{\lambda}g)_{\mu}c+g_{\mu-\lambda}(R_{A}(a)_{\lambda}c)+f_{\lambda}(L_{A}(b)_{\mu-\lambda}c)+h_{-\mu}(a_{\lambda}b)$ $\displaystyle+(R_{A^{\ast c}}(f)_{\lambda}h)_{\lambda-\mu}b+(L_{A^{\ast c}}(g)_{\mu-\lambda}h)_{-\lambda}a$ $\displaystyle=$ $\displaystyle(f_{\lambda}g)_{\mu}c+g_{\mu-\lambda}(c_{-\lambda-\partial}a)+f_{\lambda}(b_{\mu-\lambda}c)+h_{-\mu}(a_{\lambda}b)+(h_{-\lambda-\partial}f)_{\lambda-\mu}b+(g_{\mu-\lambda}h)_{-\lambda}a$ $\displaystyle=$ $\displaystyle(f_{\lambda}g)_{\mu}c+g_{\mu-\lambda}(c_{-\mu}a)+f_{\lambda}(b_{\mu-\lambda}c)+h_{-\mu}(a_{\lambda}b)+(h_{-\mu}f)_{\lambda-\mu}b+(g_{\mu-\lambda}h)_{-\lambda}a,$ and $\displaystyle\langle a+f,(b+g)_{\mu-\partial}(c+h)\rangle_{\lambda}$ $\displaystyle=$ $\displaystyle\langle a+f,b_{\mu-\partial}c+R_{A^{\ast c}}^{\ast}(g)_{\mu-\partial}c+L_{A^{\ast c}}^{\ast}(h)_{-\mu}b+g_{\mu-\partial}h+R_{A}^{\ast}(b)_{\mu-\partial}h+L_{A}^{\ast}(c)_{-\mu}g\rangle_{\lambda}$ $\displaystyle=$ $\displaystyle f_{\lambda}(b_{\mu-\partial}c)+f_{\lambda}(R_{A^{\ast c}}^{\ast}(g)_{\mu-\partial}c)+f_{\lambda}(L_{A^{\ast c}}^{\ast}(h)_{-\mu}b)$ $\displaystyle+(g_{\mu-\partial}h)_{-\lambda}a+(R_{A}^{\ast}(b)_{\mu-\partial}h)_{-\lambda}a+(L_{A}^{\ast}(c)_{-\mu}g)_{-\lambda}a$ $\displaystyle=$ $\displaystyle f_{\lambda}(b_{\mu-\lambda}c)+(R_{A^{\ast c}}(g)_{\mu-\lambda}f)_{\mu}c+(L_{A^{\ast c}}(h)_{-\mu}f)_{\lambda-\mu}b$ $\displaystyle+(g_{\mu-\lambda}h)_{-\lambda}a+h_{-\mu}(R_{A}(b)_{\mu-\lambda}a)+g_{\mu-\lambda}(L_{A}(c)_{-\mu}a)$ $\displaystyle=f_{\lambda}(b_{\mu-\lambda}c)+(f_{\lambda}g)_{\mu}c+(h_{-\mu}f)_{\lambda-\mu}b+(g_{\mu-\lambda}h)_{-\lambda}a+h_{-\mu}(a_{\lambda}b)+g_{\mu-\lambda}(c_{-\mu}a).$ Hence this conformal bilinear form on $A\bowtie A^{\ast c}$ is invariant. Therefore $A\bowtie A^{\ast c}$ is a double construction of Frobenius conformal algebra associated to $A$ and $A^{\ast c}$. Conversely, suppose that there is a double construction of Frobenius conformal algebra associated to $A$ and $A^{\ast c}$. Therefore there is an associative conformal algebra structure on $A\oplus A^{\ast c}$ associated to a matched pair $(A,A^{\ast c},l_{A},r_{A},l_{A^{\ast c}},r_{A^{\ast c}})$. 
Note that in $A\oplus A^{\ast c}$, $\displaystyle a_{\lambda}f=r_{A^{\ast c}}(f)_{-\lambda-\partial}a+l_{A}(a)_{\lambda}f,\;\;f_{\lambda}a=l_{A^{\ast c}}(f)_{\lambda}a+r_{A}(a)_{-\lambda-\partial}f,\;\;\forall\ a\in A,f\in A^{\ast c}.$ For all $a$, $b\in A$, and $f\in A^{\ast c}$, we have $\displaystyle\langle{l_{A}(a)}_{\lambda}f,b\rangle_{\mu}$ $\displaystyle=$ $\displaystyle\langle a_{\lambda}f,b\rangle_{\mu}=\langle a,f_{\mu-\partial}b\rangle_{\lambda}=\langle f_{\mu-\lambda}b,a\rangle_{-\lambda}=\langle f,b_{-\mu}a\rangle_{\mu-\lambda}$ $\displaystyle=$ $\displaystyle f_{\mu-\lambda}({R_{A}(a)}_{\mu-\partial}b)=(R_{A}^{\ast}(a)_{\lambda}f)_{\mu}b=\langle R_{A}^{\ast}(a)_{\lambda}f,b\rangle_{\mu}.$ Since $\langle\cdot,\cdot\rangle_{\mu}$ is non-degenerate, $l_{A}(a)_{\lambda}f=R_{A}^{\ast}(a)_{\lambda}f$ for all $a\in A$ and $f\in A^{\ast c}$ and hence $l_{A}=R_{A}^{\ast}$. Similarly, we have $r_{A}=L_{A}^{\ast},\;\;l_{A^{\ast c}}=R_{A^{\ast c}}^{\ast},\;\;r_{A^{\ast c}}=L_{A^{\ast c}}^{\ast}.$ Thus $(A,A^{\ast c},R_{A}^{\ast},L_{A}^{\ast},R_{A^{\ast c}}^{\ast},L_{A^{\ast c}}^{\ast})$ is a matched pair of associative conformal algebras. ∎ ###### Theorem 3.7. Let $A$ be a finite associative conformal algebra which is free as a $\mathbb{C}[\partial]$-module. Assume that there is an associative conformal algebra structure on the $\mathbb{C}[\partial]$-module $A^{\ast c}$. Then $(A,A^{\ast c},R_{A}^{\ast},L_{A}^{\ast},R_{A^{\ast c}}^{\ast},L_{A^{\ast c}}^{\ast})$ is a matched pair of associative conformal algebras if and only if $\displaystyle R_{A}^{\ast}(a)_{\lambda}(f_{\mu}g)=(R_{A}^{\ast}(a)_{\lambda}f)_{\lambda+\mu}g+R_{A}^{\ast}(L_{A^{\ast c}}^{\ast}(f)_{-\lambda-\partial}a)_{\lambda+\mu}g,$ (19) $\displaystyle R_{A}^{\ast}(R_{A^{\ast c}}^{\ast}(f)_{\lambda}a)_{\lambda+\mu}g+(L_{A}^{\ast}(a)_{-\lambda-\partial}f)_{\lambda+\mu}g=L_{A}^{\ast}(L_{A^{\ast c}}^{\ast}(g)_{-\mu-\partial}(a))_{-\lambda-\partial}f+f_{\lambda}(R_{A}^{\ast}(a)_{\mu}g),$ (20) for all $a\in A$ and $f$, $g\in A^{\ast c}$. ###### Proof. Obviously, (19) is exactly (8) and (20) is exactly (12) when $l_{A}=R_{A}^{\ast},\;\;r_{A}=L_{A}^{\ast},\;\;l_{B}=R_{A^{\ast c}}^{\ast},\;\;r_{B}=L_{A^{\ast c}}^{\ast},$ in Proposition 2.7. Then the conclusion follows if we prove that (19), (9), (10) and (11) are mutually equivalent, (20) and (13) are equivalent. As an example, we give an explicit proof that (20) holds if and only if (13) holds. The other cases can be proved similarly. Let $\\{e_{1},\cdots,e_{n}\\}$ be a $\mathbb{C}[\partial]$-basis of $A$ and $\\{e_{1}^{\ast},\cdots,e_{n}^{\ast}\\}$ be a dual $\mathbb{C}[\partial]$-basis of $A^{\ast c}$ in the sense that $(e_{j}^{\ast})_{\lambda}e_{i}=\delta_{ij}$. Set ${e_{i}}_{\lambda}e_{j}=\sum_{k=1}^{n}P_{k}^{ij}(\lambda,\partial)e_{k},\;\;{e_{i}^{\ast}}_{\lambda}e_{j}^{\ast}=\sum_{k=1}^{n}R_{k}^{ij}(\lambda,\partial)e_{k}^{\ast},$ where $P_{k}^{ij}(\lambda,\partial)$ and $R_{k}^{ij}(\lambda,\partial)\in\mathbb{C}[\lambda,\partial]$. 
Since $(L_{A}^{\ast}(e_{i})_{\lambda}e_{j}^{\ast})_{\mu}e_{k}={e_{j}^{\ast}}_{\mu-\lambda}({e_{i}}_{\lambda}e_{k})={e_{j}^{\ast}}_{\mu-\lambda}(\sum_{j=1}^{n}P_{j}^{ik}(\lambda,\partial)e_{j})=P_{j}^{ik}(\lambda,\mu-\lambda),$ we have $\displaystyle L_{A}^{\ast}(e_{i})_{\lambda}e_{j}^{\ast}=\sum_{k=1}^{n}P_{j}^{ik}(\lambda,-\lambda-\partial)e_{k}^{\ast}.$ Similarly, we have $\displaystyle L_{A^{\ast c}}^{\ast}(e_{i}^{\ast})_{\lambda}e_{j}=\sum_{k=1}^{n}R_{j}^{ik}(\lambda,-\lambda-\partial)e_{k},\;\;R_{A}^{\ast}(e_{i})_{\lambda}e_{j}^{\ast}=\sum_{k=1}^{n}P_{j}^{ki}(\partial,-\lambda-\partial)e_{k}^{\ast},$ $\displaystyle R_{A^{\ast c}}^{\ast}(e_{i}^{\ast})_{\lambda}e_{j}=\sum_{k=1}^{n}R_{j}^{ki}(\partial,-\lambda-\partial)e_{k}.$ Therefore (20) holds for any $a\in A$, $f$, $g\in A^{\ast c}$ if and only if $\displaystyle(R_{A}^{\ast}(R_{A^{\ast c}}^{\ast}(e_{j}^{\ast})_{\lambda}e_{i})_{\lambda+\mu}e_{k}^{\ast}+(L_{A}^{\ast}(e_{i})_{-\lambda-\partial}e_{i}^{\ast})_{\lambda+\mu}e_{k}^{\ast}$ $\displaystyle-L_{A}^{\ast}(L_{A^{\ast c}}^{\ast}(e_{k}^{\ast})_{-\mu-\partial}e_{i})_{-\lambda-\partial}e_{j}^{\ast}+{e_{j}^{\ast}}_{\lambda}(R_{A}^{\ast}(e_{i})_{\mu}e_{k}^{\ast}))_{\nu}e_{s}=0,~{}~{}~{}~{}\forall~{}~{}i,j,k,s,$ if and only if the following equation holds: $\displaystyle\sum_{t=1}^{n}(R_{i}^{tj}(-\lambda-\mu,\mu)P_{k}^{st}(-\nu,\nu-\lambda-\mu)+R_{s}^{tk}(\lambda+\mu,-\nu)P_{j}^{it}(\mu,\lambda)$ $\displaystyle- R_{i}^{kt}(\nu-\lambda-\mu,\mu)P_{j}^{ts}(\nu-\lambda,\lambda)-R_{s}^{jt}(\lambda,-\nu)P_{k}^{ti}(\lambda-\nu,\nu-\lambda-\mu))=0,~{}~{}~{}~{}\forall~{}~{}i,j,k,s.$ (21) On the other hand, (13) holds for any $a$, $b\in A$, $x\in A^{\ast c}$ if and only if $\displaystyle{e_{j}^{\ast}}_{\nu}({e_{i}}_{\lambda}(R_{A^{\ast c}}^{\ast}(e_{k}^{\ast})_{\mu}e_{s})+L_{A^{\ast c}}^{\ast}(L_{A}^{\ast}(e_{s})_{-\mu-\partial}e_{k}^{\ast})_{-\lambda-\partial}e_{i}$ $\displaystyle-(L_{A^{\ast c}}^{\ast}(e_{k}^{\ast})_{-\lambda-\partial}e_{i})_{\lambda+\mu}e_{s}-R_{A^{\ast c}}^{\ast}(R_{A}^{\ast}(e_{i})_{\lambda}e_{k}^{\ast})_{\lambda+\mu}e_{s})=0,~{}~{}~{}~{}\forall~{}~{}i,j,k,s,$ if and only if the following equation holds: $\displaystyle\sum_{t=1}^{n}(R_{i}^{tj}(-\lambda-\nu,\lambda)P_{k}^{st}(-\lambda-\mu-\nu,\mu)+R_{s}^{tk}(\lambda+\nu,-\lambda-\mu-\nu)P_{j}^{it}(\lambda,\nu)$ $\displaystyle- R_{i}^{kt}(\mu,\lambda)P_{j}^{ts}(\lambda+\mu,\nu)-R_{s}^{jt}(\nu,-\lambda-\mu-\nu)P_{k}^{ti}(-\lambda-\mu,\mu))=0,~{}~{}~{}~{}\forall~{}~{}i,j,k,s.$ (22) Note that (22) is exactly (21) when we replace $\lambda$ by $\mu$, $\nu$ by $\lambda$, and $\mu$ by $\nu-\lambda-\mu$ in (22). Therefore (20) holds if and only if (13) holds. ∎ ## 4\. Antisymmetric infinitesimal conformal bialgebras We introduce the notion of antisymmetric infinitesimal conformal bialgebras as (under certain conditions) the equivalent structures of the matched pairs of associative conformal algebras as well as double constructions of Frobenius conformal algebras given in the previous section. ###### Definition 4.1. An associative conformal coalgebra is a $\mathbb{C}[\partial]$-module $A$ endowed with a $\mathbb{C}[\partial]$-module homomorphism $\Delta:A\longrightarrow A\otimes A$ such that $(I\otimes\Delta)\Delta(a)=(\Delta\otimes I)\Delta(a),\;\;\forall\ a\in A,$ (23) where the module action of $\mathbb{C}[\partial]$ on $A\otimes A$ is defined as $\partial(a\otimes b)=(\partial a)\otimes b+a\otimes(\partial b)$ for any $a$, $b\in A$. ###### Proposition 4.2. Let $(A,\Delta)$ be a finite associative conformal coalgebra. 
Then $A^{\ast c}=Chom(A,\mathbb{C})$ is endowed with an associative conformal algebra structure with the following $\lambda$-product: $\displaystyle(f_{\lambda}g)_{\mu}(a)=\sum f_{\lambda}(a_{(1)})g_{\mu-\lambda}(a_{(2)})=(f\otimes g)_{\lambda,\mu-\lambda}(\Delta(a)),\;\;\forall\ f,g\in A^{\ast c},$ (24) where $\Delta(a)=\sum a_{(1)}\otimes a_{(2)}$ for all $a\in A$. ###### Proof. By (24), the conformal sesquilinearity of the $\lambda$-product is naturally satisfied. For all $a\in A$, $f$, $g$ and $h\in A^{\ast c}$, we have $\displaystyle((f_{\lambda}g)_{\lambda+\mu}h-f_{\lambda}(g_{\mu}h))_{\nu}(a)$ $\displaystyle=\sum(f_{\lambda}g)_{\lambda+\mu}(a_{(1)})h_{\nu-\lambda-\mu}(a_{(2)})-\sum f_{\lambda}(a_{(1)})(g_{\mu}h)_{\nu-\lambda}(a_{(2)})$ $\displaystyle=\sum f_{\lambda}(a_{(11)})g_{\mu}(a_{(12)})h_{\nu-\lambda-\mu}(a_{(2)})-\sum f_{\lambda}(a_{(1)})g_{\mu}(a_{(21)})h_{\nu-\lambda-\mu}(a_{(22)})$ $\displaystyle=(f\otimes g\otimes h)_{\lambda,\mu,\nu-\lambda-\mu}(((\Delta\otimes I)\Delta-(I\otimes\Delta)\Delta)(a))=0.$ Hence the conclusion holds. ∎ ###### Proposition 4.3. Let $A$ be a finite associative conformal algebra which is free as a $\mathbb{C}[\partial]$-module, that is, $A=\sum_{i=1}^{n}\mathbb{C}[\partial]e_{i}$, where $\\{e_{1},\cdots,e_{n}\\}$ is a $\mathbb{C}[\partial]$-basis of $A$. Then $A^{\ast c}=Chom(A,\mathbb{C})=\sum_{i=1}^{n}\mathbb{C}[\partial]e_{i}^{\ast}$ is an associative conformal coalgebra with the following coproduct: $\displaystyle\Delta(f)=\sum_{i,j}f_{\mu}({e_{i}}_{\lambda}e_{j})(e_{i}^{\ast}\otimes e_{j}^{\ast})|_{\lambda=\partial\otimes 1,\mu=-\partial\otimes 1-1\otimes\partial},$ (25) where $\\{e_{1}^{\ast},\cdots,e_{n}^{\ast}\\}$ is a dual $\mathbb{C}[\partial]$-basis of $A^{\ast c}$. More precisely, if ${e_{i}}_{\lambda}e_{j}=\sum_{k}P_{k}^{ij}(\lambda,\partial)e^{k}$, then $\Delta(e_{k}^{\ast})=\sum_{i,j}Q_{k}^{ij}(\partial\otimes 1,1\otimes\partial)(e_{i}^{\ast}\otimes e_{j}^{\ast}),$ where $Q_{k}^{ij}(x,y)=P_{k}^{ij}(x,-x-y)$. ###### Proof. By (25), $\Delta$ is a $\mathbb{C}[\partial]$-module homomorphism. 
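Indeed, since $(\partial f)_{\mu}=-\mu f_{\mu}$, replacing $f$ by $\partial f$ in (25) multiplies the coefficients by $-\mu$, and under the substitution $\mu=-\partial\otimes 1-1\otimes\partial$ this factor becomes $\partial\otimes 1+1\otimes\partial$, which is exactly the action of $\partial$ on $A^{\ast c}\otimes A^{\ast c}$: $\displaystyle\Delta(\partial f)=\sum_{i,j}(-\mu)f_{\mu}({e_{i}}_{\lambda}e_{j})(e_{i}^{\ast}\otimes e_{j}^{\ast})|_{\lambda=\partial\otimes 1,\mu=-\partial\otimes 1-1\otimes\partial}=(\partial\otimes 1+1\otimes\partial)\Delta(f)=\partial\Delta(f).$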
By the definition of $\Delta$, we have $\displaystyle(I\otimes\Delta)\Delta(e_{k}^{\ast})-(\Delta\otimes I)\Delta(e_{k}^{\ast})$ $\displaystyle=\sum_{i,j,l,r}Q_{k}^{ij}(\partial\otimes 1\otimes 1,1\otimes\partial\otimes 1+1\otimes 1\otimes\partial)$ $\displaystyle\times Q_{j}^{lr}(1\otimes\partial\otimes 1,1\otimes 1\otimes\partial)(e_{i}^{\ast}\otimes e_{l}^{\ast}\otimes e_{r}^{\ast})-\sum_{i,j,l,r}Q_{k}^{ij}(\partial\otimes 1\otimes 1+1\otimes\partial\otimes 1,1\otimes 1\otimes\partial)$ $\displaystyle\times Q_{i}^{lr}(\partial\otimes 1\otimes 1,1\otimes\partial\otimes 1)(e_{l}^{\ast}\otimes e_{r}^{\ast}\otimes e_{j}^{\ast}).$ On the other hand, since ${e_{i}}_{\lambda}({e_{l}}_{\mu}e_{r})=({e_{i}}_{\lambda}e_{l})_{\lambda+\mu}e_{r}$, we have $\displaystyle\sum_{j}P_{j}^{lr}(\mu,\lambda+\partial)P_{k}^{ij}(\lambda,\partial)=\sum_{j}P_{j}^{il}(\lambda,-\lambda-\mu)P_{k}^{jr}(\lambda+\mu,\partial).$ Since $Q_{k}^{ij}(x,y)=P_{k}^{ij}(x,-x-y)$, we have $\displaystyle\sum_{j}Q_{j}^{lr}(\mu,-\lambda-\mu-\partial)Q_{k}^{ij}(\lambda,-\lambda-\partial)=\sum_{j}Q_{j}^{il}(\lambda,\mu)Q_{k}^{jr}(\lambda+\mu,-\lambda-\mu-\partial).$ (26) Set $\lambda=\partial\otimes 1\otimes 1,\;\;\mu=1\otimes\partial\otimes 1,\;\;\partial=-\partial\otimes 1\otimes 1-1\otimes\partial\otimes 1-1\otimes 1\otimes\partial.$ Then by (26), we have $\displaystyle\sum_{j}Q_{k}^{ij}(\partial\otimes 1\otimes 1,1\otimes\partial\otimes 1+1\otimes 1\otimes\partial)\cdot Q_{j}^{lr}(1\otimes\partial\otimes 1,1\otimes 1\otimes\partial)$ $\displaystyle=\sum_{j}Q_{k}^{jr}(\partial\otimes 1\otimes 1+1\otimes\partial\otimes 1,1\otimes 1\otimes\partial)\cdot Q_{j}^{il}(\partial\otimes 1\otimes 1,1\otimes\partial\otimes 1).$ Therefore for all $k\in\\{1,\cdots,n\\}$, we have $\displaystyle(I\otimes\Delta)\Delta(e_{k}^{\ast})=(\Delta\otimes I)\Delta(e_{k}^{\ast}).$ Hence the conclusion holds. ∎ ###### Corollary 4.4. Let $(A=\mathbb{C}[\partial]a,\Delta)$ be an associative conformal coalgebra which is free of rank 1 as a $\mathbb{C}[\partial]$-module. Then $\Delta(a)=ka\otimes a$ for some $k\in\mathbb{C}$. ###### Proof. If $A=\mathbb{C}[\partial]a$ is an associative conformal algebra, then $a_{\lambda}a=ka$ for some $k\in\mathbb{C}$. Therefore this conclusion follows from Proposition 4.3. ∎ In the sequel, denote $\partial\otimes 1+1\otimes\partial$ by $\partial^{\otimes^{2}}$ and $\partial\otimes 1\otimes 1+1\otimes\partial\otimes 1+1\otimes 1\otimes\partial$ by $\partial^{\otimes^{3}}$. Moreover, for any vector space $V$, let $\tau:V\otimes V\rightarrow V\otimes V$ be the flip map, that is, $\tau(x\otimes y)=y\otimes x,\;\;\forall\ x,y\in V.$ ###### Theorem 4.5. Let $A$ be a finite associative conformal algebra which is free as a $\mathbb{C}[\partial]$-module. Suppose there is another associative conformal algebra structure on the $\mathbb{C}[\partial]$-module $A^{\ast c}$ obtained from a $\mathbb{C}[\partial]$-module homomorphism $\Delta:A\rightarrow A\otimes A$. Then $(A,A^{\ast c},R_{A}^{\ast},L_{A}^{\ast},R_{A^{\ast c}}^{\ast},L_{A^{\ast c}}^{\ast})$ is a matched pair of associative conformal algebras if and only if $\Delta$ satisfies $\displaystyle\Delta(a_{\lambda}b)=(I\otimes{L_{A}(a)}_{\lambda})\Delta(b)+({R_{A}(b)}_{-\lambda-\partial^{\otimes^{2}}}\otimes I)\Delta(a),$ (27) $\displaystyle({L_{A}(b)}_{-\lambda-\partial^{\otimes^{2}}}\otimes I-I\otimes R_{A}(b)_{-\lambda-\partial^{\otimes^{2}}})\Delta(a)+\tau({L_{A}(a)}_{\lambda}\otimes I-I\otimes{R_{A}(a)}_{\lambda})\Delta(b)=0,$ (28) for all $a$, $b\in A$. ###### Proof. 
With the same assumptions as in the proof of Theorem 3.7 and by Proposition 4.3, we have $\displaystyle\Delta(e_{k})=\sum_{i,j}Q_{k}^{ij}(\partial\otimes 1,1\otimes\partial)e_{i}\otimes e_{j},~{}~{}{\rm where}~{}~{}Q_{k}^{ij}(x,y)=R_{k}^{ij}(x,-x-y).$ Considering the coefficient of $e_{j}\otimes e_{k}$ in $\displaystyle\Delta({e_{s}}_{\lambda}e_{i})=(I\otimes{L_{A}(e_{s})}_{\lambda})\Delta(e_{i})+({R_{A}(e_{i})}_{-\lambda-\partial\otimes 1-1\otimes\partial}\otimes I)\Delta(e_{s}),$ we have $\displaystyle\sum_{t=1}^{n}P_{t}^{si}(\lambda,\partial\otimes 1+1\otimes\partial)R_{t}^{jk}(\partial\otimes 1,-1\otimes\partial-\partial\otimes 1)$ $\displaystyle=\sum_{t=1}^{n}R_{i}^{jt}(\partial\otimes 1,-\lambda-\partial\otimes 1-1\otimes\partial)P_{k}^{st}(\lambda,1\otimes\partial)$ $\displaystyle+\sum_{t=1}^{n}R_{s}^{tk}(-\lambda-1\otimes\partial,\lambda)P_{j}^{ti}(\lambda+1\otimes\partial,\partial\otimes 1).$ (29) On the other hand, (19) holds for any $a\in A$, $f$, $g\in A^{\ast c}$ if and only if $\displaystyle(R_{A}^{\ast}(e_{i})_{\lambda}({e_{j}^{\ast}}_{\mu}e_{k}^{\ast}))_{\nu}e_{s}=((R_{A}^{\ast}(e_{i})_{\lambda}e_{j}^{\ast})_{\lambda+\mu}e_{k}^{\ast}+R_{A}^{\ast}(L_{A^{\ast c}}^{\ast}(e_{j}^{\ast})_{-\lambda-\partial}e_{i})_{\lambda+\mu}e_{k}^{\ast})_{\nu}e_{s},~{}~{}~{}~{}\forall~{}~{}i,j,k,s,$ if and only if the following equation holds: $\displaystyle\sum_{t=1}^{n}P_{t}^{si}(-\nu,\nu-\lambda)R_{t}^{jk}(\mu,\lambda-\nu)$ $\displaystyle=$ $\displaystyle\sum_{t=1}^{n}R_{i}^{jt}(\mu,\lambda)P_{k}^{st}(-\nu,\nu-\lambda-\mu)$ (30) $\displaystyle+\sum_{t=1}^{n}R_{s}^{tk}(\lambda+\mu,-\nu)P_{j}^{ti}(-\lambda-\mu,\mu),~{}~{}~{}~{}~{}\forall~{}~{}i,j,k,s.$ Obviously, (29) becomes exactly (30) after replacing $\lambda$ by $-\nu$, $1\otimes\partial$ by $\nu-\lambda-\mu$, and $\partial\otimes 1$ by $\mu$ in (29). Then (27) holds if and only if (19) holds. By a similar discussion, (28) holds if and only if (20) holds. Then this conclusion follows from Theorem 3.7. ∎ ###### Definition 4.6. Let $(A,\cdot_{\lambda}\cdot)$ be an associative conformal algebra and $(A,\Delta)$ be an associative conformal coalgebra. If $\Delta$ satisfies (27), then $(A,\cdot_{\lambda}\cdot,\Delta)$ is called an infinitesimal conformal bialgebra. Moreover, if $\Delta$ also satisfies (28), then $(A,\cdot_{\lambda}\cdot,\Delta)$ is called an antisymmetric infinitesimal (ASI) conformal bialgebra. ###### Remark 4.7. By the definition of an infinitesimal bialgebra given in [1], the corresponding conformal version of an infinitesimal bialgebra should be defined as follows. $(A,\circ_{\lambda})$ is an associative conformal algebra and $(A,\Delta)$ is an associative conformal coalgebra satisfying $\displaystyle\Delta(a\circ_{\lambda}b)=({L_{A}(a)}_{\lambda}\otimes I)\Delta(b)+(I\otimes{R_{A}(b)}_{-\lambda-\partial^{\otimes^{2}}})\Delta(a),\;\;\forall\ a,b\in A.$ (31) We would like to point out that the definition of infinitesimal conformal bialgebra given in Definition 4.6 is equivalent to this definition if we replace $(A,\circ_{\lambda})$ by the opposite algebra of $(A,\cdot_{\lambda}\cdot)$ in the sense that $a\circ_{\lambda}b=b_{-\lambda-\partial}a$ for all $a$, $b\in A$. Combining Theorems 3.6 and 4.5, we have the following conclusion. ###### Corollary 4.8. Let $A$ be a finite associative conformal algebra which is free as a $\mathbb{C}[\partial]$-module.
Suppose there is another associative conformal algebra structure on the $\mathbb{C}[\partial]$-module $A^{\ast c}$ obtained from a $\mathbb{C}[\partial]$-module homomorphism $\Delta:A\rightarrow A\otimes A$. Then the following conditions are equivalent: 1. (1) $(A,\cdot_{\lambda}\cdot,\Delta)$ is an ASI conformal bialgebra; 2. (2) There is a double construction of Frobenius conformal algebra associated to $A$ and $A^{\ast c}$; 3. (3) $(A,A^{\ast c},R_{A}^{\ast},L_{A}^{\ast},R_{A^{\ast c}}^{\ast},L_{A^{\ast c}}^{\ast})$ is a matched pair of associative conformal algebras. ###### Proposition 4.9. Let $(A,\cdot_{\lambda}\cdot,\Delta_{A})$ be a finite ASI conformal bialgebra, where $A$ is free as a $\mathbb{C}[\partial]$-module. Then $(A^{\ast c},\cdot_{\lambda}\cdot,\Delta_{A^{\ast c}})$ is also an ASI conformal bialgebra, where $\cdot_{\lambda}\cdot$ and $\Delta_{A^{\ast c}}$ are on $A^{\ast c}$ given by (24) and (25) respectively. ###### Proof. It follows directly from Corollary 4.8 and the symmetry of the associative conformal algebra $A$ and $A^{\ast c}$ in the double construction $A\bowtie A^{\ast c}$ of Frobenius conformal algebra associated to $A$ and $A^{\ast c}$. ∎ At the end of this section, we present two examples of ASI conformal bialgebras. ###### Example 4.10. Let $(A,\cdot,\overline{\Delta})$ be an antisymmetric infinitesimal bialgebra given in [2]. Then $\text{Cur}(A)$ has a natural ASI conformal bialgebra structure defined by $\displaystyle\Delta(p(\partial)a)=p(\partial\otimes 1+1\otimes\partial)\overline{\Delta}(a),\;\;\forall\ \text{$p(\partial)\in\mathbb{C}[\partial]$, $a\in A$.}$ ###### Example 4.11. Let $p(\lambda)\in\mathbb{C}[\lambda]$ and $A=\mathbb{C}[\partial]a\oplus\mathbb{C}[\partial]b$ be a rank two associative conformal algebra with the following $\lambda$-product: $\displaystyle a_{\lambda}a=p(\lambda+\partial)b,~{}~{}~{}a_{\lambda}b=b_{\lambda}a=b_{\lambda}b=0.$ Define a $\mathbb{C}[\partial]$-module homomorphism $\Delta:A\rightarrow A\otimes A$ by $\displaystyle\Delta(a)=a\otimes b,~{}~{}~{}\Delta(b)=b\otimes b.$ By a straightforward computation, $(A,\cdot_{\lambda}\cdot,\Delta)$ is an ASI conformal bialgebra if and only if $p(\lambda)$ is an odd polynomial, i.e., $p(\lambda)=-p(-\lambda)$. ## 5\. Coboundary antisymmetric infinitesimal conformal bialgebras We consider a special class of ASI conformal bialgebras which are called coboundary ASI conformal bialgebras. ###### Definition 5.1. For an ASI conformal bialgebra $(A,\cdot_{\lambda}\cdot,\Delta)$, if there exists $r\in A\otimes A$ such that $\displaystyle\Delta(a)=(I\otimes L_{A}(a)_{\lambda}-R_{A}(a)_{\lambda}\otimes I)r|_{\lambda=-\partial^{\otimes^{2}}},\;\;\forall\ a\in A,$ (32) then $(A,\cdot_{\lambda}\cdot,\Delta)$ is called coboundary. For $r=\sum_{i}r_{i}\otimes l_{i}\in A\otimes A$, define $\displaystyle r\bullet r=\sum_{i,j}r_{i}\otimes r_{j}\otimes{l_{i}}_{\mu}l_{j}|_{\mu=\partial\otimes 1\otimes 1}-r_{i}\otimes{r_{j}}_{\mu}l_{i}\otimes l_{j}|_{\mu=-\partial^{\otimes^{2}}\otimes 1}+{r_{i}}_{\mu}r_{j}\otimes l_{i}\otimes l_{j}|_{\mu=1\otimes\partial\otimes 1}.$ (33) ###### Proposition 5.2. Let $A$ be an associative conformal algebra and $r=\sum_{i}r_{i}\otimes l_{i}\in A\otimes A$. Then the map defined by (32) gives an associative conformal coalgebra $(A,\Delta)$ if and only if $\displaystyle(I\otimes I\otimes L_{A}(a)_{-\partial^{\otimes^{3}}}-R_{A}(a)_{-\partial^{\otimes^{3}}}\otimes I\otimes I)(r\bullet r)=0,\;\;\forall\ a\in A.$ (34) ###### Proof. 
By the definition of $\Delta$, we have $\displaystyle(I\otimes\Delta)\Delta(a)$ $\displaystyle=$ $\displaystyle(I\otimes\Delta)(\sum_{i}r_{i}\otimes a_{\lambda}l_{i}-\sum_{i}{r_{i}}_{-\lambda-\partial\otimes 1}a\otimes l_{i})|_{\lambda={-\partial^{\otimes^{2}}}}$ $\displaystyle=$ $\displaystyle(I\otimes\Delta)(\sum_{i}r_{i}\otimes a_{-\partial^{\otimes^{2}}}l_{i}-\sum_{i}{r_{i}}_{1\otimes\partial}a\otimes l_{i})$ $\displaystyle=$ $\displaystyle\sum_{i}r_{i}\otimes\Delta(a_{-\partial^{\otimes^{2}}}l_{i})-\sum_{i}{r_{i}}_{1\otimes\partial\otimes 1+1\otimes 1\otimes\partial}a\otimes\Delta(l_{i})$ $\displaystyle=$ $\displaystyle\sum_{i,j}(r_{i}\otimes r_{j}\otimes(a_{-\partial^{\otimes^{2}}\otimes 1}l_{i})_{\mu}l_{j}-r_{i}\otimes{r_{j}}_{-\mu-1\otimes\partial\otimes 1}(a_{-\partial^{\otimes^{2}}\otimes 1}l_{i})\otimes l_{j}$ $\displaystyle-({r_{i}}_{1\otimes\partial\otimes 1+1\otimes 1\otimes\partial}a\otimes r_{j}\otimes{l_{i}}_{\mu}l_{j}-{r_{i}}_{1\otimes\partial\otimes 1+1\otimes 1\otimes\partial}a\otimes{r_{j}}_{-\mu-1\otimes\partial\otimes 1}{l_{i}}\otimes l_{j}))_{\mu={-1\otimes\partial^{\otimes^{2}}}}$ $\displaystyle=$ $\displaystyle\sum_{i,j}r_{i}\otimes r_{j}\otimes a_{-\partial^{\otimes^{3}}}({l_{i}}_{\partial\otimes 1\otimes 1}l_{j})-\sum_{i,j}r_{i}\otimes{r_{j}}_{1\otimes 1\otimes\partial}(a_{-\partial^{\otimes^{2}}\otimes 1}l_{i})\otimes l_{j}$ $\displaystyle-\sum_{i,j}{r_{i}}_{1\otimes\partial^{\otimes^{2}}}a\otimes r_{j}\otimes{l_{i}}_{-1\otimes\partial^{\otimes^{2}}}l_{j}+\sum_{i,j}{r_{i}}_{1\otimes\partial\otimes 1+1\otimes 1\otimes\partial}a\otimes{r_{j}}_{1\otimes 1\otimes\partial}l_{i}\otimes l_{j}$ $\displaystyle=$ $\displaystyle(1\otimes 1\otimes L_{A}(a)_{-\partial^{\otimes^{3}}})(\sum_{i,j}r_{i}\otimes r_{j}\otimes{l_{i}}_{\partial\otimes 1\otimes 1}l_{j})-\sum_{i,j}r_{i}\otimes{r_{j}}_{1\otimes 1\otimes\partial}(a_{-\partial^{\otimes^{2}}\otimes 1}l_{i})\otimes l_{j}$ $\displaystyle-(R_{A}(a)_{-\partial^{\otimes^{3}}}\otimes 1\otimes 1)\sum_{i,j}(r_{i}\otimes r_{j}\otimes{l_{i}}_{\partial\otimes 1\otimes 1}l_{j}-r_{i}\otimes{r_{j}}_{1\otimes 1\otimes\partial}l_{i}\otimes l_{j}).$ Similarly, we have $\displaystyle(\Delta\otimes I)\Delta(a)$ $\displaystyle=$ $\displaystyle(\Delta\otimes I)\sum_{i}(r_{i}\otimes a_{-\partial^{\otimes^{2}}}l_{i}-{r_{i}}_{1\otimes\partial}a\otimes l_{i})$ $\displaystyle=$ $\displaystyle\sum_{i}(\Delta(r_{i})\otimes a_{-\partial^{\otimes^{3}}}l_{i}-\Delta({r_{i}}_{1\otimes\partial}a)\otimes l_{i})$ $\displaystyle=$ $\displaystyle\sum_{i,j}(r_{j}\otimes{r_{i}}_{-\partial^{\otimes^{2}}\otimes 1}l_{j}\otimes a_{-\partial^{\otimes^{3}}}l_{i}-{r_{j}}_{1\otimes\partial\otimes 1}r_{i}\otimes l_{j}\otimes a_{-\partial^{\otimes^{3}}}l_{i})$ $\displaystyle-\sum_{i,j}(r_{j}\otimes({r_{i}}_{1\otimes 1\otimes\partial}a)_{-\partial^{\otimes^{2}}\otimes 1}l_{j}\otimes l_{i}-{r_{j}}_{1\otimes\partial\otimes 1}({r_{i}}_{1\otimes 1\otimes\partial}a)\otimes l_{j}\otimes l_{i})$ $\displaystyle=$ $\displaystyle(I\otimes I\otimes L_{A}(a)_{-\partial^{\otimes^{3}}})\sum_{i,j}(r_{i}\otimes{r_{j}}_{-\partial^{\otimes^{2}}\otimes 1}l_{i}\otimes l_{j}-{r_{i}}_{1\otimes\partial\otimes 1}r_{j}\otimes l_{i}\otimes l_{j})$ $\displaystyle-\sum_{i,j}r_{i}\otimes{r_{j}}_{1\otimes 1\otimes\partial}(a_{-\partial^{\otimes^{2}}\otimes 1}l_{i})\otimes l_{j}$ $\displaystyle+(R_{A}(a)_{-\partial^{\otimes^{3}}}\otimes 1\otimes 1)\sum_{i,j}{r_{i}}_{1\otimes\partial\otimes 1}r_{j}\otimes l_{i}\otimes l_{j}.$ Therefore $\Delta$ satisfies (23) if and only if (34) holds. 
∎ ###### Theorem 5.3. Let $(A,\cdot_{\lambda}\cdot)$ be an associative conformal algebra and $r=\sum_{i}r_{i}\otimes l_{i}\in A\otimes A$. Then the map defined by (32) gives an associative conformal coalgebra $(A,\Delta)$ such that $(A,\cdot_{\lambda}\cdot,\Delta)$ is an ASI conformal bialgebra if and only if (34) and the following equation hold: $\displaystyle(L_{A}(b)_{-\lambda-\partial^{\otimes^{2}}}\otimes I-I\otimes R_{A}(b)_{-\lambda-\partial^{\otimes^{2}}})(I\otimes L_{A}(a)_{-\partial^{\otimes^{2}}}-R_{A}(a)_{-\partial^{\otimes^{2}}}\otimes I)(r+\tau r)=0,$ (35) for all $a$, $b\in A$. ###### Proof. Let $a,b\in A$. We first check that (27) holds automatically. In fact, we have $\displaystyle(I\otimes{L_{A}(a)}_{\lambda})\Delta(b)+({R_{A}(b)}_{-\lambda-\partial^{\otimes^{2}}}\otimes I)\Delta(a)$ $\displaystyle=$ $\displaystyle(I\otimes{L_{A}(a)}_{\lambda})\sum_{i}(r_{i}\otimes b_{-\partial^{\otimes^{2}}}l_{i}-{r_{i}}_{1\otimes\partial}b\otimes l_{i})$ $\displaystyle+(R_{A}(b)_{-\lambda-\partial^{\otimes^{2}}}\otimes I)\sum_{i}(r_{i}\otimes a_{-\partial^{\otimes^{2}}}l_{i}-{r_{i}}_{1\otimes\partial}a\otimes l_{i})$ $\displaystyle=$ $\displaystyle\sum_{i}(r_{i}\otimes a_{\lambda}(b_{-\partial^{\otimes^{2}}}l_{i})-{r_{i}}_{\lambda+1\otimes\partial}b\otimes a_{\lambda}l_{i}$ $\displaystyle+{r_{i}}_{\lambda+1\otimes\partial}b\otimes a_{\lambda}l_{i}-({r_{i}}_{1\otimes\partial}a)_{\lambda+1\otimes\partial}b\otimes l_{i})$ $\displaystyle=$ $\displaystyle\sum_{i}(r_{i}\otimes a_{\lambda}(b_{-\partial^{\otimes^{2}}}l_{i})-({r_{i}}_{1\otimes\partial}a)_{\lambda+1\otimes\partial}b\otimes l_{i})$ $\displaystyle=$ $\displaystyle\sum_{i}(r_{i}\otimes(a_{\lambda}b)_{-\partial^{\otimes^{2}}}l_{i}-{r_{i}}_{1\otimes\partial}(a_{\lambda}b)\otimes l_{i})$ $\displaystyle=$ $\displaystyle\Delta(a_{\lambda}b).$ Therefore (27) holds. 
Obviously, we have $\displaystyle({L_{A}(b)}_{-\lambda-\partial^{\otimes^{2}}}\otimes I-I\otimes R_{A}(b)_{-\lambda-\partial^{\otimes^{2}}})\Delta(a)$ $\displaystyle=({L_{A}(b)}_{-\lambda-\partial^{\otimes^{2}}}\otimes I-I\otimes R_{A}(b)_{-\lambda-\partial^{\otimes^{2}}})(I\otimes L_{A}(a)_{-\partial^{\otimes^{2}}}-R_{A}(a)_{-\partial^{\otimes^{2}}}\otimes I)r.$ Moreover, we have $\displaystyle\tau({L_{A}(a)}_{\lambda}\otimes I-I\otimes{R_{A}(a)}_{\lambda})\Delta(b)$ $\displaystyle=$ $\displaystyle\tau({L_{A}(a)}_{\lambda}\otimes I-I\otimes{R_{A}(a)}_{\lambda})\sum_{i}(r_{i}\otimes b_{-\partial^{\otimes^{2}}}l_{i}-{r_{i}}_{1\otimes\partial}b\otimes l_{i})$ $\displaystyle=$ $\displaystyle\tau\sum_{i}(a_{\lambda}r_{i}\otimes b_{-\lambda-\partial^{\otimes^{2}}}l_{i}-a_{\lambda}({r_{i}}_{1\otimes\partial}b)\otimes l_{i}-r_{i}\otimes(b_{-\partial^{\otimes^{2}}}l_{i})_{-\lambda-1\otimes\partial}a+{r_{i}}_{\lambda+1\otimes\partial}b\otimes{l_{i}}_{-\lambda-1\otimes\partial}a$ $\displaystyle=$ $\displaystyle\sum_{i}(b_{-\lambda-\partial^{\otimes^{2}}}l_{i}\otimes a_{\lambda}r_{i}-l_{i}\otimes a_{\lambda}({r_{i}}_{\partial\otimes 1}b)-(b_{-\partial^{\otimes^{2}}}l_{i})_{-\lambda-\partial\otimes 1}a\otimes r_{i}+{l_{i}}_{-\lambda-\partial\otimes 1}a\otimes{r_{i}}_{\lambda+\partial\otimes 1}b$ $\displaystyle=$ $\displaystyle\sum_{i}(b_{-\lambda-\partial^{\otimes^{2}}}l_{i}\otimes a_{\lambda}r_{i}-l_{i}\otimes(a_{\lambda}r_{i})_{\lambda+\partial\otimes 1}b)-b_{-\lambda-\partial^{\otimes^{2}}}({l_{i}}_{1\otimes\partial}a)+{l_{i}}_{-\lambda-\partial\otimes 1}a\otimes{r_{i}}_{\lambda+\partial\otimes 1}b$ $\displaystyle=$ $\displaystyle({L_{A}(b)}_{-\lambda-\partial^{\otimes^{2}}}\otimes I-I\otimes R_{A}(b)_{-\lambda-\partial^{\otimes^{2}}})(I\otimes L_{A}(a)_{-\partial^{\otimes^{2}}}-R_{A}(a)_{-\partial^{\otimes^{2}}}\otimes I)(\tau r).$ Therefore (28) holds if and only if (35) holds. Hence by Proposition 5.2, the conclusion holds. ∎ ###### Corollary 5.4. Let $A$ be an associative conformal algebra and $r\in A\otimes A$. Suppose that $r$ is antisymmetric. Then the map defined by (32) gives an associative conformal coalgebra $(A,\Delta)$ such that $(A,\cdot_{\lambda}\cdot,\Delta)$ is an ASI conformal bialgebra if $\displaystyle r\bullet r\equiv 0~{}~{}\text{mod}~{}~{}(\partial^{\otimes^{3}}).$ (36) ###### Proof. If $r\bullet r\equiv 0~{}~{}\text{mod}~{}~{}(\partial^{\otimes^{3}})$, (34) naturally holds by conformal sesquilinearity. Then this conclusion directly follows from Theorem 5.3. ∎ ###### Definition 5.5. Let $A$ be an associative conformal algebra and $r\in A\otimes A$. (36) is called associative conformal Yang-Baxter equation in $A$. ###### Remark 5.6. In fact, the associative conformal Yang-Baxter equation (36) is regarded as a conformal analogue of the associative Yang-Baxter equation in an associative algebra ([2]). ###### Definition 5.7. Let $(A,\cdot_{\lambda}\cdot,\Delta_{A})$ and $(B,\cdot_{\lambda}\cdot,\Delta_{B})$ be two ASI conformal bialgebras. If $\varphi:A\rightarrow B$ is a homomorphism of associative conformal algebras satisfying $\displaystyle(\varphi\otimes\varphi)\Delta_{A}(a)=\Delta_{B}(\varphi(a)),~{}~{}\forall\ a\in A,$ (37) then $\varphi$ is called a homomorphism of ASI conformal bialgebras. ###### Theorem 5.8. Let $(A,\cdot_{\lambda}\cdot,\Delta_{A})$ be a finite ASI conformal bialgebra where $A$ is free as a $\mathbb{C}[\partial]$-module. 
Then there is a canonical ASI conformal bialgebra structure on the $\mathbb{C}[\partial]$-module $A\oplus A^{\ast c}$ such that the inclusions $i_{1}:A\rightarrow A\oplus A^{\ast c}$ and $i_{2}:A^{\ast c}\rightarrow A\oplus A^{\ast c}$ are homomorphisms of ASI conformal bialgebras. Here the ASI conformal bialgebra structure on $A^{\ast c}$ is $(A^{\ast c},\cdot_{\lambda}\cdot,-\Delta_{A^{\ast c}})$, where $\cdot_{\lambda}\cdot$ and $\Delta_{A^{\ast c}}$ on $A^{\ast c}$ are defined by (24) and (25) respectively. ###### Proof. Let $\\{e_{1},\cdots,e_{n}\\}$ be a $\mathbb{C}[\partial]$-basis of $A$ and $\\{e_{1}^{\ast},\cdots,e_{n}^{\ast}\\}$ be the dual basis in $A^{\ast c}$. Since $(A,\cdot_{\lambda}\cdot,\Delta_{A})$ is an ASI conformal bialgebra, $(A,A^{\ast c},R_{A}^{\ast},L_{A}^{\ast},R_{A^{\ast c}}^{\ast},L_{A^{\ast c}}^{\ast})$ is a matched pair of associative conformal algebras by Corollary 4.8, where the associative conformal algebra structure on $A^{\ast c}$ is obtained from $\Delta_{A}$. Then by Proposition 2.7, there is an associative conformal algebra structure on the $\mathbb{C}[\partial]$-module $A\oplus A^{\ast c}$ associated with such a matched pair. Set $\displaystyle{e_{i}}_{\lambda}e_{j}=\sum_{k}P_{k}^{ij}(\lambda,\partial)e_{k},~{}~{}~{}~{}\Delta_{A}(e_{j})=\sum_{i,k}R_{j}^{ik}(\partial\otimes 1,-\partial\otimes 1-1\otimes\partial)e_{i}\otimes e_{k},$ (38) where $P_{k}^{ij}(\lambda,\partial)$, $R_{j}^{ik}(\lambda,\partial)\in\mathbb{C}[\lambda,\partial]$. Then by Proposition 2.7 again, we have $\displaystyle{e_{i}^{\ast}}_{\lambda}e_{j}^{\ast}$ $\displaystyle=$ $\displaystyle\sum_{k}R_{k}^{ij}(\lambda,\partial)e_{k}^{\ast},$ $\displaystyle{e_{i}}_{\lambda}e_{j}^{\ast}$ $\displaystyle=$ $\displaystyle R_{A}^{\ast}(e_{i})_{\lambda}e_{j}^{\ast}+L_{A^{\ast c}}^{\ast}(e_{j}^{\ast})_{-\lambda-\partial}e_{i}=\sum_{k}P_{j}^{ki}(\partial,-\lambda-\partial)e_{k}^{\ast}+\sum_{k}R_{i}^{jk}(-\lambda-\partial,\lambda)e_{k},$ $\displaystyle{e_{i}^{\ast}}_{\lambda}e_{j}$ $\displaystyle=$ $\displaystyle R_{A^{\ast c}}^{\ast}(e_{i}^{\ast})_{\lambda}e_{j}+L_{A^{\ast c}}^{\ast}(e_{j})_{-\lambda-\partial}e_{i}^{\ast}=\sum_{k}R_{j}^{ki}(\partial,-\lambda-\partial)e_{k}+\sum_{k}P_{i}^{jk}(-\lambda-\partial,\lambda)e_{k}^{\ast}.$ Let $r=\sum_{i=1}^{n}e_{i}\otimes e_{i}^{\ast}\in(A\oplus A^{\ast c})\otimes(A\oplus A^{\ast c})$ and define $\Delta_{A\oplus A^{\ast c}}(a)=(I\otimes L_{A}(a)_{\lambda}-R_{A}(a)_{\lambda}\otimes I)r|_{\lambda=-\partial^{\otimes^{2}}},\;\;\forall\ a\in A\oplus A^{\ast c}.$ Note that $\displaystyle r\bullet r$ $\displaystyle=$ $\displaystyle\sum_{i,j}(e_{i}\otimes e_{j}\otimes{e_{i}^{\ast}}_{\mu}e_{j}^{\ast}|_{\mu=\partial\otimes 1\otimes 1}$ $\displaystyle-e_{i}\otimes{e_{j}}_{\mu}e_{i}^{\ast}\otimes e_{j}^{\ast}|_{\mu=-\partial^{\otimes^{2}}\otimes 1}+{e_{i}}_{\mu}e_{j}\otimes e_{i}^{\ast}\otimes e_{j}^{\ast}|_{\mu=1\otimes\partial\otimes 1})$ $\displaystyle=$ $\displaystyle\sum_{i,j,k}(R_{k}^{ij}(\partial\otimes 1\otimes 1,1\otimes 1\otimes\partial)e_{i}\otimes e_{j}\otimes e_{k}^{\ast}-P_{i}^{kj}(1\otimes\partial\otimes 1,\partial\otimes 1\otimes 1)e_{i}\otimes e_{k}^{\ast}\otimes e_{j}^{\ast}$ $\displaystyle- R_{j}^{ik}(\partial\otimes 1\otimes 1,-\partial^{\otimes^{2}}\otimes 1)e_{i}\otimes e_{k}\otimes e_{j}^{\ast}+P_{k}^{ij}(1\otimes\partial\otimes 1,\partial\otimes 1\otimes 1)e_{k}\otimes e_{i}^{\ast}\otimes e_{j}^{\ast})$ $\displaystyle=$ $\displaystyle\sum_{i,j,k}(R_{k}^{ij}(\partial\otimes 1\otimes 1,1\otimes 1\otimes\partial)-R_{k}^{ij}(\partial\otimes 
1\otimes 1,-\partial^{\otimes^{2}}\otimes 1))e_{i}\otimes e_{i}\otimes e_{k}^{\ast}$ $\displaystyle\equiv$ $\displaystyle 0~{}~{}~{}\text{mod}~{}~{}~{}\partial^{\otimes^{3}}.$ Then (34) holds by conformal sesquilinearity. Moreover, for all $e_{i}\in A$, we have $\displaystyle(I\otimes L_{A}(e_{i})_{-\partial^{\otimes^{2}}}-R_{A}(e_{i})_{-\partial^{\otimes^{2}}}\otimes I)(r+\tau r)$ $\displaystyle=$ $\displaystyle\sum_{j}(I\otimes L_{A}(e_{i})_{-\partial^{\otimes^{2}}}-R_{A}(e_{i})_{-\partial^{\otimes^{2}}}\otimes I)(e_{j}\otimes e_{j}^{\ast}+e_{j}^{\ast}\otimes e_{j})$ $\displaystyle=$ $\displaystyle\sum_{j}(e_{j}\otimes{e_{i}}_{-\partial^{\otimes^{2}}}e_{j}^{\ast}+e_{j}^{\ast}\otimes{e_{i}}_{-\partial^{\otimes^{2}}}e_{j}-{e_{j}}_{1\otimes\partial}e_{i}\otimes e_{j}^{\ast}-{e_{j}^{\ast}}_{1\otimes\partial}e_{i}\otimes e_{j})$ $\displaystyle=$ $\displaystyle\sum_{j,k}(P_{j}^{ki}(1\otimes\partial,\partial\otimes 1)e_{j}\otimes e_{k}^{\ast}+R_{i}^{jk}(\partial\otimes 1,-\partial^{\otimes^{2}})e_{j}\otimes e_{k}+P_{k}^{ij}(-\partial^{\otimes^{2}},1\otimes\partial)e_{j}^{\ast}\otimes e_{k}$ $\displaystyle-P_{k}^{ij}(1\otimes\partial,\partial\otimes 1)e_{k}\otimes e_{j}^{\ast}-R_{i}^{kj}(\partial\otimes 1,-\partial^{\otimes^{2}})e_{k}\otimes e_{j}-P_{j}^{ik}(-\partial^{\otimes^{2}},1\otimes\partial)e_{k}^{\ast}\otimes e_{j})=0.$ Similarly, for all $e_{i}^{*}\in A^{\ast c}$, we have $(I\otimes L_{A}(e_{i}^{\ast})_{-\partial^{\otimes^{2}}}-R_{A}(e_{i}^{\ast})_{-\partial^{\otimes^{2}}}\otimes I)(r+\tau r)=0.$ Therefore (35) holds. Hence by Theorem 5.3, $\Delta_{A\oplus A^{\ast c}}$ gives an ASI conformal bialgebra structure on $A\oplus A^{\ast c}$. Furthermore, for all $e_{i}\in A$, we have $\displaystyle\Delta_{A\oplus A^{\ast c}}(e_{i})$ $\displaystyle=$ $\displaystyle\sum_{j=1}^{n}(I\otimes L_{A\oplus A^{\ast c}}(e_{i})_{\lambda}-R_{A\oplus A^{\ast c}}(e_{i})_{\lambda}\otimes I)(e_{j}\otimes e_{j}^{\ast})|_{\lambda=-\partial^{\otimes^{2}}}$ $\displaystyle=$ $\displaystyle\sum_{j=1}^{n}(e_{j}\otimes{e_{i}}_{-\partial^{\otimes^{2}}}e_{j}^{\ast}-{e_{j}}_{1\otimes\partial}e_{i}\otimes e_{j}^{\ast})$ $\displaystyle=$ $\displaystyle\sum_{j=1}^{n}(e_{j}\otimes(\sum_{k}P_{j}^{ki}(1\otimes\partial,\partial\otimes 1)e_{k}^{\ast}+\sum_{k}R_{i}^{jk}(\partial\otimes 1,-\partial^{\otimes^{2}})e_{k})$ $\displaystyle-\sum_{k}P_{k}^{ji}(1\otimes\partial,\partial\otimes 1)e_{k}\otimes e_{j}^{\ast})$ $\displaystyle=$ $\displaystyle\sum_{j=1}^{n}\sum_{k}R_{i}^{jk}(\partial\otimes 1,-\partial^{\otimes^{2}})e_{j}\otimes e_{k}=\Delta_{A}(e_{i}).$ Similarly, for all $e_{i}^{*}\in A^{\ast c}$, we have $\Delta_{A\oplus A^{\ast c}}(e_{i}^{\ast})=-\Delta_{A^{\ast c}}(e_{i}^{\ast}).$ Therefore $i_{1}:A\rightarrow A\oplus A^{\ast c}$ and $i_{2}:A^{\ast c}\rightarrow A\oplus A^{\ast c}$ are homomorphisms of ASI conformal bialgebras. Hence the proof is finished. ∎ ## 6\. $\mathcal{O}$-operators of associative conformal algebras and dendriform conformal algebras We introduce the notion of $\mathcal{O}$-operators of associative conformal algebras to interpret the associative conformal Yang-Baxter equation. In particular, an $\mathcal{O}$-operator of an associative conformal algebra $A$ associated to a bimodule gives an antisymmetric solution of associative conformal Yang-Baxter equation in a semi-direct product associative conformal algebra. 
We also introduce the notion of dendriform conformal algebras to construct $\mathcal{O}$-operators of their associated associative conformal algebras and hence give (antisymmetric) solutions of associative conformal Yang-Baxter equation. Let $A$ be a finite associative conformal algebra which is free as a $\mathbb{C}[\partial]$-module. Define a linear map $\varphi:A\otimes A\rightarrow Chom(A^{\ast c},A)$ as $\displaystyle\varphi(u\otimes v)_{\lambda}(g)=g_{-\lambda-\partial^{A}}(u)v,\;\;\forall\ u,v\in A,g\in A^{\ast c}.$ (39) Here $\partial^{A}$ represents the action of $\partial$ on $A$. Obviously, $\varphi$ is a $\mathbb{C}[\partial]$-module homomorphism. Similar to Proposition 6.1 in [7], we show that $\varphi$ is a $\mathbb{C}[\partial]$-module isomorphism. Set $r=\sum_{i}r_{i}\otimes l_{i}\in A\otimes A$. By $\varphi$, we associate a conformal linear map $T^{r}\in Chom(A^{\ast c},A)$ given by $\displaystyle T^{r}_{\lambda}(f)=\sum_{i}f_{-\lambda-\partial^{A}}(r_{i})l_{i},\;\;\forall\ f\in A^{\ast c}.$ For all $f\in A^{\ast c}$ and $a\in A$, we define $\langle a,f\rangle_{\lambda}=\\{f,a\\}_{-\lambda}=f_{-\lambda}(a)$. Obviously, $\displaystyle\langle\partial a,f\rangle_{\lambda}=-\lambda\langle a,f\rangle_{\lambda}.$ (40) We also define $\langle a\otimes b\otimes c,f\otimes g\otimes h\rangle_{(\lambda,\nu,\theta)}=\langle a,f\rangle_{\lambda}\langle b,g\rangle_{\nu}\langle c,h\rangle_{\theta},\;\;\forall\ a,b,c\in A,f,g,h\in A^{\ast c}.$ By Proposition 2.5, we have $\displaystyle\langle a_{\lambda}b,f\rangle_{\mu}=\langle b,L_{A}^{\ast}(a)_{\lambda}f\rangle_{\mu-\lambda},~{}~{}~{}\langle b_{\mu-\lambda}a,f\rangle_{\mu}=\langle b,R_{A}^{\ast}(a)_{\lambda}f\rangle_{\mu-\lambda},\;\;\forall a,b\in A,f\in A^{\ast c}.$ (41) ###### Theorem 6.1. Let $A$ be a finite associative conformal algebra which is free as a $\mathbb{C}[\partial]$-module and $r\in A\otimes A$ be antisymmetric. Then $r$ is a solution of associative conformal Yang-Baxter equation if and only if $T^{r}\in Chom(A^{\ast c},A)$ satisfies $\displaystyle T^{r}_{0}(f)_{\lambda}T^{r}_{0}(g)-T_{0}^{r}(R_{A}^{\ast}(T_{0}^{r}(f))_{\lambda}g)-T^{r}_{0}(L_{A}^{\ast}(T^{r}_{0}(g))_{-\lambda-\partial}f)=0,~{}~{}~{}~{}~{}\forall\ f,~{}g\in A^{\ast c}.$ (42) ###### Proof. Since $r$ is antisymmetric, we have $T^{r}_{\lambda}(f)=\sum_{i}\langle r_{i},f\rangle_{\lambda+\partial^{A}}l_{i}=-\sum_{i}\langle l_{i},f\rangle_{\lambda+\partial^{A}}r_{i},\;\;\forall\ f\in A^{\ast c}.$ Obviously, the fact that $r\bullet r~{}~{}\text{mod}~{}~{}(\partial^{\otimes^{3}})=0$ holds if and only if the following equation holds: $\langle r\bullet r~{}~{}\text{mod}~{}~{}(\partial^{\otimes^{3}}),f\otimes g\otimes h\rangle_{(\lambda,\eta,\nu)}=0,\;\;\forall\ f,g,h\in A^{\ast c},$ if and only if the following equation holds: $\displaystyle\langle r\bullet r,f\otimes g\otimes h\rangle_{(\lambda,\eta,\nu)}=0~{}~{}~{}\text{mod}~{}~{}(\lambda+\eta+\nu),\;\;\forall\ f,g,h\in A^{\ast c}.$ (43) Let $f$, $g$, $h\in A^{\ast c}$. 
Then we have $\displaystyle\langle\sum_{i,j}r_{i}\otimes r_{j}\otimes{l_{i}}_{\mu}l_{j}|_{\mu=\partial\otimes 1\otimes 1},f\otimes g\otimes h\rangle_{(\lambda,\eta,\nu)}$ $\displaystyle=$ $\displaystyle\sum_{i,j}\langle r_{i},f\rangle_{\lambda}\langle r_{j},g\rangle_{\eta}\langle{l_{i}}_{-\lambda}l_{j},h\rangle_{\nu}=\langle(\sum_{i}\langle r_{i},f\rangle_{\lambda}{l_{i}})_{-\lambda}(\sum_{j}\langle r_{j},g\rangle_{\eta}l_{j}),h\rangle_{\nu}$ $\displaystyle=$ $\displaystyle\langle T^{r}_{\lambda-\partial}(f)_{-\lambda}T^{r}_{\eta-\partial}(g),h\rangle_{\nu}=\langle T^{r}_{0}(f)_{-\lambda}T^{r}_{\lambda+\eta+\nu}(g),h\rangle_{\nu},$ and $\displaystyle\langle\sum_{i,j}r_{i}\otimes{r_{j}}_{\mu}l_{i}\otimes l_{j}|_{\mu=-\partial^{\otimes^{2}}\otimes 1},f\otimes g\otimes h\rangle_{(\lambda,\eta,\nu)}$ $\displaystyle=$ $\displaystyle\sum_{i,j}\langle r_{i},f\rangle_{\lambda}\langle{r_{j}}_{\lambda+\eta}l_{i},g\rangle_{\eta}\langle l_{j},h\rangle_{\nu}=\sum_{i,j}\langle{r_{j}}_{\lambda+\eta}(\langle r_{i},f\rangle_{\lambda}l_{i}),g\rangle_{\eta}\langle l_{j},h\rangle_{\nu}$ $\displaystyle=$ $\displaystyle\sum_{j}\langle{r_{j}}_{\lambda+\eta}(T_{\lambda-\partial}^{r}(f)),g\rangle_{\eta}\langle l_{j},h\rangle_{\nu}=\sum_{j}\langle{r_{j}},R_{A}^{\ast}(T_{0}^{r}(f))_{-\lambda}g\rangle_{\lambda+\eta}\langle l_{j},h\rangle_{\nu}$ $\displaystyle=$ $\displaystyle\langle(\sum_{j}\langle{r_{j}},R_{A}^{\ast}(T_{0}^{r}(f))_{-\lambda}g\rangle_{\lambda+\eta}l_{j}),h\rangle_{\nu}=\langle T_{\lambda+\eta-\partial}^{r}(R_{A}^{\ast}(T_{0}^{r}(f))_{-\lambda}g),h\rangle_{\nu}$ $\displaystyle=$ $\displaystyle\langle T_{\lambda+\eta+\nu}^{r}(R_{A}^{\ast}(T_{0}^{r}(f))_{-\lambda}g),h\rangle_{\nu}.$ Similarly, we have $\displaystyle\langle\sum_{i,j}{r_{i}}_{\mu}r_{j}\otimes l_{i}\otimes l_{j}|_{\mu=1\otimes\partial\otimes 1},f\otimes g\otimes h\rangle_{(\lambda,\eta,\nu)}=-\langle T^{r}_{\lambda+\eta+\nu}(L_{A}^{\ast}(T^{r}_{0}(g))_{-\eta}f),h\rangle_{\nu}.$ Therefore (43) holds if and only if $\displaystyle\langle T^{r}_{0}(f)_{-\lambda}T^{r}_{\lambda+\eta+\nu}(g)-T_{\lambda+\eta+\nu}^{r}(R_{A}^{\ast}(T_{0}^{r}(f))_{-\lambda}g)-T^{r}_{\lambda+\eta+\nu}(L_{A}^{\ast}(T^{r}_{0}(g))_{-\eta}f),h\rangle_{\nu}$ $\displaystyle=0~{}~{}~{}\text{mod}~{}~{}~{}(\lambda+\eta+\nu),\;\;\forall\ f,g,h\in A^{\ast c}.$ It is straightforward that the above equality holds if and only if $\displaystyle T^{r}_{0}(f)_{-\lambda}T^{r}_{0}(g)-T_{0}^{r}(R_{A}^{\ast}(T_{0}^{r}(f))_{-\lambda}g)-T^{r}_{0}(L_{A}^{\ast}(T^{r}_{0}(g))_{\lambda-\partial}f)=0,\;\;\forall f,g\in A^{\ast c}.$ (44) Therefore the conclusion follows by replacing $\lambda$ by $-\lambda$ in (44). ∎ ###### Corollary 6.2. Let $A$ be a finite associative conformal algebra which is free as a $\mathbb{C}[\partial]$-module, $r\in A\otimes A$ be antisymmetric and $\Delta_{A}$ be the map defined by (32) through $r$. Suppose the associative conformal algebra structure on $A^{\ast c}$ is obtained from $\Delta_{A}$. Let $T^{r}\in\text{Chom}(A^{\ast c},A)$ be the element corresponding to $r$ through the isomorphism $A\otimes A\cong\text{Chom}(A^{\ast c},A)$. Then $T_{0}^{r}:A^{\ast c}\rightarrow A$ is a homomorphism of associative conformal algebras. ###### Proof. Let $\\{e_{1},\cdots,e_{n}\\}$ be a $\mathbb{C}[\partial]$-basis of $A$ and $\\{e_{1}^{\ast},\cdots,e_{n}^{\ast}\\}$ be the dual $\mathbb{C}[\partial]$-basis. 
Set ${e_{i}}_{\lambda}e_{j}=\sum_{k}P_{ij}^{k}e_{k},\;\;r=\sum_{i,j}a_{ij}(\partial\otimes 1,1\otimes\partial)e_{i}\otimes e_{j},$ where $a_{ij}(\lambda,\mu)\in\mathbb{C}[\lambda,\mu]$. Then $\displaystyle\Delta_{A}(e_{k})$ $\displaystyle=$ $\displaystyle(I\otimes L_{A}(e_{k})_{\lambda}-R_{A}(a)_{\lambda}\otimes 1)r|_{\lambda=-\partial^{\otimes^{2}}}$ $\displaystyle=$ $\displaystyle\sum_{i,j}a_{ij}(\partial\otimes 1,\lambda+1\otimes\partial)e_{i}\otimes{e_{k}}_{\lambda}e_{j}-\sum_{i,j}a_{ij}(\lambda+\partial\otimes 1,1\otimes\partial){e_{i}}_{-\lambda-\partial}e_{k}\otimes e_{j}$ $\displaystyle=$ $\displaystyle\sum_{i,j,s}(a_{ij}(\partial\otimes 1,-\partial\otimes 1)P_{kj}^{s}(-\partial^{\otimes^{2}},1\otimes\partial)-a_{js}(-1\otimes\partial,1\otimes\partial)P_{jk}^{i}(1\otimes\partial,\partial\otimes 1))e_{i}\otimes e_{s}.$ Note that $T_{0}^{r}(e_{k}^{\ast})=\sum_{j}a_{kj}(-\partial,\partial)e_{j}$. Then by Proposition 4.2, $\displaystyle{e_{l}^{\ast}}_{\lambda}e_{t}^{\ast}$ $\displaystyle=$ $\displaystyle\sum_{j,k}(a_{lj}(\lambda,-\lambda)P_{kj}^{t}(\partial,-\lambda-\partial)-a_{jt}(\lambda+\partial,-\lambda-\partial)P_{jk}^{l}(-\lambda-\partial,\lambda))e_{k}^{\ast}$ $\displaystyle=$ $\displaystyle\sum_{j,k}(a_{lj}(\lambda,-\lambda){e_{t}^{\ast}}_{-\lambda-\partial^{A^{\ast c}}}({e_{k}}_{\partial^{A^{\ast c}}}e_{j})-a_{jt}(\lambda+\partial,-\lambda-\partial){e_{l}^{\ast}}_{\lambda}({e_{j}}_{-\lambda-\partial^{A^{\ast c}}}e_{k}))e_{k}^{\ast}$ $\displaystyle=$ $\displaystyle\sum_{j,k}(a_{lj}(\lambda,-\lambda){e_{t}^{\ast}}_{-\lambda-\partial^{A^{\ast c}}}({e_{k}}_{\partial^{A^{\ast c}}}e_{j})+a_{tj}(-\lambda-\partial,\lambda+\partial){e_{l}^{\ast}}_{\lambda}({e_{j}}_{-\lambda-\partial^{A^{\ast c}}}e_{k}))e_{k}^{\ast}$ $\displaystyle=$ $\displaystyle\sum_{k}({e_{t}^{\ast}}_{-\lambda-\partial^{A^{\ast c}}}({e_{k}}_{\partial^{A^{\ast c}}}T_{0}^{r}(e_{l}))+{e_{l}^{\ast}}_{\lambda}(T_{0}^{r}({e_{t}})_{-\lambda-\partial^{A^{\ast c}}}e_{k}))e_{k}^{\ast}$ $\displaystyle=$ $\displaystyle R_{A}^{\ast}(T_{0}^{r}(e_{l}))_{\lambda}e_{t}^{\ast}+L_{A}^{\ast}(T_{0}^{r}(e_{t}))_{-\lambda-\partial}e_{l}^{\ast}.$ Thus by Theorem 6.1, we have ${T_{0}^{r}(e_{l}^{\ast})}_{\lambda}{T_{0}^{r}(e_{t}^{\ast})}=T_{0}^{r}({e_{l}^{\ast}}_{\lambda}e_{t}^{\ast}),\;\;\forall\ l,t\in\\{1,\cdots,n\\}.$ Therefore $T_{0}^{r}:A^{\ast c}\rightarrow A$ is a homomorphism of associative conformal algebras. ∎ Let $A$ be a Frobenius conformal algebra with a non-degenerate invariant conformal bilinear form $\langle\cdot,\cdot\rangle_{\lambda}$, which is finitely generated and free as a $\mathbb{C}[\partial]$-module. Define $\displaystyle\langle a\otimes b,c\otimes d\rangle_{(\lambda,\mu)}=\langle a,c\rangle_{\lambda}\langle b,d\rangle_{\mu},\;\;\forall\ a,b,c,d\in A.$ (45) Let $r=\sum_{i}r_{i}\otimes l_{i}\in A\otimes A$. Define a linear map $P^{r}:A\rightarrow A[\lambda]$ by $\displaystyle\langle r,u\otimes v\rangle_{(\lambda,\mu)}=\langle P_{\lambda-\partial}^{r}(u),v\rangle_{\mu},\;\;\forall u,v\in A.$ Obviously, $P^{r}\in\text{Cend}(A)$. ###### Corollary 6.3. Let $A$ be a symmetric Frobenius conformal algebra which is finitely generated and free as a $\mathbb{C}[\partial]$-module and $r\in A\otimes A$ be antisymmetric. Then $r$ is a solution of associative conformal Yang-Baxter equation in $A$ if and only if $P^{r}\in\text{Cend}(A)$ satisfies $\displaystyle P^{r}_{0}(a)_{\lambda}P^{r}_{0}(b)=P_{0}^{r}(P_{0}^{r}(a)_{\lambda}b)+P^{r}_{0}(a_{\lambda}P^{r}_{0}(b)),\;\;\forall\ a,b\in A.$ (46) ###### Proof. 
Since $A$ has a non-degenerate symmetric invariant conformal bilinear form, $\varphi:A\longrightarrow A^{\ast c},~{}~{}a\mapsto\varphi_{a}$ defined by $(\varphi_{a})_{\lambda}b=\langle a,b\rangle_{\lambda},\quad\forall\ a,b\in A,$ is an isomorphism of $\mathbb{C}[\partial]$-modules. By the definitions of $T^{r}$ in Theorem 6.1 and $P^{r}$, we get $P^{r}=T^{r}\circ\varphi$. Therefore $P^{r}_{0}=T^{r}_{0}\circ\varphi$. Since $\varphi$ is a $\mathbb{C}[\partial]$-module isomorphism, for any $f$, $g\in A^{\ast c}$, we assume that $\varphi(a)=f$ and $\varphi(b)=g$. Then (42) becomes $\displaystyle T^{r}_{0}(\varphi(a))_{\lambda}T^{r}_{0}(\varphi(b))-T_{0}^{r}(R_{A}^{\ast}(T_{0}^{r}(\varphi(a)))_{\lambda}\varphi(b))-T^{r}_{0}(L_{A}^{\ast}(T^{r}_{0}(\varphi(b)))_{-\lambda-\partial}\varphi(a))=0.$ (47) Thus $\displaystyle P^{r}_{0}(a)_{\lambda}P^{r}_{0}(b)-T_{0}^{r}(R_{A}^{\ast}(P_{0}^{r}(a))_{\lambda}\varphi(b))-T^{r}_{0}(L_{A}^{\ast}(P^{r}_{0}(b))_{-\lambda-\partial}\varphi(a))=0,\;\;\forall\ a,b\in A.$ (48) For all $a,b,c\in A$, we have $\displaystyle(R_{A}^{\ast}(P_{0}^{r}(a))_{\lambda}\varphi(b))_{\mu}c$ $\displaystyle=$ $\displaystyle\varphi(b)_{\mu-\lambda}(R_{A}(P_{0}^{r}(a))_{\lambda}c)=\langle b,R_{A}(P_{0}^{r}(a))_{\lambda}c\rangle_{\mu-\lambda}$ $\displaystyle=$ $\displaystyle\langle b,c_{-\lambda-\partial}P_{0}^{r}(a)\rangle_{\mu-\lambda}=\langle b_{\mu-\lambda}c,P_{0}^{r}(a)\rangle_{-\lambda}$ $\displaystyle=$ $\displaystyle\langle P_{0}^{r}(a),b_{\mu-\lambda}c\rangle_{\lambda}=\langle P_{0}^{r}(a)_{\lambda}b,c\rangle_{\mu}.$ Therefore we have $R_{A}^{\ast}(P_{0}^{r}(a))_{\lambda}\varphi(b)=\varphi(P_{0}^{r}(a)_{\lambda}b),\;\;\forall\ a,b\in A.$ Similarly, we have $L_{A}^{\ast}(P^{r}_{0}(b))_{-\lambda-\partial}\varphi(a)=\varphi(a_{\lambda}P_{0}^{r}(b)),\;\;\forall\ a,b\in A.$ Hence (46) follows from (48). Similarly, we also obtain (42) from (46) through $\varphi$. Then by Theorem 6.1, the conclusion holds. ∎ By the fact that the $T_{0}^{r}$ in Theorems 6.1 and the $P_{0}^{r}$ in Corollary 6.3 are $\mathbb{C}[\partial]$-module homomorphisms, we present the following notions. ###### Definition 6.4. Let $A$ be an associative conformal algebra and $(M,l_{A},r_{A})$ be a bimodule of $A$. A $\mathbb{C}[\partial]$-module homomorphism $T:M\rightarrow A$ is called an $\mathcal{O}$-operator associated with $(M,l_{A},r_{A})$ if $T$ satisfies $\displaystyle T(u)_{\lambda}T(v)=T(l_{A}(T(u))_{\lambda}v)+T(r_{A}(T(v))_{-\lambda-\partial}u),\;\;\forall\text{$u$, $v\in M$.}$ (49) In particular, an $\mathcal{O}$-operator $T:A\rightarrow A$ associated with the bimodule $(A,L_{A},R_{A})$ is called a Rota-Baxter operator (of weight zero) on $A$, that is, $T$ is a $\mathbb{C}[\partial]$-module homomorphism satisfying $\displaystyle T(a)_{\lambda}T(b)=T(T(a)_{\lambda}b)+T(a_{\lambda}T(b)),\;\;\forall\ \text{$a$, $b\in A$.}$ (50) ###### Example 6.5. Let $A$ be a finite associative conformal algebra which is free as a $\mathbb{C}[\partial]$-module and $r\in A\otimes A$ be antisymmetric. Then $r$ is a solution of associative conformal Yang-Baxter equation if and only if $T_{0}^{r}$ is an $\mathcal{O}$-operator associated with the bimodule $(A^{\ast c},R_{A}^{\ast},L_{A}^{\ast})$. If in addition, $A$ is a symmetric Frobenius conformal algebra, that is, $A$ has a non-degenerate symmetric invariant conformal bilinear form, then $r$ is a solution of associative conformal Yang-Baxter equation if and only if $P_{0}^{r}$ is a Rota-Baxter operator (of weight zero) on $A$. ###### Example 6.6. 
Let $A$ be an associative conformal algebra. The identity map $I$ is an $\mathcal{O}$-operator associated with the bimodule $(A,L_{A},0)$ or $(A,0,R_{A})$. Let $(M,l_{A},r_{A})$ be a bimodule of an associative conformal algebra $A$. Then $(M^{\ast c},r_{A}^{\ast},l_{A}^{\ast})$ is a bimodule of $A$ by Proposition 2.5. Suppose that $M$ is a $\mathbb{C}[\partial]$-module of finite rank. By Proposition 6.1 in [7], $M^{\ast c}\otimes A\cong Chom(M,A)$ as $\mathbb{C}[\partial]$-modules through the isomorphism $\varphi$ defined as $\varphi(f\otimes a)_{\lambda}v=f_{\lambda+\partial^{A}}(v)a,\;\;\forall\ a\in A,v\in M,f\in M^{\ast c}.$ By the $\mathbb{C}[\partial]$-module actions on $M^{\ast c}\otimes A$, we also get $M^{\ast c}\otimes A\cong A\otimes M^{\ast c}$ as $\mathbb{C}[\partial]$-modules. Therefore as $\mathbb{C}[\partial]$-modules, $Chom(M,A)\cong A\otimes M^{\ast c}$. Consequently, for any $T\in Chom(M,A)$, we associate an $r_{T}\in A\otimes M^{\ast c}\subset(A\ltimes_{r_{A}^{\ast},l_{A}^{\ast}}M^{\ast c})\otimes(A\ltimes_{r_{A}^{\ast},l_{A}^{\ast}}M^{\ast c})$. ###### Theorem 6.7. Let $A$ be a finite associative conformal algebra and $(M,l_{A},r_{A})$ be a finite bimodule of $A$. Suppose that $A$ and $M$ are free as $\mathbb{C}[\partial]$-modules. Let $T\in Chom(M,A)$ and $r_{T}\in A\otimes M^{\ast c}\subset(A\ltimes_{r_{A}^{\ast},l_{A}^{\ast}}M^{\ast c})\otimes(A\ltimes_{r_{A}^{\ast},l_{A}^{\ast}}M^{\ast c})$ be the element corresponding to $T$ under the above correspondence. Then $r=r_{T}-\tau r_{T}$ is an antisymmetric solution of the associative conformal Yang-Baxter equation in $A\ltimes_{r_{A}^{\ast},l_{A}^{\ast}}M^{\ast c}$ if and only if $T_{0}=T_{\lambda}|_{\lambda=0}$ is an $\mathcal{O}$-operator associated with the bimodule $(M,l_{A},r_{A})$. ###### Proof. Let $\\{e_{1},\cdots,e_{n}\\}$ be a $\mathbb{C}[\partial]$-basis of $A$, $\\{v_{1},\cdots,v_{m}\\}$ be a $\mathbb{C}[\partial]$-basis of $M$ and $\\{v_{1}^{\ast},\cdots,v_{m}^{\ast}\\}$ be the dual $\mathbb{C}[\partial]$-basis of $M^{\ast c}$. Assume that $T_{\lambda}(v_{i})=\sum_{j=1}^{n}g_{ij}(\lambda,\partial)e_{j},\;\;\forall\ i=1,\cdots,m,$ where $g_{ij}(\lambda,\partial)\in\mathbb{C}[\lambda,\partial]$. 
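To see where the element $r_{T}$ in the next step comes from, note that by the definition of $\varphi$, for any $q(x,y)\in\mathbb{C}[x,y]$ we have $\displaystyle\varphi\big(q(\partial\otimes 1,1\otimes\partial)(v_{i}^{\ast}\otimes e_{j})\big)_{\lambda}v_{k}=\delta_{ik}q(-\lambda-\partial^{A},\partial^{A})e_{j}.$ Hence $T$ corresponds under $Chom(M,A)\cong M^{\ast c}\otimes A$ to $\sum_{i,j}g_{ij}(-\partial^{\otimes^{2}},1\otimes\partial)(v_{i}^{\ast}\otimes e_{j})$, and applying the flip $M^{\ast c}\otimes A\cong A\otimes M^{\ast c}$, which interchanges $\partial\otimes 1$ and $1\otimes\partial$ in the coefficients, yields the formula for $r_{T}$ given next.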
Then we have $\displaystyle r_{T}=\sum_{j=1}^{n}\sum_{i=1}^{m}g_{ij}(-\partial^{\otimes^{2}},\partial\otimes 1)e_{j}\otimes v_{i}^{\ast}.$ Therefore we have $\displaystyle r=\sum_{i,j}(g_{ij}(-\partial^{\otimes^{2}},\partial\otimes 1)e_{j}\otimes v_{i}^{\ast}-g_{ij}(-\partial^{\otimes^{2}},1\otimes\partial)v_{i}^{\ast}\otimes e_{j}).$ Moreover, by the definition of $(M^{\ast c},r_{A}^{\ast},l_{A}^{\ast})$, we have $\displaystyle l_{A}^{\ast}(e_{i})_{\lambda}v_{j}^{\ast}=\sum_{k}{v_{j}^{\ast}}_{-\lambda-\partial}(l_{A}(e_{i})_{\lambda}v_{k})v_{k}^{\ast},~{}~{}~{}r_{A}^{\ast}(e_{i})_{\lambda}v_{j}^{\ast}=\sum_{k}{v_{j}^{\ast}}_{-\lambda-\partial}(r_{A}(e_{i})_{\lambda}v_{k})v_{k}^{\ast}.$ Then we get $\displaystyle r\bullet r$ $\displaystyle\equiv$ $\displaystyle\sum_{i,j,k,l}(-g_{ij}(0,\partial\otimes 1\otimes 1)g_{kl}(0,-1\otimes\partial\otimes 1)e_{j}\otimes v_{k}^{\ast}\otimes{v_{i}^{\ast}}_{\mu}e_{l}|_{\mu=\partial\otimes 1\otimes 1}$ $\displaystyle-g_{ij}(0,-\partial\otimes 1\otimes 1)g_{kl}(0,1\otimes\partial\otimes 1)v_{i}^{\ast}\otimes e_{l}\otimes{e_{j}}_{\mu}v_{k}^{\ast}|_{\mu=\partial\otimes 1\otimes 1}$ $\displaystyle+g_{ij}(0,-\partial\otimes 1\otimes 1)g_{kl}(0,-1\otimes\partial\otimes 1)v_{i}^{\ast}\otimes v_{k}^{\ast}\otimes{e_{j}}_{\mu}e_{l}|_{\mu=\partial\otimes 1\otimes 1}$ $\displaystyle-g_{ij}(0,\partial\otimes 1\otimes 1)g_{kl}(0,-1\otimes 1\otimes\partial)e_{j}\otimes{e_{l}}_{\mu}v_{i}^{\ast}\otimes v_{k}^{\ast}|_{\mu=-\partial^{\otimes^{2}}\otimes 1}$ $\displaystyle+g_{ij}(0,-\partial\otimes 1\otimes 1)g_{kl}(0,-1\otimes 1\otimes\partial)v_{i}^{\ast}\otimes{e_{l}}_{\mu}e_{j}\otimes v_{k}^{\ast}|_{\mu=-\partial^{\otimes^{2}}\otimes 1}$ $\displaystyle- g_{ij}(0,-\partial\otimes 1\otimes 1)g_{kl}(0,1\otimes 1\otimes\partial)v_{i}^{\ast}\otimes{v_{k}^{\ast}}_{\mu}e_{j}\otimes e_{l}|_{\mu=-\partial^{\otimes^{2}}\otimes 1}$ $\displaystyle+g_{ij}(0,-1\otimes\partial\otimes 1)g_{kl}(0,-1\otimes 1\otimes\partial){e_{j}}_{\mu}e_{l}\otimes v_{i}^{\ast}\otimes v_{k}^{\ast}|_{\mu=1\otimes\partial\otimes 1}$ $\displaystyle- g_{ij}(0,-1\otimes\partial\otimes 1)g_{kl}(0,1\otimes 1\otimes\partial){e_{j}}_{\mu}v_{k}^{\ast}\otimes v_{i}^{\ast}\otimes e_{l}|_{\mu=1\otimes\partial\otimes 1}$ $\displaystyle- g_{ij}(0,1\otimes\partial\otimes 1)g_{kl}(0,-1\otimes 1\otimes\partial){v_{i}^{\ast}}_{\mu}e_{l}\otimes e_{j}\otimes v_{k}^{\ast}|_{\mu=1\otimes\partial\otimes 1})~{}~{}~{}\text{$mod~{}~{}(\partial^{\otimes^{3}})$}$ $\displaystyle\equiv$ $\displaystyle\sum_{i,k}((-T_{0}(v_{i})\otimes v_{k}^{\ast}\otimes{{v_{i}}^{\ast}}_{\mu}T_{0}(v_{k})-v_{i}^{\ast}\otimes T_{0}(v_{k})\otimes T_{0}(v_{i})_{\mu}v_{k}^{\ast}$ $\displaystyle+v_{i}^{\ast}\otimes v_{k}^{\ast}\otimes{T_{0}(v_{i})}_{\mu}T_{0}(v_{k}))|_{\mu=\partial\otimes 1\otimes 1}$ $\displaystyle+(-T_{0}(v_{i})\otimes T_{0}(v_{k})_{\mu}v_{i}^{\ast}\otimes v_{k}^{\ast}+v_{i}^{\ast}\otimes{T_{0}(v_{k})}_{\mu}T_{0}(v_{i})\otimes v_{k}^{\ast}$ $\displaystyle- v_{i}^{\ast}\otimes{v_{k}^{\ast}}_{\mu}T_{0}(v_{i})\otimes T_{0}(v_{k}))|_{\mu=1\otimes 1\otimes\partial}$ $\displaystyle+(T_{0}(v_{i})_{\mu}T_{0}(v_{k})\otimes v_{i}^{\ast}\otimes v_{k}^{\ast}-T_{0}(v_{i})_{\mu}v_{k}^{\ast}\otimes v_{i}^{\ast}\otimes T_{0}(v_{k})$ $\displaystyle-{v_{i}^{\ast}}_{\mu}T_{0}(v_{k})\otimes T_{0}(v_{i})\otimes v_{k}^{\ast})|_{\mu=1\otimes\partial\otimes 1}~{}~{}~{}\text{$mod~{}~{}(\partial^{\otimes^{3}})$}.$ Since $T_{0}$ is a $\mathbb{C}[\partial]$-module homomorphism, we have $\displaystyle\sum_{i,k}T_{0}(v_{i})\otimes 
v_{k}^{\ast}\otimes{v_{i}^{\ast}}_{\mu}T_{0}(v_{k})|_{\mu=\partial\otimes 1\otimes 1}$ $\displaystyle=$ $\displaystyle\sum_{i,k}T_{0}(v_{i})\otimes v_{k}^{\ast}\otimes l_{A}^{\ast}(T_{0}(v_{k}))_{-\mu-\partial}v_{i}^{\ast}|_{\mu=\partial\otimes 1\otimes 1}$ $\displaystyle\equiv$ $\displaystyle\sum_{i,k}T_{0}(v_{i})\otimes v_{k}^{\ast}\otimes l_{A}^{\ast}(T_{0}(v_{k}))_{1\otimes\partial\otimes 1}v_{i}^{\ast}~{}~{}\text{$mod~{}~{}(\partial^{\otimes^{3}})$}$ $\displaystyle\equiv$ $\displaystyle\sum_{i,j,k}T_{0}(v_{i})\otimes v_{k}^{\ast}\otimes{v_{i}^{\ast}}_{\partial\otimes 1\otimes 1}(l_{A}(T_{0}(v_{k}))_{1\otimes\partial\otimes 1}v_{j})v_{j}^{\ast}~{}~{}\text{$mod~{}~{}(\partial^{\otimes^{3}})$}$ $\displaystyle\equiv$ $\displaystyle\sum_{i,j,k}T_{0}({v_{i}^{\ast}}_{\partial}(l_{A}(T_{0}(v_{k}))_{1\otimes\partial\otimes 1}v_{j})v_{i})\otimes v_{k}^{\ast}\otimes v_{j}^{\ast}~{}~{}\text{$mod~{}~{}(\partial^{\otimes^{3}})$}$ $\displaystyle\equiv$ $\displaystyle\sum_{j,k}T_{0}(l_{A}((T_{0}(v_{k}))_{\mu}v_{j})\otimes v_{k}^{\ast}\otimes v_{j}^{\ast}|_{\mu=1\otimes\partial\otimes 1}~{}~{}\text{$mod~{}~{}(\partial^{\otimes^{3}})$}.$ Similarly, we have $\displaystyle r\bullet r~{}~{}\text{$mod~{}~{}(\partial^{\otimes^{3}})$}$ $\displaystyle\equiv$ $\displaystyle\sum_{i,k}(({T_{0}(v_{k})}_{\mu}T_{0}(v_{i})-T_{0}(l_{A}(T(v_{k}))_{\mu}v_{i})-T_{0}(r_{A}(T_{0}(v_{i}))_{-\mu-\partial}v_{k})\otimes v_{k}^{\ast}\otimes v_{i}^{\ast})|_{\mu=1\otimes\partial\otimes 1}$ $\displaystyle+(v_{i}^{\ast}\otimes({T_{0}(v_{k})}_{\mu}T_{0}(v_{i})-T_{0}(l_{A}(T(v_{k}))_{\mu}v_{i})-T_{0}(r_{A}(T_{0}(v_{i}))_{-\mu-\partial}v_{k})\otimes v_{k}^{\ast})|_{\mu=1\otimes 1\otimes\partial}$ $\displaystyle+(v_{k}^{\ast}\otimes v_{i}^{\ast}\otimes({T_{0}(v_{k})}_{\mu}T_{0}(v_{i})-T_{0}(l_{A}(T(v_{k}))_{\mu}v_{i})-T_{0}(r_{A}(T_{0}(v_{i}))_{-\mu-\partial}v_{k})\otimes v_{k}^{\ast})|_{\mu=\partial\otimes 1\otimes 1}.$ Therefore $r$ is a solution of associative conformal Yang-Baxter equation in the associative conformal algebra $A\ltimes_{r_{A}^{\ast},l_{A}^{\ast}}M^{\ast c}$ if and only if $\displaystyle{T_{0}(v_{k})}_{\mu}T_{0}(v_{i})=T_{0}(l_{A}(T(v_{k}))_{\mu}v_{i})+T_{0}(r_{A}(T_{0}(v_{i}))_{-\mu-\partial}v_{k}),\;\;\forall\ i,k\in\\{1,\cdots,m\\}.$ Thus this conclusion holds. ∎ At the end of this paper, we introduce a class of conformal algebras, namely, dendriform conformal algebras, which are used to construct $\mathcal{O}$-operators naturally and hence give solutions of associative conformal Yang-Baxter equation. ###### Definition 6.8. Let $A$ be a $\mathbb{C}[\partial]$-module with two bilinear products $\prec_{\lambda}$ and $\succ_{\lambda}:A\times A\rightarrow A[\lambda]$. If for all $a$, $b$, $c\in A$, $\displaystyle(\partial a)\succ_{\lambda}b=-\lambda a\succ_{\lambda}b,~{}~{}a\succ_{\lambda}(\partial b)=(\partial+\lambda)(a\succ_{\lambda}b),$ (51) $\displaystyle(\partial a)\prec_{\lambda}b=-\lambda a\prec_{\lambda}b,~{}~{}a\prec_{\lambda}(\partial b)=(\partial+\lambda)(a\prec_{\lambda}b),$ (52) $\displaystyle(a\prec_{\lambda}b)\prec_{\lambda+\mu}c=a\prec_{\lambda}(b\ast_{\mu}c),$ (53) $\displaystyle~{}~{}(a\succ_{\lambda}b)\prec_{\lambda+\mu}c=a\succ_{\lambda}(b\prec_{\mu}c),$ (54) $\displaystyle~{}~{}a\succ_{\lambda}(b\succ_{\mu}c)=(a\ast_{\lambda}b)\succ_{\lambda+\mu}c,$ (55) where $a\ast_{\lambda}b=a\prec_{\lambda}b+a\succ_{\lambda}b$, then $(A,\prec_{\lambda},\succ_{\lambda})$ is called a dendriform conformal algebra. ###### Remark 6.9. 
It is obvious that $(A,\prec_{\lambda},\succ_{\lambda})$ with $\succ_{\lambda}$ being trivial (or $\prec_{\lambda}$ being trivial) is a dendriform conformal algebra if and only if $(A,\prec_{\lambda})$ (or $(A,\succ_{\lambda})$) is an associative conformal algebra. ###### Example 6.10. Let $(A,\prec,\succ)$ be a dendriform algebra ([25]). Then $\text{Cur}(A)=\mathbb{C}[\partial]\otimes A$ is endowed with a natural dendriform conformal algebra structure as follows. $\displaystyle a\prec_{\lambda}b=a\prec b,~{}~{}~{}a\succ_{\lambda}b=a\succ b,~{}~{}~{}\forall a,b\in A.$ (56) Moreover, $(\text{Cur}(A),\prec_{\lambda},\succ_{\lambda})$ is called a current dendriform conformal algebra. It is straightforward that any dendriform conformal algebra which is free and of rank one as a $\mathbb{C}[\partial]$-module is current. ###### Proposition 6.11. Let $(A,\prec_{\lambda},\succ_{\lambda})$ be a dendriform conformal algebra. Define $a\ast_{\lambda}b=a\prec_{\lambda}b+a\succ_{\lambda}b,\;\;\forall\ a,b\in A.$ (57) Then $(A,\ast_{\lambda})$ is an associative conformal algebra. We call $(A,\ast_{\lambda})$ the associated associative conformal algebra of $(A,\prec_{\lambda},\succ_{\lambda})$ and $(A,\prec_{\lambda},\succ_{\lambda})$ is called a compatible dendriform conformal algebra structure on the associative conformal algebra $(A,\ast_{\lambda})$. ###### Proof. It is straightforward. ∎ Let $(A,\prec_{\lambda},\succ_{\lambda})$ be a dendriform conformal algebra. Set $\displaystyle{L_{\succ}(a)}_{\lambda}(b)=a\succ_{\lambda}b,\;\;{L_{\prec}(a)}_{\lambda}(b)=a\prec_{\lambda}b,$ $\displaystyle{R_{\succ}(a)}_{\lambda}b=b\succ_{-\lambda-\partial}a,\;\;{R_{\prec}(a)}_{\lambda}b=b\prec_{-\lambda-\partial}a,\;\;\forall\ a,b\in A.$ ###### Proposition 6.12. Let $(A,\prec_{\lambda},\succ_{\lambda})$ be a dendriform conformal algebra. Then $(A,L_{\succ},R_{\prec})$ is a bimodule of the associated associative conformal algebra $(A,\ast_{\lambda})$. Hence the identity $I$ is an $\mathcal{O}$-operator of the associative conformal algebra $(A,\ast_{\lambda})$ associated with $(A,L_{\succ},R_{\prec})$. ###### Proof. It is straightforward. ∎ ###### Proposition 6.13. Let $A$ be an associative conformal algebra and $(M,l_{A},r_{A})$ be a bimodule of $A$. Suppose $T:M\rightarrow A$ is an $\mathcal{O}$-operator associated with $(M,l_{A},r_{A})$. Then the following $\lambda$-products $\displaystyle u\succ_{\lambda}v=l_{A}(T(u))_{\lambda}v,~{}~{}~{}u\prec_{\lambda}v=r_{A}(T(v))_{-\lambda-\partial}u,\;\;\forall\ u,v\in M,$ (58) define a dendriform conformal algebra structure on $M$. Therefore there is an associated associative conformal algebra structure on $M$ and $T:M\rightarrow A$ is a homomorphism of associative conformal algebras. Moreover, $T(M)\subset A$ is an associative conformal subalgebra of $A$ and there is also a dendriform conformal algebra structure on $T(M)$ defined by $\displaystyle T(u)\succ_{\lambda}T(v)=T(u\succ_{\lambda}v),~{}~{}T(u)\prec_{\lambda}T(v)=T(u\prec_{\lambda}v),\;\;\forall\ u,v\in M.$ (59) Furthermore, the associated associative conformal algebra on $T(M)$ is a subalgebra of $A$ and $T:M\rightarrow A$ is a homomorphism of dendriform conformal algebras. ###### Proof.
For all $u$, $v$, $w\in M$, we have $\displaystyle(u\prec_{\lambda}v)\prec_{\lambda+\mu}w-u\prec_{\lambda}(v\prec_{\mu}w+v\succ_{\mu}w)$ $\displaystyle=$ $\displaystyle r_{A}(T(w))_{-\lambda-\mu-\partial}(r_{A}(T(v))_{-\lambda-\partial}u)-r_{A}(r_{A}(T(w))_{-\mu-\partial}v)_{-\lambda-\partial}u-r_{A}(l_{A}(T(v))_{\mu}w)_{-\lambda-\partial}u$ $\displaystyle=$ $\displaystyle r_{A}(T(v)_{\mu}T(w))_{-\lambda-\partial}u-r_{A}(r_{A}(T(w))_{-\mu-\partial}v)_{-\lambda-\partial}u-r_{A}(l_{A}(T(v))_{\mu}w)_{-\lambda-\partial}u$ $\displaystyle=$ $\displaystyle r_{A}(T(v)_{\mu}T(w)-r_{A}(T(w))_{-\mu-\partial}v-l_{A}(T(v))_{\mu}w)_{-\lambda-\partial}u=0.$ Similarly, we have $u\succ_{\lambda}(v\succ_{\mu}w)=(u\succ_{\lambda}v+u\prec_{\lambda}v)\succ_{\lambda+\mu}w,\;\;\forall\ u,v,w\in M.$ Moreover, since $(M,l_{A},r_{A})$ is a bimodule of $A$, we have $(u\succ_{\lambda}v)\prec_{\lambda+\mu}w=u\succ_{\lambda}(v\prec_{\mu}w),\;\;\forall\ u,v,w\in M.$ Hence $(M,\prec_{\lambda},\succ_{\lambda})$ is a dendriform conformal algebra. Moreover, the other conclusions follow straightforwardly. Therefore the conclusion holds. ∎ ###### Corollary 6.14. Let $(A,\ast_{\lambda})$ be an associative conformal algebra. There is a compatible dendriform conformal algebra structure on $A$ if and only if there exists a bijective $\mathcal{O}$-operator $T:M\rightarrow A$ associated with some bimodule $(M,l_{A},r_{A})$ of $A$. ###### Proof. Suppose that there is a compatible dendriform conformal algebra structure $(A,\succ_{\lambda},\prec_{\lambda})$ on $A$. Then by Proposition 6.12, the identity map $I:A\rightarrow A$ is a bijective $\mathcal{O}$-operator of $A$ associated with $(A,L_{\succ},R_{\prec})$. Conversely, suppose that there exists a bijective $\mathcal{O}$-operator $T:M\rightarrow A$ of $(A,\ast_{\lambda})$ associated with a bimodule $(M,l_{A},r_{A})$. Then by Proposition 6.13 with a straightforward checking, we have $\displaystyle a\succ_{\lambda}b=T(l_{A}(a)_{\lambda}T^{-1}(b)),~{}~{}~{}a\prec_{\lambda}b=T({r_{A}(b)}_{-\lambda-\partial}T^{-1}(a)),\;\;\forall\ a,b\in A,$ (60) defines a compatible dendriform conformal algebra structure on $A$. ∎ Finally, there is a construction of (antisymmetric) solutions of associative conformal Yang-Baxter equation from dendriform conformal algebras. ###### Theorem 6.15. Let $(A,\succ_{\lambda},\prec_{\lambda})$ be a finite dendriform conformal algebra which is free as a $\mathbb{C}[\partial]$-module. Then $\displaystyle r=\sum_{i=1}^{n}(e_{i}\otimes e_{i}^{\ast}-e_{i}^{\ast}\otimes e_{i})$ (61) is a solution of associative conformal Yang-Baxter equation in the associative conformal algebra $A\ltimes_{R_{A}^{\ast},L_{A}^{\ast}}A^{\ast c}$, where $\\{e_{1},\cdots,e_{n}\\}$ is a $\mathbb{C}[\partial]$-basis of $A$ and $\\{e_{1}^{\ast},\cdots,e_{n}^{\ast}\\}$ is the dual $\mathbb{C}[\partial]$-basis of $A^{\ast c}$. ###### Proof. By Proposition 6.12, $T=I:A\rightarrow A$ is an $\mathcal{O}$-operator associated with $(A,L_{\succ},R_{\prec})$. Then by Theorem 6.7, the conclusion holds. ∎ Acknowledgments This work was supported by the National Natural Science Foundation of China (11425104, 11931009), the Zhejiang Provincial Natural Science Foundation of China (LY20A010022) and the Scientific Research Foundation of Hangzhou Normal University (2019QDL012). C. Bai is also supported by the Fundamental Research Funds for the Central Universities and Nankai ZhiDe Foundation. ## References * [1] M. Aguiar, On the associative analog of Lie bialgebras, J. Algebra 244 (2001), 492-532. * [2] C. 
Bai, Double constructions of Frobenius algebras, Connes cocycles and their duality, J. Noncommut. Geom. 4 (2010), 475-530. * [3] A. Barakat, A. De Sole, V. Kac, Poisson vertex algebras in the theory of Hamiltonian equations, Japan. J. Math. 4 (2009), 141-252. * [4] L. Bokut, Y. Fong, W. Ke, Free associative conformal algebras, Proc. of the 2nd Tainan-Moscow Algebra and Combinatorics Workshop, Tainan, 1997, 13-25. * [5] L. Bokut, Y. Fong, W. Ke, Gröbner-Shirshov bases and composition lemma for associative conformal algebras: an example, Contemp. Math. 264 (2000), 63-90. * [6] L. Bokut, Y. Fong, W. Ke, Composition-Diamond lemma for associative conformal algebras, J. Algebra 272 (2004), 739-774. * [7] C. Boyallian, V. Kac, J. Liberati, On the classification of subalgebras of $Cend_{N}$ and $gc_{N}$, J. Algebra 260 (2003), 32-63. * [8] B. Bakalov, V. Kac, A. Voronov, Cohomology of conformal algebras, Comm. Math. Phys. 200 (1999), 561-598. * [9] A. D’Andrea, V. Kac, Structure theory of finite conformal algebras, Selecta Math. New Ser. 4 (1998), 377-418. * [10] I. Dolguntseva, The Hochschild cohomology for associative conformal algebras, Algebra Logika 46 (2007), 688-706; English transl., Algebra Logic 46 (2007), 373-384. * [11] Y. Hong, Extending structures for associative conformal algebras, Linear Multilinear Algebra 67 (2019), 196-212. * [12] Y. Hong, C. Bai, Conformal classical Yang-Baxter equation, $S$-equation and $\mathcal{O}$-operators, Lett. Math. Phys. 110 (2020), 885-909. * [13] Y. Hong, F. Li, Left-symmetric conformal algebras and vertex algebras, J. Pure Appl. Algebra 219 (2015), 3543-3567. * [14] Y. Hong, F. Li, On left-symmetric conformal bialgebras, J. Algebra Appl. 14 (2015), 1450079. * [15] S. Joni, G. Rota, Coalgebras and bialgebras in combinatorics, Stud. Appl. Math. 61 (1979), 93-139; Reprinted in Gian-Carlo Rota on combinatorics: Introductory papers and commentaries, edited by J. Kung, Birkhäuser, Boston, 1995. * [16] V. Kac, Vertex algebras for beginners, 2nd Edition, Amer. Math. Soc., Providence, RI, 1998. * [17] V. Kac, The idea of locality, in Physical Applications and Mathematical Aspects of Geometry, Groups and Algebras, edited by H.-D. Doebner et al., World Scientific Publishing, Singapore, 1997, 16-32. * [18] V. Kac, Formal distribution algebras and conformal algebras, Brisbane Congress in Math. Phys., 1997. * [19] V. Kac, R. Alexander, Simple Jordan conformal superalgebras, J. Algebra Appl. 7 (2008), 517-533. * [20] P. Kolesnikov, Simple associative conformal algebras of linear growth, J. Algebra 295 (2006), 247-268. * [21] P. Kolesnikov, Associative conformal algebras with finite faithful representation, Adv. Math. 202 (2006), 602-637. * [22] P. Kolesnikov, On the Wedderburn principal theorem in conformal algebras, J. Algebra Appl. 6 (2007), 119-134. * [23] P. Kolesnikov, On finite representations of conformal algebras, J. Algebra 331 (2011), 169-193. * [24] J. Liberati, On conformal bialgebras, J. Algebra 319 (2008), 2295-2318. * [25] J.-L. Loday, Dialgebras. In “Dialgebras and Related Operads”, Lecture Notes in Math. 1763, Springer, Berlin, 2001, 7-66. * [26] A. Retakh, Associative conformal algebras of linear growth, J. Algebra 237 (2001), 769-788. * [27] A. Retakh, On associative conformal algebras of linear growth II, J. Algebra 304 (2006), 543-556. * [28] M. Roitman, On embedding of Lie conformal algebras into associative conformal algebras, J. Lie Theory 15 (2005), 575-588. * [29] M. Roitman, Universal enveloping conformal algebras, Selecta Math. New Ser.
6 (2000), 319-345. * [30] M. Roitman, On free conformal and vertex algebras, J. Algebra 217 (1999), 496-527. * [31] E. Zelmanov, On the structure of conformal algebras, Contemp. Math. 264 (2000), 139-156. * [32] E. Zelmanov, Idempotents in conformal algebras, Proceedings of the Third International Algebra Conference, Tainan, 2002, 257-266. * [33] V. Zhelyabin, Jordan bialgebras and their connection with Lie bialgebras, Algebra Logika 36 (1997), 3-35; English transl., Algebra Logic 36 (1997), 1-15.
# A Collaboration Strategy in the Mining Pool for Proof-of-Neural-Architecture Consensus Boyang Li, Qing Lu, Weiwen Jiang, Taeho Jung, Yiyu Shi ###### Abstract In most popular publicly accessible cryptocurrency systems, the mining pool plays a key role because mining with a pool turns an otherwise unprofitable situation into a profitable one for individual miners. In many recent novel blockchain consensuses, the deep learning training procedure becomes the task with which miners prove their workload, so the computation power of miners is not spent purely on the hash puzzle. In this way, the hardware and energy support the blockchain service and deep learning training simultaneously. Since the incentive of miners is to earn tokens, individual miners are motivated to join mining pools to become more competitive. In this paper, we are the first to demonstrate a mining pool solution for novel consensuses based on deep learning. The mining pool manager partitions the full searching space into subspaces, and all miners are scheduled to collaborate on the Neural Architecture Search (NAS) tasks in their assigned subspaces. Experiments demonstrate that the performance of this type of mining pool is more competitive than that of an individual miner. Due to the uncertainty of miners’ behaviors, the mining pool manager checks the standard deviation of the performance of high-reward miners and prepares backup miners to ensure completion of the tasks of high-reward miners. ###### keywords: Blockchain , Consensus , Deep learning , Mining Pool , Neural architecture search (NAS) Journal: Blockchain: Research and Applications. Affiliations: University of Notre Dame, Department of Computer Science and Engineering, Notre Dame, Indiana 46556, USA; George Mason University, Department of Electrical and Computer Engineering, Fairfax, Virginia 22030, USA. ## 1 Introduction Several previous publications have described novel blockchain consensuses that support alternative mining puzzles other than the hash algorithm. As a result, the computation power of miners’ hardware is not wasted on a pure brute-force algorithm. In particular, Privacy-Preserving Blockchain Mining [1], Coin.AI [2], WekaCoin [3], DLBC [4, 5], and PoDL [6] are built on top of novel consensuses that perform deep learning training algorithms as proof-of-useful-work (PoUW). Deep learning (DL) has developed rapidly in recent decades and has been widely applied in different fields. Training a deep learning model with good performance not only takes a massive amount of energy but also requires significant research effort. Neural architecture search (NAS) has recently become popular because it can help researchers to design DL models automatically. Similar to blockchain services, NAS also requires enormously high computation power, and insufficient computation resources slow down the productivity of researchers. With the help of novel consensuses, the computation power of miners in blockchain services could be leveraged to accelerate NAS. In a permission-less blockchain system, the security of the system depends on a high volume of individual miners. The incentive to mine is to earn tokens. For an individual miner, mining is profitable only if the value of the earned tokens is more than the electricity bill.
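As a rough illustration of this profitability-versus-risk trade-off, the sketch below compares the per-block income of a solo miner with that of the same miner inside a pool. The numbers (network share, pool share, block reward) are hypothetical and chosen only to make the variance difference visible; they are not taken from any real system.

```python
import numpy as np

rng = np.random.default_rng(0)
blocks = 10_000        # number of blocks simulated
p_solo = 0.001         # miner's share of total network computation power
pool_share = 0.2       # pool's share of total network computation power
reward = 1.0           # block reward in tokens

# Solo mining: win the full reward with small probability in each block.
solo_income = rng.binomial(1, p_solo, blocks) * reward
# Pooled mining: the pool wins more often; the miner earns a proportional share.
pool_income = rng.binomial(1, pool_share, blocks) * reward * (p_solo / pool_share)

print(f"solo: mean={solo_income.mean():.5f}  std={solo_income.std():.5f}")
print(f"pool: mean={pool_income.mean():.5f}  std={pool_income.std():.5f}")
```

Both strategies have the same expected income per block, but the pooled miner's income variance, and hence the chance of long zero-earning streaks, is far smaller, which is the mutual-insurance effect discussed below.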
Because only the winning miner receives the block reward, individual miners run the risk of spending electricity without receiving any reward. In practice, the majority of miners will not receive the block reward. In PoW consensus, the probability that an individual miner earns the reward is the ratio of its computation power to the computation power of the whole network. In a mature permission-less blockchain system, the computation power of the entire network is ideally exceptionally high, since it underpins the security of the blockchain system. Therefore, the probability that an individual miner earns the reward is extremely low. In today's publicly accessible cryptocurrencies, an individual miner will most likely join a mining pool to earn tokens, because mining with a pool minimizes the risk of zero earnings in the short term. A mining pool is formed by a group of miners and behaves as a form of mutual insurance [7]. However, a mining pool for hash-based consensus is different from one for the PoNAS consensus. In this paper we propose a collaboration strategy in the mining pool for PoNAS consensus. To the best of our knowledge, we are the first to demonstrate the design of a mining pool for NAS-based PoUW. The main contributions of this design are as follows: * 1. We explain the benefits of a mining pool for PoNAS and discuss the difficulties in realizing such a pool. An intuitive method is to adopt existing distributed deep learning frameworks to train the PoDL workload. Without optimized scheduling of the training tasks, the slowest miner becomes the bottleneck of overall performance and most of the miners are idle. * 2. We introduce a collaboration strategy of exploration and exploitation to the mining pool. As a result, miners with less computation power do not drag down the overall performance, and the resources of these miners are not wasted. * 3. We apply an embarrassingly parallel computing solution with which the dependency between miners is further reduced, so the network is not a bottleneck in this design. In addition, this design leverages NAS as the workload: the actual training tasks of different miners differ, while the final proof information is the same, namely an effective neural architecture. This is different from PoUW and PoDL, which require all miners to work on the same workload within one block. The effectiveness of NAS is beyond the contribution of this paper, yet we give an example in which miners train DL models with different architectures while providing the information to be proven by consensus. ## 2 Background & Related Work ### 2.1 Consensus of existing cryptocurrency Bitcoin is built on top of the Proof-of-Work (PoW) consensus, which requires all miners to solve problems. Generally, the required problems in PoW consensus are easy to validate and hard to solve. PoW is stable, but its power consumption is enormous. Ethereum is a popular cryptocurrency based on the Proof-of-Stake (PoS) consensus, which decides the creator of the new block. In PoS, the amount of tokens owned by a miner supports its authority. In this consensus, the computation is relatively more efficient than in PoW, but it is less stable and robust owing to various limitations [8, 9]. ### 2.2 Proof-of-Deep-Learning (PoDL): In consensuses based on deep learning, each block time is divided into two or more interval phases [6, 4, 5].
In general, these include the initialization phase, the training phase, and the validation phase. In the initialization phase, all miners confirm the target task and evaluate the training setup, such as the target number of training epochs and the size of the dataset. In the training phase, miners train the confirmed target task and commit their model before the training phase ends; here the miners submit the hash of their deep learning model, the training results, and their miner ID. The task publishers release the training dataset and the deep learning training source code [6, 4, 5]. In the validation phase, the task publisher releases the test dataset to miners and full nodes, and each miner submits to the full nodes (1) the block header and the block that contains information describing the trained model on top of the existing attributes, (2) the trained model, and (3) the accuracy of the trained model. The full nodes validate the submitted models. Here, full nodes discard all submissions whose model was not committed before the training phase of the current block ended (i.e., the hash of the model and the ID have not been received), which prevents miners from over-fitting their models on the disclosed test dataset or stealing others’ models. The full nodes will confirm the new block and authorize the creator of this block. During the confirmation process, the full nodes validate the model with the highest claimed accuracy among all submissions. If the validated result equals the claimed value, the confirmation is finished. Otherwise, the full nodes continue the process and validate the models in decreasing order of claimed accuracy. This confirmation process yields a robust consensus [6, 4, 5]. ### 2.3 Other Proof-of-Useful-Work mechanisms: Primecoin [10] is built on top of a Proof-of-Useful-Work mechanism that requests all miners to solve problems. Here, the puzzle is to find a special sequence of prime numbers, so the consensus helps to solve a mathematical problem, i.e., discovering Cunningham chains. Hybrid mining [11] solves problems with the computational power of the blockchain system. Privacy-Preserving Blockchain [1] introduced two parallel chains and a dynamic committee members’ strategy; here, PoUW runs on the long-interval chain and transactions on the short-interval chain. In Coin.AI [2], miners train DL models and also store valuable data in exchange for tokens. WekaCoin [3] contributes to creating a public, distributed, and verified database. Figure 1: The full results of NAS for two miners searching in different searching spaces. The solid lines show the best reward of NAS in the two spaces. The dashed lines show the current reward of each configuration during the searching procedure. The x-axis is the number of episodes and the y-axis is the reward. ### 2.4 NAS While artificial neural networks have achieved ground-breaking success in machine learning applications, their topologies are trending towards the complex and indecipherable. As a result, the traditional architecture engineering process, relying on the labor and expertise of human designers, is regarded as either impossible or inefficient in pushing the state of the art to the next level. Therefore, design automation in neural networks is logically the future of machine learning. Fig. 1 shows the full results of the NAS of two miners searching in different searching spaces: the solid lines show the best reward of NAS in the two spaces, and the dashed lines show the current reward of each configuration during the searching procedure.
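Note that the "best reward" curves in Fig. 1 (and later in Figs. 2 and 3) are simply running maxima of the per-episode rewards reported by a miner. The short sketch below illustrates this bookkeeping with random placeholder rewards rather than the actual experimental data.

```python
import numpy as np

rng = np.random.default_rng(1)
episode_reward = rng.uniform(0.0, 0.9, size=2000)    # dashed curve: reward of each episode
best_reward = np.maximum.accumulate(episode_reward)  # solid curve: best reward found so far

print(f"final best reward: {best_reward[-1]:.4f}")
```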
As an essential part of AutoML, neural architecture search refers to the technique of automating exploration of the design space of neural network topologies. A typical use case of NAS is an abstract structure with multiple sub-structures to be optimized using a set of optional building blocks. This design space is far too large to be exhausted or manually pruned, so the success of NAS depends on a carefully devised searching strategy. In order to find architectures that improve over previous models, a variety of search algorithms have been studied in existing works, including reinforcement learning [12, 13], evolutionary methods [14, 15], Bayesian optimization [16], hill-climbing [17], etc. It is noted that NAS is a computationally intensive problem, so how to formulate the NAS problem to improve efficiency has attracted more and more research interest [18, 19]. A mainstream NAS framework today is the weight-sharing scheme, where multiple child networks are trained together as one “supernet” and inherit the same set of weights for evaluation [20]. In such frameworks, the child networks are sampled by some searching algorithm for optimal performance. In particular, [21] proposed the gradient-based method named “DARTS” to jointly search and train the child networks with very high efficiency. Soon after NAS was introduced, it was applied under hardware constraints, forming the research topic of hardware-aware NAS [22, 23]. Since quantization is one of the most common techniques for approximated computation in hardware, some works employed NAS to search for the best configuration for quantizing neural networks, also known as quantization-aware NAS [24]. Hardware-aware NAS is not only deployment-oriented but also efficient, as the hardware search can be guided. ## 3 Methodology ### 3.1 Overview of mining pool In this mining pool, the participants include miners and a manager. The pool manager receives the rewards once any of the participants finds a block, and the manager distributes the rewards to each participant [7]. The pool manager normally hosts very powerful servers to maintain stable connections and job distribution. Miners pay mining fees in return for a well-performing pool manager. The miners in the pool contribute to the assigned tasks. For hash-based consensus, miners work independently, so the individual performance of a miner does not suffer from dependencies on other miners. The amount of reward distributed depends on the ratio of computation power and contribution. The job distribution is relatively simple because the computations are relatively independent. In recent deep-learning-based PoUW consensuses [6, 4, 5, 1, 2, 3, 25], multiple papers discussed possible solutions in which the computation power of miners is spent on relatively useful work, here deep learning training tasks, for instance PoDL [6], Proof of Federated Learning [25], etc. Because all miners wish to avoid risk, a mining pool will appear naturally. In a mining pool based on NAS, the amount of reward distributed still depends on the ratio of computation power and contribution, but a weak miner may hold back the performance of the whole mining pool. Therefore, the intuitive solution is that the manager distributes easier jobs to weak miners and harder jobs to strong miners, so that all miners are able to deliver useful results. We introduce miner types in Sec.
3.2.2. ### 3.2 Design of mining pool #### 3.2.1 Deep learning consensus As introduced in Sec. 2, there are multiple existing publications about novel blockchain consensuses based on a deep learning algorithm instead of a brute-force algorithm. For demonstration purposes, this mining pool services a consensus such as the Deep Learning consensus [4, 5]. This consensus [4, 5] describes a block interval with three phases and task scheduling for each interval phase. Phase one is the initial stage, in which miners confirm the target task to train and evaluate the difficulty of the task. In [4, 5], the difficulties of the tasks include model size, data size, network bandwidth, FLOPs of the task, and computation power of the network. Phase two is the time for the miners' GPUs to train the DL tasks and for full nodes to spread the submitted tasks. Miner nodes select the next target task to train based on the ranking score, which is the ratio of the difficulty of the task over the task reward. Miner nodes are only allowed to submit training results before phase two is finished. In phase three, full nodes rank the submitted training results and evaluate the winning model. During phase two and phase three, once a task is selected for the next block, all miners fetch the data from full nodes and task publishers. If the performance of the winning model is the same as the miner claimed, the winning miner generates the current block and full nodes start to spread it. If the performance of the winning model is worse than the submitted value, the full nodes remove it and evaluate the next best model. When any node finds that the block has been generated, it validates and confirms the block. Once a block is confirmed by an individual node, that node moves on to phase one of the next block. For a mining pool, the pool manager splits the searching space of the selected NAS task into multiple subspaces, so that the whole mining pool may achieve better performance. #### 3.2.2 Miner types In the mining pool design, we separate miners into strong miners and weak miners. Strong miners can finish the search task in a given subspace. Weak miners cannot finish the search task in a given subspace due to limitations of network bandwidth, hardware, etc. In practice, the performance of a weak miner can affect the performance of the whole mining pool. In the experiments, we noticed that some subspaces can be better than others, in the sense that a miner will find a better neural architecture in such a “lucky space”. If this “lucky space” is assigned to a weak miner, the miner may waste the good opportunity to find the better neural architecture in this subspace. To increase the efficiency of the mining pool, the pool manager tries to reduce the overlap between subspaces; therefore, it is very possible that the mining pool will then not find this neural architecture and the performance of the pool will be held back. #### 3.2.3 General embarrassingly parallel computations
Input: Hyperparameters $h_{1},h_{2},...,h_{n}$ with $h_{i}$ of searching range $r_{i}=\\{R_{i}^{1},R_{i}^{2},...,R_{i}^{k}\\}$; $m$ is the total number of miners.
Return: A list of subspaces, Subspace = $[S_{1},S_{2},...,S_{m}]$.
Initialization:
Initialize an empty list Subspace of size $m$;
Initialize a table T whose keys are the indices of the hyperparameters and whose values are lists of searching ranges for the corresponding hyperparameter;
for $i=1;\ i\leq n;\ i=i+1$ do
T[$i$] = a list of all subsets of $r_{i}$;
end for
Partition:
for $i=1;\ i\leq m;\ i=i+1$ do
Initialize the subspace $S_{i}$ for miner $i$;
for $j=1;\ j\leq n;\ j=j+1$ do
selected searching range for hyperparameter $j$ = T[$j$][randint()];
$S_{i}$.append(selected searching range);
end for
Subspace.append($S_{i}$);
end for
return Subspace;
Algorithm 1 Partition of search space for miners
In this mining pool, the pool manager splits the search space into multiple subspaces and each miner searches for neural architectures independently. This reduces the effect of network bandwidth on the performance of the whole searching task. This parallelism does not require any dependence between miners during the search; it is therefore known as embarrassing parallelism. In NAS tasks, the searching space is high-dimensional and covers all combinations of neural network configurations, so finding a high-performance model in this space may require many experiments. NAS helps to find a good configuration of a neural network. In Algorithm 1, each hyperparameter $h_{i}$ $(\in\\{h_{1},h_{2},...,h_{n}\\})$ of the target neural network can be selected from a searching range $r_{i}=[R_{i}^{1},R_{i}^{2},...,R_{i}^{k}]$. Here $i$ is the index of the hyperparameter, $k$ represents the maximum size of the searching range of the $i$-th hyperparameter, and $n$ is the total number of hyperparameters. A subspace $S$ $(\in S_{1},S_{2},...,S_{m})$ is formed by $n$ elements, each of which is a searching range of one hyperparameter. For instance, to assign a subspace to a miner, the mining pool manager follows Algorithm 1 and selects one searching range for each hyperparameter. In Section 4, we demonstrate the algorithm with concrete values for a given NAS task (a short code sketch is also given at the end of this section). #### 3.2.4 Collaboration strategy of exploration and exploitation The tasks of the pool manager include collecting tasks, splitting tasks, collecting results, and submitting the best results. The pool manager collects input information from full nodes and splits the searching space into subspaces as described in Algorithm 1. The pool manager collects the best results from all miners and submits the best solution to full nodes before the training phase ends. Here, the collaboration strategy of exploration and exploitation is to schedule strong miners and weak miners separately. In practice, it may be easier to find better performance in some subspaces than in others. The NAS agent is designed to propose a certain neural architecture without training it. If a weak miner is assigned to search for neural architectures in the subspace which contains the final best solution, the weak miner may not have sufficient computation power to find the final target architecture and will thus waste the opportunity. Therefore, search subspaces are only sent to the strong miners. Once a solution is confirmed to beat the current best result, the strong miners share the corresponding hyperparameters with weak miners, and the weak miners continue to exploit the confirmed architecture. The weak miners also share improved solutions back with the strong miners. All miners submit their best solution to the pool manager once they find a solution that beats the current best result.
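The following Python sketch mirrors Algorithm 1; the function names and the restriction to non-empty subsets are our own choices for illustration and do not come from the paper's actual implementation.

```python
import random
from itertools import combinations

def all_subsets(searching_range):
    """All non-empty subsets of one hyperparameter's searching range (the table entry T[i])."""
    subsets = []
    for size in range(1, len(searching_range) + 1):
        subsets.extend(list(c) for c in combinations(searching_range, size))
    return subsets

def partition_search_space(ranges, n_miners, seed=0):
    """ranges: searching ranges r_1..r_n; returns one subspace S_i per miner."""
    random.seed(seed)
    table = {i: all_subsets(r) for i, r in enumerate(ranges)}   # table T
    subspaces = []
    for _ in range(n_miners):
        subspace = [random.choice(table[j]) for j in range(len(ranges))]
        subspaces.append(subspace)
    return subspaces

# Example: two hyperparameters (e.g., kernel height and pool size) and three miners.
print(partition_search_space([[1, 3, 5, 7, 9], [1, 2]], n_miners=3))
```

Because each searching range is drawn independently from the set of all subsets, two miners can receive overlapping subspaces, which matches the possible overlap noted in subsection 4.1.4.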
## 4 Experiment

Table 1: Model architecture subspaces for subspaces S1 to S9 and the full space.

space ID | kernel height | kernel width | number of kernels | stride height | stride width | pool size
---|---|---|---|---|---|---
full space | 1, 3, 5, 7, 9 | 1, 3, 5, 7, 9 | 4, 8, 12, 24, 36, 64, 128 | 1, 2, 3, 4, 5 | 1, 2, 3, 4, 5 | 1, 2
subspace S1 | 1, 5, 7 | 3, 5, 7 | 24, 36, 48, 64 | 1, 2, 3 | 1, 2, 3 | 1, 2
subspace S2 | 1, 3, 5, 7 | 1, 3, 5, 7 | 24, 36, 48, 64 | 1, 2, 3 | 1, 2, 3 | 1, 2
subspace S3 | 1, 3, 5, 7, 9 | 1, 3, 5, 7, 9 | 4, 8, 12, 24, 36, 64, 128 | 0, 1, 2, 3 | 0, 1, 2, 3 | 1
subspace S4 | 1, 3, 5, 7, 9 | 1, 3, 5, 7, 9 | 4, 8, 12, 24, 36, 64, 128 | 1, 2, 3, 4, 5 | 1, 2, 3, 4, 5 | 1
subspace S5 | 1, 3, 5, 7, 9 | 1, 3, 5, 7, 9 | 4, 8, 12, 24, 36, 64, 128 | 1, 2, 3, 4, 5 | 1, 2, 3, 4, 5 | 1
subspace S6 | 1, 3, 5 | 1, 3, 5 | 4, 8, 12 | 1, 2, 3 | 1, 2, 3 | 1
subspace S7 | 5, 7, 9 | 5, 7, 9 | 32, 64, 128 | 3, 4, 5 | 3, 4, 5 | 1
subspace S8 | 5, 7, 9 | 5, 7, 9 | 32, 64, 128 | 3, 4, 5 | 3, 4, 5 | 1
subspace S9 | 1, 3, 5 | 1, 3, 5 | 24, 36 | 1, 2, 3 | 1, 2, 3 | 1

Table 2: Model quantization subspaces for subspaces S1 to S9 and the full space.

space ID | act_num_int_bits | act_num_frac_bits | weight_num_int_bits | weight_num_frac_bits
---|---|---|---|---
full space | 0, 1, 2, 3 | 0, 1, 2, 3, 4, 5, 6 | 0, 1, 2, 3, 4 | 0, 1, 2, 3, 4, 5, 6
subspace S1 | 1, 2, 3 | 1, 2, 3, 4, 5 | 0, 1, 2, 3, 4 | 2, 3, 4, 5
subspace S2 | 0, 1, 2, 3 | 0, 1, 2, 3, 4, 5, 6 | 0, 1, 2, 3 | 0, 1, 2, 3, 4, 5, 6
subspace S3 | 0, 1, 2, 3 | 0, 1, 2, 3, 4, 5, 6 | 0, 1, 2, 3 | 0, 1, 2, 3, 4, 5, 6
subspace S4 | 2, 3 | 4, 5, 6 | 2, 3 | 4, 5, 6
subspace S5 | 0, 1 | 1, 2, 3 | 0, 1 | 1, 2, 3
subspace S6 | 0, 1, 2, 3 | 0, 1, 2, 3, 4, 5, 6 | 0, 1, 2, 3 | 0, 1, 2, 3, 4, 5, 6
subspace S7 | 0, 1, 2, 3 | 0, 1, 2, 3, 4, 5, 6 | 0, 1, 2, 3 | 0, 1, 2, 3, 4, 5, 6
subspace S8 | 2, 3 | 4, 5, 6 | 2, 3 | 4, 5, 6
subspace S9 | 2, 3 | 5, 6 | 2, 3 | 5, 6

In this section, we first introduce the experimental setup in subsection 4.1, including the general NAS settings, the hardware information, and the weak miner simulation. In subsection 4.2, we analyze the performance of the mining pool searching in nine different subspaces versus the performance of an individual miner searching in the full space. In subsection 4.3, we analyze the effect of miner reliability on the performance of the mining pool. ### 4.1 Experiment setup #### 4.1.1 Experiment overview To evaluate the performance of the mining pool, we adopt the popular RNN-based NAS [12]. The NAS algorithm aims to find a CNN architecture for classification tasks under certain hardware constraints. We use the CIFAR-10 dataset [26], which contains images from 10 classes. All experiments were deployed on a workstation with an Intel(R) Core(TM) i7-9900K CPU @ 3.60 GHz, 32 GB RAM, and 2$\times$ GTX 1080 Ti GPUs. #### 4.1.2 Hardware constraints The hardware constraints include the size of the FPGA lookup table and the throughput. The size of the FPGA lookup table represents the required chip size; in this NAS task, we set 100,000 as its upper bound. The throughput represents the latency of the architecture; here, we set 10 as its lower bound. Due to these strong hardware constraints, the performance of a neural architecture may not be able to beat the performance of a model with unlimited hardware resources. #### 4.1.3 Searching and reward In the searching process, we first evaluate the required chip size and latency.
Based on the hyperparameters, if the neural architecture does not exceed the bounds, the miner starts training and returns the best testing accuracy within 30 epochs of training in this experiment. If the result beats the previous best reward, it is recorded as the new best reward. Here, the reward is the testing accuracy of the image classifier; the same value is used as the reward for the RNN controller, which selects the best hyperparameter value in each given searching range. If the required lookup table size or the throughput exceeds its bound, the controller returns zero as the reward for the RNN. For each subspace, the controller searches for 2000 episodes and each episode trains for 30 epochs. #### 4.1.4 Experiment input data Table 1 shows the model architecture subspaces for subspaces S1 to S9 and the full space. Table 2 shows the model quantization subspaces for subspaces S1 to S9 and the full space. The hyperparameters for the model architecture include kernel height, kernel width, number of kernels, stride height, stride width, and pool size. The hyperparameters for model quantization include the number of bits of the activation integer part, the number of bits of the activation fraction part, the number of bits of the weight integer part, and the number of bits of the weight fraction part. The total number of hyperparameters $n$ equals 10. The maximum sizes $k$ of the searching ranges $r_{i}$ are (5, 5, 7, 5, 5, 2, 4, 7, 5, 7). The total number of miners $m$ equals 9. The full searching ranges for each hyperparameter are given in the full-space row of Tables 1 and 2. The rows for subspaces S1 to S9 give the assigned searching ranges of each hyperparameter in each subspace. In Algorithm 1, the table $T$ stores all subsets of each searching range; $T[i]$ holds a list of all subsets of the searching range of the $i$-th hyperparameter. Because the table $T$ contains all subsets and miners fetch the searching range for a certain hyperparameter randomly, it is possible for one subspace to overlap another subspace. Algorithm 1 may therefore lead to an inefficient searching-space partition, but we demonstrate in subsection 4.2 that searching-space overlap is not an efficiency issue. A more efficient searching-space partition may improve the NAS mining pool performance, but it is beyond our current scope. #### 4.1.5 Weak miner simulation As explained in section 3, a weak miner may drag down the overall performance, so we need to simulate weak miners in our experiment. In practice, many factors affect the performance of a miner, such as CPU frequency and core count, memory size and bandwidth, GPU, etc. This makes it difficult to evaluate the consequences of assigning the same amount of workload. We use a workstation with the same configuration and hardware, but only allow a shorter run time to simulate a weak miner; for example, such a machine runs 1/10 of the total strong-miner run time to simulate a miner that is 10 times weaker. ### 4.2 Benchmark of mining pool Figure 2: The results of NAS in the full searching space and in subspaces 1 to 9. The x-axis is the number of episodes and the y-axis is the best reward. The experiments are conducted as described in subsection 4.1. As an individual miner searches for neural architectures in a searching space, it evaluates the required size of the FPGA lookup table.
If the required size is bigger than the given upper bound, the miner returns zero as the accuracy of the current configuration. Otherwise, the miner continues to evaluate the throughput of the current configuration; if it is lower than the given lower bound, the miner returns zero as the accuracy of the current configuration. Otherwise, the miner starts to train the current neural architecture for 30 epochs, and then the controller tests the model. If the result beats the current best reward, the controller updates the value of the current best reward; otherwise, the controller only saves the testing accuracy as the reward of the current configuration. The controller repeats this procedure for 2000 episodes. The current reward is the test accuracy of the current configuration and is used to train the RNN model. As the performance of the RNN model improves, the controller finds better-performing neural architectures under the given hardware constraints more efficiently. In the novel-consensus blockchain system, the full nodes only check the best-performing models. Therefore, we only show the best reward (accuracy) in Figs. 2 and 3. In Fig. 2, the solid line shows the result of one individual miner searching in the full searching space, and the dashed lines show the results of nine miners searching in the subspaces S1 to S9 independently. In Fig. 2, four of the miners return low-performance results and four return high-performance results; the results for subspaces S7 and S8 overlap. The searching ranges of each hyperparameter are given in the full-space row of Tables 1 and 2. In this subsection, we evaluate whether this mining pool helps to find a better architecture than a single miner. This may seem straightforward, but due to network latency and system overhead, it is not always true that a group of machines finds a better neural network architecture than a single machine. In our design, we applied embarrassingly parallel computation with no dependence between different miners, so network latency does not strongly affect performance. As shown in Fig. 2, this embarrassingly parallel computation helps the whole mining pool to be more competitive than a single miner. Four miners from the mining pool, searching in subspaces S1, S2, S4, and S9, return better results than the individual miner during the majority of the mining time. Three miners return better results at the end, and two miners return better results from beginning to end. The miner searching in subspace S1 finds a very good neural architecture at a very early time and finds the best-performing neural architecture at the end. Fig. 3 is a zoomed-in result where the y-axis range is from 0.7 to 0.9. Around episodes 200 to 250, the miner searching subspace S1 is overtaken by the miners in subspaces S2 and S4. The miner searching subspace S4 keeps the leading position until episodes 1300 to 1400. In practice, miners will adjust the number of episodes based on the length of the given training phase and their computation power. These results show that more miners searching in different subspaces help the mining pool to be more competitive during the whole training phase.
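To make the per-episode procedure above concrete, here is a schematic sketch of the reward evaluation. The functions `estimate_lut_size`, `estimate_throughput`, and `train_and_test` are hypothetical placeholders standing in for the real FPGA cost model and the 30-epoch training run, so the sketch only reproduces the control flow, not the actual measurements.

```python
import random

LUT_UPPER_BOUND = 100_000      # upper bound on the FPGA lookup-table size
THROUGHPUT_LOWER_BOUND = 10    # lower bound on throughput

def estimate_lut_size(config):       # placeholder for the real FPGA cost model
    return random.randint(10_000, 200_000)

def estimate_throughput(config):     # placeholder for the real latency/throughput model
    return random.uniform(1, 50)

def train_and_test(config, epochs=30):   # placeholder: train the CNN, return test accuracy
    return random.uniform(0.3, 0.9)

def episode_reward(config):
    """Zero reward when a hardware bound is violated, otherwise the test accuracy."""
    if estimate_lut_size(config) > LUT_UPPER_BOUND:
        return 0.0
    if estimate_throughput(config) < THROUGHPUT_LOWER_BOUND:
        return 0.0
    return train_and_test(config, epochs=30)

best = 0.0
for episode in range(2000):        # 2000 episodes per subspace
    config = {}                    # in practice, proposed by the RNN controller
    best = max(best, episode_reward(config))
print(f"best reward after 2000 episodes: {best:.4f}")
```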
### 4.3 Evaluation of the effect of miner reliability #### 4.3.1 Miner reliability Subsection 4.2 shows that the performance of this mining pool based on embarrassingly parallel computations is more competitive than the performance of an individual miner. In this small-scale evaluation, only 9 miners are searching in 9 different subspaces. In Fig. 2, four of the miners return low-performance results and four return high-performance results. This ratio of high-performance to low-performance miners is not guaranteed in large-scale testing, but it shows that a good fraction of miners return high-performance results. In particular, the results from the different subspaces S1 and S4 are very close in Fig. 3. In a large-scale mining pool with multiple miners like those in subspaces S1 and S4, the overall performance of the mining pool does not depend on any individual miner. Fig. 4 shows the standard deviation of the results for the miners searching in subspaces S1, S2, S4, and S9. This standard deviation is calculated from all high-reward results in the current experiments. The standard deviation is high when the number of episodes is small; after 500 episodes, it becomes low and flattens with more episodes of searching. When evaluating the mining pool performance, it is also important to check whether the overall performance of the mining pool depends on any individual miner. The data should only be collected from the miners who return high reward values. Once the standard deviation among these miners rises, the mining pool manager needs to prepare backup miners to continue the searching task in case any high-reward miner leaves the pool. The backup miners can be selected from the miners who return low rewards. In Fig. 2, the miners searching subspaces S1, S2, S4, and S9 are considered high-reward miners, and the miners searching subspaces S3, S5, S7, and S8 are considered low-reward miners in this experiment. Only strong miners search for neural architectures; here, both high- and low-reward miners are strong miners. Because miners may leave the mining pool at any time, the manager needs to handle this case. Figure 3: The results of NAS in subspaces S1, S2, S4, and S9. The x-axis is the number of episodes and the y-axis is the best reward, with the range from 0.7 to 0.9. Figure 4: The standard deviation of the results for the miners searching in subspaces S1, S2, S4, and S9. The x-axis is the number of episodes and the y-axis is the standard deviation value. #### 4.3.2 Exploration and exploitation As described in section 3, the mining pool manager assigns exploration tasks to strong miners and assigns exploitation tasks to weak miners once strong miners return a confirmed neural architecture. Therefore, there is no chance that a weak miner drags down the overall performance of the mining pool, and the strong miners are able to spend their valuable computation power on searching more of the space, so the performance can be maximized. As explained in section 3, a weak miner may drag down the overall performance. As described in subsection 4.1, we simulate a weak miner by scaling the run time down to 1/10 of that of a strong miner. When the weak miner is assigned to search architectures from such a subspace, it occupies the setting most likely to yield the best solution. Based on Figs. 2 and 3, it is easy to find high-performance neural architectures in subspaces S1 and S4.
Without this exploration and exploitation strategy, if miners that are 10 times slower are assigned to search subspaces S1 and S4, the miner in subspace S2 is the only contributor to the overall best performance of the mining pool, which is 0.7829; the weak miners return 0.5701 and 0.7724. With the exploration and exploitation strategy, the overall performance of the mining pool is 0.8204. This weak-miner simulation therefore shows that a weak miner is only able to search for 200 episodes and will eventually waste a good opportunity to find the best solution. Other miners might achieve results similar to the best solution from different subspaces; however, the mining pool manager does not schedule other miners to search in the same subspace. Therefore, without this exploration and exploitation strategy, a weak miner may waste a good opportunity and drag down the performance of the whole mining pool. ## 5 Conclusion In this paper, we briefly introduced recent novel PoUW consensuses and the details of consensuses based on deep learning training. We adopt NAS as the workload to demonstrate the concepts. For a publicly accessible cryptocurrency blockchain system, earning tokens is the incentive for an individual miner to participate in mining. In a large-scale cryptocurrency system, the probability of earning tokens for an individual is very low. A mining pool helps individual miners to be profitable in a large-scale cryptocurrency blockchain system. In this project, we are the first to demonstrate a mining pool solution to support novel deep-learning-based consensuses. In section 3, we introduced the functions of the pool manager, the subspace partition algorithm, and the exploration & exploitation strategy. In section 4, we evaluated the performance of the mining pool, which is more competitive than an individual miner who searches the full space individually. Due to the uncertainty of individual miners, the mining pool manager keeps track of the high-performance miners and prepares backup miners as introduced in subsection 4.3. ## References * Turesson et al. [2018] H. Turesson, A. Roatis, H. Kim, M. Laskowski, Deep learning models as proof-of-useful work: A smarter, utilitarian scheme for achieving consensus on a blockchain (2018). * Gómez and Sáez [2019] A. B. Gómez, Y. Sáez, Coin.AI: A proof-of-useful-work scheme for blockchain-based distributed deep learning, Entropy 21 (2019). * Bravo-Marquez et al. [2019] F. Bravo-Marquez, S. Reeves, M. Ugarte, Proof-of-learning: a blockchain consensus mechanism based on machine learning competitions, in: 2019 IEEE International Conference on Decentralized Applications and Infrastructures (DAPPCON), IEEE, 2019, pp. 119–124. * Li et al. [2019a] B. Li, C. Chenli, X. Xu, T. Jung, Y. Shi, Exploiting computation power of blockchain for biomedical image segmentation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019a. * Li et al. [2019b] B. Li, C. Chenli, X. Xu, Y. Shi, T. Jung, DLBC: A deep learning-based consensus in blockchains for deep learning services, arXiv preprint arXiv:1904.07349 (2019b). * Chenli et al. [2019] C. Chenli, B. Li, Y. Shi, T. Jung, Energy-recycling blockchain with proof-of-deep-learning, in: IEEE International Conference on Blockchain and Cryptocurrency, IEEE, 2019. * Narayanan et al. [2016] A. Narayanan, J. Bonneau, E. Felten, A. Miller, S. Goldfeder, Bitcoin and cryptocurrency technologies: A comprehensive introduction, Princeton University Press, 2016.
* Poelstra et al. [2014] A. Poelstra, et al., Distributed consensus from proof of stake is impossible, URL: https://download.wpsoftware.net/bitcoin/old-pos.pdf (2014). * Ogawa et al. [2018] T. Ogawa, H. Kima, N. Miyaho, Proposal of proof-of-lucky-id (PoL) to solve the problems of PoW and PoS, in: Blockchain, IEEE, 2018. * King [2013] S. King, Primecoin: Cryptocurrency with prime number proof-of-work, July 7th (2013). * Chatterjee et al. [2019] K. Chatterjee, A. K. Goharshady, A. Pourdamghani, Hybrid mining: exploiting blockchain’s computational power for distributed problem solving, in: Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, ACM, 2019, pp. 374–381. * Zoph and Le [2017] B. Zoph, Q. V. Le, Neural architecture search with reinforcement learning, in: 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017. * Zoph et al. [2018] B. Zoph, V. Vasudevan, J. Shlens, Q. V. Le, Learning transferable architectures for scalable image recognition, in: 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, 2018, pp. 8697–8710. * Real et al. [2017] E. Real, S. Moore, A. Selle, S. Saxena, Y. L. Suematsu, J. Tan, Q. V. Le, A. Kurakin, Large-scale evolution of image classifiers, in: Proceedings of the 34th International Conference on Machine Learning - Volume 70, 2017, pp. 2902–2911. * Liu et al. [2018] H. Liu, K. Simonyan, O. Vinyals, C. Fernando, K. Kavukcuoglu, Hierarchical representations for efficient architecture search, in: 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings, 2018. * Kandasamy et al. [2018] K. Kandasamy, W. Neiswanger, J. Schneider, B. Póczos, E. P. Xing, Neural architecture search with Bayesian optimisation and optimal transport, in: Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2018, pp. 2020–2029. * Elsken et al. [2018] T. Elsken, J. H. Metzen, F. Hutter, Simple and efficient architecture search for convolutional neural networks, in: 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Workshop Track Proceedings, 2018. * Pham et al. [2018] H. Pham, M. Guan, B. Zoph, Q. Le, J. Dean, Efficient neural architecture search via parameters sharing, in: Proceedings of the 35th International Conference on Machine Learning, volume 80, 2018, pp. 4095–4104. * Yan et al. [2019] S. Yan, B. Fang, F. Zhang, Y. Zheng, X. Zeng, H. Xu, M. Zhang, HM-NAS: Efficient neural architecture search via hierarchical masking, 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW) (2019) 1942–1950. * Guo et al. [2020] Z. Guo, X. Zhang, H. Mu, W. Heng, Z. Liu, Y. Wei, J. Sun, Single path one-shot neural architecture search with uniform sampling, in: Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XVI, 2020, pp. 544–560. * Liu et al. [2019] H. Liu, K. Simonyan, Y. Yang, DARTS: Differentiable architecture search, in: 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. * Jiang et al. [2019] W. Jiang, X. Zhang, E. H. M. Sha, L. Yang, Q. Zhuge, Y. Shi, J. Hu, Accuracy vs.
efficiency: Achieving both through FPGA-implementation aware neural architecture search, in: Proceedings of the 56th Annual Design Automation Conference, 2019. * Wu et al. [2019] B. Wu, X. Dai, P. Zhang, Y. Wang, F. Sun, Y. Wu, Y. Tian, P. Vajda, Y. Jia, K. Keutzer, FBNet: Hardware-aware efficient convnet design via differentiable neural architecture search, in: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 10726–10734. * Wang et al. [2019] K. Wang, Z. Liu, Y. Lin, J. Lin, S. Han, HAQ: Hardware-aware automated quantization with mixed precision, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. * Qu et al. [2021] X. Qu, S. Wang, Q. Hu, X. Cheng, Proof of federated learning: A novel energy-recycling consensus algorithm, IEEE Transactions on Parallel and Distributed Systems 32 (2021) 2074–2085. * Krizhevsky et al. [2009] A. Krizhevsky, V. Nair, G. Hinton, CIFAR-10 and CIFAR-100 datasets, URL: https://www.cs.toronto.edu/~kriz/cifar.html (2009).
# Comparison of Electron Capture Rates in the N=50 Region using 1D Simulations of Core-collapse Supernovae Zac Johnston Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA Joint Institute for Nuclear Astrophysics – Center for the Evolution of the Elements, Michigan State University, East Lansing, MI 48824, USA<EMAIL_ADDRESS>Sheldon Wasik Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA Rachel Titus Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824, USA MacKenzie L. Warren Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA Joint Institute for Nuclear Astrophysics – Center for the Evolution of the Elements, Michigan State University, East Lansing, MI 48824, USA Department of Physics, North Carolina State University, Raleigh, NC 27695, USA Evan P. O’Connor The Oskar Klein Centre, Department of Astronomy, Stockholm University, AlbaNova, SE-106 91 Stockholm, Sweden Remco Zegers Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA Joint Institute for Nuclear Astrophysics – Center for the Evolution of the Elements, Michigan State University, East Lansing, MI 48824, USA National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824, USA Sean M. Couch Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA Joint Institute for Nuclear Astrophysics – Center for the Evolution of the Elements, Michigan State University, East Lansing, MI 48824, USA National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824, USA Department of Computational Mathematics, Science, and Engineering, Michigan State University, East Lansing, MI 48824, USA (Received 18 February 2022; Revised 9 September 2022; Accepted 16 September 2022) ###### Abstract Recent studies have highlighted the sensitivity of core-collapse supernovae (CCSNe) models to electron-capture (EC) rates on neutron-rich nuclei near the $N=50$ closed-shell region. In this work, we perform a large suite of one- dimensional CCSN simulations for 200 stellar progenitors using recently updated EC rates in this region. For comparison, we repeat the simulations using two previous implementations of EC rates: a microphysical library with parametrized $N=50$ rates (LMP), and an older independent-particle approximation (IPA). We follow the simulations through shock revival up to several seconds post-bounce, and show that the EC rates produce a consistent imprint on CCSN properties, often surpassing the role of the progenitor itself. Notable impacts include the timescale of core collapse, the electron fraction and mass of the inner core at bounce, the accretion rate through the shock, the success or failure of revival, and the properties of the central compact remnant. We also compare the observable neutrino signal of the neutronization burst in a DUNE-like detector, and find consistent impacts on the counts and mean energies. Overall, the updated rates result in properties that are intermediate between LMP and IPA, and yet slightly more favorable to explosion than both. 
Journal: ApJ. Software: FLASH (https://flash.rochester.edu/site/; Fryxell et al., 2000; Dubey et al., 2009), NSCL Weak-rate Library v1.2 (Sullivan, 2015), NuLib (O’Connor, 2015), SNOwGLoBES (Scholberg, 2012), GLoBES (Huber et al., 2005), Matplotlib (https://matplotlib.org; Hunter, 2007), NumPy (https://www.numpy.org; Harris et al., 2020), SciPy (https://www.scipy.org; Virtanen et al., 2020), yt (https://yt-project.org; Turk et al., 2011), Pandas (https://pandas.pydata.org; Pandas development team et al., 2022), xarray (https://xarray.pydata.org; Hoyer & Hamman, 2017; Hoyer et al., 2020), Astropy (https://www.astropy.org; Astropy Collaboration et al., 2013, 2018), flashbang (https://github.com/zacjohnston/flashbang; Johnston, 2022a), flash_snowglobes (https://github.com/zacjohnston/flash_snowglobes; Johnston, 2022b). ## 1 Introduction Massive stars ($\gtrsim 8\,\mathrm{M_{\odot}}$) are destined to undergo iron core-collapse, either imploding entirely into a black hole (BH), or violently ejecting their outer layers and leaving behind a proto-neutron star (PNS) in a core-collapse supernova (CCSN; see reviews in Janka et al., 2007; Müller, 2020). Of the myriad physical processes that contribute to these stellar deaths, the capture of electrons onto protons via the weak interaction plays a central role. Electron-capture (EC) regulates the deleptonization of nuclear matter during collapse, and thus helps to set the initial conditions of the shock at core bounce (see reviews in Langanke & Martínez-Pinedo, 2003; Langanke et al., 2021). The uncertainties on electron capture rates can span orders of magnitude and produce larger variations in core-collapse properties than changes to the nuclear equation of state (EOS) or stellar progenitor (Sullivan et al., 2016; Pascal et al., 2020). It is experimentally and computationally difficult to constrain EC rates under astrophysical conditions, especially for the heavy, neutron-rich nuclei relevant to core-collapse. For this reason, CCSN simulations typically rely on parametrized approximations, particularly those of Bruenn (1985) and Langanke et al. (2003). Recent decades, however, have seen the continued development of tabulated rates for larger numbers of nuclei based on shell-model calculations (e.g., Oda et al., 1994; Langanke & Martínez-Pinedo, 2000; Langanke et al., 2003; Suzuki et al., 2016). A systematic study by Sullivan et al. (2016) showed that core-collapse simulations were most sensitive to changes in the electron capture rates of neutron-rich nuclei near the $N=50$, $Z=28$ closed-shell region. The drawback is that most of these rates relied on the parametrization of Langanke et al. (2003), which is extrapolated from rates on nuclei near the valley of stability. In a follow-up study focusing on 74 nuclei in the high-sensitivity region, Titus et al. (2018) showed that these rates are likely overestimated by up to two orders of magnitude, further emphasizing the need for updated rates. Using new experimental measurements of the $\mathrm{{}^{86}Kr}(t,\mathrm{{}^{3}He}+\gamma)$ charge-exchange reaction, Titus et al. (2019) calculated microphysical rates on 78 nuclei in this region, confirming that the parametrized rates are overestimated. In the meantime, Raduta et al. (2017) improved upon the parametrization of Langanke et al. (2003) for neutron-rich nuclei by accounting for temperature, electron density, and odd-even effects.
Comparing one-dimensional (1D) core-collapse simulations using this improved parametrization, Pascal et al. (2020) demonstrated the expected increase in core mass and electron fraction due to an average decrease in EC rates. Pascal et al. (2020) also performed a rate sensitivity study, independently verifying the findings of Sullivan et al. (2016) and Titus et al. (2018) that EC rates in the $N=50$ region remain the most crucial for CCSNe. In this paper, we present simulations of CCSNe through to shock revival/failure for 200 stellar progenitors using the updated $N=50$ rates from Titus et al. (2019). We run corresponding simulations using the baseline rate library from Sullivan et al. (2016) with the improved approximation of Raduta et al. (2017), and a third set using the independent-particle approximation (IPA) of Bruenn (1985). By comparing the model sets, we investigate the impact of updated EC rates on core collapse, shock revival, and neutrino emission across a variety of progenitors. The paper is structured as follows. In Section 2 we describe our methods, including the EC rate tables (§ 2.1), the setup of the CCSN simulations (§ 2.2), and the calculation of observable neutrino signals (§ 2.3). In Section 3, we compare the simulation results, with a detailed comparison of three reference progenitors (§ 3.1), the impact across the full progenitor population (§ 3.2), the compact remnants (§ 3.3), and the predicted neutrino signal (§ 3.4). In Section 4 we interpret our results and compare them to previous studies, and give concluding remarks in Section 5. Figure 1: Post-bounce evolution of shock radius ($r_{\mathrm{sh}}$) and neutrino heating efficiency in the gain region ($\eta_{\mathrm{heat}}$) for three reference progenitors (§ 3.1). The 0.01$\times{}$ and 10$\times{}$ models consist of the LMP and LMP+N50 rates systematically scaled by factors of $0.01$ and $10$ (§ 2.1). Note the different $\eta_{\mathrm{heat}}$ ranges. ## 2 Methods To explore the impact of EC rates on CCSNe, we run multiple large sets of 1D simulations using the FLASH code (Fryxell et al., 2000; Dubey et al., 2009). For initial conditions, we use 200 stellar progenitor models from Sukhbold et al. (2016) with zero-age main-sequence (ZAMS) masses between $9$ and $120\,\mathrm{M_{\odot}}$ (§ 2.2). For each progenitor, we run simulations using three different implementations of EC rates, which include an independent-particle approximation, microphysical calculations, and updated experimental rates (§ 2.1). For the $12$, $20$, and $40\,\mathrm{M_{\odot}}$ progenitors, we also run simulations with the microphysical rates scaled by factors of $0.01$ and $10$. In total, this results in 612 supernova models evolved to between $1$ and $5\,\mathrm{s}$ post-bounce. ### 2.1 EC Rates The first of our three EC rate sets uses the IPA on a mean nucleus (Fuller et al., 1982; Bruenn, 1985). These rates were used in the comparable FLASH simulations of Couch et al. (2020) and Warren et al. (2020, though see § 2.2 for a note on differences). Crucially, IPA assumes that EC completely halts for nuclei with $N\geq 40$ due to Pauli blocking, thus only permitting captures on free protons at densities above $\sim 10^{10}\,\mathrm{g\,cm^{-3}}$, where neutron-rich nuclei dominate the composition (Langanke et al., 2003).
The second set, which we label LMP, uses a library of microphysical rates compiled by the National Superconducting Cyclotron Laboratory (NSCL) Charge-Exchange Group (https://github.com/csullivan/weakrates; https://groups.nscl.msu.edu/charge_exchange/weakrates.html; Sullivan et al., 2016; Titus et al., 2018). This library includes rates from Fuller et al. (1982), Oda et al. (1994), Langanke & Martínez-Pinedo (2000), Langanke et al. (2003), Pruet et al. (2003) and Suzuki et al. (2016). For nuclei that are not covered by the above calculations, the single-state parametrization from Raduta et al. (2017, Model 3) is used, which extends the approximation of Langanke et al. (2003) to account for temperature, electron density, and odd-even effects. By unblocking EC on neutron-rich nuclei, these modern calculations replace the assumption in IPA that deleptonization is dominated at high densities by EC on free protons (Langanke et al., 2003).

The third set, which we label LMP+N50, is the same as LMP except for updated microphysical rates in the region around $N=50$, $Z=28$, where core-collapse is known to be highly sensitive (Sullivan et al., 2016; Titus et al., 2018; Pascal et al., 2020). These new rates were calculated for 78 nuclei by Titus et al. (2019) using a quasi-particle random-phase approximation (QRPA) with constraints from $(t,\mathrm{{}^{3}He}+\gamma)$ charge-exchange experiments. The LMP set largely relies on the Raduta et al. (2017) parametrization for these rates, which are up to two orders of magnitude higher than those in the LMP+N50 set. This is because the LMP+N50 calculations consider only transitions from the ground state, in which case the Pauli-blocking effects that reduce the rates play an important role (Titus et al., 2018, 2019). Recently, Dzhioev et al. (2020) argued that Pauli unblocking at finite temperature may actually reduce or eliminate this gap. Indeed, during the latter stages of our present study, Giraud et al. (2022) reported new finite-temperature calculations for the $N=50$ region, which resulted in rates around an order of magnitude higher than LMP+N50 for $T \lesssim 10\,\mathrm{GK}$ and $\rho Y_{e} = 10^{11}\,\mathrm{g\,cm^{-3}}$. This improvement brings the rates closer to the original LMP approximation, although they remain lower by about a factor of 5. The calculations by Dzhioev et al. (2020) and Giraud et al. (2022) both indicate that the temperature-dependent effects significantly increase the EC rates compared to those estimated on the basis of captures on the ground state only by Titus et al. (2019). Although a future study will be required to determine the impact of the rates developed in Giraud et al. (2022), we note that the LMP+N50 rates by Titus et al. (2019) should be regarded as lower limits, and on the basis of Giraud et al. (2022), the LMP rates provide a more realistic estimate but are likely still overestimated.

For the $12$, $20$, and $40\,\mathrm{M_{\odot}}$ progenitors, in addition to the three EC rate sets above, we also systematically scale the rates of LMP and LMP+N50 following the approach used in Sullivan et al. (2016), whereby the rates of all nuclei with atomic mass numbers of $A>4$ are scaled by factors of $0.01$ and $10$ (hereafter labeled 0.01$\times{}$ and 10$\times{}$).
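As a concrete illustration of how the 0.01$\times{}$ and 10$\times{}$ sets are built, the scaling is applied uniformly to every tabulated nucleus with $A>4$, while rates on free nucleons and light nuclei are left untouched. The sketch below assumes a hypothetical in-memory representation of the rate table (a dict keyed by (Z, A) holding log10 rates on a temperature–density grid); the actual library stores and interpolates its tables internally.

```python
import numpy as np

def scale_heavy_nucleus_rates(log10_rate_table, factor):
    """Return a copy of the table with EC rates on nuclei with A > 4 scaled.

    log10_rate_table: dict mapping (Z, A) -> ndarray of log10(rate) on a
    (temperature, rho*Ye) grid. Hypothetical structure, for illustration only.
    """
    scaled = {}
    for (Z, A), log10_rate in log10_rate_table.items():
        if A > 4:
            scaled[(Z, A)] = log10_rate + np.log10(factor)   # uniform scaling
        else:
            scaled[(Z, A)] = log10_rate.copy()                # leave untouched
    return scaled

# The two systematically scaled sets used for the 12, 20, and 40 Msun runs:
# table_x001 = scale_heavy_nucleus_rates(lmp_table, 0.01)
# table_x10  = scale_heavy_nucleus_rates(lmp_table, 10.0)
```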
Table 1: Neutrino detection channels used in the SNOwGLoBES analysis for a DUNE-like liquid argon detector.

Channel | Reaction | Flavor | Share of total counts (%)
---|---|---|---
$\nu_{e}\mathrm{CC}$ | $\nu_{e}+\mathrm{{}^{40}Ar}\rightarrow e^{-}+\mathrm{{}^{40}K}$ | $\nu_{e}$ | 70–80
$\bar{\nu}_{e}\mathrm{CC}$ | $\bar{\nu}_{e}+\mathrm{{}^{40}Ar}\rightarrow e^{+}+\mathrm{{}^{40}Cl}$ | $\bar{\nu}_{e}$ | $\lesssim 1$
NC | $\nu+\mathrm{{}^{40}Ar}\rightarrow\nu+\mathrm{{}^{40}Ar}$ | $\nu_{e}$, $\bar{\nu}_{e}$, $\nu_{x}$ | 10–20
ES | $\nu+e^{-}\rightarrow\nu+e^{-}$ | $\nu_{e}$, $\bar{\nu}_{e}$, $\nu_{x}$ | $\sim 7$

### 2.2 Numerical Methods

To simulate the collapse and explosion of massive stars, we use the FLASH hydrodynamics code (Fryxell et al., 2000; Dubey et al., 2009) with the Supernova Turbulence in Reduced-dimensionality (STIR) framework (Couch et al., 2020; Warren et al., 2020). STIR enhances the explodability of 1D CCSN models by using time-dependent mixing-length theory (MLT) to approximate convective turbulence in 1D. We use a mixing-length parameter of $\alpha_{\Lambda}=1.25$, chosen to reproduce the convective velocities of 3D simulations; this parameter multiplies the pressure scale height to obtain the mixing length (for details, see Couch et al., 2020). We use a recently implemented hydrodynamics solver (Couch et al., 2020) with a fifth-order finite-volume weighted essentially non-oscillatory (WENO) spatial discretization and a method-of-lines Runge-Kutta time integration.

For neutrino transport, we use the “M1” scheme, an explicit two-moment method with an analytic closure (described in O’Connor & Couch, 2018), with three neutrino species ($\nu_{e}$, $\bar{\nu}_{e}$, and $\nu_{x}=\{\nu_{\mu},\,\nu_{\tau},\,\bar{\nu}_{\mu},\,\bar{\nu}_{\tau}\}$) and 18 logarithmically spaced energy groups between $1$ and $300\,\mathrm{MeV}$. We generate neutrino opacity tables using the open-source neutrino interaction library NuLib (https://github.com/evanoconnor/nulib; O’Connor, 2015). The interaction rates largely follow Bruenn (1985) and Burrows et al. (2006), with corrections for weak magnetism and nucleon recoil from Horowitz (2002). Separate tables are calculated using the neutrino emissivities derived from each EC rate set described in Section 2.1. We note that our tables do not include the many-body effects and virial corrections to neutrino-nucleon scattering from Horowitz et al. (2017). These corrections aid explodability by enhancing neutrino heating in the gain region (O’Connor et al., 2017), and thus our simulations result in fewer explosions than the corresponding models in Couch et al. (2020) and Warren et al. (2020). Nevertheless, this does not impede our goal of a comparison study between the EC rates.

We use the SFHo EOS from Steiner et al. (2013), and assume nuclear statistical equilibrium (NSE) abundances everywhere in the domain. For the IPA EC rates, the average nucleus from the NSE distribution is used. Self-gravity is included using an approximate general-relativistic effective potential (Marek et al., 2006; O’Connor & Couch, 2018). For initial conditions, we use 200 stellar progenitor models from Sukhbold et al. (2016), the same set used with FLASH+STIR in Couch et al. (2020) and Warren et al. (2020). These progenitors are spherically symmetric, solar-metallicity, nonrotating, and nonmagnetic, with ZAMS masses ranging from $9$ to $120\,\mathrm{M_{\odot}}$.
The set spans core compactness values of $0\lesssim\xi_{2.5}\lesssim 0.54$ (as defined in O’Connor & Ott, 2011) and iron core masses of $1.29\lesssim M_{\mathrm{Fe}}\lesssim 1.84\,\mathrm{M_{\odot}}$. The simulation domain extends from the center of the star to $r=15{,}000\,\mathrm{km}$. The domain is divided into 15 adaptive mesh refinement blocks, each containing 16 zones. We allow up to nine levels of mesh refinement, resulting in a zone resolution of $62.5\,\mathrm{km}$ at the coarsest level and $0.244\,\mathrm{km}$ at the finest level. The adaptive mesh refinement results in a total of roughly $1000$ zones.

### 2.3 Neutrino Observables

Following the approach used in Warren et al. (2020), we calculate simulated observations of the neutrino burst at core bounce using SNOwGLoBES (https://github.com/SNOwGLoBES/snowglobes; Scholberg, 2012), which uses the GLoBES framework (www.mpi-hd.mpg.de/personalhomes/globes; Huber et al., 2005) to predict event rates for a given detector material. As input for SNOwGLoBES, we calculate from our simulations the neutrino flux at Earth assuming a CCSN distance of $10\,\mathrm{kpc}$ and a pinched neutrino spectrum with a Fermi-Dirac parametrization (Keil et al., 2003). We include adiabatic neutrino flavor conversions from Mikheyev–Smirnov–Wolfenstein (MSW) matter effects (Dighe & Smirnov, 2000). For each model, we apply three separate cases of flavor mixing: no flavor mixing, normal neutrino mass ordering, and inverted mass ordering (Appendix A).

We calculate detection events for a $40\,\mathrm{kt}$ liquid argon detector, representing the under-construction Deep Underground Neutrino Experiment (DUNE; Abi et al., 2021), which is capable of detecting large numbers of $\nu_{e}$ from nearby CCSNe (Kato et al., 2017). Table 1 summarizes the different interaction channels: charged-current (CC) reactions on $\mathrm{{}^{40}Ar}$ by $\nu_{e}$ and $\bar{\nu}_{e}$; neutral-current (NC) reactions on $\mathrm{{}^{40}Ar}$ for all flavors; and electron scattering (ES) for all flavors. The $\nu_{e}\mathrm{CC}$ reaction channel accounts for approximately 70–80% of the total counts in our models. We capture the neutronization burst by integrating events over $100\,\mathrm{ms}$ centered on the bounce, using $5\,\mathrm{ms}$ time bins and $0.2\,\mathrm{MeV}$ energy bins. For each model and flavor-mixing case, we thus obtain the total neutrino counts and the mean detected neutrino energy, ${\langle E\rangle}$, summed over all detection channels.

Figure 2: Radial matter profiles at core bounce versus enclosed mass for the $20\,\mathrm{M_{\odot}}$ progenitor (left), and all 200 progenitors between $9$ and $120\,\mathrm{M_{\odot}}$ (right). From top to bottom: electron fraction ($Y_{e}$); specific entropy ($S$); density ($\rho$); and radial velocity ($v_{r}$). Differences between the EC rate sets are typically larger than differences between the progenitors. For the $20\,\mathrm{M_{\odot}}$ progenitor, the IPA and 0.01$\times{}$ models fail to explode, whereas the LMP, LMP+N50, and 10$\times{}$ models successfully explode (Fig. 1).

Figure 3: Lepton fraction ($Y_{l}$) versus density ($\rho$) at core bounce for the $20\,\mathrm{M_{\odot}}$ progenitor.
Of the baseline EC rate sets, IPA has the weakest deleptonization at densities $\gtrsim 10^{12}\,\mathrm{g\,cm^{-3}}$, but the strongest at lower densities.

## 3 Results

Our collection of 612 simulations can be sorted into three groups based on the explosion outcomes for each progenitor. Firstly, there are those models that, for a given progenitor, fail to explode for all three EC rate sets. Secondly, there are those with mixed explosion outcomes between the rate sets. And thirdly, there are those that successfully explode for all three sets. Of the 200 progenitors, 126 fail for all three rate sets, 29 have mixed explosion outcomes, and 44 explode for all sets. The $10.25\,\mathrm{M_{\odot}}$ progenitor simulations experience numerical crashes mid-shock revival, and are excluded from discussions of explosion outcome. Of the mixed-outcome group, the IPA models always fail, whereas LMP and LMP+N50 both explode in 26 cases, and LMP+N50 is the only explosion for the remaining three cases ($22$, $27.4$, and $33\,\mathrm{M_{\odot}}$). In summary, there are 44 successful explosions for IPA, 70 for LMP, and 73 for LMP+N50. The data presented here, and the codes used to analyze them, are publicly available (Appendix B).

### 3.1 Reference Progenitors

We here present detailed simulation comparisons for the $12$, $20$, and $40\,\mathrm{M_{\odot}}$ progenitors. These progenitors are representative of the three groups of explosion outcomes, respectively: all EC rate sets fail to explode; mixed outcomes; and all successfully explode. The evolution of the shock radius, $r_{\mathrm{sh}}$, and the neutrino heating efficiency, $\eta_{\mathrm{heat}}$, are shown in Figure 1. Here, $\eta_{\mathrm{heat}}$ is the fraction of the total $\nu_{e}$ and $\bar{\nu}_{e}$ luminosity absorbed in the gain region, which we estimate following O’Connor & Ott (2011). All $12\,\mathrm{M_{\odot}}$ models fail to explode and all $40\,\mathrm{M_{\odot}}$ models successfully explode. For the $20\,\mathrm{M_{\odot}}$ models, the LMP, LMP+N50, and both 10$\times{}$ models explode, whereas the IPA and both 0.01$\times{}$ models fail.

A consistent hierarchy of $r_{\mathrm{sh}}$ evolution is seen across the reference progenitors. Overall, the IPA models reach the smallest $r_{\mathrm{sh}}$, experience the earliest shock recession ($12$ and $20\,\mathrm{M_{\odot}}$), and the latest shock revival ($40\,\mathrm{M_{\odot}}$), followed closely by the 0.01$\times{}$ models. The LMP and LMP+N50 models reach larger $r_{\mathrm{sh}}$ before recession ($12\,\mathrm{M_{\odot}}$) and undergo earlier shock revival ($20$ and $40\,\mathrm{M_{\odot}}$). Finally, the 10$\times{}$ models reach the largest $r_{\mathrm{sh}}$ before recession ($12\,\mathrm{M_{\odot}}$) and the earliest shock revivals ($20$ and $40\,\mathrm{M_{\odot}}$). For the $20\,\mathrm{M_{\odot}}$ progenitor, LMP+N50 appears to require a smaller heating efficiency for shock revival than LMP, suggesting more favorable explosion conditions. The 10$\times{}$ models experience a surge in $\eta_{\mathrm{heat}}$ around $250\,\mathrm{ms}$, which appears to contribute to an early shock runaway. In contrast, IPA does not reach sufficient $\eta_{\mathrm{heat}}$ before its shock contracts, shrinking the available gain region for neutrino interactions.
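For reference, the heating efficiency plotted in Figure 1 can be estimated directly from the radial profiles: the gain region is the part of the post-shock flow with net neutrino heating, and $\eta_{\mathrm{heat}}$ is the ratio of the net energy deposition integrated over that region to the combined $\nu_{e}$ and $\bar{\nu}_{e}$ luminosity. The sketch below assumes simple 1D profile arrays and is meant only to illustrate the definition; our analysis uses the flashbang package.

```python
import numpy as np

def heating_efficiency(r, net_heating_rate, r_shock, l_nue, l_nuebar):
    """Estimate eta_heat = Q_gain / (L_nue + L_nuebar) from 1D profiles.

    r                : zone radii [cm]
    net_heating_rate : net neutrino heating minus cooling per volume [erg/s/cm^3]
    r_shock          : shock radius [cm]
    l_nue, l_nuebar  : electron-type neutrino luminosities [erg/s]
    """
    dr = np.gradient(r)
    shell_volume = 4.0 * np.pi * r**2 * dr
    # Gain region: net heating is positive and the zone lies below the shock.
    in_gain = (net_heating_rate > 0.0) & (r < r_shock)
    q_gain = np.sum(net_heating_rate[in_gain] * shell_volume[in_gain])
    return q_gain / (l_nue + l_nuebar)
```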
The matter profiles at core bounce are shown in Figure 2 for the $20\,\mathrm{M_{\odot}}$ progenitor (left) and the full set of 200 progenitors (right). We define bounce as the moment when the peak entropy in the core reaches $3\,k_{\mathrm{B}}\,\mathrm{baryon^{-1}}$. We also define the inner core mass at bounce, $M_{\mathrm{core}}$, as the mass enclosed within this point (also known as the homologous core mass). As with $r_{\mathrm{sh}}$, the models maintain a consistent hierarchy, even across the entire population of progenitors. The IPA models have the largest inner core mass and electron fraction, entropy, density, and infall velocity. This trend is followed, in order, by the 0.01$\times{}$, LMP+N50, LMP, and finally 10$\times{}$ models. This ordering is reversed for the $Y_{e}$ outside the shock, where IPA is the lowest and 10$\times{}$ the largest. In Figure 3, the lepton fraction is shown versus density at bounce for the $20\,\mathrm{M_{\odot}}$ progenitor. Above the neutrino-trapping densities of $\sim 10^{12}\,\mathrm{g\,cm^{-3}}$, the 0.01$\times{}$ models have the largest $Y_{l}$, followed by IPA, LMP+N50, LMP, and 10$\times{}$.

Figure 4: Core bounce properties versus progenitor iron core mass, $M_{\mathrm{Fe}}$, for all simulations (§ 3.2). From top to bottom: electron fraction at bounce in the $M=0.1\,\mathrm{M_{\odot}}$ mass shell ($Y_{e}$); inner core mass at bounce ($M_{\mathrm{core}}$); gravitational potential at the shock at bounce ($V_{b}$); and convergence time of the shock with the $\nu_{e}$ sphere, relative to bounce ($t_{\nu_{e}}-t_{b}$). The 0.01$\times{}$ and 10$\times{}$ rate-scaled models are marked by downward- and upward-pointing triangles, respectively (appearing from left to right: $12$, $20$, and $40\,\mathrm{M_{\odot}}$). In most cases, the differences between EC rates are larger than the dependence on stellar progenitor.

Figure 5: Fractional and absolute differences of LMP+N50 and IPA models relative to LMP, versus progenitor iron core mass (§ 3.2). The quantities compared, from top to bottom: time to bounce from start of simulation ($t_{b}$); accretion rate through $r=500\,\mathrm{km}$ at bounce ($\dot{M}_{b}$); and maximum shock radius reached for failed-explosion models ($\mathrm{max}(r_{\mathrm{sh}})$). Note that the grid resolution of the simulation limits the precision of $r_{\mathrm{sh}}$ here to $\approx 0.5\,\mathrm{km}$.

Figure 6: Absolute difference of compact remnant properties relative to LMP (§ 3.3), versus progenitor iron core mass. Top: proto-neutron star mass at the end of the simulation ($M_{\mathrm{PNS}}$) for exploding models. Bottom: time from bounce to BH formation ($t_{\mathrm{BH}}$).

### 3.2 Population Comparisons

A selection of properties at core bounce for all 200 progenitors is plotted versus the progenitor iron core mass, $M_{\mathrm{Fe}}$, in Figure 4. These quantities in particular demonstrate large differences between the EC rates compared to differences between the progenitors. The electron fraction of the inner core at bounce, $Y_{e}$, is taken at the $M=0.1\,\mathrm{M_{\odot}}$ enclosed mass coordinate (see also Fig. 2). The IPA models have the largest $Y_{e}$ (i.e., weakest deleptonization), followed by LMP+N50 and LMP. The extent of deleptonization translates directly into the inner core mass at bounce, $M_{\mathrm{core}}$.
The IPA rates produce systematically larger $M_{\mathrm{core}}$ than LMP, by around $0.08\,\mathrm{M_{\odot}}$ ($\approx 15\%$), whereas LMP+N50 is around $0.03\,\mathrm{M_{\odot}}$ ($\approx 5\%$) larger. The density profile at bounce (Fig. 2) determines the gravitational potential at the shock, $V_{b}$. Following previous trends, IPA results in a potential around 15% deeper than LMP, compared to LMP+N50, which is consistently $\approx 5\%$ deeper. Also shown is $t_{\nu_{e}}$, the time when the shock crosses the $\nu_{e}$ sphere immediately following the bounce. IPA reaches $t_{\nu_{e}}$ consistently $\approx 1.5\,\mathrm{ms}$ earlier than LMP, while LMP+N50 is $\approx 0.5\,\mathrm{ms}$ earlier. This relatively small but persistent difference impacts the neutrino signal of the deleptonization burst (§ 3.4).

Quantities that have much larger variation between progenitors than between the EC rates are illustrated in Figure 5. For clarity, we emphasize the changes due to EC rates by plotting the fractional or absolute difference relative to LMP for each progenitor. For example, the absolute difference of a given quantity $X$ corresponds to $\Delta X=X-X_{\mathrm{LMP}}$. For quantities where the absolute value depends somewhat arbitrarily on the initial conditions, we instead compare the fractional difference, i.e., $\Delta X/X=(X-X_{\mathrm{LMP}})/X_{\mathrm{LMP}}$. The time of core bounce from the start of the simulation, $t_{b}$, illustrates the speed of collapse from a common starting point. IPA reaches core bounce between 2.5% and 8.5% earlier than LMP (corresponding to approximately $10$–$17\,\mathrm{ms}$), whereas LMP+N50 reaches bounce $\lesssim 1.5\%$ later than LMP (corresponding to $\lesssim 2\,\mathrm{ms}$). In both cases, the difference is smallest for larger progenitor iron core masses. The mass accretion rate through $r=500\,\mathrm{km}$ at bounce, $\dot{M}_{b}$, further illustrates the strength of collapse. Overall, IPA reaches accretion rates around $0.1\,\mathrm{M_{\odot}}\,\mathrm{s}^{-1}$ larger than LMP, whereas LMP+N50 is $\approx 0.1\,\mathrm{M_{\odot}}\,\mathrm{s}^{-1}$ smaller. The maximum shock radius reached for failed explosions, $\mathrm{max}(r_{\mathrm{sh}})$, further supports the differences in shock evolution seen in the reference progenitors (Fig. 1). IPA reaches the smallest $r_{\mathrm{sh}}$, generally around $5$–$20\,\mathrm{km}$ smaller than LMP, whereas LMP+N50 is around $0$–$5\,\mathrm{km}$ larger.

### 3.3 Compact Remnants

The compact remnant properties are compared in Figure 6 as the absolute difference relative to LMP, as in Section 3.2. The PNS mass at the end of the simulation, $M_{\mathrm{PNS}}$, is compared for exploding models. Here, we define $M_{\mathrm{PNS}}$ as the baryonic mass contained in the region above a density of $10^{12}\,\mathrm{g\,cm^{-3}}$. IPA tends to produce a heavier PNS, with progenitors of $M_{\mathrm{Fe}}\gtrsim 1.45\,\mathrm{M_{\odot}}$ around $0.02$–$0.1\,\mathrm{M_{\odot}}$ larger than LMP. On the other hand, LMP+N50 tends to produce a similar or slightly lighter PNS, at most $0.01$–$0.02\,\mathrm{M_{\odot}}$ lighter than LMP.
All of the exploding models for $M_{\mathrm{Fe}}\lesssim 1.45\,\mathrm{M_{\odot}}$ are within $0.02\,\mathrm{M_{\odot}}$ of LMP. The post-bounce time to BH formation, $t_{\mathrm{BH}}$, is compared for the subset of models that reach PNS collapse. Not all failed-explosion models reach PNS collapse within the time simulated (between $1$ and $5\,\mathrm{s}$ post-bounce). For the handful of models that do allow comparison with LMP, IPA reaches BH formation around $20$–$80\,\mathrm{ms}$ earlier, whereas LMP+N50 is only around $2$–$7\,\mathrm{ms}$ earlier. Assuming the entire star is accreted, there would be no difference in the final BH mass between the EC rates.

Figure 7: Neutrino emission at $r=500\,\mathrm{km}$ for the $20\,\mathrm{M_{\odot}}$ progenitor. Top row: neutrino luminosity, $L_{\nu}$. Bottom row: mean neutrino energy, ${\langle E_{\nu}\rangle}$. Left: electron-neutrino ($\nu_{e}$) emission. Right: electron antineutrino ($\bar{\nu}_{e}$) and heavy-lepton neutrino ($\nu_{x}$) emission. Note the different time and $L_{\nu}$ ranges.

Figure 8: Neutrino burst signal in a DUNE-like liquid argon detector for all 200 progenitors, versus progenitor iron core mass. Top row: total neutrino counts. Bottom row: mean detected neutrino energy. From left to right are the adiabatic flavor mixing implementations: no flavor mixing, normal mass ordering, and inverted mass ordering. The signal is integrated over $100\,\mathrm{ms}$ centered on the bounce (§ 2.3), across all detection channels (Table 1), and assuming a distance of $10\,\mathrm{kpc}$. The error bars show typical $1\sigma$ uncertainties due to Poisson counting statistics alone. Note the different $y$-axis ranges.

### 3.4 Neutrino Signal

The neutrino emission at $r=500\,\mathrm{km}$ is shown in Figure 7 for the $20\,\mathrm{M_{\odot}}$ progenitor. The three baseline sets reach similar peak electron-neutrino luminosities of $L_{\nu_{e}}\approx 5\times 10^{53}\,\mathrm{erg\,s^{-1}}$. The 0.01$\times{}$ luminosities peak roughly 10% higher, whereas the 10$\times{}$ luminosities peak roughly 10% lower. Additionally, the IPA and 0.01$\times{}$ models peak slightly earlier than the LMP, LMP+N50, and 10$\times{}$ models, which have $\nu_{e}$ emission spread out to later times. The mean neutrino energies, ${\langle E_{\nu}\rangle}$, follow a similar pattern. The $\bar{\nu}_{e}$ and $\nu_{x}$ emission is largely reversed, with IPA and 0.01$\times{}$ generally having the largest luminosities and mean energies, followed by LMP+N50, LMP, and 10$\times{}$.

The predicted neutrino burst signal in a DUNE-like detector is shown for all 612 models across 200 progenitors in Figure 8. For all three MSW flavor-mixing cases, there are consistent differences in the detected neutrinos between the EC rate sets. Overall, LMP and LMP+N50 produce similar total neutrino counts and mean detected energies, ${\langle E\rangle}$, with larger differences for IPA, particularly in ${\langle E\rangle}$. When no flavor mixing is assumed, IPA results in the lowest counts and ${\langle E\rangle}$. The effect is reversed when flavor mixing is included, for both normal and inverted neutrino mass ordering. Overall, the inclusion of flavor mixing results in fewer counts and larger ${\langle E\rangle}$.
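The flavor-mixing cases above follow the adiabatic MSW prescription written out in Appendix A. A minimal sketch of how those survival probabilities remap the emitted fluxes before they are passed to SNOwGLoBES is given below, using the mixing angles quoted in Appendix A (Capozzi et al., 2017); the flash_snowglobes package performs the full calculation.

```python
# Sketch of the adiabatic MSW flavor mixing from Appendix A (Eqs. A1-A8).
SIN2_THETA12 = 0.297
SIN2_THETA13 = 0.0215

def survival_probabilities(ordering):
    """Return (p, pbar) for the no-mixing, normal, or inverted cases."""
    if ordering == "none":
        return 1.0, 1.0
    if ordering == "normal":
        p = SIN2_THETA13                                  # ~0.02
        pbar = (1 - SIN2_THETA12) * (1 - SIN2_THETA13)    # ~0.69
    elif ordering == "inverted":
        p = SIN2_THETA12 * (1 - SIN2_THETA13)             # ~0.29
        pbar = SIN2_THETA13                               # ~0.02
    else:
        raise ValueError(f"unknown ordering: {ordering}")
    return p, pbar

def mix_fluxes(f_e0, fbar_e0, f_x0, fbar_x0, ordering):
    """Apply Eqs. (A1)-(A4) to the emitted fluxes (arrays or scalars)."""
    p, pbar = survival_probabilities(ordering)
    f_e = p * f_e0 + (1 - p) * f_x0
    fbar_e = pbar * fbar_e0 + (1 - pbar) * fbar_x0
    f_x = 0.5 * (1 - p) * f_e0 + 0.5 * (1 + p) * f_x0
    fbar_x = 0.5 * (1 - pbar) * fbar_e0 + 0.5 * (1 + pbar) * fbar_x0
    return f_e, fbar_e, f_x, fbar_x
```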
## 4 Discussion

The choice of EC rates has a clear impact on our simulations of CCSNe. Perhaps the starkest difference in outcome is whether the models undergo successful shock revival and explosion. This variation in outcome can largely be traced to the pre-bounce core-collapse phase, where the EC rates control deleptonization and set the conditions at core bounce for the subsequent shock evolution. The EC rates also influence the formation of the compact remnant and the observable neutrino signals.

### 4.1 Collapse and Bounce

Because IPA lacks forbidden transitions and thermal unblocking effects, its rates are limited to EC on free protons when the average nucleus has a neutron number of $N\geq 40$ (Hix et al., 2003). Because protons are less abundant than neutron-rich nuclei at high densities, the total number of ECs is suppressed, resulting in a larger $Y_{e}$ in the inner core at bounce compared to LMP and LMP+N50, which do allow EC for $N\geq 40$. On the other hand, ECs are actually enhanced in IPA at low densities below the $N=40$ threshold because the rates are overestimated compared to the LMP-based tables (Lentz et al., 2012). The combined effect is a larger $Y_{e}$ and $Y_{l}$ for IPA in the inner core region ($\rho\gtrsim 10^{12}\,\mathrm{g\,cm^{-3}}$) and lower values in the outer core compared to LMP and LMP+N50 (Fig. 2 and 3). This enhanced deleptonization of matter passing through lower densities accelerates the collapse to core bounce, leading to stronger accretion rates, larger infall velocities, a larger inner core mass, a deeper gravitational potential, and higher densities (Fig. 4 and 5). These bounce profile differences between the IPA and LMP-based rates are well established in the literature, noted by Langanke et al. (2003) and Hix et al. (2003), and reproduced in subsequent studies (e.g., Lentz et al., 2012; Sullivan et al., 2016; Richers et al., 2017; Pascal et al., 2020).

### 4.2 Shock Evolution and Explosion Outcome

Initially, owing to the larger inner core mass, the IPA shock has more kinetic energy and less overlying material to pass through than LMP and LMP+N50. Competing with these favorable conditions, however, are the higher densities (and thus deeper gravitational potential), faster infall of the overlying material (Fig. 2 and 4), and stronger neutrino cooling. Despite rapid expansion at early times in IPA, the shock is soon overwhelmed by accretion and neutrino cooling, leading to earlier stalling at smaller radii (Fig. 5). Compounded by the smaller gain region now available for neutrino heating, the result is an earlier shock recession or a delayed explosion (Fig. 1). By contrast, LMP+N50 tends to have more favorable conditions for a successful explosion, with the slowest collapse to bounce and the smallest accretion rates (Fig. 5). The impact of the EC rates on shock evolution is borne out by the incidence of successful explosions. Of the 73 progenitors with at least one explosion among the EC rate sets, LMP+N50 explodes in all 73 and LMP explodes in 70. In contrast, IPA explodes in only 44 of these cases, and for no progenitor is IPA the sole explosion.

### 4.3 Compact Remnants

The EC rates also impact the formation of the compact remnant (§ 3.3). In successful explosions, the PNS mass, $M_{\mathrm{PNS}}$, is effectively determined by the total mass accreted through the shock before shock revival unbinds the remaining material.
The two deciding factors are thus the accretion rate and the elapsed time before shock runaway. IPA typically experiences both higher accretion rates (Fig. 5) and later shock revival (Fig. 1), resulting in the largest $M_{\mathrm{PNS}}$ (Fig. 6). The converse effects on LMP+N50 result in smaller $M_{\mathrm{PNS}}$. In the case of failed explosions, the conditions at core bounce influence the evolution and eventual collapse of the PNS. The larger core $Y_{e}$ and $M_{\mathrm{core}}$ in IPA result in a stronger initial shock, producing a larger PNS radius and stronger $\nu_{x}$ radiation (Fig. 7). The subsequent cooling of the PNS results in IPA reaching collapse approximately $20$–$80\,\mathrm{ms}$ earlier than LMP (Fig. 6). LMP+N50 has only marginally more efficient PNS cooling than LMP, and collapses at most a few milliseconds earlier. A change in the BH formation time would alter the shutoff of multimessenger signals from neutrinos and gravitational waves.

### 4.4 Neutrino Emission

The effects of EC on deleptonization and shock formation also manifest in the neutrino emission around bounce (§ 3.4). For IPA, a larger $M_{\mathrm{core}}$ at bounce and a stronger initial shock, combined with smaller neutrino spheres due to lowered opacities, lead to a faster convergence of the shock with the $\nu_{e}$ sphere, producing an earlier peak in $L_{\nu_{e}}$ (Fig. 7). The mean time between core bounce and the shock reaching the $\nu_{e}$ sphere was $1.8\pm 0.1\,\mathrm{ms}$ for IPA, $3.3\pm 0.1\,\mathrm{ms}$ for LMP, and $2.8\pm 0.1\,\mathrm{ms}$ for LMP+N50 ($1\sigma$ standard deviations; Fig. 4). These differences in $L_{\nu_{e}}$ have been noted in previous EC rate studies (e.g., Hix et al., 2003; Lentz et al., 2012; Sullivan et al., 2016; Pascal et al., 2020).

For LMP and LMP+N50, the extended emission of $\nu_{e}$ at larger $L_{\nu_{e}}$ and ${\langle E_{\nu_{e}}\rangle}$ results in higher detected neutrino counts and energies in a DUNE-like liquid argon detector when no flavor mixing is assumed (Fig. 8). When MSW flavor mixing is included with normal neutrino mass ordering, approximately 98% of the emitted $\nu_{e}$ are converted to $\nu_{x}$, and vice versa. This reduces the number of $\nu_{e}$ available for detection in the dominant $\nu_{e}\mathrm{CC}$ channel (Table 1), resulting in fewer total counts. There is also a shift to larger ${\langle E\rangle}$ because most of the $\nu_{e}$ that are now detected originate as high-energy $\nu_{x}$. Because IPA has larger emitted $L_{\nu_{x}}$ and ${\langle E_{\nu_{x}}\rangle}$, it now has the highest counts and ${\langle E\rangle}$. The inverted mass ordering case is somewhat intermediate, with only 70% of the emitted $\nu_{e}$ converted to $\nu_{x}$, and vice versa. These survival probabilities result in overall counts and ${\langle E\rangle}$ that lie between the previous two cases. This appears to roughly coincide with the crossover point, where all three EC rates produce very similar counts. The large error bars from Poisson statistics alone suggest that these differences would be difficult to detect under these narrow assumptions, especially for ${\langle E\rangle}$, even if the progenitor were known. Additionally, there are degeneracies between the EC rate and the impacts of the progenitor star, neutrino mass hierarchy, and potentially the nuclear EOS (not investigated here).
Reducing the uncertainties via larger count numbers could be achieved by combining measurements from multiple neutrino detectors, or if the supernova occurred closer than the assumed $10\,\mathrm{kpc}$. The degenerate signals might be broken by incorporating into the analysis: additional parts of the neutrino lightcurve (e.g., Segerlund et al., 2021); additional detectors sensitive to other neutrino flavors (e.g., water Cherenkov detectors); or, given sufficient counts, the full neutrino spectrum instead of an average energy.

## 5 Conclusion

We have produced a suite of 612 one-dimensional CCSN simulations, using three sets of EC rates and 200 stellar progenitors between $9$ and $120\,\mathrm{M_{\odot}}$. The three EC rate sets were (§ 2.1): an IPA (Bruenn, 1985); a microphysical library with parametrized rates in the high-sensitivity $N=50$ region (LMP; Sullivan et al., 2016; Titus et al., 2018); and the same library with updated $N=50$ rates (LMP+N50; Titus et al., 2019). Of the 200 progenitors, there were 44 successful explosions for the IPA set, 70 for LMP, and 73 for LMP+N50. In general, the IPA models reached smaller shock radii and exploded later than their LMP and LMP+N50 counterparts (§ 3.1). Of the latter two, LMP+N50 appeared marginally more favorable to explosion, with larger shock radii and earlier shock runaway than LMP.

At core bounce, IPA typically had the largest inner core mass, electron fraction, density, accretion rate, infall velocity, and gravitational potential (Fig. 2, 4, and 5). The next largest values were generally LMP+N50 followed by LMP, although LMP+N50 changed places with LMP for the collapse time and accretion rate (Fig. 5). The standard ordering was also reversed for $Y_{e}$ in the outer core, where IPA had the lowest values and LMP the highest (Fig. 3). For exploding progenitors, IPA produced a PNS mass around $0.02$–$0.1\,\mathrm{M_{\odot}}$ larger than LMP due to higher accretion rates and delayed shock revival, and LMP+N50 was typically $\lesssim 0.02\,\mathrm{M_{\odot}}$ smaller (Fig. 6). For failed explosions, enhanced PNS cooling in IPA resulted in a collapse to a BH roughly $20$–$80\,\mathrm{ms}$ earlier than LMP, whereas LMP+N50 was at most a few milliseconds earlier. Without flavor mixing, the extended $\nu_{e}$ emission of LMP and LMP+N50 following bounce (Fig. 7) resulted in higher detected counts and mean energies than IPA in a DUNE-like liquid argon detector (Fig. 8). Conversely, when adiabatic flavor mixing is included, the enhanced $\nu_{x}$ emission in IPA is converted to $\nu_{e}$, resulting in higher counts and energies than LMP and LMP+N50. Given only $\sim 10^{2}$ counts, however, these differences were typically smaller than the estimated uncertainties.

All of these results largely stem from the total rate of electron captures at different densities during collapse (§ 4.1). For IPA, the total EC rate is overestimated at lower densities, but subsequently underestimated at higher densities due to Pauli blocking on a mean nucleus with $N\geq 40$. The LMP-based rates unblock EC for neutron-rich nuclei, and so deleptonization proceeds further than in IPA during collapse. The updated $N=50$ rates in LMP+N50 are lower than the parametrized rates in LMP, producing an intermediate case between LMP and IPA (Fig. 4). It is important to emphasize the limitations of our study.
While our STIR framework (Couch et al., 2020) approximates the effects of turbulence in 1D, there is ultimately no substitute for multidimensional simulations. Only high-fidelity 3D simulations can hope to fully capture the interplay between fluid instabilities, magnetohydrodynamics, and neutrino transport in CCSNe (e.g., Hanke et al., 2013; Lentz et al., 2015; O’Connor & Couch, 2018; Summa et al., 2018; Müller et al., 2019). Our progenitors from Sukhbold et al. (2016) were 1D, solar-metallicity, nonrotating, and nonmagnetic, and do not represent the full variety of stellar populations. The progenitor models also used microphysical EC rates, and so the sudden transition to approximate rates in our IPA models is somewhat artificial. Although we accounted for adiabatic flavor mixing when calculating neutrino signals, there remains considerable uncertainty around the effects of flavor oscillations, which were not included in our simulations. We note that our study only considered the SFHo nuclear EOS, as previous works have demonstrated that the EOS dependence of the collapse and early post-bounce phase ($\lesssim 100\,\mathrm{ms}$) is dwarfed by the impact of EC rate uncertainties (Sullivan et al., 2016). We posit that the qualitative differences seen here between the EC rates beyond $\approx 100\,\mathrm{ms}$ post-bounce are unlikely to be dramatically altered by the EOS. Nevertheless, the choice of EOS can alter the nuclear abundances that EC acts upon (Nagakura et al., 2019), and it would be valuable to test this assumption in a future study. Finally, the updated rates in LMP+N50 do not include temperature-dependence effects (Dzhioev et al., 2020). Very recently, Giraud et al. (2022) reported new finite-temperature calculations for the $N=50$ region, with rates around an order of magnitude higher than LMP+N50 and about a factor of 5 below the LMP rates. Their simulations suggest that CCSN properties would be intermediate between the LMP and LMP+N50 models presented here, which are already in relatively good agreement compared to the commonly used IPA rates. Work is underway to incorporate these new rates into NuLib so that they can be freely used in future CCSN simulations.

EC plays a central role in deleptonization, shock formation, and neutrino production during core-collapse. Our study has explored the effects of updated EC rates in the high-sensitivity $N=50$ region, including a detailed comparison between microphysical rates and a simple IPA. By producing simulations across 200 progenitors, we have shown there are clear, systematic impacts of EC rates on the core structure, shock dynamics, and neutrino signals throughout the CCSN mechanism.

This work was supported in part by Michigan State University through computational resources provided by the Institute for Cyber-Enabled Research. SMC is supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under Award Numbers DE-SC0015904 and DE-SC0017955. RT and RZ are supported by the US National Science Foundation under Grants PHY-1913554 (Windows on the Universe: Nuclear Astrophysics at the NSCL), PHY-1430152 (JINA Center for the Evolution of the Elements), and PHY-1927130 (AccelNet-WOU: International Research Network for Nuclear Astrophysics [IReNA]). EOC is supported by the Swedish Research Council (Project No. 2020-00452).

## References

* Abi et al. (2021) Abi, B., Acciarri, R., Acero, M. A., et al.
2021, EPJC, 81, 423, doi: 10.1140/epjc/s10052-021-09166-w * Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33, doi: 10.1051/0004-6361/201322068 * Astropy Collaboration et al. (2018) Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f * Bruenn (1985) Bruenn, S. W. 1985, ApJS, 58, 771, doi: 10.1086/191056 * Burrows et al. (2006) Burrows, A., Reddy, S., & Thompson, T. A. 2006, NuPhA, 777, 356 , doi: 10.1016/j.nuclphysa.2004.06.012 * Capozzi et al. (2017) Capozzi, F., Di Valentino, E., Lisi, E., et al. 2017, Phys. Rev. D, 95, 096014, doi: 10.1103/PhysRevD.95.096014 * Couch et al. (2020) Couch, S. M., Warren, M. L., & O’Connor, E. P. 2020, ApJ, 890, 127, doi: 10.3847/1538-4357/ab609e * Dighe & Smirnov (2000) Dighe, A. S., & Smirnov, A. Y. 2000, Phys. Rev. D, 62, 033007, doi: 10.1103/PhysRevD.62.033007 * Dubey et al. (2009) Dubey, A., Antypas, K., Ganapathy, M. K., et al. 2009, ParC, 35, 512 , doi: 10.1016/j.parco.2009.08.001 * Dzhioev et al. (2020) Dzhioev, A. A., Langanke, K., Martínez-Pinedo, G., Vdovin, A. I., & Stoyanov, C. 2020, Phys. Rev. C, 101, 025805, doi: 10.1103/PhysRevC.101.025805 * Fryxell et al. (2000) Fryxell, B., Olson, K., Ricker, P., et al. 2000, ApJS, 131, 273, doi: 10.1086/317361 * Fuller et al. (1982) Fuller, G. M., Fowler, W. A., & Newman, M. J. 1982, ApJ, 252, 715, doi: 10.1086/159597 * Giraud et al. (2022) Giraud, S., Zegers, R. G. T., Brown, B. A., et al. 2022, Phys. Rev. C, 105, 055801, doi: 10.1103/PhysRevC.105.055801 * Hanke et al. (2013) Hanke, F., Müller, B., Wongwathanarat, A., Marek, A., & Janka, H.-T. 2013, ApJ, 770, 66, doi: 10.1088/0004-637X/770/1/66 * Harris et al. (2020) Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Natur, 585, 357, doi: 10.1038/s41586-020-2649-2 * Hix et al. (2003) Hix, W. R., Messer, O. E., Mezzacappa, A., et al. 2003, Phys. Rev. Lett., 91, 201102, doi: 10.1103/PhysRevLett.91.201102 * Horowitz (2002) Horowitz, C. J. 2002, Phys. Rev. D, 65, 043001, doi: 10.1103/PhysRevD.65.043001 * Horowitz et al. (2017) Horowitz, C. J., Caballero, O. L., Lin, Z., O’Connor, E., & Schwenk, A. 2017, Phys. Rev. C, 95, 025801, doi: 10.1103/PhysRevC.95.025801 * Hoyer & Hamman (2017) Hoyer, S., & Hamman, J. 2017, JORS, 5, doi: 10.5334/jors.148 * Hoyer et al. (2020) Hoyer, S., Hamman, J., Roos, M., et al. 2020, pydata/xarray v0.15.0, Zenodo, doi: 10.5281/zenodo.3631851 * Huber et al. (2005) Huber, P., Lindner, M., & Winter, W. 2005, CoPhC, 167, 195, doi: 10.1016/j.cpc.2005.01.003 * Hunter (2007) Hunter, J. D. 2007, CSE, 9, 90, doi: 10.1109/MCSE.2007.55 * Janka et al. (2007) Janka, H.-T., Langanke, K., Marek, A., Martínez-Pinedo, G., & Müller, B. 2007, Phys. Rep., 442, 38, doi: 10.1016/j.physrep.2007.02.002 * Johnston (2022a) Johnston, Z. 2022a, flashbang, Zenodo, doi: 10.5281/zenodo.6000257 * Johnston (2022b) —. 2022b, flash_snowglobes, Zenodo, doi: 10.5281/zenodo.6012564 * Johnston et al. (2022) Johnston, Z., Wasik, S., Titus, R., et al. 2022, 1D Core-collapse Supernova Simulations with Updated N=50 Electron Capture Rates, Mendeley Data, doi: 10.17632/w36ns2t3rd.1 * Kato et al. (2017) Kato, C., Nagakura, H., Furusawa, S., et al. 2017, ApJ, 848, 48, doi: 10.3847/1538-4357/aa8b72 * Keil et al. (2003) Keil, M. T., Raffelt, G. G., & Janka, H.-T. 2003, ApJ, 590, 971, doi: 10.1086/375130 * Langanke & Martínez-Pinedo (2000) Langanke, K., & Martínez-Pinedo, G. 
2000, NuPhA, 673, 481, doi: 10.1016/S0375-9474(00)00131-7 * Langanke & Martínez-Pinedo (2003) —. 2003, RvMP, 75, 819, doi: 10.1103/RevModPhys.75.819 * Langanke et al. (2021) Langanke, K., Martínez-Pinedo, G., & Zegers, R. G. T. 2021, RPPh, 84, 066301, doi: 10.1088/1361-6633/abf207 * Langanke et al. (2003) Langanke, K., Martínez-Pinedo, G., Sampaio, J. M., et al. 2003, Phys. Rev. Lett., 90, 241102, doi: 10.1103/PhysRevLett.90.241102 * Lentz et al. (2012) Lentz, E. J., Mezzacappa, A., Messer, O. E. B., Hix, W. R., & Bruenn, S. W. 2012, ApJ, 760, 94, doi: 10.1088/0004-637X/760/1/94 * Lentz et al. (2015) Lentz, E. J., Bruenn, S. W., Hix, W. R., et al. 2015, ApJ, 807, L31, doi: 10.1088/2041-8205/807/2/L31 * Marek et al. (2006) Marek, A., Dimmelmeier, H., Janka, H.-T., Müller, E., & Buras, R. 2006, A&A, 445, 273, doi: 10.1051/0004-6361:20052840 * Müller et al. (2019) Müller, B., Tauris, T. M., Heger, A., et al. 2019, MNRAS, 484, 3307, doi: 10.1093/mnras/stz216 * Müller (2020) Müller, B. 2020, LRCA, 6, 3, doi: 10.1007/s41115-020-0008-5 * Nagakura (2021) Nagakura, H. 2021, MNRAS, 500, 319, doi: 10.1093/mnras/staa3287 * Nagakura et al. (2019) Nagakura, H., Furusawa, S., Togashi, H., et al. 2019, ApJS, 240, 38, doi: 10.3847/1538-4365/aafac9 * O’Connor (2015) O’Connor, E. 2015, ApJS, 219, 24, doi: 10.1088/0067-0049/219/2/24 * O’Connor & Ott (2011) O’Connor, E., & Ott, C. D. 2011, ApJ, 730, 70, doi: 10.1088/0004-637X/730/2/70 * O’Connor & Couch (2018) O’Connor, E. P., & Couch, S. M. 2018, ApJ, 865, 81, doi: 10.3847/1538-4357/aadcf7 * Oda et al. (1994) Oda, T., Hino, M., Muto, K., Takahara, M., & Sato, K. 1994, ADNDT, 56, 231, doi: 10.1006/adnd.1994.1007 * O’Connor et al. (2017) O’Connor, E., Horowitz, C. J., Lin, Z., & Couch, S. 2017, Proc Int Astron Union, 12, 107, doi: 10.1017/S1743921317004586 * Pandas development team et al. (2022) Pandas development team, Reback, J., jbrockmendel, et al. 2022, Pandas, Zenodo, doi: 10.5281/zenodo.3509134 * Pascal et al. (2020) Pascal, A., Giraud, S., Fantina, A. F., et al. 2020, Phys. Rev. C, 101, 015803, doi: 10.1103/PhysRevC.101.015803 * Pruet et al. (2003) Pruet, J., Woosley, S. E., & Hoffman, R. D. 2003, ApJ, 586, 1254, doi: 10.1086/367957 * Raduta et al. (2017) Raduta, A. R., Gulminelli, F., & Oertel, M. 2017, Phys. Rev. C, 95, 025805, doi: 10.1103/PhysRevC.95.025805 * Richers et al. (2017) Richers, S., Nagakura, H., Ott, C. D., et al. 2017, ApJ, 847, 133, doi: 10.3847/1538-4357/aa8bb2 * Scholberg (2012) Scholberg, K. 2012, ARNPS, 62, 81, doi: 10.1146/annurev-nucl-102711-095006 * Segerlund et al. (2021) Segerlund, M., O’Sullivan, E., & O’Connor, E. 2021, arXiv e-prints, PRL submitted. https://arxiv.org/abs/2101.10624 * Steiner et al. (2013) Steiner, A. W., Hempel, M., & Fischer, T. 2013, ApJ, 774, 17, doi: 10.1088/0004-637X/774/1/17 * Sukhbold et al. (2016) Sukhbold, T., Ertl, T., Woosley, S. E., Brown, J. M., & Janka, H.-T. 2016, ApJ, 821, 38, doi: 10.3847/0004-637X/821/1/38 * Sullivan (2015) Sullivan, C. 2015, weakrates: Weak-rate library (ApJ release), Zenodo, doi: 10.5281/zenodo.33788 * Sullivan et al. (2016) Sullivan, C., O’Connor, E., Zegers, R. G. T., Grubb, T., & Austin, S. M. 2016, ApJ, 816, 44, doi: 10.3847/0004-637X/816/1/44 * Summa et al. (2018) Summa, A., Janka, H.-T., Melson, T., & Marek, A. 2018, ApJ, 852, 28, doi: 10.3847/1538-4357/aa9ce8 * Suzuki et al. (2016) Suzuki, T., Toki, H., & Nomoto, K. 2016, ApJ, 817, 163, doi: 10.3847/0004-637X/817/2/163 * Titus et al. (2018) Titus, R., Sullivan, C., Zegers, R. G. T., Brown, B. A., & Gao, B. 
2018, JPhG, 45, 014004, doi: 10.1088/1361-6471/aa98c1 * Titus et al. (2019) Titus, R., Ney, E. M., Zegers, R. G. T., et al. 2019, Phys. Rev. C, 100, 045805, doi: 10.1103/PhysRevC.100.045805 * Turk et al. (2011) Turk, M. J., Smith, B. D., Oishi, J. S., et al. 2011, ApJS, 192, 9, doi: 10.1088/0067-0049/192/1/9 * Virtanen et al. (2020) Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nat. Methods, 17, 261, doi: 10.1038/s41592-019-0686-2 * Warren et al. (2020) Warren, M. L., Couch, S. M., O’Connor, E. P., & Morozova, V. 2020, ApJ, 898, 139, doi: 10.3847/1538-4357/ab97b7 ## Appendix A Neutrino Flavor Mixing When calculating detectable neutrino counts with SNOwGLoBES (§ 2.3), we account for adiabatic neutrino flavor conversions due to MSW matter effects (Dighe & Smirnov, 2000). Following similar approaches in Nagakura (2021) and Segerlund et al. (2021), we calculate the neutrino flux at Earth, $F_{i}$, for each neutrino flavor $i$, using $\displaystyle F_{e}$ $\displaystyle=pF_{e}^{0}+(1-p)F_{x}^{0},$ (A1) $\displaystyle\bar{F}_{e}$ $\displaystyle=\bar{p}\bar{F}_{e}^{0}+(1-\bar{p})\bar{F}_{x}^{0},$ (A2) $\displaystyle F_{x}$ $\displaystyle=\frac{1}{2}(1-p)F_{e}^{0}+\frac{1}{2}(1+p)F_{x}^{0},$ (A3) $\displaystyle\bar{F}_{x}$ $\displaystyle=\frac{1}{2}(1-\bar{p})\bar{F}_{e}^{0}+\frac{1}{2}(1+\bar{p})\bar{F}_{x}^{0},$ (A4) where $p$ and $\bar{p}$ are the survival probabilities, and $F^{0}_{i}$ are the emitted fluxes for each flavor $i$. Under normal neutrino mass ordering, the survival probabilities are given by $\displaystyle p$ $\displaystyle=\sin^{2}\theta_{13}\approx 0.02,$ (A5) $\displaystyle\bar{p}$ $\displaystyle=\cos^{2}\theta_{12}\cos^{2}\theta_{13}\approx 0.69,$ (A6) and under inverted mass ordering by $\displaystyle p$ $\displaystyle=\sin^{2}\theta_{12}\cos^{2}\theta_{13}\approx 0.29$ (A7) $\displaystyle\bar{p}$ $\displaystyle=\sin^{2}\theta_{13}\approx 0.02,$ (A8) where we use mixing parameters of $\sin^{2}\theta_{12}=0.297$ and $\sin^{2}\theta_{13}=0.0215$ (Capozzi et al., 2017). The no-mixing case is equivalent to $p=\bar{p}=1$. Note that our simulations use the combined heavy- lepton species $\nu_{x}=\\{\nu_{\mu}$, $\nu_{\tau}$, $\bar{\nu}_{\mu}$, $\bar{\nu}_{\tau}\\}$, and thus we assume $F_{x}^{0}=\bar{F}_{x}^{0}\propto\frac{1}{4}L_{\nu_{x}},$ (A9) where $L_{\nu_{x}}$ is the heavy-lepton neutrino luminosity from the simulation (Fig. 8). The separated heavy-flavor inputs to SNOwGLoBES are then $\displaystyle F_{\mu}=F_{\tau}$ $\displaystyle=F_{x},$ (A10) $\displaystyle\bar{F}_{\mu}=\bar{F}_{\tau}$ $\displaystyle=\bar{F}_{x}.$ (A11) The code used for these calculations is publicly available as a Python package, flash_snowglobes (Johnston, 2022b). ## Appendix B Data Availability The data for all simulations presented in this work are publicly available in a Mendeley Data repository (Johnston et al., 2022). The dataset includes summarized simulation results, radial profiles at core bounce, time-dependent quantities (e.g., shock radius and neutrino luminosities), and time-binned SNOwGLoBES neutrino counts and energies. Much of the model analysis and plotting were performed using our publicly available Python package flashbang (Johnston, 2022a).
# Specialist or Generalist? Instruction Tuning for Specific NLP Tasks Chufan Shi♠ , Yixuan Su♢,♣ , Cheng Yang♠ , Yujiu Yang♠ , Deng Cai♡ ♠Tsinghua Shenzhen International Graduate School, Tsinghua University ♢Language Technology Lab, University of Cambridge ♣Cohere ♡Tencent AI Lab <EMAIL_ADDRESS><EMAIL_ADDRESS> <EMAIL_ADDRESS><EMAIL_ADDRESS> This work was completed during an internship at Tencent AI Lab. Corresponding author. ###### Abstract The potential of large language models (LLMs) to simultaneously perform a wide range of natural language processing (NLP) tasks has been the subject of extensive research. Although instruction tuning has proven to be a data- efficient method for transforming LLMs into such generalist models, their performance still lags behind specialist models trained exclusively for specific tasks. In this paper, we investigate whether incorporating broad- coverage generalist instruction tuning can contribute to building a specialist model. We hypothesize that its efficacy depends on task specificity and skill requirements. Our experiments assess four target tasks with distinct coverage levels, revealing that integrating generalist instruction tuning consistently enhances model performance when the task coverage is broad. The effect is particularly pronounced when the amount of task-specific training data is limited. Further investigation into three target tasks focusing on different capabilities demonstrates that generalist instruction tuning improves understanding and reasoning abilities. However, for tasks requiring factual knowledge, generalist data containing hallucinatory information may negatively affect the model’s performance. Overall, our work provides a systematic guide for developing specialist models with general instruction tuning. Our code and other related resources can be found at https://github.com/DavidFanzz/Generalist_or_Specialist. ## 1 Introduction The latest generation of large language models (LLMs), such as ChatGPT OpenAI (2022) and GPT4 OpenAI (2023), are often referred to as generalist models for their exceptional generalizability to perform various natural language processing (NLP) tasks. Recent studies Taori et al. (2023); Zhou et al. (2023); Gudibande et al. (2023) suggest that (1) the foundation of their superior performance (i.e., knowledge and capabilities) is predominantly acquired during large-scale unsupervised pre-training; and (2) instruction tuning Sanh et al. (2022); Wei et al. (2022a); Mishra et al. (2021); Ouyang et al. (2022) is an incredibly data-efficient method for unleashing the power of LLMs to complete realistic NLP tasks. However, under rigorous evaluation, the performance of those instruction-following generalist models often falls short compared to traditional task-specific specialist models Jiao et al. (2023b); Qin et al. (2023); Fang et al. (2023); Liu et al. (2023). Recently, there has also been a growing trend towards developing specialist models using instruction tuning Jiao et al. (2023a); Wang et al. (2023b); Zhang et al. (2023); Cheng et al. (2023); Wang et al. (2023a). In this paper, we study how to better harness the power of LLM for specific NLP tasks using instruction tuning. Our research is motivated by the existence of various broad-coverage general-purpose instruction-following datasets Taori et al. (2023); Peng et al. (2023); Labs (2023); Xu et al. (2023); Zhou et al. (2023); Su et al. (2023b) and their surprising efficiency for turning LLMs into capable instruction-following generalists. 
For instance, Zhou et al. (2023) show that merely one thousand supervised input-output pairs are necessary to build a competent generalist. In contrast to general-purpose instruction tuning, our preliminary experiments show that a sufficiently large set of task-specific data is still required for transforming an LLM into a superior specialist. This leads us to a pivotal research question: How to better unleash the power of LLMs for specific NLP tasks by marrying the best of two worlds? More specifically, can general-purpose instruction-following datasets aid in the transformation of an LLM into a specialist? If so, when and how? We hypothesize the answers to the previous questions depend on (1) how specific the target task is; and (2) what skills the target task requires. To test this hypothesis, we first assess four target tasks with distinct levels of coverage. Our findings reveal that integrating general instruction tuning—that is, training with generalist data enhances the model’s performance on specific NLP tasks with broad task coverage, particularly when the amount of task-specific training data is limited. To gain a deeper understanding of the improvements elicited by training with generalist data, we subsequently examine three target tasks that focus on distinct skill sets. Our results suggest that general instruction tuning improves the model’s understanding and reasoning capabilities. However, when it comes to tasks that demand factual knowledge from the LLM, instructional data generated through self-instruct Wang et al. (2022a) harms the model’s performance due to the intrinsic hallucinations brought by such data creation approach. In sum, to the best of our knowledge, our work is the first effort to present a systematic guide for building and improving specialist models with general instruction tuning. ## 2 Background: Instruction Tuning In recent years, large language models (LLMs) have undergone rapid development and have dominated the field of natural language processing (NLP) Radford et al. (2018); Devlin et al. (2019); Radford et al. (2019); Brown et al. (2020). Today’s LLMs, such as ChatGPT OpenAI (2022) and GPT-4 OpenAI (2023), can perform complex and diverse tasks in the unified form of following natural language instructions. Generally, these models are trained in three separate stages: (1) large-scale unsupervised pre-training on raw text; and (2) instruction tuning via supervised learning Sanh et al. (2022); Wei et al. (2022a); Mishra et al. (2021); Su and Collier (2022); Su et al. (2023b); and (3) reinforcement learning from human feedback Stiennon et al. (2020); Bai et al. (2022); Ouyang et al. (2022). Recent studies Zhou et al. (2023); Gudibande et al. (2023) argued that almost all capabilities of LLMs are learned during unsupervised pre-training, and instruction tuning with a limited amount of supervised data is sufficient. However, this observation refers to the process of constructing general-purpose instruction-following models—generalists. In the following, we separately introduce broad-coverage “generalist” and task- specific “specialist” instruction tuning. #### Generalist Instruction Tuning. Early attempts on instruction tuning (Wang et al., 2022b; Sanh et al., 2022; Wei et al., 2022a; Chung et al., 2022, inter alia) transform a range of public NLP datasets into an instructional format, with a few manually crafted templates for each task. They then fine-tune an LLM on a portion of the transformed data and evaluate on another set of held-out tasks. 
Each work affirms that the model’s generalization ability to unseen tasks improves when increasing the task and template diversity. However, template-based instructions are not sufficiently diverse for building a truly competent generalist Ouyang et al. (2022). In contrast, state-of-the-art generalist models such as ChatGPT OpenAI (2022) are trained with proprietary instructions collected from real human users. In the pursuit to replicate the success of ChatGPT, various open-source broad-coverage instruction-tuning datasets are proposed. Some are gathered via crowd-sourcing Labs (2023); Zhou et al. (2023) while others use the outputs from strong proprietary models Taori et al. (2023); Peng et al. (2023); Xu et al. (2023); Su et al. (2023a); Li et al. (2023) with techniques such as self-instruct Wang et al. (2022a). Existing results suggest that these models can achieve near parity with proprietary models in various aspects Chiang et al. (2023); Zhou et al. (2023); Taori et al. (2023). #### Specialist Instruction Tuning. There is also an emerging trend to continue instruction tuning on specific NLP tasks, such as machine translation Jiao et al. (2023a), information extraction Wang et al. (2023b), medical QA Wang et al. (2023a); Fleming et al. (2023), and writing-assistant Zhang et al. (2023). These works typically transform existing task-specific datasets into the same instructional format as generalist instruction tuning and yield better model performance in specific tasks. Different from previous work, this study aims to provide a comprehensive and in-depth investigation of the role of generalist instruction data in specialist instruction tuning. Our work is most related to the initial studies on the cross-task generalization of instruction tuning such as FLAN Wei et al. (2022a). The differences between our work and previous work are: (1) we use broad-coverage generalist data, while they use template-based data; and (2) they focus on zero/few-shot performance on unseen tasks, while we assume an adequate amount of task-specific training data is available. ## 3 Incorporating Specialist Training with Generalist Training ### 3.1 Data Collection We sort the instruction-following data into two groups: (1) specialist data and (2) generalist data. #### Specialist data. primarily originates from existing NLP datasets with a focus on particular tasks. To facilitate our research, we mainly utilize the SuperNI dataset Wang et al. (2022b), a comprehensive benchmark containing 1,616 NLP datasets coupled with their respective natural language instructions, as the source of specialist data. The details are described in Section 4.1. We also leverage existing question answering datasets Kwiatkowski et al. (2019); Berant et al. (2013); Joshi et al. (2017) , reading comprehension datasets Lai et al. (2017) reasoning datasets Bowman et al. (2015); Talmor et al. (2019); Ling et al. (2017) to evaluate different aspects of model skills, detailed in Section 5.1. #### Generalist data is characterized by its extensive scope and diversity. For our research, we select two representative broad-coverage general-purpose datasets: GPT4-Instruct Peng et al. (2023) and LIMA Zhou et al. (2023). GPT4-Instruct Peng et al. (2023) contains 52k unique instruction-response pairs, where the instructions are collected through self-instruct Wang et al. (2022a) and the responses are generated by GPT-4 OpenAI (2023). LIMA Zhou et al. 
(2023) consists of 1k carefully curated instruction-response pairs derived from human-authored community questions and answers. Notably, we emphasize that GPT4-Instruct serves as an example of generalist data synthesized by LLMs and LIMA represents another distinct example of generalist data written by humans. #### Unified Format. We follow the template used in Stanford’s Alpaca project Taori et al. (2023) (See Appendix A). Each instance in the generalist and specialist data is transformed in a pair of {instruction, response}. ### 3.2 Training #### Specialist/Generalist Data Combination. For each target task, we construct the training and test set with 50k and 5k instances, respectively. For target tasks that span over multiple datasets, we uniformly sample training/test instances from the corresponding datasets such that each dataset has an equal proportion. For generalist data, we consider the GPT4-Instruct and LIMA datasets as discussed above. We first train models on generalist data and then specialist data. We vary the amounts of specialist data across {2k, 4k, 6k, 8k, 10k} to study the effect of generalist data under different circumstances of data scarcity. #### Model and Training Details. We conduct our experiments with the popular LLaMA 7B and 13B models Touvron et al. (2023). For training on generalist data, we follow the original setups in the respective papers Zhou et al. (2023); Taori et al. (2023). Specifically, for GPT4-Instruct, we train for 3 epochs with a batch size of 128, while for LIMA, we train for 15 epochs with a batch size of 64. In the subsequent specialist training phase, we train for 3 epochs with a batch size of 128. In both stages, we use the Adam optimizer Kingma and Ba (2015) with a learning rate of 2e-5 and utilize the standard language modeling objective: $\mathcal{L}=-\frac{1}{|\bm{y}|}\sum_{i=1}^{|\bm{y}|}\log p_{\theta}(y_{i}|\bm{x},\bm{y}_{<i}),$ where $\theta$ denotes the model parameters and $\\{\bm{x},\bm{y}\\}$ is an instruction-response pair. ## 4 Experiments I: The Coverage of the Target Tasks Figure 1: Comparison of models trained with different combinations of specialist and generalist data across different tasks. We report Rouge-L for SuperNI and accuracy for other levels. ### 4.1 Coverage Taxonomy To assess our model’s performance on a variety of target tasks with distinct levels of generality, we construct a hierarchy of four specialist tasks using the SuperNI dataset Wang et al. (2022b). This taxonomy encompasses tasks with varying scopes of coverage, as detailed below. #### SuperNI (multiple tasks, multiple formats). At the most comprehensive level, we incorporate all the English tasks from the SuperNI dataset, which encompasses a total of 756 datasets. Unlike LIMA and GPT4-Instruct, which accommodate a broad spectrum of user-oriented inquiries, the datasets in SuperNI focus on specific NLP tasks distilled from real-world demands. Therefore, we treat them as specialist data at the highest coverage level. #### Classification (multiple tasks, single format). The tasks in SuperNI can be grouped based on their task types, such as classification, summarization, and question answering. For the second level, we focus on the classification subset. Specifically, we select 252 classification datasets. To measure the model’s cross-task generalization capability, we allocate 223 datasets for training and reserve the remaining 29 datasets as held-out datasets for evaluation. #### Sentiment (single tasks, multiple domains). 
The classification tasks selected above can be further categorized based on their specific topics, such as sentiment analysis, toxic language detection, commonsense categorization, and others. Among these, we designate 32 sentiment analysis datasets as the third level.

#### Yelp (single tasks, single domain). The sentiment analysis datasets mentioned above span various domains, such as movie and restaurant reviews. At the most fine-grained level, we choose the Yelp dataset (Zhang et al., 2015) as the representative task to evaluate the model’s performance in a highly specialized domain.

Task Coverage | GPT4-Instruct | LIMA | specialist
---|---|---|---
SuperNI | 25.54 | 12.65 | 54.92
Classification | 53.20 | 46.84 | 80.02
Sentiment | 68.66 | 51.46 | 90.71
Yelp | 91.68 | 65.52 | 98.11

Table 1: The performance of generalists and specialists on tasks of different coverage levels on LLaMA-7B. The specialists are trained with 10k task-specific instances. For SuperNI, the performance is measured by Rouge-L, while the others are measured by accuracy.

### 4.2 Evaluation Setup

For the SuperNI level, we follow the same evaluation protocol as in Wang et al. (2022b) and report Rouge-L Lin (2004). For the decoding strategy, we adopt greedy search with a maximum generation length of 512 (we leave the study of more advanced decoding methods (Holtzman et al., 2019; Su et al., 2022; Yang et al., 2023) as future work). For the Classification, Sentiment, and Yelp levels, we follow previous studies Brown et al. (2020); Sanh et al. (2022) and utilize a classification-with-options approach, where we prompt the model with a set of options and compute the likelihood of each option being the response. The option with the highest probability is taken as the model’s prediction, and we report the model’s accuracy.

### 4.3 Main Results

#### Generalist models lag behind specialist models across all coverage levels. We compare generalist models that are solely trained on generalist data (i.e., LIMA or GPT4-Instruct) to specialist models that are solely trained on specialist data (the 10k training instances we collect for each coverage level), using LLaMA-7B. From the results presented in Table 1, we can see that generalist models fall short in performance when compared to specialist models on all coverage levels. Notably, even as the coverage level becomes more encompassing, the performance gap between generalist models and specialist models does not shrink. For instance, on the most specific Yelp task, the specialist model outperforms the generalist model (GPT4-Instruct) by 6.43 absolute percentage points. On the SuperNI task, the performance gap between the specialist and the generalist (GPT4-Instruct) is 29.38 Rouge-L points. These results validate the necessity of specialist tuning for specific NLP tasks.

#### Transforming an LLM into a superior specialist demands a substantial amount of task-specific data. Figure 1 depicts the performance of specialist models on different tasks with varying amounts of training data (from 2k to 10k). From the results, we see that, for tasks with broader coverage (e.g., SuperNI and Classification), the model’s performance does not seem to converge with the 10k training instances. Even for narrow tasks such as Sentiment, at least 10k task-specific instances are required to fully unlock the LLM’s potential. These results reveal the data-hungry nature of building specialist models.

Figure 2: Results on held-out tasks (Classification) with LLaMA-7B.
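To make the classification-with-options protocol above concrete, the sketch below shows one way to score each candidate option by the log-likelihood a causal LM assigns to it and to take the highest-scoring option as the prediction. It is a minimal illustration rather than the exact evaluation code: the model checkpoint, the prompt text, and the unnormalized sum-of-log-probabilities scoring are our assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch of classification with options: each option is scored by the
# total log-probability the LM assigns to its tokens given the prompt, and the
# highest-scoring option is returned. Checkpoint name and prompt are placeholders.
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
model.eval()

def option_log_likelihood(prompt: str, option: str) -> float:
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    option_ids = tokenizer(option, add_special_tokens=False, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, option_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits  # (1, seq_len, vocab_size)
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    for i in range(prompt_ids.shape[1], input_ids.shape[1]):
        # The token at position i is predicted from the logits at position i - 1.
        total += log_probs[0, i - 1, input_ids[0, i]].item()
    return total

def predict(prompt: str, options: list[str]) -> str:
    scores = [option_log_likelihood(prompt, opt) for opt in options]
    return options[scores.index(max(scores))]

# Hypothetical sentiment instance:
# predict("Review: The food was great but the service was slow.\nSentiment:",
#         [" positive", " negative"])
```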
#### Generalist data can improve specialist performance when the task coverage is broad. Figure 1 also demonstrates that the inclusion of generalist data consistently results in performance improvements for both SuperNI and Classification across LLaMA 7B and 13B models. On average across different settings of specialist data, the introduction of generalist data leads to an improvement of 0.96 for LLaMA-7B and 0.74 for LLaMA-13B on SuperNI tasks, while for Classification tasks, it results in an enhancement of 1.53% for LLaMA-7B and 0.82% for LLaMA-13B. It is also worth noting that LIMA only has 1k instances, but it can even help improve performance when the number of specialist data is 10$\times$ larger. However, the results are the opposite for Sentiment and Yelp. For instance, the introduction of LIMA leads to a minor performance degeneration on Sentiment with 2k specialist data (a reduction of 0.25% for LLaMA-7B and 0.56% for LLaMA-13B). In the case of the Yelp task, the impact of including generalist data (both GPT4-Instruct and LIMA) appears to be minimal on the overall model performance. #### The performance gain is most evident when the amount of specialist data is limited. We can see that the performance gap between specialists trained with and without generalist data shrinks as the amount of specialist data increases. For example, at the Classification level, when the specialist data comprises only 2k instances, the inclusion of GPT4-Instruct enhances LLaMA-7B’s accuracy from 65.36% to 71.31% (+5.95%) and LLaMA-13B’s accuracy from 70.59% to 73.13% (+2.54%). However, when the number of specialist data reaches 10k instances, the addition of GPT4-Instruct only leads to smaller improvements, from 80.02% to 80.17% (+0.15%) for LLaMA-7B, and from 81.01% to 81.93% (+0.92%) for LLaMA-13B, respectively. #### The performance gain is less pronounced when the model scale is larger. As shown in Figure 1, when comparing the results of the 7B and 13B models, the trend of change in the effect of integrating generalist data is consistent for both models. However, it is worth noting that as the model scale is larger, the performance gain is less pronounced. Specifically, when the model scales up from 7B to 13B, the average improvement achieved by adding GPT4-Instruct on SuperNI decreases from 1.49 to 0.58, and the improvement in Classification reduces from 2.00% to 1.18%. Figure 3: Results using different amounts of generalist data (Classification, 10k specialist data) with LLaMA-7B. ### 4.4 Further Analysis For a deeper understanding of the impact of generalist data, here we present additional analyses. Unless otherwise specified, all experiments use LLaMA-7B as the foundation model. #### Cross-task Generalization. For the Classification level, recall that we exclude some classification tasks when constructing the training data. These tasks can be used as hold-out tasks to examine the specialist’s cross-task generalization ability. The results are shown in Figure 2. It can be observed that the accuracy on held-out tasks fluctuates in small ranges from 50.98% to 57.55% across different amounts of specialist data. However, upon incorporating LIMA, the average absolute accuracy improvement on the hold-out task increases by 2.70%, while adding GPT4-Instruct results in a 6.12% rise in absolute accuracy. This indicates that generalist data can greatly improve the cross-task generalization of specialist models. #### Number of Generalist Data. 
To study the effect of the amount of generalist data, we additionally partition the GPT4-Instruct dataset into five random parts and test the model’s performance when using different proportions of the dataset. The experiments are conducted at the Classification level with a fixed quantity of 10k specialist data. As shown in Figure 3, even with only 10k generalist instances, the model’s accuracy rises from 78.12% to 82.48%. Another interesting finding is that further increasing the generalist data to 50k brings only small improvements (from 82.48% to 84.0%). These results, together with our experiments with LIMA, suggest that adding a small amount of generalist data is sufficient to improve specialist performance.

Figure 4: The results using different test instructions (Classification) with LLaMA-7B.

#### Cross-instruction Robustness. In all previous experiments, the models are trained and tested using the same instructions for each dataset. Now, we assess the model’s robustness when confronted with alternative instructions that have not appeared during training. To do so, we employ ChatGPT OpenAI (2022) to generate 20 semantically equivalent instructions based on the original instruction. Figure 4 reports the results on these unseen instructions. As seen, the models trained with the addition of generalist data exhibit a substantial improvement in average accuracy compared to the models trained with specialist data only. For instance, when the specialist data is limited to 2k instances, incorporating generalist data leads to a 6.64% absolute improvement on average compared to the specialist model. At the same time, the incorporation of generalist data also reduces the performance gap between the best-performing and worst-performing runs from 4.04% to 2.42%.

## 5 Experiments II: The Required Skills of the Target Tasks

We hypothesize that the model’s ability to perform specific NLP tasks can be attributed to a mix of several core capabilities. As such, we set up three target tasks that focus on three key skills, which are detailed below.

### 5.1 Skill Taxonomy

Figure 5: Comparison of models trained with different combinations of specialist and generalist data across different tasks. We report F1 score for Factual Knowledge and accuracy for other levels.

Task Skill | GPT4-Instruct | LIMA | specialist
---|---|---|---
Factual Knowledge | 14.78 | 17.28 | 46.37
Understanding | 35.10 | 30.82 | 76.77
Reasoning | 28.02 | 26.58 | 63.40

Table 2: The performance of generalists and specialists on tasks focusing on different skills. The specialists are trained with 10k task-specific instances on LLaMA-7B. For Factual Knowledge, the performance is measured by F1 score, while the others are measured by accuracy.

#### Factual Knowledge is essential for models to serve information needs. We use three knowledge-intensive datasets: Natural Questions Kwiatkowski et al. (2019), WebQuestions Berant et al. (2013), and TriviaQA Joshi et al. (2017). All three datasets consist of entity-centric questions, making them suitable for probing models’ ability to activate and utilize factual knowledge. Following previous work Brown et al. (2020), we evaluate under the closed-book setting, where models are required to answer questions without the help of any external knowledge grounding.

#### Understanding refers to the capability to interpret input text. We choose the RACE dataset Lai et al. (2017).
RACE comprises data collected from English examinations in China and is specifically designed to assess the model’s ability to read and deeply comprehend texts in real-world scenarios.

#### Reasoning is another fundamental ability for models to solve complex tasks. We use the SNLI Bowman et al. (2015) dataset for implicit reasoning, the CQA dataset Talmor et al. (2019) for commonsense reasoning, and the AQUA Ling et al. (2017) dataset for arithmetic reasoning.

### 5.2 Evaluation Setup

For the Factual Knowledge tasks, we use greedy search with a maximum generation length of 512. We adopt the F1 score as the evaluation metric following Brown et al. (2020). For the Understanding and Reasoning tasks, we utilize the same classification-with-options method detailed in Section 4.2 and report the model accuracy.

### 5.3 Results and Analysis

#### Generalist models lag behind specialist models across all task skills. Similar to Experiment I, we commence by comparing specialist and generalist models across three target tasks, each concentrating on distinct skills. The outcomes presented in Table 2 indicate that the generalist models consistently underperform the specialist models. For the Factual Knowledge task, the specialist model outperforms the generalist model by 29.09 F1 points. For the Understanding task, the specialist model surpasses the generalist model by 41.67 absolute accuracy points. For the Reasoning task, the specialist model exceeds the generalist model by 35.38 absolute accuracy points. Collectively, these findings substantiate the necessity of specialist tuning for accomplishing specific tasks.

#### Incorporating GPT4-Instruct impairs the model’s factual knowledge, while integrating LIMA offers benefits. As illustrated in Figure 5, we observe the varying impact of different generalist data on the model’s performance in the Factual Knowledge task. In particular, when GPT4-Instruct is incorporated, the F1 score declines. Conversely, when LIMA data is integrated, the F1 score increases. We argue that this difference stems from the fact that GPT4-Instruct is machine-generated, while LIMA is human-authored. The rationale is that machine-generated data may contain hallucinations, thus impairing the model’s ability to recall factual knowledge. To validate our hypothesis, we conduct experiments using additional generalist datasets, namely Dolly Labs (2023) and Evol-Instruct Xu et al. (2023). Dolly consists of manually curated data generated by Databricks employees. Evol-Instruct uses more complex instructions than GPT4-Instruct and collects responses from ChatGPT Fang et al. (2023). As observed in Figure 6, adding Dolly does not impair the performance, but incorporating Evol-Instruct leads to similar performance degradation as GPT4-Instruct. The above results are consistent with our hypothesis that machine-generated generalist data might adversely affect the model’s factual knowledge due to hallucinations. For a more rigorous comparison, we use ChatGPT to generate responses for the 1k instructions in LIMA. The resulting 1k instruction-response pairs form a new generalist dataset, which we call LIMA-Chat. The only difference between LIMA-Chat and LIMA is that the responses in LIMA-Chat are machine-generated, while those in LIMA are human-written. From Figure 6, we can see that LIMA-Chat indeed harms the performance while LIMA improves it.
The above results suggest that the choice of generalist data is crucial for target tasks that heavily rely on factual knowledge. Figure 6: Results on the Factual Knowledge with LLaMA-7B. #### Adding generalist data enhances the understanding ability. The results of the Understanding task are presented in Figure 5. It is evident that the addition of GPT4-Instruct greatly improves the model’s performance when the specialist data is only 2k or 4k instances. However, as the number of specialist data further increases, the improvement diminishes. This suggests that the inclusion of generalist data can enhance the model’s comprehension ability when the specialist data is limited. #### Adding generalist data enhances the reasoning ability. We further evaluate the consequences of incorporating generalist data on the model’s reasoning ability, as demonstrated in Figure 5. Notably, unlike Understanding, where the improvements from adding generalist data gradually diminish, the benefits of incorporating generalist data on the Reasoning tasks are persistent across different amounts of specialist data (an average improvement of 0.65% on LLaMA-7B and 1.12% on LLaMA-13B). This phenomenon could be attributed to the fact that the activation of reasoning capabilities relies on diverse instruction data, and specialist data can be too narrow to fully unlock the true potential of LLMs. #### Effect of Model Scale. For Factual Knowledge, increasing the model size from 7B to 13B results in more substantial performance improvements compared to increasing the amount of specialist data. This observation aligns with previous work Brown et al. (2020), which indicates that an LLM’s knowledge is mostly obtained through its pre-training. For Understanding, increasing the model size is as beneficial as adding more specialist data. For Reasoning, increasing the model size does not yield improvements as noticeable as Factual Knowledge and Understanding. We speculate that the emergence of strong reasoning abilities requires a larger model scale Wei et al. (2022b). #### Generalist data plays a vital role in enhancing a model’s understanding and reasoning capabilities, thereby increasing its effectiveness in addressing task-specific objectives. We dissect the model’s capabilities into three core components: (i) factual knowledge, (ii) understanding, and (iii) reasoning abilities. We demonstrate that incorporating generalist data does not improve the model’s factual knowledge and, in some cases, may even be detrimental if it includes hallucinated information. Nevertheless, comparative experiments focusing on understanding and reasoning abilities reveal that generalist data effectively fosters the model’s comprehension and significantly augments its reasoning capabilities. This observed efficacy can be ascribed to the capacity of generalist data to facilitate the model’s understanding and execution of diverse tasks. The wide range of instructions embedded within the generalist data stimulates the model’s comprehension and reasoning faculties, empowering it to grasp specific requirements associated with various tasks more effectively. Moreover, by activating the model’s reasoning abilities, it showcases enhanced performance across an assortment of tasks involving different levels of complexity. The activation of comprehension and reasoning abilities further broadens the model’s cognitive capacity, allowing it to derive a more comprehensive understanding based on existing information pertinent to the given task. 
Consequently, the inclusion of generalist data amplifies the model’s task- specific capabilities, as it becomes adept at utilizing its expanded cognitive capacity to achieve superior performance. ## 6 Conclusions In this study, we thoroughly investigated the interaction between specialist data and generalist data in the context of targeting specific NLP tasks. Our findings consistently demonstrate that the addition of generalist data leads to performance improvement when the task coverage is broad. This highlights the potential benefits of incorporating generalist data, particularly when the availability of specialist data is limited. Furthermore, we extensively examined the impact of integrating generalist data on the model’s core capabilities. Surprisingly, we observed that the inclusion of generalist data did not enhance the model’s factuality. In fact, generalist data containing hallucinatory information can have a negative impact. On the other hand, our experiments also revealed that the introduction of generalist data has positive effects on the model’s understanding and reasoning abilities. Overall, our findings highlight the importance of leveraging generalist data to enhance the understanding and reasoning capabilities of NLP models, thereby enabling them to tackle various tasks more effectively. However, careful consideration should be given to the quality and reliability of the generalist data to avoid adverse effects on the model’s factual knowledge. ## Limitations While this work aims to provide a comprehensive investigation, we note that we do not exhaustively cover all possible evaluations. For example, we do not discuss NLP tasks such as summarization, translation, etc. Instead, we focus on constructing a hierarchy of four target tasks of different coverage levels and three target tasks focusing on different core skills. In addition, due to resource constraints, we only use LLaMA 7B/13B as our foundation models. We leave the investigation on different types and scales of models to our future work. ## Acknowledgments This work was partly supported by the National Key Research and Development Program of China (No.2020YFB1708200), and the Shenzhen Science and Technology Program (JSGG20220831110203007). ## References * Bai et al. (2022) Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback. _ArXiv preprint_ , abs/2204.05862. * Berant et al. (2013) Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In _Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing_. * Bowman et al. (2015) Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015\. A large annotated corpus for learning natural language inference. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_. * Brown et al. (2020) Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. 
Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In _Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual_. * Cheng et al. (2023) Daixuan Cheng, Shaohan Huang, and Furu Wei. 2023. Adapting large language models via reading comprehension. _ArXiv preprint_ , abs/2309.09530. * Chiang et al. (2023) Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. * Chung et al. (2022) Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Y. Zhao, Yanping Huang, Andrew M. Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models. _ArXiv preprint_ , abs/2210.11416. * Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_. * Fang et al. (2023) Tao Fang, Shu Yang, Kaixin Lan, Derek F Wong, Jinpeng Hu, Lidia S Chao, and Yue Zhang. 2023. Is chatgpt a highly fluent grammatical error correction system? a comprehensive evaluation. _ArXiv preprint_ , abs/2304.01746. * Fleming et al. (2023) Scott L Fleming, Alejandro Lozano, William J Haberkorn, Jenelle A Jindal, Eduardo P Reis, Rahul Thapa, Louis Blankemeier, Julian Z Genkins, Ethan Steinberg, Ashwin Nayak, et al. 2023. Medalign: A clinician-generated dataset for instruction following with electronic medical records. _ArXiv preprint_ , abs/2308.14089. * Gudibande et al. (2023) Arnav Gudibande, Eric Wallace, Charlie Snell, Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey Levine, and Dawn Song. 2023. The false promise of imitating proprietary llms. _ArXiv preprint_ , abs/2305.15717. * Holtzman et al. (2019) Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. _arXiv preprint arXiv:1904.09751_. * Jiao et al. (2023a) Wenxiang Jiao, Jen-tse Huang, Wenxuan Wang, Xing Wang, Shuming Shi, and Zhaopeng Tu. 2023a. Parrot: Translating during chat using large language models. _ArXiv preprint_ , abs/2304.02426. * Jiao et al. (2023b) Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Xing Wang, and Zhaopeng Tu. 2023b. Is chatgpt A good translator? A preliminary study. _ArXiv preprint_ , abs/2301.08745. * Joshi et al. (2017) Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_. * Kingma and Ba (2015) Diederik P. Kingma and Jimmy Ba. 2015. 
Adam: A method for stochastic optimization. In _3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings_. * Kwiatkowski et al. (2019) Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. _Transactions of the Association for Computational Linguistics_ , 7\. * Labs (2023) Databricks Labs. 2023. Dolly: A tool for data versioning and dataops in databricks. https://github.com/databrickslabs/dolly. Accessed on June 8, 2023. * Lai et al. (2017) Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale ReAding comprehension dataset from examinations. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_. * Li et al. (2023) Siheng Li, Cheng Yang, Yichun Yin, Xinyu Zhu, Zesen Cheng, Lifeng Shang, Xin Jiang, Qun Liu, and Yujiu Yang. 2023. AutoConv: Automatically generating information-seeking conversations with large language models. In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_. * Lin (2004) Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In _Text Summarization Branches Out_. * Ling et al. (2017) Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_. * Liu et al. (2023) Aiwei Liu, Xuming Hu, Lijie Wen, and Philip S Yu. 2023. A comprehensive evaluation of chatgpt’s zero-shot text-to-sql capability. _ArXiv preprint_ , abs/2303.13547. * Mishra et al. (2021) Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2021. Natural instructions: Benchmarking generalization to new tasks from natural language instructions. _ArXiv preprint_ , abs/2104.08773. * OpenAI (2022) OpenAI. 2022. Introducing chatgpt. https://openai.com/blog/chatgpt. * OpenAI (2023) OpenAI. 2023. GPT-4 technical report. _ArXiv preprint_ , abs/2303.08774. * Ouyang et al. (2022) Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022\. Training language models to follow instructions with human feedback. _Advances in Neural Information Processing Systems_ , 35. * Peng et al. (2023) Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. 2023. Instruction tuning with GPT-4. _ArXiv preprint_ , abs/2304.03277. * Qin et al. (2023) Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, and Diyi Yang. 2023. Is chatgpt a general-purpose natural language processing task solver? _ArXiv preprint_ , abs/2302.06476. * Radford et al. (2018) Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training. * Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. _OpenAI blog_ , 1(8). * Sanh et al. (2022) Victor Sanh, Albert Webson, Colin Raffel, Stephen H. 
Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal V. Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Févry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M. Rush. 2022. Multitask prompted training enables zero-shot task generalization. In _The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022_. * Stiennon et al. (2020) Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F. Christiano. 2020. Learning to summarize with human feedback. In _Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual_. * Su and Collier (2022) Yixuan Su and Nigel Collier. 2022. Contrastive search is what you need for neural text generation. _arXiv preprint arXiv:2210.14140_. * Su et al. (2023a) Yixuan Su, Tian Lan, and Deng Cai. 2023a. Openalpaca: A fully open-source instruction-following model based on openllama. https://github.com/yxuansu/OpenAlpaca. * Su et al. (2023b) Yixuan Su, Tian Lan, Huayang Li, Jialu Xu, Yan Wang, and Deng Cai. 2023b. Pandagpt: One model to instruction-follow them all. _ArXiv preprint_ , abs/2305.16355. * Su et al. (2022) Yixuan Su, Tian Lan, Yan Wang, Dani Yogatama, Lingpeng Kong, and Nigel Collier. 2022\. A contrastive framework for neural text generation. _Advances in Neural Information Processing Systems_ , 35:21548–21561. * Talmor et al. (2019) Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_. * Taori et al. (2023) Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca. * Touvron et al. (2023) Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models. _ArXiv preprint_ , abs/2302.13971. * Wang et al. (2023a) Haochun Wang, Chi Liu, Nuwa Xi, Zewen Qiang, Sendong Zhao, Bing Qin, and Ting Liu. 2023a. Huatuo: Tuning llama model with chinese medical knowledge. _ArXiv preprint_ , abs/2304.06975. * Wang et al. (2023b) Xiao Wang, Weikang Zhou, Can Zu, Han Xia, Tianze Chen, Yuansen Zhang, Rui Zheng, Junjie Ye, Qi Zhang, Tao Gui, Jihua Kang, Jingsheng Yang, Siyuan Li, and Chunsai Du. 2023b. Instructuie: Multi-task instruction tuning for unified information extraction. _ArXiv preprint_ , abs/2304.08085. * Wang et al. (2022a) Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2022a. 
Self-instruct: Aligning language model with self generated instructions. _ArXiv preprint_ , abs/2212.10560. * Wang et al. (2022b) Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Atharva Naik, Arjun Ashok, Arut Selvan Dhanasekaran, Anjana Arunkumar, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Kuntal Kumar Pal, Maitreya Patel, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Savan Doshi, Shailaja Keyur Sampat, Siddhartha Mishra, Sujan Reddy A, Sumanta Patro, Tanay Dixit, and Xudong Shen. 2022b. Super-NaturalInstructions: Generalization via declarative instructions on 1600+ NLP tasks. In _Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing_. * Wei et al. (2022a) Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022a. Finetuned language models are zero-shot learners. In _The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022_. * Wei et al. (2022b) Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022b. Emergent abilities of large language models. _Trans. Mach. Learn. Res._ , 2022. * Xu et al. (2023) Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. 2023. Wizardlm: Empowering large language models to follow complex instructions. _ArXiv preprint_ , abs/2304.12244. * Yang et al. (2023) Haoran Yang, Deng Cai, Huayang Li, Wei Bi, Wai Lam, and Shuming Shi. 2023. A frustratingly simple decoding method for neural text generation. _arXiv preprint arXiv:2305.12675_. * Zhang et al. (2015) Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In _Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada_. * Zhang et al. (2023) Yue Zhang, Leyang Cui, Deng Cai, Xinting Huang, Tao Fang, and Wei Bi. 2023. Multi-task instruction tuning of llama for specific scenarios: A preliminary study on writing assistance. _ArXiv preprint_ , abs/2305.13225. * Zhou et al. (2023) Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. 2023. Lima: Less is more for alignment. _ArXiv preprint_ , abs/2305.11206. ## Appendix A Instruction Template ============ Instruction Format =========== Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. $\\#\\#\\#$Instruction: [Task Prompt] $\\#\\#\\#$Input: [Input Text] $\\#\\#\\#$Response: [Output Text]
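To connect the Appendix A template to the training setup in Section 3, the snippet below sketches how a single {instruction, response} instance could be rendered into one training string. Only the wording of the template comes from the appendix; the exact line breaks, the function name, and the example instance are our own illustrative assumptions.

```python
# Illustrative rendering of the Appendix A template for one {instruction, response}
# pair. The wording follows the appendix; whitespace and field handling are assumptions.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

def build_training_example(instruction: str, input_text: str, response: str) -> str:
    """Render one specialist or generalist instance into the unified format."""
    return ALPACA_TEMPLATE.format(instruction=instruction, input=input_text) + response

# Hypothetical SuperNI-style instance:
example = build_training_example(
    instruction="Classify the sentiment of the given review as positive or negative.",
    input_text="The food was cold and the staff was unfriendly.",
    response="negative",
)
```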
# Univalency of certain transform of univalent functions

Milutin Obradović Department of Mathematics, Faculty of Civil Engineering, University of Belgrade, Bulevar Kralja Aleksandra 73, 11000, Belgrade, Serbia <EMAIL_ADDRESS> and Nikola Tuneski Department of Mathematics and Informatics, Faculty of Mechanical Engineering, Ss. Cyril and Methodius University in Skopje, Karpoš II b.b., 1000 Skopje, Republic of North Macedonia <EMAIL_ADDRESS>

###### Abstract.

We consider the univalency problem in the unit disc ${\mathbb{D}}$ of the function $g(z)=\frac{(z/f(z))-1}{-a_{2}},$ where $f$ belongs to some classes of univalent functions in ${\mathbb{D}}$ and $a_{2}=\frac{f^{\prime\prime}(0)}{2}\neq 0$.

###### Key words and phrases: analytic, univalent, transform

###### 2000 Mathematics Subject Classification: 30C45

## 1. Introduction

Let ${\mathcal{A}}$ denote the family of all analytic functions $f$ in the unit disk ${\mathbb{D}}:=\\{z\in{\mathbb{C}}:\,|z|<1\\}$ satisfying the normalization $f(0)=0=f^{\prime}(0)-1$, i.e., $f$ has the form (1) $f(z)=z+a_{2}z^{2}+a_{3}z^{3}+\ldots.$ Let $\mathcal{S}$, $\mathcal{S}\subset\mathcal{A}$, denote the class of univalent functions in ${\mathbb{D}}$, let $\mathcal{S}^{\star}$ be the subclass of ${\mathcal{A}}$ (and of $\mathcal{S}$) consisting of functions starlike in ${\mathbb{D}}$, and let $\mathcal{U}$ denote the set of all $f\in{\mathcal{A}}$ satisfying the condition (2) $\left|\operatorname{U}_{f}(z)\right|<1\qquad(z\in{\mathbb{D}}),$ where (3) $\operatorname{U}_{f}(z):=\left(\frac{z}{f(z)}\right)^{2}f^{\prime}(z)-1.$ In [5, Theorem 4] the authors consider the problem of univalency for the function (4) $g(z)=\frac{(z/f(z))-1}{-a_{2}},$ where $f\in\mathcal{U}$ has the form (1) with $a_{2}\neq 0$. They proved the following

Theorem A. Let $f\in\mathcal{U}$. Then, for the function $g$ defined by the expression (4) we have

* (a) $|g^{\prime}(z)-1|<1$ for $|z|<|a_{2}|/2$;
* (b) $g\in\mathcal{S}^{\star}$ in the disk $|z|<|a_{2}|/2$, and even more $\left|\frac{zg^{\prime}(z)}{g(z)}-1\right|<1$ in the same disk;
* (c) $g\in\mathcal{U}$ in the disk $|z|<|a_{2}|/2$ if $0<|a_{2}|\leq 1.$

These results are the best possible.

For the proof of the previous theorem the authors used the following representation for the class $\mathcal{U}$ (see [2] and [4]). Namely, if $f\in\mathcal{U}$ then (5) $\frac{z}{f(z)}=1-a_{2}z-z\omega(z),$ where the function $\omega$ is analytic in ${\mathbb{D}}$ with $|\omega(z)|\leq|z|<1$ for all $z\in{\mathbb{D}}$. The corresponding function $g$ from (4) has the form (6) $g(z)=z+\frac{1}{a_{2}}z\omega(z).$

## 2. Results

In this paper we consider other cases of Theorem A(c) and certain related results.

###### Theorem 1. Let $f\in\mathcal{U}$. Then the function $g$ defined by equation (4) belongs to $\mathcal{U}$ in the disc $|z|<\sqrt{\frac{1-|a_{2}|+\sqrt{|a_{2}|^{2}+2|a_{2}|-3}}{2}},$ i.e., satisfies (2) on this disc, if $\frac{5}{4}\leq|a_{2}|\leq 2$.

###### Proof. For the first part of the proof we use the same method as in [5].
By the definition of the class $\mathcal{U},$ i.e., inequality (2), and using the following estimate for the function $\omega$, $|z\omega^{\prime}(z)-\omega(z)|\leq\frac{r^{2}-|\omega(z)|^{2}}{1-r^{2}},$ (where $|z|=r$ and $|\omega(z)|\leq r$), after some calculations, we obtain $\begin{split}|\operatorname{U}_{g}(z)|&=\left|\frac{\frac{1}{a_{2}}\left[z\omega^{\prime}(z)-\omega(z)\right]-\frac{1}{a^{2}_{2}}\omega^{2}(z)}{\left[1+\frac{1}{a_{2}}\omega(z)\right]^{2}}\right|\\\ &\leq\frac{|a_{2}|\cdot|z\omega^{\prime}(z)-\omega(z)|+|\omega(z)|^{2}}{\left(|a_{2}|-|\omega(z)|\right)^{2}}\\\ &\leq\frac{|a_{2}|\cdot\frac{r^{2}-|\omega(z)|^{2}}{1-r^{2}}+|\omega(z)|^{2}}{\left(|a_{2}|-|\omega(z)|\right)^{2}}\\\ &=:\frac{1}{1-r^{2}}\cdot\varphi(t).\end{split}$ Here, (7) $\varphi(t)=\frac{|a_{2}|r^{2}-(|a_{2}|-1+r^{2})t^{2}}{(|a_{2}|-t)^{2}}$ and $|\omega(z)|=t$, $0\leq t\leq r$. From here we have that $\varphi^{\prime}(t)=\frac{2|a_{2}|}{(|a_{2}|-t)^{3}}\cdot\left[r^{2}-(|a_{2}|-1+r^{2})t\right]$ (where $|a_{2}|-t>0$ since $|a_{2}|\geq\frac{5}{4}>1>t$). Next, $\varphi^{\prime}(t)=0$ for $t_{0}=\frac{r^{2}}{|a_{2}|-1+r^{2}}$ and $0\leq t_{0}\leq r$ if $\frac{r^{2}}{|a_{2}|-1+r^{2}}\leq r,$ which is equivalent to $r^{2}-r+|a_{2}|-1\geq 0.$ The last relation is valid for $\frac{5}{4}\leq|a_{2}|\leq 2$ and every $0\leq r<1$. It means that the maximal value of the function $\varphi$ on $[0,r]$ is $\varphi(t_{0})=\frac{(|a_{2}|-1+r^{2})r^{2}}{(|a_{2}|-1)(|a_{2}|+r^{2})}.$ Finally, $|\operatorname{U}_{g}(z)|\leq\frac{1}{1-r^{2}}\cdot\varphi(t_{0})=\frac{(|a_{2}|-1+r^{2})r^{2}}{(1-r^{2})(|a_{2}|-1)(|a_{2}|+r^{2})}<1$ if $r^{4}-(1-|a_{2}|)r^{2}+(1-|a_{2}|)<0,$ or if $r<\sqrt{\frac{1-|a_{2}|+\sqrt{|a_{2}|^{2}+2|a_{2}|-3}}{2}}.$ This completes the proof. ∎

For our next result we need the following lemma.

###### Lemma 1. Let $f\in\mathcal{A}$ be of the form (1). If (8) $\sum_{2}^{\infty}n|a_{n}|\leq 1,$ then $\begin{split}|f^{\prime}(z)-1|&<1\qquad(z\in{\mathbb{D}}),\\\\[5.69054pt] \left|\frac{zf^{\prime}(z)}{f(z)}-1\right|&<1\qquad(z\in{\mathbb{D}})\end{split}$ (i.e., $f\in\mathcal{S}^{\star}$), and $f\in\mathcal{U}$.

For the proof of $f\in\mathcal{U}$ in the lemma see [4], while the rest easily follows. Further, let $\mathcal{S}^{+}$ denote the class of univalent functions in the unit disc with the representation (9) $\frac{z}{f(z)}=1+b_{1}z+b_{2}z^{2}+\ldots,\quad b_{n}\geq 0,\,\,n=1,2,3,\ldots.$ For example, the Silverman class (the class with negative coefficients) is included in the class $\mathcal{S}^{+}$, as well as the Koebe function $k(z)=\frac{z}{(1+z)^{2}}\in\mathcal{S}^{+}$. The following characterization is valid for the class $\mathcal{S}^{+}$ (for details see [3]): (10) $f\in\mathcal{S}^{+}\quad\Leftrightarrow\quad\sum_{n=2}^{\infty}(n-1)b_{n}\leq 1.$

###### Theorem 2. Let $f\in\mathcal{S}^{+}$. Then the function $g$ defined by (4) belongs to the class $\mathcal{U}$ in the disc $|z|<|a_{2}|/2$ and the result is the best possible.

###### Proof.
Using the representation (9), the corresponding function $g$ has the form $g(z)=\frac{\frac{z}{f(z)}-1}{-a_{2}}=\frac{\frac{z}{f(z)}-1}{b_{1}}=z+\sum_{2}^{\infty}\frac{b_{n}}{b_{1}}z^{n}\quad(b_{1}\neq 0),$ and from here $\frac{1}{r}g(rz)=z+\sum_{2}^{\infty}\frac{b_{n}}{b_{1}}r^{n-1}z^{n}\quad(0<r\leq 1).$ Then, after applying Lemma 1, we have $\begin{split}\sum_{2}^{\infty}n|a_{n}|&=\sum_{2}^{\infty}n\frac{b_{n}}{b_{1}}r^{n-1}\\\ &=\frac{1}{b_{1}}\sum_{2}^{\infty}(n-1)b_{n}\frac{n}{n-1}r^{n-1}\\\ &\leq\frac{2r}{b_{1}}\sum_{2}^{\infty}(n-1)b_{n}\leq\frac{2r}{b_{1}}\leq 1\end{split}$ if $r\leq\frac{b_{1}}{2}=\frac{|a_{2}|}{2}$. It means, by the same lemma, that $g\in\mathcal{U}$ in the disc $|z|<|a_{2}|/2.$ In order to show that the result is the best possible, let consider the function $f_{1}$ defined by (11) $\frac{z}{f_{1}(z)}=1+bz+z^{2},\quad 0<b\leq 2.$ Then, $f_{1}\in\mathcal{S}^{+}$ is of type $f_{1}(z)=z-bz^{2}+\cdots$, so the function $g_{1}(z)=\frac{\frac{z}{f_{1}(z)}-1}{b}=z+\frac{1}{b}z^{2}$ is such that $\left|\left(\frac{z}{g_{1}(z)}\right)^{2}g^{\prime}_{1}(z)-1\right|\leq\frac{\frac{1}{b^{2}}|z|^{2}}{\left(1-\frac{1}{b}|z|\right)^{2}}<1$ when $|z|<b/2$. This implies that $g_{1}$ belongs to the class $\mathcal{U}$ in the disc $|z|<b/2$. On the other hand, since $g^{\prime}_{1}(-b/2)=0$, the function $g_{1}$ is not univalent in a bigger disc, implying that the result is the best possible. ∎ ###### Theorem 3. Let $f\in\mathcal{S}$. Then the function $g$ defined by (4) belongs to the class $\mathcal{U}$ in the disc $|z|<r_{0}$, where $r_{0}$ is the unique real root of the equation (12) $\frac{3r^{2}-2r^{4}}{(1-r^{2})^{2}}-\ln(1-r^{2})=|a_{2}|^{2}$ on the interval $(0,1)$. ###### Proof. We apply the same method as in the proof of the previous theorem. Namely, if $f\in\mathcal{S}$ has the representation (9), then (13) $\sum_{n=2}^{\infty}(n-1)|b_{n}|^{2}\leq 1$ (see [1], Theorem 11, p. 193, Vol. 2). Also, using (4), (9) and (13), we have $a_{2}=-b_{1}$, and $\frac{1}{r}g(rz)=z+\sum_{2}^{\infty}\frac{b_{n}}{b_{1}}r^{n-1}z^{n},\quad 0<r\leq 1.$ So, $\begin{split}\sum_{n=2}^{\infty}n|a_{n}|&=\sum_{n=2}^{\infty}n\frac{|b_{n}|}{|b_{1}|}r^{n-1}\\\ &=\frac{1}{|b_{1}|}\sum_{n=2}^{\infty}\sqrt{n-1}\cdot|b_{n}|\cdot\frac{n}{\sqrt{n-1}}\cdot r^{n-1}\\\ &\leq\frac{1}{|b_{1}|}\cdot\left(\sum_{n=2}^{\infty}(n-1)|b_{n}|^{2}\right)^{1/2}\cdot\left(\sum_{n=2}^{\infty}\frac{n^{2}}{n-1}r^{2(n-1)}\right)^{1/2}\\\ &\leq\frac{1}{|b_{1}|}\left(r^{2}\sum_{n=2}^{\infty}(n-1)(r^{2})^{n-2}+2r^{2}\sum_{n=2}^{\infty}(r^{2})^{n-2}+\sum_{n=2}^{\infty}\frac{1}{n-1}(r^{2})^{n-1}\right)^{1/2}\\\ &=\frac{1}{|b_{1}|}\left[\frac{3r^{2}-2r^{4}}{(1-r^{2})^{2}}-\ln(1-r^{2})\right]^{1/2}\leq 1\end{split}$ if $|z|<r_{0}$, where $r_{0}$ is the root of the equation $\frac{3r^{2}-2r^{4}}{(1-r^{2})^{2}}-\ln(1-r^{2})=|b_{1}|^{2}(=|a_{2}|^{2}).$ We note that the function on the left side of this equation is an increasing one on the interval $(0,1)$, so the equation has a unique root when $0<|a_{2}|\leq 2.$ ∎ ## References * [1] A. W. Goodman, Univalent functions, Vols. 1-2, Mariner, Tampa, Florida, 1983. * [2] Obradović M., Pascu N. N. and Radomir I., A class of univalent functions, Math. Japonica, 44(3) (1996), 565–568. * [3] M. Obradović and S. Ponnusamy, Coefficient characterization for certain classes of univalent functions, Bull. Belg. Math. Soc. (Simon Stevin) 16 (2009), 251–263. * [4] Obradović M., Ponnusamy S., On the class $\mathcal{U}$, Proc. 21st Annual Conference of the Jammu Math. Soc. 
and a National Seminar on Analysis and its Application, 11–26, 2011. * [5] Obradović M.,Tuneski N., Some properties of the class $\mathcal{U}$, Ann. Univ. Mariae Curie-Skłodowska Sect. A, 73(1) (2019), 45–56.
1 Technical University of Munich, Germany
2 Helmholtz AI and Institute of Machine Learning in Biomedical Imaging, Munich, Germany
3 Klinikum Rechts der Isar, Munich, Germany
4 Imperial College London, United Kingdom
5 School of Biomedical Engineering and Imaging Sciences, King’s College London, United Kingdom

# Unsupervised Analysis of Alzheimer’s Disease Signatures using 3D Deformable Autoencoders

Mehmet Yigit Avci∗ 1, Emily Chan∗ 1,2,5, Veronika Zimmer 1, Daniel Rueckert 1,3,4, Benedikt Wiestler 3, Julia A. Schnabel 1,2,5, Cosmin I. Bercea 1,2

###### Abstract

With the increasing incidence of neurodegenerative diseases such as Alzheimer’s Disease (AD), there is a need for further research that enhances detection and monitoring of these diseases. We present _MORPHADE_ (Morphological Autoencoders for Alzheimer’s Disease Detection), a novel unsupervised learning approach which uses deformations to allow the analysis of 3D T1-weighted brain images. To the best of our knowledge, this is the first use of deformations with deep unsupervised learning to not only detect, but also localize and assess the severity of structural changes in the brain due to AD. We obtain markedly higher anomaly scores in clinically important areas of the brain in subjects with AD compared to healthy controls, showcasing that our method is able to effectively locate AD-related atrophy. We additionally observe a visual correlation between the severity of atrophy highlighted in our anomaly maps and medial temporal lobe atrophy scores evaluated by a clinical expert. Finally, our method achieves an AUROC of 0.80 in detecting AD, outperforming several supervised and unsupervised baselines. We believe our framework shows promise as a tool towards improved understanding, monitoring and detection of AD. To support further research and application, we have made our code publicly available at github.com/ci-ber/MORPHADE.

###### Keywords: Unsupervised learning · Registration · Classification

∗ These authors contributed equally to this work.

## 1 Introduction

Due to the increased prevalence of neurodegenerative diseases and their effects on cognitive function, the study of such diseases is a highly active research field. As the leading cause of dementia [1], Alzheimer’s disease (AD) is a particular focus of research advancements. However, the complex pathogenesis and progression mechanisms of AD remain only partially understood. Magnetic resonance imaging (MRI) has proven useful for the non-invasive tracking of AD-associated brain changes, such as hippocampal and amygdala atrophy and ventricular dilation [16, 11]. Notably, several supervised machine learning methods utilizing MRI have been proposed which yield improvements in AD identification [24, 15, 23]. However, such methods are restricted by the need for large, annotated data sets. In contrast, unsupervised anomaly detection techniques [7, 2, 21, 25] offer a promising solution by modeling the distribution of healthy brain images to identify and localize anomalies without relying on labeled data. Nevertheless, unsupervised approaches face challenges in accurately analyzing structural abnormalities, particularly regions of atrophy, which are critical in AD research [5]. Classical techniques using multi-atlas-based deformable registration [13] and morphometry methods [8, 3] have been proposed to analyze these structural changes.
However, such methods allow analysis to be conducted only on a population-level, for instance as deviations from an atlas. In this work, we propose Morphological Autoencoders for Alzheimer’s Disease Detection (MORPHADE), a novel unsupervised anomaly detection framework based on deformable autoencoders (AEs) [4] which leverages deformation networks to generate patient-specific anomaly maps from 3D T1-weighted MRI brain scans. These anomaly maps allow not only AD detection, but also crucially reveal the location and degree of atrophy. Our main contributions are as follows: * • We use deformation fields in an unsupervised framework to analyze AD-related changes in the brain. To the best of our knowledge, this is the first use of such an approach using deep learning in the context of AD. * • We extend deformable autoencoders to 3D, utilize adversarial training and propose a dual-deformation strategy to improve reconstruction fidelity and the localization of atrophy. * • We accurately identify AD-affected brain regions, aligning our findings with clinical expectations. * • We assess AD severity by correlating our findings with clinical medial temporal lobe atrophy scores, evaluated by a board-certified clinical expert. * • Through comprehensive validation, we demonstrate superior performance in AD detection compared to unsupervised and even supervised baselines. ## 2 Background In unsupervised anomaly detection, reconstruction-based frameworks such as autoencoders (AEs) can be used to learn the distribution of healthy samples and subsequently identify samples that deviate from this norm as anomalous. The encoder $E_{\theta}$ maps an input $x$ to a lower-dimensional latent space and then the decoder $D_{\phi}$ learns to reconstruct from this encoded representation. The parameters $\theta$, $\phi$ of the AE are optimized given healthy input data $\chi=\\{x_{i},...,x_{n}\\}$ by minimizing the mean squared error (MSE) between the inputs and their reconstructions: $MSE=min_{\theta,\phi}\sum_{i=1}^{N}||x_{i}-D_{\phi}(E_{\theta}(x_{i}))||^{2}\enspace.$ (1) It is then assumed that during inference, the AE will generate a so-called pseudo-healthy reconstruction, in which only in-distribution healthy tissue can be successfully reconstructed and thus any reconstruction errors can be thought of as anomalies. A subject-specific map of anomalies can then be obtained by taking the residual between an input $x$ and its reconstruction $x_{recon}=D_{\phi}(E_{\theta}(x))$ as follows: $m_{residual}=|x-x_{recon}|\enspace.$ (2) Deformable Autoencoders (AEs) [4] were proposed as a method to alleviate false positives in the anomaly maps due to the limited reconstruction capabilities of traditional AEs. Since the top layers of the AE contain spatial information, deformable AEs use these layers to estimate a dense deformation field $\boldsymbol{\Phi}$ that allows local adaptions of the pseudo-healthy reconstruction to the individual anatomy of the subject. The estimation of the deformation field is optimized using local normalized cross correlation (LNCC): $\mathcal{L}_{morph}=LNCC(x,x_{morph})+{\beta}||\boldsymbol{\Phi}||^{2}\enspace,$ (3) where $\beta$ is a weight that is kept relatively high to constrain the deformations to be smooth and local, allowing only small changes to the reconstructions. We therefore refer to this part of the network as the constrained deformer. 
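As a minimal illustration of the reconstruction-based scoring in Eqs. (1)-(2), the sketch below trains a small 3D autoencoder on healthy volumes with an MSE objective and computes voxel-wise residual maps at test time. The tiny architecture, layer sizes, and learning rate are placeholders for illustration, not the configuration used by MORPHADE.

```python
import torch
import torch.nn as nn

# Sketch of Eqs. (1)-(2): train an AE on healthy scans with MSE and use the
# voxel-wise residual |x - x_recon| as the anomaly map. Architecture and
# hyperparameters below are illustrative placeholders only.
class SmallAE3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(16, 8, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(8, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = SmallAE3D()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)

def training_step(x_healthy: torch.Tensor) -> float:
    # Eq. (1): minimize the MSE between healthy inputs and their reconstructions.
    recon = model(x_healthy)
    loss = nn.functional.mse_loss(recon, x_healthy)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def residual_map(x: torch.Tensor) -> torch.Tensor:
    # Eq. (2): voxel-wise absolute difference as a subject-specific anomaly map.
    return (x - model(x)).abs()
```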
The improved reconstruction, which we refer to as the morphed reconstruction, $x_{morph}$, can then be obtained by $x_{morph}=x_{recon}\circ\boldsymbol{\Phi}$. The authors also propose to use perceptual loss (PL) [12] weighted by the hyperparameter $\alpha$, in addition to the MSE when optimizing the AE parameters, to promote reconstructions that closely resemble the training distribution: $\mathcal{L}_{recon}=\text{MSE}(x,x_{recon})+\alpha\text{PL}(x,x_{recon})\enspace.$ (4) ## 3 Methods and Materials Figure 1: Our approach, MORPHADE, integrates a dual-deformation strategy with a 3D autoencoder and adversarial training. The constrained deformer refines the reconstruction to generate a residual map with reduced false positives, while the unconstrained deformer is used to produce a folding map that highlights anomalies. The residual and folding maps together produce an anomaly map that allows the localization and assessment of the severity of atrophy. We propose MORPHADE, shown in Fig. 1, which builds upon deformable AEs. Firstly, we employ a 3D convolutional AE to enable the use of 3D images with the framework. Secondly, since PL uses 2D networks pre-trained on ImageNet, we employ an adversarial loss [9] to increase the realness of the reconstructions. We train a discriminator by minimizing this adversarial loss; therefore, the reconstruction loss becomes: $\mathcal{L}_{recon}=\text{MSE}(x,x_{recon})+\gamma\text{Adversarial}(x,x_{recon})\enspace,$ (5) where ${\gamma}$ balances the production of realistic reconstructions while maintaining pixel-wise accuracy. Our major extension to the deformable AEs is the use of a dual-deformation strategy, in which we employ an unconstrained deformer in addition to the constrained deformer, with the aim of improving the localization of atrophic regions. As previously stated, the constrained deformer is trained with a high value of $\beta$ to improve the generation of the pseudo-healthy reconstructions and thus reduce false positives in the anomaly maps. In contrast, the unconstrained deformer has the goal of reverting the pseudo- healthy reconstruction back to its original anomalous state. The deformer is trained with the same loss as in Eq. 3, but with a low value of $\beta$, which allows the creation of unconstrained deformation fields. In such deformation fields, low values of deformation should occur in areas of healthy tissue. Conversely, in regions of atrophy, the deformation field exhibits foldings, or areas in which the mapping of the deformation from the pseudo-healthy reconstruction to the original image is not one-to-one due to the loss of tissue volume. The determinant of the Jacobian of the deformation map, $J_{\boldsymbol{\Phi}}$, can be used to determine local volume changes, with negative values indicating such foldings. Therefore, we highlight the anomalies by using the negative Jacobian values to generate a map of the foldings, $m_{foldings}=\max(0,-\det(J_{\boldsymbol{\Phi}}))$. We finally multiply these foldings pixel-wise with the residual map from the constrained deformer to generate an anomaly map with reduced false positives and improved atrophy localization: $\text{Anomaly Map}=m_{residual}\times m_{foldings}\enspace.$ (6) Implementation. All networks were trained with Adam optimizer. The discriminator was trained with a learning rate of $1.0e^{-4}$, otherwise $5.0e^{-4}$ was used. The framework was first trained with a high value of $\beta=10$. We motivate this choice in Fig. 
2a, where we show that using decreasing values of $\beta$ during training results in blurrier reconstructions. Conversely, a high $\beta$ value ensures that the AE does not overly rely on the deformations to achieve faithful reconstructions, but is instead forced to learn an accurate representation of the in-distribution data. After 200 epochs, the weights of these models were kept frozen while the deformation parameters were optimized for 100 epochs. At inference, we use a high value of $\beta=10$ to obtain the residual maps and a low value of $\beta=0.01$ to generate the folding maps. We demonstrate the need for lower $\beta$ values to produce improved folding maps in Fig. 2b, where it can be seen that using low values accentuates the anomalous regions in the brain.

Figure 2: a) During training, a high value of $\beta=10$ constrains the deformer, promoting the AE to learn to produce less blurry reconstructions. b) At inference, a lower value of $\beta=0.01$ is used to generate folding maps (here shown overlaid on the input brain) that enhance the identification of anomalies.

Dataset and Preprocessing. Data used in the preparation of this article were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu) [19]. We used skull-stripped T1-weighted MPRAGE images of both male and female patients that are registered to the MNI brain template [17]. Our training set comprised 760 healthy control (HC) samples, with an additional 95 HC samples utilized for validation purposes. For the supervised baseline training, an additional 430 AD samples were used. The test set included 215 HC samples and 200 samples with AD.

## 4 Experiments and Results

Atrophy Localization. We first validate the effectiveness of our method in identifying atrophy in sub-cortical brain regions affected by AD. To achieve this, we used the FSL FIRST tool [18] to segment these regions and compute mean anomaly scores for each, shown in Fig. 3. Our results indicate that AD patients exhibit notably higher anomaly scores in the hippocampus (left: 0.282 $\pm$ 0.495, right: 0.185 $\pm$ 0.382) and amygdala (left: 0.132 $\pm$ 0.207, right: 0.108 $\pm$ 0.208) compared to the hippocampus (left: 0.108 $\pm$ 0.193, right: 0.069 $\pm$ 0.125) and amygdala (left: 0.066 $\pm$ 0.115, right: 0.072 $\pm$ 0.111) for the healthy controls. These results are in line with the clinical expectation of these regions being significantly affected by AD pathology [6], indicating that MORPHADE is able to identify atrophy in clinically relevant brain regions.

Figure 3: Anomaly scores for subcortical brain regions for Alzheimer’s Disease (AD) and Healthy Control (HC) samples, showcasing markedly higher scores for AD samples in the hippocampus and amygdala, consistent with the clinical literature [6].

Atrophy Severity. We next evaluate the ability of our method to determine the severity of the localized anomalies by comparing our anomaly maps to medial temporal lobe atrophy (MTA) scores [20] that were assessed by a senior board-certified neuroradiologist. These scores range from 0 to 4 and are assigned based on the degree of structural changes observed in the choroid fissure, the temporal horn of the lateral ventricle, and the hippocampus. Fig. 4 shows a visual correlation between the degree of atrophy highlighted in the anomaly map in these key regions and the MTA scores, demonstrating the utility of our method in determining the severity of detected anomalies.
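As a concrete illustration of the folding-map construction from Section 3, the following minimal sketch computes $m_{foldings}=\max(0,-\det(J_{\boldsymbol{\Phi}}))$ and the anomaly map of Eq. 6 from a dense displacement field; the finite-difference Jacobian and all variable names are illustrative assumptions rather than the exact MORPHADE implementation.

```python
import numpy as np

def folding_map(displacement):
    """m_foldings = max(0, -det(J_Phi)) for a dense 3D displacement field.

    `displacement` has shape (3, D, H, W) and gives the offset of each voxel,
    so the full deformation is Phi(p) = p + displacement(p).  The Jacobian is
    estimated with central finite differences (an assumption of this sketch).
    """
    # Jacobian of the displacement, shape (3, 3, D, H, W):
    # jac_disp[i, j] = d(displacement_i) / d(axis_j)
    jac_disp = np.stack(
        [np.stack(np.gradient(displacement[i], axis=(0, 1, 2)), axis=0) for i in range(3)],
        axis=0,
    )
    # Jacobian of Phi = identity plus the displacement gradient
    eye = np.eye(3).reshape(3, 3, 1, 1, 1)
    jac_phi = eye + jac_disp
    # Per-voxel determinant: move the 3x3 axes to the end for np.linalg.det
    det = np.linalg.det(np.moveaxis(jac_phi, (0, 1), (-2, -1)))
    return np.maximum(0.0, -det)

def anomaly_map(residual, displacement):
    """Eq. 6: pixel-wise product of the residual map and the folding map."""
    return residual * folding_map(displacement)

# Usage with dummy data
rng = np.random.default_rng(0)
disp = 0.1 * rng.standard_normal((3, 16, 16, 16))
resid = rng.random((16, 16, 16))
amap = anomaly_map(resid, disp)
```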
Figure 4: Anomaly maps for AD patients alongside their corresponding medial temporal lobe atrophy (MTA) scores, demonstrating consistent alignment with AD-related structural changes and clinical MTA assessments.

Pathology Detection. In this section, we assess the capability of MORPHADE in detecting AD at the patient level. Table 1 shows the Area Under the Receiver Operating Characteristic curve (AUROC) scores obtained when comparing our method to various baselines for distinguishing subjects with AD from healthy control (HC) subjects. Our model achieves an AUROC of 0.80, surpassing even the 3D supervised baselines ResNet [14] and DenseNet [10], with AUROCs of 0.77 and 0.74, respectively. Furthermore, we obtain improved performance compared to methods proposed for unsupervised anomaly detection. These methods are only available in 2D, so were assessed slice-wise with the final anomaly scores obtained by averaging over the slices for each patient. f-AnoGAN [21] and Ganomaly [2] obtained AUROCs of 0.70 and 0.72, respectively. We also outperform Brainomaly [22] (AUROC 0.78), a method that is not strictly unsupervised since it requires pathological samples during training for improved performance. We also compare our results to a 3D adversarial AE to illustrate the benefit of utilizing the deformation fields with our method. Fig. 5 shows the reconstructions and residual maps obtained for both methods in representative AD and healthy control (HC) subjects. Our method produces more refined reconstructions compared to the adversarial AE, as shown by the improved MAE and SSIM scores. Moreover, the residual maps show fewer false positives for the healthy subject, while accentuating pathological areas for the AD subject. Using these improved residual maps alone for AD detection achieves a superior performance of AUROC 0.77 compared to 0.74 obtained by the adversarial AE. Finally, we demonstrate the utility of our dual-deformation approach: AD identification with the full method (AUROC 0.80) was superior to using only the residual maps from the constrained deformer (AUROC 0.77) or the folding maps from the unconstrained deformer (AUROC 0.79). Notably, the high performance of the folding maps underscores their effectiveness in detecting anomalies without relying on image differences between the input and reconstructions.

Table 1: AUROC scores for the classification of AD and Healthy Control (HC) patients. Best results are shown in bold.
Method | AD vs. HC $\uparrow$
---|---
ResNet (Supervised) [14] | 0.77
DenseNet (Supervised) [10] | 0.74
Brainomaly (Mixed Supervision) [22] | 0.78
f-AnoGAN (Unsupervised) [21] | 0.70
Ganomaly (Unsupervised) [2] | 0.72
Adversarial AE (Unsupervised) | 0.74
MORPHADE (ours) (Unsupervised) | $\mathbf{0.80}$
\- Only with residual maps ($\beta$=10) | 0.77
\- Only with folding maps ($\beta$=0.01) | 0.79

Figure 5: A comparison of the performance of MORPHADE ($\beta=10$) with adversarial AEs for a subject with AD (left) and a healthy control subject (right). The morphological adjustments facilitated by MORPHADE enhance reconstruction fidelity, yielding higher Structural Similarity Index (SSIM) values for our method’s morphed reconstructions compared to those of the adversarial AE. The residual maps also demonstrate fewer reconstruction errors for the healthy subject, while highlighting atrophy for the subject with AD.

## 5 Discussion and Conclusion

In this work, we introduced MORPHADE, a novel framework leveraging 3D deformable AEs for unsupervised analysis of Alzheimer’s Disease using T1-weighted brain MRI.
Our approach is unique in employing deformation fields within an unsupervised learning context to analyze, localize, and assess the severity of AD-related atrophy. Our results demonstrate that MORPHADE can effectively identify and localize atrophy in clinically relevant brain regions, such as the hippocampus and amygdala, which aligns with clinical expectations of AD pathology. Furthermore, the anomaly maps generated by our method show strong visual correspondence with MTA scores, underscoring the potential of our method in clinical assessments. Lastly, MORPHADE achieved an AUROC of 0.80 in detecting AD, outperforming several supervised and unsupervised baselines. This highlights the robustness of our method without requiring extensive labeled datasets, addressing a significant limitation in current diagnostic approaches. Future work could explore integrating MORPHADE’s deformation metrics with established AD biomarkers, such as tau protein accumulation and amyloid-beta levels, to enhance understanding of disease progression. Additionally, expanding our framework to other neurodegenerative diseases could further validate its versatility and clinical utility. In conclusion, MORPHADE offers a promising tool for the detection, localization, and severity assessment of AD-related atrophy, contributing valuable insights into the progression and diagnosis of neurodegenerative diseases. Our findings suggest that this approach could significantly enhance the non-invasive monitoring and understanding of AD, paving the way for improved patient outcomes.

## References * [1] 2024 Alzheimer’s Disease facts and figures. Alzheimer’s & Dementia 20(5), 3708–3821 (2024). https://doi.org/10.1002/alz.13809 * [2] Akcay, S., Atapour-Abarghouei, A., Breckon, T.: Ganomaly: Semi-supervised anomaly detection via adversarial training. In: Computer Vision – ACCV 2018. Lecture Notes in Computer Science, vol. 11363, pp. 622–637. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-20893-6_39 * [3] Ashburner, J., Hutton, C., Frackowiak, R., Johnsrude, I., Price, C., Friston, K.: Identifying global anatomical differences: Deformation-based morphometry. Human Brain Mapping 6(5-6), 348–357 (1998). https://doi.org/10.1002/(SICI)1097-0193(1998)6:5/6<348::AID-HBM4>3.0.CO;2-P * [4] Bercea, C.I., Rueckert, D., Schnabel, J.A.: What do AEs learn? Challenging Common Assumptions in Unsupervised Anomaly Detection. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 304–314. Springer (2023). https://doi.org/10.1007/978-3-031-43904-9_30 * [5] Bercea, C.I., Wiestler, B., Rueckert, D., Schnabel, J.A.: Generalizing unsupervised anomaly detection: Towards unbiased pathology screening. International Conference on Medical Imaging with Deep Learning (2023) * [6] Breijyeh, Z., Karaman, R.: Comprehensive review on Alzheimer’s Disease: Causes and treatment. Molecules 25(24), 5789 (2020). https://doi.org/10.3390/molecules25245789 * [7] Chen, X., Konukoglu, E.: Unsupervised detection of lesions in brain MRI using constrained adversarial auto-encoders. arXiv preprint arXiv:1806.04972 (2018) * [8] Chung, M., Worsley, K., Paus, T., Cherif, C., Collins, D., Giedd, J., Rapoport, J., Evans, A.: A unified statistical approach to deformation-based morphometry. NeuroImage 14(3), 595–606 (2001).
https://doi.org/10.1006/nimg.2001.0862 * [9] Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks (2014), https://arxiv.org/abs/1406.2661 * [10] Huang, G., Liu, Z., van der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. Proceedings of the IEEE conference on computer vision and pattern recognition (2017). https://doi.org/10.1109/CVPR.2017.243 * [11] Jack Jr., C.R., Bernstein, M.A., Fox, N.C., Thompson, P., Alexander, G., Harvey, D., Borowski, B., Britson, P.J., L. Whitwell, J., Ward, C., Dale, A.M., Felmlee, J.P., Gunter, J.L., Hill, D.L., Killiany, R., Schuff, N., Fox-Bosetti, S., Lin, C., Studholme, C., DeCarli, C.S., Krueger, G., Ward, H.A., Metzger, G.J., Scott, K.T., Mallozzi, R., Blezek, D., Levy, J., Debbins, J.P., Fleisher, A.S., Albert, M., Green, R., Bartzokis, G., Glover, G., Mugler, J., Weiner, M.W.: The Alzheimer’s Disease Neuroimaging Initiative (ADNI): MRI methods. Journal of Magnetic Resonance Imaging 27(4), 685–691 (2008). https://doi.org/10.1002/jmri.21049 * [12] Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. vol. 9906, pp. 694–711 (10 2016). https://doi.org/10.1007/978-3-319-46475-6_43 * [13] Koikkalainen, J., Lötjönen, J., Thurfjell, L., Rueckert, D., Waldemar, G., Soininen, H., Initiative, A.D.N., et al.: Multi-template tensor-based morphometry: application to analysis of Alzheimer’s Disease. NeuroImage 56(3), 1134–1144 (2011). https://doi.org/10.1016/j.neuroimage.2011.03.029 * [14] Korolev, S., Safiullin, A., Belyaev, M., Dodonova, Y.: Residual and plain convolutional neural networks for 3D brain MRI classification (2017). https://doi.org/10.1109/ISBI.2017.7950647 * [15] Li, H., Shi, X., Zhu, X., Wang, S., Zhang, Z.: FSNet: Dual interpretable graph convolutional network for Alzheimer’s Disease analysis. IEEE Transactions on Emerging Topics in Computational Intelligence 7(1), 15–25 (2023). https://doi.org/10.1109/TETCI.2022.3183679 * [16] Liu, M., Zhang, D., Shen, D., the Alzheimer’s Disease Neuroimaging Initiative: Hierarchical fusion of features and classifier decisions for Alzheimer’s Disease diagnosis. Human Brain Mapping 35(4), 1305–1319 (2014). https://doi.org/10.1002/hbm.22254 * [17] Mazziotta, J.C., Toga, A.W., Evans, A., Fox, P., Lancaster, J.: A probabilistic atlas of the human brain: Theory and rationale for its development: The International Consortium for Brain Mapping (ICBM). NeuroImage 2(2, Part A), 89–101 (1995). https://doi.org/10.1006/nimg.1995.1012 * [18] Patenaude, B., Smith, S., Kennedy, D., Jenkinson, M.: A Bayesian model of shape and appearance for subcortical brain segmentation. NeuroImage 56(3), 907–922 (2011). https://doi.org/10.1016/j.neuroimage.2011.02.046 * [19] Petersen, R., Aisen, P., Beckett, L., Donohue, M., Gamst, A., Harvey, D., Jack, C., Jagust, W., Shaw, L., Toga, A., Trojanowski, J., Weiner, M.: Alzheimer’s Disease Neuroimaging Initiative (ADNI). Neurology 74(3), 201–209 (2010). https://doi.org/10.1212/WNL.0b013e3181cb3e25 * [20] Scheltens, P., Launer, L., Barkhof, F., Weinstein, H., Van Gool, W.: Visual assessment of medial temporal lobe atrophy on Magnetic Resonance Imaging: Interobserver reliability. Journal of Neurology 242, 557–60 (1995). https://doi.org/10.1007/BF00868807 * [21] Schlegl, T., Seeböck, P., Waldstein, S.M., Langs, G., Schmidt-Erfurth, U.: f-AnoGAN: Fast unsupervised anomaly detection with generative adversarial networks. 
Medical Image Analysis 54, 30–44 (2019). https://doi.org/10.1016/j.media.2019.01.010 * [22] Siddiquee, M., Shah, J., Wu, T., Chong, C., Schwedt, T., Dumkrieger, G., Nikolova, S., Li, B.: Brainomaly: Unsupervised neurologic disease detection utilizing unannotated T1-weighted brain MR images. IEEE Winter Conf Appl Comput Vis pp. 7558–7567 (2024). https://doi.org/10.1109/wacv57701.2024.00740 * [23] Wen, J., Thibeau-Sutre, E., Diaz-Melo, M., Samper-González, J., Routier, A., Bottani, S., Dormont, D., Durrleman, S., Burgos, N., Colliot, O.: Convolutional neural networks for classification of Alzheimer’s Disease: Overview and reproducible evaluation. Medical Image Analysis 63, 101694 (2020). https://doi.org/10.1016/j.media.2020.101694 * [24] Zhang, Y., Teng, Q., Liu, Y., Liu, Y., He, X.: Diagnosis of Alzheimer’s Disease based on regional attention with sMRI gray matter slices. Journal of Neuroscience Methods 365, 109376 (2022). https://doi.org/10.1016/j.jneumeth.2021.109376 * [25] Zimmerer, D., Isensee, F., Petersen, J., Kohl, S., Maier-Hein, K.: Unsupervised anomaly localization using variational auto-encoders. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019. pp. 289–297. Springer International Publishing, Cham (2019). https://doi.org/10.1007/978-3-030-32251-9_32
# Moduli Spaces of One-Line Extensions of $(10_{3})$ Configurations

Moshe Cohen, Mathematics Department, State University of New York at New Paltz, New Paltz, New York, USA <EMAIL_ADDRESS> and Baian Liu, Department of Mathematics, The Ohio State University, Columbus, Ohio <EMAIL_ADDRESS>

###### Abstract. Two line arrangements in $\mathbb{CP}^{2}$ can have different topological properties even if they are combinatorially isomorphic. Results by Dan Cohen and Suciu and by Randell show that a moduli space that remains reducible after the quotient by complex conjugation is a necessary condition for two such arrangements to differ topologically. We present a method to produce many examples of combinatorial line arrangements with a reducible moduli space obtained from a set of examples with irreducible moduli spaces. In this paper, we determine the reducibility of the moduli spaces of a family of arrangements of 11 lines constructed by adding a line to one of the ten $(10_{3})$ configurations. Out of the four hundred ninety-five combinatorial line arrangements in this family, ninety-five have a reducible moduli space, seventy-six of which are still reducible after the quotient by complex conjugation.

###### Key words and phrases: $(n_{3})$ configuration, geometric matroid, extension by an element
###### 2020 Mathematics Subject Classification: 52C35, 32S22, 14N20, 14H37, 14Q05

## 1\. Introduction

Hyperplane arrangements are not only classically interesting [OS80] but have been studied more recently [Dim17a]. These objects have interesting combinatorial structures [BLVSWZ99, HOV15] and even have applications to database searches [SBA15]. Interesting questions concerning hyperplane arrangements include those regarding whether the combinatorics determine topological properties [Dim17b]. We focus on hyperplane arrangements in two dimensions: line arrangements. We define a combinatorial line arrangement $\mathcal{A}=(\mathcal{P},\mathcal{L})$ to be a finite set of points $\mathcal{P}$ and a finite set of lines $\mathcal{L}$, which are subsets of $\mathcal{P}$. We also require that the intersection of two distinct lines be one point or empty. Given a combinatorial line arrangement $\mathcal{A}$, we consider its moduli space $\mathcal{M}_{\mathcal{A}}$, the set of all of its geometric realizations in $\mathbb{CP}^{2}$. Elements of the same moduli space are combinatorially equivalent. The number of components of a moduli space gives us information about the topology of the arrangement. Within an irreducible moduli space, or within a single connected component, Randell’s Isotopy Theorem [Ran89] states that any two arrangements $\mathcal{A}_{1}$ and $\mathcal{A}_{2}$ in a 1-parameter family of combinatorially equivalent arrangements have diffeomorphic complements and that ($\mathbb{CP}^{2}$,$\mathcal{A}_{1}$) is homeomorphic to ($\mathbb{CP}^{2}$,$\mathcal{A}_{2}$). Furthermore, a result by Dan Cohen and Suciu [CS97, Theorem 3.9] states that complex conjugate arrangements have equivalent braid monodromies and so also have diffeomorphic complements. Thus we seek combinatorial line arrangements with a reducible moduli space $\mathcal{M}_{\mathcal{A}}$ and a reducible moduli space modulo complex conjugation $\mathcal{M}_{\mathcal{A}}^{\mathbb{C}}$. We construct numerous such examples of combinatorial line arrangements using one-line extensions. We say that a one-line extension of a combinatorial line arrangement $\mathcal{A}$ is $\mathcal{A}$ together with an additional line.
More commonly in the literature, matroids are extended by single elements; according to Oxley, this “can be fraught with difficulty” [Oxl19]. Our method is a specific type of single-element extension, in a dual sense. For more on matroids, see Oxley’s textbook [Oxl92]. Kocay [Koc16] uses a method of one-point extensions in order to find coordinatizations of arrangements. We apply one-line extensions to $(10_{3})$ configurations. More generally, an ($\bf n_{3}$) configuration is a combinatorial line arrangement with $n$ lines and $n$ triple points, which are points that lie in exactly three lines or, in other words, have multiplicity three. Such arrangements are not only classically interesting [Mar87, DvS03, Gro90, SW90] but have been studied more recently [Grü09, AAB14, EP13, Koc16, Koc21]. The reason we apply one-line extensions to $(10_{3})$ configurations is to find combinatorial line arrangements with 11 lines and a reducible moduli space. Combinatorial line arrangements with 10 or fewer lines and a reducible moduli space have been classified. In 1997, Fan showed that there are no combinatorial line arrangements with a reducible moduli space of up to and including 6 lines [Fan97]. Garber, Teicher, and Vishne showed that there are no combinatorial line arrangements with a reducible moduli space of up to 8 lines that are realizable in $\mathbb{RP}^{2}$ [GTV03]. Nazir and Yoshinaga verified that for combinatorial line arrangements of up to and including 9 lines, those with a reducible moduli space must contain one of three combinatorial line arrangements as a subarrangement: the MacLane arrangement, also known as the Möbius-Kantor arrangement or the unique ($8_{3}$) configuration [Kan81, Mö28, Rey82, Sch89, Mac36]; the Nazir-Yoshinaga arrangement [NY12]; and the Falk-Sturmfels arrangement [CS97, cited as unpublished]. We say an arrangement is exceptional if it contains one of these three arrangements as a subarrangement and unexceptional otherwise. Suppose a combinatorial line arrangement $\mathcal{A}$ contains a line $\ell$ that passes through at most two points of multiplicity three or greater. Form another combinatorial line arrangement $\mathcal{A}\backslash\ell$ by deleting $\ell$ from $\mathcal{A}$. Nazir and Yoshinaga show that if the moduli space of $\mathcal{A}\backslash\ell$ is irreducible, then the moduli space of $\mathcal{A}$ is irreducible [NY12, Lemma 3.2]. We say a combinatorial line arrangement is reductive if it contains a line that passes through at most two points of multiplicity three or greater and non-reductive otherwise. The first author with Amram, Teicher, and Ye completed the classification of irreducibility of the moduli space of non-reductive, unexceptional arrangements of 10 lines in [ATY13] and [ACTY13], producing eighteen examples. Further analysis by these authors together with Sun and Zarkh reduced the number of candidates from eighteen to fifteen [ACSTYZ15]. Motivating this work, two of the nine combinatorial line arrangements from [ACTY13] with a reducible moduli space are on the list of the eleven one-line extensions of ($9_{3}$) configurations. Amram, Gong, Teicher, and Xu classify the moduli spaces of non-reductive arrangements of 11 lines with at least one point of multiplicity at least five in [AGTX15], identifying thirty-eight arrangements that satisfy the necessary moduli space condition.
Further interesting examples of arrangements of 11 lines from the literature include a reductive, exceptional example by Artal Bartolo, Carmona Ruber, Cogolludo-Agustín, and Marco Buzunáriz and twenty-nine examples by Guerville-Ballé [GB22]. In the current work, we describe the one-line extension construction. While it does not always produce a reducible moduli space, it empirically does so for a substantial fraction of the arrangements constructed here. We use this to continue the classification of moduli spaces of non-reductive combinatorial line arrangements of 11 lines.

Main Results. Given a $(10_{3})$ configuration, we determine the number of possible ways to add an eleventh line through some number of the double points: at least three so that the arrangement is not reductive; and at most five because a line through six existing doubles must belong to an arrangement of at least 13 lines. Over the ten $(10_{3})$ configurations, this gives: a subtotal of three hundred thirty-six arrangements, fifteen of which appear twice, for three double points; a subtotal of one hundred eighty-eight arrangements, thirty-seven of which appear twice, for four double points; and a total of twenty-three arrangements for five double points. We then classify the moduli spaces of these arrangements. A summary of these results can be found in Tables 1, 2, and 3.

###### Theorem 1.1. Out of the three hundred twenty-one distinct arrangements obtained by adding an eleventh line through three double points in one of the ten $(10_{3})$ configurations, just one of them has a reducible moduli space modulo complex conjugation: $(10_{3})_{7}.ADO$ as discussed in Example 1.6.

###### Theorem 1.2. Out of the one hundred fifty-one distinct arrangements obtained by adding an eleventh line through four double points in one of the ten $(10_{3})$ configurations, seventy-four of them have a reducible moduli space modulo complex conjugation. These are listed in Table 6 at the end of the Introduction.

###### Theorem 1.3. Out of the twenty-three distinct arrangements obtained by adding an eleventh line through five double points in one of the ten $(10_{3})$ configurations, just one of them has a reducible moduli space modulo complex conjugation: $(10_{3})_{1}.AEIKO$ as discussed in Example 1.7.

$j$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Subtotal | Total
---|---|---|---|---|---|---|---|---|---|---|---|---
$\\#$ arrangements constructed from $(10_{3})_{j}$ | 4 | 17 | 42 | 11 | 76 | 30 | 50 | 50 | 39 | 17 | 336 | 321
$\\#$ with irreducible, non-empty moduli space | 3 | 12 | 34 | 0 | 73 | 25 | 43 | 48 | 36 | 17 | 291 | 282
$\\#$ with empty moduli space | 1 | 5 | 7 | 11 | 3 | 4 | 6 | 2 | 3 | 0 | 42 | 36
$\\#$ with reducible $\mathcal{M}_{\mathcal{A}}$ but irreducible $\mathcal{M}_{\mathcal{A}}^{\mathbb{C}}$ | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 | 2
$\\#$ with reducible $\mathcal{M}_{\mathcal{A}}^{\mathbb{C}}$ | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1
Table 1.
Classification of the moduli space of arrangements obtained by adding a line through _three_ double points of a $(10_{3})$ configuration

$j$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Subtotal | Total
---|---|---|---|---|---|---|---|---|---|---|---|---
$\\#$ arrangements constructed from $(10_{3})_{j}$ | 2 | 8 | 21 | 5 | 45 | 16 | 25 | 30 | 24 | 12 | 188 | 151
$\\#$ with irreducible, non-empty moduli space | 1 | 2 | 3 | 0 | 0 | 2 | 3 | 1 | 0 | 1 | 13 | 10
$\\#$ with empty moduli space | 0 | 4 | 8 | 5 | 13 | 8 | 8 | 9 | 11 | 1 | 67 | 50
$\\#$ with reducible $\mathcal{M}_{\mathcal{A}}$ but irreducible $\mathcal{M}_{\mathcal{A}}^{\mathbb{C}}$ | 0 | 1 | 4 | 0 | 5 | 2 | 1 | 4 | 2 | 3 | 22 | 17
$\\#$ with reducible $\mathcal{M}_{\mathcal{A}}^{\mathbb{C}}$ | 1 | 1 | 6 | 0 | 27 | 4 | 13 | 16 | 11 | 7 | 86 | 74
Table 2. Classification of the moduli space of arrangements obtained by adding a line through _four_ double points of a $(10_{3})$ configuration

$j$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Total
---|---|---|---|---|---|---|---|---|---|---|---
$\\#$ arrangements constructed from $(10_{3})_{j}$ | 1 | 1 | 2 | 1 | 5 | 2 | 2 | 3 | 3 | 3 | 23
$\\#$ with irreducible, non-empty moduli space | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1
$\\#$ with empty moduli space | 0 | 1 | 2 | 1 | 5 | 2 | 2 | 3 | 3 | 2 | 21
$\\#$ with reducible $\mathcal{M}_{\mathcal{A}}$ but irreducible $\mathcal{M}_{\mathcal{A}}^{\mathbb{C}}$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
$\\#$ with reducible $\mathcal{M}_{\mathcal{A}}^{\mathbb{C}}$ | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
Table 3. Classification of the moduli space of arrangements obtained by adding a line through _five_ double points of a $(10_{3})$ configuration

###### Remark 1.4. All of the $(10_{3})$ configurations have a 2-dimensional moduli space, except for $(10_{3})_{1}$ and $(10_{3})_{4}$. The configuration $(10_{3})_{1}$ has a 3-dimensional moduli space, and $(10_{3})_{4}$ has an empty moduli space. Introducing a new line through 4 double points gives 2 constraints on the moduli space. If neither of these constraints is already present, then the moduli space is 0-dimensional or empty, and a 0-dimensional moduli space is reducible as long as it contains more than one point. This is empirically why one-line extensions of $(10_{3})$ configurations through 4 double points yield a high proportion of reducible moduli spaces.

###### Remark 1.5. Our example $(10_{3})_{6}.AFIO$ appears as $C_{28}$ in a recent work by Guerville-Ballé [GB22]; it is the only such overlap.

###### Example 1.6. Consider the $(10_{3})$ configuration $(10_{3})_{7}$ with an eleventh line passing through the intersections $L_{1}\cap L_{5}$, $L_{2}\cap L_{4}$, and $L_{9}\cap L_{10}$. We call this arrangement $(10_{3})_{7}.ADO$. Its arrangement table is given in Table 4, and two of its geometric realizations from the two different irreducible components of its moduli space are given in Figure 1.

$L_{1}$ | $L_{2}$ | $L_{3}$ | $L_{4}$ | $L_{5}$ | $L_{6}$ | $L_{7}$ | $L_{8}$ | $L_{9}$ | $L_{10}$ | $L_{11}$
---|---|---|---|---|---|---|---|---|---|---
1 | 1 | 1 | 2 | 4 | 6 | 5 | 3 | 7 | 2 | $A$
2 | 4 | 6 | 8 | 8 | 9 | 7 | 5 | 3 | 4 | $D$
3 | 5 | 7 | 9 | 0 | 0 | 8 | 9 | 0 | 6 | $O$
$A$ | $D$ | | $D$ | $A$ | | | | $O$ | $O$ |
Table 4. The arrangement table for $(10_{3})_{7}.ADO$, which has a Galois conjugate moduli space

Figure 1.
Arrangement $(10_{3})_{7}.ADO$ with geometric realizations from the two different irreducible components of its Galois conjugate moduli space. The realization on the left corresponds to the values of $a=3$ and $c=\frac{3}{4}(1+\sqrt{5})$, and the realization on the right corresponds to the values $a=3$ and $c=\frac{3}{4}(1-\sqrt{5})$ The geometric realizations of $(10_{3})_{7}.ADO$ in $\mathbb{CP}^{2}$ with coordinates $[x:y:z]$, up to a projective transformation, can be described by the equation (1) $\displaystyle\begin{split}(y)(\sqrt{3}x+y)(\sqrt{3}x-y)(y-z)(\sqrt{3}(b+c)x+(b+c)y+(2\sqrt{3}bc-c)z)&\\\ (\sqrt{3}bx- by+(a+2b+\sqrt{3}ab)z)(\sqrt{3}(b+c)x-(c-b)y+2\sqrt{3}bcz)(-\sqrt{3}bx-(a+b)y+\sqrt{3}abz)&\\\ (\sqrt{3}cx-(a+c)y-\sqrt{3}acz)(z)(\sqrt{3}(b+c)x-(2\sqrt{3}bc-b-3c)y+(2\sqrt{3}bc-2c)z)&=0,\end{split}$ where $a$ is a complex number with a finite number of exceptions, $b=-\tfrac{4c^{2}+ac}{a+2c-2\sqrt{3}c^{2}}$, and $c^{\pm}=\tfrac{a}{4}(1\pm\sqrt{5})$. This shows that we have two irreducible components in the moduli space: one corresponding to $\tfrac{a}{4}(1+\sqrt{5})$ and another corresponding to $\tfrac{a}{4}(1-\sqrt{5})$. Since we have one free parameter, each of the irreducible components is one-dimensional. ###### Example 1.7. The arrangement $\mathcal{A}=(10_{3})_{1}.AEIKO$ has arrangement table given in Table 5. Its moduli space can be parameterized by $a$ and $b$ satisfying $a^{\pm}=(\frac{3\pm\sqrt{5}}{2})b$, where $b$ is a complex number with a finite number of exceptions.We also know that $\operatorname{Aut}((10_{3})_{1}.AEIKO)\cong F_{20}\cong\langle(12345),(1243)\rangle$, the Frobenius group of order 20. $L_{1}$ | $L_{2}$ | $L_{3}$ | $L_{4}$ | $L_{5}$ | $L_{6}$ | $L_{7}$ | $L_{8}$ | $L_{9}$ | $L_{10}$ | $L_{11}$ ---|---|---|---|---|---|---|---|---|---|--- 1 | 1 | 1 | 8 | 2 | 3 | 2 | 3 | 4 | 5 | $A$ 2 | 4 | 6 | 9 | 4 | 5 | 6 | 7 | 6 | 7 | $E$ 3 | 5 | 7 | 0 | 8 | 8 | 9 | 9 | 0 | 0 | $I$ $A$ | $E$ | $I$ | $A$ | $K$ | $I$ | $E$ | $O$ | $O$ | $K$ | $K$ | | | | | | | | | | $O$ Table 5. The arrangement table for $(10_{3})_{1}.AEIKO$, whose moduli space is two Galois conjugate points Organization. Section 2 describes in more detail the objects with which we work, including the moduli space of a combinatorial line arrangement. Section 3 details the one-line extension construction. Section 4 applies the one-line extension construction to all ten $(10_{3})$ configurations. We calculate that, up to isomorphism, there are three hundred twenty-one one-line extensions of $(10_{3})$ configurations through three double points, one hundred fifty-one one-line extensions of $(10_{3})$ configurations through four double points, and twenty-three one-line extensions of $(10_{3})$ configurations through five double points. Section 5 discusses how the moduli space is calculated and the algebraic techniques to determine its irreducibility. We conclude that there are seventy-six one-line extensions of $(10_{3})$ configurations with a reducible moduli space up to complex conjugation. Section 6 offers minor corrections to the work by the first author with Amram, Teicher, and Ye [ACTY13]. Acknowledgements. The authors would like to thank the Undergraduate Research Summer Institute at Vassar College, an in-house research experience for undergraduates, for their funding of a portion of this research while the second author was a rising senior at Vassar during the summer of 2017. 
Arrangement $\mathcal{A}$ | $\lvert\mathcal{M}_{\mathcal{A}}\rvert$ | $\lvert\mathcal{M}_{\mathcal{A}}^{\mathbb{C}}\rvert$ | $\operatorname{Aut}(\mathcal{A})$ ---|---|---|--- 1.AEIK | $\infty^{1}$ | $\infty^{1}$ | ${\mathbb{Z}}/4{\mathbb{Z}}$ 2.AENO $\cong$ 7.ADLO | 2 | 2 | 3.BDHL $\cong$ 9.BDIM | 3 | 2 | 3.BDIK $\cong$ 6.AENO | 3 | 2 | 3.BDIL | 3 | 2 | 3.BDKL $\cong$ 9.BDMN | 3 | 3 | 3.BFJM | 3 | 2 | 3.DHLO | 3 | 2 | 5.AEIK | 5 | 3 | 5.AEJO | 3 | 2 | 5.AEKN | 5 | 3 | 5.AENO | 4 | 3 | 5.AFIK | 4 | 2 | 5.AFIO | 4 | 3 | ${\mathbb{Z}}/2{\mathbb{Z}}$ 5.AFKL | 4 | 3 | 5.AFLO | 4 | 4 | 5.BDHL $\cong$ 5.BFIK | 4 | 2 | 5.BDHN $\cong$ 7.AIJL | 3 | 2 | 5.BDIK $\cong$ 5.BDKL $\cong$ 10.AEIJ | 6 | 3 | 5.BDJL $\cong$ 5.BEIK | 3 | 2 | 5.BDKN $\cong$ 10.ADKM | 5 | 3 | 5.BEGJ | 5 | 3 | 5.BEGK | 5 | 4 | 5.BEGN | 4 | 2 | 5.BEKN | 4 | 2 | 5.BFGK | 4 | 3 | 5.BFKL | 3 | 2 | 5.BGJL | 3 | 2 | 5.BGKL | 5 | 3 | 5.BGKN | 4 | 3 | 5.DHLO | 3 | 3 | 5.DHMN $\cong$ 7.AILO | 3 | 3 | 5.DHNO $\cong$ 7.AFIL | 2 | 2 | 5.DIKM | 4 | 2 | 5.DKMN | 2 | 2 | ${\mathbb{Z}}/2{\mathbb{Z}}$ 1 Two 1-dimensional components | | | Arrangement $\mathcal{A}$ | $\lvert\mathcal{M}_{\mathcal{A}}\rvert$ | $\lvert\mathcal{M}_{\mathcal{A}}^{\mathbb{C}}\rvert$ | $\operatorname{Aut}(\mathcal{A})$ ---|---|---|--- 6.AEHO | 4 | 2 | ${\mathbb{Z}}/2{\mathbb{Z}}$ 6.AFIJ | 3 | 2 | 6.AFIO | 4 | 2 | ${\mathbb{Z}}/2{\mathbb{Z}}$ 7.ADIL | 3 | 2 | 7.ADIM $\cong$ 7.AFIM | 2 | 2 | 7.AEIJ | 3 | 2 | 7.AEIM | 2 | 2 | ${\mathbb{Z}}/2{\mathbb{Z}}$ 7.AEIO | 2 | 2 | 7.AEJM | 2 | 2 | ${\mathbb{Z}}/2{\mathbb{Z}}$ 7.AEJN | 2 | 2 | 7.AIJM | 4 | 2 | 7.BIJM | 3 | 2 | 8.ADIM | 4 | 3 | 8.AEGM | 4 | 2 | 8.AEIM | 7 | 4 | 8.AFIJ $\cong$ 9.ADIM | 5 | 3 | 8.AFIL | 5 | 3 | 8.AFJO | 5 | 3 | 8.AIJM | 5 | 3 | 8.AILM | 5 | 4 | 8.BDIM | 4 | 2 | 8.BFIJ | 5 | 3 | 8.BFIK $\cong$ 9.ADLN | 5 | 4 | 8.BFJO | 4 | 2 | 8.BFKO | 3 | 2 | 8.BIJM | 4 | 2 | 8.BIKM | 6 | 4 | 8.CFIJ $\cong$ 9.AEIM | 4 | 3 | ${\mathbb{Z}}/2{\mathbb{Z}}$ 9.ADIL | 5 | 3 | 9.ADMN | 5 | 3 | 9.AEGM | 3 | 2 | 9.BDHM | 3 | 2 | 9.BDHO | 3 | 2 | 9.BGMN | 3 | 2 | 10.ADHM | 4 | 3 | 10.ADHN | 4 | 3 | ${\mathbb{Z}}/2{\mathbb{Z}}$ 10.ADIN | 5 | 4 | 10.AEGM | 6 | 4 | 10.AEGN | 2 | 2 | ${\mathbb{Z}}/2{\mathbb{Z}}$ Table 6. The list of 74 arrangements with reducible moduli space modulo complex conjugation obtained from one-line extensions of $(10_{3})$ arrangements appearing in Theorem 1.2, whose automorphism groups are trivial unless otherwise noted ## 2\. Background We can describe a combinatorial line arrangement as a collection of “points” and “lines” along with incidence relations between them. The following definition has been adapted from the definition of combinatorial configuration in Grünbaum’s textbook _Configurations of Points and Lines_ [Grü09]. ###### Definition 2.1. A combinatorial line arrangement $\mathcal{A}=(\mathcal{P},\mathcal{L})$ consists of a finite set of points $\mathcal{P}$ and a finite set of lines $\mathcal{L}$, which are subsets of $\mathcal{P}$. We also require that the intersection of two lines is at most one point. Our convention in constructing non-reductive arrangements also requires that each point in $\mathcal{P}$ appears in at least three elements of $\mathcal{L}$. If a point $P\in\mathcal{P}$ is on the line $L\in\mathcal{L}$, we say that $P$ is incident to $L$ or that $L$ is incident to $P$. 
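Concretely, a combinatorial line arrangement in the sense of Definition 2.1 can be encoded as a set of point labels together with a collection of point sets, one per line. The following is a minimal Python sketch with hypothetical toy data; the function name and data layout are our own illustrative choices, and the additional convention that every point lies on at least three lines is not enforced here.

```python
from itertools import combinations

def is_combinatorial_line_arrangement(points, lines):
    """Definition 2.1: every line is a subset of the point set, and any two
    distinct lines meet in at most one point.  (The convention that each
    point appears in at least three lines is not checked in this sketch.)"""
    if not all(line <= points for line in lines):
        return False
    return all(len(l1 & l2) <= 1 for l1, l2 in combinations(lines, 2))

# Toy data with hypothetical labels, just to exercise the check.
points = {1, 2, 3, 4, 5, 6}
lines = [frozenset(s) for s in [{1, 2, 3}, {1, 4, 5}, {2, 4, 6}, {3, 5, 6}]]
print(is_combinatorial_line_arrangement(points, lines))  # True
```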
We refer to the intersection of exactly two lines as a double point, and we define the set of double points of $\mathcal{A}$ to be $\operatorname{Doubles}(\mathcal{A})=\left\\{\\{L,L^{\prime}\\}\in\binom{\mathcal{L}}{2}\mid L\cap L^{\prime}=\emptyset\right\\}.$ The reason why these 2-tuples are called double points is that we will be attempting to realize these arrangements in projective space, where all pairs of lines intersect exactly once. Using our convention that each point in $\mathcal{P}$ appears in at least three elements of $\mathcal{L}$, we see that if $L\cap L^{\prime}=\emptyset$, then the intersection of $L$ and $L^{\prime}$, in the realization in projective space, is not a point that appears in at least three elements of $\mathcal{L}$, so this intersection shall be named a double point since $L$ and $L^{\prime}$ are the only two lines passing through this intersection. On the same note, we define points of higher multiplicity as the elements of $\mathcal{P}$. A triple point is an element in $\mathcal{P}$ that appears in exactly three elements of $\mathcal{L}$. These arrangements can be presented in an arrangement table, in which the headers are the names of the lines and the columns contain the names of the points incident to each line.

###### Example 2.2. The well-known Fano arrangement is a combinatorial line arrangement whose arrangement table can be found in Table 7.

$L_{1}$ | $L_{2}$ | $L_{3}$ | $L_{4}$ | $L_{5}$ | $L_{6}$ | $L_{7}$
---|---|---|---|---|---|---
$P_{1}$ | $P_{1}$ | $P_{1}$ | $P_{2}$ | $P_{2}$ | $P_{3}$ | $P_{3}$
$P_{2}$ | $P_{4}$ | $P_{6}$ | $P_{4}$ | $P_{5}$ | $P_{4}$ | $P_{5}$
$P_{3}$ | $P_{5}$ | $P_{7}$ | $P_{6}$ | $P_{7}$ | $P_{7}$ | $P_{6}$
Table 7. An arrangement table for the Fano arrangement

We see that the arrangement table presentation of the Fano arrangement corresponds to the usual geometric presentation given in Figure 2.

Figure 2. The Fano arrangement with the middle circle as the combinatorial “line” $L_{4}$

### 2.1. Moduli space of a combinatorial line arrangement

With this in mind, we are able to manipulate these combinatorial line arrangements by adding or removing a line.

###### Definition 2.3. Let $\mathcal{A}=(\mathcal{P},\mathcal{L})$ be a combinatorial line arrangement with $\mathcal{L}=\\{L_{1},L_{2},\dots,L_{n}\\}$, and let $L\subseteq\operatorname{Doubles}(\mathcal{A})$ be a subset of the set of double points. We define the one-line extension of $\mathcal{A}$ by the line $L$, written $\mathcal{A}\cup L$, to be the line arrangement $(\mathcal{P}\cup L,\mathcal{L}^{\prime})$, where $\mathcal{L}^{\prime}=\\{L_{1}^{\prime},L_{2}^{\prime},\dots,L_{n}^{\prime},L\\}$ and $L_{i}^{\prime}=L_{i}\cup\\{D\\}$ if there exists $D\in L$ such that $L_{i}\in D$ and $L_{i}^{\prime}=L_{i}$ otherwise. Intuitively, this is adding the line $L$ through its specified points and attaching double points in $\mathcal{A}$ that have now turned into triple points to the appropriate lines. A similar construction involves removing a line from an arrangement.

###### Definition 2.4. Let $\mathcal{A}=(\mathcal{P},\mathcal{L})$ be a combinatorial line arrangement and $L\in\mathcal{L}$. We define $\mathcal{A}$ minus $L$, written $\mathcal{A}\setminus L$, to be $\mathcal{A}\setminus L=(\mathcal{P},\mathcal{L}\setminus\\{L\\})$ with the convention that double points are omitted.
Intuitively, this is removing the line $L$ from $\mathcal{A}$ while removing triple points that are now double points from $\mathcal{P}$ and the corresponding lines. The following notion of an isomorphism for combinatorial line arrangements is used to identify those we deem to have the same combinatorial information.

###### Definition 2.5. Let $\mathcal{A}=(\mathcal{P},\mathcal{L})$ and $\mathcal{A}^{\prime}=(\mathcal{P}^{\prime},\mathcal{L}^{\prime})$ be two combinatorial line arrangements. We say that $\mathcal{A}$ and $\mathcal{A}^{\prime}$ are isomorphic, denoted by $\mathcal{A}\cong\mathcal{A}^{\prime}$, if there exists a function $\varphi:\mathcal{P}\cup\mathcal{L}\to\mathcal{P}^{\prime}\cup\mathcal{L}^{\prime}$ such that $\varphi|_{\mathcal{P}}:\mathcal{P}\to\mathcal{P}^{\prime}$ and $\varphi|_{\mathcal{L}}:\mathcal{L}\to\mathcal{L}^{\prime}$ are both bijections. Additionally, for all $L\in\mathcal{L}$ and $P,Q\in L$, we have that $\varphi(P),\varphi(Q)\in\varphi(L)$. Also, if $L,L^{\prime}\in\mathcal{L}$ are such that $L\cap L^{\prime}=\\{P\\}$ for some point $P\in\mathcal{P}$, then $\varphi(L)\cap\varphi(L^{\prime})=\\{\varphi(P)\\}$. Such a function $\varphi$ is called an isomorphism. Also denote by $\operatorname{Aut}(\mathcal{A})=\\{\varphi:\mathcal{A}\to\mathcal{A}\mid\text{$\varphi$ is an isomorphism}\\}$ the automorphism group of $\mathcal{A}$.

###### Example 2.6. It is known that the automorphism group for the Fano arrangement is $\operatorname{GL}(3,\mathbb{F}_{2})$.

The main tool we use to analyze combinatorial line arrangements is the moduli space. The moduli space $\mathcal{M}_{\mathcal{A}}$ is the space of all geometric realizations of the combinatorial line arrangement $\mathcal{A}$ in $\mathbb{CP}^{2}$. In order to define the moduli space, we need to define a geometric realization for a combinatorial line arrangement.

###### Definition 2.7. Let $\mathcal{A}$ be a combinatorial line arrangement with lines $L_{1},L_{2},\ldots,L_{n}$. A geometric realization of $\mathcal{A}$ in $\mathbb{CP}^{2}$ is a collection of lines $\ell_{1},\ell_{2},\ldots,\ell_{n}$ in $\mathbb{CP}^{2}$ such that for any subset $S\subseteq\\{1,2,\dots,n\\}$ with $\lvert S\rvert\geq 3$, we have $\bigcap_{i\in S}\ell_{i}$ is nonempty if and only if $\bigcap_{i\in S}L_{i}$ is nonempty. We say that $\mathcal{A}$ is geometrically realizable in $\mathbb{CP}^{2}$ if there exists a geometric realization of $\mathcal{A}$ in $\mathbb{CP}^{2}$.

Consider the complex projective line $\ell$ with equation $ax+by+cz=0$. Then the complex projective point dual to this line in $\mathbb{CP}^{2}$ is $\ell^{*}=[a:b:c]\in(\mathbb{CP}^{2})^{*}$. This is a convenient way to represent lines as points for the sake of notation. Now we are ready to introduce the moduli space.

###### Definition 2.8. The ordered moduli space of a combinatorial line arrangement $\mathcal{A}$ is defined to be $\mathcal{M}_{\mathcal{A}}=\\{(\ell_{1}^{*},\ell_{2}^{*},\ldots,\ell_{n}^{*})\in((\mathbb{CP}^{2})^{*})^{n}\mid\text{$(\ell_{1},\ell_{2},\ldots,\ell_{n})$ is a geometric realization of $\mathcal{A}$ in $\mathbb{CP}^{2}$}\\}/\operatorname{PGL}(3,\mathbb{C}),$ which is the set of all geometric realizations of $\mathcal{A}$ in $\mathbb{CP}^{2}$, up to a projective transformation. We refer to this throughout as simply the moduli space of an arrangement, noting that there exists a related notion of the unordered moduli space, obtained by a quotient by the automorphism group.
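In terms of the dual coordinates $\ell^{*}=[a:b:c]$ just introduced, the defining conditions of a geometric realization can be written down explicitly: three distinct lines pass through a common point of $\mathbb{CP}^{2}$ exactly when the determinant of their $3\times 3$ coefficient matrix vanishes. The following is a minimal SymPy sketch of this condition; the function name and the example lines are our own illustrative choices rather than the authors' computational setup.

```python
import sympy as sp

def concurrency_condition(line1, line2, line3):
    """Three distinct projective lines a*x + b*y + c*z = 0, given by their
    dual coordinate triples (a, b, c), pass through a common point of CP^2
    if and only if this determinant vanishes."""
    return sp.Matrix([line1, line2, line3]).det()

# Example: require L1, L2 and an undetermined line L to be concurrent.
a, b, c = sp.symbols('a b c')
L1 = (1, 0, 0)          # the line x = 0
L2 = (0, 1, 0)          # the line y = 0
L = (a, b, c)           # a line with symbolic coefficients
eq = sp.expand(concurrency_condition(L1, L2, L))
print(eq)               # prints c, so the prescribed triple point forces c = 0

# A realization of a combinatorial arrangement is cut out by imposing such
# equations for every prescribed point of multiplicity at least three (and the
# non-vanishing of the determinants for triples that must not be concurrent).
```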
The moduli space is endowed with the topology induced by the Zariski topology on $((\mathbb{CP}^{2})^{*})^{n}$. We will also sometimes endow the moduli space with the finer Euclidean topology. An important quotient of the moduli space is $\mathcal{M}_{\mathcal{A}}^{\mathbb{C}}$, which is defined to be the quotient of $\mathcal{M}_{\mathcal{A}}$ under complex conjugation.

###### Example 2.9. The Fano arrangement is not geometrically realizable in $\mathbb{CP}^{2}$; its moduli space is empty. These two notions, geometric non-realizability and an empty moduli space, are equivalent in general.

Given a combinatorial line arrangement $\mathcal{A}$, it is a necessary condition for its moduli space $\mathcal{M}_{\mathcal{A}}$ to be reducible for two geometric realizations of $\mathcal{A}$ to be different topologically. If $\mathcal{M}_{\mathcal{A}}$ is irreducible, then we can apply Randell’s Isotopy Theorem to show that all of the geometric realizations of $\mathcal{A}$ are the same topologically.

###### Theorem 2.10. (Randell’s Isotopy Theorem [Ran89]) Two combinatorially isomorphic arrangements $\mathcal{R}_{1}$ and $\mathcal{R}_{2}$ connected by a 1-parameter family of isomorphic arrangements have complements in $\mathbb{CP}^{2}$ that are diffeomorphic. Furthermore, $(\mathbb{CP}^{2},\mathcal{R}_{1})$ and $(\mathbb{CP}^{2},\mathcal{R}_{2})$ are of the same topological type.

We also consider the moduli space modulo complex conjugation due to the following result by Dan Cohen and Suciu.

###### Theorem 2.11. (Cohen-Suciu [CS97, Theorem 3.9]) The braid monodromies of complex conjugated curves are equivalent.

For the purpose of classifying irreducibility of moduli spaces, it turns out that we only need to analyze the moduli spaces of non-reductive combinatorial line arrangements. This is a consequence of the following result, which shows that we can remove a line incident to at most two points of higher multiplicity and use the irreducibility of the moduli space of the smaller arrangement to deduce the irreducibility of the moduli space of the original, larger arrangement.

###### Theorem 2.12. (Nazir-Yoshinaga [NY12, reframed Lemma 3.2]) Let $\mathcal{A}=(\mathcal{P},\\{L_{1},L_{2},\ldots,L_{n}\\})$ be a combinatorial line arrangement with $L_{n}$ being incident to at most two points of higher multiplicity, and let $\mathcal{A}^{\prime}=\mathcal{A}\setminus L_{n}$. Then $\mathcal{M}_{\mathcal{A}}$ is irreducible if $\mathcal{M}_{\mathcal{A}^{\prime}}$ is irreducible.

We use the one-line extension construction, which we describe in the next section, to provide examples of combinatorial line arrangements with a reducible moduli space. These come from known arrangements which we discuss in the next subsection.

### 2.2. $(n_{3})$ configurations

In anticipation of our later construction, we introduce these examples from the literature.

###### Definition 2.13. (see for example [Grü09]) An $(n_{k})$ configuration is a combinatorial line arrangement with $n$ lines and $n$ points where each line is incident to $k$ points and each point is incident to $k$ lines.

We are specifically interested in $(n_{3})$ configurations. Table 8 provides a non-exhaustive enumeration of $(n_{3})$ configurations as found in Grünbaum’s textbook [Grü09, Theorem 2.2.1], in which he cites others [Mar87, DvS03, Gro90]. It is worth noting that since $(n_{3})$ configurations can be seen as $3$-regular, $3$-uniform hypergraphs of girth $\geq 3$, any finite automorphism group is achievable with an $(n_{3})$ configuration [Vog86].
$n$ | $\leq 6$ | 7 | 8 | 9 | 10 | 11 | 12 ---|---|---|---|---|---|---|--- number of $(n_{3})$ configurations | 0 | 1 | 1 | 3 | 10 | 31 | 229 Table 8. Number of $(n_{3})$ configurations as found in [Grü09, Theorem 2.2.1] The construction in [ACTY13] begins with one of the three $(9_{3})$ configurations and considers the set of all double points. Then all subsets of cardinality three are considered, and this set is quotiented out by the automorphism group of the $(9_{3})$ configuration. This gives the possible combinatorial line arrangements of 10 lines obtained by adding a line passing through three double points. Table 20 at the end of Section 6 gives the eleven arrangements that result from this construction. Following the naming convention in [ACTY13], the numbers correspond to points in the original $(9_{3})$ configuration, and the capital letters correspond to double points in the original $(9_{3})$ configuration that have become triple points after the tenth line has been added. ###### Remark 2.14. We note here that in [ACTY13] there are fourteen arrangements given, but in Section 6 we show that three of these are redundant due to arrangement $(9_{3})_{1}$ having a larger automorphism group than acknowledged in that work. Table 9 lists all ten $(10_{3})$ arrangements. For ease of naming new combinatorial line arrangements, we name each of the 15 double points with the upper case letters $A$ through $O$ for each of the $(10_{3})$ configurations $\mathcal{A}$. These are given explicitly in Table 10. $\text{Aut}((10_{3})_{1})\cong S_{5}$. $L_{1}$ $L_{2}$ $L_{3}$ $L_{4}$ $L_{5}$ $L_{6}$ $L_{7}$ $L_{8}$ $L_{9}$ $L_{10}$ 1 1 1 8 2 3 2 3 4 5 2 4 6 9 4 5 6 7 6 7 3 5 7 0 8 8 9 9 0 0 | $\text{Aut}((10_{3})_{2})\cong D_{12}$. $L_{1}$ $L_{2}$ $L_{3}$ $L_{4}$ $L_{5}$ $L_{6}$ $L_{7}$ $L_{8}$ $L_{9}$ $L_{10}$ 1 1 1 8 2 3 2 3 4 5 2 4 6 9 4 7 6 5 6 7 3 5 7 0 8 8 9 9 0 0 ---|--- $\text{Aut}((10_{3})_{3})\cong{\mathbb{Z}}/2{\mathbb{Z}}\times{\mathbb{Z}}/2{\mathbb{Z}}$. $L_{1}$ $L_{2}$ $L_{3}$ $L_{4}$ $L_{5}$ $L_{6}$ $L_{7}$ $L_{8}$ $L_{9}$ $L_{10}$ 1 1 1 8 2 3 2 3 4 5 2 4 6 9 4 6 7 5 6 7 3 5 7 0 8 8 9 9 0 0 | $\text{Aut}((10_{3})_{4})\cong S_{4}$. $L_{1}$ $L_{2}$ $L_{3}$ $L_{4}$ $L_{5}$ $L_{6}$ $L_{7}$ $L_{8}$ $L_{9}$ $L_{10}$ 1 1 1 8 2 3 2 3 4 5 2 4 6 9 4 6 5 7 6 7 3 5 7 0 8 8 9 9 0 0 $\text{Aut}((10_{3})_{5})\cong{\mathbb{Z}}/2{\mathbb{Z}}$. $L_{1}$ $L_{2}$ $L_{3}$ $L_{4}$ $L_{5}$ $L_{6}$ $L_{7}$ $L_{8}$ $L_{9}$ $L_{10}$ 1 1 1 8 2 3 2 4 3 5 2 4 6 9 4 7 5 6 6 7 3 5 7 0 8 8 9 9 0 0 | $\text{Aut}((10_{3})_{6})\cong S_{3}$. $L_{1}$ $L_{2}$ $L_{3}$ $L_{4}$ $L_{5}$ $L_{6}$ $L_{7}$ $L_{8}$ $L_{9}$ $L_{10}$ 1 1 1 8 2 3 2 5 3 4 2 4 6 9 4 7 6 7 5 6 3 5 7 0 8 8 9 9 0 0 $\text{Aut}((10_{3})_{7})\cong{\mathbb{Z}}/3{\mathbb{Z}}$. $L_{1}$ $L_{2}$ $L_{3}$ $L_{4}$ $L_{5}$ $L_{6}$ $L_{7}$ $L_{8}$ $L_{9}$ $L_{10}$ 1 1 1 2 4 6 5 3 7 2 2 4 6 8 8 9 7 5 3 4 3 5 7 9 0 0 8 9 0 6 | $\text{Aut}((10_{3})_{8})\cong{\mathbb{Z}}/3{\mathbb{Z}}$. $L_{1}$ $L_{2}$ $L_{3}$ $L_{4}$ $L_{5}$ $L_{6}$ $L_{7}$ $L_{8}$ $L_{9}$ $L_{10}$ 1 1 1 3 5 7 2 6 4 2 2 4 6 8 8 9 7 5 3 4 3 5 7 9 0 0 8 9 0 6 $\text{Aut}((10_{3})_{9})\cong{\mathbb{Z}}/4{\mathbb{Z}}$. $L_{1}$ $L_{2}$ $L_{3}$ $L_{4}$ $L_{5}$ $L_{6}$ $L_{7}$ $L_{8}$ $L_{9}$ $L_{10}$ 1 1 1 2 4 6 5 3 2 3 2 4 6 8 8 9 7 5 7 4 3 5 7 9 0 0 8 9 0 6 | $\text{Aut}((10_{3})_{10})\cong{\mathbb{Z}}/10{\mathbb{Z}}$. $L_{1}$ $L_{2}$ $L_{3}$ $L_{4}$ $L_{5}$ $L_{6}$ $L_{7}$ $L_{8}$ $L_{9}$ $L_{10}$ 1 1 1 3 2 7 5 6 4 2 2 4 6 8 8 9 7 5 3 4 3 5 7 9 0 0 8 9 0 6 Table 9. 
The ten $(10_{3})$ configurations with their arrangement tables and automorphism groups Double | $(10_{3})_{1}$ | $(10_{3})_{2}$ | $(10_{3})_{3}$ | $(10_{3})_{4}$ | $(10_{3})_{5}$ | $(10_{3})_{6}$ | $(10_{3})_{7}$ | $(10_{3})_{8}$ | $(10_{3})_{9}$ | $(10_{3})_{10}$ ---|---|---|---|---|---|---|---|---|---|--- A | $L_{1}\cap L_{4}$ | $L_{1}\cap L_{4}$ | $L_{1}\cap L_{4}$ | $L_{1}\cap L_{4}$ | $L_{1}\cap L_{4}$ | $L_{1}\cap L_{4}$ | $L_{1}\cap L_{5}$ | $L_{1}\cap L_{5}$ | $L_{1}\cap L_{5}$ | $L_{1}\cap L_{6}$ B | $L_{1}\cap L_{9}$ | $L_{1}\cap L_{9}$ | $L_{1}\cap L_{9}$ | $L_{1}\cap L_{9}$ | $L_{1}\cap L_{8}$ | $L_{1}\cap L_{8}$ | $L_{1}\cap L_{6}$ | $L_{1}\cap L_{6}$ | $L_{1}\cap L_{6}$ | $L_{1}\cap L_{7}$ C | $L_{1}\cap L_{10}$ | $L_{1}\cap L_{10}$ | $L_{1}\cap L_{10}$ | $L_{1}\cap L_{10}$ | $L_{1}\cap L_{10}$ | $L_{1}\cap L_{10}$ | $L_{1}\cap L_{7}$ | $L_{1}\cap L_{8}$ | $L_{1}\cap L_{7}$ | $L_{1}\cap L_{8}$ D | $L_{2}\cap L_{4}$ | $L_{2}\cap L_{4}$ | $L_{2}\cap L_{4}$ | $L_{2}\cap L_{4}$ | $L_{2}\cap L_{4}$ | $L_{2}\cap L_{4}$ | $L_{2}\cap L_{4}$ | $L_{2}\cap L_{4}$ | $L_{2}\cap L_{4}$ | $L_{2}\cap L_{4}$ E | $L_{2}\cap L_{7}$ | $L_{2}\cap L_{6}$ | $L_{2}\cap L_{6}$ | $L_{2}\cap L_{6}$ | $L_{2}\cap L_{6}$ | $L_{2}\cap L_{6}$ | $L_{2}\cap L_{6}$ | $L_{2}\cap L_{6}$ | $L_{2}\cap L_{6}$ | $L_{2}\cap L_{5}$ F | $L_{2}\cap L_{8}$ | $L_{2}\cap L_{7}$ | $L_{2}\cap L_{7}$ | $L_{2}\cap L_{8}$ | $L_{2}\cap L_{9}$ | $L_{2}\cap L_{7}$ | $L_{2}\cap L_{9}$ | $L_{2}\cap L_{7}$ | $L_{2}\cap L_{9}$ | $L_{2}\cap L_{6}$ G | $L_{3}\cap L_{4}$ | $L_{3}\cap L_{4}$ | $L_{3}\cap L_{4}$ | $L_{3}\cap L_{4}$ | $L_{3}\cap L_{4}$ | $L_{3}\cap L_{4}$ | $L_{3}\cap L_{4}$ | $L_{3}\cap L_{4}$ | $L_{3}\cap L_{4}$ | $L_{3}\cap L_{4}$ H | $L_{3}\cap L_{5}$ | $L_{3}\cap L_{5}$ | $L_{3}\cap L_{5}$ | $L_{3}\cap L_{5}$ | $L_{3}\cap L_{5}$ | $L_{3}\cap L_{5}$ | $L_{3}\cap L_{5}$ | $L_{3}\cap L_{5}$ | $L_{3}\cap L_{5}$ | $L_{3}\cap L_{5}$ I | $L_{3}\cap L_{6}$ | $L_{3}\cap L_{8}$ | $L_{3}\cap L_{8}$ | $L_{3}\cap L_{7}$ | $L_{3}\cap L_{7}$ | $L_{3}\cap L_{9}$ | $L_{3}\cap L_{8}$ | $L_{3}\cap L_{9}$ | $L_{3}\cap L_{8}$ | $L_{3}\cap L_{9}$ J | $L_{5}\cap L_{8}$ | $L_{5}\cap L_{8}$ | $L_{5}\cap L_{8}$ | $L_{5}\cap L_{8}$ | $L_{5}\cap L_{9}$ | $L_{5}\cap L_{8}$ | $L_{4}\cap L_{9}$ | $L_{4}\cap L_{10}$ | $L_{4}\cap L_{10}$ | $L_{4}\cap L_{10}$ K | $L_{5}\cap L_{10}$ | $L_{5}\cap L_{10}$ | $L_{5}\cap L_{10}$ | $L_{5}\cap L_{10}$ | $L_{5}\cap L_{10}$ | $L_{5}\cap L_{9}$ | $L_{5}\cap L_{8}$ | $L_{5}\cap L_{10}$ | $L_{5}\cap L_{8}$ | $L_{5}\cap L_{8}$ L | $L_{6}\cap L_{7}$ | $L_{6}\cap L_{7}$ | $L_{6}\cap L_{7}$ | $L_{6}\cap L_{7}$ | $L_{6}\cap L_{7}$ | $L_{6}\cap L_{7}$ | $L_{6}\cap L_{7}$ | $L_{6}\cap L{10}$ | $L_{6}\cap L_{7}$ | $L_{6}\cap L_{10}$ M | $L_{6}\cap L_{9}$ | $L_{6}\cap L_{9}$ | $L_{6}\cap L_{10}$ | $L_{6}\cap L_{10}$ | $L_{6}\cap L_{8}$ | $L_{6}\cap L_{10}$ | $L_{7}\cap L_{10}$ | $L_{7}\cap L_{8}$ | $L_{7}\cap L_{10}$ | $L_{7}\cap L_{9}$ N | $L_{7}\cap L_{10}$ | $L_{7}\cap L_{10}$ | $L_{7}\cap L_{9}$ | $L_{7}\cap L_{9}$ | $L_{7}\cap L_{9}$ | $L_{7}\cap L_{9}$ | $L_{8}\cap L_{10}$ | $L_{7}\cap L_{9}$ | $L_{8}\cap L_{9}$ | $L_{7}\cap L_{10}$ O | $L_{8}\cap L_{9}$ | $L_{8}\cap L_{9}$ | $L_{8}\cap L_{9}$ | $L_{8}\cap L_{9}$ | $L_{8}\cap L_{10}$ | $L_{8}\cap L_{10}$ | $L_{9}\cap L_{10}$ | $L_{8}\cap L_{9}$ | $L_{9}\cap L_{10}$ | $L_{8}\cap L_{9}$ Table 10. 
Labels of the double points in the $(10_{3})$ arrangements

We highlight two geometric pictures of arrangements that better display the symmetry of their respective automorphism group: the arrangements $(10_{3})_{7}$ and $(10_{3})_{10}$ in Figure 3.

Figure 3. A geometric picture of $(10_{3})_{7}$ on the left showing its ${\mathbb{Z}}/3{\mathbb{Z}}$ automorphism group, and a geometric picture of $(10_{3})_{10}$ on the right showing the ${\mathbb{Z}}/5{\mathbb{Z}}$ subgroup of its ${\mathbb{Z}}/10{\mathbb{Z}}$ automorphism group

## 3\. The One-Line Extension Construction

We first describe the one-line extension construction in full generality. Then we describe the one-line extension construction through a specified number of double points. Lastly, we discuss the effects of this construction on the moduli spaces.

###### Definition 3.1. Let $\mathcal{A}=(\mathcal{P},\mathcal{L})$ be a combinatorial line arrangement. We define the set of one-line extensions of $\mathcal{A}$ as $\operatorname{OL}(\mathcal{A})=\left\\{\mathcal{A}\cup L\,\middle|\,L\in\binom{\mathcal{P}\cup\operatorname{Doubles}(\mathcal{A})}{k},k\in{\mathbb{Z}},k>0,\mathcal{A}\cup L\text{ is a line arrangement}\right\\}/\sim,$ where $\mathcal{A}\cup L\sim\mathcal{A}\cup L^{\prime}$ if the two arrangements are combinatorially isomorphic.

In other words, $\operatorname{OL}(\mathcal{A})$ is the set of all combinatorial line arrangements with $\left|\mathcal{L}\right|+1$ lines containing $\mathcal{A}$ as a subarrangement, up to isomorphism. The requirement that $\mathcal{A}\cup L$ is a line arrangement ensures that every pair of lines intersects no more than once. An interesting family of one-line extensions is the family of extensions purely through double points. Some, but not all, isomorphisms between different extensions are induced by the automorphisms of $\mathcal{A}$.

###### Definition 3.2. Let $\mathcal{A}$ be a combinatorial line arrangement. We define the set of extension lines of $\mathcal{A}$ by $k$ double points as $\operatorname{OLExt}(k,\mathcal{A})=\left\\{L\in\binom{\operatorname{Doubles}(\mathcal{A})}{k}\,\middle|\,\text{$\mathcal{A}\cup L$ is a line arrangement}\right\\}/\operatorname{Aut}(\mathcal{A}).$ We take $k\geq 3$ to avoid producing reductive arrangements. Again, to make sure that $\mathcal{A}\cup L$ is a line arrangement, we have to make sure that every pair of lines intersects no more than once.

###### Definition 3.3. The following set contains all the possible combinatorial line arrangements constructed by adding a line through $k$ double points in $\mathcal{A}$ up to isomorphism: $\operatorname{OLExtArrs}(k,\mathcal{A})=\\{\mathcal{A}\cup L\mid[L]\in\operatorname{OLExt}(k,\mathcal{A})\\}/\sim,$ where $\mathcal{A}\cup L\sim\mathcal{A}\cup L^{\prime}$ if the two arrangements are combinatorially isomorphic. We call this set the one-line extensions of $\mathcal{A}$ by $k$ double points. Each element is well-defined since for each $[L]\in\operatorname{OLExt}(k,\mathcal{A})$, if $[L]=[L^{\prime}]$, then an automorphism of $\mathcal{A}$ taking $L$ to $L^{\prime}$ induces an isomorphism between $\mathcal{A}\cup L$ and $\mathcal{A}\cup L^{\prime}$.

###### Remark 3.4. Note that it is possible that $[L]\neq[L^{\prime}]$ as elements of $\operatorname{OLExt}(k,\mathcal{A})$ yet $\mathcal{A}\cup L\cong\mathcal{A}\cup L^{\prime}$.

###### Example 3.5.
Table 11 gives an example of a pair of isomorphic arrangements constructed using different classes of $\operatorname{OLExt}(3,\mathcal{A})$, where $\mathcal{A}$ is a $(10_{3})$ configuration. $(10_{3})_{5}.BDL$ | $(10_{3})_{5}.BIK$ ---|--- | $L_{1}$ | $L_{2}$ | $L_{3}$ | $L_{4}$ | $L_{5}$ | $L_{6}$ | $L_{7}$ | $L_{8}$ | $L_{9}$ | $L_{10}$ | $L_{11}$ ---|---|---|---|---|---|---|---|---|---|--- 1 | 1 | 1 | 8 | 2 | 3 | 2 | 4 | 3 | 5 | B 2 | 4 | 6 | 9 | 4 | 7 | 5 | 6 | 6 | 7 | D 3 | 5 | 7 | 0 | 8 | 8 | 9 | 9 | 0 | 0 | L B | D | | D | | L | L | B | | | | $L_{1}$ | $L_{2}$ | $L_{3}$ | $L_{4}$ | $L_{5}$ | $L_{6}$ | $L_{7}$ | $L_{8}$ | $L_{9}$ | $L_{10}$ | $L_{11}$ ---|---|---|---|---|---|---|---|---|---|--- 1 | 1 | 1 | 8 | 2 | 3 | 2 | 4 | 3 | 5 | B 2 | 4 | 6 | 9 | 4 | 7 | 5 | 6 | 6 | 7 | I 3 | 5 | 7 | 0 | 8 | 8 | 9 | 9 | 0 | 0 | K B | | I | | K | | I | B | | K | Table 11. Two isomorphic arrangements in $\operatorname{OLExtArrs}(3,(10_{3})_{5})$ An isomorphism $\varphi:(10_{3})_{5}.BDL\to(10_{3})_{5}.BIK$ given in Table 12. Note that $\varphi(\\{L_{1},L_{2},\dots,L_{10}\\})\neq\\{L_{1},L_{2},\dots,L_{10}\\}$ or, equivalently, $\varphi(L_{11})\neq L_{11}$, so $(10_{3})_{5}.BDL$ and $(10_{3})_{5}.BIK$ would not have been identified via the quotient of $\operatorname{OLExt}(3,\mathcal{A})$ by $\operatorname{Aut}((10_{3})_{5})$. However, the two arrangements are identified since they are combinatorially isomorphic. Point $p$ | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | B | D | L ---|---|---|---|---|---|---|---|---|---|---|---|---|--- $\varphi(p)$ | 8 | 6 | I | 7 | B | 9 | 3 | 0 | K | 2 | 1 | 4 | 5 Line $L$ | $L_{1}$ | $L_{2}$ | $L_{3}$ | $L_{4}$ | $L_{5}$ | $L_{6}$ | $L_{7}$ | $L_{8}$ | $L_{9}$ | $L_{10}$ | $L_{11}$ | | $\varphi(L)$ | $L_{3}$ | $L_{8}$ | $L_{9}$ | $L_{5}$ | $L_{11}$ | $L_{10}$ | $L_{7}$ | $L_{1}$ | $L_{6}$ | $L_{4}$ | $L_{2}$ | | Table 12. The explicit isomorphism between $(10_{3})_{5}.BDL$ and $(10_{3})_{5}.BIK$ To detect this phenomenon, we rely on the following lemma: ###### Lemma 3.6. Fix $k\in{\mathbb{Z}}$ such that $k\geq 3$. Let $\mathcal{A}=(\mathcal{P},\mathcal{L})$ be a combinatorial line arrangement, and let $[L],[L^{\prime}]\in\operatorname{OLExt}(k,\mathcal{A})$ such that $[L]\neq[L^{\prime}]$. If there exists an isomorphism $\varphi:\mathcal{A}\cup L\to\mathcal{A}\cup L^{\prime}$, then there exists a line $\ell\in\mathcal{L}$ such that $(\mathcal{A}\cup L^{\prime})\setminus\ell\cong\mathcal{A}$. ###### Proof. Since $[L]\neq[L^{\prime}]$, we know that $\varphi(L)\neq L^{\prime}$, meaning $\varphi(L)=\ell$ for some $\ell\in\mathcal{L}$, so $\mathcal{A}=(\mathcal{A}\cup L)\setminus L\cong(\mathcal{A}\cup L^{\prime})\setminus\ell$. ∎ When considering one-line extensions of different combinatorial arrangements, it is also possible to have one-line extensions of non-isomorphic arrangements be isomorphic. In other words, it is possible for two non-isomorphic combinatorial line arrangements $\mathcal{A}_{1}\not\cong\mathcal{A}_{2}$ to have $\mathcal{A}_{1}^{\prime}\in\operatorname{OLExtArrs}(k,\mathcal{A}_{1})$ and $\mathcal{A}_{2}^{\prime}\in\operatorname{OLExtArrs}(k^{\prime},\mathcal{C}_{2})$ such that $\mathcal{A}_{1}^{\prime}\cong\mathcal{A}_{2}^{\prime}$. ###### Example 3.7. 
The following is an example of a pair of isomorphic arrangements: one comes from some $\operatorname{OLExtArrs}(3,\mathcal{A})$ and the other comes from some $\operatorname{OLExtArrs}(3,\mathcal{B})$, where $\mathcal{A}$ and $\mathcal{B}$ are distinct, non-isomorphic $(10_{3})$ configurations: $(10_{3})_{1}.AEM$ | $(10_{3})_{6}.KLO$ ---|--- | $L_{1}$ | $L_{2}$ | $L_{3}$ | $L_{4}$ | $L_{5}$ | $L_{6}$ | $L_{7}$ | $L_{8}$ | $L_{9}$ | $L_{10}$ | $L_{11}$ ---|---|---|---|---|---|---|---|---|---|--- 1 | 1 | 1 | 8 | 2 | 3 | 2 | 3 | 4 | 5 | A 2 | 4 | 6 | 9 | 4 | 5 | 6 | 7 | 6 | 7 | E 3 | 5 | 7 | 0 | 8 | 8 | 9 | 9 | 0 | 0 | M A | E | | A | | M | E | | M | | | $L_{1}$ | $L_{2}$ | $L_{3}$ | $L_{4}$ | $L_{5}$ | $L_{6}$ | $L_{7}$ | $L_{8}$ | $L_{9}$ | $L_{10}$ | $L_{11}$ ---|---|---|---|---|---|---|---|---|---|--- 1 | 1 | 1 | 8 | 2 | 3 | 2 | 5 | 3 | 4 | K 2 | 4 | 6 | 9 | 4 | 7 | 6 | 7 | 5 | 6 | L 3 | 5 | 7 | 0 | 8 | 8 | 9 | 9 | 0 | 0 | O | | | | K | L | L | O | K | L | Table 13. Two isomorphic arrangements: one comes from $\operatorname{OLExtArrs}(3,(10_{3})_{1})$ and the other comes from $\operatorname{OLExtArrs}(3,(10_{3})_{6})$ An isomorphism $\varphi:(10_{3})_{1}.AEM\to(10_{3})_{6}.KLO$ is given in Table 14. Point $p$ | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | A | E | M ---|---|---|---|---|---|---|---|---|---|---|---|---|--- $\varphi(p)$ | 2 | 5 | O | 7 | K | 3 | 4 | 1 | L | 6 | 9 | 0 | 8 Line $L$ | $L_{1}$ | $L_{2}$ | $L_{3}$ | $L_{4}$ | $L_{5}$ | $L_{6}$ | $L_{7}$ | $L_{8}$ | $L_{9}$ | $L_{10}$ | $L_{11}$ | | $\varphi(L)$ | $L_{8}$ | $L_{9}$ | $L_{2}$ | $L_{7}$ | $L_{11}$ | $L_{6}$ | $L_{7}$ | $L_{3}$ | $L_{5}$ | $L_{1}$ | $L_{4}$ | | Table 14. The explicit isomorphism between $(10_{3})_{1}.AEM$ and $(10_{3})_{6}.KLO$ To detect when one-line extensions from different arrangements are isomorphic, we have the following tool. ###### Lemma 3.8. Let $\mathcal{A}_{1}=(\mathcal{P}_{1},\mathcal{L}_{1})$ and $\mathcal{A}_{2}=(\mathcal{P}_{2},\mathcal{L}_{2})$ be two arrangements such that $\mathcal{A}_{1}\not\cong\mathcal{A}_{2}$, and let $[L]\in\operatorname{OLExt}(k,\mathcal{A}_{1})$ and $[L^{\prime}]\in\operatorname{OLExt}(k,\mathcal{A}_{2})$. If there exists an isomorphism $\varphi:\mathcal{A}_{1}\cup L\to\mathcal{A}_{2}\cup L^{\prime}$, then there exists a line $\ell\in\mathcal{L}_{2}$ such that $\mathcal{A}_{2}\cup L^{\prime}\setminus\ell\cong\mathcal{A}_{1}$. ###### Proof. If $\varphi(L)=L^{\prime}$, then it must be the case that $\varphi(\mathcal{A}_{1})=\mathcal{A}_{2}$, which goes against our hypothesis that $\mathcal{A}_{1}\not\cong\mathcal{A}_{2}$. We may then assume that $\varphi(L)\neq L^{\prime}$ and $\varphi(L)=\ell$ for some $\ell\in\mathcal{L}_{2}$. Then $\mathcal{A}_{1}=(\mathcal{A}_{1}\cup L)\setminus L\cong(\mathcal{A}_{2}\cup L^{\prime})\setminus\ell$ by $\varphi$. ∎ Empirically, the one-line extension construction is effective since the construction generally reduces the dimension of the moduli space and a zero dimensional space is reducible as long as it is not empty or a singleton. The dimension of the moduli space is reduced after a one-line extension since requiring three points to be collinear is an additional constraint on the moduli space unless those three points are already collinear. Requiring four points to be collinear gives two additional constraints in the generic case, and requiring fives points to be collinear gives three additional constraints in the generic case. ## 4\. 
One-Line Extensions of $(10_{3})$ Configurations

Out of the eleven possible one-line extensions of $(9_{3})$ configurations by three double points, six have a reducible moduli space. With this as motivation, we apply one-line extensions to all $(10_{3})$ configurations in hopes of producing combinatorial line arrangements of 11 lines with a reducible moduli space. We now apply the one-line extension construction through $k$ double points to all of the $(10_{3})$ configurations, for $k=3,4,5$. Note that $k\geq 6$ is impossible, since six double points on the new line would require twelve distinct lines in the configuration. The construction is detailed in Algorithm 1.

Algorithm 1 Enumeration Algorithm

Input: A list $\mathfrak{C}$ of all $(10_{3})$ configurations up to isomorphism and their automorphism groups, and a value for $k$
Output: A list $\mathfrak{A}$ of combinatorial line arrangements of $11$ lines that can be constructed by adding a line through $k$ double points in a configuration in $\mathfrak{C}$, up to isomorphism

  Initialize $\mathfrak{A}:=\emptyset$
  Initialize $\mathfrak{F}:=\emptyset$, the arrangements needing additional testing
  for $\mathcal{C}=(\mathcal{P},\mathcal{L})\in\mathfrak{C}$ do
    Calculate $\operatorname{OLExt}(k,\mathcal{C})$
    for $L\in\operatorname{OLExt}(k,\mathcal{C})$ do
      Set $\mathfrak{A}:=\mathfrak{A}\cup\\{\mathcal{C}\cup L\\}$
      for $L^{*}\in\mathcal{L}$ do
        if $\mathcal{C}\cup L\setminus L^{*}$ is a $(10_{3})$ configuration then
          $\mathfrak{F}:=\mathfrak{F}\cup\\{\mathcal{C}\cup L\\}$
          break
        end if
      end for
    end for
  end for
  Check for isomorphisms among the arrangements in $\mathfrak{F}$, throwing away isomorphic copies in $\mathfrak{A}$
  return $\mathfrak{A}$

In order to start implementing this construction, we need the automorphism groups of the $(10_{3})$ configurations. The generators for the automorphism groups were also provided in [Mar87]. A correspondence between the naming of the configurations in [Mar87] and [Grü09] is provided in [Grü09]. We indicate the automorphism groups in Table 9.

Now we enumerate the one-line extensions of $(10_{3})$ configurations through 3, 4, or 5 double points up to isomorphism, utilizing Lemmas 3.6 and 3.8. Fix $k\in\\{3,4,5\\}$. We can calculate $\operatorname{OLExt}(k,(10_{3})_{i})$ for $i=1,\dots,10$ directly. Then we form the set

$\bigcup_{i=1}^{10}\\{(10_{3})_{i}\cup L\mid[L]\in\operatorname{OLExt}(k,(10_{3})_{i})\\},$

which contains all of the possible one-line extensions of $(10_{3})$ configurations through $k$ double points, but some isomorphism classes might be represented more than once. Lemmas 3.6 and 3.8 imply that such an isomorphism class can only come from arrangements of the form $(10_{3})_{i}\cup L$ such that there exists a line $\ell$ of $(10_{3})_{i}$ where $(10_{3})_{i}\cup L\setminus\ell$ is a $(10_{3})$ configuration. This creates a much smaller set to check for pairwise isomorphisms. This enumeration process is detailed in Algorithm 1.

Using the automorphism groups of all the $(10_{3})$ configurations from Table 9, we can apply Algorithm 1. For $k=3$, the first block of code results in three hundred thirty-seven arrangements. Lemma 3.6 identifies a pair of isomorphic arrangements both constructed from $(10_{3})_{5}$, leaving us with a subtotal of three hundred thirty-six arrangements. Lemma 3.8 identifies fifteen pairs of isomorphic line arrangements, and we are able to conclude that the remaining three hundred twenty-one arrangements are pairwise non-isomorphic. See Table 1 for more details. For $k=4$ and $k=5$, the results of the construction are summarized in Tables 2 and 3.
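Continuing the Python sketch from Section 3 (again written only for illustration; the quotient by $\operatorname{Aut}(\mathcal{C})$ and the final pairwise isomorphism tests are omitted), the core loop of Algorithm 1, including the flagging step motivated by Lemmas 3.6 and 3.8, can be set up as follows.

```python
from collections import Counter

def extend(C, pts, tag="X"):
    """Form C ∪ L: each chosen double point gets a fresh label and is added
    to its two old lines and to the new line (so it becomes a triple point)."""
    lines = [set(L) for L in C]
    new_line = set()
    for n, pair in enumerate(pts):
        label = f"{tag}{n}"
        new_line.add(label)
        for i in pair:
            lines[i].add(label)
    return [frozenset(L) for L in lines] + [frozenset(new_line)]

def delete_line(lines, idx):
    """Remove one line and drop the points that no longer lie on 3 lines."""
    rest = [set(L) for j, L in enumerate(lines) if j != idx]
    cnt = Counter(p for L in rest for p in L)
    return [frozenset(p for p in L if cnt[p] >= 3) for L in rest]

def is_103(lines):
    """Ten lines, three listed points per line, three lines per point."""
    cnt = Counter(p for L in lines for p in L)
    return (len(lines) == 10 and all(len(L) == 3 for L in lines)
            and all(c == 3 for c in cnt.values()))

def algorithm1_core(configs, k):
    arrangements, flagged = [], []
    for C in configs:
        for pts in extension_lines(C, k):      # helper from the Section 3 sketch
            ext = extend(C, pts)
            arrangements.append(ext)
            # flag extensions from which deleting some original line leaves
            # a (10_3) configuration: only these need the extra isomorphism
            # tests of Lemmas 3.6 and 3.8
            if any(is_103(delete_line(ext, j)) for j in range(len(C))):
                flagged.append(ext)
    return arrangements, flagged
```

Only the flagged arrangements (the set $\mathfrak{F}$ above) then need to be checked pairwise for isomorphism, exactly as in the last step of Algorithm 1.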
The results in those tables about the reducibility of various notions of the moduli space are obtained by using tools from the next section.

## 5\. Irreducibility of Moduli Spaces

In order to use results from Section 2, we must be able to calculate the moduli space of a combinatorial line arrangement. We follow Algorithm 2 in [BP15] to achieve this. Given a combinatorial line arrangement $\mathcal{A}$, we fix a projective basis of four points (or lines), accounting for the quotient by $\operatorname{PGL}(3,\mathbb{C})$. Then we add in a line or point from $\mathcal{A}$ one at a time, parameterizing when needed. We are then left with parameters $v_{1},v_{2},\dots,v_{r}$ and polynomials $f_{1},f_{2},\dots,f_{s}\in{\mathbb{C}}[v_{1},v_{2},\dots,v_{r}]$ such that $f_{i}(v_{1},v_{2},\dots,v_{r})=0$ for all $i=1,2,\dots,s$ to ensure the collinearity relations. There are also $g_{1},g_{2},\dots,g_{t}\in{\mathbb{C}}[v_{1},v_{2},\dots,v_{r}]$ such that $g_{j}(v_{1},v_{2},\dots,v_{r})\neq 0$ for all $j=1,2,\dots,t$, since $g_{j}(v_{1},v_{2},\dots,v_{r})=0$ would correspond to a degenerate realization of the arrangement.

###### Example 5.1.

The construction of $(10_{3})_{5}$ in [ACTY13] requires the introduction of three parameters $a,b,c$, and the constraint on the parameters is

$-a^{2}b^{2}c+a^{2}b^{2}+a^{2}bc-a^{2}c+2abc^{2}-3abc-ac^{2}+2ac-bc^{2}+bc+c^{2}-c=0.$

One of the new arrangements is obtained by adding a line through the double points $A,N,O$ and has the arrangement table given in Table 15.

| $L_{1}$ | $L_{2}$ | $L_{3}$ | $L_{4}$ | $L_{5}$ | $L_{6}$ | $L_{7}$ | $L_{8}$ | $L_{9}$ | $L_{10}$ | $L_{11}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 1 | 8 | 2 | 3 | 2 | 4 | 3 | 5 | A |
| 2 | 4 | 6 | 9 | 4 | 7 | 5 | 6 | 6 | 7 | N |
| 3 | 5 | 7 | 0 | 8 | 8 | 9 | 9 | 0 | 0 | O |
| A |   |   | A |   |   | N | O | N | O |   |

Table 15. An arrangement table for the arrangement $(10_{3})_{5}.ANO$

The additional line requires the additional constraint that

$2a^{2}b^{2}c-a^{2}b^{2}-2a^{2}bc^{2}+a^{2}c^{2}+ab^{2}c^{3}-2ab^{2}c+ab^{2}-2abc^{3}+abc^{2}+abc+2ac^{3}-2ac^{2}-bc^{4}+bc^{3}+bc^{2}-bc+c^{4}-2c^{3}+c^{2}=0.$

If the system $\\{f_{i}=0\mid i=1,2,\dots,s\\}\cup\\{g_{j}\neq 0\mid j=1,2,\dots,t\\}$ has no solutions, then the combinatorial line arrangement is not geometrically realizable. If the system does have a solution, then we analyze the irreducibility of the moduli space. In order to look at the irreducibility of the moduli space, we eliminate the factors of each $f_{i}$ that contradict the system $\\{g_{j}\neq 0\mid j=1,2,\dots,t\\}$. We end up with polynomials $h_{1},h_{2},\dots,h_{m}\in{\mathbb{C}}[v_{1},v_{2},\dots,v_{r}]$ so that the moduli space is the variety $V(h_{1},h_{2},\dots,h_{m})\setminus\left(V(g_{1})\cup\dots\cup V(g_{t})\right)$ and its closure is $V(h_{1},h_{2},\dots,h_{m})$. Then the irreducible components of the moduli space correspond to the irreducible components of $V(h_{1},h_{2},\dots,h_{m})$.

For all of our arrangements, $V(h_{1},h_{2},\dots,h_{m})$ is either a finite set or there exists a parameterization such that $m=1$. This is achieved by selecting a projective basis that minimizes $m$. If $V(h_{1},h_{2},\dots,h_{m})$ is a finite set, then $\mathcal{M}_{\mathcal{A}}$ is a finite set, and the moduli space $\mathcal{M}_{\mathcal{A}}$ is reducible if it contains at least two points. If $m=1$ and $h_{1}$ is irreducible over ${\mathbb{C}}$, then $V(h_{1})$ and therefore $\mathcal{M}_{\mathcal{A}}^{\mathbb{C}}$ are irreducible.
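Before turning to irreducibility, we note that the collinearity constraints $f_{i}$ are mechanical to generate: three points are collinear exactly when the determinant of their homogeneous coordinates vanishes. The sketch below (Python with SymPy, included only as an illustration of the procedure described above, with made-up coordinates rather than those of any particular arrangement) shows the basic step.

```python
import sympy as sp

a, b, c = sp.symbols('a b c')

def collinear(p, q, r):
    """Collinearity constraint for three points in homogeneous coordinates:
    one of the polynomials f_i, required to vanish on the moduli space."""
    return sp.expand(sp.Matrix([p, q, r]).det())

# A projective basis uses up the PGL(3, C) freedom; further points are
# added one at a time, introducing parameters only when needed.
P1, P2, P3, P4 = [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]
Q = [1, a, b]            # a later point carrying two parameters
R = [0, 1, c]            # another parameterized point

f = collinear(P4, Q, R)  # e.g. require P4, Q, R to be collinear
print(f)                 # a*c - b - c + 1

# Non-degeneracy conditions g_j != 0 are determinants of the same kind,
# e.g. requiring certain triples of points *not* to be collinear.
```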
We will discuss how to decide whether $h_{1}$ is irreducible over ${\mathbb{C}}$ later in this section.

###### Example 5.2.

In Example 5.1, the moduli space is a variety that is described by two polynomials. It would be easier to determine irreducibility if we could reparameterize the moduli space so that it is described by a single polynomial. Indeed, we can reparametrize the moduli space by starting with the following four points and coordinates:

$9:\;[1:0:0]\qquad 2:\;[0:1:0]\qquad 0:\;[0:0:1]\qquad 12:\;[1:1:1]$

The points and lines are then constructed in the following order:

$9,2,0,O,L_{10},L_{8},L_{7},L_{4},5,L_{1},A,L_{11},N,L_{9},3,6,L_{2},1,4,L_{3},L_{5},8,7,L_{6}.$

This leads to a new parametrization with only two variables and the single constraint that

$a^{4}b^{2}+a^{4}b-3a^{3}b^{2}-3a^{3}b+a^{2}b^{2}+2a^{2}b-2ab-a+1=0,$

which is easier to work with: we will see, by Theorem 5.8 and Lemma 5.9, that this polynomial is irreducible over ${\mathbb{C}}$, so the moduli space is irreducible.

Suppose $m=1$ and $h_{1}$ is reducible over ${\mathbb{C}}$. Here we look at the intersections of the irreducible components of $V(h_{1})$ to see if they are in $\mathcal{M}_{\mathcal{A}}^{\mathbb{C}}$. If an intersection point of the irreducible components of $V(h_{1})$ is in $\mathcal{M}_{\mathcal{A}}^{\mathbb{C}}$, the irreducible components containing this intersection point are in the same Euclidean-connected component.

As we have discussed earlier, for many of our arrangements, we can translate the problem of determining the irreducibility of the moduli space into a problem of determining the irreducibility of the polynomial describing the moduli space.

###### Definition 5.3.

A polynomial $f$ over a field $K$ is absolutely irreducible if $f$ is irreducible over any field extension of $K$, or equivalently, $f$ is irreducible over $\overline{K}$, the algebraic closure of $K$.

The problem of determining whether $f\in K[x_{1},x_{2},\ldots,x_{n}]$ is absolutely irreducible can be translated into a problem about the Newton polytope of $f$, as we are able to put a monoidal operation on Newton polytopes that mimics the multiplication of polynomials.

###### Definition 5.4.

Let $K$ be a field and let $f\in K[x_{1},x_{2},\ldots,x_{n}]$ be a polynomial. Then the Newton polytope of $f$ is defined to be the convex hull of a set of points in ${\mathbb{R}}^{n}$:

$P_{f}=\textup{Conv}\left\\{(u_{1},u_{2},\ldots,u_{n})\,\middle|\,\alpha_{u_{1},u_{2},\ldots,u_{n}}\neq 0,f=\sum_{j_{1},j_{2},\ldots,j_{n}}\alpha_{j_{1},j_{2},\ldots,j_{n}}x_{1}^{j_{1}}x_{2}^{j_{2}}\cdots x_{n}^{j_{n}}\right\\}.$

The Newton polytope is defined this way so that the points in $P_{f}$ provide a geometric interpretation of the combinations of exponents in each term in $f$. Now we want an operation on Newton polytopes that provides an analogue of polynomial multiplication in this geometric interpretation.

###### Definition 5.5.
(see for example Bertone-Chèze-Galligo [BCG10, Definition 5]) If $A_{1}$ and $A_{2}$ are two subsets of ${\mathbb{R}}^{n}$, then we define their Minkowski sum as

$A_{1}+A_{2}=\\{a_{1}+a_{2}\mid a_{1}\in A_{1},a_{2}\in A_{2}\\}.$

In fact, if we view $(K[x_{1},x_{2},\ldots,x_{n}],\cdot)$ as a monoid and $(\textup{{NewP}}_{n},+)$, where $\textup{{NewP}}_{n}$ is the set of all convex polytopes with nonnegative integer coordinates in ${\mathbb{R}}^{n}$, also as a monoid, then $P:K[x_{1},x_{2},\ldots,x_{n}]\to\textup{{NewP}}_{n}$ defined by $f\mapsto P_{f}$ is a monoid homomorphism by the following proposition.

###### Proposition 5.6.

(Ostrowski [Ost75], as found in Bertone-Chèze-Galligo [BCG10, Lemma 6]) Let $f,g\in K[x_{1},x_{2},\ldots,x_{n}]$. Then $P_{fg}=P_{f}+P_{g}$.

Using the language of Newton polytopes, the following proposition leads to a useful criterion for detecting the absolute irreducibility of polynomials.

###### Proposition 5.7.

([Rup04]) If $f\in K[x_{1},x_{2},\ldots,x_{n}]$ is irreducible over $K$ but absolutely reducible, and $f=f_{1}f_{2}\cdots f_{s}$ is its factorization into absolutely irreducible factors, then

$P_{f}=P_{f_{1}}+P_{f_{2}}+\cdots+P_{f_{s}}=\underbrace{P_{f_{1}}+\cdots+P_{f_{1}}}_{\text{$s$ times}}.$

###### Proof.

The factors $f_{1},f_{2},\ldots,f_{s}$ are conjugates over $K$, so the corresponding Newton polytopes are the same. ∎

This leads to the following reformulation of a result by [Gao01] as found in [BCG10].

###### Theorem 5.8.

(Gao [Gao01], as found in Bertone-Chèze-Galligo [BCG10]) Let $f(x_{1},x_{2},\ldots,x_{n})$ be an irreducible polynomial in $K[x_{1},x_{2},\ldots,x_{n}]$, where $K$ is a field. If the Newton polytope $P_{f}$ has vertices

$(x_{1}^{(1)},x_{2}^{(1)},\dots,x_{n}^{(1)}),(x_{1}^{(2)},x_{2}^{(2)},\dots,x_{n}^{(2)}),\ldots,(x_{1}^{(k)},x_{2}^{(k)},\dots,x_{n}^{(k)})$

and the coordinates of these vertices are coprime, i.e.

(2) $\gcd(x_{1}^{(1)},x_{2}^{(1)},\dots,x_{n}^{(1)},x_{1}^{(2)},x_{2}^{(2)},\dots,x_{n}^{(2)},\ldots,x_{1}^{(k)},x_{2}^{(k)},\dots,x_{n}^{(k)})=1,$

then $f$ is absolutely irreducible.

It is easy to check the $\gcd$ condition of Equation 2 in Theorem 5.8 algorithmically. However, we require a little more theory to see whether $f$ is indeed irreducible over $K$, in order to satisfy the first hypothesis of Theorem 5.8. For the purpose of this paper, we are only looking at polynomials over $K=\mathbb{Q}$. In fact, the polynomials we are considering have coefficients in ${\mathbb{Z}}$, and by Gauss's Lemma, a primitive polynomial $f$ that is irreducible over ${\mathbb{Z}}$ is also irreducible viewed as an element of ${\mathbb{Q}}[x_{1},x_{2},\dots,x_{n}]$, so we need a criterion for a polynomial to be irreducible over ${\mathbb{Z}}$, such as the following.

###### Lemma 5.9.

Let $f\in{\mathbb{Z}}[x_{1},x_{2},\ldots,x_{n}]$. Now let $g(x_{1})=f(x_{1},t_{2},t_{3},\dots,t_{n})$ for some particular values $t_{2},t_{3},\dots,t_{n}\in{\mathbb{Z}}$. Furthermore, let $p$ be a prime number and define $h(x_{1})\in{\mathbb{Z}}/p{\mathbb{Z}}[x_{1}]$ to be $g(x_{1})$ with its coefficients reduced modulo $p$. If both $h$ and $f$ have the same degree in the variable $x_{1}$ and $h$ is irreducible over ${\mathbb{Z}}/p{\mathbb{Z}}$, then $f$ is irreducible over ${\mathbb{Z}}$.

Since there are only a small number of irreducible univariate polynomials over ${\mathbb{Z}}/p{\mathbb{Z}}$ of a certain degree for a small prime $p$, it is easy to check whether $h$ is irreducible over ${\mathbb{Z}}/p{\mathbb{Z}}$ in the above lemma.
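Both checks are easy to carry out by computer. The sketch below (Python with SymPy, written only for this exposition) applies the $\gcd$ condition of Equation 2 and a Lemma 5.9-style specialization to the constraint polynomial from Example 5.2; the Newton polytope vertices are found with a standard monotone-chain convex hull.

```python
from functools import reduce
from math import gcd
from sympy import Poly, symbols

def hull(points):
    """Vertices of the convex hull of 2-D integer points (monotone chain)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, p, q):
        return (p[0]-o[0])*(q[1]-o[1]) - (p[1]-o[1])*(q[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

a, b = symbols('a b')
f = a**4*b**2 + a**4*b - 3*a**3*b**2 - 3*a**3*b + a**2*b**2 + 2*a**2*b - 2*a*b - a + 1

verts = hull(Poly(f, a, b).monoms())          # Newton polytope vertices
print(verts)                                   # [(0, 0), (1, 0), (4, 1), (4, 2), (2, 2)]
print(reduce(gcd, [c for v in verts for c in v]) == 1)   # Equation 2 holds

# Lemma 5.9-style check: specialize a = -1 and look at the result modulo 7.
g = Poly(f.subs(a, -1), b)
print(g.as_expr())                             # 5*b**2 + 8*b + 2
print([g.eval(t) % 7 for t in range(7)])       # no zeros, so g is irreducible mod 7
```

Example 5.10 carries out these steps by hand for the same polynomial.

###### Example 5.10.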
The polynomial in Example 5.2 is $f(a,b):=a^{4}b^{2}+a^{4}b-3a^{3}b^{2}-3a^{3}b+a^{2}b^{2}+2a^{2}b-2ab-a+1$, whose Newton polytope has vertices $(0,0),(1,0),(2,2),(4,1),(4,2)$. The greatest common divisor of the coordinates of these vertices is 1, so Theorem 5.8 implies that $f(a,b)$ is absolutely irreducible as long as $f(a,b)$ is irreducible over ${\mathbb{Q}}$. We consider $f(-1,b)=5b^{2}+8b+2$. This quadratic has no roots modulo 7 and is therefore irreducible over ${\mathbb{Z}}/7{\mathbb{Z}}$; since it has the same degree in $b$ as $f$, Lemma 5.9 implies that $f(a,b)$ is irreducible over ${\mathbb{Z}}$. Then Gauss's Lemma implies that $f(a,b)$ is irreducible over ${\mathbb{Q}}$.

Theorem 5.8 is easy to check and suffices to show absolute irreducibility for the majority of the polynomials with which we are concerned. The following theorem handles the rest of the polynomials we have by removing some points from the Newton polytope and then applying Theorem 5.8.

###### Theorem 5.11.

(Bertone-Chèze-Galligo [BCG10, Proposition 9], Kaltofen [Kal95]) Let $f\in{\mathbb{Z}}[x_{1},x_{2},\dots,x_{n}]$ and let $\overline{f}$ be $f$ with its coefficients reduced modulo $p$ for some prime $p$. If $\deg(f)=\deg(\overline{f})$ and $\overline{f}$ is absolutely irreducible, then $f$ is absolutely irreducible.

Now Theorems 5.8 and 5.11 provide us with criteria to test the irreducibility of the polynomial constraints on the parameters of the moduli spaces. This then allows us to determine the number of connected components the moduli space has. The results are summarized in Theorems 1.1, 1.2, and 1.3, which give the number of arrangements in each family, classified by their moduli spaces. We break the tables down by the $(10_{3})$ configuration of which each arrangement is a one-line extension, and by whether the moduli space is irreducible, empty, reducible as $\mathcal{M}_{\mathcal{A}}$ with $\mathcal{M}_{\mathcal{A}}^{\mathbb{C}}$ irreducible, or reducible as $\mathcal{M}_{\mathcal{A}}^{\mathbb{C}}$. The subtotal simply adds all of the columns together, whereas the total accounts for identifications up to isomorphism.

## 6\. Corrections

We now remedy a small error in previous work by the first author with Amram, Teicher, and Ye that causes some isomorphic arrangements to be counted more than once.

###### Proposition 6.1.

There are only two distinct line arrangements with ten lines obtained by adding a tenth line through three double points in the configuration $(9_{3})_{1}$.

This replaces the following now-incorrect result:

###### Proposition 6.2.

(Amram-Cohen-Teicher-Ye [ACTY13, Lemma 8.2]) There are five line arrangements with ten lines obtained by adding a tenth line through three double points in the configuration $(9_{3})_{1}$.

The argument in [ACTY13] relies on the automorphism group of the configuration $(9_{3})_{1}$ to identify isomorphic arrangements with ten lines constructed from the configuration $(9_{3})_{1}$ as described. It appears that [ACTY13] has depicted the configuration $(9_{3})_{1}$ as having an automorphism group isomorphic to $D_{6}$, the dihedral group of order twelve. However, the automorphism group of the configuration $(9_{3})_{1}$ has order larger than twelve.

###### Proposition 6.3.

(Coxeter [Cox77]) The automorphism group of the configuration $(9_{3})_{1}$ is isomorphic to $PG(2,3)$, a group of order one hundred eight.

A larger automorphism group would mean that we could potentially identify some of the arrangements with ten lines constructed from $(9_{3})_{1}$ as isomorphic. Indeed, this is the case.
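Proposition 6.3 can also be confirmed by brute force. The short sketch below (Python, illustrative only) takes the $(9_{3})_{1}$ incidence table reproduced in Table 16 (its first nine columns, ignoring the added tenth line) and counts the point permutations that map lines to lines; the count is the order of the automorphism group.

```python
from itertools import permutations

# (9_3)_1 from the first nine columns of Table 16.
points = "012345678"
lines = {frozenset(s) for s in
         ["123", "145", "167", "248", "368", "578", "046", "027", "035"]}

order = 0
for perm in permutations(points):
    phi = dict(zip(points, perm))
    if all(frozenset(phi[p] for p in L) in lines for L in lines):
        order += 1

print(order)   # 108, in agreement with Proposition 6.3
```

Because the group has order 108 rather than 12, more of the ten-line arrangements built from $(9_{3})_{1}$ are identified than in [ACTY13], which is exactly what the explicit isomorphisms below exhibit.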
We conclude with a proof of Proposition 6.1. ###### Proof. In [ACTY13], five line arrangements with 10 lines are said to be generated as described, but we show explicitly that some of the new line arrangements are isomorphic. Using the names of the arrangements in [ACTY13], we first show $(9_{3})_{1}.CDI\cong(9_{3})_{1}.CFH$ , whose arrangement tables are given in Table 16. $(9_{3})_{1}.CDI$ | $(9_{3})_{1}.CFH$ ---|--- | $L_{1}$ | $L_{2}$ | $L_{3}$ | $L_{4}$ | $L_{5}$ | $L_{6}$ | $L_{7}$ | $L_{8}$ | $L_{9}$ | $L_{10}$ ---|---|---|---|---|---|---|---|---|--- 1 | 1 | 1 | 2 | 3 | 5 | 0 | 0 | 0 | C 2 | 4 | 6 | 4 | 6 | 7 | 4 | 2 | 3 | D 3 | 5 | 7 | 8 | 8 | 8 | 6 | 7 | 5 | I C | I | | D | | C | | I | D | | $L_{1}$ | $L_{2}$ | $L_{3}$ | $L_{4}$ | $L_{5}$ | $L_{6}$ | $L_{7}$ | $L_{8}$ | $L_{9}$ | $L_{10}$ ---|---|---|---|---|---|---|---|---|--- 1 | 1 | 1 | 2 | 3 | 5 | 0 | 0 | 0 | C 2 | 4 | 6 | 4 | 6 | 7 | 4 | 2 | 3 | F 3 | 5 | 7 | 8 | 8 | 8 | 6 | 7 | 5 | H C | H | F | F | H | C | | | | Table 16. Arrangement tables for two isomorphic arrangements $(9_{3})_{1}.CDI$ and $(9_{3})_{1}.CFH$ The arrangements $(9_{3})_{1}.CDI$ and $(9_{3})_{1}.CFH$ are isomorphic due to the following isomorphism $\varphi$. The isomorphism $\varphi$ also induces an automorphism of the lines. See Table 17. Point $p$ | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | C | D | I ---|---|---|---|---|---|---|---|---|---|---|---|--- $\varphi(p)$ | 4 | 3 | 1 | 2 | 6 | 8 | 0 | 5 | 7 | C | F | H Line $L$ | $L_{1}$ | $L_{2}$ | $L_{3}$ | $L_{4}$ | $L_{5}$ | $L_{6}$ | $L_{7}$ | $L_{8}$ | $L_{9}$ | $L_{10}$ | | $\varphi(L)$ | $L_{1}$ | $L_{5}$ | $L_{9}$ | $L_{3}$ | $L_{8}$ | $L_{6}$ | $L_{7}$ | $L_{2}$ | $L_{4}$ | $L_{10}$ | | Table 17. An explicit isomorphism $\varphi:(9_{3})_{1}.CDI\to(9_{3})_{1}.CFH$ Now we show that $(9_{3})_{1}.CDG\cong_{\varphi_{1}}(9_{3})_{1}.CDH\cong_{\varphi_{2}}(9_{3})_{1}.CFG$ , whose arrangement tables are given in Table 18. $(9_{3})_{1}.CDG$ | $(9_{3})_{1}.CDH$ | $(9_{3})_{1}.CFG$ ---|---|--- | $L_{1}$ | $L_{2}$ | $L_{3}$ | $L_{4}$ | $L_{5}$ | $L_{6}$ | $L_{7}$ | $L_{8}$ | $L_{9}$ | $L_{10}$ ---|---|---|---|---|---|---|---|---|--- 1 | 1 | 1 | 2 | 3 | 5 | 0 | 0 | 0 | C 2 | 4 | 6 | 4 | 6 | 7 | 4 | 2 | 3 | D 3 | 5 | 7 | 8 | 8 | 8 | 6 | 7 | 5 | G C | | | D | G | C | | G | D | | $L_{1}$ | $L_{2}$ | $L_{3}$ | $L_{4}$ | $L_{5}$ | $L_{6}$ | $L_{7}$ | $L_{8}$ | $L_{9}$ | $L_{10}$ ---|---|---|---|---|---|---|---|---|--- 1 | 1 | 1 | 2 | 3 | 5 | 0 | 0 | 0 | C 2 | 4 | 6 | 4 | 6 | 7 | 4 | 2 | 3 | D 3 | 5 | 7 | 8 | 8 | 8 | 6 | 7 | 5 | H C | H | | D | H | C | | | D | | $L_{1}$ | $L_{2}$ | $L_{3}$ | $L_{4}$ | $L_{5}$ | $L_{6}$ | $L_{7}$ | $L_{8}$ | $L_{9}$ | $L_{10}$ ---|---|---|---|---|---|---|---|---|--- 1 | 1 | 1 | 2 | 3 | 5 | 0 | 0 | 0 | C 2 | 4 | 6 | 4 | 6 | 7 | 4 | 2 | 3 | F 3 | 5 | 7 | 8 | 8 | 8 | 6 | 7 | 5 | G C | | F | F | G | C | | G | | Table 18. Arrangement tables for three isomorphic arrangements $(9_{3})_{1}.CDG,(9_{3})_{1}.CDH$ and $(9_{3})_{1}.CFG$ All three arrangements are isomorphic due to the isomorphisms $\varphi_{1}$ and $\varphi_{2}$. The isomorphisms $\varphi_{1},\varphi_{2}$ also induce an automorphisms of the lines. See Table 19. 
∎ Point $P$ | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | C | D | G ---|---|---|---|---|---|---|---|---|---|---|---|--- $\varphi_{1}(P)$ | 4 | 7 | 5 | 8 | 0 | 2 | 6 | 1 | 3 | C | D | H $\varphi_{2}(\varphi_{1}(P))$ | 6 | 5 | 8 | 7 | 4 | 1 | 0 | 3 | 2 | C | F | G Line $L$ | $L_{1}$ | $L_{2}$ | $L_{3}$ | $L_{4}$ | $L_{5}$ | $L_{6}$ | $L_{7}$ | $L_{8}$ | $L_{9}$ | $L_{10}$ | | $\varphi_{1}(L)$ | $L_{6}$ | $L_{8}$ | $L_{3}$ | $L_{9}$ | $L_{5}$ | $L_{1}$ | $L_{7}$ | $L_{2}$ | $L_{4}$ | $L_{10}$ | | $\varphi_{2}(\varphi_{1}(L))$ | $L_{6}$ | $L_{2}$ | $L_{9}$ | $L_{4}$ | $L_{8}$ | $L_{1}$ | $L_{7}$ | $L_{5}$ | $L_{3}$ | $L_{10}$ | | Table 19. Explicit isomorphisms for $(9_{3})_{1}.CDG\cong_{\varphi_{1}}(9_{3})_{1}.CDH\cong_{\varphi_{2}}(9_{3})_{1}.CFG$ $(9_{3})_{1}.CFH$ $L_{1}$ $L_{2}$ $L_{3}$ $L_{4}$ $L_{5}$ $L_{6}$ $L_{7}$ $L_{8}$ $L_{9}$ $L_{10}$ 1 1 1 2 3 5 0 0 0 C 2 4 6 4 6 7 4 2 3 F 3 5 7 8 8 8 6 7 5 H C H F F H C | $(9_{3})_{3}.BDF$ $L_{1}$ $L_{2}$ $L_{3}$ $L_{4}$ $L_{5}$ $L_{6}$ $L_{7}$ $L_{8}$ $L_{9}$ $L_{10}$ 1 1 1 8 8 8 9 9 9 B 2 4 6 2 5 3 2 4 3 D 3 5 7 4 6 7 5 7 6 F F B B D D F ---|--- $(9_{3})_{1}.CDG$ $L_{1}$ $L_{2}$ $L_{3}$ $L_{4}$ $L_{5}$ $L_{6}$ $L_{7}$ $L_{8}$ $L_{9}$ $L_{10}$ 1 1 1 2 3 5 0 0 0 C 2 4 6 4 6 7 4 2 3 D 3 5 7 8 8 8 6 7 5 G C D G C G D | $(9_{3})_{3}.ACG$ $L_{1}$ $L_{2}$ $L_{3}$ $L_{4}$ $L_{5}$ $L_{6}$ $L_{7}$ $L_{8}$ $L_{9}$ $L_{10}$ 1 1 1 8 8 8 9 9 9 A 2 4 6 2 5 3 2 4 3 C 3 5 7 4 6 7 5 7 6 G G C A G C A $(9_{3})_{2}.DFI$ $L_{1}$ $L_{2}$ $L_{3}$ $L_{4}$ $L_{5}$ $L_{6}$ $L_{7}$ $L_{8}$ $L_{9}$ $L_{10}$ 1 1 1 8 8 8 4 3 2 D 2 4 6 4 2 3 7 5 5 F 3 5 7 6 7 9 9 6 9 I I D F I D F | $(9_{3})_{3}.AEG$ $L_{1}$ $L_{2}$ $L_{3}$ $L_{4}$ $L_{5}$ $L_{6}$ $L_{7}$ $L_{8}$ $L_{9}$ $L_{10}$ 1 1 1 8 8 8 9 9 9 A 2 4 6 2 5 3 2 4 3 E 3 5 7 4 6 7 5 7 6 G G E A G E A $(9_{3})_{2}.CFI$ $L_{1}$ $L_{2}$ $L_{3}$ $L_{4}$ $L_{5}$ $L_{6}$ $L_{7}$ $L_{8}$ $L_{9}$ $L_{10}$ 1 1 1 8 8 8 4 3 2 C 2 4 6 4 2 3 7 5 5 F 3 5 7 6 7 9 9 6 9 I I F I C F C | $(9_{3})_{3}.ADG$ $L_{1}$ $L_{2}$ $L_{3}$ $L_{4}$ $L_{5}$ $L_{6}$ $L_{7}$ $L_{8}$ $L_{9}$ $L_{10}$ 1 1 1 8 8 8 9 9 9 A 2 4 6 2 5 3 2 4 3 D 3 5 7 4 6 7 5 7 6 G G A G D D A $(9_{3})_{2}.DFA$ $L_{1}$ $L_{2}$ $L_{3}$ $L_{4}$ $L_{5}$ $L_{6}$ $L_{7}$ $L_{8}$ $L_{9}$ $L_{10}$ 1 1 1 8 8 8 4 3 2 D 2 4 6 4 2 3 7 5 5 F 3 5 7 6 7 9 9 6 9 A A D F D F A | $(9_{3})_{3}.BEG$ $L_{1}$ $L_{2}$ $L_{3}$ $L_{4}$ $L_{5}$ $L_{6}$ $L_{7}$ $L_{8}$ $L_{9}$ $L_{10}$ 1 1 1 8 8 8 9 9 9 A 2 4 6 2 5 3 2 4 3 D 3 5 7 4 6 7 5 7 6 G G A G D D A $(9_{3})_{2}.DFH$ $L_{1}$ $L_{2}$ $L_{3}$ $L_{4}$ $L_{5}$ $L_{6}$ $L_{7}$ $L_{8}$ $L_{9}$ $L_{10}$ 1 1 1 8 8 8 4 3 2 D 2 4 6 4 2 3 7 5 5 F 3 5 7 6 7 9 9 6 9 H D F H D F H | Table 20. The eleven one-line extensions of $(9_{3})$ configurations found in [ACTY13] ## References * [AAB14] Abdullah Al-Azemi and Dieter Betten, _The configurations $(12_{3})$ revisited_, J. Geom. 105 (2014), 391–417. * [ACSTYZ15] Meirav Amram, Moshe Cohen, Hao Sun, Mina Teicher, Fei Ye, and Anna Zarkh, _Combinatorial symmetry of line arrangements and applications_ , Topology Appl. 193 (2015), 226–247. * [ACTY13] Meirav Amram, Moshe Cohen, Mina Teicher, and Fei Ye, _Moduli spaces of ten-line arrangements with double and triple points_ , arXiv:1306.6105, 2013. * [AGTX15] Meirav Amram, Cheng Gong, Mina Teicher, and Wan-Yuan Xu, _Moduli spaces of arrangements of 11 projective lines with a quintuple point_ , Turk. J. Math. 39 (2015), 618–644. * [ATY13] Meirav Amram, Mina Teicher, and Fei Ye, _Moduli spaces of arrangements of 10 projective lines with quadruple points_ , Adv. Appl. Math. 
51 (2013), 392–418. * [BCG10] Cristina Bertone, Guillaume Chèze, and André Galligo, _Modular Las Vegas algorithms for polynomial absolute factorization_ , J. Symbolic Comput. (2010), 1280–1295. * [BLVSWZ99] Anders Björner, Michael Las Vergnas, Bernd Sturmfels, Neil White, and Günter M. Ziegler, _Oriented matroids_ , Cambridge University Press, 1999\. * [BP15] Jürgen Bokowski and Vincent Pilaud, _On topological and geometric $(19_{4})$ configurations_, Eur. J. Combin. 50 (2015), 4–17. * [Cox77] Harold Scott MacDonald Coxeter, _The pappus configuration and its groups_ , Pi Mu Epsilon J. 6 (1977), 331–336. * [CS97] Daniel Cohen and Alexander Suciu, _The braid monodromy of plane algebraic curves and hyperplane arrangements_ , Comment. Math. Helv. 72 (1997), 285–315. * [Dim17a] Alexandru Dimca, _Hyperplane arrangements: An introduction_ , Springer, 2017\. * [DIM17b] Alexandru Dimca, Denis Ibadula, and Anca Macinic, _Freeness and near freeness are combinatorial for line arrangements in small degrees_ , arXiv:1712.04400, 2017. * [DvS03] Robert Daublebsky von Sterneck, _Über die zu den configurationen $12_{3}$ zugehörigen gruppen von substitutionen_, Monatsh. Math. 14 (1903), 254–260. * [EP13] Eric Ens and Ranganathan Padmanabhan, _Group embedding of the projective plane ${\rm PG}(2,3)$_, Automated reasoning and mathematics, Lecture Notes in Comput. Sci., vol. 7788, Springer, Heidelberg, 2013, pp. 131–138. * [Fan97] Kwai-Man Fan, _Direct product of free groups as the fundamental group of the complement of a union of lines_ , Michigan Math. 44 (1997), 283–291. * [Gao01] Shuhong Gao, _Absolute irreducibility of polynomials via newton polytopes_ , J. Algebra (2001), 501–520. * [GB22] Benoît Guerville-Ballé, _The loop-linking number of line arrangements_ , Math. Z. 301 (2022), no. 2, 1821–1850. * [Gro90] H. Gropp, _On the existence and nonexistence of configurations $n_{k}$_, J. Comb. Inf. Syst. Sci. 15 (1990), 34–48. * [Grü09] Branko Grünbaum, _Configurations of points and lines_ , American Mathematical Society, 2009. * [GTV03] David Garber, Mina Teicher, and Uzi Vishne, _$\pi_{1}$ -classification of real arrangements with up to eight lines_, Topology 42 (2003), 265–289. * [HOV15] Jessica Hauschild, Jazmin Ortiz, and Oscar Vega, _On the levi graph of point-line configurations_ , Involve 8 (2015), no. 5, 893–900. * [Kal95] E. Kaltofen, _Effective noether irreducibility forms and applications_ , Journal of Computer and System Sciences 50 (1995), no. 2, 274 – 295\. * [Kan81] S. Kantor, _Ueber die configurationen (3,3) mit den Indices 8,9 und ihren Zusammenhang mit den Curven dritter Ordnung_ , Wien. Ber. (1881), 915–932. * [Koc16] William L. Kocay, _One-point extensions in $n_{3}$ configurations_, Ars Math. Contemp. 10 (2016), 291–322. * [Koc21] by same author, _The configurations $(13_{3})$_, Art Discrete Appl. Math. 4 (2021), no. 3, Paper No. 3.15, 16. * [Mö28] A. F. Möbius, _Kann von zwei dreiseitigen Pyramiden eine jede in Bezug auf die andere um- und eingeschrieben zugleich heißen?_ , J. Reine Angew. Math. 3 (1828), 273–278. * [Mac36] Saunders MacLane, _Some interpretations of abstract linear dependence in terms of projective geometry_ , Amer. J. Math. 58 (1936), no. 1, 236–240. * [Mar87] Vittorio Martinetti, _Sulle configurazioni piane $\mu_{3}$_, Ann. Mat. Pura Appl. 2 (1887), 1–26. * [NY12] Shaheen Nazir and Masahiko Yoshinaga, _On the connectivity of the realization spaces of line arrangements_ , Ann. Sc. Norm. Super. Pisa Cl. Sci. 11 (2012), 921–937. 
* [OS80] Peter Orlik and Louis Solomon, _Combinatorics and topology of complements of hyperplanes_ , Invent. Math. 56 (1980), 167–189. * [Ost75] Alexander Markowich Ostrowski, _On multiplication and factorization of polynomials, i. lexicographic orderings and extreme aggregates of terms_ , Aequ. Math. 13 (1975), no. 3, 201–228. * [Oxl92] James G. Oxley, _Matroid theory_ , Oxford University Press, 1992. * [Oxl19] James Oxley, _A matroid extension result_ , SIAM J. Discrete Math. 33 (2019), no. 1, 138–152. * [Ran89] Richard Randell, _Lattice-isotopic arrangements are topologically isomorphic_ , Proc. Amer. Math. Soc. 107 (1989), 555–559. * [Rey82] Th. Reye, _Das Problem der Configurationen_ , Acta Math. 1 (1882), no. 1, 93–96. * [Rup04] David Rupprecht, _Semi-numerical absolute factorization of polynomials with integer coefficients_ , Journal of Symbolic Computation 37 (2004), no. 5, 557–574. * [SBA15] Klara Stokes and Maria Bras-Amorós, _A survey on the use of combinatorial configurations for anonymous database search_ , Adv. Res. Data Privacy 567 (2015), 407–419. * [Sch89] H. Schröter, _über die Bildungsweise und geometrische Construction der Configurationen $10_{3}$_, Nachr. Ges. Wiss Göittingen (1889), 239–253. * [SW90] Bernd Sturmfels and Neil White, _Rational realizations of $11_{3}$\- and $12_{3}$-configurations_, Aequationes Math. 39 (1990), 92–123. * [Vog86] Walter Vogler, _Representing groups by graphs with constant link and hypergraphs_ , Journal of Graph Theory 10 (1986), no. 4, 461–475. * [Ye13] Fei Ye, _Classification of moduli spaces of arrangements of 9 projective lines_ , Pacific J. of Math. 265 (2013), 243–256.
* Lee et al. (2017) Lee, C.-F., Ho, P. T. P., Li, Z.-Y., et al. 2017, Nature Astronomy, 1, 0152. * Lombardi et al. (2014) Lombardi, M., Bouy, H., Alves, J., et al. 2014, A&A, 566, A45. doi:10.1051/0004-6361/201323293 * Machida et al. (2008) Machida, M. N., Inutsuka, S.-. ichiro ., & Matsumoto, T. 2008, ApJ, 676, 1088. * Machida & Hosokawa (2013) Machida, M. N. & Hosokawa, T. 2013, MNRAS, 431, 1719. doi:10.1093/mnras/stt291 * Masson & Chernin (1993) Masson, C. R. & Chernin, L. M. 1993, ApJ, 414, 230. doi:10.1086/173071 * Matzner & McKee (1999) Matzner, C. D. & McKee, C. F. 1999, ApJ, 526, L109. doi:10.1086/312376 * Maud et al. (2015) Maud, L. T., Moore, T. J. T., Lumsden, S. L., et al. 2015, MNRAS, 453, 645. doi:10.1093/mnras/stv1635 * Megeath et al. (2012) Megeath, S. T., Gutermuth, R., Muzerolle, J., et al. 2012, AJ, 144, 192. doi:10.1088/0004-6256/144/6/192 * Megeath et al. (2016) Megeath, S. T., Gutermuth, R., Muzerolle, J., et al. 2016, AJ, 151, 5. doi:10.3847/0004-6256/151/1/5 * Megeath et al. (2022) Megeath, S. T., Gutermuth, R. A., & Kounkel, M. A. 2022, PASP, 134, 042001. doi:10.1088/1538-3873/ac4c9c * Mignone et al. (2012) Mignone, A., Zanni, C., Tzeferacos, P., et al. 2012, ApJS, 198, 7. doi:10.1088/0067-0049/198/1/7 * Motte et al. (1998) Motte, F., Andre, P., & Neri, R. 1998, A&A, 336, 150 * Motte et al. (2001) Motte, F., André, P., Ward-Thompson, D., et al. 2001, A&A, 372, L41. doi:10.1051/0004-6361:20010543 * Motte et al. (2021) Motte, F., Bontemps, S., Csengeri, T., et al. 2021, arXiv:2112.08182 * Mottram et al. (2017) Mottram, J. C., van Dishoeck, E. F., Kristensen, L. E., et al. 2017, A&A, 600, A99. doi:10.1051/0004-6361/201628682 * Myers & Ladd (1993) Myers, P. C. & Ladd, E. F. 1993, ApJ, 413, L47. doi:10.1086/186956 * Myers (2008) Myers, P. C. 2008, ApJ, 687, 340. doi:10.1086/591664 * Nagy et al. (2020) Nagy, Z., Menechella, A., Megeath, S. T., et al. 2020, A&A, 642, A137. doi:10.1051/0004-6361/201937342 * Offner et al. (2011) Offner, S. S. R., Lee, E. J., Goodman, A. A., et al. 2011, ApJ, 743, 91. * Offner & Arce (2014) Offner, S. S. R. & Arce, H. G. 2014, ApJ, 784, 61. doi:10.1088/0004-637X/784/1/61 * Offner & Chaban (2017) Offner, S. S. R. & Chaban, J. 2017, ApJ, 847, 104. doi:10.3847/1538-4357/aa8996 * Ostriker et al. (2001) Ostriker, E. C., Lee, C.-F., Stone, J. M., et al. 2001, ApJ, 557, 443. doi:10.1086/321649 * Oya et al. (2018) Oya, Y., Sakai, N., Watanabe, Y., et al. 2018, ApJ, 863, 72. doi:10.3847/1538-4357/aacf42 * Oya et al. (2022) Oya, Y., Kibukawa, H., Miyake, S., et al. 2022, PASP, 134, 094301. doi:10.1088/1538-3873/ac8839 * Pascucci et al. (2022) Pascucci, I., Cabrit, S., Edwards, S., et al. 2022, arXiv:2203.10068 * Pineda et al. (2020) Pineda, J. E., Segura-Cox, D., Caselli, P., et al. 2020, Nature Astronomy, 4, 1158. doi:10.1038/s41550-020-1150-z * Rabenanahary et al. (2022) Rabenanahary, M., Cabrit, S., Meliani, Z., et al. 2022, A&A, 664, A118. doi:10.1051/0004-6361/202243139 * Raga et al. (1993) Raga, A. C., Canto, J., Calvet, N., et al. 1993, A&A, 276, 539 * Rezaei Kh. et al. (2020) Rezaei Kh., S., Bailer-Jones, C. A. L., Soler, J. D., et al. 2020, A&A, 643, A151. doi:10.1051/0004-6361/202038708 * Rohde et al. (2022) Rohde, P. F., Walch, S., Seifried, D., et al. 2022, MNRAS, 510, 2552. doi:10.1093/mnras/stab3572 * Rosen & Smith (2004) Rosen, A. & Smith, M. D. 2004, MNRAS, 347, 1097. doi:10.1111/j.1365-2966.2004.07279.x * Roueff et al. (2021) Roueff, A., Gerin, M., Gratier, P., et al. 2021, A&A, 645, A26. 
doi:10.1051/0004-6361/202037776 * Santiago-García et al. (2009) Santiago-García, J., Tafalla, M., Johnstone, D., et al. 2009, A&A, 495, 169. * Seale & Looney (2008) Seale, J. P. & Looney, L. W. 2008, ApJ, 675, 427. * Scandariato et al. (2012) Scandariato, G., Da Rio, N., Robberto, M., et al. 2012, A&A, 545, A19. doi:10.1051/0004-6361/201219264 * Schwarz et al. (2012) Schwarz, K. R., Shirley, Y. L., & Dunham, M. M. 2012, AJ, 144, 115. doi:10.1088/0004-6256/144/4/115 * Shang et al. (2020) Shang, H., Krasnopolsky, R., Liu, C.-F., et al. 2020, ApJ, 905, 116. doi:10.3847/1538-4357/abbdb0 * Shu et al. (1995) Shu, F. H., Najita, J., Ostriker, E. C., et al. 1995, ApJ, 455, L155. doi:10.1086/309838 * Shu et al. (2000) Shu, F. H., Najita, J. R., Shang, H., et al. 2000, Protostars and Planets IV, 789 * Spitoni et al. (2008) Spitoni, E., Recchi, S., & Matteucci, F. 2008, A&A, 484, 743. doi:10.1051/0004-6361:200809403 * Snell et al. (1980) Snell, R. L., Loren, R. B., & Plambeck, R. L. 1980, Interstellar Molecules, 87, 173 * Solomon et al. (1981) Solomon, P. M., Huguenin, G. R., & Scoville, N. Z. 1981, ApJ, 245, L19. doi:10.1086/183513 * Stanke et al. (2006) Stanke, T., Smith, M. D., Gredel, R., et al. 2006, A&A, 447, 609. doi:10.1051/0004-6361:20041331 * Stanke et al. (2022) Stanke, T., Arce, H. G., Bally, J., et al. 2022, A&A, 658, A178. doi:10.1051/0004-6361/201937034 * Stanke et al. (2022) Stanke, T., Arce, H. G., Bally, J., et al. 2022, arXiv:2201.00463 * Stutz et al. (2013) Stutz, A. M., Tobin, J. J., Stanke, T., et al. 2013, ApJ, 767, 36. doi:10.1088/0004-637X/767/1/36 * Tabone et al. (2017) Tabone, B., Cabrit, S., Bianchi, E., et al. 2017, A&A, 607, L6. * Tafalla et al. (2004) Tafalla, M., Santiago, J., Johnstone, D., et al. 2004, A&A, 423, L21. * Takahashi et al. (2008) Takahashi, S., Saito, M., Ohashi, N., et al. 2008, ApJ, 688, 344. doi:10.1086/592212 * Takemura et al. (2021) Takemura, H., Nakamura, F., Ishii, S., et al. 2021, PASJ, 73, 487. doi:10.1093/pasj/psab014 * Testi & Sargent (1998) Testi, L. & Sargent, A. I. 1998, ApJ, 508, L91. doi:10.1086/311724 * Tobin et al. (2007) Tobin, J. J., Looney, L. W., Mundy, L. G., et al. 2007, ApJ, 659, 1404. * Tobin et al. (2020) Tobin, J. J., Sheehan, P. D., Megeath, S. T., et al. 2020, ApJ, 890, 130. * Tobin et al. (2022) Tobin, J. J., Offner, S. S. R., Kratter, K. M., et al. 2022, ApJ, 925, 39. doi:10.3847/1538-4357/ac36d2 * van der Marel et al. (2013) van der Marel, N., Kristensen, L. E., Visser, R., et al. 2013, A&A, 556, A76. doi:10.1051/0004-6361/201220717 * Vazzano et al. (2021) Vazzano, M. M., Fernández-López, M., Plunkett, A., et al. 2021, A&A, 648, A41. doi:10.1051/0004-6361/202039228 * van Kempen et al. (2016) van Kempen, T. A., Hogerheijde, M. R., van Dishoeck, E. F., et al. 2016, A&A, 587, A17. doi:10.1051/0004-6361/201424725 * Velusamy et al. (2014) Velusamy, T., Langer, W. D., & Thompson, T. 2014, ApJ, 783, 6. doi:10.1088/0004-637X/783/1/6 * Ward-Thompson et al. (2017) Ward-Thompson, D., Pattle, K., Bastien, P., et al. 2017, ApJ, 842, 66. doi:10.3847/1538-4357/aa70a0 * White & Hillenbrand (2004) White, R. J. & Hillenbrand, L. A. 2004, ApJ, 616, 998. doi:10.1086/425115 * Whitney et al. (2003) Whitney, B. A., Wood, K., Bjorkman, J. E., et al. 2003, ApJ, 591, 1049. doi:10.1086/375415 * Wilson & Matteucci (1992) Wilson, T. L. & Matteucci, F. 1992, A&A Rev., 4, 1. * Wilson et al. (2009) Wilson, T. L., Rohlfs, K., & Hüttemeister, S. 2009, Tools of Radio Astronomy, by Thomas L. Wilson; Kristen Rohlfs and Susanne Hüttemeister. 
ISBN 978-3-540-85121-9. Published by Springer-Verlag, Berlin, Germany, 2009. doi:10.1007/978-3-540-85122-6 * Xu et al. (2022) Xu, D., Offner, S. S. R., Gutermuth, R., et al. 2022, ApJ, 926, 19. doi:10.3847/1538-4357/ac39a0 * Xu et al. (2022) Xu, D., Offner, S. S. R., Gutermuth, R., et al. 2022, ApJ, 941, 81. doi:10.3847/1538-4357/aca153 * Xu & Kunz (2021) Xu, W. & Kunz, M. W. 2021, MNRAS, 502, 4911. doi:10.1093/mnras/stab314 * Yen et al. (2017) Yen, H.-W., Koch, P. M., Takakuwa, S., et al. 2017, ApJ, 834, 178. doi:10.3847/1538-4357/834/2/178 * Yıldız et al. (2015) Yıldız, U. A., Kristensen, L. E., van Dishoeck, E. F., et al. 2015, A&A, 576, A109. doi:10.1051/0004-6361/201424538 * Zhang et al. (2016) Zhang, Y., Arce, H. G., Mardones, D., et al. 2016, ApJ, 832, 158. doi:10.3847/0004-637X/832/2/158 * Zhang et al. (2019) Zhang, Y., Arce, H. G., Mardones, D., et al. 2019, ApJ, 883, 1.

## Appendix A Cloud Subtraction

Total Power array data is sensitive to large-scale emission and is crucial to recover the outflow mass. However, while it recovers the large-scale outflow emission, it also picks up the cloud emission. Even though we tried to avoid including cloud emission by excluding velocity channels close to the cloud velocity from the outflow maps, there were several sources in which emission from the parent cloud (or extended “contaminating” CO emission at different velocities) could not be avoided even at velocities more than 1 km s-1 away from the parent cloud velocity. Therefore, we developed cloud subtraction routines for the Ring Method and the Pixel Flux-tracing Technique, as shown in Figures 17 and 18, respectively.

The upper left panel of Figure 17 shows the 12CO integrated intensity map of the red-shifted lobe of the HOPS-135 outflow, which clearly shows large-scale cloud emission. To avoid contamination from cloud emission in the estimation of outflow parameters using the RM, we first estimate the averaged background per pixel inside the ring mask, but outside the outflow mask (as shown in the upper right panel of Figure 17). For tracers with cloud contamination, we subtracted the background cloud spectrum from the outflow spectrum, as shown in the lower panels of Figure 17. We then use the background-subtracted data to construct the mass-spectrum inside the rings to estimate the molecular outflow rates.

As for the Pixel Flux-tracing Technique (PFT), the mass spectrum is calculated for each individual pixel (instead of using rings). Hence, we adopted a slightly different method for the background subtraction, as shown in Figure 18. We first obtained the averaged profile of the quantities of interest as a function of distance from the central protostar, perpendicular to the outflow direction. To avoid subtracting outflow emission near the protostar, we set the background value in the inner region ($\leq 1050\,$au) to a constant, as shown in the upper plot of Figure 18. We rotate the radial profile to create a 2D background map (see lower middle panel of Figure 18). Then we subtract the background to remove the large-scale cloud emission (see lower right panel of Figure 18).

Figure 17: Subtraction of cloud emission using the Ring Method for HOPS-135. The red-shifted outflow lobe sits on top of the cloud, as shown in the 12CO moment 0 map (upper left panel). We measure the background emission in a ring outside the outflow mask, as shown in the upper right panel. For each ring at a given distance from the source, we measured its corresponding background.
For tracers with cloud contamination, we subtracted the background cloud spectrum from the spectrum obtained over the area covered by the outflow (lobe) mask, as shown in the bottom three panels. In the bottom right panel, the ‘outflow’ label represents the background-subtracted column density spectrum.

Figure 18: Cloud subtraction method for the PFT. In this example, we show (in the upper left panel) the uncorrected outflow mass map for HOPS-10, which clearly shows background cloud emission not associated with the outflow. We measure the background emission as a function of distance from the central protostar (perpendicular to the outflow direction), as shown in the upper panel. The orange and blue lines represent the left side and the right side of the background emission profile from the protostar. We set the central region ($\leq 1050\,$au) of the profile to a constant to avoid subtracting outflow emission near the protostar. We used the radial profile to generate a two-dimensional background map (upper middle panel). We then subtract the background map to remove the cloud contamination, resulting in the background-subtracted map (upper right panel).

## Appendix B Outflow inclination properties and wide-angle wind modelling

In this section, we compare the inclination angle derived from the wide-angle wind modeling to the inclination angle derived from the disk major-to-minor axis ratio. The inclination angles derived from both methods are shown in Table 5. We obtain the outflow inclination angle by fitting a wide-angle wind model (Li & Shu, 1996; Lee et al., 2000). The wide-angle wind model has been used by others to estimate the inclination angle (and other properties) of observed molecular outflows (e.g., Hirano et al., 2010; Arce et al., 2013; Yen et al., 2017). In this model, the entrained material can be described by a parabolic shell with an expansion velocity analogous to the Hubble law. Many of the outflows from Class I sources (and some from flat-spectrum sources) show parabolic-like cavities similar to those expected from the wide-angle wind model. On the other hand, this wide-angle wind model may not be appropriate for modeling molecular outflows from Class 0 sources, as they are typically very collimated and show a jet-like morphology (especially at high velocities). However, some of these Class 0 outflows also exhibit wider outflow cavities at low velocities (e.g., HOPS-10, HOPS-198). Hence, modeling these outflow cavity walls with a wide-angle wind model can provide an estimate of the outflow inclination angle.

The wide-angle wind model of Lee et al. (2000) can be described by 3 independent parameters: $c_{0}$, $v_{0}$, and $i$. Taking $z$ along the outflow axis and $R$ as the radial distance from that axis, a cylindrically symmetric parabolic shell surface can be described by:

$z=c_{0}R^{2},$ (B1)

and the corresponding velocity follows:

$v_{z}=v_{0}z,$ (B2)
$v_{R}=v_{0}R,$ (B3)
$v_{\phi}=0.$ (B4)

Figure 19 outlines the basic steps for fitting the wide-angle wind model to constrain the outflow inclination. We start by creating an outflow mask from the 12CO integrated intensity (moment 0) map. The outflow mask is used to isolate the outflow from the surroundings and fit the outflow lobe morphology to the model. We then applied the wide-angle wind model to constrain the pair solutions for $c_{0}$ and $i$ for each lobe (red-shifted and blue-shifted lobe). The parabola should fit the edge of the outflow mask.
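As a purely illustrative companion to Equations (B1)-(B4), the following sketch (Python/NumPy; not the fitting code used in this appendix) samples the parabolic shell and its Hubble-law velocity field and projects it for a given inclination of the outflow axis to the line of sight; the resulting sky offsets and line-of-sight velocities are the ingredients of a synthetic PV diagram. The orientation/sign conventions and the example parameter values are assumptions made for this sketch.

```python
import numpy as np

def shell_pv(c0, v0, inc_deg, rmax=4000.0, n=200):
    """Sample the shell z = c0 * R**2 with v_z = v0*z, v_R = v0*R, v_phi = 0
    (Equations B1-B4) and project it for an outflow axis inclined by inc_deg
    to the line of sight (0 deg = pole-on, 90 deg = edge-on).
    Units assumed here: R, z in au; c0 in au^-1; v0 in km/s per au."""
    i = np.radians(inc_deg)
    R, phi = np.meshgrid(np.linspace(0.0, rmax, n),
                         np.linspace(0.0, 2.0 * np.pi, n))
    z = c0 * R**2
    vz, vR = v0 * z, v0 * R
    # Sky offset along the projected outflow axis and line-of-sight velocity
    # (one possible sign/orientation convention):
    x_axis = z * np.sin(i) + R * np.cos(phi) * np.cos(i)
    v_los = vz * np.cos(i) - vR * np.cos(phi) * np.sin(i)
    return x_axis.ravel(), v_los.ravel()

# Parameter values of the order listed in Table 6 (illustrative only):
offset_au, v_kms = shell_pv(c0=10e-4, v0=12.5e-4, inc_deg=30.0)
```

Varying $c_{0}$, $v_{0}$, and $i$ in such a model and comparing the synthetic morphology and PV diagrams with the data is the essence of the fitting procedure described next.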
For every 5 degrees of inclination angle, from 0 degrees to 180 degrees, we search for the best solutions for $c_{0}$. In total, we obtained 36 pairs of solutions (for parameters $c_{0}$ and $i$). Then for each pair of $c_{0}$ and $i$ solution, we run different wide-angle wind models with a different characteristic velocity ($v_{0}$) using 0.3, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, and 6.0 km s-1. In total, we produced 468 models to compare with the 12CO position-velocity (PV) diagram along the outflow axis. The central panel in Figure 19 shows the residual plot with red (value of 1) representing the data, and blue (value of -1) representing the wide-angle wind model. We eliminate solutions with large residuals in the PV diagram. After reducing the number of solutions, we used the remaining solutions (with parameters $c_{0}$, $i$, and $v_{0}$) to compare with the PV perpendicular to the outflow. For the wide-angle wind model, the PV diagram perpendicular to the outflow axis is an ellipse. By matching the ellipse with the observational data, we can constrain the inclination angle of the outflow axis (with respect to the line-of-sight) as shown for HOPS-130 in Figure 19. The resulting estimated inclinations are shown in Table 5 with 0 degrees being pole-on and 90 degrees being an edge-on system. The parameters for the best wide-angle wind model for each source are shown in Table 6. Note that we divide the $c_{0}$ and $v_{0}$ by the source distance of 400 pc to convert the values in the unit of au instead of arcseconds. In Figure 20 we plot the distribution of the difference in inclination values derived using the two different methods: the outflow wide-angle wind fitting method and the disk continuum method. For the disk continuum method, the inclination angle can be derived by using the disk major to minor axis ratio and assuming the protostellar outflows are perpendicular to the disk plane. While there is a large dispersion in Figure 20, the distribution peaks at $10\sim 20^{\circ}$ and shows that in general both methods are consistent with each other. For the vast majority of sources, the estimated $i$ from the two methods are different within $35\arcdeg$, and only three sources (i.e., 14% of the sample) show differences in the estimate of $i$ of $45\arcdeg$ or more. The small difference between the two methods might be partially explained by the assumption of a thin disk for the disk minor to major axis ratio method. Due to the disk thickness, the method cannot fully recover edge-on disk systems (small outflow inclination angles). Even so, the inclination angle derived from the high-resolution disk continuum is a likely more accurate estimate than that derived using the wide-angle wind model because it involves significantly fewer assumptions. Moreover, it is likely that not all molecular outflows are entrained by a wide-angle wind (the main assumptions when using the wide-angle wind model). In particular, many of our Class 0 outflows are jet-like and deviate from the parabolic shell structure expected in outflows driven by the wide-angle winds. We also compare our results to the disk inclination angle derived through modeling of SED of protostars by Furlan et al. (2016). We find that the distribution of the derived inclination angle difference is completely random. This strongly suggests that the disk inclination angle from SED modeling is not reliable, and should not be used in determining disk and outflow inclination angles. 
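For reference, the disk-based estimate discussed above reduces, under the stated thin-disk assumption, to $i=\arccos(\mathrm{minor}/\mathrm{major})$. A short check (Python, illustrative) against a few Table 5 entries:

```python
import numpy as np

def disk_inclination(major, minor):
    """Inclination from the deconvolved disk axis ratio, assuming a thin,
    circular disk (0 deg = pole-on, 90 deg = edge-on)."""
    return np.degrees(np.arccos(minor / major))

print(round(disk_inclination(0.14, 0.07)))   # HOPS-10  -> 60
print(round(disk_inclination(0.13, 0.12)))   # HOPS-11  -> 23
print(round(disk_inclination(0.19, 0.09)))   # HOPS-13  -> 62
```

This reproduces the disk-based inclination column of Table 5 for these sources.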
Alternatively, machine learning approaches may be used to predict both the plane-of-sky orientation and line-of-sight outflow inclination from CO spectral data (Xu et al., 2022). Figure 19: Method for constraining the outflow inclination angle by fitting a wide-angle wind model. The example shown here uses the outflow from HOPS-130. Table 5: Outflow inclination angles Source | Outflow inclination | Disk | Disk | Disk | Outflow inclination ---|---|---|---|---|--- | from wide-angle wind | major axis | minor axis | minor to major | derived from | model fit [deg] | [arcsec]aaDeconvolved sizes from ALMA 0.87 mm continuum data published by Tobin et al. (2020). | [arcsec]aaDeconvolved sizes from ALMA 0.87 mm continuum data published by Tobin et al. (2020). | axis ratio | disk [deg] HOPS-10 | 40 | 0$\farcs$14 | 0$\farcs$07 | 0.50 | 60 HOPS-11 | 45 | 0$\farcs$13 | 0$\farcs$12 | 0.92 | 23 HOPS-13 | 50 | 0$\farcs$19 | 0$\farcs$09 | 0.47 | 62 HOPS-127 | 25 | 0$\farcs$05 | 0$\farcs$03 | 0.60 | 53 HOPS-129 | 20 | 0$\farcs$10 | 0$\farcs$06 | 0.60 | 53 HOPS-130 | 25 | 0$\farcs$46 | 0$\farcs$15 | 0.33 | 71 HOPS-134 | 45 | 0$\farcs$06 | 0$\farcs$05 | 0.83 | 34 HOPS-135 | 35 | 0$\farcs$11 | 0$\farcs$06 | 0.55 | 57 HOPS-150B | 25 | 0$\farcs$03 | 0$\farcs$07 | 0.78 | 39 HOPS-157 | 20 | 0$\farcs$53 | 0$\farcs$46 | 0.87 | 30 HOPS-164 | 55 (blue)/25 (red) | 0$\farcs$22 | 0$\farcs$14 | 0.64 | 50 HOPS-166 | 20 | 0$\farcs$16 | 0$\farcs$15 | 0.94 | 20 HOPS-169 | 70 | 0$\farcs$17 | 0$\farcs$13 | 0.77 | 40 HOPS-177 | 30 | 0$\farcs$19 | 0$\farcs$03 | 0.16 | 81ccAs described in 4.1.4, we believe this estimate of the inclination angle is wrong because the continuum image for this source has a poor signal-to-noise. We instead use the mean inclination angle from a random uniform distribution of outflow orientations on the sky ($57.3\arcdeg$) for this source. HOPS-185 | 30bbSuspicious results. The derived inclination angle is very different compared to the expected value from the molecular outflow morphology (Figure 2). | 0$\farcs$27 | 0$\farcs$10 | 0.37 | 68 HOPS-191 | 30 | 0$\farcs$05 | 0$\farcs$02 | 0.41 | 66 HOPS-194 | 20 | 0$\farcs$19 | 0$\farcs$17 | 0.89 | 27 HOPS-198 | 25bbSuspicious results. The derived inclination angle is very different compared to the expected value from the molecular outflow morphology (Figure 2). | 0$\farcs$41 | 0$\farcs$07 | 0.17 | 80 HOPS-200 | NA | 0$\farcs$46 | 0$\farcs$16 | 0.35 | 70 HOPS-355 | 45 | 0$\farcs$08 | 0$\farcs$07 | 0.88 | 29 HOPS-408 | 70 | 0$\farcs$28 | 0$\farcs$24 | 0.88 | 59 Figure 20: Histogram of inclination angle differences between those derived from the wide-angle wind model fit to the inclinations derived from the disk minor to major axis ratio. Table 6: Parameters for the best outflow wide-angle wind model for each source Source | $i$ | $c_{0}$aaThe uncertainty for $c_{0}$ is 0.74/$sin(i)\times 10^{-4}$ au-1. | $v_{0}$bbThe values are either 7.5, 12.5, 25.0 or 37.5 $\,10^{-4}$km s-1 because these are the values for the grid parameter search. The uncertainty for $v_{0}$ is half the interval between the values evaluated by the model. For 7.5$\times\,10^{-4}$ km s-1, the uncertainty is $\pm 3.8\times 10^{-4}$ km s-1, for all other values the the uncertainty is $\pm 6.3\times 10^{-4}$ km s-1. ---|---|---|--- | [deg] | [$10^{-4}$ au-1] | [$10^{-4}$ km s-1] HOPS-10 | 40 | 40.3 (blue) / 10.8 (red) | 37.5 HOPS-11 | 45 | 6.2 | 12.5 HOPS-13 | 50 | 25.0ccUnreliable results, poorly constrained. 
| 7.5 HOPS-127 | 25 | 13.9 (blue) / 10.4 (red) | 25.0 HOPS-129 | 20 | 4.3 (blue) / 8.6 (red) | 12.5 (blue) / 7.5 (red) HOPS-130 | 25 | 7.0 (blue) / 10.4 (red) | 12.5 HOPS-134 | 45 | 4.2 | 12.5 HOPS-135 | 35 | 12.8 | 12.5 HOPS-150B | 25 | 52.2ccUnreliable results, poorly constrained. | 7.5 HOPS-157 | 20 | 64.5 | 37.5 HOPS-164 | 55 (blue)/25 (red) | 45.2 (blue) / 80.0 (red) | 25.0 (blue) / 7.5 (red) HOPS-166 | 20 | 4.3 | 7.5 HOPS-169 | 70 | 40.7 | 12.5 HOPS-177 | 30 | 5.9 | 7.5 HOPS-185 | 30ddSuspicious results. The derived inclination angle is very different compared to the expected value from the molecular outflow morphology (Figure 2). | 5.9 (blue) / 8.8 (red) | 25.0 HOPS-191 | 30 | 8.8 (blue) / 5.9 (red) | 12.5 HOPS-194 | 20 | 4.3 | 12.5 (blue) / 7.5 (red) HOPS-198 | 25ddSuspicious results. The derived inclination angle is very different compared to the expected value from the molecular outflow morphology (Figure 2). | 10.4 (blue) / 45.2 (red) | 25.0 HOPS-200 | NA | NA | NA HOPS-355 | 45 | 27.0 | 12.5 HOPS-408 | 70 | 20.3 | 25.0 ## Appendix C Molecular Outflow Mass, Momentum, and Energy velocity range information In this section we show the velocity range used for constructing the molecular outflow mass, momentum, and energy spectrum. An example of outflow mass spectrum constructed from CO, 13CO, and C18O is shown in Figure 5. Table 7: Velocity ranges for constructing Outflow Mass, Momentum, and Energy Spectra | | Blue-shifted | | | Red-shifted | | ---|---|---|---|---|---|---|--- Source | 12CO | 13CO | C18O | C18O | 13CO | 12CO | | ($km\,s^{-1}$) | ($km\,s^{-1}$) | ($km\,s^{-1}$) | ($km\,s^{-1}$) | ($km\,s^{-1}$) | ($km\,s^{-1}$) | HOPS-10 | $v\leq 7.52$ | $7.52\leq v\leq 7.76$ | NA | NA | $9.58\leq v\leq 9.74$ | $9.74\leq v$ | HOPS-11 | $v\leq 5.88$ | $5.88\leq v\leq 6.68$ | $6.68\leq v\leq 7.10$ | $8.50\leq v\leq 10.33$ | $10.33\leq v\leq 11.12$ | $11.12\leq v$ | HOPS-13 | $v\leq 5.02$ | $5.02\leq v\leq 5.58$ | $5.58\leq v\leq 5.66$ | NA | NA | NA | HOPS-127 | $v\leq 3.27$ | $3.27\leq v\leq 3.75$ | NA | $5.02\leq v\leq 5.65$ | $5.65\leq v\leq 6.84$ | $6.84\leq v$ | HOPS-129 | $v\leq 2.52$ | $2.52\leq v\leq 2.84$ | $2.84\leq v\leq 2.92$ | NA | NA | NA | | $2.92\leq v\leq 3.07$ | | | | | | HOPS-130 | $v\leq 2.45$ | NA | NA | NA | NA | NA | HOPS-134 | NA | NA | NA | NA | $6.93\leq v\leq 7.80$ | $7.80\leq v$ | HOPS-135 | $v\leq 3.20$ | $3.20\leq v\leq 3.68$ | $3.68\leq v\leq 4.39$ | $4.95\leq v\leq 5.20$ | $5.20\leq v\leq 6.30$ | $6.30\leq v$ | HOPS-150 | $v\leq 3.28$ | $3.28\leq v\leq 3.62$ | $3.62\leq v\leq 3.68$ | NA | NA | NA | HOPS-157 | $v\leq 3.64$ | $4.64\leq v\leq 4.75$ | NA | NA | $6.64\leq v\leq 9.28$ | $9.28\leq v$ | HOPS-164 | $v\leq 4.61$ | $4.61\leq v\leq 4.92$ | $4.92\leq v\leq 5.09$ | NA | $6.58\leq v\leq 6.83$ | $6.83\leq v$ | HOPS-166 | $v\leq 5.85$ | NA | NA | NA | $10.05\leq v\leq 12.05$ | $12.05\leq v$ | HOPS-169 | $v\leq 2.95$ | $2.95\leq v\leq 5.96$ | $5.96\leq v\leq 6.67$ | $7.47\leq v\leq 8.42$ | $9.58\leq v\leq 9.62$ | $9.62\leq v$ | HOPS-177 | $v\leq 6.78$ | $6.78\leq v\leq 8.37$ | NA | NA | $9,40\leq v\leq 9.96$ | $9.96\leq v$ | HOPS-185 | $v\leq 6.38$ | $6.38\leq v\leq 6.61$ | $6.61\leq v\leq 6.94$ | NA | NA | $9.87\leq v$ | HOPS-191 | $v\leq 7.64$ | NA | NA | NA | NA | $10.5\leq v$ | HOPS-194 | NA | NA | NA | NA | NA | $9.85\leq v$ | HOPS-198 | $v\leq 4.72$ | $4.72\leq v\leq 4.79$ | $4.79\leq v\leq 4.88$ | NA | NA | $5.99\leq v\leq 6.54$ | | | | | | | $10.51\leq v$ | HOPS-200 | $v\leq 6.59$ | $6.59\leq v\leq 6.66$ | $6.66\leq v\leq 7.75$ | NA | NA | NA | 
HOPS-355 | $v\leq 4.89$ | $6.05\leq v\leq 6.14$ | $6.14\leq v\leq 6.40$ | NA | NA | $10.05\leq v$ | HOPS-408 | $v\leq 2.38$ | $2.38\leq v\leq 2.83$ | NA | $4.21\leq v\leq 6.15$ | $6.15\leq v\leq 6.90$ | $6.90\leq v$ | ## Appendix D Summary of the sources excluded from each analysis In this section we summarize the sources excluded from each analysis and give the reasons for their exclusion. The results are summarized in Table 8. Table 8: Sources excluded for each analysis Main Analysis | Analysis item | Section/ Figure | Sources excluded | Reason ---|---|---|---|--- | | | HOPS-408 | Outflow is barely resolved. Mass, | Radial profile | Figure 9 | HOPS-166, HOPS-194 | Significantly higher bolometric luminosity Momentum, | | | | and core mass compared to other sources. Energy | | Figure 14, | HOPS-408 | Outflow is barely resolved. rate | Class average | Table 3, Table 4 | HOPS-166, HOPS-194 | Significantly higher bolometric luminosity | | | | and core mass compared to other sources. | | | HOPS-11 blue-shifted lobe | Outflow lobe is strongly asymmetric. | | | HOPS-13, HOPS-200 | Very messy outflow. | Outflow Momentum | Figure 13 | HOPS-134 blue-shifted lobe | No detection. | angular profile | | HOPS-135 blue-shifted lobe | Low S/N ratio. | | | HOPS-177 red-shifted lobe | Low S/N ratio. | | | HOPS-11 blue-shifted lobe | Outflow lobe is strongly asymmetric. Angular | | | HOPS-13, HOPS-200 | Very messy outflow. Profile | | | HOPS-130 red-shifted lobe | Compact emission, hard to separate out. | | | HOPS-134 blue-shifted lobe | No detection. | Outflow Energy | Figure 13 | HOPS-135 blue-shifted lobe | Low S/N ratio. | angular profile | | HOPS-150 | Contaminated by nearby source. | | | HOPS-157 red-shifted lobe | Compact emission, hard to separate out. | | | HOPS-177 red-shifted lobe | Low S/N ratio. | | | HOPS-191 red-shifted lobe | Low S/N ratio. | | | HOPS-194 | Low S/N ratio. | | | HOPS-408 | Outflow is barely resolved. | | | HOPS-150, HOPS-166 | Opening angle is ill-defined. | Conventional | Figure 11 | | Poor Gaussian fits to the 12CO data. | Opening Angle | | HOPS-194 | Outflow opening angle measured from | | | | C18O moment 0 map by eye. Opening | | | HOPS-13, HOPS-134 | Angle | Momentum | Figure 12 | HOPS-157, HOPS-166, | Poor Gaussian fits to the Momentum Maps. | Opening Angle | | HOPS-177, HOPS-194, | | | | HOPS-200 | | | | HOPS-13, HOPS-157 | | Energy | Figure 12 | HOPS-134, HOPS-191 | Poor Gaussian fits to the Energy Maps. | Opening Angle | | HOPS-194, HOPS-200 | | | | HOPS-150 | Contaminated by nearby source. | Outflow | Figure 15 | HOPS-13, HOPS-150 | Poorly constrained. | Curvature | | HOPS-200 | ## Appendix E Molecular Outflow Rates In this section, we report all the measurements of the outflow mass ($\dot{M}$), momentum ($\dot{P}$), and energy ($\dot{E}$) ejection rates for both the red-shifted (R) and blue-shifted (B) lobes at different radii. We present the outflow rates using all outflow gas in Table 9, Table 10, and Table 11. For gas with escape velocity $v_{out}>1$ km s-1, the outflow rates are reported in Table 12, Table 13, and Table 14. Similarly, for gas with escape velocity $v_{out}>2$ km s-1, the outflow rates are reported in Table 15, Table 16, and Table 17. Table 9: Molecular outflow rates using all outflow gas, within $2380-3740$ au and $3740-5100$ au away from source | | 2380 | $-$ | 3740 | AU | | | 3740 | $-$ | 5100 | AU | ---|---|---|---|---|---|---|---|---|---|---|---|--- Source | $\dot{M_{B}}$aaIn units of M⊙ Myr-1. | $\dot{M_{R}}$aaIn units of M⊙ Myr-1.
| $\dot{P_{R}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{P_{B}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{E_{R}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. | $\dot{E_{B}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. | $\dot{M_{B}}$aaIn units of M⊙ Myr-1. | $\dot{M_{R}}$aaIn units of M⊙ Myr-1. | $\dot{P_{R}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{P_{B}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{E_{R}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. | $\dot{E_{B}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. HOPS10 | 1.46 | 3.11 | 5.14 | 71.73 | 5.78E+41 | 4.75E+43 | 1.79 | 2.93 | 10.54 | 68.68 | 1.79E+42 | 4.94E+43 HOPS10 | 1.46 | 2.67 | 5.14 | 30.38 | 5.78E+41 | 8.30E+42 | 1.79 | 2.48 | 10.54 | 25.07 | 1.79E+42 | 6.49E+42 no jet | | | | | | | | | | | | HOPS11 | 1.70 | 4.30 | 7.66 | 26.69 | 8.19E+41 | 4.08E+42 | 2.29 | 5.43 | 11.06 | 39.39 | 1.24E+42 | 7.64E+42 HOPS11 | 1.62 | 3.94 | 5.73 | 17.14 | 3.58E+41 | 1.51E+42 | 2.04 | 4.99 | 6.51 | 24.93 | 3.52E+41 | 2.74E+42 no jet | | | | | | | | | | | | HOPS164 | 1.76 | 2.95 | 10.23 | 20.82 | 2.61E+42 | 1.00E+43 | 1.34 | 3.17 | 9.01 | 28.17 | 2.78E+42 | 1.44E+43 HOPS164 | 1.63 | 2.75 | 5.01 | 8.18 | 2.22E+41 | 6.44E+41 | 1.22 | 2.81 | 3.90 | 7.49 | 1.87E+41 | 4.58E+41 no jet | | | | | | | | | | | | HOPS169 | 3.79 | 2.30 | 24.99 | 15.18 | 3.09E+42 | 2.02E+42 | 3.11 | 2.13 | 17.75 | 18.24 | 1.82E+42 | 2.44E+42 HOPS198 | 0.46 | 3.01 | 5.31 | 22.22 | 8.27E+41 | 4.62E+42 | 0.31 | 1.56 | 3.85 | 28.58 | 6.08E+41 | 1.25E+43 HOPS355 | 0.22 | 0.05 | 0.50 | 0.36 | 2.13E+40 | 2.76E+40 | 0.17 | 0.06 | 0.43 | 0.46 | 2.16E+40 | 3.87E+40 HOPS408 | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA HOPS127 | 0.34 | 2.37 | 2.39 | 6.59 | 2.81E+41 | 2.62E+41 | 0.23 | 2.42 | 1.47 | 6.82 | 1.55E+41 | 3.25E+41 HOPS130 | 0.71 | NA | 10.02 | NA | 1.76E+42 | NA | 0.48 | NA | 6.70 | NA | 1.21E+42 | NA HOP135 | 0.19 | 13.37 | 0.39 | 43.09 | 1.99E+40 | 1.67E+42 | 0.19 | 11.52 | 0.28 | 36.11 | 1.14E+40 | 1.38E+42 HOPS157 | 0.93 | NA | 2.76 | NA | 1.93E+41 | NA | 1.27 | NA | 4.02 | NA | 3.38E+41 | NA HOPS177 | 7.12 | 0.74 | 20.14 | 1.05 | 6.43E+41 | 1.92E+40 | 5.77 | 0.79 | 16.98 | 1.28 | 5.68E+41 | 3.01E+40 HOPS185 | 8.28 | 0.12 | 30.95 | 0.85 | 2.05E+42 | 6.29E+40 | 4.62 | 0.16 | 16.25 | 1.14 | 6.85E+41 | 8.25E+40 HOPS191 | 0.61 | 0.06 | 2.55 | 0.60 | 1.47E+41 | 7.33E+40 | 0.83 | 0.02 | 3.93 | 0.21 | 2.62E+41 | 2.02E+40 HOPS129 | 0.90 | 0.04 | 1.08 | 0.02 | 1.58E+40 | 1.63E+38 | 0.70 | 0.04 | 0.97 | 0.03 | 1.80E+40 | 1.87E+38 HOPS134 | NA | 0.39 | NA | 1.71 | NA | 1.32E+41 | NA | 0.29 | NA | 0.94 | NA | 6.31E+40 HOPS13 | 1.69 | NA | 6.34 | NA | 3.58E+41 | NA | 1.81 | NA | 7.98 | NA | 5.20E+41 | NA HOPS150 | 0.31 | NA | 0.28 | NA | 6.68E+39 | NA | 0.15 | NA | 0.23 | NA | 6.97E+39 | NA HOPS166 | 0.24 | 2.51 | 1.36 | 8.02 | 9.98E+40 | 3.93E+41 | 0.36 | 1.74 | 1.78 | 6.63 | 9.81E+40 | 3.90E+41 HOPS194 | NA | 0.16 | NA | 0.50 | NA | 1.94E+40 | NA | 0.17 | NA | 0.50 | NA | 1.68E+40 HOPS200 | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA ccHOPS-194 opening angle is measured directly from 
the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. In units of erg Myr-1. Table 10: Molecular outflow rates using all outflow gas, within $5100-6460$ au and $6460-7820$ au away from source | | 5100 | $-$ | 6460 | AU | | | 6460 | $-$ | 7820 | AU | ---|---|---|---|---|---|---|---|---|---|---|---|--- Source | $\dot{M_{B}}$aaIn units of M⊙ Myr-1. | $\dot{M_{R}}$aaIn units of M⊙ Myr-1. | $\dot{P_{R}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{P_{B}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{E_{R}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. | $\dot{E_{B}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. | $\dot{M_{B}}$aaIn units of M⊙ Myr-1. | $\dot{M_{R}}$aaIn units of M⊙ Myr-1. | $\dot{P_{R}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{P_{B}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{E_{R}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. | $\dot{E_{B}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. HOPS10 | 1.96 | 3.00 | 14.87 | 78.66 | 3.34E+42 | 6.40E+43 | 1.77 | 2.82 | 13.43 | 95.12 | 3.63E+42 | 7.86E+43 HOPS10 | 1.96 | 2.41 | 14.87 | 19.39 | 3.34E+42 | 3.86E+42 | 1.77 | 2.08 | 13.43 | 20.96 | 3.63E+42 | 4.12E+42 no jet | | | | | | | | | | | | HOPS11 | 2.24 | 4.95 | 10.04 | 36.55 | 1.01E+42 | 6.73E+42 | 2.05 | 3.73 | 7.74 | 28.12 | 5.51E+41 | 4.85E+42 HOPS11 | 2.05 | 4.76 | 6.63 | 29.34 | 3.59E+41 | 3.86E+42 | 2.05 | 3.22 | 7.74 | 15.31 | 5.51E+41 | 1.27E+42 no jet | | | | | | | | | | | | HOPS164 | 0.50 | 3.22 | 5.61 | 29.14 | 2.58E+42 | 1.72E+43 | NA | 2.28 | 0.02 | 20.79 | 4.92E+39 | 1.29E+43 HOPS164 | 0.40 | 2.85 | 1.07 | 6.55 | 3.46E+40 | 1.93E+41 | NA | 2.03 | 0.01 | 4.88 | 2.28E+38 | 1.57E+41 no jet | | | | | | | | | | | | HOPS169 | 2.54 | 1.71 | 16.10 | 16.66 | 1.73E+42 | 2.28E+42 | 1.48 | 1.52 | 10.20 | 14.62 | 1.02E+42 | 1.74E+42 HOPS198 | 0.15 | 1.24 | 1.50 | 16.93 | 1.87E+41 | 6.50E+42 | 0.09 | 0.33 | 0.74 | 2.05 | 7.41E+40 | 2.85E+41 HOPS355 | 0.14 | 0.05 | 0.36 | 0.41 | 1.54E+40 | 3.67E+40 | 0.15 | 0.04 | 0.42 | 0.31 | 1.65E+40 | 2.64E+40 HOPS408 | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA HOPS127 | 0.10 | 2.35 | 0.42 | 6.76 | 2.91E+40 | 4.17E+41 | 0.05 | 2.23 | 0.19 | 5.55 | 1.01E+40 | 1.97E+41 HOPS130 | 0.21 | NA | 2.35 | NA | 3.59E+41 | NA | 0.10 | NA | 0.76 | NA | 8.06E+40 | NA HOP135 | 0.11 | 2.35 | 0.12 | 7.21 | 2.86E+39 | 2.73E+41 | NA | NA | NA | NA | NA | NA HOPS157 | 0.31 | NA | 0.45 | NA | 9.73E+39 | NA | NA | NA | NA | NA | NA | NA HOPS177 | 5.31 | 1.03 | 16.03 | 1.67 | 5.65E+41 | 4.12E+40 | 3.88 | 1.25 | 11.95 | 1.96 | 4.45E+41 | 4.99E+40 HOPS185 | 5.30 | 0.18 | 19.20 | 1.25 | 8.30E+41 | 8.84E+40 | 5.18 | 0.06 | 18.55 | 0.42 | 7.12E+41 | 3.36E+40 HOPS191 | 0.80 | 0.01 | 4.45 | 0.13 | 3.53E+41 | 1.17E+40 | 0.36 | 0.01 | 1.50 | 0.05 | 9.17E+40 | 4.47E+39 HOPS129 | 0.37 | 0.04 | 0.55 | 0.03 | 1.14E+40 | 1.88E+38 | 0.25 | 0.02 | 0.40 | 0.01 | 9.21E+39 | 8.83E+37 HOPS134 | NA | 0.19 | NA | 0.37 | NA | 7.63E+39 | NA | 0.10 | NA | 0.22 | NA | 5.16E+39 HOPS13 | 0.18 | NA | 0.74 | NA | 3.95E+40 | NA | NA | NA | NA | NA | NA | NA HOPS150 | 0.05 | 
NA | 0.03 | NA | 2.70E+38 | NA | 0.06 | NA | 0.07 | NA | 1.66E+39 | NA HOPS166 | 0.33 | 1.46 | 1.62 | 5.62 | 8.71E+40 | 3.35E+41 | 0.32 | 0.76 | 1.64 | 2.89 | 8.96E+40 | 1.61E+41 HOPS194 | NA | 0.13 | NA | 0.37 | NA | 1.13E+40 | NA | 0.10 | NA | 0.28 | NA | 8.95E+39 HOPS200 | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. In units of erg Myr-1. Table 11: Molecular outflow rates using all outflow gas, within $7820-9180$ au away from source | | 7820 | $-$ | 9180 | AU | ---|---|---|---|---|---|--- Source | $\dot{M_{B}}$aaIn units of M⊙ Myr-1. | $\dot{M_{R}}$aaIn units of M⊙ Myr-1. | $\dot{P_{R}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{P_{B}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{E_{R}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. | $\dot{E_{B}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. HOPS10 | 0.79 | 1.90 | 4.05 | 76.61 | 5.59E+41 | 6.43E+43 HOPS10 | 0.79 | 1.29 | 4.05 | 15.65 | 5.59E+41 | 3.03E+42 no jet | | | | | | HOPS11 | 1.63 | 2.30 | 6.90 | 21.40 | 4.60E+41 | 3.63E+42 HOPS11 | 1.63 | 1.32 | 6.90 | 4.48 | 4.60E+41 | 1.88E+41 no jet | | | | | | HOPS164 | NA | 1.77 | NA | 19.95 | NA | 1.41E+43 HOPS164 | NA | 1.55 | NA | 3.89 | NA | 1.46E+41 no jet | | | | | | HOPS169 | 1.09 | 1.38 | 8.87 | 11.86 | 8.80E+41 | 1.22E+42 HOPS198 | 0.10 | NA | 0.88 | NA | 9.77E+40 | NA HOPS355 | 0.13 | 0.03 | 0.34 | 0.19 | 1.31E+40 | 1.22E+40 HOPS408 | NA | NA | NA | NA | NA | NA HOPS127 | 0.10 | 2.00 | 0.33 | 5.47 | 1.73E+40 | 2.04E+41 HOPS130 | NA | NA | NA | NA | NA | NA HOP135 | NA | NA | NA | NA | NA | NA HOPS157 | NA | NA | NA | NA | NA | NA HOPS177 | 1.31 | 1.14 | 4.13 | 1.76 | 1.61E+41 | 4.43E+40 HOPS185 | NA | NA | NA | NA | NA | NA HOPS191 | NA | NA | NA | NA | NA | NA HOPS129 | 0.13 | NA | 0.28 | NA | 8.94E+39 | 4.91E+35 HOPS134 | NA | 0.08 | NA | 0.16 | NA | 4.10E+39 HOPS13 | NA | NA | NA | NA | NA | NA HOPS150 | 0.06 | NA | 0.08 | NA | 1.88E+39 | NA HOPS166 | 0.14 | 0.49 | 0.66 | 2.35 | 3.31E+40 | 1.58E+41 HOPS194 | NA | 0.05 | NA | 0.13 | NA | 4.10E+39 HOPS200 | NA | NA | NA | NA | NA | NA ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. In units of erg Myr-1. Table 12: Molecular outflow rates for gas with $v_{out}>1$ km s-1, within $2380-3740$ au and $3749-5100$ au away from source | | 2380 | $-$ | 3740 | AU | | | 3740 | $-$ | 5100 | AU | ---|---|---|---|---|---|---|---|---|---|---|---|--- Source | $\dot{M_{B}}$aaIn units of M⊙ Myr-1. | $\dot{M_{R}}$aaIn units of M⊙ Myr-1. | $\dot{P_{R}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{P_{B}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{E_{R}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. | $\dot{E_{B}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. | $\dot{M_{B}}$aaIn units of M⊙ Myr-1. | $\dot{M_{R}}$aaIn units of M⊙ Myr-1. | $\dot{P_{R}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. 
| $\dot{P_{B}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{E_{R}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. | $\dot{E_{B}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. HOPS10 | 1.13 | 3.11 | 4.93 | 72.53 | 6.46e+41 | 4.86E+43 | 1.43E+00 | 2.93 | 10.44 | 69.29 | 1.95E+42 | 5.02E+43 HOPS10 | 1.13 | 3.11 | 4.93 | 72.53 | 6.46e+41 | 4.86E+43 | 1.43E+00 | 2.93 | 10.44 | 69.29 | 1.95E+42 | 5.02E+43 no jet | | | | | | | | | | | | HOPS11 | 1.49 | 4.03 | 7.51 | 26.96 | 8.32e+41 | 4.42E+42 | 2.02E+00 | 5.19 | 10.93 | 39.85 | 1.29E+42 | 8.11E+42 HOPS11 | 1.49 | 4.03 | 7.51 | 26.96 | 8.32e+41 | 4.42E+42 | 2.02E+00 | 5.19 | 10.93 | 39.85 | 1.29E+42 | 8.11E+42 no jet | | | | | | | | | | | | HOPS164 | 1.76 | 2.95 | 10.23 | 20.82 | 2.61e+42 | 1.00E+43 | 1.34E+00 | 3.17 | 9.01 | 28.17 | 2.78E+42 | 1.44E+43 HOPS164 | 1.76 | 2.95 | 10.23 | 20.82 | 2.61e+42 | 1.00E+43 | 1.34E+00 | 3.17 | 9.01 | 28.17 | 2.78E+42 | 1.44E+43 no jet | | | | | | | | | | | | HOPS169 | 3.35 | 1.83 | 24.73 | 14.95 | 3.11e+42 | 2.07E+42 | 2.59E+00 | 1.67 | 17.43 | 18.02 | 1.84E+42 | 2.49E+42 HOPS198 | 0.50 | 3.04 | 8.71 | 30.68 | 5.6e+42 | 2.53E+43 | 3.88E-01 | 1.59 | 9.90 | 33.63 | 7.09E+42 | 2.28E+43 HOPS355 | 0.13 | 0.05 | 0.49 | 0.41 | 3.15e+40 | 4.95E+40 | 1.03E-01 | 0.06 | 0.42 | 0.55 | 2.82E+40 | 8.30E+40 HOPS408 | NA | NA | NA | NA | 0 | NA | NA | NA | NA | NA | NA | NA HOPS127 | 0.35 | 2.05 | 2.50 | 6.53 | 3.31e+41 | 3.49E+41 | 2.29E-01 | 2.14 | 1.62 | 6.78 | 2.21E+41 | 4.12E+41 HOPS130 | 0.71 | NA | 10.76 | NA | 2.42e+42 | NA | 4.90E-01 | NA | 7.45 | NA | 1.83E+42 | NA HOP135 | 0.09 | 12.89 | 0.44 | 43.14 | 8.42e+40 | 1.99E+42 | 6.40E-02 | 10.97 | 0.28 | 35.97 | 5.82E+40 | 1.57E+42 HOPS157 | 0.59 | NA | 2.54 | NA | 2.12e+41 | NA | 8.42E-01 | NA | 3.74 | NA | 3.62E+41 | NA HOPS177 | 7.13 | 0.55 | 20.51 | 1.42 | 8.64e+41 | 4.20E+41 | 5.78E+00 | 0.60 | 17.38 | 1.67 | 8.22E+41 | 4.34E+41 HOPS185 | 8.28 | 0.13 | 31.43 | 2.27 | 2.48e+42 | 1.61E+42 | 4.62E+00 | 0.18 | 16.68 | 3.21 | 1.08E+42 | 2.33E+42 HOPS191 | 0.63 | 0.07 | 3.61 | 1.47 | 9.73e+41 | 9.09E+41 | 8.54E-01 | 0.03 | 5.34 | 0.48 | 1.37E+42 | 2.75E+41 HOPS129 | 0.55 | NA | 0.93 | NA | 7.78e+40 | NA | 4.82E-01 | NA | 0.97 | NA | 9.78E+40 | NA HOPS134 | NA | 0.40 | NA | 2.22 | 0 | 3.53E+41 | NA | 0.30 | NA | 1.43 | NA | 2.92E+41 HOPS13 | 1.68 | NA | 6.33 | NA | 3.58e+41 | NA | 1.82E+00 | NA | 8.13 | NA | 5.59E+41 | NA HOPS150 | 0.08 | NA | 0.51 | NA | 1.44e+41 | NA | 7.36E-02 | NA | 0.51 | NA | 1.39E+41 | NA HOPS166 | 0.25 | 2.52 | 1.46 | 8.17 | 1.33e+41 | 4.23E+41 | 3.64E-01 | 1.75 | 1.92 | 6.76 | 1.47E+41 | 4.16E+41 HOPS194 | NA | 0.21 | NA | 1.82 | 0 | 4.41E+41 | NA | 0.22 | NA | 1.94 | NA | 4.73E+41 HOPS200 | NA | NA | NA | NA | 0 | NA | NA | NA | NA | NA | NA | NA ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. In units of erg Myr-1. Table 13: Molecular outflow rates for gas with $v_{out}>1$ km s-1, within $5100-6460$ au and $6460-7820$ au away from source | | 5100 | $-$ | 6460 | AU | | | 6460 | $-$ | 7820 | AU | ---|---|---|---|---|---|---|---|---|---|---|---|--- Source | $\dot{M_{B}}$aaIn units of M⊙ Myr-1. | $\dot{M_{R}}$aaIn units of M⊙ Myr-1. | $\dot{P_{R}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. 
| $\dot{P_{B}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{E_{R}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. | $\dot{E_{B}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. | $\dot{M_{B}}$aaIn units of M⊙ Myr-1. | $\dot{M_{R}}$aaIn units of M⊙ Myr-1. | $\dot{P_{R}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{P_{B}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{E_{R}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. | $\dot{E_{B}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. HOPS10 | 1.60 | 3.01 | 14.76 | 79.83 | 3.52E+42 | 6.55E+43 | 1.43 | 2.83 | 13.44 | 96.41 | 3.88E+42 | 8.03E+43 HOPS10 | 1.60 | 3.01 | 14.76 | 79.83 | 3.52E+42 | 6.55E+43 | 1.43 | 2.83 | 13.44 | 96.41 | 3.88E+42 | 8.03E+43 no jet | | | | | | | | | | | | HOPS11 | 1.95 | 4.79 | 9.92 | 37.04 | 1.07E+42 | 7.18E+42 | 1.66 | 3.62 | 7.73 | 28.53 | 6.77E+41 | 5.21E+42 HOPS11 | 1.95 | 4.79 | 9.92 | 37.04 | 1.07E+42 | 7.18E+42 | 1.66 | 3.62 | 7.73 | 28.53 | 6.77E+41 | 5.21E+42 no jet | | | | | | | | | | | | HOPS164 | 0.50 | 3.22 | 5.61 | 29.14 | 2.58E+42 | 1.72E+43 | NA | 2.28 | 0.02 | 20.79 | 4.92E+39 | 1.29E+43 HOPS164 | 0.50 | 3.22 | 5.61 | 29.14 | 2.58E+42 | 1.72E+43 | NA | 2.28 | 0.02 | 20.79 | 4.92E+39 | 1.29E+43 no jet | | | | | | | | | | | | HOPS169 | 2.25 | 1.49 | 15.94 | 16.63 | 1.75E+42 | 2.35E+42 | 1.40 | 1.47 | 10.22 | 14.80 | 1.05E+42 | 1.86E+42 HOPS198 | 0.16 | 1.28 | 3.38 | 21.88 | 3.17E+42 | 1.64E+43 | 0.09 | 0.34 | 1.82 | 3.32 | 1.97E+42 | 3.40E+42 HOPS355 | 0.10 | 0.05 | 0.38 | 0.48 | 2.95E+40 | 6.94E+40 | 0.13 | 0.04 | 0.47 | 0.41 | 3.99E+40 | 6.89E+40 HOPS408 | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA HOPS127 | 0.10 | 1.94 | 0.52 | 6.62 | 6.98E+40 | 5.03E+41 | 0.05 | 1.82 | 0.26 | 5.44 | 4.02E+40 | 2.90E+41 HOPS130 | 0.22 | NA | 3.14 | NA | 1.04E+42 | NA | 0.11 | NA | 1.81 | NA | 9.79E+41 | NA HOP135 | 0.03 | 2.22 | 0.09 | 7.23 | 1.61E+40 | 3.61E+41 | NA | NA | NA | NA | NA | NA HOPS157 | 0.20 | NA | 0.39 | NA | 1.99E+40 | NA | NA | NA | NA | NA | NA | NA HOPS177 | 5.31 | 0.78 | 16.46 | 2.00 | 8.29E+41 | 4.27E+41 | 3.89 | 0.94 | 12.30 | 2.51 | 6.50E+41 | 6.58E+41 HOPS185 | 5.31 | 0.21 | 19.63 | 4.35 | 1.19E+42 | 3.43E+42 | 5.18 | 0.09 | 18.87 | 3.09 | 1.00E+42 | 2.95E+42 HOPS191 | 0.82 | 0.02 | 5.53 | 0.49 | 1.19E+42 | 3.93E+41 | 0.37 | 0.01 | 2.27 | 0.16 | 6.96E+41 | 1.01E+41 HOPS129 | 0.25 | NA | 0.57 | NA | 6.62E+40 | NA | 0.18 | NA | 0.54 | NA | 1.03E+41 | NA HOPS134 | NA | 0.21 | NA | 0.84 | NA | 2.20E+41 | NA | 0.11 | NA | 0.55 | NA | 1.65E+41 HOPS13 | 0.18 | NA | 0.74 | NA | 3.95E+40 | NA | NA | NA | NA | NA | NA | NA HOPS150 | 0.04 | NA | 1.04 | NA | 4.03E+41 | NA | 0.02 | NA | 0.17 | NA | 5.09E+40 | NA HOPS166 | 0.34 | 1.47 | 1.87 | 5.83 | 1.46E+41 | 3.75E+41 | 0.32 | 0.76 | 1.74 | 2.92 | 1.24E+41 | 1.68E+41 HOPS194 | NA | 0.19 | NA | 1.92 | NA | 5.09E+41 | NA | 0.14 | NA | 1.59 | NA | 4.33E+41 HOPS200 | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. In units of erg Myr-1. 
Table 14: Molecular outflow rates for gas with $v_{out}>1$ km s-1, within $7820-9180$ au away from source | | 7820 | $-$ | 9180 | AU | ---|---|---|---|---|---|--- Source | $\dot{M_{B}}$aaIn units of M⊙ Myr-1. | $\dot{M_{R}}$aaIn units of M⊙ Myr-1. | $\dot{P_{R}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{P_{B}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{E_{R}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. | $\dot{E_{B}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. HOPS10 | 0.66 | 1.91 | 5.65 | 78.07 | 1.55E+42 | 6.62E+43 HOPS10 | 0.66 | 1.91 | 5.65 | 78.07 | 1.55E+42 | 6.62E+43 no jet | | | | | | HOPS11 | 1.45 | 2.28 | 7.00 | 22.14 | 5.64E+41 | 4.13E+42 HOPS11 | 1.45 | 2.28 | 7.00 | 22.14 | 5.64E+41 | 4.13E+42 no jet | | | | | | HOPS164 | NA | 1.77 | NA | 19.95 | NA | 1.41E+43 HOPS164 | NA | 1.77 | NA | 19.95 | NA | 1.41E+43 no jet | | | | | | HOPS169 | 1.08 | 1.37 | 9.01 | 12.16 | 9.49E+41 | 1.40E+42 HOPS198 | 0.11 | NA | 3.18 | NA | 4.21E+42 | NA HOPS355 | 0.11 | 0.04 | 0.39 | 0.36 | 3.50E+40 | 9.02E+40 HOPS408 | NA | NA | NA | NA | NA | NA HOPS127 | 0.10 | 1.71 | 0.46 | 5.56 | 7.15E+40 | 3.61E+41 HOPS130 | NA | NA | NA | NA | NA | NA HOP135 | NA | NA | NA | NA | NA | NA HOPS157 | NA | NA | NA | NA | NA | NA HOPS177 | 1.31 | 0.86 | 4.35 | 2.48 | 2.93E+41 | 7.40E+41 HOPS185 | NA | NA | NA | NA | NA | NA HOPS191 | NA | NA | NA | NA | NA | NA HOPS129 | 0.12 | NA | 0.36 | NA | 5.06E+40 | NA HOPS134 | NA | 0.08 | NA | 0.54 | NA | 1.89E+41 HOPS13 | NA | NA | NA | NA | NA | NA HOPS150 | 0.03 | NA | 0.22 | NA | 6.30E+40 | NA HOPS166 | 0.14 | 0.49 | 0.73 | 2.40 | 5.67E+40 | 1.68E+41 HOPS194 | NA | 0.08 | NA | 1.08 | NA | 3.10E+41 HOPS200 | NA | NA | NA | NA | NA | NA ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. In units of erg Myr-1. Table 15: Molecular outflow rates for gas with $v_{out}>2$ km s-1, within $2380-3740$ au and $3749-5100$ au away from source | | 2380 | $-$ | 3740 | AU | | | 3740 | $-$ | 5100 | AU | ---|---|---|---|---|---|---|---|---|---|---|---|--- Source | $\dot{M_{B}}$aaIn units of M⊙ Myr-1. | $\dot{M_{R}}$aaIn units of M⊙ Myr-1. | $\dot{P_{R}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{P_{B}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{E_{R}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. | $\dot{E_{B}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. | $\dot{M_{B}}$aaIn units of M⊙ Myr-1. | $\dot{M_{R}}$aaIn units of M⊙ Myr-1. | $\dot{P_{R}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{P_{B}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{E_{R}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. | $\dot{E_{B}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. 
HOPS10 | 0.39 | 3.11 | 4.02 | 72.53 | 6.35E+41 | 4.86E+43 | 0.67 | 2.93 | 9.52 | 69.29 | 1.94E+42 | 5.02E+43 HOPS10 | 0.39 | 3.11 | 4.02 | 72.53 | 6.35E+41 | 4.86E+43 | 0.67 | 2.93 | 9.52 | 69.29 | 1.94E+42 | 5.02E+43 no jet | | | | | | | | | | | | HOPS11 | 1.00 | 3.18 | 6.78 | 25.76 | 8.21E+41 | 4.40E+42 | 1.35 | 4.25 | 9.94 | 38.55 | 1.27E+42 | 8.09E+42 HOPS11 | 1.00 | 3.18 | 6.78 | 25.76 | 8.21E+41 | 4.40E+42 | 1.35 | 4.25 | 9.94 | 38.55 | 1.27E+42 | 8.09E+42 no jet | | | | | | | | | | | | HOPS164 | 1.13 | 1.56 | 9.26 | 18.50 | 2.59E+42 | 1.00E+43 | 0.95 | 1.70 | 8.39 | 25.75 | 2.77E+42 | 1.43E+43 HOPS164 | 1.13 | 1.56 | 9.26 | 18.50 | 2.59E+42 | 1.00E+43 | 0.95 | 1.70 | 8.39 | 25.75 | 2.77E+42 | 1.43E+43 no jet | | | | | | | | | | | | HOPS169 | 2.77 | 1.25 | 23.87 | 14.08 | 3.09E+42 | 2.05E+42 | 2.26 | 1.58 | 16.94 | 17.86 | 1.83E+42 | 2.48E+42 HOPS198 | 0.50 | 3.04 | 8.71 | 30.68 | 5.60E+42 | 2.53E+43 | 0.39 | 1.59 | 9.90 | 33.63 | 7.09E+42 | 2.28E+43 HOPS355 | 0.13 | 0.05 | 0.49 | 0.41 | 3.15E+40 | 4.95E+40 | 0.10 | 0.06 | 0.42 | 0.55 | 2.82E+40 | 8.30E+40 HOPS408 | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA HOPS127 | 0.26 | 1.42 | 2.36 | 5.59 | 3.29E+41 | 3.35E+41 | 0.17 | 1.33 | 1.52 | 5.58 | 2.20E+41 | 3.94E+41 HOPS130 | 0.71 | NA | 10.76 | NA | 2.42E+42 | NA | 0.49 | NA | 7.45 | NA | 1.83E+42 | NA HOP135 | 0.06 | 9.81 | 0.39 | 38.52 | 8.35E+40 | 1.92E+42 | 0.03 | 8.11 | 0.24 | 31.68 | 5.77E+40 | 1.51E+42 HOPS157 | 0.33 | NA | 2.21 | NA | 2.08E+41 | NA | 0.39 | NA | 3.16 | NA | 3.54E+41 | NA HOPS177 | 5.46 | 0.11 | 17.88 | 0.85 | 8.22E+41 | 4.13E+41 | 4.49 | 0.16 | 15.37 | 1.10 | 7.90E+41 | 4.26E+41 HOPS185 | 7.79 | 0.13 | 30.51 | 2.27 | 2.47E+42 | 1.61E+42 | 4.45 | 0.18 | 16.35 | 3.21 | 1.07E+42 | 2.33E+42 HOPS191 | 0.45 | 0.07 | 3.34 | 1.47 | 9.69E+41 | 9.09E+41 | 0.65 | 0.03 | 5.02 | 0.48 | 1.36E+42 | 2.75E+41 HOPS129 | 0.08 | NA | 0.34 | NA | 7.03E+40 | NA | 0.12 | NA | 0.51 | NA | 9.19E+40 | NA HOPS134 | NA | 0.22 | NA | 1.98 | NA | 3.50E+41 | NA | 0.12 | NA | 1.16 | NA | 2.88E+41 HOPS13 | 1.56 | NA | 6.17 | NA | 3.55E+41 | NA | 1.76 | NA | 8.04 | NA | 5.58E+41 | NA HOPS150 | 0.05 | NA | 0.46 | NA | 1.43E+41 | NA | 0.06 | NA | 0.48 | NA | 1.38E+41 | NA HOPS166 | 0.25 | 1.73 | 1.46 | 6.85 | 1.33E+41 | 4.01E+41 | 0.36 | 1.20 | 1.92 | 5.86 | 1.47E+41 | 4.01E+41 HOPS194 | NA | 0.16 | NA | 1.74 | NA | 4.39E+41 | NA | 0.18 | NA | 1.86 | NA | 4.72E+41 HOPS200 | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. In units of erg Myr-1. Table 16: Molecular outflow rates for gas with $v_{out}>1$ km s-1, within $5100-6460$ au and $6460-7820$ au away from source | | 5100 | $-$ | 6460 | AU | | | 6460 | $-$ | 7820 | AU | ---|---|---|---|---|---|---|---|---|---|---|---|--- Source | $\dot{M_{B}}$aaIn units of M⊙ Myr-1. | $\dot{M_{R}}$aaIn units of M⊙ Myr-1. | $\dot{P_{R}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{P_{B}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{E_{R}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. | $\dot{E_{B}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. | $\dot{M_{B}}$aaIn units of M⊙ Myr-1. | $\dot{M_{R}}$aaIn units of M⊙ Myr-1. 
| $\dot{P_{R}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{P_{B}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{E_{R}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. | $\dot{E_{B}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. HOPS10 | 0.81 | 3.01 | 13.79 | 79.83 | 3.51E+42 | 6.55E+43 | 0.67 | 2.83 | 12.50 | 96.41 | 3.87E+42 | 8.03E+43 HOPS10 | 0.81 | 3.01 | 13.79 | 79.83 | 3.51E+42 | 6.55E+43 | 0.67 | 2.83 | 12.50 | 96.41 | 3.87E+42 | 8.03E+43 no jet | | | | | | | | | | | | HOPS11 | 1.31 | 4.21 | 8.98 | 36.26 | 1.05E+42 | 7.17E+42 | 1.18 | 3.32 | 7.03 | 28.11 | 6.67E+41 | 5.20E+42 HOPS11 | 1.31 | 4.21 | 8.98 | 36.26 | 1.05E+42 | 7.17E+42 | 1.18 | 3.32 | 7.03 | 28.11 | 6.67E+41 | 5.20E+42 no jet | | | | | | | | | | | | HOPS164 | 0.38 | 1.73 | 5.43 | 26.69 | 2.58E+42 | 1.72E+43 | NA | 1.27 | 0.02 | 19.13 | 4.90E+39 | 1.28E+43 HOPS164 | 0.38 | 1.73 | 5.43 | 26.69 | 2.58E+42 | 1.72E+43 | NA | 1.27 | 0.02 | 19.13 | 4.90E+39 | 1.28E+43 no jet | | | | | | | | | | | | HOPS169 | 1.94 | 1.48 | 15.51 | 16.61 | 1.75E+42 | 2.35E+42 | 1.24 | 1.46 | 9.97 | 14.78 | 1.04E+42 | 1.86E+42 HOPS198 | 0.16 | 1.28 | 3.38 | 21.88 | 3.17E+42 | 1.64E+43 | 0.09 | 0.34 | 1.82 | 3.32 | 1.97E+42 | 3.40E+42 HOPS355 | 0.10 | 0.05 | 0.38 | 0.48 | 2.95E+40 | 6.94E+40 | 0.13 | 0.04 | 0.47 | 0.41 | 3.99E+40 | 6.89E+40 HOPS408 | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA HOPS127 | 0.06 | 1.21 | 0.46 | 5.57 | 6.89E+40 | 4.87E+41 | 0.03 | 1.21 | 0.21 | 4.55 | 3.95E+40 | 2.76E+41 HOPS130 | 0.22 | NA | 3.14 | NA | 1.04E+42 | NA | 0.11 | NA | 1.81 | NA | 9.79E+41 | NA HOP135 | 0.01 | 1.60 | 0.06 | 6.29 | 1.59E+40 | 3.46E+41 | NA | NA | NA | NA | NA | NA HOPS157 | 0.04 | NA | 0.18 | NA | 1.70E+40 | NA | NA | NA | NA | NA | NA | NA HOPS177 | 4.13 | 0.19 | 14.62 | 1.24 | 7.99E+41 | 4.17E+41 | 3.00 | 0.20 | 10.94 | 1.59 | 6.29E+41 | 6.46E+41 HOPS185 | 5.18 | 0.21 | 19.39 | 4.35 | 1.18E+42 | 3.43E+42 | 5.05 | 0.09 | 18.63 | 3.09 | 9.98E+41 | 2.95E+42 HOPS191 | 0.65 | 0.02 | 5.27 | 0.49 | 1.19E+42 | 3.93E+41 | 0.26 | 0.01 | 2.12 | 0.16 | 6.94E+41 | 1.01E+41 HOPS129 | 0.08 | NA | 0.36 | NA | 6.34E+40 | NA | 0.07 | NA | 0.39 | NA | 1.01E+41 | NA HOPS134 | NA | 0.08 | NA | 0.63 | NA | 2.17E+41 | NA | 0.06 | NA | 0.47 | NA | 1.64E+41 HOPS13 | 0.18 | NA | 0.74 | NA | 3.95E+40 | NA | NA | NA | NA | NA | NA | NA HOPS150 | 0.04 | NA | 1.03 | NA | 4.03E+41 | NA | 0.01 | NA | 0.16 | NA | 5.06E+40 | NA HOPS166 | 0.34 | 1.00 | 1.87 | 5.06 | 1.46E+41 | 3.63E+41 | 0.32 | 0.51 | 1.74 | 2.49 | 1.24E+41 | 1.61E+41 HOPS194 | NA | 0.15 | NA | 1.85 | NA | 5.08E+41 | NA | 0.12 | NA | 1.55 | NA | 4.32E+41 HOPS200 | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. In units of erg Myr-1. Table 17: Molecular outflow rates for gas with $v_{out}>1$ km s-1, within $7820-9180$ au away from source | | 7820 | $-$ | 9180 | AU | ---|---|---|---|---|---|--- Source | $\dot{M_{B}}$aaIn units of M⊙ Myr-1. | $\dot{M_{R}}$aaIn units of M⊙ Myr-1. | $\dot{P_{R}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{P_{B}}$bbIn units of $M_{\odot}\,km\,s^{-1}$ Myr-1. | $\dot{E_{R}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. 
The uncertainty is adopted to be 10% of the measurement. | $\dot{E_{B}}$ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. HOPS10 | 0.32 | 1.91 | 5.23 | 78.07 | 1.54E+42 | 6.62E+43 HOPS10 | 0.32 | 1.91 | 5.23 | 78.07 | 1.54E+42 | 6.62E+43 no jet | | | | | | HOPS11 | 1.18 | 2.18 | 6.59 | 22.01 | 5.58E+41 | 4.13E+42 HOPS11 | 1.18 | 2.18 | 6.59 | 22.01 | 5.58E+41 | 4.13E+42 no jet | | | | | | HOPS164 | NA | 1.00 | NA | 18.66 | NA | 1.40E+43 HOPS164 | NA | 1.00 | NA | 18.66 | NA | 1.40E+43 no jet | | | | | | HOPS169 | 1.02 | 1.37 | 8.91 | 12.15 | 9.47E+41 | 1.40E+42 HOPS198 | 0.11 | NA | 3.18 | NA | 4.21E+42 | NA HOPS355 | 0.11 | 0.04 | 0.39 | 0.36 | 3.50E+40 | 9.02E+40 HOPS408 | NA | NA | NA | NA | NA | NA HOPS127 | 0.07 | 1.29 | 0.40 | 4.91 | 7.05E+40 | 3.50E+41 HOPS130 | NA | NA | NA | NA | NA | NA HOP135 | NA | NA | NA | NA | NA | NA HOPS157 | NA | NA | NA | NA | NA | NA HOPS177 | 0.96 | 0.17 | 3.81 | 1.64 | 2.84E+41 | 7.29E+41 HOPS185 | NA | NA | NA | NA | NA | NA HOPS191 | NA | NA | NA | NA | NA | NA HOPS129 | 0.05 | NA | 0.28 | NA | 4.96E+40 | NA HOPS134 | NA | 0.05 | NA | 0.49 | NA | 1.88E+41 HOPS13 | NA | NA | NA | NA | NA | NA HOPS150 | 0.02 | NA | 0.20 | NA | 6.27E+40 | NA HOPS166 | 0.14 | 0.39 | 0.73 | 2.23 | 5.67E+40 | 1.65E+41 HOPS194 | NA | 0.07 | NA | 1.06 | NA | 3.10E+41 HOPS200 | NA | NA | NA | NA | NA | NA ccHOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. In units of erg Myr-1. ## Appendix F Two-dimensional Mass, Momentum, Energy, Mass rate, Momentum rate (Force), and Energy rate (mechanical luminosity) maps for all outflows In this section, we presents the two-dimensional mass, momentum, energy, mass rate, momentum rate (force), and energy rate (mechanical luminosity) maps for all outflows. All the rate maps are computed from our Pixel Flux-tracing Technique (PFT). Instead of a single rate value obtained by previous methods, the PFT allows us to compute two-dimensional molecular outflow instantaneous rates maps for the first time. Figure 21: HOPS-10 molecular outflow mass, momentum, and energy maps (panels in left column), and the corresponding rate maps (panels in right column). The red ellipse in the bottom left of each panel represents the synthesized beam. Note that the pixel size is the same for all maps and is 0$\farcs$17 $\times$ 0$\farcs$17\. Figure 22: HOPS-10 outflow mass, momentum, energy maps, and the corresponding instantaneous rate maps. Note for this figure set, we exclude the high- velocity jet components. The pixel size is 0$\farcs$17 $\times$ 0$\farcs$17\. The red ellipses in the bottom left of plots represent the synthesized beam. Figure 23: HOPS-11 outflow mass, momentum, energy maps, and the corresponding instantaneous rate maps. Note that the pixel size is 0$\farcs$17 $\times$ 0$\farcs$17\. The red ellipses in the bottom left of plots represent the synthesized beam. Figure 24: HOPS-11 outflow mass, momentum, energy maps, and the corresponding instantaneous rate maps. Note for this figure set, we exclude the high- velocity jet components. Note that the pixel size is 0$\farcs$17 $\times$ 0$\farcs$17\. The red ellipses in the bottom left of plots represent the synthesized beam. Figure 25: HOPS-13 outflow mass, momentum, energy maps, and the corresponding instantaneous rate maps. Note that the pixel size is 0$\farcs$17 $\times$ 0$\farcs$17\. 
The red ellipses in the bottom left of plots represent the synthesized beam. Figure 26: HOPS-127 outflow mass, momentum, energy maps, and the corresponding instantaneous rate maps. Note that the pixel size is 0$\farcs$17 $\times$ 0$\farcs$17\. The red ellipses in the bottom left of plots represent the synthesized beam. Figure 27: HOPS-129 outflow mass, momentum, energy maps, and the corresponding instantaneous rate maps. Note that the pixel size is 0$\farcs$17 $\times$ 0$\farcs$17\. The red ellipses in the bottom left of plots represent the synthesized beam. There is no detection in the energy ejection rate map due to the high noise level. Figure 28: HOPS-130 outflow mass, momentum, energy maps, and the corresponding instantaneous rate maps. Note that the pixel size is 0$\farcs$17 $\times$ 0$\farcs$17\. The red ellipses in the bottom left of plots represent the synthesized beam. Figure 29: HOPS-134 outflow mass, momentum, energy maps, and the corresponding instantaneous rate maps. Note that the pixel size is 0$\farcs$17 $\times$ 0$\farcs$17\. The red ellipses in the bottom left of plots represent the synthesized beam. Figure 30: HOPS-135 outflow mass, momentum, energy maps, and the corresponding instantaneous rate maps. Note that the pixel size is 0$\farcs$17 $\times$ 0$\farcs$17\. The red ellipses in the bottom left of plots represent the synthesized beam. Figure 31: HOPS-150 outflow mass, momentum, energy maps, and the corresponding instantaneous rate maps. Note that the pixel size is 0$\farcs$17 $\times$ 0$\farcs$17\. The red ellipses in the bottom left of plots represent the synthesized beam. Two jets with molecular bullets appear clearly in the energy ejection rate maps. The southern jet coincides spatially with HOPS-150 A, but the northern jet originates from a nearby source outside the field of view. Figure 32: HOPS-157 outflow mass, momentum, energy maps, and the corresponding instantaneous rate maps. Note that the pixel size is 0$\farcs$17 $\times$ 0$\farcs$17\. The red ellipses in the bottom left of plots represent the synthesized beam. Figure 33: HOPS-164 outflow mass, momentum, energy maps, and the corresponding instantaneous rate maps. Note that the pixel size is 0$\farcs$17 $\times$ 0$\farcs$17\. The red ellipses in the bottom left of plots represent the synthesized beam. Figure 34: HOPS-164 outflow mass, momentum, energy maps, and the corresponding instantaneous rate maps. Note for this figure set, we exclude the high-velocity jet components. The pixel size is 0$\farcs$17 $\times$ 0$\farcs$17\. The red ellipses in the bottom left of plots represent the synthesized beam. Figure 35: HOPS-166 outflow mass, momentum, energy maps, and the corresponding instantaneous rate maps. Note that the pixel size is 0$\farcs$17 $\times$ 0$\farcs$17\. The red ellipses in the bottom left of plots represent the synthesized beam. Figure 36: HOPS-169 outflow mass, momentum, energy maps, and the corresponding instantaneous rate maps. Note that the pixel size is 0$\farcs$17 $\times$ 0$\farcs$17\. The red ellipses in the bottom left of plots represent the synthesized beam. Figure 37: HOPS-177 outflow mass, momentum, energy maps, and the corresponding instantaneous rate maps. Note that the pixel size is 0$\farcs$17 $\times$ 0$\farcs$17\. The red ellipses in the bottom left of plots represent the synthesized beam. There is no detection in the energy ejection rate map due to the high noise level. Figure 38: HOPS-185 outflow mass, momentum, energy maps, and the corresponding instantaneous rate maps.
Note that the pixel size is 0$\farcs$17 $\times$ 0$\farcs$17\. The red ellipses in the bottom left of plots represent the synthesized beam. Figure 39: HOPS-191 outflow mass, momentum, energy maps, and the corresponding instantaneous rate maps. Note that the pixel size is 0$\farcs$17 $\times$ 0$\farcs$17\. The red ellipses in the bottom left of plots represent the synthesized beam. There is no detection in the energy ejection rate map due to the high noise level. Figure 40: HOPS-194 outflow mass, momentum, energy maps, and the corresponding instantaneous rate maps. Note that the pixel size is 0$\farcs$17 $\times$ 0$\farcs$17\. The red ellipses in the bottom left of plots represent the synthesized beam. No clear detection is seen in the energy map or in the energy and momentum ejection rate maps. Figure 41: HOPS-198 outflow mass, momentum, energy maps, and the corresponding instantaneous rate maps. Note that the pixel size is 0$\farcs$17 $\times$ 0$\farcs$17\. The red ellipses in the bottom left of plots represent the synthesized beam. Figure 42: HOPS-200 outflow mass, momentum, energy maps, and the corresponding instantaneous rate maps. Note that the pixel size is 0$\farcs$17 $\times$ 0$\farcs$17\. The red ellipses in the bottom left of plots represent the synthesized beam. Figure 43: HOPS-408 outflow mass, momentum, energy maps, and the corresponding instantaneous rate maps. Note that the pixel size is 0$\farcs$17 $\times$ 0$\farcs$17\. The red ellipses in the bottom left of plots represent the synthesized beam. ## Appendix G Outflow Opening Angle, Momentum Opening Angle, and Energy Opening Angle In this section, we present all the measurements of the traditional outflow cavity opening angle, the momentum opening angle, and the energy opening angle. Table 18: Summary of outflow opening angles without inclination correction Source | Traditional | Momentum | Energy ---|---|---|--- | opening angleaaFor each source we give two estimates. The first value is the estimate for the blue-shifted lobe and the second value is for the red-shifted lobe. NA indicates the Gaussian fit used to derive the opening angle was not good, it did not converge or there is simply no outflow lobe for that source. The opening angle is defined as the full-width quarter maximum (FWQM) of the fitted Gaussian. | opening angleaaFor each source we give two estimates. The first value is the estimate for the blue-shifted lobe and the second value is for the red-shifted lobe. NA indicates the Gaussian fit used to derive the opening angle was not good, it did not converge or there is simply no outflow lobe for that source. The opening angle is defined as the full-width quarter maximum (FWQM) of the fitted Gaussian. | opening angleaaFor each source we give two estimates. The first value is the estimate for the blue-shifted lobe and the second value is for the red-shifted lobe. NA indicates the Gaussian fit used to derive the opening angle was not good, it did not converge or there is simply no outflow lobe for that source. The opening angle is defined as the full-width quarter maximum (FWQM) of the fitted Gaussian.
| (∘) | (∘) | (∘) HOPS-10 | 38.0 $\pm$ 3.4 / 62.6 $\pm$ 3.6 | 77.1 $\pm$ 12.7 / 48.6 $\pm$ 2.8 | 23.3 $\pm$ 2.6 / 64.4 $\pm$ 4.7 HOPS-11 | 117.2 $\pm$ 23.7 / 118.6 $\pm$ 5.0 | 42.0 $\pm$ 3.8 / 40.6 $\pm$ 4.7 | 30.1 $\pm$ 2.8 / 84.6 $\pm$ 5.7 HOPS-13 | 50.0 $\pm$ 2.4 / 57.6 $\pm$ 5.6 | NA / NA | NA / NA HOPS-127 | 77.9 $\pm$ 5.6 / 62.9 $\pm$ 4.3 | NA / 112.7 $\pm$ 9.1 | NA / 130.7 $\pm$ 32.5 HOPS-129 | 75.6 $\pm$ 6.4 / NA | 45.2 $\pm$ 8.0 / 82.9 $\pm$ 6.5 | 38.6 $\pm$ 5.6 / 56.7 $\pm$ 7.5 HOPS-130 | 68.3 $\pm$ 8.1 / NA | 91.9 $\pm$ 13.1 / NA | 143.6 $\pm$ 14.6 / NA HOPS-134 | 131.5 $\pm$ 17.1 / NA | NA / NA | NA / NA HOPS-135 | NA / 71.3 $\pm$ 5.3 | NA / 64.6 $\pm$ 4.2 | 40.1 $\pm$ 2.5 / 62.6 $\pm$ 4.3 HOPS-150bbHOPS-150 is a binary system and it is contaminated by two collimated molecular outflows from two nearby sources. The Gaussian fits performed poorly for HOPS-150. | NA / NA | 106.1 $\pm$ 11.7 / NA | NA / NA HOPS-157 | 110.2 $\pm$ 11.0 / NA | NA / NA | NA / NA HOPS-164 | 45.0 $\pm$ 3.0 / 29.3 $\pm$ 0.5 | 56.4 $\pm$ 3.3 / 32.4 $\pm$ 1.8 | 45.8 $\pm$ 1.8 / 26.0 $\pm$ 0.7 HOPS-166 | NA / NA | NA / NA | NA / 155.5 $\pm$ 82.4 HOPS-169 | 67.6 $\pm$ 5.9 / 45.6 $\pm$ 1.6 | 34.0 $\pm$ 2.2 / 35.5 $\pm$ 3.3 | 36.3 $\pm$ 2.7 / 23.2 $\pm$ 1.2 HOPS-177 | 134.9 $\pm$ 19.0 / 91.6 $\pm$ 12.0 | NA / NA | 112.0 $\pm$ 5.2 / NA HOPS-185 | 112.2 $\pm$ 13.9 / 92.2 $\pm$ 11.3 | NA / 97.5 $\pm$ 8.0 | NA / 98.0 $\pm$ 7.2 HOPS-191 | 113.2 $\pm$ 8.7 / 77.6 $\pm$ 8.5 | 83.8 $\pm$ 4.7 / 71.6 $\pm$ 5.7 | NA / NA HOPS-194 | NA / 110.0 $\pm$ 11.1 | NA / NA | NA / NA HOPS-198 | 69.9 $\pm$ 4.5 / NA | 60.0 $\pm$ 3.7 / 66.3 $\pm$ 3.7 | 34.4 $\pm$ 4.1 / NA HOPS-200 | 137.2 $\pm$ 20.7 / NA | NA / NA | NA / NA HOPS-355 | 24.3 $\pm$ 1.3 / 34.3 $\pm$ 1.5 | 31.2 $\pm$ 1.6 / 27.1 $\pm$ 1.8 | 40.8 $\pm$ 5.4 / 26.3 $\pm$ 3.3 HOPS-408 | NA / NA | 37.2 $\pm$ 2.4 / 24.5 $\pm$ 1.3 | 26.4 $\pm$ 4.5 / 24.0 $\pm$ 8.9 Table 19: Summary of the outflow opening angles with inclination correction Source | Traditional | Momentum | Energy ---|---|---|--- | opening angleaaFor each source we give two estimates. The first value is the estimate for the blue-shifted lobe and the second value is for the red-shifted lobe. NA indicates the Gaussian fit used to derive the opening angle was not good, it did not converge or there is simply no outflow lobe for that source. The opening angle is defined as the full-width quarter maximum (FWQM) of the fitted Gaussian. The errors are from the uncertainty of the Gaussian fit. | opening angleaaFor each source we give two estimates. The first value is the estimate for the blue-shifted lobe and the second value is for the red-shifted lobe. NA indicates the Gaussian fit used to derive the opening angle was not good, it did not converge or there is simply no outflow lobe for that source. The opening angle is defined as the full-width quarter maximum (FWQM) of the fitted Gaussian. The errors are from the uncertainty of the Gaussian fit. | opening angleaaFor each source we give two estimates. The first value is the estimate for the blue-shifted lobe and the second value is for the red-shifted lobe. NA indicates the Gaussian fit used to derive the opening angle was not good, it did not converge or there is simply no outflow lobe for that source. The opening angle is defined as the full-width quarter maximum (FWQM) of the fitted Gaussian. The errors are from the uncertainty of the Gaussian fit. 
| (∘) | (∘) | (∘) HOPS-10 | 33.2 $\pm$ 2.9 / 55.5 $\pm$ 3.1 | 69.2 $\pm$ 11.0 / 42.7 $\pm$ 2.4 | 20.2 $\pm$ 2.3 / 57.2 $\pm$ 4.1 HOPS-11 | 64.4 $\pm$ 9.2 / 65.9 $\pm$ 1.9 | 16.8 $\pm$ 1.5 / 16.2 $\pm$ 1.8 | 11.8 $\pm$ 1.1 / 38.6 $\pm$ 2.2 HOPS-13 | 44.7 $\pm$ 2.1 / 51.7 $\pm$ 4.9 | NA / NA | NA / NA HOPS-127 | 65.8 $\pm$ 4.5 / 52.1 $\pm$ 3.4 | NA / 100.5 $\pm$ 7.3 | NA / 120.3 $\pm$ 26.3 HOPS-129 | 63.6 $\pm$ 5.1 / NA | 36.8 $\pm$ 6.4 / 70.5 $\pm$ 5.2 | 31.3 $\pm$ 4.5 / 46.7 $\pm$ 6.0 HOPS-130 | 65.3 $\pm$ 7.7 / NA | 88.6 $\pm$ 12.4 / NA | 141.6 $\pm$ 13.8 / NA HOPS-134 | 101.6 $\pm$ 9.5 / NA | NA / NA | NA / NA HOPS-135 | NA / 62.0 $\pm$ 4.4 | NA / 55.8 $\pm$ 3.5 | 34.0 $\pm$ 2.1 / 54.0 $\pm$ 3.6 HOPS-150bbHOPS-150 is a binary system and it is contaminated by two collimated molecular outflows from two nearby sources. The Gaussian fits performed poorly for HOPS-150. ccfootnotetext: HOPS-194 opening angle is measured directly from the C18O outflow cavity as shown in Figure 2. The uncertainty is adopted to be 10% of the measurement. | NA / NA | 79.8 $\pm$ 7.4 / NA | NA / NA HOPS-157 | 70.9 $\pm$ 5.5 / NA | NA / NA | NA / NA HOPS-164 | 35.4 $\pm$ 2.3 / 22.8 $\pm$ 0.4 | 44.9 $\pm$ 2.5 / 25.3 $\pm$ 1.4 | 36.1 $\pm$ 1.4 / 20.2 $\pm$ 0.5 HOPS-166 | NA / NA | NA / NA | NA / 116.1 $\pm$ 33.9 HOPS-169 | 46.7 $\pm$ 3.8 / 30.3 $\pm$ 1.0 | 22.3 $\pm$ 1.4 / 23.3 $\pm$ 2.1 | 23.9 $\pm$ 1.7 / 15.1 $\pm$ 0.8 HOPS-177 | 127.5 $\pm$ 16.0 / 81.7 $\pm$ 10.1 | NA / NA | 102.6 $\pm$ 4.4 / NA HOPS-185 | 108.2 $\pm$ 12.9 / 88.0 $\pm$ 10.5 | NA / 93.3 $\pm$ 7.4 | NA / 93.8 $\pm$ 6.7 HOPS-191 | 108.5 $\pm$ 8.0 / 72.8 $\pm$ 7.8 | 78.9 $\pm$ 4.3 / 66.9 $\pm$ 5.2 | NA / NA HOPS-194 | NA / 65.1 $\pm$ 5.0 | NA / NA | NA / NA HOPS-198 | 69.1 $\pm$ 4.4 / NA | 59.3 $\pm$ 3.6 / 65.5 $\pm$ 3.6 | 33.9 $\pm$ 4.0 / NA HOPS-200 | 134.6 $\pm$ 19.4 / NA | NA / NA | NA / NA HOPS-355 | 11.9 $\pm$ 0.6 / 17.0 $\pm$ 0.7 | 15.4 $\pm$ 0.8 / 13.3 $\pm$ 0.9 | 20.4 $\pm$ 2.6 / 12.9 $\pm$ 1.6 HOPS-408 | NA / NA | 32.2 $\pm$ 2.1 / 21.1 $\pm$ 1.1 | 22.7 $\pm$ 3.9 / 20.7 $\pm$ 7.6
# A Probabilistic Approach to the Perfect Sum Problem Kristof Pusztai Columbia University Department of Statistics <EMAIL_ADDRESS> ###### Abstract The subset sum problem is known to be an NP-hard problem in the field of computer science, with the fastest known approach having a run-time complexity of $O(2^{0.3113n})$. A modified version of this problem is known as the perfect sum problem and extends the subset sum idea further. This extension results in additional complexity, making it difficult to compute for a large input. In this paper, I propose a probabilistic approach which approximates the solution to the perfect sum problem by approximating the distribution of potential sums. Since this problem is an extension of the subset sum problem, our approximation also grants some probabilistic insight into the solution of the subset sum problem. We harness distributional approximations to model the number of subsets which sum to a certain value. These distributional approximations are formulated in two ways: using bounds to justify a normal approximation, and approximating the empirical distribution via density estimation. These approximations can be computed in $O(n)$ complexity, and can increase in accuracy with the size of the input data, making them useful for large-scale combinatorial problems. Code is available at https://github.com/KristofPusztai/PerfectSum. _Keywords_ Subset Sum Problem $\cdot$ Perfect Sum Problem $\cdot$ Probability Theory $\cdot$ Central Limit Theorem $\cdot$ Combinatorics ## 1 Introduction Combinatorial problems have a notorious reputation for being computationally hard to solve efficiently and, as a result, understanding their underlying properties proves useful in many real-life scenarios. The difficulty of solving such problems has made them particularly relevant in the field of cryptography, with ciphers such as the well-known ENIGMA machine used by Germany during World War II being a prime example (Prasad and Kumari, 2020). Such encryptions make use of the immense solution space associated with such combinatorial problems, which makes cracking them a complex task (Sarvate and Seberry, 1986). Additionally, we can model many natural events via a combinatorial approach. Examples where this is especially relevant can be found in combinatorial biology, specifically the study of gene recombination, where it is important to understand the dynamics of different possible combinations of gene alterations (Waterman, 1995; Reményi et al., 2004). Phenotypic convergence refers to multiple different combinations of genomic events leading to a similar biological outcome, or trait (Rosenblum et al., 2014). How many different ways (i.e., through what combinations of individual genomic events) a particular phenotype may emerge is of interest in disease genetics. Ultimately, any task which involves finding subgroups of choices out of a set of total possible choices is well modelled by problems such as the one we are examining. Many past works have focused on finding the most efficient solutions to such problems or on investigating specific algorithm behavior. Despite many sophisticated approaches to this problem, the fastest deterministic solution runs in $O(2^{0.3113n})$ complexity (Howgrave-Graham and Joux, 2010). There exists a faster algorithm that implements some form of randomization and so inherently relies on probabilistic aspects, but that work does not focus heavily on the topic and is instead interested in the underlying dynamics of their devised algorithm (Cao and Liu, 2018).
In this work, we will focus instead on exploring the purely probabilistic aspects, which allow us to approximate a solution with relative computational ease rather than trying to find the exact solution. ## 2 Past Works Recent works in this area are mainly focused on fast deterministic and randomized approaches. In fact, such problems are commonly used to introduce dynamic programming, specifically the Bellman method (Bellman, 1961). Other solution methods include exhaustive search and divide-and-conquer approaches, with different modifications yielding sophisticated and efficient algorithms. One such modification to general exhaustive search by (Cao and Liu, 2018) uses a novel data arrangement to find a solution with a computational complexity of $O(2^{n})$ and is the first published algorithm which returns all subsets. As a result, this method finds the exact solution to the perfect sum. In this paper, a probabilistic algorithm is also introduced which implements random permutations and a truncation. However, this algorithm only works well for large $n$ and is designed to answer the decision problem. Some more statistically oriented works also exist but are heavily focused on analyzing specific algorithms. The most notable of these is the work of (D'Atri and Puech, 1982), which investigates the limiting behavior of variants of the GOLOSONE algorithm. This work obtains exact distributions for the output parameters of the algorithms. While the result is certainly interesting, it does not reveal any direct properties of the problem itself. Our work focuses directly on the problem itself and does not deal with any specific solution algorithm. ## 3 Problem Formulation First we will define the subset sum problem and then the perfect sum problem, to show that the latter is an extension of the former. ### 3.1 Subset Sum The subset sum decision problem is defined formally as: given a set $S=\{x_{1},x_{2},...,x_{n}\}$ and some value $T$, decide if there is a subset of $S$ that sums up to $T$ (Koiliaris and Xu, 2018). That is, decide whether there exists some subset $S_{k}\subseteq S$ such that $\sum S_{k}=\sum_{x\in S_{k}}x=T$ ### 3.2 Perfect Sum The perfect sum problem can be stated as follows: given a set of values and a target sum, we want to find the number of all subsets which sum to our target value. Formally, this can be defined by the following set of conditions. Given a set of size $n$, $S=\{x_{1},x_{2},...,x_{n}\}$, and some value $T$, we want to find all subsets $S_{k}=\{x_{i},...,x_{j}\}$, $1\leq i<j\leq n$, $S_{k}\subseteq S$, such that $\sum S_{k}=\sum_{x\in S_{k}}x=T$ for all $k=1,...,n$. In this paper, we will focus on finding the _total_ number of all such subsets instead of the actual subsets themselves. Clearly, this is an extension of the subset sum problem, since the solution to the perfect sum gives us sufficient information to solve the decision problem while also providing additional information. ## 4 Probabilistic Analysis Using our definitions from section 3.2, we can treat any subset $S_{k}$ as a sample taken without replacement from the set $S$. In order to inspect the dynamics of the subset further, we can calculate the probability that any value of the set $S$, $x\in S$, will be in the subset $S_{k}$. These calculations are not novel, and their proofs can be found in the appendix for rigor. The novelty will come in the application of these calculations to approximating a solution.
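As a point of reference for what these calculations will be used to approximate, the exact count defined in Section 3.2 can be obtained by exhaustive enumeration when $n$ is small. The following is a minimal Python sketch (the function name and example values are illustrative, not taken from the linked repository); it is exponential in $n$ and serves only as a ground-truth baseline:

```python
from itertools import combinations

def perfect_sum_count_exact(values, target):
    """Count every subset of `values` whose elements sum to `target`.

    Exhaustive enumeration over all 2^n - 1 non-empty subsets, so this is
    only practical for small n; it is used here purely as a baseline for
    the approximations discussed later.
    """
    count = 0
    for k in range(1, len(values) + 1):
        for subset in combinations(values, k):
            if sum(subset) == target:
                count += 1
    return count

if __name__ == "__main__":
    S = [3, 1, 4, 1, 5, 9, 2, 6]
    print(perfect_sum_count_exact(S, 10))  # number of subsets summing to 10
```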
#### Lemma 2: The event that value $x\in S$ is chosen to be in $S_{k}$ has probability $P(x\in S_{k})=k/n$ where $k$ denotes the cardinality of the subset $S_{k}$, $k=|S_{k}|$, and $n$ denotes the cardinality of the set $S$, $n=|S|$. This subtle property is simple yet useful as we can use this to show that the expected average of any subset will be the same as the average of the original set. #### Theorem 1: The expected value of the sum of any subset of size $k$, $\sum{S_{k}}$, is equal to $k$ times the mean of the total set, $\bar{S}$. Formally: $E[\sum{S_{k}}]=k\bar{S}$ We are now faced with a more difficult task which is to calculate the variance of the sums of some subset of size k. This is made complex by the fact that we are sampling without replacement which induces non-zero covariance among each member of the sample. From the formula for variance we have: $Var[\sum{S_{k}}]=Var[\sum_{x\in S_{k}}x]=\sum_{x\in S_{k}}Var[x]+\sum_{x_{1},x_{2}\in S_{k},x_{1}\neq x_{2}}Cov[x_{1},x_{2}]$ Hence, we know that the variance of the sum of random variables $x\in S_{k}$ will include covariance terms due to the lack of replacement. Intuitively, we know that this covariance should be negative since if we have two chosen values from $S_{k}$, call them $x_{1}$ and $x_{2}$ then we know that if $x_{1}$ has a large value we can no longer pick this large value when choosing $x_{2}$. Additionally, we know that the total size of the set we are choosing from should also be present since as the set size approaches infinity, this limit approaches sampling with replacement. With these in mind, we can take a look at the formula for covariance and notice a few things. $Cov[x_{1},x_{2}]=E[x_{1}*x_{2}]-E[x_{1}]*E[x_{2}]\textrm{ \ \ \ for $x_{1},x_{2}\in S_{k}$ and $x_{1}\neq x_{2}$}$ Firstly, we note that $E[x_{1}]=E[x_{2}]$ for any $x_{1},x_{2}\in S_{k}$ since all values $x\in S$ have the same probability of being in $S_{k}$ from our calculations in Theorem 1, and all values in $x\in S_{k}$ have the same probability, $\frac{1}{k}$, of being chosen. In fact, we can show that $E[x]=\bar{S}$ where $\bar{S}$ denotes the mean of the original size $n$ set. A short proof of this is left in the appendix section 9.1. Now we are left to find the $E[x_{1}*x_{2}]$ term which happens to be something quite convenient as we see in below in Lemma 2. #### Lemma 2: $E[x_{1}*x_{2}]=-\frac{\sigma^{2}}{n-1}+\bar{S}^{2}$ We now have all of our terms in covariance accounted for which allows us to conclude Lemma 3 by simply plugging in the values from our previous derivations. #### Lemma 3: $\textrm{for any $x_{1},x_{2}\in S_{k}$ we have that }Cov[x_{1},x_{2}]=\frac{-\sigma^{2}}{n-1}$ We now have all the components to find the variance of the sum of a subset and by simply plugging in our calculated values from the above sections we arrive at Theorem 2. #### Theorem 2: $Var[\sum{S_{k}}]=k\sigma^{2}(1-\frac{(k-1)}{n-1})$ This conclusion provides us with significant information about the distribution of possible sum values and is vital to our application for approximating a solution. Note that as $k$ approaches $n$, the variance goes to 0. ## 5 Application to Finding Solutions In the following sections, we provide applications for the above probabilistic properties in finding a solution. We will explore various distributional approximation methods and apply them to calculate a concrete value which we compare to the true answer for solvable randomly generated problems. 
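Before doing so, note that the two moment results above (Theorems 1 and 2) are easy to check numerically by drawing many size-$k$ samples without replacement. The following sketch (using NumPy; the example set, sample counts, and seed are arbitrary choices) compares the empirical mean and variance of the subset sums against $k\bar{S}$ and $k\sigma^{2}(1-\frac{k-1}{n-1})$:

```python
import numpy as np

def subset_sum_moments(S, k):
    """Theoretical mean and variance of the sum of a size-k subset of S
    drawn uniformly at random without replacement (Theorems 1 and 2)."""
    S = np.asarray(S, dtype=float)
    n = len(S)
    mean = k * S.mean()
    var = k * S.var(ddof=0) * (1.0 - (k - 1) / (n - 1))  # population variance
    return mean, var

def empirical_moments(S, k, trials=50_000, seed=0):
    """Monte Carlo estimate of the same two moments."""
    rng = np.random.default_rng(seed)
    S = np.asarray(S, dtype=float)
    sums = np.array([rng.choice(S, size=k, replace=False).sum()
                     for _ in range(trials)])
    return sums.mean(), sums.var(ddof=0)

if __name__ == "__main__":
    S = np.arange(1, 51)  # example set {1, ..., 50}
    for k in (5, 20, 45):
        print(k, subset_sum_moments(S, k), empirical_moments(S, k))
```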
Pseudocode for our general algorithm structure can be found in the appendix and can be seen to have $O(n)$ complexity.

### 5.1 Exact Solution

With the previous calculations in mind, we can try to apply these conclusions to finding a solution to the perfect sum problem. From a probabilistic perspective, if we know the exact discrete distribution of the larger set $S$, then we can calculate the exact distribution of each of the subsets $S_{k}$. As a result, we can find the percentage of subsets of each size $k$ which sum up to a given value. In fact, with this distributional perspective we gain immense flexibility, as we can also find the percentage of subsets which sum to greater or less than a specified value. Once we have the percentage, we can multiply it by the total number of different possible subsets of size $k$, which is just a combination formula computation, and we are left with the exact number of subsets. The usefulness of this approach is limited, though, by the fact that we must calculate exact discrete probability densities. For example, let us examine $k=3$. Denote the values of the three chosen random variables by $x_{1}=j$, $x_{2}=l$, $x_{3}=p$, and let $X=\sum S_{3}=x_{1}+x_{2}+x_{3}$; then we have:

$P(X=x)=\sum_{j\in S}\sum_{l\in S\setminus\{j\}}P(x_{1}=j)*P(x_{2}=l|x_{1})*P(x_{3}=x-j-l|x_{1},x_{2})*P(x_{1},x_{2},x_{3}\in S_{k})$

which leaves us, once again, with a combinatorial problem. Ultimately, our previous results are not useful in the pursuit of an exact solution, but what they grant us is the ability to calculate accurate approximations easily when $n$ is large and certain conditions are met. As $n$ gets large, the exact solution becomes infeasible to calculate, but the distributional approximation accuracy increases and the approximation is significantly faster to calculate, with $O(n)$ complexity. Note that in the above formula, as the size of $k$ increases, the number of combinations of different possible values taken on by the sum increases.

### 5.2 Naive Approximation via Normal Distribution

The most naive approximation approach takes the form of a modified version of the central limit theorem (CLT) (Fischer, 2010), allowing us to approximate the distribution of the sum of any subset as a normal distribution. This approximation hinges on the assumption that the values in our set $S$ are i.i.d. random samples taken from some underlying latent distribution. If these assumptions do not hold, we cannot invoke the Berry-Esseen bounds shown below, and the accuracy of our approximation will not necessarily increase as set size and threshold value increase. However, if these assumptions do hold, then the accuracy of this approximation will increase the larger $T$ and $S$ are. We can see this empirically in Figure 1. For smaller $T$, solutions will contain smaller subset groupings, which can cause inaccuracy. These cases will depend heavily on the actual distribution of values in $S$. However, the larger the size of the set $S$, $n$, and the larger our target, $T$, the more subsets of larger size $k$ contribute, which results in a more accurate approximation via this method, as these larger subsets will contain most of the desired sums. More formally, we can theoretically bound the deviation between the distribution we are approximating and the normal distribution. Many of the different proofs of the classical CLT rely on such bounding, examining the absolute difference between the normal distribution and the distribution of the sum of i.i.d. random variables (Fischer, 2010).
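To illustrate the naive approach just described, here is a small sketch (our own illustration rather than a reference implementation; the integer-valued set, target, and seed are arbitrary). For each subset size $k$ it approximates $P(\sum S_{k}=T)$ with a continuity-corrected normal distribution built from the moments of Theorems 1 and 2, multiplies by $\binom{n}{k}$, and compares the resulting count against exhaustive enumeration, which is only feasible for small $n$.

```python
import numpy as np
from math import comb
from itertools import combinations
from scipy.stats import norm

rng = np.random.default_rng(1)

S = rng.integers(0, 21, size=18)            # small integer-valued set (illustrative)
n, T = len(S), 60
S_bar, sigma2 = S.mean(), S.var()           # population moments, as in Theorems 1-2

approx = 0.0
for k in range(1, n + 1):
    mu = k * S_bar
    var = k * sigma2 * (1 - (k - 1) / (n - 1))
    if var <= 0:                            # k = n: the only size-n subset is S itself
        p = 1.0 if np.isclose(S.sum(), T) else 0.0
    else:
        sd = np.sqrt(var)
        # continuity-corrected P(sum S_k = T) under the normal approximation
        p = norm.cdf(T + 0.5, mu, sd) - norm.cdf(T - 0.5, mu, sd)
    approx += p * comb(n, k)

# Exact count by brute force (2^n subsets), used as ground truth for small n.
exact = sum(1 for k in range(1, n + 1)
              for c in combinations(S, k) if sum(c) == T)

print(f"normal-approximation count: {approx:.1f}   exact count: {exact}")
```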
For our case specifically, we are looking at a bound for the sum of i.i.d. samples from a finite population, which we assume to be a set of independent random variables. With this in mind, we can harness the bound presented in "Berry–Esseen Bound for a Sample Sum from a Finite Set of Independent Random Variables" (Zhao et al., 2004):

$\sup_{x}|P(\frac{S_{k}}{\sqrt{kb}}\leq x)-\Phi(x)|\leq C\min(\delta_{1},\delta_{2}+1/\sqrt{kq})$

where $C$ is an absolute constant, $\frac{1}{0}:=\infty$, $p=\frac{k}{n}$, $q=1-p$, $b=1-pn^{-1}\sum_{i=1}^{n}(E(x_{i}))^{2}$, $\delta_{1}=n^{-1}\sum_{i=1}^{n}\frac{E(|x_{i}|^{3})}{\sqrt{k}b^{3/2}}$, and $\delta_{2}=n^{-1}\sum_{i=1}^{n}\frac{|E(x_{i})|^{3}}{\sqrt{nb}}+n^{-1}\sum_{i=1}^{n}\frac{E(|x_{i}-pE(x_{i})|^{3})}{\sqrt{n}b^{3/2}}$.

This bound gives us a good idea of the theoretical worst-case convergence of our sums to the normal distribution and provides a guarantee that these sums do, in fact, approach the normal distribution as $n$ increases. However, calculating this theoretical deviation does not give us much information on how well the normal approximation is doing with a specific data set. For a more concrete and specific result, we can use the discrete Jensen-Shannon divergence metric (Lin, 1991) over the desired data, which quantifies how well the normal approximation is doing. The discrete Jensen-Shannon divergence over probability space $\Omega$ is defined as:

$JSD(P||Q)=\frac{1}{2}\sum_{x\in\Omega}P(x)\log(\frac{P(x)}{M(x)})+\frac{1}{2}\sum_{x\in\Omega}Q(x)\log(\frac{Q(x)}{M(x)})$ where $M(x)=\frac{1}{2}(P(x)+Q(x))$

Note that there are many different forms of the CLT, all with their own assumptions and convergence rates. In fact, there are also general methods for bounding the difference between distributions. Specifically, Stein's method (Stein, 1972) provides a way to calculate bounds on the difference between the normal distribution and any other distribution we might assume our data comes from. As a result, this naive approximation can still be useful and provide accurate enough approximations with minimal computation. In section 5.4 we will introduce an empirical approximation which does not hinge on these assumptions and instead tries to estimate the density directly via sampling.

Figure 1: [Panels (a)-(d) correspond to subset sizes $k=1,2,3,4$; the JS-divergences of the normal vs. Irwin-Hall approximations are (a) 0.04613 vs. 0.03013, (b) 0.03118 vs. 0.02225, (c) 0.03164 vs. 0.02237, and (d) 0.01849 vs. 0.01832.] In these graphs we see that the normal distribution does well at approximating the distribution of our subset sums, which come from a uniform distribution. Notice, however, that the normal distribution is not such a great approximation for low values of $k$: panels (a), $k=1$, and (b), $k=2$, show a larger deviation between the normal approximation and the true values. Panels (c), $k=3$, and (d), $k=4$, are much better approximated. As the subset size gets larger, the normal pdf becomes a more accurate approximation, as shown by the Berry-Esseen bounds (Zhao et al., 2004).

### 5.3 Improved Approximation

For scenarios where our set $S$ can be approximated well via some other continuous distribution besides the normal, we can make our approximation even more accurate.
Formally, if the original set is an i.i.d. sample from some known continuous distribution, then we can use this to find a better distribution for the sums of each different subset size. This will result in a more accurate approximation, especially for subsets of smaller size $k$, since this is where the CLT approximation is the least accurate. However, we must be careful with this, as the normal distribution can still be a better approximation if the size of $S$ is small. We want the variance of the sample mean to not be too large, to ensure that this alternate distribution is truly better, as even a slight deviation in sample values can cause the alternate distribution to fail at accurately describing larger subset sums. This is due to the fact that smaller samples will have higher variance in their means. We can see this directly through a simple examination of the formula for the sample mean:

$\bar{X}=\frac{1}{n}\sum_{i=1}^{n}x_{i}\rightarrow Var(\bar{X})=\frac{Var(x_{i})}{n}$

However, since the normal approximation takes the mean and standard deviation of the samples themselves into account, it is able to better correct for these deviations. We can clearly see this in Figure 2, which demonstrates the effect of set size on the accuracy of using the underlying distribution from which the set was sampled.

Figure 2: [Panels (a)-(d); the JS-divergences of the normal vs. chi-square approximations are (a) 0.09837 vs. 0.02314, (b) 0.01370 vs. 0.15208, (c) 0.09587 vs. 0.01658, and (d) 0.03234 vs. 0.01357.] Notice in the simulated set of size 200 that there is a slight deviation between the theoretical distribution and the sample distribution in panel (a). This causes the approximation to be off for larger subset sums, as seen in (b). In (c) and (d) the set size is much larger, 20,000 samples, which leads to a better approximation and justifies the use of the alternate distribution over the normal.

Figure 3: [Panels (a)-(d); the JS-divergences of the normal vs. Irwin-Hall approximations are (a) 0.12712 vs. 0.12986, (b) 0.08121 vs. 0.10295, (c) 0.05145 vs. 0.09945, and (d) 0.02421 vs. 0.10691.] The normal approximation works better for our discrete example since it directly takes into account the mean and standard deviation of the discrete set. This discrete set was drawn from a discrete uniform distribution, and one would think that trying to approximate it via a continuous uniform distribution would yield relatively good results. However, the Irwin-Hall distribution (the sum of uniform random variables) does not do a good job at approximation and cannot account for certain properties resulting from the discreteness of the data.

To quantify when using a distribution other than the normal would perform better, we can again take an empirical approach and use the discrete Jensen-Shannon divergence metric (Lin, 1991) to measure how well the two distributions approximate our subsets. This then allows us to make exact comparisons between the two distributions' approximation abilities for our specific data.
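The comparison described above can be carried out directly. The sketch below (our own illustration; the chi-square parent, set size, subset size, sample count, and binning are arbitrary choices) samples subset sums without replacement, bins them, and computes the discrete Jensen-Shannon divergence of the empirical distribution against both the CLT normal approximation and a chi-square approximation for the sum (using the heuristic that a sum of $k$ draws from a chi-square parent with $\nu$ degrees of freedom is roughly chi-square with $k\nu$ degrees of freedom).

```python
import numpy as np
from scipy.stats import norm, chi2

rng = np.random.default_rng(2)

df0, n, k = 3, 2000, 10
S = rng.chisquare(df0, size=n)              # set drawn from an assumed chi-square parent
S_bar, sigma2 = S.mean(), S.var()

# Empirical distribution of subset sums (sampled without replacement).
sums = np.array([rng.choice(S, size=k, replace=False).sum() for _ in range(50_000)])

mu = k * S_bar                                              # Theorem 1
sd = np.sqrt(k * sigma2 * (1 - (k - 1) / (n - 1)))          # Theorem 2

edges = np.histogram_bin_edges(sums, bins=60)
emp, _ = np.histogram(sums, bins=edges)
emp = emp / emp.sum()

def bin_probs(cdf):
    """Probability mass a candidate model assigns to each bin."""
    p = np.diff(cdf(edges))
    return p / p.sum()

def jsd(p, q):
    """Discrete Jensen-Shannon divergence, as defined in Section 5.2."""
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a[a > 0] * np.log(a[a > 0] / b[a > 0]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p_norm = bin_probs(lambda x: norm.cdf(x, mu, sd))
p_chi2 = bin_probs(lambda x: chi2.cdf(x, k * df0))

print(f"JSD(empirical, normal)     = {jsd(emp, p_norm):.4f}")
print(f"JSD(empirical, chi-square) = {jsd(emp, p_chi2):.4f}")
```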
Additionally, we can take a theoretical approach and note that we want the mean of our sample set $S$ to be close to the actual mean of the distribution. Hence, we want a size-$n$ sample which results in a low variance of $\bar{X}$. The value of $n$ will depend entirely on the desired threshold and on the variance of the underlying distribution from which the samples are drawn.

### 5.4 Empirical Approximation Via Density Estimation

For situations where no form of the CLT is applicable and the normal distribution does not provide a good approximation of the densities, we can estimate the densities directly through sampling and use the resulting empirical distribution. There are several methods, both parametric and non-parametric, for density approximation. Parametric approaches include fitting a Pearson distribution (Lahcene, 2013), a Johnson-$S_{u}$ distribution (Slifker and Shapiro, 1980), or a variety of other distributions which can be fitted via either Bayesian or frequentist methods. Non-parametric approaches include the well-known method of kernel density estimation, which is what we employ in our implementation due to its flexible nature. Specifically, we use a tophat kernel with bandwidth $h$ specified by the 10% quantile of the differences between data points. The estimated probability for some threshold value $t$ is then calculated as follows:

$P(X=t)=\frac{1}{nh}\sum_{i=1}^{n}K(\frac{t-x_{i}}{h})$

where $K(x)$ denotes the kernel function with the appropriate properties $\int_{-\infty}^{\infty}K(x)\,dx=1$ and symmetry, $K(-x)=K(x)$. We use this to estimate the probability density based on sampling without replacement from the set $S$ and taking the sums. As a result, we are directly estimating the distribution of the sum. The choice of kernel density estimation is due to the flexibility that this estimation method provides, as it makes no assumptions about the data, unlike the parametric techniques. However, this flexibility comes at a cost, as the number of samples, the choice of bandwidth and the choice of kernel will heavily affect the shape and accuracy of the estimation. Finding the optimal parameters will likely require extensive exploratory data analysis. Additionally, the cost of calculating such kernel density estimates depends on the number of data points, and more data increases run-time.
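As a concrete sketch of the tophat-kernel estimate described above (our own illustration with an arbitrary bimodal example set; the bandwidth follows the 10% gap-quantile rule from the text, with a small floor added in case of repeated sums):

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative set whose parent is an irregular mixture, so no textbook
# distribution (normal included) fits the subset sums particularly well.
S = np.concatenate([rng.normal(5, 1, 150), rng.normal(40, 2, 50)])
n, k = len(S), 6

# Sample subset sums without replacement; these are the data the KDE is built on.
sums = np.array([rng.choice(S, size=k, replace=False).sum() for _ in range(20_000)])

# Bandwidth: 10% quantile of the gaps between sorted sample sums, floored to avoid h = 0.
gaps = np.diff(np.sort(sums))
h = max(np.quantile(gaps, 0.10), 1e-3)

def tophat_kde(t, data, h):
    """Tophat kernel density estimate at t: K(u) = 1/2 for |u| <= 1, else 0."""
    u = np.abs((t - data) / h)
    return 0.5 * np.count_nonzero(u <= 1) / (len(data) * h)

T = 60.0                                     # hypothetical threshold value
print(f"bandwidth h = {h:.4f},  estimated subset-sum density at T = {tophat_kde(T, sums, h):.5f}")
```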
## 6 Simulations

There are three cases on which to test our approximation accuracy with generated sets. The first case is where the naive approximation does well. We explore both continuous and discrete cases. We find that in the discrete case an improved approximation method may not exist and that our naive method performs the best. Specifically, we simulated sets from a discrete scaled uniform distribution, $U(0,20)$, to create our initial set. In the next section (6.1) we show that the naive normal approximation here provides better results than trying to model this discrete distribution by its continuous counterpart. However, for cases in which $S$ is drawn from a continuous underlying distribution, we can use this continuous distribution, which is a better approximation than the normal. In Section 6.2 we explore the case where $S$ is drawn from a chi-square distribution, which is known to be heavily skewed and so will render our naive normal approximation less accurate. From our simulations, we can see where our approximations work well, and where they don't. Lastly, in Section 6.3, we explore cases where independence does not hold and observe how these affect our normal approximation and kernel density approximations.

Figure 4: This graph plots the absolute error percentage between ground truth and approximation values for 20 set simulations. Notice a decreasing trend in absolute approximation error as set size increases. This is due to the fact that subsets of larger size begin to dominate the solution and allow for better approximation. As an increasing number of solutions are found within larger subsets, our approximation becomes more accurate. Orange deviation lines signify 1 SD.

Figure 5: The normal approximation works better for smaller $n$ due to small-sample variation, which the chi-squared approximation is not able to adjust for despite the samples coming from an underlying chi-square distribution. However, as $n$ increases, the set follows a chi-squared distribution more closely and the roles reverse, with the chi-squared approximation becoming optimal.

### 6.1 Cases for Naive Approximation

For cases where the values of the set $S$ are constrained to discrete values, it may not be feasible to find a more accurate continuous approximation than the normal distribution. A demonstration of this can be found in appendix section 8.4. Additionally, if no information about the underlying distribution is known, then the naive approximation is certainly one approach to modelling the distribution of sums and, theoretically, will be extremely effective for large sets and a large target threshold. In our example, we use a scaled discrete uniform distribution, $U(0,20)$, and investigate the approximation abilities of the normal distribution. One might initially think that using a continuous uniform distribution to approximate this would provide good results, but we must be careful with this. For example, consider a discrete uniform (0,1) distribution. Note that 0 and 1 both have probability 0.5 of being picked. Then, if we sum two independent samples from this distribution, the probability of the sum being 2 will be 0.25. If we tried to approximate this via a continuous uniform distribution, we would estimate the probability of the sum being 2 to be 0, hence not a very useful approximation. Due to computational constraints, the ground-truth solution for the perfect sum problem can only be calculated up to set $S$ sizes of 26. However, even with such a small set size, we see some trends emerge. Specifically, in Figure 4 we see that as the set size increases, our absolute approximation error decreases.

### 6.2 Cases for Improved Approximation

For cases where the values of the set $S$ can take on continuous values, we may find a continuous distribution which approximates these values better than the normal distribution, but this only works well for a significantly large $n$. This is especially the case if the distribution is highly skewed. Another aspect to consider when approaching the continuous case is that finding the number of exact sums results in an exact solution of 0. In fact, we get more insight from looking at how many sets sum up to greater/less than a certain threshold value. We performed various simulations which explore the validity of our approximation approach and found promising results. This approximation does well when the threshold is high relative to the set size, since larger subsets will contribute more to the solution. Additionally, the normal approximation did better on smaller input set sizes when compared to the chi-squared approximation. This is likely due to the fact that the normal approximation directly takes the mean and standard deviation of the set as parameters, making it highly flexible when faced with the sample deviations which occur in small sample sizes.
However, as the set size increased, the chi-squared approximation performed better, with lower approximation error. This is depicted in Figure 5. Note, however, that despite the normal approximation becoming less accurate for larger $n$, it is still a relatively good approximation, with a maximum absolute error of 0.08, while the chi-squared approximation was off by 0.4 in the worst case. The normal approximation is versatile, as its parameters allow it to adjust for sample deviations.

### 6.3 Cases for Empirical Approach

For cases where no form of the CLT can be invoked and we have no a priori knowledge of the distribution, we are faced with two options: continue using the normal, or estimate the distribution empirically. There are several benefits to using the normal approximation, including that it does not depend on the accuracy/validity of taking a sample to estimate the empirical distribution. As a result, no randomness is involved and this approach will always give back the same answer for a given set of inputs. Additionally, the existence of bounds on the normal approximation error for dependent samples has been shown by Stein (Stein, 1972). However, if no bounds can be shown and the normal will not be a good approximation, then an empirical approach might best suit the situation. This is especially true in situations where the data is very spread out, as seen in Figure 6, where our empirical method significantly outperforms the naive normal approximation.

Figure 6: [Panels (a)-(d); the JS-divergences of the normal approximation vs. the KDE are (a) 0.79811 vs. 0.27923, (b) 0.82409 vs. 0.10890, (c) 0.79985 vs. 0.13563, and (d) 0.78635 vs. 0.23210.] Note the KDE's ability to account for large gaps between the possible sums, as in panels (a) and (d). We see that the KDE provides a better estimation of such distributions than the normal, as shown by the JS-divergences.

## 7 Applications To Other Problems

This solution approach can be applied to any problem which can be boiled down to a perfect-sum-type question. These questions include the classic "knapsack" and "change-making" problems (Goodrich and Tamassia, 2015; Martello, 1990). However, we must bear in mind that, as mentioned in the problem formulation section, our approach is focused on determining the _total_ number of possible solutions, and not on actually finding these solutions themselves. This leads to the major drawback of this method: while we may have an accurate approximation of this total value, we are not any closer to knowing what these sets actually are. In fact, there are no truly efficient algorithms to find these exact solution sets, and so in big-data settings finding such sets becomes infeasible, not only computation-wise but also memory-wise, as the number of solutions can grow to hundreds of millions very rapidly (e.g., 100 choose 5 = 75,287,520 different possible combinations). With this in mind, one can apply our approach to help narrow down the computational scale, focusing on the subset sizes which are likely to contain solutions. For example, to find the minimum subset size which sums to a desired number, our approach can be used to see whether a subset of size $k$ is likely to sum up to this desired value.
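One way such a screening step might look (our own sketch, reusing the normal-approximation count from Section 5.2; the set, target, and cutoff are arbitrary) is to scan the subset sizes and keep only those whose estimated number of solutions is non-negligible:

```python
import numpy as np
from math import comb
from scipy.stats import norm

def promising_subset_sizes(S, T, min_expected=1.0):
    """Subset sizes k whose normal-approximation count of subsets summing to T
    is at least `min_expected`. A screening heuristic, not an exact solver."""
    S = np.asarray(S, dtype=float)
    n, S_bar, sigma2 = len(S), S.mean(), S.var()
    sizes = []
    for k in range(1, n + 1):
        mu = k * S_bar
        var = k * sigma2 * (1 - (k - 1) / (n - 1))
        if var <= 0:
            p = 1.0 if np.isclose(S.sum(), T) else 0.0
        else:
            sd = np.sqrt(var)
            p = norm.cdf(T + 0.5, mu, sd) - norm.cdf(T - 0.5, mu, sd)
        if p * comb(n, k) >= min_expected:
            sizes.append(k)
    return sizes

rng = np.random.default_rng(4)
S = rng.integers(1, 50, size=40)
print(promising_subset_sizes(S, T=120))      # candidate sizes to hand to an exact search
```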
From here, we can either choose the minimum subset size that is predicted to have at least one subset of that size summing to the desired value, or we can choose the minimum size for which such a sum has appreciable probability of occurring under the estimated distribution. If we want to find an actual solution set, then we can pair this information with an exact solution-finding algorithm that actually examines the sets to find a solution, and focus on just the subset sizes we have identified in the previous step. In this way, we reduce the computation time significantly, as we only need to consider a limited range of subset sizes instead of all possible sizes.

Examining real-world applications of this approach, it is immediately apparent that it is particularly relevant to the field of genomics research, due to the combinatorial nature of gene inheritance/recombination. There is a whole sub-field of biology which focuses on combinatorial methods for gene interaction (Pelletier and Sidhu, 2001). Research such as "Combinatorial control of gene expression" demonstrates the relevance of such ideas in understanding genomics (Reményi et al., 2004). Specifically, the ideas presented in this paper are useful if one wants to find the number of different possible gene combinations which could result in a specific event occurring. We need only assign genes a numerical value for their contribution to this event, and then we can apply our approach to find the total number/percentage of potentially threshold-surpassing combinations. This could be especially useful for cancer research, where we can think of each gene combination as contributing a certain amount, and once a stability level is crossed, a cell turns cancerous. We leave these applications for further thought and exploration by the reader.

## 8 Conclusion

In this paper, we introduce a novel approximation approach to the perfect sum problem which is focused on finding the total number of subsets which sum to/surpass a desired threshold. This approach harnesses probability theory to estimate the distribution of the sums of subsets. Specifically, we examine the use of the normal distribution as well as non-parametric density estimation to estimate the distribution of the sums of different-sized subsets when no a priori distribution information is known. Our algorithm, presented in appendix section 9.7 (Perfect Sum Pseudocode), runs with $O(n)$ complexity and can increase in accuracy as set size increases, depending on the threshold level and problem set-up. As a result, it directly addresses the need for good approximation in big-data cases where finding the exact solution is infeasible.

## 9 Appendix

### 9.1 Proof Of Expected Value For Members of Subset Size k

For any $x\in S_{k}$ we have: $E[x]=\sum_{x\in S}x*P(x)$ Note that $P(x)$ denotes the probability of picking the value $x$ from all possible values in the subset $S_{k}$, which is the joint event of this value ending up in $S_{k}$ first and then being picked out of the $k$ values in $S_{k}$.

$=\sum_{x\in S}x*1/k*k/n$ $=\sum_{x\in S}x*1/n$ $=1/n\sum_{x\in S}x=\bar{S}$ ∎

### 9.2 Proof of Lemma 1

Notice that in a subset of size $k$, for some value $x\in S$ there are ${n-1\choose k-1}$ different possible subsets $S_{k}$ which satisfy $x\in S_{k}$. There are a total of ${n\choose k}$ subsets of size $k$.
Thus, the probability of choosing a subset which contains $x$ out of all possible subsets of size $k$ is: $P(x\in S_{k})=\frac{{n-1\choose k-1}}{{n\choose k}}=\frac{\frac{(n-1)!}{(k-1)!(n-k)!}}{\frac{n!}{k!(n-k)!}}=\frac{k!(n-1)!}{(k-1)!n!}=k/n$ ∎

### 9.3 Proof of Theorem 1

$\sum{S_{k}}=\sum_{x\in S_{k}}x=\sum_{x\in S}x*\mathbbm{1}\{x\in S_{k}\}\textrm{ \ \ \ where }\mathbbm{1}\{x\in S_{k}\}=\left\{\begin{array}{l}1\textrm{ if }x\in S_{k}\\ 0\textrm{ otherwise}\end{array}\right.$

Then we have that: $E[\sum{S_{k}}]=E[\sum_{x\in S}x*\mathbbm{1}\{x\in S_{k}\}]$ $=\sum_{x\in S}x*E[\mathbbm{1}\{x\in S_{k}\}]$ $=\sum_{x\in S}x*P(x\in S_{k})$ $=\sum_{x\in S}x*k/n$ $=\frac{k}{n}\sum_{x\in S}x=k\bar{S}$ ∎

### 9.4 Proof of Lemma 2

$E[x_{1}*x_{2}]=\sum_{x_{1},x_{2}\in S,x_{1}\neq x_{2}}x_{1}*x_{2}*P(x_{1},x_{2}\in S_{k})*P(x_{1},x_{2})$

Following the notation from Lemma 1, $P(x_{1},x_{2}\in S_{k})$ denotes the probability of the event that the values $x_{1}$ and $x_{2}$ get chosen to be in the subset $S_{k}$. $P(x_{1},x_{2})$ denotes the probability of the event that the values $x_{1}$ and $x_{2}$ get chosen from the values in the subset $S_{k}$. We need both events because, in order for $x_{1}$ and $x_{2}$ to ultimately be chosen from $S_{k}$, we need to ensure that they actually are in $S_{k}$ to begin with; otherwise they cannot be chosen, so we need to capture the joint event. Expanding on these probabilities further: $P(x_{1},x_{2}\in S_{k})=\frac{k}{n}*\frac{k-1}{n-1}$ $P(x_{1},x_{2})=\frac{1}{k(k-1)}$

Hence, $\sum_{x_{1},x_{2}\in S,x_{1}\neq x_{2}}x_{1}*x_{2}*P(x_{1},x_{2}\in S_{k})*P(x_{1},x_{2})=\sum_{x_{1},x_{2}\in S,x_{1}\neq x_{2}}x_{1}*x_{2}*\frac{1}{n(n-1)}$ $=\frac{1}{n(n-1)}(\sum_{x_{1}\in S}x_{1}*\sum_{x_{2}\in S,x_{2}\neq x_{1}}x_{2})$ $=\frac{1}{n(n-1)}(\sum_{x_{1}\in S}x_{1}*(\sum_{x_{2}\in S}x_{2}-x_{1}))$ $=\frac{1}{n(n-1)}(\sum_{x_{1}\in S}x_{1}*\sum_{x_{2}\in S}x_{2}-\sum_{x_{1}\in S}x_{1}^{2})\textrm{ \ \ \ note that \ $\sum_{x_{1}\in S}x_{1}*\sum_{x_{2}\in S}x_{2}=nE[x_{1}]*nE[x_{2}]$}$ $=\frac{1}{(n-1)}(nE[x_{1}]E[x_{2}]-E[x_{1}^{2}])$ $=\frac{1}{(n-1)}(n\bar{S}^{2}-\sigma^{2}-\bar{S}^{2})$ $=\frac{n}{(n-1)}\bar{S}^{2}-\frac{1}{(n-1)}\sigma^{2}-\frac{1}{(n-1)}\bar{S}^{2}$ $=\bar{S}^{2}-\frac{\sigma^{2}}{(n-1)}$ ∎

### 9.5 Proof of Lemma 3

Plugging everything into our formula for covariance we have: $Cov[x_{1},x_{2}]=E[x_{1}*x_{2}]-E[x_{1}]*E[x_{2}]\textrm{ \ \ \ for $x_{1},x_{2}\in S_{k}$ and $x_{1}\neq x_{2}$}$ $=-\frac{\sigma^{2}}{n-1}+\bar{S}^{2}-\bar{S}^{2}$ $=-\frac{\sigma^{2}}{n-1}$ ∎

### 9.6 Proof of Theorem 2

$Var[\sum_{x\in S_{k}}x]=\sum_{x_{1},x_{2}\in S_{k}}Cov[x_{1},x_{2}]$ $=\sum_{x\in S_{k}}Var[x]+\sum_{x_{1},x_{2}\in S_{k},x_{1}\neq x_{2}}Cov[x_{1},x_{2}]$ $=k\sigma^{2}+k(k-1)\frac{-\sigma^{2}}{n-1}=k\sigma^{2}(1-\frac{(k-1)}{n-1})$ ∎

### 9.7 Perfect Sum Pseudocode

Algorithm 1: Perfect Sum Probabilistic Approximation

1. Compute the mean, $\mu$, and the variance, $\sigma^{2}$, of the distribution (or approximate distribution) of $S$.
2. Create a variable, $R=0$, to keep track of the total number of subsets which satisfy the sum condition regarding $T$.
3. For $k=1,2,\ldots,\mathrm{length}(S)$:
   1. Compute the values (e.g., $\hat{\mu}$, $\hat{\sigma}^{2}$ via Theorem 1 and Theorem 2) necessary for the chosen distribution approximation.
   2. Compute the distribution approximation of the sum of subsets of size $k$.
   3. Use the distribution to find the desired probability, $P=P(\sum S_{k}\,\{=,\geq,\leq\}\,T)$, for the given sum threshold $T$.
   4. Convert the probability to a number of subsets, $s=\mathrm{integer}(P*{n\choose k})$.
   5. Add $s$ to $R$ to keep track of the total over all subset sizes: $R=R+s$.
4. Return $R$.
## References

* Prasad and Kumari (2020) Kalika Prasad and Munesh Kumari. A review on mathematical strength and analysis of Enigma. Technical report, Department of Mathematics, Central University of Jharkhand, India, 2020. URL https://arxiv.org/pdf/2004.09982.pdf.
* Sarvate and Seberry (1986) Dinesh G. Sarvate and Jennifer Seberry. Encryption methods based on combinatorial designs. Technical report, University of Wollongong, 1986. URL https://ro.uow.edu.au/infopapers/1019.
* Waterman (1995) Michael S. Waterman. _Handbook of Combinatorics_, chapter 39. Elsevier Science B.V., 1995.
* Reményi et al. (2004) Attila Reményi, Hans R. Schöler, and Matthias Wilmanns. Combinatorial control of gene expression. _Nature Structural and Molecular Biology_, 812–815, 2004.
* Rosenblum et al. (2014) Erica Bree Rosenblum, Christine E. Parent, and Erin E. Brandt. The molecular basis of phenotypic convergence. _Annual Review of Ecology, Evolution, and Systematics_, 45:203–226, 2014.
* Howgrave-Graham and Joux (2010) Nick Howgrave-Graham and Antoine Joux. New generic algorithms for hard knapsacks. _Advances in Cryptology – EUROCRYPT 2010, vol 6110_, pages 235–256, 2010.
* Cao and Liu (2018) Zhengjun Cao and Lihua Liu. New algorithms for subset sum problem. _CoRR_, abs/1807.02611, 2018.
* Bellman (1961) Richard Bellman. On the approximation of curves by line segments using dynamic programming. _Communications of the ACM_, 4, No. 6, 1961.
* D'Atri and Puech (1982) Gianfranco D'Atri and Claude Puech. Probabilistic analysis of the subset-sum problem. _Discrete Applied Mathematics_, Volume 4, Issue 4, 1982.
* Koiliaris and Xu (2018) Konstantinos Koiliaris and Chao Xu. Subset sum made simple, 2018.
* Fischer (2010) Hans Fischer. _A History of the Central Limit Theorem_. Springer New York Dordrecht Heidelberg London, 2010.
* Zhao et al. (2004) L. C. Zhao, C. Q. Wu, and Qiying Wang. Berry–Esseen bound for a sample sum from a finite set of independent random variables. _Journal of Theoretical Probability_, 2004.
* Lin (1991) Jianhua Lin. Divergence measures based on the Shannon entropy. _IEEE Transactions on Information Theory_, 37, No. 1, 1991.
* Stein (1972) Charles Stein. A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. _Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability_, 583–602, 1972.
* Lahcene (2013) Bachioua Lahcene. On Pearson families of distributions and its applications. _African Journal of Mathematics and Computer Science Research_, 6(5), 2013.
* Slifker and Shapiro (1980) James F. Slifker and Samuel S. Shapiro. The Johnson system: selection and parameter estimation. _Technometrics_, Vol. 22, No. 2, 1980.
* Goodrich and Tamassia (2015) Michael T. Goodrich and Roberto Tamassia. _Algorithm Design and Applications_. Wiley, 2015.
* Martello (1990) Silvano Martello. _Knapsack problems: algorithms and computer implementations_. Wiley, 1990.
* Pelletier and Sidhu (2001) J. Pelletier and S. Sidhu.
Mapping protein-protein interactions with combinatorial biology methods. _Curr. Opin. Biotechnol._ , 340–7, 2001. * Ferrari (2019) Alfredo Ferrari. A note on sum and difference of correlated chi-squared variables. _arXiv_ , 1906.09982, 2019.
# Entanglement entropy in the Ising model with topological defects

Ananda Roy <EMAIL_ADDRESS> Department of Physics and Astronomy, Rutgers University, Piscataway, NJ 08854-8019 USA

Hubert Saleur Institut de Physique Théorique, Paris Saclay University, CEA, CNRS, F-91191 Gif-sur-Yvette; Department of Physics and Astronomy, University of Southern California, Los Angeles, CA 90089-0484, USA

###### Abstract

Entanglement entropy (EE) contains signatures of many universal properties of conformal field theories (CFTs), especially in the presence of boundaries or defects. In particular, topological defects are interesting since they reflect internal symmetries of the CFT, and have been extensively analyzed with field-theoretic techniques with striking predictions. So far, however, no lattice computation of EE has been available. Here, we present an ab-initio analysis of EE for the Ising model in the presence of a topological defect. While the behavior of the EE depends, as expected, on the geometric arrangement of the subsystem with respect to the defect, we find that zero-energy modes give rise to crucial finite-size corrections. Importantly, contrary to the field-theory predictions, the universal subleading term in the EE when the defect lies at the edge of the subsystem arises entirely due to these zero-energy modes and is not directly related to the modular S-matrix of the Ising CFT.

Entanglement plays a central role in the development of long-range correlations in quantum critical phenomena. Thus, quantification of the entanglement in a quantum-critical system provides a way to characterize the universal properties of the critical point. The von-Neumann entropy is a natural candidate to perform this task. For zero-temperature ground-states of 1+1D quantum-critical systems described by conformal field theories (CFTs), the von-Neumann entropy [equivalently, entanglement entropy (EE)] for a subsystem exhibits universal logarithmic scaling with the subsystem size [1, 2]. The coefficient of this scaling determines a fundamental property of the bulk CFT: the central charge. For finite systems with boundaries at a conformal critical point, the EE receives universal, subleading, boundary-dependent contributions, the so-called ‘boundary entropy’ [3] – a central concept in a variety of physical problems both in condensed matter physics and string theory. This boundary contribution to the EE provides a valuable diagnostic for identifying the different boundary fixed points of a given CFT [2, 4, 5, 6, 7]. While entanglement has been analyzed extensively in CFTs with and without boundaries, its behavior is much less understood in the presence of defects. The question is particularly intriguing since entanglement measures may provide an alternate way to classify defects in CFTs. Of particular interest are topological (perfectly-transmissive) defects [8, 9, 10, 11, 12]. These defects commute with the generators of conformal transformations and thus, can be deformed without affecting the values of the correlation functions as long as they are not taken across field insertions (hence the moniker topological). They reflect the internal symmetries of the CFT and relate the order-disorder dualities of the CFT to the high-low temperature dualities of the corresponding off-critical model [10, 13, 14]. They also play an important role in the study of anyonic chains and in the correspondence between CFTs and three-dimensional topological field theories [15].
It is natural to analyze EE in the presence of topological defects. Two distinct geometries have been considered in the literature: i) where the defect is entirely within the subsystem, and ii) where the defect is located precisely at the interface between the subsystem and the rest. While both cases exhibit identical leading-order logarithmic scaling with subsystem size, they differ when subleading, i.e., $O(1)$, corrections are taken into account. In the first case, after usual folding maneuvers, the subleading term can be equated to a boundary entropy with double the bulk degrees of freedom [16, 17, 18]. In this way, the subleading correction to the symmetric EE can be computed analytically for all rational CFTs [19]. In the second case, the subleading term for the interface EE is much more difficult to obtain. Field-theory computations based on the replica trick for the free, real boson [20], the free, real fermion [21] (see also [22, 23, 24]) and generalization to all rational CFTs [19] using the corresponding twisted torus partition functions [8] provide results whose validity has never been tested with ab-initio, lattice computations. Such a computation is particularly important since the mapping to the twisted torus partition function does not faithfully capture the geometric arrangement of the subsystem with respect to the defect. This is important for subleading terms in the interface EE which are, in fact, the signatures of the topological defect. Our goal, in this paper, is to investigate the question in the simplest possible case, the Ising model, using ab-initio calculations. These calculations are more complex than is usually the case for EE due to the presence of zero-energy modes. The latter are a salient feature of the topological nature of the defect. While the effects of zero-modes have been extensively quantified in gapped systems due to their relevance in topologically ordered systems, the same is much less understood in critical systems. Here, we show that these zero-modes give rise to nontrivial contributions to the EE when the subsystem size is comparable to the size of the whole system, similar to the case of a periodic ring of free fermions [25, 26]. Thus, these zero-modes profoundly affect corrections to scaling. After the role of these zero-modes is properly taken into account, we find results that are as expected for case (i), but are not compatible with the formulas proposed in the string/field theory literature [21, 19] for case (ii).

Figure 1: Schematic of a single defect in the critical Ising chain with periodic boundary conditions in the spin (left) and fermionic (right) picture. Away from the defect, the Hamiltonian contains nearest-neighbor ferromagnetic $\sigma_{i}^{x}\sigma_{i+1}^{x}$ interaction (black lines) and onsite transverse field $\sigma_{i}^{z}$ (blue lines). The defect (purple box) concerns the spins located at sites $L$ and 1. The energy (duality) defect part of the Hamiltonian is $b_{\epsilon}\sigma_{L}^{x}\sigma_{1}^{x}/2+\sigma_{L}^{z}/2$ ($b_{\sigma}\sigma_{L}^{y}\sigma_{1}^{x}/2$). After Jordan-Wigner (JW) transformation, the ferromagnetic coupling corresponds to an interaction of the form $\gamma_{2i}\gamma_{2i+1}$, while the transverse field corresponds to $\gamma_{2i-1}\gamma_{2i}$. For the duality defect, there is a zero-mode localized at $\gamma_{2L}$. Another zero-mode is delocalized throughout the system (see below).
The Ising model provides a particularly appropriate testbed for the EE in the presence of topological defects due to the following. First, the lattice Hamiltonians are well-understood [27, 16]. Second, the zero-mode structure is the simplest and yet sufficient to explain the main concepts. Third, the defect Hamiltonian can be mapped, after Jordan-Wigner (JW) transformation, to a bilinear fermionic Hamiltonian. The latter can be diagonalized semi-analytically and leads to very accurate predictions of EE [28, 29, 30]. To emphasize the nontrivial nature of the topological defect, we compare our results with the non-topological defects in the model. There are two nontrivial classes of defects in the Ising CFT: energy ($\epsilon$) and duality ($\sigma$). We consider a periodic Ising chain with the defect concerning the spins at sites $L$ and 1 [31, 32, 27]. The Hamiltonians are: $H_{\epsilon,\sigma}=H_{0}-h_{\epsilon,\sigma}$ (see Fig. 1). Here, the bulk Hamiltonian term is $H_{0}=-\sum_{i=1}^{L-1}\sigma_{i}^{x}\sigma_{i+1}^{x}/2-\sum_{i=1}^{L-1}\sigma_{i}^{z}/2$. The two defect terms are $h_{\epsilon}=b_{\epsilon}\sigma_{L}^{x}\sigma_{1}^{x}/2+\sigma_{L}^{z}/2$ and $h_{\sigma}=b_{\sigma}\sigma_{L}^{y}\sigma_{1}^{x}/2$ [33]. The energy defect consists of one bond with altered strength $b_{\epsilon}$ connecting spins at sites $L$ and 1. In particular, $b_{\epsilon}=0,+1$ and $-1$ correspond to Ising models with open, periodic and antiperiodic boundary conditions (bcs) respectively. Unlike the energy defect, the duality defect consists of a $\sigma_{L}^{y}\sigma_{1}^{x}$ interaction [34]. Equally important, there is no transverse field at the $L^{\rm th}$ site. The duality defect for $b_{\sigma}=1$ (equivalently $b_{\sigma}=-1$, which is related by a local unitary rotation) is the topological defect for the Ising CFT. Next, we perform a JW transformation: $\gamma_{2k-1}=\sigma_{k}^{x}\prod_{j=1}^{k-1}\sigma_{j}^{z}$, $\gamma_{2k}=\sigma_{k}^{y}\prod_{j=1}^{k-1}\sigma_{j}^{z}$, where the $\gamma_{j}$-s are real Majorana fermion operators obeying $\{\gamma_{j},\gamma_{k}\}=2\delta_{j,k}$. In the fermionic language, the defect Hamiltonians are $H_{\epsilon,\sigma}^{f}=H_{0}^{f}-h_{\epsilon,\sigma}^{f}$, where

$H_{0}^{f}=\frac{i}{2}\sum_{j=1}^{L-1}\gamma_{2j}\gamma_{2j+1}+\frac{i}{2}\sum_{j=1}^{L-1}\gamma_{2j-1}\gamma_{2j},\qquad(1)$

$h_{\epsilon}^{f}=\frac{ib_{\epsilon}}{2}\gamma_{2L}\gamma_{1}-\frac{i}{2}\gamma_{2L-1}\gamma_{2L},\qquad h_{\sigma}^{f}=-\frac{ib_{\sigma}}{2}\gamma_{2L-1}\gamma_{1}.\qquad(2)$

Here we have restricted ourselves to the symmetry sector $Q=\prod_{j=1}^{L}\sigma_{j}^{z}=1$ [35].
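The zero-mode structure of $H_{\sigma}^{f}$ can be checked directly at the level of the quadratic form. The sketch below (our own check, not part of the paper's analysis) assumes the convention $H_{\sigma}^{f}=\frac{i}{4}\sum_{a,b}A_{ab}\gamma_{a}\gamma_{b}$ with $A$ antisymmetric, builds $A$ from Eqs. (1)-(2), and verifies that the row and column associated with $\gamma_{2L}$ vanish and that the coefficient vector of the delocalized combination $\Lambda(b_{\sigma})=\sum_{k=1}^{L}\gamma_{2k-1}+b_{\sigma}\sum_{k=1}^{L-1}\gamma_{2k}$ (discussed in the next paragraph) is annihilated by $A$.

```python
import numpy as np

def coupling_matrix(L, b_sigma):
    """Antisymmetric A for H_sigma^f = (i/4) sum_ab A_ab gamma_a gamma_b
    (convention assumed here); Majorana labels 1..2L map to indices 0..2L-1."""
    A = np.zeros((2 * L, 2 * L))
    def add(a, b, c):                    # adds a term (i c / 2) gamma_a gamma_b
        A[a, b] += c
        A[b, a] -= c
    for j in range(1, L):
        add(2 * j - 1, 2 * j, 1.0)       # (i/2) gamma_{2j} gamma_{2j+1}
        add(2 * j - 2, 2 * j - 1, 1.0)   # (i/2) gamma_{2j-1} gamma_{2j}
    add(2 * L - 2, 0, b_sigma)           # defect term (i b_sigma / 2) gamma_{2L-1} gamma_1
    return A

L, b = 8, 1.0
A = coupling_matrix(L, b)

# gamma_{2L} (index 2L-1) never appears in H_sigma^f: its row and column vanish.
print(np.allclose(A[2 * L - 1], 0) and np.allclose(A[:, 2 * L - 1], 0))

# Delocalized zero mode: Lambda = sum_k gamma_{2k-1} + b * sum_{k<L} gamma_{2k}.
v = np.zeros(2 * L)
v[0::2] = 1.0                 # odd Majoranas gamma_1, gamma_3, ..., gamma_{2L-1}
v[1:2 * L - 2:2] = b          # even Majoranas gamma_2, ..., gamma_{2L-2}
print(np.allclose(A @ v, 0))  # A v = 0, i.e., Lambda commutes with H_sigma^f
```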
For the energy defect, from the definition of $H_{\epsilon}^{f}$, we recover the well-known fact that the periodic (antiperiodic) coupling at the boundary for the spin model corresponds to antiperiodic (periodic) coupling for the fermionic model. In particular, the periodic fermionic model ($b_{\epsilon}=-1$) contains two nonlocal Majorana zero-modes, which together are responsible for the two-fold degenerate ground-state of the fermionic model. For the duality defect, the operator $\gamma_{2L}$ does not occur in $H_{\sigma}^{f}$. It commutes with the Hamiltonian: $[\gamma_{2L},H_{\sigma}^{f}]=0$, and anticommutes with the conserved $\mathbb{Z}_{2}$ charge: $\{\gamma_{2L},Q\}=0$. Thus, it is a zero-mode of the model which is perfectly localized in space. It has a partner zero-mode which is completely delocalized: $\Lambda(b_{\sigma})=\sum_{k=1}^{L}\gamma_{2k-1}+b_{\sigma}\sum_{k=1}^{L-1}\gamma_{2k}$. Note that the zero-modes exist for all values of $b_{\sigma}$ and are not special features of the topological point. The fermionic Hamiltonian also reaffirms a CFT result [27]: $H_{\sigma}^{f}$ describes a chain of $2L-1$ Majorana fermions or, equivalently, $L-1/2$ spins. This is important for quantifying finite-size effects.

Now, we compute symmetric and interface EEs for the ground states of the fermionic Hamiltonians $H_{\epsilon,\sigma}^{f}$. Since the latter are bilinear in the fermionic operators, we compute the relevant EEs from the ground-state correlation matrix [28, 29, 30]. The latter is calculated from the ground state by filling up the negative energy states. The method is unambiguous in the absence of zero-energy modes. However, in the presence of the latter (e.g., $b_{\epsilon}=-1$ or any $b_{\sigma}$), it raises the question: are the zero-energy states empty or occupied in the ground state? Yet another possibility is to consider an incoherent superposition of filled and empty states. This leads to the total system being in a mixed state, but is appropriate when taking the zero-temperature limit of a thermal ensemble [25]. The question is crucial to the computation since zero-energy modes nontrivially affect the EE. For $b_{\epsilon}=-1$, the zero-modes give rise to nontrivial corrections to the EE of a subsystem of size $r$ within a total system of size $L$ [25, 26]. The correction is $\Delta S(r/L)=\frac{\pi r}{L}\int_{0}^{\infty}\tanh(\pi rh/L)[\coth(\pi h)-1]\,dh$. For $r\ll L$, the EE is oblivious to the existence of the two nonlocal zero-modes spread throughout the system: $\Delta S\simeq\pi^{2}r^{2}/12L^{2}\rightarrow 0$. The situation changes as the subsystem occupies an appreciable fraction of the total system ($r\sim L$), culminating in $\Delta S(r=L)=\ln 2$, the latter being the entropy of the two-fold degenerate ground state of a periodic chain of fermions. Below, we present analogous results for the topological defect, considering separately the cases when the total system is in a pure and mixed state for the relevant fermionic models [36].
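The quoted correction can be checked numerically. The short sketch below (our own verification using scipy, not part of the original analysis) confirms that the integral reproduces the limits cited in the text: $\Delta S(1)=\ln 2$, $\Delta S(1/2)=\ln 2-1/2$, and the small-$r/L$ behaviour $\pi^{2}r^{2}/12L^{2}$.

```python
import numpy as np
from scipy.integrate import quad

def delta_S(x):
    """Zero-mode correction Delta S(r/L) for x = r/L, from the integral quoted above."""
    integrand = lambda h: np.pi * x * np.tanh(np.pi * x * h) * (1.0 / np.tanh(np.pi * h) - 1.0)
    val, _ = quad(integrand, 1e-12, np.inf)   # start just above 0 to skip the removable 0*inf point
    return val

print(f"Delta S(1)    = {delta_S(1.0):.6f}   (ln 2       = {np.log(2):.6f})")
print(f"Delta S(1/2)  = {delta_S(0.5):.6f}   (ln 2 - 1/2 = {np.log(2) - 0.5:.6f})")
x = 0.01
print(f"Delta S(0.01) = {delta_S(x):.3e}   (pi^2 x^2/12 = {np.pi**2 * x**2 / 12:.3e})")
```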
Figure 2: Results for the symmetric EE ($S_{s}$) for a periodic chain of size $L=3000$ with a single defect and $10<r<100$ (left panel) and $100<r<L/2$ (right panels). The blue crosses are obtained for $b_{\epsilon}=1$ (i.e., no defect). The maroon (green) hexagons are for the energy defect $b_{\epsilon}=-1$ (duality defect $b_{\sigma}=1$), when the total system is in a mixed state. The differences with $S_{s}(b_{\epsilon}=1)$ are $\Delta S(r/L)$ and $\Delta S(r/L)/2+(\ln 2)/2$ for the two cases. The corresponding predictions are denoted by maroon and green pluses. Compared to $b_{\epsilon}=-1$, the EE for $b_{\sigma}=1$ has an additional offset $(\ln 2)/2$ even for $r/L\rightarrow 0$ due to the zero-mode $\gamma_{2L}$ localized at the center of the interval. As $r/L$ increases, the EE for $b_{\sigma}=1$, due to the single nonlocal zero mode, $\Lambda(b_{\sigma}=1)$, receives a contribution half as large as that for $b_{\epsilon}=-1$, which has two such modes. The maroon squares (green diamonds) present the results when the total system is in a pure state (the zero-energy state being filled or empty) as opposed to a mixed state. The results for $b_{\epsilon}=\pm 1$ are indistinguishable. However, relative to the energy defect, the $b_{\sigma}=1$ case shows an offset $\Delta S(1-r/L)/2$, which again reduces to $(\ln 2)/2$ for $r/L\ll 1$.

Figure 3: (Left panel) Results for the interface EE ($S_{\cal I}$) for a periodic chain of $L=500$ with a single defect. One end of the subsystem is at the defect and the other end sweeps through the system. The $b_{\epsilon}=-1$ case, both when the system is in an incoherent superposition (maroon hexagons) and a pure state (maroon squares), can be directly understood from the symmetric EE (Fig. 2). The topological defect results are identical for mixed (green hexagons) and pure (green diamonds) total-system state. Then, the interface EE is best compared to that of a spin-chain of size $L-1/2$ without defects [see Fig. 1 and discussion below Eq. (1)]. The difference between the two EEs is $\Delta S[r/(L-1/2)]+\delta_{r,L}(\ln 2)/2$ (orange pluses). For comparison, the corresponding predictions obtained by comparing to a size-$L$ chain without defect (green pluses) are also plotted. For $r$ not too close to $L$, the difference is negligible. But, zooming in around $r\sim L$ (inset) makes the difference manifest. (Right panel) We show the scaling of the interface EE for $r=L/2$ as a function of $\ln L$. Every curve exhibits the same leading-order scaling yielding a central charge $c\simeq 0.5$. However, the offset with respect to the no-defect case for $b_{\epsilon}=-1$ is $\Delta S(1/2)=-1/2+\ln 2$ when the total system is mixed and 0 when pure. On the other hand, the corresponding offsets for both pure and mixed cases for $b_{\sigma}=1$ are $\Delta S(1/2)/2=-1/4+(\ln 2)/2$.

First, we compute the symmetric EE (Fig. 2). The results for $b_{\epsilon}=1$ (no defect) are shown with blue crosses. The symmetric EE exhibits the expected logarithmic scaling: $S_{s}(b_{\epsilon}=1)=\frac{c}{3}\ln\big{[}\frac{L}{\pi}\sin\big{(}\frac{\pi r}{L}\big{)}\big{]}+S_{0}$ for all values of $r/L$. We set the lattice spacing to 1 throughout this work. Fitting this expression yields the expected central charge $c\simeq 0.5$ and $S_{0}\simeq 0.478$. The maroon (green) hexagons correspond to the symmetric EE for $b_{\epsilon}=-1$ ($b_{\sigma}=1$) when the total system is in an incoherent superposition of the zero-energy states being filled and empty. Compared to $S_{s}(b_{\epsilon}=1)$, the symmetric EEs ($S_{s}^{m}$, $m$ denoting the total system being mixed) for the two cases get an additional contribution of $\Delta S(r/L)$ [26] and $\Delta S(r/L)/2+(\ln 2)/2$. Thus, when $r/L\ll 1$, for both $b_{\epsilon}=-1$ and $b_{\sigma}=1$, $S_{s}^{m}$ exhibits the expected logarithmic dependence with $c\simeq 0.5$ (left panel). However, the offset, $S_{0}$, for $S_{s}^{m}(b_{\sigma}=1)$ is $(\ln 2)/2$ higher.
This higher offset unambiguously distinguishes the topological defect from the energy defect and is because of the localized unpaired Majorana zero mode at the center of the subsystem (the result will be the same as long as the defect lies within, and not at the edge of, the subsystem). This is consistent with the identification of this defect problem with a boundary CFT problem at the ‘continuous Neumann boundary fixed-point’ after folding, with the corresponding $g$-function $=\sqrt{2}$ [16] (see also Ref. [37]). This should be compared with the ‘continuous Dirichlet boundary fixed point’ ($b_{\epsilon}=-1$ case), which has $g$-function $=1$ and thus no additional boundary entropy contribution. For $b_{\sigma}=1$, increasing $r/L$ leads to a further offset of $\Delta S(r/L)/2$ due to the contribution from the second nonlocal zero-mode. Here, the factor of 1/2 accounts for the difference in the number of nonlocal zero-modes in the $b_{\epsilon}=-1$ and $b_{\sigma}=1$ models. The maroon squares (green diamonds) correspond to the symmetric EEs obtained by keeping the zero-energy state empty (denoted by $S_{s}^{p}$, $p$ denoting the total system being pure; the results are identical for the filled case). Now, the results for $b_{\epsilon}=\pm 1$ are indistinguishable. However, compared to the case without defects, $S_{s}^{p}(b_{\sigma}=1)$ exhibits a $\Delta S(1-r/L)/2$ offset. For $r\ll L$, this again leads to the offset, $S_{0}$, being $(\ln 2)/2$ higher than in the other cases. For $r\sim L$, $S_{s}^{p}$ diminishes compared to the case when the total system is mixed, due to the purity of the total system.

Next, we compute the interface EE (Fig. 3), where one end of the subsystem (of size $r$) is located at the defect and the other end sweeps through the system (of size $L$) [38]. For $b_{\epsilon}=-1$, the results are identical to the symmetric case. This is expected since the resulting model is just a fermionic model with periodic bc and there is no difference between symmetric and interface EEs. The results for $b_{\sigma}=1$ are the same for the total system in a mixed (green hexagons) and pure (green diamonds) state. Then, $S_{\cal I}^{m,p}(b_{\sigma}=1)$ is given by the EE of an Ising chain of length $L-1/2$ without any defects [$S_{{\cal I},L-1/2}(b_{\epsilon}=1)$] together with an offset. The first contribution to this offset is $\Delta S[r/(L-1/2)]$. It arises as the subsystem size grows and becomes aware of the nonlocal zero-mode $\Lambda(b_{\sigma})$ in the chain of length $L-1/2$. The second contribution, $\delta_{r,L}(\ln 2)/2$, arises only when $r=L$. This is due to the localized zero-mode, $\gamma_{2L}$, which contributes only when the subsystem covers the entire system. The resulting prediction is shown with orange pluses. For comparison, we have also shown the curves obtained by computing the corresponding offsets for the system size $L$ (green pluses). For $r$ not close to $L$, both predictions work well. However, for $L-r\sim 1$, only the computation with system size $L-1/2$ leads to the correct predictions (see inset). The field-theory computations are usually done for $1\ll r,L$ with $r$ not too close to $L$. For definiteness, we consider the scaling of $S_{\cal I}(r=L/2)$ with $\ln L$ [19, 21]. Fitting to $S_{\cal I}(r=L/2)=(c/3)\ln L+\tilde{S}_{0}$ [39]
yields the expected central charge $c\simeq 0.5$ for all the curves. Recall that for $0\leq b_{\sigma}<1$, the coefficient of $\ln L$ is $c_{\rm eff}/3$, where the ‘effective central charge’ $c_{\rm eff}\in[0,1/2)$ [22, 21]. The difference of the offsets, $\tilde{S}_{0}$, between the $b_{\epsilon}=-1$ and $b_{\epsilon}=1$ cases is $2\Delta_{I}=\Delta S(1/2)=-1/2+\ln 2$ or 0, depending on the total system being in a mixed or pure state. For the topological case, for both pure and mixed states, the corresponding offset difference is $\Delta_{I}=\Delta S(1/2)/2=-1/4+(\ln 2)/2$. Importantly, this offset occurs entirely due to a ‘finite-size effect’ correction arising from the existence of nonlocal zero-modes and bears no relationship to the specific modular S-matrix elements predicted in Refs. [19, 21].

To summarize, we computed the symmetric and interface EEs for the Ising CFT with a topological defect, taking into account the subtle effects of the zero-modes on the EEs. We showed that while both EEs exhibit identical leading-order logarithmic scaling, the subleading $O(1)$ corrections are of completely different origin. The subleading term [$=(\ln 2)/2$] for the symmetric EE is related to the $g$-function of the corresponding defect at the boundary in the folded picture. However, the corresponding term [$\Delta S(r/L)/2$] in the interface EE arises only when the subsystem occupies a finite fraction of the total system and is entirely due to the local and nonlocal zero-modes of the topological defect Hamiltonian. In the limit $r\ll L$, there is no additional offset compared with the case without defect. The interface EE result is in sharp contrast with the existing predictions in terms of the modular S-matrix of the CFT [21, 19], which predict an offset equal to $-\ln 2$ instead. We also computed the EEs in the case of open/free boundary conditions with a defect at the center of the chain [40]. The results for the symmetric EE were identical to those obtained for the periodic chain. The offset of the interface EE is the same as for the periodic chain when the total system is pure. The total system being mixed contributes another $(\ln 2)/2$ to the offset. Several features of the topological defect persist away from the topological point, i.e., $b_{\sigma}\neq 1$, and even away from the conformal critical point. We plan to address some of these questions elsewhere. Defect Hamiltonians for other rational CFTs contain a more complicated set of zero modes [41]. The question of subleading corrections in EEs for these models remains open. Finally, advancements in the measurement of Rényi entropies in engineered quantum systems [42, 43] and in quantum simulation [44] can lead to potential verification of our analytical predictions.

We thank Natan Andrei, Pasquale Calabrese, Christopher Herzog and Ingo Peschel for discussions. We also thank David Rogerson and Frank Pollmann for discussions and collaborations on a related project. AR acknowledges support from a grant from the Simons Foundation (825876, TDN). HS was supported by ERC Advanced Grant NuQFT.

## References

* Holzhey _et al._ [1994] C. Holzhey, F. Larsen, and F. Wilczek, Geometric and renormalized entropy in conformal field theory, Nuclear Physics B 424, 443 (1994).
* Calabrese and Cardy [2004] P. Calabrese and J. Cardy, Entanglement entropy and quantum field theory, Journal of Statistical Mechanics: Theory and Experiment 2004, P06002 (2004).
* Affleck and Ludwig [1991] I. Affleck and A. W. W.
Ludwig, Universal noninteger “ground-state degeneracy” in critical quantum systems, Phys. Rev. Lett. 67, 161 (1991). * Calabrese and Cardy [2009] P. Calabrese and J. Cardy, Entanglement entropy and conformal field theory, J. Phys. A42, 504005 (2009), arXiv:0905.4013 [cond-mat.stat-mech] . * Affleck _et al._ [2009] I. Affleck, N. Laflorencie, and E. S. Sørensen, Entanglement entropy in quantum impurity systems and systems with boundaries, Journal of Physics A: Mathematical and Theoretical 42, 504009 (2009). * Roy _et al._ [2020] A. Roy, F. Pollmann, and H. Saleur, Entanglement Hamiltonian of the 1+1-dimensional free, compactified boson conformal field theory, J. Stat. Mech. 2008, 083104 (2020), arXiv:2004.14370 [cond-mat.stat-mech] . * Roy _et al._ [2021] A. Roy, D. Schuricht, J. Hauschild, F. Pollmann, and H. Saleur, The quantum sine-Gordon model with quantum circuits, Nucl. Phys. B 968, 115445 (2021), arXiv:2007.06874 [quant-ph] . * Petkova and Zuber [2001] V. B. Petkova and J. B. Zuber, Generalized twisted partition functions, Phys. Lett. B 504, 157 (2001), arXiv:hep-th/0011021 . * Bachas _et al._ [2002] C. Bachas, J. de Boer, R. Dijkgraaf, and H. Ooguri, Permeable conformal walls and holography, JHEP 06, 027, arXiv:hep-th/0111210 . * Fröhlich _et al._ [2004] J. Fröhlich, J. Fuchs, I. Runkel, and C. Schweigert, Kramers-wannier duality from conformal defects, Phys. Rev. Lett. 93, 070601 (2004). * Frohlich _et al._ [2007] J. Frohlich, J. Fuchs, I. Runkel, and C. Schweigert, Duality and defects in rational conformal field theory, Nucl. Phys. B 763, 354 (2007), arXiv:hep-th/0607247 . * Aasen _et al._ [2016] D. Aasen, R. S. K. Mong, and P. Fendley, Topological Defects on the Lattice I: The Ising model, J. Phys. A 49, 354001 (2016), arXiv:1601.07185 [cond-mat.stat-mech] . * Kramers and Wannier [1941] H. A. Kramers and G. H. Wannier, Statistics of the two-dimensional ferromagnet. part i, Phys. Rev. 60, 252 (1941). * Savit [1980] R. Savit, Duality in field theory and statistical systems, Rev. Mod. Phys. 52, 453 (1980). * Buican and Gromov [2017] M. Buican and A. Gromov, Anyonic Chains, Topological Defects, and Conformal Field Theory, Commun. Math. Phys. 356, 1017 (2017), arXiv:1701.02800 [hep-th] . * Oshikawa and Affleck [1997] M. Oshikawa and I. Affleck, Boundary conformal field theory approach to the critical two-dimensional ising model with a defect line, Nuclear Physics B 495, 533 (1997). * Saleur [1998] H. Saleur, Lectures on nonperturbative field theory and quantum impurity problems, (1998), arXiv:cond-mat/9812110 . * Saleur [2000] H. Saleur, Lectures on nonperturbative field theory and quantum impurity problems: Part 2, (2000), arXiv:cond-mat/0007309 . * Gutperle and Miller [2016] M. Gutperle and J. D. Miller, A note on entanglement entropy for topological interfaces in RCFTs, JHEP 04, 176, arXiv:1512.07241 [hep-th] . * Sakai and Satoh [2008] K. Sakai and Y. Satoh, Entanglement through conformal interfaces, JHEP 12, 001, arXiv:0809.4548 [hep-th] . * Brehm and Brunner [2015] E. M. Brehm and I. Brunner, Entanglement entropy through conformal interfaces in the 2D Ising model, JHEP 09, 080, arXiv:1505.02647 [hep-th] . * Eisler and Peschel [2010] V. Eisler and I. Peschel, Entanglement in fermionic chains with interface defects, Annalen der Physik 522, 679 (2010). * Peschel and Eisler [2012] I. Peschel and V. Eisler, Exact results for the entanglement across defects in critical chains, Journal of Physics A: Mathematical and Theoretical 45, 155301 (2012). * Calabrese _et al._ [2012] P. 
Calabrese, M. Mintchev, and E. Vicari, Entanglement Entropy of Quantum Wire Junctions, J. Phys. A 45, 105206 (2012), arXiv:1110.5713 [cond-mat.stat-mech] . * Herzog and Nishioka [2013] C. P. Herzog and T. Nishioka, Entanglement Entropy of a Massive Fermion on a Torus, JHEP 03, 077, arXiv:1301.0336 [hep-th] . * Klich _et al._ [2017] I. Klich, D. Vaman, and G. Wong, Entanglement hamiltonians for chiral fermions with zero modes, Phys. Rev. Lett. 119, 120401 (2017). * Grimm [2002] U. Grimm, Spectrum of a duality twisted Ising quantum chain, J. Phys. A 35, L25 (2002), arXiv:hep-th/0111157 . * Vidal _et al._ [2003] G. Vidal, J. I. Latorre, E. Rico, and A. Kitaev, Entanglement in quantum critical phenomena, Phys. Rev. Lett. 90, 227902 (2003), arXiv:quant-ph/0211074 . * Peschel [2003] I. Peschel, Calculation of reduced density matrices from correlation functions, Journal of Physics A: Mathematical and General 36, L205 (2003). * Latorre _et al._ [2004] J. I. Latorre, E. Rico, and G. Vidal, Ground state entanglement in quantum spin chains, Quantum Inf. Comput. 4, 48 (2004). * Henkel _et al._ [1989] M. Henkel, A. Patkós, and M. Schlottmann, The ising quantum chain with defects (i). the exact solution, Nuclear Physics 314, 609 (1989). * Baake _et al._ [1989] M. Baake, P. Chaselon, and M. Schlottmann, The ising quantum chain with defects (ii). the so(2n)kac-moody spectra, Nuclear Physics B 314, 625 (1989). * Note [1] Note that $h_{\epsilon,\sigma}$ are marginal perturbations which, in general, lead to continuously varying critical exponents [45]. * Note [2] This duality defect Hamiltonian is related by a local unitary rotation on the $L^{\rm th}$ spin to the one considered in Refs. [16], which has $\sigma_{L}^{z}\sigma_{1}^{x}$ interaction. We do not use this alternate form since it no longer leads to a bilinear Hamiltonian under JW transformation and cannot be solved by free-fermion techniques. * Note [3] Equivalently, we analyze the mixed sector Hamiltonian [27]: $H_{\epsilon,\sigma}^{m}=P_{+}H_{\epsilon,\sigma}^{f}(Q=+1)+P_{-}H_{\epsilon,\sigma}^{f}(Q=-1)$. * Note [4] See Supplementary Material for details. * Alba _et al._ [2017] V. Alba, P. Calabrese, and E. Tonni, Entanglement spectrum degeneracy and the cardy formula in 1+1 dimensional conformal field theories, Journal of Physics A: Mathematical and Theoretical 51, 024001 (2017). * Note [5] Note that the symmetric EE computation required much larger total system sizes in order to get a good result for offset $(\mathop{ln}\nolimits 2)/2$. * Note [6] Note that for a periodic (open) system, the coefficient of $\mathop{ln}\nolimits L$ is $c/3$ $(c/6)$. * Note [7] See Supplementary Material for details. * Belletête _et al._ [2020] J. Belletête, A. M. Gainutdinov, J. L. Jacobsen, H. Saleur, and T. S. Tavares, Topological defects in periodic rsos models and anyonic chains (2020), arXiv:2003.11293 [math-ph] . * Islam _et al._ [2015] R. Islam, R. Ma, P. M. Preiss, M. Eric Tai, A. Lukin, M. Rispoli, and M. Greiner, Measuring entanglement entropy in a quantum many-body system, Nature 528, 77 (2015). * Brydges _et al._ [2019] T. Brydges, A. Elben, P. Jurcevic, B. Vermersch, C. Maier, B. P. Lanyon, P. Zoller, R. Blatt, and C. F. Roos, Probing rényi entanglement entropy via randomized measurements, Science 364, 260–263 (2019). * Monroe _et al._ [2021] C. Monroe, W. Campbell, L.-M. Duan, Z.-X. Gong, A. Gorshkov, P. Hess, R. Islam, K. Kim, N. Linke, G. 
Pagano, et al., Programmable quantum simulations of spin systems with trapped ions, Reviews of Modern Physics 93, 025001 (2021). * Cardy [1987] J. L. Cardy, Continuously varying exponents and the value of the central charge, Journal of Physics A: Mathematical and General 20, L891 (1987).
# Hardware-accelerated Inference for Real-Time Gravitational-Wave Astronomy Alec Gunny Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA LIGO Laboratory, 185 Albany St, MIT, Cambridge, MA 02139, USA The NSF AI Institute for Artificial Intelligence and Fundamental Interactions Dylan Rankin Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA The NSF AI Institute for Artificial Intelligence and Fundamental Interactions Jeffrey Krupa Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA The NSF AI Institute for Artificial Intelligence and Fundamental Interactions Muhammed Saleem School of Physics and Astronomy, University of Minnesota, Minneapolis, Minnesota 55455, USA Tri Nguyen Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA LIGO Laboratory, 185 Albany St, MIT, Cambridge, MA 02139, USA The NSF AI Institute for Artificial Intelligence and Fundamental Interactions Michael Coughlin School of Physics and Astronomy, University of Minnesota, Minneapolis, Minnesota 55455, USA Philip Harris Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA The NSF AI Institute for Artificial Intelligence and Fundamental Interactions Erik Katsavounidis Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA LIGO Laboratory, 185 Albany St, MIT, Cambridge, MA 02139, USA Steven Timm Fermi National Accelerator Laboratory, Batavia, IL 60510, USA Burt Holzman Fermi National Accelerator Laboratory, Batavia, IL 60510, USA ###### Abstract The field of transient astronomy has seen a revolution with the first gravitational-wave detections and the arrival of multi-messenger observations they enabled. Transformed by the first detection of binary black hole and binary neutron star mergers, computational demands in gravitational-wave astronomy are expected to grow by at least a factor of two over the next five years as the global network of kilometer-scale interferometers are brought to design sensitivity. With the increase in detector sensitivity, real-time delivery of gravitational-wave alerts will become increasingly important as an enabler of multi-messenger followup. In this work, we report a novel implementation and deployment of deep learning inference for real-time gravitational-wave data denoising and astrophysical source identification. This is accomplished using a generic Inference-as-a-Service model that is capable of adapting to the future needs of gravitational-wave data analysis. Our implementation allows seamless incorporation of hardware accelerators and also enables the use of commercial or private (dedicated) as-a-service computing. Based on our results, we propose a paradigm shift in low-latency and offline computing in gravitational-wave astronomy. Such a shift can address key challenges in peak-usage, scalability and reliability, and provide a data analysis platform particularly optimized for deep learning applications. The achieved sub-millisecond scale latency will also be relevant for any machine learning-based real-time control systems that may be invoked in the operation of near-future and next generation ground-based laser interferometers, as well as the front-end collection, distribution and processing of data from such instruments. 
We have entered a new era where discoveries in astronomy will be driven by combining observations in gravitational waves, the electromagnetic spectrum, as well as neutrinos, e.g., Refs. (Dietrich et al., 2020; Aartsen, M. G. et al., 2018). This is often referred to as multi-messenger astronomy (MMA), and it has been enabled by the direct detection of gravitational-wave transients (Abbott, 2016; Abbott, B. et al., 2017) with the large ground-based laser interferometers LIGO (Harry, 2010; Aasi, 2015; Aasi et al., 2015) and Virgo (F. Acernese et al., 2015). The data analyses for gravitational-wave transients of both known and unknown signal morphology present a major computational challenge for these instruments as they prepare to enter their fourth observing run (referred to as “O4”) in the summer of 2022 (Abbott et al., 2020). Existing gravitational-wave detection algorithms for compact binary systems rely heavily on parameterized waveforms (templates) and the use of matched-filtering techniques, e.g., Refs. (Usman, 2016; Cannon et al., 2020). Such approaches scale poorly with the expected low-frequency improvement of the instruments as well as with the expanding parameter space needed to cover spin effects and sub-solar mass compact binaries (Abbott, 2019). The anticipated addition of KAGRA (Aso et al., 2013; Somiya, 2012) to the international network of detectors observing the gravitational-wave sky is expected to further increase the computational demands. At the same time, gravitational-wave transients of ill-defined morphology (e.g., supernovae, neutron star glitches and potentially yet-unknown astrophysical systems) are susceptible to instrumental and environmental noise that is hard to simulate and often challenging to subtract (Zevin et al., 2017; Davis et al., 2021). Such noise sources will affect searches for binary systems whose signal duration is long enough to make overlap with non-Gaussian artifacts statistically probable, as in the case of GW170817 (Abbott, B. et al., 2017). Detection of persistent gravitational-wave emission from known and unknown neutron stars, or in the form of stochastic radiation (although not a real-time detection process), may also be hindered by noise artifacts, mostly in the form of line noise appearing in the noise spectrum of the instruments (Davis et al., 2021). While the vast majority of data analysis techniques employed in gravitational-wave searches are built on traditional time-frequency decomposition using Fourier and wavelet transforms, deep learning techniques have recently emerged as potentially powerful solutions to the computational challenges in this field. Neural networks and other gradient-based learning algorithms have been proposed in gravitational-wave analyses for tasks like noise regression (Vajente et al., 2020; Ormiston et al., 2020), astrophysical searches (George & Huerta, 2018) and transient noise classification (Zevin et al., 2017; Mukund et al., 2017). However, significant barriers exist before they can be used in large-scale gravitational-wave analyses, as large networks with complex architectures increase computational demands and incur large inference latencies. For gravitational-wave astronomy, time is of the essence: astrophysical sources of transient gravitational waves, located at cosmological distances, are expected to be faint and fast-fading. Thus, maximizing the time available to search the sky for their counterparts is essential (Metzger, 2017).
Innovative hardware-based acceleration with Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs) has recently been gaining ground within industry and academic research (Mittal & Vetter, 2015; Duarte et al., 2019) as a method for fast machine learning (ML) inference at large scales. The computing landscape available for gravitational-wave searches mostly includes in-house computing clusters over which CPU-intensive workflows are run both for real-time and offline processing (Canton et al., 2021). The recent rise of Heterogeneous High Performance Computing centers (HHPCs) illustrates an alternative computing model, and their rapid growth provides a potential solution for a next-generation gravitational-wave computing model. In order to take full advantage of accelerators, modifications must be made to the standard model of computing, in which pipelines directly manage the accelerated resources they use for execution. An alternative model, which has gained popularity in other fields, is called “as-a-service” (Krupa, 2021; Wang et al., 2020). When used specifically to denote accelerated ML inference, it is referred to as Inference-as-a-Service (IaaS). In this IaaS model, trained ML models are hosted in a centralized repository, from which an inference application loads and exposes them to requests from networked clients. The user then requests inferences via a client by sending packets of inputs to the server via HTTP or gRPC (a remote procedure call framework often used in distributed computing). The details of the server are abstracted from the user by standard client Application Programming Interfaces (APIs). This allows the simple integration and management of heterogeneous computing resources as well as parallel and asynchronous execution of inference requests. In the following, we will analyze the advantages of the IaaS paradigm using two deep learning models. The first is DeepClean (Ormiston et al., 2020), a 1-D convolutional autoencoder able to predict and subtract noise present in the gravitational-wave sensing channel (referred to as “strain”). Gravitational-wave interferometers are subject to technical and environmental noise that ultimately limits the instruments’ ability to reach their design sensitivity. The bulk of the data written to disk by these instruments corresponds to auxiliary channels that monitor and record the interferometry as well as the physical environment. This allows effective monitoring and regression of transient or continuous noise from the gravitational-wave measurement. In order to achieve this, about 200,000 such auxiliary channels, resulting in over 10 MBps of time series data from each interferometer, are continually written to disk during normal data acquisition (Abbott et al., 2016b). We generally refer to them as “witness” channels for their ability to record noise that may affect the measurement and their role in assisting noise subtraction from the strain channel. DeepClean uses information from the auxiliary channels correlated with the strain channel in order to achieve noise reduction. It can also be customized to specific noise couplings for a variety of applications (Ormiston et al., 2020). In the use-case described here, we use 21 auxiliary channels that are good witnesses of noise appearing in the strain channel and are associated with monitor lines of the power mains and their harmonics, including sidebands.
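To make the request path concrete, below is a minimal, illustrative sketch of how a client could submit a single inference request to a Triton server over gRPC using the `tritonclient` Python package. The model name, tensor names, input shape, and server address are hypothetical placeholders rather than the configuration actually used in this work.

```python
import numpy as np
import tritonclient.grpc as grpcclient

# Hypothetical example values: 21 witness channels, 1 s of data at 4096 Hz.
MODEL_NAME = "deepclean"       # placeholder model name
SERVER_URL = "localhost:8001"  # placeholder gRPC endpoint

client = grpcclient.InferenceServerClient(url=SERVER_URL)

# Package a batch of witness-channel data as the model input.
witness = np.random.randn(1, 21, 4096).astype(np.float32)
infer_input = grpcclient.InferInput("witness", list(witness.shape), "FP32")
infer_input.set_data_from_numpy(witness)

# Request the noise prediction produced by the model.
requested_output = grpcclient.InferRequestedOutput("noise_prediction")

# The server handles scheduling, batching, and hardware placement;
# the client only sees a simple request/response API.
response = client.infer(
    model_name=MODEL_NAME,
    inputs=[infer_input],
    outputs=[requested_output],
)
noise_estimate = response.as_numpy("noise_prediction")
print(noise_estimate.shape)
```

The same client-side API is used regardless of whether the server executes the model on CPUs or GPUs, which is what allows the hardware details to remain hidden from the analysis pipeline.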
The second model we will use to demonstrate the IaaS framework, called BBHnet (Nguyen, 2021), is used in the archetypal search for compact binary black hole coalescences. BBHnet utilizes 1-D convolutional neural networks in order to distinguish between binary black holes and detector noise, with the noise- regressed strain signal derived by DeepClean as its input. The combination of DeepClean and BBHnet offers an end-to-end test of a pipeline that combines both data-cleaning and astrophysical searches. The task of binary black hole identification is a challenging one, but deep learning has been shown to be an effective replacement (George & Huerta, 2018) for the template matching algorithms that are currently used. The number of templates required for such algorithms can grow to the millions as searches expand to cover neutron stars and systems with sub-solar component masses (Abbott, 2019). Moreover, the continuing improvement of the low-frequency sensitivity of interferometers and the addition of new detectors in the international gravitational-wave network (Abbott et al., 2020) also grow the template banks used in GW searches. Deep learning is only expected to become more critical in this regime due to the large number of free parameters in the compact binary star system numerical problem. At the same time, the ability to perform such gravitational-wave searches in real-time enables their follow-up via the electromagnetic spectrum and neutrinos, thus enabling multi-messenger astronomy. To deploy these pipelines, we use an out-of-the-box inference service developed by NVIDIA called the Triton Inference Server (NVIDIA, 2021). Triton supports concurrent inference execution on both CPUs and GPUs using multiple framework backends. It is provided as a containerized application to make it portable to different deployment locations. Triton automatically detects changes in the model repository from which it reads, and will update the models that it hosts in-memory in response to updates according to a prescribed versioning policy. This ensures that all services, and their users, are kept in-sync with the latest developments and with one another. This is particularly beneficial for computing scenarios where a distributed user base is interested in accessing centrally-managed server resources. Inference scenarios in gravitational-wave data analyses can be broadly separated into online and offline categories. The online inference scenario requires low latency inferences for real-time processing of live streams during data collection runs. The offline scenario, on the other hand, involves large-scale processing of archival data for use-cases such as model validation analyses or completion of transient event searches and corresponding catalogs following the definitive calibration of the instruments. Fig. 1 and Fig. 2 depict the two IaaS deployment scenarios we have adopted for realistic online and offline use-cases in gravitational-wave experiments. Figure 1: Example IaaS deployment scenario for the local online case. Locally (i.e., at the gravitational-wave detector sites) deployed client and server instances co-located with in-memory data sources to minimize latency. The server is deployed in a container using Singularity (Kurtzer et al., 2021) and reads from a cloud-based model repository to stay in sync with updates. Figure 2: Example IaaS deployment scenario for the cloud-based offline case. 
Offline co-located cloud-based deployment where multiple server instances are managed by Kubernetes and data is split among multiple client virtual machines. Fig. 1 depicts the deployment for an online pipeline which leverages DeepClean to remove noise from the strain channel at each detector, with the cleaned data made available to downstream transient event detection algorithms in real time. In order to meet the low-latency requirements of multi-messenger astronomy, this pipeline is deployed fully on the LIGO Data Grid (LDG, https://computing.docs.ligo.org/lscdatagridweb/) so that the data sources, client, and inference service can minimize the latency incurred by networked connections. Fig. 2 presents a generic offline scenario we use for two archival data processing tasks. These use-cases, unlike the online use-case, prioritize the total processing time for a given dataset instead of the individual request latency. In addition, they can be massively parallelized by breaking the dataset into smaller, time-contiguous subsets which are assigned to individual client instances. In both these scenarios, building an optimal IaaS pipeline involves tuning the same parameters, but their different objectives and constraints lead to drastically different decisions about which set of values is optimal. Moreover, the streaming nature of gravitational-wave data presents unique challenges to the IaaS model in both deployment scenarios. In order to reduce the overall bandwidth going into the inference service, previous time series samples are cached on the GPU so that only the new samples need to be sent to the server. More details can be found in the methods section. As a first test of the offline use-case, we deploy DeepClean to remove noise from roughly a month’s worth of strain data from the O3 observing run of the LIGO-Virgo instruments (Abbott et al., 2020), using an inference sampling rate ($r$) of 0.25 Hz. Figure 3 shows the distribution of processing time per second of data achieved both by a traditional workflow on the LDG and by workflows leveraging an inference service on various amounts of cloud resources. The LDG workflow, a traditional pipeline consisting of a single GPU, has the longest processing time, shown in grey. For the IaaS workflow using only CPUs we observe a reduction in the processing time by a factor of 10 when compared to a traditional workflow, in spite of the additional sources of latency in the IaaS workflows. This reduction comes largely from the inference service’s ability to execute multiple tasks concurrently, allowing for efficient parallelization of the work across multiple inference instances. Further reductions in time are achieved by adding GPUs to the service. An inference service equipped with 4 GPUs is able to decrease the processing time by another factor of 5, and the reduction continues proportionally as more service nodes are added. These modifications to the number of GPUs can be handled seamlessly in the IaaS paradigm. Figure 3: Distributions of the time required to process one second of data for DeepClean, $r=0.25~{}\text{Hz}$. For these studies all GPU-equipped inference service nodes used NVIDIA V100 32GB GPUs and were equipped with 32 vCPUs and hosted 6 concurrent execution instances per GPU. The CPU-only inference service hosted 6 concurrent execution instances for the whole node. The second offline workflow, an end-to-end pipeline implemented in multiple different frameworks and comprising both DeepClean and BBHnet, is shown in Fig. 4.
Figure 4: Flow of data through ensemble of individual modules on inference service in the end-to-end pipeline. Snapshotter model maintains and updates state for all three data sources at once, then sends the updated snapshots to their respective downstream models. The framework backend used for each model is indicated in the legend. Using Triton’s ensemble scheduler, arbitrary server-side pipelines like this can be constructed. The ability to construct complex ensembles of otherwise disparate models implemented using many software backends illustrates the robustness of the IaaS model. The time and cost required to process one second of data using this pipeline with various server and client configurations are shown in Figs. 5 \- 8. The cost is computed by aggregating the cost-per-unit-time of all client and server resources and normalizing by the cost of 1 CPU hour. See Methods for more information about the details of these measurements. As can be seen from Figs. 5 and 6, for both values of the inference sampling rate ($r$) and the number of clients per server node, the processing time decreases nearly linearly as the number of server nodes is increased. However, the total cost per second of data remains nearly constant with increasing numbers of nodes leveraged by the inference service, as shown in Figs. 7 and 8. These trends show that once external constraints such as cost or sampling rate are imposed, the IaaS workflow is then able to make efficient use of the available resources, regardless of the exact values of the constraints. Figure 5: Distributions of time required to process one second of data for the end-to-end ensemble, $r=10~{}\text{Hz}$. The colored bars represent the median value, with $\pm 25$ percentiles depicted by the black bars. All nodes were equipped with 4 NVIDIA T4 GPUs and 32 vCPUs. The amount of concurrent execution per GPU was 6 for each DeepClean instance and 1 for all other models except the snapshotter. Figure 6: Distributions of time required to process one second of data for the end-to-end ensemble, $r=20~{}\text{Hz}$. The colored bars represent the median value, with $\pm 25$ percentiles depicted by the black bars. All nodes were equipped with 4 NVIDIA T4 GPUs and 32 vCPUs. The amount of concurrent execution per GPU was 6 for each DeepClean instance and 1 for all other models except the snapshotter. Figure 7: Distributions of cost required to process one second of data for the end-to-end ensemble, $r=10~{}\text{Hz}$. The colored bars represent the median value, with $\pm 25$ percentiles depicted by the black bars. All nodes were equipped with 4 NVIDIA T4 GPUs and 32 vCPUs. The amount of concurrent execution per GPU was 6 for each DeepClean instance and 1 for all other models except the snapshotter. Figure 8: Distributions of cost required to process one second of data for the end-to-end ensemble, $r=20~{}\text{Hz}$. The colored bars represent the median value, with $\pm 25$ percentiles depicted by the black bars. All nodes were equipped with 4 NVIDIA T4 GPUs and 32 vCPUs. The amount of concurrent execution per GPU was 6 for each DeepClean instance and 1 for all other models except the snapshotter. We see in Figs. 5 and 6 that increasing the clients per server node from 2 up to 4 is able to reduce the total processing time, regardless of the actual number of server nodes. This reduction is possible because we do not fully utilize the server’s resources, and therefore more clients can be served with the unused resources. 
This underscores the ability of the IaaS paradigm to take full advantage of scarce server resources. For larger numbers of clients we still observe the same processing time, but an increased cost as a result of the saturated server throughput. In this particular example, the models limiting the throughput are the two DeepClean instances, which are the larger of the models in the pipeline. Some optimizations have already been applied to these models for this pipeline, but further improvement to their inference throughput is being investigated. The IaaS paradigm is equally capable of performing in an online setting. Unlike in the offline settings described above, the online setting prioritizes low latency. Figure 9 depicts the latency achieved with various server configurations as a function of the inference sampling rate $r$ for the DeepClean pipeline. We disaggregate the latencies into time spent computing model inference (light blue) and time spent queueing for available resources (light brown). At low values of $r$, the latency is dominated by compute time and remains nearly constant regardless of server configuration, since a single GPU is capable of handling the request load. As $r$ increases past $\sim$1000 Hz, the request load overwhelms the maximal GPU performance, and so requests must queue and wait for resources to become available. At the highest values of $r$, this resource availability is the primary determinant of total latency, which becomes a near-linear function of the amount of server resources. More detailed inspection of latency sources indicates that the bottleneck in this pipeline is the streaming state update described in Methods, which limits the capacity for the downstream DeepClean model to benefit from additional GPUs. Future optimizations to this update step via HHPC techniques will allow this pipeline to better utilize the available resources to both increase the values of $r$ which can be processed stably and decrease the latency incurred at sustainable values of $r$. Figure 9: End-to-end server-side latency for multiple GPU counts as a function of the inference sampling rate ($r$). Latency is broken down into contributions from both compute time and queuing time, with vertical bars representing median values and error bars representing $\pm 25$ percentiles. The data sampling rate $f_{s}$ was fixed at 4000 Hz for all measurements, and all GPUs used were NVIDIA V100 16GB GPUs. Finally, we perform tests emulating the offline end-to-end pipeline in the context of jobs running over a prolonged period of time. The jobs are submitted to compute instances on the Google Cloud Platform via the HEPCloud framework (Holzman et al., 2017). HEPCloud enables scientific experiments to access a heterogeneous variety of computing resources (on-premises batch, commercial cloud, supercomputing facilities), while presenting a single simple frontend interface (HTCondor) to the end user. HEPCloud is a good test-bed for prototyping as-a-service workloads. First, it allows a large number of CPU, GPU, and other resources to be provisioned at a single site, facilitating scalable intra-site tests. Second, these resources form a self-contained system free from other jobs, which increases reproducibility for testing new frameworks. Last, the HTCondor frontend is widely used in pre-existing job submission frameworks. The clients are configured to run on Google Cloud CPU nodes accessed via HEPCloud. The servers are configured to have between 4 and 80 NVIDIA T4 GPUs at the same site. 
The result showing sustained throughput of frames (measured in seconds of data processed per second) as a function of time is shown in Fig. 10. With a server consisting of 4 GPUs, stable processing of 1000 inferences per second (2 seconds of data per second) is observed. This work is distributed across 100 HEPCloud clients. The observed throughput scales linearly with the number of servers (GPUs), reaching 20000 inferences per second (40 seconds of data per second) for 80 GPUs communicating with 2000 clients. This is a demonstration of our frameworks’ ability to deliver inferences for a sustained period of time, and with large numbers of resources, using existing gravitational-wave experimental paradigms. Figure 10: The number of seconds of gravitational-wave data processed per second as a function of time for a sustained test using HEPCloud. The jobs are synchronized to start at the time indicated by the dotted vertical line. While we have shown that the IaaS paradigm is capable of meeting the computational needs of streaming low-latency data denoising and astrophysical searches, there are additional considerations that are required for a fully real-time pipeline for multi-messenger followup. Specifically, the performance of the deep neural network inference pipeline must operate with the same fidelity in the offline scenario. In this instance, we have observed that there is some degradation in the subtraction quality at the edges of the cleaned segments when using DeepClean. While we leave the mitigation of this effect to future work, in the present setup we simply exclude the quality- degraded edges from the cleaned data segments at a latency cost equal to the excluded data length. We refer to this latency as the aggregation latency. Fig. 11 demonstrates this degradation and subsequent recovery by comparing the performance with the fully offline DeepClean pipeline. We see that an aggregation latency of 0.75 s is able to closely reproduce the amplitude spectral density of offline DeepClean. While this limits the minimum possible latency for the online pipeline we have used, it could potentially be reduced or even removed entirely by an algorithm designed (trained) specifically for low-latency cleaning. Figure 11: Performance of the online-deployed DeepClean noise regression as a function of the aggregation latency \- length of the time series data excluded from each cleaned segment, to avoid noisy edges. The quantity on the y-axis is the ratio of the Amplitude Spectral Density (ASD) of the cleaned data to that of the uncleaned data. We compare the offline case (purple) to an analysis with zero latency (green), 0.5 s (orange), and 0.75 s latency (cyan). At zero latency, the fraction of frequency bins within the [50,70] Hz that differ from the offline ASD ratio by more than 10% is $\sim$23%; this quantity reduces to $\sim$4% and $\sim$1% with 0.5 s latency and 0.75 s latency respectively. Another factor which must be considered when discussing the overall latency is how often the trained network must be updated on the server. Typically a cleaning algorithm must be retrained on recent data to maintain performance under gradually changing conditions. For continuous functioning, the trained model must remain valid for longer than the time it takes to retrain a new model. 
For DeepClean, training the network on a new dataset typically takes $<$20 minutes on a single GPU, while analysis of O3 gravitational-wave data from the LIGO-Hanford detector shows that a trained model is effective for at least 4096 s of subsequent data, a common storage length for gravitational- wave data. Therefore, in the case of DeepClean there is sufficient time for an update cycle. We have demonstrated a fully realistic computational workflow to process gravitational-wave data with ML algorithms using a heterogeneous computing stack. Our workflow comprises both an ML-based denoising algorithm and an ML- based binary black hole detection algorithm. By caching the time-series, we can increase the input throughput by several orders of magnitude. Our workflow can seamlessly incorporate future updates to either algorithm and can be extended to additional algorithms for detection or any other analysis. We have run this workflow with real gravitational-wave data and demonstrated operation in two different scenarios: online and offline data processing. We find that in the online scenario our current setup can perform real-time noise subtraction and binary black hole detection with a latency of one second, and is currently limited by the performance of the noise cleaning algorithm itself, and not the inference latency. In spite of this limitation, a server with a GPU located locally at each gravitational-wave site would be able to output a cleaned stream of data within one second of acquisition. With modified deep neural network models, it is likely that the latency can be reduced to milliseconds. The success of this scheme is not dependent on the specifics of the algorithms implemented, which is crucial for its long-term viability as computational needs and resources develop. For offline gravitational-wave data processing, we have set up a full reprocessing stream. By relying on the Inference as-a-Service model, we are able to optimally configure the GPUs and CPUs to process the data-stream, leading to large increases in the overall throughput of the system. Depending on the desired hardware setup, we demonstrate orders of magnitude reductions in the time required to process gravitational-wave data. Our system is dynamically scalable, and remains optimized whether we use a small number of GPUs or a larger number of resources. We have also demonstrated scalability by testing a sustained offline workflow with HEPCloud on Google Cloud. As a consequence, we have demonstrated that we can scale out gravitational-wave reprocessing to utilize a large amount of computing nodes available within a cloud or high performance computing center. With sufficient computing resources, our computing scheme can reprocess gravitational-wave datasets containing many years of data within a few hours. To conclude, we have demonstrated a fully realistic integration of ML-based hetereogeneous computing within existing gravitational-wave computing stacks that are ready to be deployed. Our implementation shows the ability to meet latency and scale requirements for online and offline uses in gravitational- wave ground-based interferometers like LIGO, Virgo and KAGRA. 
It also provides a computational platform for incorporating ML-based techniques into the real-time control of the numerous servo loops that are part of the laser interferometry in present and future ground-based interferometers (Reitze et al., 2019); the corresponding requirements of sub-ms latencies and the ability to handle thousands of channels within the 1-10 MBps bandwidth range (Abbott et al., 2016a) have already been achieved with the off-the-shelf computing hardware our implementation uses. This work also has broad implications across many fields where the processing of real-time data-streams is critical, including areas of electromagnetic and neutrino astronomy. Most importantly, this work can significantly improve our ability to perform multi-messenger astronomical observations. As gravitational-wave detectors become increasingly sensitive over the course of second-generation improvements in this decade (Abbott et al., 2020), and with third-generation improvements in the next (Reitze et al., 2019), this new heterogeneous computational stack has the ability to meet the computational demands needed to accelerate discovery. ## 1 Acknowledgements The authors are grateful for computational resources provided by the LIGO Laboratory at Caltech, Livingston, LA and Hanford, WA. The LIGO Laboratory has been supported under National Science Foundation Grants PHY-0757058 and PHY-0823459. AG, DR, TN, PH and EK are supported by NSF grants #1934700, #1931469, and DR additionally with the IRIS-HEP grant #1836650. JK is supported by NSF grant #190444. SM and MC are supported by NSF grant PHY-2010970. Work supported by the Fermi National Accelerator Laboratory, managed and operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy. The U.S. Government retains and the publisher, by accepting the article for publication, acknowledges that the U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for U.S. Government purposes. Cloud credits for this study were provided by the Internet2-managed Exploring Cloud to accelerate Science program (NSF grant PHY-190444). Additionally, we would like to thank the NSF Institute for AI and Fundamental Interactions (Cooperative Agreement PHY-2019786). The authors are also grateful for the support provided by Stuart Anderson in the realization and testing of our workflow within the LDG. Finally, the authors would like to thank Alexander Pace for providing useful comments on the manuscript. The authors declare that they have no competing financial interests. AG and DR are the primary authors of the manuscript. JK integrated applications in HEPCloud. ST and BH support and operate HEPCloud. SM, MC, EK, and TN support development of DeepClean. All authors contributed to edits to the manuscript. Correspondence and requests for materials should be addressed to Alec Gunny (email: [email protected]). ## Appendix A Gravitational Wave Computing Architecture Our work builds on previous efforts in the realms of as-a-service computing and ML. Significant motivation is taken from the integration of these concepts for usage in high energy physics (HEP), where recent algorithmic advances and the availability of large datasets have greatly facilitated the adoption of ML.
Previous work with experiments at the CERN Large Hadron Collider and the ProtoDUNE-SP experiment at the Fermi National Accelerator Laboratory have shown that the as-a-service computing model has the potential to offer impressive speed-ups, improved performance, and reduced complexity relative to traditional computing models (Krupa, 2021; Rankin et al., 2020; Wang et al., 2020). These works have also demonstrated the ability to perform inference as- a-service with both GPUs as well as FPGAs. While promising, this paradigm offers several challenges from an engineering perspective. To ensure consistent deployment, as is the case with rule based algorithms, applications in this model (trained neural networks) need to be versioned, tested, and validated. They may be asked to share pools of compute resources, including comparatively expensive co-processors like GPUs or FPGAs, with other applications, or may require deployment on dedicated hardware to meet strict constraints on inference latency. Moreover, these applications will need to be shared among users who may not be knowledgeable or interested in the details of their implementation (save for some guarantees about their functionality). The pipelines built on top of them will need to be kept up-to- date as newer versions are fine-tuned on new data or replaced wholesale by novel architectures, and frequent updates may impose a non-trivial burden on users to avoid degraded or interrupted service for sensitive applications. The computing architecture traditionally employed in managing gravitational- wave searches is based on the HTCondor batch system (Hernandez et al., 2020). The individual tasks that make up an astrophysical search pipeline are performed by modules which have been programmed and optimized by data analysis experts specific for each search. The resulting workflow is often made up of (often tens of) thousands of jobs that require little or no communication amongst them, thus allowing their straightforward parallelization. HTCondor provides the mapping of such jobs to resources and shepherds them to completion. This computing model requires data to be discoverable either through bulk, local storage available on each compute site or via migration of data to the site where jobs are assigned, e.g., when using Open Science Grid resources (Pordes et al., 2008) (https://opensciencegrid.org/). ## Appendix B Gravitational-wave data processing ### B.1 Streaming Iaas One feature of gravitational-wave data that makes the application of the IaaS model more challenging is that the data of interest comprises streaming time series that need to be processed sequentially, which limits the extent to which an inference service can leverage scale to process data in parallel. This is particularly true for convolutional models like DeepClean and BBHNet, which process fixed-length “snapshots” of this time series, sampled at some frequency $r\leq f_{s}$, where $f_{s}$ is the sampling rate of the time series. As shown in Fig. 12, higher values of $r$ compared to the inverse of the length of the frame will produce highly redundant data between subsequent frames. If requests made to “static” models like DeepClean or BBHnet provide a full snapshot’s worth of data as input, their network load will increase substantially, bottlenecking the pipeline. Figure 12: Overview of snapshotter behavior, first step. High frequency inference relative to the length of the frame results in redundant input data. 
Colored squares represent time series snapshots at different points in time. Figure 13: Overview of snapshotter behavior, second step. Streaming new samples to the snapshotter updates its state, which is then passed to downstream networks for inference. Figure 14: Overview of snapshotter behavior, third step. In fully-online inference for models that produce time series output, only the last $\big{\lfloor}\frac{f_{s}}{r}\big{\rfloor}$ samples are streamed back at each timestep. Outputs are not identical between frames, and there is potential for improved performance by aggregating overlapping predictions. To get around both of these limitations, we host an extra model on the server which maintains a state representing the previous input snapshot. Inference requests are made to this model providing a $\big{\lfloor}\frac{f_{s}}{r}\big{\rfloor}$-sized stream of strictly new samples that are used to update the state to a new snapshot, which is then passed to downstream static models. By providing only the new samples, the required networking bandwidth is dramatically reduced, by a factor given by the ratio of the total model input size to the number of new samples streamed per request; for the models used in this paper, this reduction is more than 1000-fold. A diagram of this data flow is shown in Fig. 13. While updates to the snapshot state have to be performed sequentially, downstream inference on adjacent frames can be performed in parallel within a server node by leveraging Triton’s ensemble scheduling API. In the online setting, this means that as long as the state update can be performed in less than $\frac{1}{r}$ seconds, we avoid introducing a bottleneck into the pipeline. In the offline setting, we can host multiple snapshot instances on a single node and parallelize inference across them in order to saturate the downstream throughput capacity. This ability to cache states in order to better utilize complex ensembles of downstream models represents a significant optimization that is critical to enabling the effective use of deep learning inference in gravitational-wave physics and astronomy. In all experiments described below, the snapshotter model was implemented using TensorFlow (Abadi et al., 2015). ### B.2 Data aggregation There remains one additional challenge in the online streaming setting for architectures like DeepClean whose output is itself a time series with a data sampling rate greater than $r$. For these models, there will also be overlap between the output segments produced by each inference. Unlike the input redundancy, however, the overlapping segments will not be identical: each prediction will be conditioned on data from different windows in time and therefore will, in general, be distinct. While one may improve prediction quality by aggregating the overlapping segments from subsequent windows, this would incur additional latency in the online setting, as completed predictions would have to wait for inference on subsequent frames to complete in order to aggregate them. For this reason, we have chosen to adopt the “fully-online” or “causal” prediction scheme shown in Fig. 14, in which predicted segments keep only their most recent $\big{\lfloor}\frac{f_{s}}{r}\big{\rfloor}$ samples.
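To illustrate the state-caching and causal-aggregation logic described in this section, the following is a minimal numpy sketch of a snapshotter-like update together with the “fully-online” output scheme. It is a simplified stand-in written for clarity (the actual snapshotter in this work was implemented in TensorFlow), and the inference sampling rate used below is an arbitrary example value.

```python
import numpy as np

class Snapshotter:
    """Maintain a rolling snapshot of the most recent input samples."""

    def __init__(self, num_channels: int, snapshot_size: int):
        self.snapshot = np.zeros((num_channels, snapshot_size), dtype=np.float32)

    def update(self, new_samples: np.ndarray) -> np.ndarray:
        # new_samples has shape (num_channels, stride), stride = floor(f_s / r):
        # only strictly new samples are streamed, so the snapshot is shifted left
        # and the new samples are appended on the right.
        stride = new_samples.shape[-1]
        self.snapshot = np.concatenate(
            [self.snapshot[:, stride:], new_samples], axis=-1
        )
        return self.snapshot


def causal_update(cleaned_stream: np.ndarray, prediction: np.ndarray, stride: int):
    """Keep only the most recent `stride` samples of each overlapping prediction."""
    return np.concatenate([cleaned_stream, prediction[..., -stride:]], axis=-1)


# Example: f_s = 4096 Hz, r = 16 Hz, 21 witness channels, 1 s snapshots.
f_s, r, channels = 4096, 16, 21
stride = f_s // r
snapshotter = Snapshotter(channels, snapshot_size=f_s)
cleaned = np.zeros((0,), dtype=np.float32)

for _ in range(8):                        # eight incoming streamed updates
    chunk = np.random.randn(channels, stride).astype(np.float32)
    snapshot = snapshotter.update(chunk)  # full input passed to the static model
    prediction = snapshot.mean(axis=0)    # stand-in for a model's time series output
    cleaned = causal_update(cleaned, prediction, stride)

print(snapshot.shape, cleaned.shape)      # (21, 4096) (2048,)
```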
### B.3 Aggregation latency As discussed in the Main Text, DeepClean subtraction quality degrades at the edges of cleaned segments for a span of data with length $\lesssim 1\,s$. For the offline analysis, we typically perform cleaning on 4096 s of data, i.e only $\sim$0.02% of the data is impacted. However, for the online analysis where data is cleaned in real-time, we use shorter data segments ($\sim$ 8 s), increasing the importance of problematic segment edges. We currently address this as follows, although future work will likely improve upon this technique. In order to clean the most recent $1\,s$ data, i.e, a window of $[-1\,s$, $0\,s$], if we take the noisy edge to be $\delta t\,s$ and analyze an $8\,s$ window; $[-8+\delta t\,s$, $0+\delta t\,s]$, we can safely recover the cleaned sub-window $[-1,0]\,s$ after removing the $\delta t\,s$ of noise-affected subtraction. This increases our latency by $\delta t\,s$; we refer to this as our aggregation latency, as it is due to how our cleaned data is aggregated. Fig. 11 shows the performance of the subtraction as a function of the aggregation latency. Here, we have taken, as an example, the subtraction of the 60 Hz power-line and its harmonics in O3 data from the LIGO-Hanford detector. The plot shows the ratio of the Amplitude Spectral Density (ASD) of the cleaned data to the original data as a function of frequency. In the ideal scenario, only 60 Hz noise is removed, leading to a dip in the ASD ratio near 60 Hz while it is $\sim 1$ elsewhere. This is true for the offline analysis case (purple) with a dip of two orders of magnitude. With zero latency (where the noisy edge is fully included), no subtraction was achieved and some additional noise from aggregation was added to the data, appearing as upward excursions in the ASD ratio. With an aggregation-latency of 0.5 s, similar features are observed, except for some subtraction at 60 Hz. With 0.75 s latency, the offline subtraction is reproduced, meaning that cleaned data can be delivered only 0.75 s after the original data. Our tuning has shown that aggregation latency can be controlled by tuning the duration of data per inference and the overlap between successive inference segments. In addition, the network loss function likely affects this. Unlike the current models that are optimal for cleaning bulk offline data, the objective function may need to be changed to reflect the online requirements, to improve performance. ### B.4 Online Deployment As noted above, the online deployment scenario more generally is characterized by a sensitivity to the latency incurred by any individual request. It is not sufficient that all requests be completed by some deadline; it is also required that, with high probability, each request (or subset of requests) be completed within some fixed amount of time after it was generated. In this setting, an operating point can be identified by the amount of compute resources being used by the service and the level of parallelism being leveraged on those resources via concurrent execution of inference. Each operating point is associated with some cost as well as distributions of the latency and throughput those resources can achieve. The online gravitational-wave data use-case studied in this paper subtracts noise from strain data in real-time using DeepClean, increasing the sensitivity of downstream event detection algorithms. It is constrained by the exact mechanism and overall organization by which gravitational-wave detectors produce input data. 
The nominal data acquisition model in gravitational-wave experiments relies on front-end computers that accumulate 1/16th of a second of data that are sent in chunks to a data aggregator on a fast network (Bork et al., 2021). The lowest latency data this aggregator makes available for analysis correspond to 1-second long gravitational-wave “frames” – the standard format for storing gravitational-wave data (Anderson, 2019) – that are compiled in memory and are made available for prompt analyses. In the future, we hope to be able to move our pipeline closer to the front-end data source in order to avoid the latency incurred by waiting for full frames to aggregate. In this setting, throughput is not a dependent variable to be measured, but rather an independent variable that is fixed by $r$. Sustaining a given throughput level means either scaling up resource usage in order to achieve lower latency levels, or increasing the level of parallelism on a smaller resource pool at the expense of higher latency. Fig. 9 measures this trade-off at incremental values of $r$, using 21 witness channels and $f_{s}=4000~{}\text{Hz}$ for all cases. As mentioned in the main text, the experiments at $r=\frac{f_{s}}{3}$ and $r=\frac{f_{s}}{2}$ were already bottlenecked by the state update at the snapshotter, so measurements were not attempted for $r=f_{s}$. The amount of parallel execution was searched over at each value of $r$, and the values reported in Fig. 9 represent measurements at the level of parallelism that incurred the least median latency. These measurements were made using NVIDIA V100 16GB GPUs, which were connected directly via an NVLink connection for the multi-GPU cases. The software backend used to implement DeepClean was TensorRT using FP16 precision. The pipeline for these measurements was deployed entirely on LDG resources in order to minimize data transfer latencies. The client pipeline loads and preprocesses frames written to LDG shared memory by a data replay service. It then asynchronously packages $\big{\lfloor}\frac{f_{s}}{r}\big{\rfloor}$-sized chunks of these frames into requests to stream to the inference service. Responses are compiled asynchronously until a full frame’s worth of noise estimates is constructed. This frame is postprocessed then subtracted from the associated strain data, which is then written to disk. The latency samples depicted in Fig. 9 represent measurements of the time taken to execute one full inference on the server side. Each sample is the average value over a $\sim$50 ms interval, as reported by Triton’s metrics service. ### B.5 Offline Deployment Offline inference scenarios, on the other hand, are marked by their insensitivity to latency. Because any post-hoc analysis tends to be done on the inference results in bulk, it matters less how long any particular request takes to complete. As such, the relevant metrics pertain to how quickly data can be processed in aggregate, i.e. throughput, and how much that speed costs. Moreover, because all of the data in the offline scenario already exists up front, $r$ has no relationship to the rate at which data can be generated, and instead dictates the number of inferences that need to occur for any given length of data. For our offline pipelines, we split the datasets into segments and distributed these segments evenly among many client instances, whose requests are then distributed evenly among the available inference service instances. 
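As an illustration of the asynchronous streaming loop just described, the sketch below streams $\lfloor f_{s}/r\rfloor$-sized chunks of a frame to the inference service with the `tritonclient` gRPC API and collects responses through a callback until a full frame's worth of noise estimates has accumulated. The model and tensor names, the frame-reading helper, and the aggregation step are hypothetical simplifications, not the actual client implementation.

```python
import queue
import numpy as np
import tritonclient.grpc as grpcclient

F_S, R, CHANNELS = 4096, 16, 21          # hypothetical sampling parameters
STRIDE = F_S // R                        # new samples per request

client = grpcclient.InferenceServerClient(url="localhost:8001")  # placeholder URL
responses = queue.Queue()

def callback(result, error):
    # Called from the client library's worker thread as each response arrives.
    if error is not None:
        raise error
    responses.put(result.as_numpy("noise_prediction"))

def read_frame():
    # Placeholder for loading and preprocessing one 1 s frame of witness data
    # from shared memory; returns an array of shape (CHANNELS, F_S).
    return np.random.randn(CHANNELS, F_S).astype(np.float32)

frame = read_frame()
for i in range(R):
    chunk = frame[:, i * STRIDE:(i + 1) * STRIDE][None]   # add batch dimension
    infer_input = grpcclient.InferInput("stream", list(chunk.shape), "FP32")
    infer_input.set_data_from_numpy(chunk)
    client.async_infer(
        model_name="deepclean_stream",                     # placeholder model name
        inputs=[infer_input],
        callback=callback,
        outputs=[grpcclient.InferRequestedOutput("noise_prediction")],
    )

# Aggregate one full frame's worth of noise estimates before postprocessing
# and subtraction from the strain channel.
noise = np.concatenate([responses.get() for _ in range(R)], axis=-1)
print(noise.shape)
```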
The number of snapshotter instances per GPU on each service instance is scaled to meet the number of clients assigned to that instance. Fig. 2 roughly depicts the network diagram for both offline use-cases studied here. Client VMs read frame files from a regionally co-located cloud storage bucket and resample them to $f_{s}=4096~{}\text{Hz}$ before applying preprocessing and streaming requests to the inference service. Service instances are deployed in the same regional datacenter using Google Kubernetes Engine. Each DeepClean instance described below uses 21 witness channels. The measurements depicted in Fig. 3 began as throughput estimates over $\sim$50 ms intervals by counting the number of inferences that took place during each interval as reported by Triton’s metrics service. The $r$ value for each experiment was divided by these throughput samples to produce samples in units of seconds per second of data, from which the distributions in Fig. 3 were formed. The first offline use-case described above used DeepClean to subtract noise from roughly one month of gravitational-wave strain data collected during the O3 run, now contained in $\sim$10,000 frame files of length 256 seconds each. The distributions of processing time per second of data for this pipeline are depicted in Fig. 3, with $r=0.25~{}\text{Hz}$ for all measurements. For the LDG pipeline, samples were taken directly by measuring the time delta over multiple processing runs. The GPU used in this pipeline was an NVIDIA V100 16GB GPU. For all the IaaS experiments, each service is associated with 6 client instances, and inference is executed for DeepClean using the ONNX runtime (Bai et al., 2019). The CPU-only IaaS pipeline utilized 64 vCPUs and 6 concurrent DeepClean execution instances, while each GPU-enabled node utilized 32 vCPUs and 6 concurrent execution instances per GPU. We leveraged NVIDIA V100 32GB GPUs on all GPU-enabled inference service nodes in order to maintain consistency with the resources available on LDG, but similar experiments leveraging NVIDIA T4 GPUs showed comparable throughput at a much lower price point and would likely be used in production. For all the IaaS pipelines, each client aggregates responses and produces clean data in much the same way the online DeepClean pipeline does, with cleaned strain data written to another cloud storage bucket. The second offline use-case performed inference using an end-to-end ensemble depicted in Fig. 4, which also specifies the software framework used to execute each model. This pipeline uses two DeepClean instances to remove noise from strain data from both detectors, which are combined and postprocessed then passed to BBHnet to produce event likelihood estimates. It was run on $\sim$27 hours of data collected during the O2 run. Each inference service node in the pipeline leveraged 4 NVIDIA T4 GPUs and 32 vCPUs, with 6 concurrent execution instances available per GPU for each DeepClean model and 1 for all others except the snapshotter model. The time and cost per second of data distributions for this pipeline are depicted in figs. 5-8. The variation in Fig. 8 at 6 nodes with 3 clients per node is due to implementation issues that caused clients to fail to synchronize appropriately, and does not reflect volatility in the inference service’s throughput capacity. ### B.6 HEPCloud We use the HEPCloud (Holzman et al., 2017) package to submit jobs which run client pipelines on virtual nodes on Google Cloud. 
The jobs are submitted using HTCondor from the Fermilab computing cluster. The Google Cloud nodes running the clients are configured as to have 16 virtual CPUs and 30 GB of memory. The Triton server containing the ML model workflow is deployed on Google Cloud: each server node is configured to have with 64 vCPUs, 240 GB memory, and four NVIDIA T4 GPUs. We test servers configured to have 4, 40, and 80 GPUs. A non-streaming version of the end-to-end ensemble is employed. The model pipeline therefore consists of the DeepClean and BBHnet models, which are hosted on a bucket on Google Cloud Storage. For optimal throughput, each GPU is configured to host six instances of the DeepClean models and one instance of the BBHnet model. The IP address belonging to each server is passed to a defined set of clients. For the purposes of this experiment, a client:server ratio of 100:1 is found to offer sustained throughput. Each client is allocated more than an hour of data to process. To ensure consistency and reproducibility in testing the workflow, the clients are synchronized via readiness signals between jobs on HTCondor. For the server configurations with 4 and 40 GPUs, a stable throughput is observed, while for the 80 GPU configuration, a 1% drop in throughput is observed. This is the result of a small number of clients failing from gRPC errors that will be investigated in further studies. The results of the sustained test showing throughput as a function of time are shown in Fig. 10. ## Appendix C Data Availability Upon request, the corresponding author will provide data required to reproduce the figures. ## Appendix D Code Availability The code from this work can be found at http://github.com/fastmachinelearning/gw-iaas. ## References * Aartsen, M. G. et al. (2018) Aartsen, M. G. et al. 2018, Science, 361, doi: 10.1126/science.aat1378 * Aasi et al. (2015) Aasi, J., Abbott, B. P., Abbott, R., et al. 2015, Classical and Quantum Gravity, 32, 074001, doi: 10.1088/0264-9381/32/7/074001 * Aasi (2015) Aasi, J. e. a. 2015, Classical and Quantum Gravity, 32, 074001 * Abadi et al. (2015) Abadi, M., Agarwal, A., Barham, P., et al. 2015, TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. https://www.tensorflow.org/ * Abbott et al. (2016a) Abbott, B., Abbott, R., Abbott, T., et al. 2016a, Physical Review Letters, 116, doi: 10.1103/physrevlett.116.131103 * Abbott (2019) Abbott, B. e. a. 2019, Physical Review Letters, 123, doi: 10.1103/physrevlett.123.161102 * Abbott et al. (2016b) Abbott, B. P., Abbott, R., Abbott, T. D., et al. 2016b, Classical and Quantum Gravity, 33, 134001, doi: 10.1088/0264-9381/33/13/134001 * Abbott et al. (2020) Abbott, B. P., et al. 2020, Living Reviews in Relativity, 23, doi: 10.1007/s41114-020-00026-9 * Abbott (2016) Abbott, B. P. e. a. 2016, Phys. Rev. Lett., 116, 061102, doi: 10.1103/PhysRevLett.116.061102 * Abbott, B. et al. (2017) Abbott, B. et al. 2017, Phys. Rev. Lett., 119, 161101, doi: 10.1103/PhysRevLett.119.161101 * Anderson (2019) Anderson, S. e. a. 2019, LIGO DCC, T970130 * Aso et al. (2013) Aso, Y., Michimura, Y., Somiya, K., et al. 2013, Phys. Rev. D, 88, 043007, doi: 10.1103/PhysRevD.88.043007 * Bai et al. (2019) Bai, J., Lu, F., & Zhang, K. e. a. 2019, ONNX: Open Neural Network Exchange, https://github.com/onnx/onnx, GitHub * Bork et al. (2021) Bork, R., Hanks, J., Barker, D., et al. 2021, SoftwareX, 13, 100619, doi: https://doi.org/10.1016/j.softx.2020.100619 * Cannon et al. (2020) Cannon, K., Caudill, S., Chan, C., et al. 
2020, arXiv e-prints, arXiv:2010.05082. https://arxiv.org/abs/2010.05082 * Canton et al. (2021) Canton, T. D., Nitz, A. H., Gadre, B., et al. 2021, Realtime search for compact binary mergers in Advanced LIGO and Virgo’s third observing run using PyCBC Live. https://arxiv.org/abs/2008.07494 * Davis et al. (2021) Davis et al. 2021, Classical and Quantum Gravity, 38, 135014, doi: 10.1088/1361-6382/abfd85 * Dietrich et al. (2020) Dietrich, T., Coughlin, M. W., Pang, P. T. H., et al. 2020, Science, 370, 1450, doi: 10.1126/science.abb4317 * Duarte et al. (2019) Duarte, J., Harris, P., Hauck, S., et al. 2019, Comput. Softw. Big Sci., 3, 13, doi: 10.1007/s41781-019-0027-2 * F. Acernese et al. (2015) F. Acernese et al. 2015, Classical and Quantum Gravity, 32, 024001 * George & Huerta (2018) George, D., & Huerta, E. A. 2018, Physics Letters B, 778, 64, doi: 10.1016/j.physletb.2017.12.053 * Harry (2010) Harry, G. M. e. a. 2010, Classical and Quantum Gravity, 27, 084006, doi: 10.1088/0264-9381/27/8/084006 * Hernandez et al. (2020) Hernandez, E. F., Würthwein, F., Bockelman, B., et al. 2020, CoRR, abs/2011.14995. https://arxiv.org/abs/2011.14995 * Holzman et al. (2017) Holzman, B., Bauerdick, L. A. T., Bockelman, B., et al. 2017, Computing and Software for Big Science, 1, doi: 10.1007/s41781-017-0001-9 * Krupa (2021) Krupa, J. e. a. 2021, Mach. Learn. Sci. Tech., 2, 035005, doi: 10.1088/2632-2153/abec21 * Kurtzer et al. (2021) Kurtzer, G. M., Cclerget, Bauer, M., et al. 2021, hpcng/singularity: Singularity 3.7.3, Zenodo, doi: 10.5281/ZENODO.1310023 * Metzger (2017) Metzger, B. D. 2017, Living Rev. Rel., 20, 3, doi: 10.1007/s41114-017-0006-z * Mittal & Vetter (2015) Mittal, S., & Vetter, J. S. 2015, ACM Comput. Surv., 47, doi: 10.1145/2788396 * Mukund et al. (2017) Mukund, N., Abraham, S., Kandhasamy, S., Mitra, S., & Philip, N. S. 2017, Phys. Rev. D, 95, 104059, doi: 10.1103/PhysRevD.95.104059 * Nguyen (2021) Nguyen, T. e. a. 2021, AI-enhanced methods for binary black hole detection (in preparation) * NVIDIA (2021) NVIDIA. 2021, Triton Inference Server, [software] version v2.12.0 (accessed 2021-07-17) https://developer.nvidia.com/nvidia-triton-inference-server * Ormiston et al. (2020) Ormiston, R., Nguyen, T., Coughlin, M., Adhikari, R. X., & Katsavounidis, E. 2020, Physical Review Research, 2, doi: 10.1103/physrevresearch.2.033066 * Pordes et al. (2008) Pordes, R., Altunay, M., Avery, P., et al. 2008, Journal of Physics: Conference Series, 125, 012070, doi: 10.1088/1742-6596/125/1/012070 * Rankin et al. (2020) Rankin, D., Krupa, J., Harris, P., et al. 2020, 2020 IEEE/ACM International Workshop on Heterogeneous High-performance Reconfigurable Computing (H2RC), 38 * Reitze et al. (2019) Reitze, D., Adhikari, R. X., Ballmer, S., et al. 2019, Cosmic Explorer: The U.S. Contribution to Gravitational-Wave Astronomy beyond LIGO. https://arxiv.org/abs/1907.04833 * Somiya (2012) Somiya, K. 2012, Classical and Quantum Gravity, 29, 124007 * Usman (2016) Usman, S. A. e. a. 2016, Class. Quant. Grav., 33, 215004, doi: 10.1088/0264-9381/33/21/215004 * Vajente et al. (2020) Vajente, G., Huang, Y., Isi, M., et al. 2020, Physical Review D, 101, doi: 10.1103/physrevd.101.042003 * Wang et al. (2020) Wang, M., Yang, T., Acosta Flechas, M., et al. 2020, doi: 10.3389/fdata.2020.604083 * Zevin et al. (2017) Zevin, M., Coughlin, S., Bahaadini, S., et al. 2017, Classical and Quantum Gravity, 34, 064003, doi: 10.1088/1361-6382/aa5cea
# Global synchronization on time-varying higher-order structures Md Sayeed Anwar Dibakar Ghosh Physics and Applied Mathematics Unit, Indian Statistical Institute, 203 B. T. Road, Kolkata 700108, India Timoteo Carletti Department of Mathematics and Namur Institute for Complex Systems, naXys, University of Namur, 2 rue Grafé, Namur B5000, Belgium ###### Abstract Synchronization has received a lot of attention from the scientific community for systems evolving on static networks or higher-order structures, such as hypergraphs and simplicial complexes. In many relevant real-world applications, however, such structures are not static but evolve in time; in this paper we therefore discuss the impact of the time-varying nature of higher-order structures on the emergence of global synchronization. To achieve this goal we extend the master stability formalism to account, in a general way, for the additional contributions arising from the time evolution of the higher-order structure supporting the dynamical systems. The theory is successfully tested on two illustrative examples, the Stuart-Landau nonlinear oscillator and the chaotic Lorenz oscillator. ## I Introduction In the realm of complex systems, synchronization refers to the intriguing ability of coupled nonlinear oscillators to self-organize and exhibit a collective unison behavior without the need for a central controller Arenas _et al._ (2008). This phenomenon, observed in a wide range of human-made and natural systems Boccaletti _et al._ (2018), continues to inspire scientists seeking to unravel its underlying mechanisms. To study synchronization, network science has proved to be a powerful and effective framework. Here, the interconnected nonlinear oscillators are represented as nodes, while their interactions are depicted as links Barabási (2016). However, the classical static network representation has limitations in modeling many empirical systems, such as social networks Wasserman _et al._ (1994) and brain networks Valencia _et al._ (2008); Bassett _et al._ (2011), where the connections among the individual basic units change through time. Therefore, the framework of networks has been generalized to include time-varying networks Holme and Saramäki (2012); Masuda and Lambiotte (2016), whose connections vary with time. The results presented in this framework support the claim that synchronization is enhanced by the dynamics of the supporting medium Ghosh _et al._ (2022); Carletti and Fanelli (2022); Anwar _et al._ (2022). Another intrinsic limitation of networks is that they can only model pairwise interactions. To go beyond this issue, scholars have brought to the fore the relevance of higher-order structures, which surpass the traditional network setting that models the interactions between individual basic units only through pairwise links Carletti _et al._ (2020a); Battiston _et al._ (2020, 2021); Majhi _et al._ (2022); Boccaletti _et al._ (2023). By considering the simultaneous interactions of many agents, higher-order structures, namely hypergraphs Berge (1973) and simplicial complexes Bianconi (2021), offer a more comprehensive understanding of complex systems.
These higher-order structures have been proven to produce novel features in various dynamical processes, including consensus Neuhäuser _et al._ (2020, 2021), random walks Carletti _et al._ (2020b); Schaub _et al._ (2020), pattern formation Carletti _et al._ (2020a); Muolo _et al._ (2023); Gao _et al._ (2023), synchronization Carletti _et al._ (2020a); Skardal and Arenas (2020, 2019); Carletti _et al._ (2023); Anwar and Ghosh (2022a, b), social contagion and epidemics Iacopini _et al._ (2019); Chowdhary _et al._ (2021). Nevertheless, the suggested framework is not sufficiently general for describing systems with many-body interactions that vary with time. As an example, group interactions in social systems have a time-varying nature, as the interactions among groups of individuals are not always active but rather change throughout time Cencetti _et al._ (2021). Some early works have begun to investigate the time-varying aspect of many-body interactions in various dynamical processes. For instance, time-varying group interactions have been demonstrated to influence the convergence period of consensus dynamics Neuhäuser _et al._ (2021) and to predict the onset of the endemic state in epidemic spreading Chowdhary _et al._ (2021). The present work is motivated by these recent research directions, and it aims to take one step further by considering the impact of time-varying higher-order structures on the synchronization of nonlinear oscillators. In this context, a preliminary effort has been reported in Anwar and Ghosh (2022c), which investigates synchronization in time-varying simplicial complexes, limited only to fast switching Stilwell _et al._ (2006); Petit _et al._ (2017) among distinct static simplicial configurations, implying that the time scale of the simplicial evolution is exceedingly fast compared to that of the underlying dynamical system. In contrast, in the present work, we allow the higher-order structures to evolve freely with time, thus removing any limitations on the imposed time evolution of the higher-order structure. We present the results in the framework of hypergraphs, but they hold true also for simplicial complexes. Under such broad circumstances, we develop a theory to determine the conditions ensuring the stability of a globally synchronized state that generalizes the Master Stability Equation Pecora and Carroll (1998) to a setting where the time evolution of the underlying higher-order structures is explicitly considered. The generalized framework we discuss here assumes that the coupling functions cancel out when the dynamics of individual oscillators are identical, a necessary condition for the extended system to have a synchronous solution, which has been frequently used in the literature across various domains. The developed theory, tested on higher-order structures of coupled Stuart-Landau oscillators and paradigmatic Lorenz systems, reveals that taking the temporality of group interactions into account can induce synchronization more easily than static group interactions.
## II The model To start with, let us consider an $m$-dimensional dynamical system whose time evolution is described by the following ordinary differential equation $\frac{d\vec{x}}{dt}=\vec{f}(\vec{x})\,,$ (1) where $\vec{x}\in\mathbb{R}^{m}$ denotes the state vector and $\vec{f}:\mathbb{R}^{m}\rightarrow\mathbb{R}^{m}$ some smooth nonlinear function; let us moreover assume that system (1) exhibits an oscillatory behavior, whether periodic or irregular; we are thus considering the framework of generic nonlinear oscillators. Let us now consider $n$ identical copies of system (1) coupled by a symmetric higher-order structure; namely, we allow the nonlinear oscillators to interact in pairs, as well as in triplets, quadruplets, and so on, up to interactions among $D+1$ units. We can thus describe the time evolution of the state vector of the $i$-th unit by $\dot{\vec{x}}_{i}=\vec{f}(\vec{x_{i}})+\sum\limits_{d=1}^{D}q_{d}\sum\limits_{j_{1},\dots,j_{d}=1}^{n}A_{ij_{1}\dots j_{d}}^{(d)}(t)\vec{g}^{(d)}(\vec{x}_{i},\vec{x}_{j_{1}},\dots,\vec{x}_{j_{d}})\,,$ (2) where for $d=1,\dots,D$, $q_{d}>0$ denotes the coupling strength, $\vec{g}^{(d)}:\mathbb{R}^{(d+1)m}\rightarrow\mathbb{R}^{m}$ the nonlinear coupling function and $\mathbf{A}^{(d)}(t)$ the tensor encoding which units are interacting together. More precisely, ${A}^{(d)}_{ij_{1}\dots j_{d}}(t)=1$ if the units $i,j_{1},\dots,j_{d}$ interact at time $t$; observe that this tensor depends on time, i.e., both the intensity of the coupling and which units are coupled can change in time. Finally, we assume the time-varying interaction to be symmetric, namely if ${A}^{(d)}_{ij_{1}\dots j_{d}}(t)=1$, then ${A}^{(d)}_{\pi(ij_{1}\dots j_{d})}(t)=1$ for any permutation $\pi$ of the indexes $i,j_{1},\dots,j_{d}$. Let us emphasize that we consider the number of nodes to be fixed, only the interactions change in time; one could relax this assumption by assuming a sufficiently large reservoir of nodes, from which the core of the system can recruit new nodes or deposit unused nodes. Let us fix a periodic reference solution, $\vec{s}(t)$, of system (1). We are interested in determining the conditions under which the orbit $(\vec{s}(t),\dots,\vec{s}(t))^{\top}$ is a solution of the coupled system (2), and moreover it is stable, namely the $n$ units globally synchronize and behave in unison. A necessary condition is that the coupling functions vanish once evaluated on such orbit, i.e., $\vec{g}^{(d)}(\vec{s},\dots,\vec{s})=0$, for $d=1,\dots,D$. This assumption is known in the literature as the non-invasive condition. For the sake of pedagogy, we will hereby consider a particular case of non-invasive couplings and we will refer the interested reader to Appendix A for a general discussion. We are thus assuming the coupling functions $\vec{g}^{(d)}$ to be diffusive-like, namely for each $d$ there exists a function $\vec{h}^{(d)}:\mathbb{R}^{dm}\rightarrow\mathbb{R}^{m}$ such that $\vec{g}^{(d)}(\vec{x}_{i},\vec{x}_{j_{1}},\dots,\vec{x}_{j_{d}})=\vec{h}^{(d)}(\vec{x}_{j_{1}},\dots,\vec{x}_{j_{d}})-\vec{h}^{(d)}(\vec{x}_{i},\dots,\vec{x}_{i})\,.$ (3) In this way we can straightforwardly ensure that the coupling term in Eq. (3) vanishes once evaluated on the orbit $(\vec{s}(t),\dots,\vec{s}(t))^{\top}$, thus allowing us to conclude that the latter is also a solution of the coupled system.
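To make the construction concrete, the following minimal Python sketch (our own illustration, not code from the original work) evaluates the right-hand side of Eq. (2) for the case $D=2$ with diffusive-like couplings of the form (3); the local dynamics, the coupling functions and the adjacency arrays are user-supplied placeholders, evaluated at the time instant of interest.

```python
import numpy as np

def coupled_rhs(X, f, h1, h2, A1, A2, q1, q2):
    """Right-hand side of Eq. (2) for D=2 with diffusive-like couplings, Eq. (3).

    X  : (n, m) array whose i-th row is the state x_i
    f  : local dynamics, f(x) -> (m,) array
    h1 : pairwise coupling function, h1(x_j) -> (m,) array
    h2 : three-body coupling function, h2(x_j1, x_j2) -> (m,) array
    A1 : (n, n) pairwise adjacency A^(1) evaluated at the current time
    A2 : (n, n, n) three-body adjacency A^(2) evaluated at the current time
    """
    n, _ = X.shape
    dX = np.array([f(X[i]) for i in range(n)], dtype=float)
    for i in range(n):
        # pairwise term: q1 * sum_j A1[i, j] * (h1(x_j) - h1(x_i))
        for j in range(n):
            dX[i] += q1 * A1[i, j] * (h1(X[j]) - h1(X[i]))
        # three-body term: q2 * sum_{j,k} A2[i, j, k] * (h2(x_j, x_k) - h2(x_i, x_i))
        for j in range(n):
            for k in range(n):
                dX[i] += q2 * A2[i, j, k] * (h2(X[j], X[k]) - h2(X[i], X[i]))
    return dX
```

Wrapping this helper for time-dependent adjacency arrays, e.g. `lambda t, X: coupled_rhs(X, f, h1, h2, A1_fn(t), A2_fn(t), q1, q2)` with hypothetical callables `A1_fn` and `A2_fn`, gives a function that can be handed directly to a standard ODE integrator.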
To study the stability of the reference solution, let us now perturb the synchronous solution $(\vec{s}(t),\dots,\vec{s}(t))^{\top}$ with a spatially inhomogeneous term, meaning that $\forall i\in\\{1,\dots,n\\}$ we define $\vec{x}_{i}=\vec{s}+\delta\vec{x}_{i}$. Substituting the latter into Eq. (2) and expanding up to the first order, we obtain $\delta\dot{\vec{x}}_{i}=\frac{\partial\vec{f}}{\partial\vec{x}_{i}}\Big{\rvert}_{\vec{s}}\delta\vec{x}_{i}+\sum_{d=1}^{D}q_{d}\sum_{j_{1},\dots,j_{d}=1}^{n}B_{ij_{1}\dots j_{d}}(t)\sum_{\ell=1}^{d}\frac{\partial\vec{h}^{(d)}}{\partial\vec{x}_{j_{\ell}}}\Big{\rvert}_{(\vec{s},\dots,\vec{s})}\delta\vec{x}_{j_{\ell}}\,,$ (4) where $\displaystyle B_{ij_{1}}(t)$ $\displaystyle=$ $\displaystyle A_{ij_{1}}^{(1)}(t)-k^{(1)}_{i}(t)\delta_{ij_{1}}\,,$ $\displaystyle B_{ij_{1}j_{2}}(t)$ $\displaystyle=$ $\displaystyle A_{ij_{1}j_{2}}^{(2)}(t)-2k_{i}^{(2)}(t)\delta_{ij_{1}j_{2}}\,,\dots$ $\displaystyle B_{ij_{1}j_{2}\dots j_{D}}(t)$ $\displaystyle=$ $\displaystyle A_{ij_{1}j_{2}\dots j_{D}}^{(D)}(t)-D!k_{i}^{(D)}(t)\delta_{ij_{1}j_{2}\dots j_{D}}\,,$ being $\delta_{ij_{1}j_{2}\dots j_{D}}$ the generalized multi-indexes Kronecker-$\delta$, and the (time-varying) $d$-degree of node $i$ is given by $k_{i}^{(d)}(t)=\frac{1}{d!}\sum_{j_{1},..,j_{d}=1}^{n}A_{ij_{1}\dots j_{d}}^{(d)}(t)\,,$ (5) which represents the number of hyperedges of order $d$ incident to node $i$ at time $t$. Observe that if $\mathbf{A}^{(d)}$ is weighted, then $k_{i}^{(d)}(t)$ counts both the number and the weight, it is thus the generalization of the strength of a node. Let us now define $k_{ij}^{(d)}(t)=\frac{1}{(d-1)!}\sum_{j_{1},...,j_{d-1}}^{n}A_{ijj_{1}\dots j_{d-1}}^{(d)}(t)\,,$ (6) namely the number of hyperedges of order $d$ containing both nodes $i$ and $j$ at time $t$. Again, once $\mathbf{A}^{(d)}$ is weighted, then $k_{ij}^{(d)}(t)$ generalizes the link strength. Let us observe that because of the invariance of $\mathbf{A}^{(d)}$ under index permutation, we can conclude that $k_{ij}^{(d)}(t)=k_{ji}^{(d)}(t)$. Finally, we define the generalized time-varying higher-order Laplacian matrix for the interaction of order $d$ as $L_{ij}^{(d)}(t)=\begin{cases}-d!k_{i}^{(d)}(t)&\text{if $i=j$}\\\ (d-1)!k_{ij}^{(d)}(t)&\text{if $i\neq j$}\end{cases}\,.$ (7) Observe that such a matrix is symmetric because of the assumption of the tensors $\mathbf{A}^{(d)}$. Let us also notice the difference in sign with respect to other notations used in the literature. We can then rewrite Eq. 
(4) as follows $\displaystyle\delta\dot{\vec{x}}_{i}$ $\displaystyle=$ $\displaystyle\frac{\partial\vec{f}}{\partial\vec{x}_{i}}\Big{\rvert}_{\vec{s}}\delta\vec{x}_{i}+\sum_{d=1}^{D}q_{d}\left[\sum_{j_{1}=1}^{n}\frac{\partial\vec{h}^{(d)}}{\partial\vec{x}_{j_{1}}}\Big{\rvert}_{(\vec{s},\dots,\vec{s})}\delta\vec{x}_{j_{1}}\sum_{j_{2},\dots,j_{d}=1}^{n}B_{ij_{1}\dots j_{d}}(t)+\dots+\sum_{j_{d}=1}^{n}\frac{\partial\vec{h}^{(d)}}{\partial\vec{x}_{j_{d}}}\Big{\rvert}_{(\vec{s},\dots,\vec{s})}\delta\vec{x}_{j_{d}}\sum_{j_{1},\dots,j_{d-1}=1}^{n}B_{ij_{1}\dots j_{d}}(t)\right]$ (8) $\displaystyle=$ $\displaystyle\frac{\partial\vec{f}}{\partial\vec{x}_{i}}\Big{\rvert}_{\vec{s}}\delta\vec{x}_{i}+\sum_{d=1}^{D}q_{d}\sum_{j=1}^{n}L^{(d)}_{ij}(t)\left[\frac{\partial\vec{h}^{(d)}}{\partial\vec{x}_{j_{1}}}+\dots+\frac{\partial\vec{h}^{(d)}}{\partial\vec{x}_{j_{d}}}\right]_{(\vec{s},\dots,\vec{s})}\delta\vec{x}_{j}\,,$ where we used the fact that $\frac{\partial\vec{h}^{(d)}}{\partial\vec{x}_{j_{1}}}+\dots+\frac{\partial\vec{h}^{(d)}}{\partial\vec{x}_{j_{d}}}$ is independent of the indexes, the latter being just placeholders identifying the variables with respect to which the derivatives are taken. Finally, by defining $\displaystyle\mathbf{J}_{f}$ $\displaystyle:=$ $\displaystyle\frac{\partial\vec{f}}{\partial\vec{x}_{i}}\Big{\rvert}_{\vec{s}(t)}\text{ and }$ $\displaystyle\mathbf{J}_{h^{(d)}}$ $\displaystyle:=$ $\displaystyle\sum_{\ell=1}^{d}\frac{\partial\vec{h}^{(d)}}{\partial\vec{x}_{j_{\ell}}}\Big{\rvert}_{(\vec{s}(t),\dots,\vec{s}(t))}\forall d\in\\{1,\dots,D\\}\,,$ we can rewrite Eq. (8) in compact form $\delta\dot{\vec{x}}_{i}=\mathbf{J}_{f}\delta\vec{x}_{i}+\sum_{d=1}^{D}q_{d}\sum_{j=1}^{n}L^{(d)}_{ij}(t)\mathbf{J}_{h^{(d)}}\delta\vec{x}_{j}\,.$ (9) This is a non-autonomous linear differential equation determining the stability of the perturbation $\delta\vec{x}_{i}$, which can be assessed, for instance, by computing the largest Lyapunov exponent. To make some analytical progress in the study of Eq. (9), we will consider two main directions: the functions $\vec{h}^{(d)}$ satisfy the condition of natural coupling (see Section II.1) or the higher-order structures exhibit regular topologies (see Section II.2). The aim of each assumption is to disentangle the dependence of the nonlinear coupling functions from the higher-order Laplace matrices and thus achieve a better understanding of the problem under study. ### II.1 Natural coupling Let us assume the functions $\vec{h}^{(d)}$ to satisfy the condition of natural coupling, namely $\vec{h}^{(D)}(\vec{x},\dots,\vec{x})=\dots=\vec{h}^{(2)}(\vec{x},\vec{x})=\vec{h}^{(1)}(\vec{x})\,,$ (10) which implies $\mathbf{J}_{h^{(1)}}=\mathbf{J}_{h^{(2)}}=\dots=\mathbf{J}_{h^{(D)}}$ and allows us to eventually rewrite Eq. (9) as follows $\delta\dot{\vec{x}}_{i}=\mathbf{J}_{f}\delta\vec{x}_{i}+\sum_{j=1}^{n}M_{ij}(t)\mathbf{J}_{h^{(1)}}\delta\vec{x}_{j}\,,$ (11) where $M_{ij}(t):=\sum_{d=1}^{D}q_{d}L^{(d)}_{ij}(t)\quad\forall i,j=1,\dots n\,.$ (12) Let us observe that the matrix $\mathbf{M}(t)$ is a Laplace matrix; it is non-positive definite (as is each of the $\mathbf{L}^{(d)}(t)$ matrices for any $d=1,\dots,D$ and any $t>0$, since $q_{d}>0$), it admits $\mu^{(1)}=0$ as eigenvalue associated with the eigenvector $\phi^{(1)}=(1,\dots,1)^{\top}$, and it is symmetric. So there exists an orthonormal time-varying eigenbasis, $\phi^{(\alpha)}(t)$, $\alpha=1,\dots,n$, for $\mathbf{M}(t)$ with associated eigenvalues $\mu^{(\alpha)}\leq 0$.
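As a concrete numerical illustration of the generalized Laplacians of Eqs. (5)-(7) and of the aggregated matrix of Eq. (12), the minimal sketch below (our own, assuming $D=2$, symmetric adjacency arrays and no repeated indices inside a hyperedge) builds them from snapshot adjacency data.

```python
import numpy as np

def laplacian_d1(A1):
    """Pairwise Laplacian, Eq. (7) with d=1: off-diagonal A^(1)_{ij}, diagonal -k_i^(1)."""
    L = np.array(A1, dtype=float)
    np.fill_diagonal(L, 0.0)
    np.fill_diagonal(L, -L.sum(axis=1))   # -k_i^(1) on the diagonal
    return L

def laplacian_d2(A2):
    """Three-body Laplacian, Eq. (7) with d=2.

    Off-diagonal entries: (d-1)! k_{ij}^(2) = sum_k A^(2)_{ijk};
    diagonal entries:     -d!  k_i^(2)      = -sum_{j,k} A^(2)_{ijk}.
    """
    A2 = np.asarray(A2, dtype=float)
    L = A2.sum(axis=2)                    # L[i, j] = sum_k A2[i, j, k]
    np.fill_diagonal(L, 0.0)
    np.fill_diagonal(L, -A2.sum(axis=(1, 2)))
    return L

def aggregated_M(A1, A2, q1, q2):
    """Effective matrix of Eq. (12) for D=2: M(t) = q1 L^(1)(t) + q2 L^(2)(t)."""
    return q1 * laplacian_d1(A1) + q2 * laplacian_d2(A2)
```

For any hypergraph snapshot one can check that every row of `aggregated_M` sums to zero and that its eigenvalues are non-positive, consistently with the properties of $\mathbf{M}(t)$ discussed above.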
Let us define Carletti and Fanelli (2022) the $n\times n$ time-dependent matrix $\mathbf{c}(t)$ that quantifies the projections of the time derivatives of the eigenvectors onto the independent eigendirections, namely $\frac{d\vec{\phi}^{(\alpha)}(t)}{dt}=\sum_{\beta}c_{\alpha\beta}(t)\vec{\phi}^{(\beta)}(t)\quad\forall\alpha=1,\dots,n\,.$ (13) By recalling the orthonormality condition $\left(\vec{\phi}^{(\alpha)}(t)\right)^{\top}\cdot\vec{\phi}^{(\beta)}(t)=\delta_{\alpha\beta}\,,$ we can straightforwardly conclude that $\mathbf{c}$ is a real skew-symmetric matrix with a null first row and first column, i.e., $c_{\alpha\beta}+c_{\beta\alpha}=0$ and $c_{1\alpha}=0$. To make one step further, we consider Eq. (11), and we project it onto the eigendirections, namely we introduce $\delta\vec{x}_{i}=\sum_{\alpha}\delta\hat{\vec{x}}_{\alpha}\phi^{(\alpha)}_{i}$ and recalling the definition of $\mathbf{c}$ we obtain $\frac{d\delta\hat{\vec{x}}_{\beta}}{dt}=\sum_{\alpha}c_{\beta\alpha}(t)\delta\hat{\vec{x}}_{\alpha}+\left[\mathbf{J}_{f}+\mu^{(\beta)}(t)\mathbf{J}_{h^{(1)}}\right]\delta\hat{\vec{x}}_{\beta}\,.$ (14) Let us observe that the latter formula and the following analysis differ from the one presented in Van Gorder (2021), where the perturbation is assumed to align onto a single mode, a hypothesis that ultimately translates into the stationarity of the Laplace eigenvectors, that is, $\mathbf{c}=\mathbf{0}$. The same assumption is also at the root of the results by Zhang and Strogatz (2021); indeed, commuting time-varying networks imply a constant eigenbasis. In conclusion, Eq. (14) returns a more general description of the projection of the linearized dynamics onto a generic time-varying Laplace eigenbasis, thus allowing us to draw general conclusions without unnecessary simplifying assumptions. ### II.2 Regular topologies An alternative approach to study Eq. (9) is to assume regular topologies Muolo _et al._ (2023), namely hypergraphs such that $\mathbf{L}^{(d)}(t)=\alpha_{d}\mathbf{L}^{(1)}(t)$, for $d=1,\dots,D$, with $\alpha_{1}=1$ and $\alpha_{d}\in\mathbb{R}_{+}$. Indeed we can use this assumption to obtain from Eq. (9) $\delta\dot{\vec{x}}_{i}=\mathbf{J}_{f}\delta\vec{x}_{i}+\sum_{j=1}^{n}L^{(1)}_{ij}(t)\mathbf{J}_{\hat{h}}\delta\vec{x}_{j}\,,$ (15) where $\mathbf{J}_{\hat{h}}:=\sum_{d=1}^{D}q_{d}\alpha_{d}\mathbf{J}_{h^{(d)}}\,,$ (16) which results in a sort of weighted nonlinear coupling term. We can now make use of the existence of a time-varying orthonormal basis of $\mathbf{L}^{(1)}(t)$, namely $\psi^{(\alpha)}(t)$, $\alpha=2,\dots,n$, associated with eigenvalues $\Lambda^{(\alpha)}<0$, $\psi^{(1)}(t)=(1,\dots,1)^{\top}$ and $\Lambda^{(1)}=0$, to project $\delta\vec{x}_{i}$ onto the $n$ eigendirections, $\delta\vec{x}_{i}=\sum_{\alpha}\delta\tilde{\vec{x}}_{\alpha}\psi^{(\alpha)}_{i}$. Because the latter vary in time, we need to define a second $n\times n$ time-dependent matrix $\mathbf{b}(t)$ given by $\frac{d\vec{\psi}^{(\alpha)}(t)}{dt}=\sum_{\beta}b_{\alpha\beta}(t)\vec{\psi}^{(\beta)}(t)\quad\forall\alpha=1,\dots,n\,,$ (17) which is again real and skew-symmetric, with a null first row and first column, i.e., $b_{\alpha\beta}+b_{\beta\alpha}=0$ and $b_{1\alpha}=0$, because of the orthonormality condition of the eigenvectors. By projecting Eq.
(15) onto $\psi^{(\alpha)}(t)$, we get $\frac{d\delta\tilde{\vec{x}}_{\beta}}{dt}=\sum_{\alpha}b_{\beta\alpha}(t)\delta\tilde{\vec{x}}_{\alpha}+\left[\mathbf{J}_{f}+\Lambda^{(\beta)}(t)\mathbf{J}_{\hat{h}}\right]\delta\tilde{\vec{x}}_{\beta}\,.$ (18) Let us conclude by observing that the latter equation has the same structure as (14). Those equations determine the generalization of the Master Stability Equation to the case of time-varying higher-order structures. The time-varying signature of the topology is captured by the matrices $\mathbf{c}(t)$ or $\mathbf{b}(t)$ and the eigenvalues $\mu^{(\alpha)}(t)$ or $\Lambda^{(\alpha)}(t)$, while the dynamics (resp. the coupling) is captured by the Jacobian $\mathbf{J}_{f}$ (resp. $\mathbf{J}_{h^{(1)}}$ or $\mathbf{J}_{\hat{h}}$). It is important to notice that, since the eigenvalues $\mu^{(1)}=0$ and $\Lambda^{(1)}=0$ and the skew-symmetric matrices $\mathbf{c}(t),\mathbf{b}(t)$ have a null first row and column, in analogy with the MSF approaches carried out on static networks Pecora and Carroll (1998) and higher-order structures Gambuzza _et al._ (2021), also in the case of time-varying higher-order structures we can decouple the Master Stability Equation into two components. One component describes the movement along the synchronous manifold, while the other component represents the evolution of different modes that are transverse to the synchronous manifold. The Maximum Lyapunov Exponent (MLE) associated with the transverse modes measures the exponential growth rate of a tiny perturbation in the transverse subspace. It serves as an extended form of the Master Stability Function (MSF) and provides valuable insights into the stability of the reference orbit. For the synchronous orbit to be stable, the MLE associated with all transverse modes must be negative. Moreover, the MSF approaches applied to static networks and higher-order structures can be simplified by examining the evolution of the perturbation along each independent eigendirection associated with distinct eigenvalues of the Laplacian matrix. Let us observe that this is not possible in the present case because the matrices $\mathbf{c}(t)$ and $\mathbf{b}(t)$ mix the different modes and introduce a complex interdependence among them, making it challenging to disentangle their individual contributions. For this reason, one has to address the problem numerically Carletti and Fanelli (2022). To demonstrate the theory introduced above and emphasize the outcomes arising from the modified Master Stability Equations (14) and (18), we will present two key examples in the following sections. Indeed, we will utilize the Stuart-Landau limit cycle oscillator and the chaotic Lorenz system as prototype dynamical systems anchored to each individual node. To simplify the calculations, we assume that the hypergraph consists of only three nodes, three links and one triangle (face), whose weights change in time. Additionally, the eigenvector projection matrices $\mathbf{c}(t)$ and $\mathbf{b}(t)$ do not vary in time; this assumption results from a suitable choice of the Laplace eigenbasis as explained later in Appendix B. Finally, to simplify the analysis we also assume the Laplace eigenvalues to be constant in time. Let us stress that despite such assumptions, the proposed framework is very general and can be applied to any time-varying hypergraph. ## III Synchronization of Stuart-Landau oscillators coupled via time-varying higher-order networks The aim of this section is to present an application of the theory above introduced.
We decided to use the Stuart-Landau (SL) model as a prototype example for two reasons; first, it provides the normal form for a generic system close to a supercritical Hopf-bifurcation, second, because of its structure, the Jacobian of the reaction part becomes constant once evaluated on the reference orbit and this simplifies the presentation of the results. A SL oscillator can be described by a complex amplitude $w$ that evolves in time according to $\dot{w}=\sigma w-\beta|w|^{2}w$, where $\sigma=\sigma_{\Re}+i\sigma_{\Im}$ and $\beta=\beta_{\Re}+i\beta_{\Im}$ are complex model parameters. The system admits a limit cycle solution $w_{LC}(t)=\sqrt{\sigma_{\Re}/\beta_{\Re}}e^{i\omega t}$, where $\omega=\sigma_{\Im}-\beta_{\Im}\sigma_{\Re}/\beta_{\Re}$, that is stable provided $\sigma_{\Re}>0$ and $\beta_{\Re}>0$, conditions that we hereby assume. To proceed in the analysis, we couple together $n$ identical SL oscillators, each described by a complex amplitude $w_{j}$, with $j=1,...,n$, anchored to the nodes of a time-varying hypergraph as prescribed in the previous section, namely $\frac{dw_{j}}{dt}=\sigma w_{j}-\beta w_{j}|w_{j}|^{2}+\sum\limits_{d=1}^{D}q_{d}\sum\limits_{j_{1},\dots,j_{d}=1}^{n}A_{jj_{1}\dots j_{d}}^{(d)}(t)\vec{g}^{(d)}(w_{j},w_{j_{1}},\dots,w_{j_{d}})\,.$ (19) For the sake of simplicity, we restrict our analysis to pairwise and three- body interactions, namely $D=2$ in Eq. (19). We hereby present and discuss the SL synchronization under the diffusive-like coupling hypothesis and by using two different assumptions: regular topology and natural coupling. The case of non-invasive coupling will be presented in Appendix A.1. ### III.1 Diffusive-like and regular topology Let us thus assume the existence of two functions $h^{(1)}(w)$ and $h^{(2)}(w_{1},w_{2})$ such that $g^{(1)}$ and $g^{(2)}$ do satisfy the diffusive-like assumption, namely $\begin{array}[]{l}g^{(1)}(w_{j},w_{j_{1}})=h^{(1)}(w_{j_{1}})-h^{(1)}(w_{j})\text{ and }\\\ \\\ g^{(2)}(w_{j},w_{j_{1}},w_{j_{2}})=h^{(2)}(w_{j_{1}},w_{j_{2}})-h^{(2)}(w_{j},w_{j})\,.\end{array}$ For the sake of definitiveness, let us fix $h^{(1)}(w)=w\text{ and }h^{(2)}(w_{1},w_{2})=w_{1}w_{2}\,,$ (20) let us observe that the latter functions do not satisfy the condition for natural coupling, indeed $h^{(1)}(w)=w\neq w^{2}=h^{(2)}(w,w)$. Let us assume to deal with regular topology, namely $\mathbf{L}^{(2)}=\alpha_{2}\mathbf{L}^{(1)}$. Hence following Eq. (16) we can define $\mathbf{J}_{\hat{h}}=q_{1}\mathbf{J}_{h^{(1)}}+q_{2}\alpha_{2}\mathbf{J}_{h^{(2)}}$. Let us perturb the limit cycle solution $w_{LC}(t)=\sqrt{\sigma_{\Re}/\beta_{\Re}}e^{i\omega t}$ by defining $w_{j}=W_{LC}(1+\rho_{j})e^{i\theta_{j}}$, where $\rho_{j}$ and $\theta_{j}$ are real and small functions for all $j$. 
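Before proceeding with the perturbative analysis, a quick numerical sanity check can be useful (our own illustration; the parameter values below are those later used in the figures): integrating a single uncoupled SL oscillator confirms that its amplitude relaxes to $\sqrt{\sigma_{\Re}/\beta_{\Re}}$ while the phase rotates at the frequency $\omega=\sigma_{\Im}-\beta_{\Im}\sigma_{\Re}/\beta_{\Re}$.

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma = 1.0 + 4.3j   # illustrative values, as in the figures below
beta = 1.0 + 1.1j

def sl_rhs(t, y):
    # y = [Re(w), Im(w)];  dw/dt = sigma*w - beta*|w|^2*w
    w = y[0] + 1j * y[1]
    dw = sigma * w - beta * abs(w) ** 2 * w
    return [dw.real, dw.imag]

sol = solve_ivp(sl_rhs, (0.0, 50.0), [0.1, 0.0], rtol=1e-9, atol=1e-12)
w_final = sol.y[0, -1] + 1j * sol.y[1, -1]

print(abs(w_final))                                      # approaches sqrt(sigma_R/beta_R) = 1
print(sigma.imag - beta.imag * sigma.real / beta.real)   # limit-cycle frequency omega = 3.2
```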
A straightforward computation allows to write the time evolution of $\rho_{j}$ and $\theta_{j}$ $\dfrac{d}{dt}\left(\begin{matrix}{\rho_{j}}\\\ {\theta_{j}}\end{matrix}\right)=\left(\begin{matrix}-2\sigma_{\Re}&0\\\ -2\beta_{\Im}\frac{\sigma_{\Re}}{\beta_{\Re}}&0\end{matrix}\right)\left(\begin{matrix}{\rho_{j}}\\\ {\theta_{j}}\end{matrix}\right)+\sum_{\ell}L_{j\ell}^{(1)}\biggl{[}\left(\begin{matrix}q_{1,\Re}&-q_{1,\Im}\\\ q_{1,\Im}&q_{1,\Re}\end{matrix}\right)+2\alpha_{2}\sqrt{\frac{\sigma_{\Re}}{\beta_{\Re}}}\left(\begin{matrix}\cos(\omega t)&-\sin(\omega t)\\\ \sin(\omega t)&\cos(\omega t)\end{matrix}\right)\left(\begin{matrix}q_{2,\Re}&-q_{2,\Im}\\\ q_{2,\Im}&q_{2,\Re}\end{matrix}\right)\biggr{]}\left(\begin{matrix}{\rho_{\ell}}\\\ {\theta_{\ell}}\end{matrix}\right)\,,$ (21) where $\omega=\sigma_{\Im}-\beta_{\Im}\sigma_{\Re}/\beta_{\Re}$ is the frequency of the limit cycle solution. By exploiting the eigenvectors $\psi^{(\alpha)}(t)$ and eigenvalues $\Lambda^{(\alpha)}(t)$ of $\mathbf{L}^{(1)}(t)$ to project the perturbation $\rho_{j}$ and $\theta_{j}$ we obtain: $\dfrac{d}{dt}\left(\begin{matrix}{\rho_{\beta}}\\\ {\theta_{\beta}}\end{matrix}\right)=\sum_{\alpha}b_{\beta\alpha}\left(\begin{matrix}{\rho_{\alpha}}\\\ {\theta_{\alpha}}\end{matrix}\right)+\Bigl{\\{}\left(\begin{matrix}-2\sigma_{\Re}&0\\\ -2\beta_{\Im}\frac{\sigma_{\Re}}{\beta_{\Re}}&0\end{matrix}\right)+\Lambda^{(\beta)}\left[\left(\begin{matrix}q_{1,\Re}&-q_{1,\Im}\\\ q_{1,\Im}&q_{1,\Re}\end{matrix}\right)\\\ +2\alpha_{2}\sqrt{\frac{\sigma_{\Re}}{\beta_{\Re}}}\left(\begin{matrix}\cos(\omega t)&-\sin(\omega t)\\\ \sin(\omega t)&\cos(\omega t)\end{matrix}\right)\left(\begin{matrix}q_{2,\Re}&-q_{2,\Im}\\\ q_{2,\Im}&q_{2,\Re}\end{matrix}\right)\right]\Bigr{\\}}\left(\begin{matrix}{\rho_{\beta}}\\\ {\theta_{\beta}}\end{matrix}\right)\,,$ (22) where the matrix $\mathbf{b}$ has been defined in Eq. (17). For the sake of definiteness and to focus on the impact of the time-varying topology, we hereby consider a simple higher-order network structure composed of $n=3$ nodes, three links and one triangle. Moreover, the eigenvalues are assumed to be constant and the time-derivative of the associated eigenvectors projected on the eigenbasis to return a constant matrix $\mathbf{b}$, for a given $\Omega\geq 0$ $\mathbf{b}=\begin{pmatrix}0&0&0\\\ 0&0&\Omega\\\ 0&-\Omega&0\end{pmatrix}\,.$ (23) One can show (see Appendix B and Carletti and Fanelli (2022)) that those assumptions on the hypergraph correspond to two eigenvectors rotating in a plane orthogonal to the constant eigenvector $\psi^{(1)}\sim(1,\dots,1)^{\top}$ with frequency $\Omega>0$. The case $\Omega=0$ corresponds thus to a static higher-order network structure. Figure 1: Synchronization on time-varying regular higher-order network of coupled SL oscillators. We report the MSF as a function of $q_{1,\Im}$ and $q_{2,\Im}$ for two different values of $\Omega$, $\Omega=0$ (panel (a)) and $\Omega=2$ (panel (b)), by using a color code, we determine the region of stability (black) and the region of instability (yellow). The remaining parameters have been fixed at the values $\alpha_{2}=2$, $\sigma=1.0+4.3i$, $\beta=1.0+1.1i$, $q_{1,\Re}=0.1$, $q_{2,\Re}=0.1$, $\Lambda^{(2)}=-1$, and $\Lambda^{(3)}=-2$. Under those assumptions, Eq. (22) determines a time periodic linear system whose stability can be determined by using Floquet theory. 
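A minimal sketch of how such a Floquet computation can be set up numerically (our own illustration, not the authors' code) is the following: it integrates the monodromy matrix of a generic linear system with $T$-periodic coefficients and returns the largest Floquet exponent, i.e., the MSF. For Eq. (22) with $n=3$ the transverse dynamics is four-dimensional (modes $\beta=2,3$, each carrying $(\rho,\theta)$ and coupled through the matrix $\mathbf{b}$ of Eq. (23)), so the user-supplied callable `jac_of_t` has to return the corresponding $4\times 4$ coefficient matrix and the period is $T=2\pi/\omega$.

```python
import numpy as np
from scipy.integrate import solve_ivp

def floquet_msf(jac_of_t, dim, period):
    """Largest Floquet exponent of dx/dt = jac_of_t(t) @ x, with T-periodic jac_of_t.

    The monodromy matrix Phi(T) is obtained by integrating dPhi/dt = A(t) Phi from
    Phi(0) = identity; the MSF is max_k log|mu_k| / T over its eigenvalues mu_k.
    """
    def rhs(t, y):
        Phi = y.reshape(dim, dim)
        return (jac_of_t(t) @ Phi).ravel()

    y0 = np.eye(dim).ravel()
    sol = solve_ivp(rhs, (0.0, period), y0, rtol=1e-10, atol=1e-12)
    monodromy = sol.y[:, -1].reshape(dim, dim)
    multipliers = np.linalg.eigvals(monodromy)
    return float(np.max(np.log(np.abs(multipliers))) / period)
```

Scanning this function over a grid of coupling parameters yields stability maps of the kind shown in Fig. 1.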
In order to illustrate our results, we let $q_{1,\Im}$ and $q_{2,\Im}$ to freely vary in the range $[-5,5]$, while keeping fixed to generic values the remaining parameters, and we compute the Floquet eigenvalue with the largest real part, corresponding thus to the Master Stability Function (MSF) of Eq. (22), as a function of $q_{1,\Im}$ and $q_{2,\Im}$. The corresponding results are shown in Fig. 1 for $\Omega=0$ (panel (a)) and $\Omega=2$ (panel (b)). By a direct inspection, one can clearly conclude that the parameters region associated with a negative MSF (black region), i.e., to the stability of the SL limit cycle and thus to global synchronization, is larger for $\Omega>0$ than for $\Omega=0$. Figure 2: Synchronization on time-varying regular higher-order network of coupled SL oscillators. The MSF is reported as a function of $\epsilon_{1}$ and $\epsilon_{2}$ for two different values of $\Omega$, $\Omega=0$ (panel (a)) and $\Omega=2$ (panel (b)). The color code represents the values of the MSF, negative values (black) while positive values (yellow). The remaining parameters have been fixed at the values $\alpha_{2}=2$, $\sigma=1.0+4.3i$, $\beta=1.0+1.1i$, $q_{1,0}=0.1-0.5i$, $q_{2,0}=0.1+0.5i$, $\Lambda^{(2)}=-1$, and $\Lambda^{(3)}=-2$. To study the combined effect of both coupling strengths $q_{1}$ and $q_{2}$, we set $q_{1}=\epsilon_{1}q_{1,0}$ and $q_{2}=\epsilon_{2}q_{2,0}$, and we compute the MSF as a function of $\epsilon_{1}$ and $\epsilon_{2}$, having fixed without loss of generality $q_{1,0}=0.1-0.5i$ and $q_{2,0}=0.1-0.5i$. The corresponding results are presented in Fig. 2 for static ($\Omega=0$, panel (a)) and time-varying ($\Omega=2$, panel (b)) higher-order structure. We can again conclude that the region of parameters corresponding to global synchronization (black region) is larger in the case of time-varying hypergraph than in the static case. Figure 3: Synchronization domains. We show the MSF in the plane $(\Omega,\epsilon_{1})$ (panel (a)) for $\epsilon_{2}=0.02$ and in the plane $(\Omega,\epsilon_{2})$ (panel (b)) for $\epsilon_{1}=0.02$. We can observe that in both panels, the critical value of coupling strengths $\hat{\epsilon}_{j}(\Omega)$ to achieve synchronization is smaller for $\Omega>0$ than for $\Omega=0$. Furthermore, in panel (a) existence of an interval $\mathcal{I}_{1}=[\Omega_{1},\Omega_{2}]$ can be observed such that for all $\Omega\in\mathcal{I}_{1}$, there exist three different values of critical coupling $\hat{\epsilon}_{1}$ for the occurrence of synchronization. In panel (b), we can observe the existence of two intervals $\mathcal{I}_{2}=[\Omega_{3},\Omega_{4}]$ and $\mathcal{I}_{3}=[\Omega_{5},\Omega_{6}]$ such that for all $\Omega\in\mathcal{I}_{2}$ there exist two critical values of $\hat{\epsilon}_{2}$ and for all $\Omega\in\mathcal{I}_{3}$ there exist three critical values of $\hat{\epsilon}_{2}$ for the emergence of synchronization. The remaining parameters are kept fixed at the values $\alpha_{2}=2$, $\sigma=1.0+4.3i$, $\beta=1.0+1.1i$, $q_{1,0}=0.1-0.5i$, $q_{2,0}=0.1+0.5i$, $\Lambda^{(2)}=-1$, and $\Lambda^{(3)}=-2$. Our last analysis concerns the relation between the frequency $\Omega$ and the size of the coupling parameters $\epsilon_{1}$, $\epsilon_{2}$, still assuming $q_{1}=\epsilon_{1}q_{1,0}$ and $q_{2}=\epsilon_{2}q_{2,0}$, on the onset of synchronization. In Fig. 
3 we report the MSF in the plane $(\Omega,\epsilon_{1})$ for a fixed value of $\epsilon_{2}$ (panel (a)), and in the plane $(\Omega,\epsilon_{2})$ for a fixed value of $\epsilon_{1}$ (panel (b)). Let us observe that the synchronization can be easier achieved the smaller the value $\epsilon_{j}$, $j=1,2$, for which the MSF is negative, having fixed $\Omega$. Let us thus define $\hat{\epsilon}_{1}(\Omega)=\min\\{\epsilon>0:\mathrm{MSF}(\epsilon,\epsilon_{2},\Omega)<0\\}$, for fixed $\epsilon_{2}$, and similarly $\hat{\epsilon}_{2}(\Omega)$. The results of Fig. 3 clearly show that $\hat{\epsilon}_{1}(\Omega)<\hat{\epsilon}_{1}(0)\sim 3.5$ and $\hat{\epsilon}_{2}(\Omega)<\hat{\epsilon}_{2}(0)\sim 4.2$ and thus support our claim that time-varying structures allow to achieve synchronization easier. To support our analysis, we performed numerical simulations of the SL defined on the simple $3$ nodes time-varying hypergraph. We selected $(\epsilon_{1},\epsilon_{2})=(2.5,0.5)$ and the remaining parameters values as in Fig. 2. By observing the latter figure, we conclude that for the chosen parameters, the MSF is positive if $\Omega=0$ and negative if $\Omega=2$, hence the SL should globally synchronize on the time-varying hypergraph while it would not achieve this state in the static case. Results of Fig. 4 confirm these conclusions; indeed, we can observe that (real part of) the complex state variable is in phase for all $i$ in the case $\Omega=2$ (right panel), while this is not clearly the case for $\Omega=0$ (left panel). Figure 4: Temporal evolution of $\Re w_{i}$ for a suitable choice of coupling parameters $\epsilon_{1}=2.5$ and $\epsilon_{2}=0.5$. In the top panel, we set $\Omega=0$ while $\Omega=2$ in the bottom panel. The other parameters values are the same as in Fig. 2, i.e., $\alpha_{2}=2$, $\sigma=1.0+4.3i$, $\beta=1.0+1.1i$, $q_{1,0}=0.1-0.5i$, $q_{2,0}=0.1+0.5i$, $\Lambda^{(2)}=-1$, and $\Lambda^{(3)}=-2$. ### III.2 Diffusive-like and natural coupling The aim of this section is to replace the condition of regular topology with a condition of natural coupling and consider thus again, a diffusive-like coupling. Let us thus consider now two functions $h^{(1)}(w)$ and $h^{(2)}(w_{1},w_{2})$ satisfying the natural coupling assumption, namely $h^{(1)}(w)=h^{(2)}(w,w)\,.$ For the sake of definitiveness, let us fix $h^{(1)}(w)=w^{3}\text{ and }h^{(2)}(w_{1},w_{2})=w_{1}(w_{2})^{2}\,.$ (24) Consider again to perturb the limit cycle solution $w_{LC}(t)=\sqrt{\sigma_{\Re}/\beta_{\Re}}e^{i\omega t}$ by defining $w_{j}=W_{LC}(1+\rho_{j})e^{i\theta_{j}}$, where $\rho_{j}$ and $\theta_{j}$ are real and small functions for all $j$. A straightforward computation allows us to write the time evolution of $\rho_{j}$ and $\theta_{j}$ as, $\dfrac{d}{dt}\left(\begin{matrix}{\rho_{j}}\\\ {\theta_{j}}\end{matrix}\right)=\left(\begin{matrix}-2\sigma_{\Re}&0\\\ -2\beta_{\Im}\frac{\sigma_{\Re}}{\beta_{\Re}}&0\end{matrix}\right)\left(\begin{matrix}{\rho_{j}}\\\ {\theta_{j}}\end{matrix}\right)+3\frac{\sigma_{\Re}}{\beta_{\Re}}\sum_{\ell}M_{j\ell}\left(\begin{matrix}\cos(2\omega t)&-\sin(2\omega t)\\\ \sin(2\omega t)&\cos(2\omega t)\end{matrix}\right)\left(\begin{matrix}{\rho_{l}}\\\ {\theta_{l}}\end{matrix}\right)\,,$ (25) where $\omega=\sigma_{\Im}-\beta_{\Im}\sigma_{\Re}/\beta_{\Re}$ is the frequency of the limit cycle solution and $\mathbf{M}$ is the matrix $q_{1}\mathbf{L}^{(1)}(t)+q_{2}\mathbf{L}^{(2)}(t)$ (see Eq. (12)). 
Let us observe that in this case, the coupling parameters $q_{1}$ and $q_{2}$ should be real numbers if we want to deal with real Laplace matrices, a hypothesis that we hereby assume to hold true. By invoking the eigenvectors $\phi^{(\alpha)}(t)$ and eigenvalues $\mu^{(\alpha)}(t)$ of $\mathbf{M}(t)$, and the matrix $\mathbf{c}$ (see Eq. (13)), we can project the perturbations $\rho_{j}$ and $\theta_{j}$ onto the eigenbasis and thus rewrite the time variation of the perturbation as follows $\dfrac{d}{dt}\left(\begin{matrix}{\rho_{\beta}}\\\ {\theta_{\beta}}\end{matrix}\right)=\sum_{\alpha}c_{\beta\alpha}\left(\begin{matrix}{\rho_{\alpha}}\\\ {\theta_{\alpha}}\end{matrix}\right)+\biggl{[}\left(\begin{matrix}-2\sigma_{\Re}&0\\\ -2\beta_{\Im}\frac{\sigma_{\Re}}{\beta_{\Re}}&0\end{matrix}\right)+3\frac{\sigma_{\Re}}{\beta_{\Re}}\mu^{(\beta)}\left(\begin{matrix}\cos(2\omega t)&-\sin(2\omega t)\\\ \sin(2\omega t)&\cos(2\omega t)\end{matrix}\right)\biggr{]}\left(\begin{matrix}{\rho_{\beta}}\\\ {\theta_{\beta}}\end{matrix}\right)\,.$ (26) Figure 5: Synchronization on time-varying higher-order network of coupled SL oscillators with diffusive-like natural coupling. We report the MSF as a function of the eigenvalues $\mu^{(2)}$ and $\mu^{(3)}$ for two different choices of $\Omega$, $\Omega=0$ (panel (a)) and $\Omega=2$ (panel (b)), by using a color code: black is associated with negative values, while positive ones are shown in yellow. We characterize the range of the axes by considering the absolute values of the eigenvalues. The remaining parameters are kept fixed at $\sigma=1.0+4.3i$, $\beta=1.0+1.1i$. Let us assume again that we deal with a hypergraph made of $3$ nodes and consider a time-independent matrix $\mathbf{c}$ $\mathbf{c}=\begin{pmatrix}0&0&0\\\ 0&0&\Omega\\\ 0&-\Omega&0\end{pmatrix}\,,$ for some $\Omega\geq 0$. The eigenvalue $\mu^{(1)}=0$ of $\mathbf{M}$ determines the dynamics parallel to the synchronous manifold. On the other hand, the equations obtained for $\mu^{(2)}$ and $\mu^{(3)}$ give the dynamics of the modes transverse to the synchronization manifold. Hence the MSF can be obtained by solving the latter equations, providing the conditions for a globally stable synchronous solution to exist. In Fig. 5, we show the level sets of the MSF as a function of the eigenvalues $\mu^{(2)}$ and $\mu^{(3)}$ while keeping the remaining parameters in Eq. (26) fixed at generic nominal values. In panel (a), we consider a static hypergraph, i.e., $\Omega=0$, while in panel (b) a time-varying hypergraph, i.e., $\Omega=2$. Negative values of the MSF are reported in black, and thus correspond to a global synchronous state, while positive values are shown in yellow; one can clearly appreciate that in the case of the time-varying hypergraph, the MSF is negative for a much larger set of eigenvalues $\mu^{(2)}$ and $\mu^{(3)}$ and thus the SL system can synchronize more easily. ## IV Synchronization of Lorenz systems nonlinearly coupled via time-varying higher-order networks The aim of this section is to show that our results hold true beyond the example of the dynamical system shown above, i.e., the Stuart-Landau. We thus present an application of synchronization for chaotic systems on a time-varying higher-order network. For the sake of definiteness, we used the paradigmatic chaotic Lorenz model for the evolution of the individual nonlinear oscillators.
We consider again the scenario of regular topology with the toy model hypergraph structure composed of $n=3$ nodes described previously; the whole system can thus be described by $\begin{cases}\dot{x}_{i}&=a_{1}(y_{i}-x_{i})+\epsilon_{2}\sum\limits_{j=1}^{N}\sum\limits_{k=1}^{N}A^{(2)}_{ijk}(x_{j}^{2}x_{k}-x_{i}^{3})\\\ \dot{y}_{i}&=x_{i}(a_{3}-z_{i})-y_{i}+\epsilon_{1}\sum\limits_{j=1}^{N}A^{(1)}_{ij}(y_{j}-y_{i})\\\ \dot{z}_{i}&=x_{i}y_{i}-a_{2}z_{i}\end{cases}\,,$ (27) where the system parameters are kept fixed at $a_{1}=10$, $a_{2}=\frac{8}{3}$, $a_{3}=28$, for which each individual node exhibits a chaotic trajectory. The pairwise and higher-order structures are related to each other by $\mathbf{L}^{(2)}=\alpha_{2}\mathbf{L}^{(1)}$. We assume the eigenvalues of the Laplacian $\mathbf{L}^{(1)}$ to be constant and the matrix $\mathbf{b}$ to be given by $\mathbf{b}=\begin{pmatrix}0&0&0\\\ 0&0&\Omega\\\ 0&-\Omega&0\end{pmatrix}\quad\text{for some $\Omega\geq 0$.}$ Let us thus select as reference solution $\vec{s}(t)$ a chaotic orbit of the isolated Lorenz model and consider, as done previously, the time evolution of a perturbation about such trajectory. Computations similar to those reported above allow us to obtain a linear non-autonomous system ruling the evolution of the perturbation, whose stability can be numerically inferred by computing the largest Lyapunov exponent, i.e., the MSF. We first considered the impact of the coupling strengths $\epsilon_{1}$ and $\epsilon_{2}$ on synchronization; results are reported in Fig. 6, where we present the level sets of the MSF as a function of the above parameters by using a color code: black dots refer to a negative MSF while yellow dots to a positive MSF. Panel (a) refers to a static hypergraph, i.e., $\Omega=0$, while panel (b) to a time-varying one, i.e., $\Omega=3$; one can thus appreciate that the latter setting allows a negative MSF for a larger range of the parameters $\epsilon_{1}$ and $\epsilon_{2}$, and hence we can conclude that time-varying hypergraphs enhance synchronization also in the case of chaotic oscillators. Figure 6: Synchronization on time-varying regular higher-order network of coupled Lorenz oscillators. We report the MSF as a function of the coupling strengths, $\epsilon_{1}$ and $\epsilon_{2}$, for two different values of $\Omega$, $\Omega=0$ (panel (a)) and $\Omega=3$ (panel (b)), by using a color code, where black dots stand for a negative MSF, i.e., global synchronization, while yellow dots for a positive MSF. The remaining parameters are kept fixed at $a_{1}=10$, $a_{2}=\frac{8}{3}$, $a_{3}=28$, and $\alpha_{2}=2$. We conclude this analysis by studying again the interplay between the frequency $\Omega$ and the size of the coupling parameters $\epsilon_{1}$, $\epsilon_{2}$ in the onset of synchronization. In Fig. 7 we show the MSF in the plane $(\Omega,\epsilon_{1})$ for a fixed value of $\epsilon_{2}=0.01$ (panel (a)), and in the plane $(\Omega,\epsilon_{2})$ for a fixed value of $\epsilon_{1}=0.2$ (panel (b)). By using again $\hat{\epsilon}_{1}(\Omega)=\min\\{\epsilon>0:\mathrm{MSF}(\epsilon,\epsilon_{2},\Omega)<0\\}$, for fixed $\epsilon_{2}$, and similarly $\hat{\epsilon}_{2}(\Omega)$, we can conclude that $\hat{\epsilon}_{1}(\Omega)<\hat{\epsilon}_{1}(0)\sim 1.4$ and $\hat{\epsilon}_{2}(\Omega)<\hat{\epsilon}_{2}(0)\sim 0.04$, thus supporting again our claim that time-varying structures make synchronization easier to achieve.
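As a complementary cross-check (a sketch of our own, not the authors' implementation), the coupled system (27) can also be integrated directly; the helper below implements its right-hand side for arbitrary time-varying adjacency arrays, e.g. those of the three-node hypergraph constructed in Appendix B, and synchronization can then be monitored through the spread of the node states along the trajectory. The adjacency callables `A1_fn` and `A2_fn` in the commented usage example are placeholders to be supplied by the user.

```python
import numpy as np
from scipy.integrate import solve_ivp

a1, a2, a3 = 10.0, 8.0 / 3.0, 28.0   # Lorenz parameters used in the text

def lorenz_hoi_rhs(t, state, A1_fn, A2_fn, eps1, eps2):
    """Right-hand side of Eq. (27); A1_fn(t) and A2_fn(t) return the (n, n) and
    (n, n, n) time-varying adjacency arrays."""
    n = state.size // 3
    x, y, z = state[:n], state[n:2 * n], state[2 * n:]
    A1, A2 = A1_fn(t), A2_fn(t)
    dx = a1 * (y - x) + eps2 * (np.einsum('ijk,j,k->i', A2, x ** 2, x)
                                - A2.sum(axis=(1, 2)) * x ** 3)
    dy = x * (a3 - z) - y + eps1 * (A1 @ y - A1.sum(axis=1) * y)
    dz = x * y - a2 * z
    return np.concatenate([dx, dy, dz])

# Example usage with n = 3 and user-supplied adjacency callables A1_fn, A2_fn:
# sol = solve_ivp(lorenz_hoi_rhs, (0, 100), np.random.randn(9),
#                 args=(A1_fn, A2_fn, 0.2, 0.01), rtol=1e-8)
# sync_error = np.ptp(sol.y[:3], axis=0)   # spread of the x_i, vanishes at synchronization
```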
Figure 7: We show the MSF in the plane $(\Omega,\epsilon_{1})$ (panel (a)) for $\epsilon_{2}=0.01$ and in the plane $(\Omega,\epsilon_{2})$ (panel (b)) for $\epsilon_{1}=0.2$. We can observe that in both panels the critical value of the coupling strengths $\hat{\epsilon}_{j}(\Omega)$ needed to achieve synchronization is smaller for $\Omega>0$ than for $\Omega=0$. This implies that synchronization can occur more easily on a time-varying higher-order structure than on a static one. ## V Conclusions To sum up, we have here introduced and studied a generalized framework for the emergence of global synchronization on time-varying higher-order networks and developed a theory for its stability without imposing strong restrictions on the functional time evolution of the higher-order structure. We have demonstrated that the latter can be examined by extending the Master Stability Function technique to the novel framework for specific cases based either on the inter-node coupling scheme or on the topology of the higher-order structure. Our findings reveal that the behavior of the higher-order network is represented by a matrix that changes over time and possesses skew symmetry. This matrix is derived from the time-dependent evolution of the eigenvectors of the higher-order Laplacian. Additionally, the eigenvalues associated with these eigenvectors can also vary over time and have an impact on shaping the evolution of the introduced disturbance. We have validated the proposed theory on time-varying hypergraphs of coupled Stuart-Landau oscillators and chaotic Lorenz systems, and the results obtained indicate that incorporating temporal aspects into group interactions can facilitate synchronization in higher-order networks compared to static ones. The framework and concepts presented in this study create opportunities for future research on the impact of temporality in systems where time-varying group interactions have been observed but not yet thoroughly explored due to the absence of a suitable mathematical setting. Importantly, the fact that our theory does not require any restrictions on the time evolution of the underlying structure could offer the possibility to apply it to a diverse range of applications other than synchronization. ## References * Arenas _et al._ (2008) A. Arenas, A. Díaz-Guilera, J. Kurths, Y. Moreno, and C. Zhou, Physics Reports 469, 93 (2008). * Boccaletti _et al._ (2018) S. Boccaletti, A. N. Pisarchik, C. I. Del Genio, and A. Amann, _Synchronization: from coupled systems to complex networks_ (Cambridge University Press, 2018). * Barabási (2016) A.-L. Barabási, _Network science_ (Cambridge University Press, 2016). * Wasserman _et al._ (1994) S. Wasserman, K. Faust, _et al._ , _Social Network Analysis: Methods and Applications_ (Cambridge University Press, 1994). * Valencia _et al._ (2008) M. Valencia, J. Martinerie, S. Dupont, and M. Chavez, Physical Review E 77, 050905 (2008). * Bassett _et al._ (2011) D. S. Bassett, N. F. Wymbs, M. A. Porter, P. J. Mucha, J. M. Carlson, and S. T. Grafton, Proceedings of the National Academy of Sciences 108, 7641 (2011). * Holme and Saramäki (2012) P. Holme and J. Saramäki, Physics Reports 519, 97 (2012). * Masuda and Lambiotte (2016) N. Masuda and R. Lambiotte, _A guide to temporal networks_ (World Scientific, 2016). * Ghosh _et al._ (2022) D. Ghosh, M. Frasca, A. Rizzo, S. Majhi, S. Rakshit, K. Alfaro-Bittner, and S. Boccaletti, Physics Reports 949, 1 (2022). * Carletti and Fanelli (2022) T. Carletti and D.
Fanelli, Chaos, Solitons & Fractals 159, 112180 (2022). * Anwar _et al._ (2022) M. S. Anwar, S. Rakshit, D. Ghosh, and E. M. Bollt, Physical Review E 105, 024303 (2022). * Carletti _et al._ (2020a) T. Carletti, D. Fanelli, and S. Nicoletti, Journal of Physics Complexity 1, 035006 (2020a). * Battiston _et al._ (2020) F. Battiston, G. Cencetti, I. Iacopini, V. Latora, M. Lucas, A. Patania, J.-G. Young, and G. Petri, Physics Reports 874, 1 (2020). * Battiston _et al._ (2021) F. Battiston, E. Amico, A. Barrat, G. Bianconi, G. Ferraz de Arruda, B. Franceschiello, I. Iacopini, S. Kéfi, V. Latora, Y. Moreno, _et al._ , Nature Physics 17, 1093 (2021). * Majhi _et al._ (2022) S. Majhi, M. Perc, and D. Ghosh, Journal of the Royal Society Interface 19, 20220043 (2022). * Boccaletti _et al._ (2023) S. Boccaletti, P. De Lellis, C. del Genio, K. Alfaro-Bittner, R. Criado, S. Jalan, and M. Romance, Physics Reports 1018, 1 (2023). * Berge (1973) C. Berge, _Graphs and hypergraphs_ , North-Holland Pub. Co. (American Elsevier Pub. Co, 1973). * Bianconi (2021) G. Bianconi, _Higher-order networks: An introduction to simplicial compelxes_ (Cambridge University Press, 2021). * Neuhäuser _et al._ (2020) L. Neuhäuser, A. Mellor, and R. Lambiotte, Physical Review E 101, 032310 (2020). * Neuhäuser _et al._ (2021) L. Neuhäuser, R. Lambiotte, and M. T. Schaub, Physical Review E 104, 064305 (2021). * Carletti _et al._ (2020b) T. Carletti, F. Battiston, G. Cencetti, and D. Fanelli, Phys. Rev. E 101, 022308 (2020b). * Schaub _et al._ (2020) M. T. Schaub, A. R. Benson, P. Horn, G. Lippner, and A. Jadbabaie, SIAM Review 62, 353 (2020). * Muolo _et al._ (2023) R. Muolo, L. Gallo, V. Latora, M. Frasca, and T. Carletti, Chaos, Solitons & Fractals 166, 112912 (2023). * Gao _et al._ (2023) S. Gao, L. Chang, M. Perc, and Z. Wang, Physical Review E 107, 014216 (2023). * Skardal and Arenas (2020) P. S. Skardal and A. Arenas, Communications Physics 3, 1 (2020). * Skardal and Arenas (2019) P. S. Skardal and A. Arenas, Physical Review Letters 122, 248301 (2019). * Carletti _et al._ (2023) T. Carletti, L. Giambagli, and G. Bianconi, Physical Review Letters 130, 187401 (2023). * Anwar and Ghosh (2022a) M. S. Anwar and D. Ghosh, Chaos: An Interdisciplinary Journal of Nonlinear Science 32, 033125 (2022a). * Anwar and Ghosh (2022b) M. S. Anwar and D. Ghosh, Physical Review E 106, 034314 (2022b). * Iacopini _et al._ (2019) I. Iacopini, G. Petri, A. Barrat, and V. Latora, Nature Communications 10, 2485 (2019). * Chowdhary _et al._ (2021) S. Chowdhary, A. Kumar, G. Cencetti, I. Iacopini, and F. Battiston, Journal of Physics: Complexity 2, 035019 (2021). * Cencetti _et al._ (2021) G. Cencetti, F. Battiston, B. Lepri, and M. Karsai, Scientific Reports 11, 7028 (2021). * Anwar and Ghosh (2022c) M. S. Anwar and D. Ghosh, arXiv preprint arXiv:2212.01081 (2022c). * Stilwell _et al._ (2006) D. J. Stilwell, E. M. Bollt, and D. G. Roberson, SIAM Journal on Applied Dynamical Systems 5, 140 (2006). * Petit _et al._ (2017) J. Petit, B. Lauwens, D. Fanelli, and T. Carletti, Physical Review Letters 119, 148301 (2017). * Pecora and Carroll (1998) L. M. Pecora and T. L. Carroll, Physical Review Letters 80, 2109 (1998). * Van Gorder (2021) R. A. Van Gorder, Proceedings of the Royal Society A 477, 20200753 (2021). * Zhang and Strogatz (2021) Y. Zhang and S. H. Strogatz, Nature Communications , 3273 (2021). * Gambuzza _et al._ (2021) L. V. Gambuzza, F. Di Patti, L. Gallo, S. Lepri, M. Romance, R. Criado, M. Frasca, V. Latora, and S. 
Boccaletti, Nature Communications 12, 1255 (2021). ## Appendix A Non-invasive couplings Here we will discuss the results corresponding to a slightly more general hypothesis for $\vec{g}^{(d)}$, namely to be non-invasive, i.e., $\vec{g}^{(d)}(\vec{s},\dots,\vec{s})=0\quad\forall d=1,\dots,D\,,$ (28) whose goal is again to guarantee that the coupling term in Eq. (3) vanishes once evaluated on the orbit $(\vec{s}(t),\dots,\vec{s}(t))^{\top}$. Indeed by using again $\vec{x}_{i}=\vec{s}+\delta\vec{x}_{i}$ and expanding Eq. (3) up to the first order we get $\delta\dot{\vec{x}}_{i}=\mathbf{J}_{f}\delta\vec{x}_{i}+\sum_{d=1}^{D}q_{d}\sum_{j_{1},\dots,j_{d}=1}^{n}B_{ij_{1}\dots j_{d}}(t)\left[\frac{\partial\vec{g}^{(d)}}{\partial\vec{x}_{i}}\Big{\rvert}_{(\vec{s},\dots,\vec{s})}\delta\vec{x}_{i}+\frac{\partial\vec{g}^{(d)}}{\partial\vec{x}_{j_{1}}}\Big{\rvert}_{(\vec{s},\dots,\vec{s})}\delta\vec{x}_{j_{1}}+\dots+\frac{\partial\vec{g}^{(d)}}{\partial\vec{x}_{j_{d}}}\Big{\rvert}_{(\vec{s},\dots,\vec{s})}\delta\vec{x}_{j_{d}}\right]\,;$ (29) from Eq. (28) we can obtain $\frac{\partial\vec{g}^{(d)}}{\partial\vec{x}_{i}}\Big{\rvert}_{(\vec{s},\dots,\vec{s})}+\frac{\partial\vec{g}^{(d)}}{\partial\vec{x}_{j_{1}}}\Big{\rvert}_{(\vec{s},\dots,\vec{s})}+\dots+\frac{\partial\vec{g}^{(d)}}{\partial\vec{x}_{j_{d}}}\Big{\rvert}_{(\vec{s},\dots,\vec{s})}=0\,,$ and thus rewrite (29) as follows $\delta\dot{\vec{x}}_{i}=\mathbf{J}_{f}\delta\vec{x}_{i}+\sum_{d=1}^{D}q_{d}\sum_{j_{1},\dots,j_{d}=1}^{n}B_{ij_{1}\dots j_{d}}(t)\left[\frac{\partial\vec{g}^{(d)}}{\partial\vec{x}_{j_{1}}}\Big{\rvert}_{(\vec{s},\dots,\vec{s})}(\delta\vec{x}_{j_{1}}-\delta\vec{x}_{i})+\dots+\frac{\partial\vec{g}^{(d)}}{\partial\vec{x}_{j_{d}}}\Big{\rvert}_{(\vec{s},\dots,\vec{s})}(\delta\vec{x}_{j_{d}}-\delta\vec{x}_{i})\right]\,.$ (30) Recalling the definition of $k^{(d)}_{ij}$ given in Eq. (6) we get $\delta\dot{\vec{x}}_{i}=\mathbf{J}_{f}\delta\vec{x}_{i}+\sum_{d=1}^{D}q_{d}(d-1)!\left[\sum_{j_{1}=1}^{n}k^{(d)}_{ij_{1}}(t)\frac{\partial\vec{g}^{(d)}}{\partial\vec{x}_{j_{1}}}\Big{\rvert}_{(\vec{s},\dots,\vec{s})}(\delta\vec{x}_{j_{1}}-\delta\vec{x}_{i})+\dots+\sum_{j_{l}=1}^{n}k^{(d)}_{ij_{d}}(t)\frac{\partial\vec{g}^{(d)}}{\partial\vec{x}_{j_{d}}}\Big{\rvert}_{(\vec{s},\dots,\vec{s})}(\delta\vec{x}_{j_{d}}-\delta\vec{x}_{i})\right]\,.$ (31) By using the definition of the higher-order Laplace matrix (7) we eventually obtain $\delta\dot{\vec{x}}_{i}=\mathbf{J}_{f}\delta\vec{x}_{i}-\sum_{d=1}^{D}q_{d}\sum_{j=1}^{n}L^{(d)}_{ij}(t)\left[\frac{\partial\vec{g}^{(d)}}{\partial\vec{x}_{j_{1}}}\Big{\rvert}_{(\vec{s},\dots,\vec{s})}+\dots+\frac{\partial\vec{g}^{(d)}}{\partial\vec{x}_{j_{d}}}\Big{\rvert}_{(\vec{s},\dots,\vec{s})}\right]\delta\vec{x}_{j}\,.$ (32) Let us consider now a particular case of non-invasive function, we assume thus there exists a function $\vec{\varphi}:\mathbb{R}^{m}\rightarrow\mathbb{R}^{m}$, such that $\vec{\varphi}(0)=0$ and define $g^{(d)}(\vec{x}_{i},\vec{x}_{j_{1}},\dots,\vec{x}_{j_{d}})=\sum_{\ell=1}^{d}\vec{\varphi}(\vec{x}_{i}-\vec{x}_{j_{\ell}})\,,$ (33) then $\frac{\partial\vec{g}^{(d)}}{\partial\vec{x}_{j_{\ell}}}=-\mathbf{J}_{\varphi}(0)\,,$ where $\mathbf{J}_{\varphi}(0)$ is the Jacobian of the function $\vec{\varphi}$ evaluated at $0$. 
In conclusion, (32) can be rewritten as follows $\delta\dot{\vec{x}}_{i}=\mathbf{J}_{f}\delta\vec{x}_{i}-\sum_{d=1}^{D}q_{d}\sum_{j=1}^{n}L^{(d)}_{ij}(t)(-d)\mathbf{J}_{\varphi}(0)\delta\vec{x}_{j}=\mathbf{J}_{f}\delta\vec{x}_{i}+\sum_{j=1}^{n}G_{ij}(t)\mathbf{J}_{\varphi}(0)\delta\vec{x}_{j}\,,$ (34) where $\mathbf{G}(t)=\sum_{d=1}^{D}dq_{d}\mathbf{L}^{(d)}(t)$ can be regarded as the Laplace matrix of an effective time-varying simplicial complex or hypergraph. Let us now observe that the effective matrix $\mathbf{G}(t)$ is a Laplace matrix; it is non-positive definite (as is each of the $\mathbf{L}^{(d)}(t)$ for any $d=1,\dots,D$ and any $t>0$), it admits $\mu^{(1)}=0$ as eigenvalue associated with the eigenvector $\phi^{(1)}=(1,\dots,1)^{\top}$ and it is symmetric. So there exists an orthonormal time-varying eigenbasis, $\phi^{(\alpha)}(t)$, $\alpha=1,\dots,n$, for $\mathbf{G}(t)$ with associated eigenvalues $\mu^{(\alpha)}\leq 0$. Similarly to before, we define the $n\times n$ time-dependent matrix $\mathbf{c}(t)$ that quantifies the projections of the time derivatives of the eigenvectors onto the independent eigendirections, namely $\frac{d\vec{\phi}^{(\alpha)}(t)}{dt}=\sum_{\beta}c_{\alpha\beta}(t)\vec{\phi}^{(\beta)}(t)\quad\forall\alpha=1,\dots,n\,.$ (35) By recalling the orthonormality condition $\left(\vec{\phi}^{(\alpha)}(t)\right)^{\top}\cdot\vec{\phi}^{(\beta)}(t)=\delta_{\alpha\beta}$ we can again straightforwardly conclude that $\mathbf{c}$ is a real skew-symmetric matrix with a null first row and first column, i.e., $c_{\alpha\beta}+c_{\beta\alpha}=0$ and $c_{1\alpha}=0$. Thereafter, we consider Eq. (34), and we project it onto the eigendirections, namely we introduce $\delta\vec{x}_{i}=\sum_{\alpha}\delta\hat{\vec{x}}_{\alpha}\phi^{(\alpha)}_{i}$ and recalling the definition of $\mathbf{c}$ we obtain $\frac{d\delta\hat{\vec{x}}_{\beta}}{dt}=\sum_{\alpha}c_{\beta\alpha}(t)\delta\hat{\vec{x}}_{\alpha}+\left[\mathbf{J}_{f}+\mu^{(\beta)}(t)\mathbf{J}_{\varphi}(0)\right]\delta\hat{\vec{x}}_{\beta}\,.$ (36) This is the required Master Stability Equation; solving it and computing the maximum Lyapunov exponents provides the condition for the stability of the synchronous solution. ### A.1 Synchronization of Stuart-Landau oscillators with non-invasive coupling assumption Figure 8: Synchronization on time-varying higher-order network of coupled SL oscillators with non-invasive coupling configuration. Regions of synchrony and desynchrony are depicted by simultaneously varying $\mu^{(2)}$ and $\mu^{(3)}$ for two different values of $\Omega$: (a) $\Omega=0$, (b) $\Omega=2$, where the domain in black indicates the area of the stable synchronous solution. The range of the axes is characterized by considering the absolute values of the eigenvalues. All the other values are kept fixed at $\sigma=1.0+4.3i$, $\beta=1.0+1.1i$, $\varphi^{\prime}(0)=1$.
To validate the above results we again consider the SL oscillator with a particular case of non-invasive coupling function, namely we assume there exists a real function $\varphi$ such that $\varphi(0)=0$, $\varphi^{\prime}(0)\neq 0$ and $\begin{array}[]{l}g^{(1)}(w_{1},w_{2})=\varphi(w_{1}-w_{2})\,,\text{ and }\\\ g^{(2)}(w_{1},w_{2},w_{3})=\varphi(w_{1}-w_{2})+\varphi(w_{1}-w_{3})\,.\end{array}$ (37) By reasoning as before, we get $\begin{array}[]{l}\dfrac{d}{dt}\left(\begin{matrix}{\rho_{j}}\\\ {\theta_{j}}\end{matrix}\right)=\left(\begin{matrix}-2\sigma_{\Re}&0\\\ -2\beta_{\Im}\frac{\sigma_{\Re}}{\beta_{\Re}}&0\end{matrix}\right)\left(\begin{matrix}{\rho_{j}}\\\ {\theta_{j}}\end{matrix}\right)+\varphi^{\prime}(0)\sum_{\ell}\left(q_{1}L^{(1)}_{j\ell}+q_{2}L^{(2)}_{j\ell}\right)\left(\begin{matrix}1&0\\\ 0&-1\end{matrix}\right)\left(\begin{matrix}{\rho_{\ell}}\\\ {\theta_{\ell}}\end{matrix}\right).\end{array}$ (38) By using again the eigenvectors $\phi^{(\alpha)}(t)$, the eigenvalues $\mu^{(\alpha)}(t)$ of $\mathbf{G}(t)$ and the matrix $\mathbf{c}$ (see Eq. (35)), we can rewrite the previous formula as $\begin{array}[]{l}\dfrac{d}{dt}\left(\begin{matrix}{\rho_{\beta}}\\\ {\theta_{\beta}}\end{matrix}\right)=\sum_{\alpha}c_{\beta\alpha}\left(\begin{matrix}{\rho_{\alpha}}\\\ {\theta_{\alpha}}\end{matrix}\right)+\biggl{[}\left(\begin{matrix}-2\sigma_{\Re}&0\\\ -2\beta_{\Im}\frac{\sigma_{\Re}}{\beta_{\Re}}&0\end{matrix}\right)+\varphi^{\prime}(0)\mu^{(\beta)}\left(\begin{matrix}1&0\\\ 0&-1\end{matrix}\right)\biggr{]}\left(\begin{matrix}{\rho_{\beta}}\\\ {\theta_{\beta}}\end{matrix}\right).\end{array}$ (39) Figure 8 shows the results for the non-invasive coupling assumption. Here we take the non-invasive function such that $\varphi^{\prime}(0)=1$, and the skew-symmetric projection matrix $\mathbf{c}$ is kept constant throughout the analysis, as earlier. We show the level sets of the MSF as a function of the eigenvalues $\mu^{(2)}$ and $\mu^{(3)}$ while keeping the remaining parameters in Eq. (39) fixed at generic nominal values. In panel (a) we consider a static hypergraph, i.e., $\Omega=0$, while in panel (b) a time-varying one, i.e., $\Omega=2$. Negative values of the MSF are reported in black and thus correspond to a globally synchronous state, while positive values of the MSF are shown in yellow; one can clearly appreciate that in the case of the time-varying hypergraph the MSF is negative for a much larger set of eigenvalues $\mu^{(2)}$ and $\mu^{(3)}$, and thus the SL system can achieve synchronization more easily. ## Appendix B Structure of the small hypergraph The goal of this section is to provide more details about the construction of the simple time-varying hypergraph used as support for the numerical simulations in the main text. To start with, we need to obtain the time evolution of the eigenvectors $\vec{\psi}^{(\alpha)}(t)$, which follows the equation $\begin{array}[]{l}\dfrac{d\vec{\psi}^{(\beta)}}{dt}=\sum\limits_{\alpha}b_{\beta\alpha}\vec{\psi}^{(\alpha)}\,,\end{array}$ (40) where the matrix $\mathbf{b}$ has been given in Eq. (23). The eigenvector associated with the least eigenvalue $\Lambda^{(1)}=0$ is constant and is given by $\vec{\psi}^{(1)}=\frac{1}{\sqrt{3}}(1,1,1)^{\top}$.
The other two eigenvectors are obtained by solving the previous equation and are represented as $\vec{\psi}^{(2)}(t)=\vec{v}_{1}\cos(\Omega t)+\vec{v}_{2}\sin(\Omega t)$ and $\vec{\psi}^{(3)}(t)=-\vec{v}_{1}\sin(\Omega t)+\vec{v}_{2}\cos(\Omega t)$, where $\vec{v}_{1}$, $\vec{v}_{2}$ are the unknown vectors that should be determined using the constraints to have orthonormal eigenbasis for every $t$. Following a few steps of calculation, we can obtain the other two eigenvectors as follows $\begin{array}[]{l}\vec{\psi}^{(2)}(t)=\dfrac{1}{\sqrt{6}}\begin{pmatrix}1\\\ -2\\\ 1\end{pmatrix}\cos(\Omega t)+\dfrac{1}{\sqrt{2}}\begin{pmatrix}-1\\\ 0\\\ 1\end{pmatrix}\sin(\Omega t)\;\;\mbox{and},\\\ \vec{\psi}^{(3)}(t)=-\dfrac{1}{\sqrt{6}}\begin{pmatrix}1\\\ -2\\\ 1\end{pmatrix}\sin(\Omega t)+\dfrac{1}{\sqrt{2}}\begin{pmatrix}-1\\\ 0\\\ 1\end{pmatrix}\cos(\Omega t).\end{array}$ (41) Now recalling our assumption about constant eigenvalues and using the relation $\mathbf{L}^{(1)}_{ij}(t)=\sum\limits_{\alpha}\Lambda^{(\alpha)}\vec{\psi}^{(\alpha)}_{i}(t)\vec{\psi}^{(\alpha)}_{j}(t)$, we can obtain the entries of the pairwise Laplace matrix as $\begin{array}[]{l}{L}^{(1)}_{ij}(t)=\Lambda^{(2)}\vec{\psi}^{(2)}_{i}(t)\vec{\psi}^{(2)}_{j}(t)+\Lambda^{(3)}\vec{\psi}^{(3)}_{i}(t)\vec{\psi}^{(3)}_{j}(t),\end{array}$ (42) where we use the fact that $\Lambda^{(1)}=0$ for all time $t$. Finally by using the relation between pairwise adjacency and Laplace matrices $L^{(1)}_{ij}(t)=A^{(1)}_{ij}(t)$, for $i\neq j$, we obtain the temporal evolution of the links as $\begin{array}[]{l}A^{(1)}_{12}(t)=\dfrac{1}{2}-\dfrac{1}{3}\cos(\frac{\pi}{3}+2\Omega t),\\\ \\\ A^{(1)}_{13}(t)=\dfrac{1}{2}+\dfrac{1}{3}\cos(2\Omega t),\\\ \\\ A^{(1)}_{23}(t)=\dfrac{1}{2}-\dfrac{1}{3}\cos(\frac{\pi}{3}-2\Omega t),\end{array}$ (43) where we have used the fact that the non-zero eigenvalues are given by $\Lambda^{(2)}=-1$ and $\Lambda^{(3)}=-2$. Again from the regular structure of the hypergraph, we have $\mathbf{L}^{(2)}(t)=\alpha_{2}\mathbf{L}^{(1)}(t)$, for all $t$. Therefore, following the relation (42), entries of the $2nd$-order Laplacian $\mathbf{L}^{(2)}$ can be represented as, $\begin{array}[]{l}L^{(2)}_{ij}(t)=\alpha_{2}[\Lambda^{(2)}\vec{\psi}^{(2)}_{i}(t)\vec{\psi}^{(2)}_{j}(t)+\Lambda^{(3)}\vec{\psi}^{(3)}_{i}(t)\vec{\psi}^{(3)}_{j}(t)].\end{array}$ (44) Now, the definition of higher-order Laplacian implies that, $L^{(2)}_{ij}(t)=\sum\limits_{k}A^{(2)}_{ijk}(t)$, $i\neq j$. Hence, using the above relation and Eq. (44), we can obtain the temporal evolution of the $3$-hyperedge as $\begin{array}[]{l}A^{(2)}_{123}(t)=1-\dfrac{2}{3}\cos(\frac{\pi}{3}+2\Omega t),\end{array}$ (45) where we have again used the fact that the non-zero eigenvalues are $\Lambda^{(2)}=-1$, and $\Lambda^{(3)}=-2$, and the value of the parameter $\alpha_{2}$ has been set $\alpha_{2}=2$. Due to the assumption of undirected hypergraph, we also trivially have, $A^{(2)}_{123}(t)=A^{(2)}_{\pi{(123)}}(t)$, where $\pi{(123)}$ indicates any permutation of $(123)$. Fig. 9 portrays the temporal evolution of the links and $3$-hyperedge weights. To better understand the evolution of the hypergraph, we provide the graphical evolution of the hypergraph in the accompanying Supplementary Movie, together with the time evolution of the weights of the links $A^{(1)}_{ij}(t)$ and of the hyperedge $A^{(2)}_{123}(t)$. Figure 9: Temporal evolution of edges and $3$-hyperedge obtained from Eqs. (43) and (45) for a particular value of $\Omega=2$.
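The construction above is straightforward to check numerically. The following short NumPy sketch (written here only as a sanity check, not taken from the paper's code) builds the link weights of Eq. (43), assembles the pairwise Laplace matrix, and verifies that its spectrum stays fixed at $\{0,-1,-2\}$ while the eigenvectors rotate as in Eq. (41):

```python
import numpy as np

Omega = 2.0  # rotation frequency of the eigenbasis, as in Fig. 9

def adjacency(t):
    """Link weights of the time-varying graph, Eq. (43)."""
    a12 = 0.5 - np.cos(np.pi / 3 + 2 * Omega * t) / 3
    a13 = 0.5 + np.cos(2 * Omega * t) / 3
    a23 = 0.5 - np.cos(np.pi / 3 - 2 * Omega * t) / 3
    return np.array([[0, a12, a13],
                     [a12, 0, a23],
                     [a13, a23, 0]])

def laplacian(t):
    """Pairwise Laplace matrix: L_ij = A_ij for i != j, rows summing to zero."""
    A = adjacency(t)
    return A - np.diag(A.sum(axis=1))

def eigvecs(t):
    """Rotating eigenvectors psi^(2), psi^(3) of Eq. (41)."""
    v1 = np.array([1, -2, 1]) / np.sqrt(6)
    v2 = np.array([-1, 0, 1]) / np.sqrt(2)
    psi2 = v1 * np.cos(Omega * t) + v2 * np.sin(Omega * t)
    psi3 = -v1 * np.sin(Omega * t) + v2 * np.cos(Omega * t)
    return psi2, psi3

for t in np.linspace(0.0, 3.0, 7):
    L = laplacian(t)
    psi2, psi3 = eigvecs(t)
    # spectrum is {0, -1, -2} at every time
    assert np.allclose(np.sort(np.linalg.eigvalsh(L)), [-2, -1, 0])
    # spectral decomposition L = -1*psi2 psi2^T - 2*psi3 psi3^T (Lambda^(1) = 0)
    assert np.allclose(L, -np.outer(psi2, psi2) - 2 * np.outer(psi3, psi3))
print("Eq. (43) yields a Laplacian with constant spectrum {0, -1, -2}.")
```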
# Exploring FPGA designs for MX and beyond Ebby Samson Imperial College London London, UK <EMAIL_ADDRESS>Naveen Mellempudi AMD Austin, USA <EMAIL_ADDRESS>Wayne Luk Imperial College London London, UK <EMAIL_ADDRESS>George A. Constantinides Imperial College London London, UK <EMAIL_ADDRESS> ###### Abstract A number of companies recently worked together to release the new Open Compute Project MX standard for low-precision computation, aimed at efficient neural network implementation. In this paper, we describe and evaluate the first open-source FPGA implementation of the arithmetic defined in the standard. Our designs fully support all the standard’s concrete formats for conversion into and out of MX formats and for the standard-defined arithmetic operations, as well as arbitrary fixed-point and floating-point formats. Certain elements of the standard are left as implementation-defined, and we present the first concrete FPGA-inspired choices for these elements, which we outline in the paper. Our library of optimized hardware components is available open source, and can be used to build larger systems. For this purpose, we also describe and release an open-source Pytorch library for quantization into the new standard, integrated with the Brevitas library so that the community can develop novel neural network designs quantized with MX formats in mind. We demonstrate the usability and efficacy of our libraries via the implementation of example neural networks such as ResNet-18 on the ImageNet ILSVRC12 dataset. Our testing shows that MX is very effective for formats such as INT5 or FP6 which are not natively supported on GPUs. This gives FPGAs an advantage as they have the flexibility to implement a custom datapath and take advantage of the smaller area footprints offered by these formats. ###### Index Terms: MX, FPGA, Brevitas, quantization, scale ## I Introduction Developments in digital hardware over the past few decades have greatly increased the compute performance available for machine learning training and inference. This has allowed larger neural networks with more parameters to be trained and deployed, yielding ever more powerful models with better accuracy. However, the demand for more computational power is still increasing with demand for models with more parameters [1]. Aside from this, there is demand for the ability to deploy powerful models with large numbers of parameters in low-power environments such as edge devices. For these reasons, lots of research in digital hardware design had the goal of creating hardware which allows deploying these models with smaller footprints while preserving inference accuracy. Traditionally, machine learning inference and training are performed using the IEEE FP32 format, with all values such as parameters, activations, gradients and weight updates represented in FP32. However, there has been a growing demand for compact data representations with high-throughput compute. The MX standard [2] aims to help create more compact models while preserving the accuracy of their full-precision counterparts. The standard introduces quantization with a new scale sharing regime, as compared to traditional per- tensor or per-channel quantization. Our contributions are the following: * • We present the first open-source FPGA-oriented implementation of the new MX standard. 
* • We explore parameter choices beyond the concrete implementations defined in the standard and evaluate their impact on inference accuracy and FPGA area, allowing us to uncover some promising design points. * • We provide open-source software infrastructure to facilitate exploration with MX and similar schemes in Pytorch. ## II Background ### II-A Quantization TABLE I: Restrictions on the scales of a weight tensor ($F$) and activation tensor ($A$) imposed by scale sharing regimes on a 2D convolution example. The block size for MX scaling is denoted by $k$. $S$ and $T$ are scales for $F$ and $A$ respectively. Scale Sharing | Restrictions ---|--- | Weights | Activations | F, S $\in\mathbb{R}^{K\times C\times H^{\prime}\times W^{\prime}}$ | A, T $\in\mathbb{R}^{N\times H\times W\times C}$ Per-Tensor | $\forall l,c,h^{\prime},w^{\prime}\ \ S_{l,c,h^{\prime},w^{\prime}}=s$ | $\forall n,h,w,c\ \ T_{n,h,w,c}=t$ Per-Channel | $\forall l,h^{\prime},w^{\prime}\ \ S_{l,c,h^{\prime},w^{\prime}}=s_{c}$ | $\forall n,h,w\ \ T_{n,h,w,c}=t_{c}$ | S $\in\mathbb{R}^{K\times\lceil\frac{C}{k}\rceil\times H^{\prime}\times W^{\prime}\times k}$ | T $\in\mathbb{R}^{N\times H\times W\times\lceil\frac{C}{k}\rceil\times k}$ MX | $\forall p\ \ S_{l,c,h^{\prime},w^{\prime},p}=s_{l,c,h^{\prime},w^{\prime}}$ | $\forall p\ \ T_{n,h,w,c,p}=t_{n,h,w,c}$ One common approach to reducing area and bandwidth of neural networks is by exploring number representations where fewer bits are used per parameter than in IEEE 754 [3] floating-point formats. A downside of these narrow formats is that they typically have lower precision or dynamic range in comparison. The reduction in dynamic range is extreme in floating-point formats with narrow exponents [4] and fixed-point formats [5]. These formats are usually coupled with a shared scale factor and zero point [6] which transform values to make better use of the available range. This transformation is shown in Equation 1 where tensors $S$ and $Z$ represent the scale and zero point respectively and $\odot$ represents element-wise multiplication. Equation 2 shows restrictions imposed by the MX standard, where $\mathbb{Z}_{b}$ denotes the set of values representable by a $b$-bit integer and $\mathbb{M}_{e,m}$ denotes the set of values representable by a floating-point format with an $e$-bit exponent and $m$-bit mantissa. Quantized tensor $X_{q}$ can be integer or floating-point. The scale and zero point scale and shift values such that most of them lie in the range representable by the target format, and that most of them have distinct values in the target format. They are usually shared with tensor-wise or channel-wise granularity due to the computational cost savings from factoring them out of dot products. Table I shows the restrictions imposed by scale sharing regimes on a 2D convolution (Equation 3). Per-tensor scales restrict all elements of the scale to the same value, while per-channel scales allow scales to vary along the $C$ dimension. In the case of per-vector [7] and MX [8] scaling, the scale tensor is reshaped such that the $C$ dimension is replaced with two dimensions $\lceil\frac{C}{k}\rceil$ and $k$. In this configuration, only values along the $k$ dimension share a common scale. The need for reshaping will be explained in Section IV-A. 
$\displaystyle X=S\odot(X_{q}-Z)\quad\text{where}\quad X\in\mathbb{R}^{d_{1}\times d_{2}\times\ldots\times d_{n}}$ (1) $\begin{gathered}S\in\mathbb{M}_{8,0}^{d_{1}\times d_{2}\times\ldots\times d_{n}}\quad Z=0\\\ X_{q}\in\mathbb{Z}_{b}^{d_{1}\times d_{2}\times\ldots\times d_{n}}\quad\text{ or }\quad X_{q}\in\mathbb{M}_{e,m}^{d_{1}\times d_{2}\times\ldots\times d_{n}}\end{gathered}$ (2) $\begin{aligned} &A^{\prime}_{n,h,w,l}=\sum_{c=1}^{C}\sum_{h^{\prime}=1}^{H^{\prime}}\sum_{w^{\prime}=1}^{W^{\prime}}(A(n,h+h^{\prime},w+w^{\prime},c)\\\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\times F(l,c,h^{\prime},w^{\prime}))\end{aligned}$ (3) ### II-B Shared Scale Boundaries Traditionally, a scale value is shared between all elements in a tensor as this allows the scale to be factored out of dot products. This however limits all values in the tensor to be within a small range dictated by the dynamic range of the element format. Per-channel scales ease this by allowing each input channel in activation and weight tensors to have a different scale. Per- vector scales [7] use even finer granularity. As granularity gets finer, quantization noise generally decreases. Another method proposes that tensors are divided into square blocks [9] as this keeps blocks contiguous during the backward pass. The MX standard [8] is a special case of per-vector scaling with restrictions such as a power-of-two scale. The dimension used for sharing exponents in per-vector and MX scaling is called the principal dimension. A block refers to a group of elements of a tensor which share a scale value. There are two main considerations when drawing block boundaries. First is the extra bandwidth required to load the scales along with parameters. Second is the effect on accumulation during dot products between the activation and weight tensors. Per-tensor scaling minimizes these effects by using fewer scales, thereby allowing accumulation to be fully performed with scales factored out. In other cases, accumulations appear across shared scale boundaries. ### II-C The MX Standard The MX Standard [8] is aimed at efficient neural network implementation by reducing the loss in accuracy when using fewer than 8 bits to represent parameters [10]. The standard uses a fine grained scale restricted to a “E8M0” format (8-bit exponent, 0-bit mantissa), which restricts the scale to powers of two. The MX standard recommends that the scale is set to the largest power- of-two in the block, divided by the largest power-of-two representable in the element format. This presents new challenges in hardware implementation as converting into this format in hardware requires that area is dedicated to computing statistics over the input tensor. Normalising output activations during inference also requires computing statistics, this process recomputes the scale after the values in a block have been modified. The standard introduces a set of concrete MX compliant formats shown in Table II. Concrete formats use a block size ($k$) of 32. The standard also defines arithmetic operations for computing dot products between vectors with MX scaling (Equation 4). This equation shows how a dot product is calculated between tensors $X$ and $Y$, which are associated with MX scales $S$ and $T$. The standard leaves some aspects of these operations as implementation- defined, such as the internal precision used for accumulation. 
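As a concrete illustration of the recommended scale rule quoted above, here is a minimal Python sketch (our own illustration, not the standard's reference code) of the E8M0 shared-scale computation for a single block; `emax_elem` denotes the largest exponent representable in the chosen element format, and the block contents are toy values.

```python
import numpy as np

def e8m0_scale(block, emax_elem):
    """Recommended MX shared scale: the largest power of two in the block
    divided by the largest power of two representable in the element format."""
    amax = np.max(np.abs(block))
    if amax == 0:
        return 2.0 ** (-emax_elem)        # degenerate all-zero block
    shared_exp = np.floor(np.log2(amax)) - emax_elem
    return 2.0 ** shared_exp              # E8M0 scales are pure powers of two

block = np.array([0.07, -1.3, 0.004, 2.9])   # toy block (k = 4 only for illustration)
print(e8m0_scale(block, emax_elem=2))        # e.g. an E2M1-like element format
```

With this choice the largest element of the block lands near the top of the element format's representable range, which is the intent of the rule.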
In our implementation (Section III), we make use of this freedom to optimise for FPGA and test alternative choices where the standard allows, including alternative block sizes and element types. TABLE II: MX concrete formats and formats supported by our implementation. $e\in[2,6]$, $m\in[1,5]$, $b\in[2,8]$, $\log_{2}(k)\in[2,9]$. All scales are E8M0. Special behaviour follows OCP FP8 [11]. Format Name | Element Type | Block Size | Specials/Behaviour ---|---|---|--- MXFP8 | E5M2 | 32 | NaN, Inf / OFL, SAT MXFP8 | E4M3 | 32 | NaN / OFL, SAT MXFP6 | E3M2 | 32 | - MXFP6 | E2M3 | 32 | - MXFP4 | E2M1 | 32 | - MXINT8 | INT8 | 32 | - Our MXFP | E$e$M$m$ | $k$ | NaN, Inf / OFL, SAT Our MXINT | INT$b$ | $k$ | - ### II-D Brevitas Brevitas is a Pytorch library that facilitates quantization of neural networks. The library allows quantizers to be inserted into the compute graph of a model. As an example, they can quantize weights and activations before a convolution. The purpose of these quantizers is to model the effect of quantization in the model by injecting quantization noise. Quantizers use rounding and clipping to mimic the precision and range of a target format. This is done by passing inputs through a quantization function, followed a dequantization function which converts quantized tensors to FP32. This allows quantization-aware training (QAT) to be performed on FP32 hardware, which improves accuracy of the model on low-precision hardware. This also allows testing the accuracy impact of quantization while running on FP32 hardware. In this paper, we extend Brevitas with quantization to MX. $\displaystyle\text{DotGeneral}(X,Y,S,T)=\sum_{c=1}^{C}\text{Dot}(X_{c},Y_{c},S_{c},T_{c})$ (4) $\displaystyle\text{Dot}(A,B,s,t)=(st)\sum_{p=1}^{k}A_{p}B_{p}$ $\displaystyle X,Y\in\mathbb{R}^{C\times k}\quad A,B\in\mathbb{R}^{k}\quad S,T\in\mathbb{R}^{C}$ ## III IP Cores Our open-source IP cores [12] support all of the concrete formats introduced by the MX standard as well as the standard-defined arithmetic operations. The cores comply with the MX standard and present new choices for details which are left as implementation-defined by the standard, such as handling of special values and the internal precision of accumulation. For these elements, we made choices with the goal of efficient FPGA implementation. We also take advantage of the customisability of FPGAs to support arbitrary precision integer and floating-point formats. The components in our library can be used to implement the concrete MX formats by setting parameters. In addition, our blocks support other configurations of element types and block sizes within the constraints shown in Table II. ### III-A Special Values The MX standard describes two ways of representing special values. In one case, the shared scale can be set to the NaN encoding (`0xFF`) to set all elements associated with it to NaN. In the second case where the element format supports special encodings, individual elements can be set to specials. In order to comply with the MX standard, the FP8 concrete formats need to support the OCP FP8 standard [11] which describes special encodings and the behaviour of these specials in overflow and saturating modes. There are four different types of special behaviour depending on the FP8 format (E5M2 or E4M3) and the mode (overflow or saturating), as described in the OCP FP8 specicifation. 
Excluding FP8, none of the other concrete formats support special encodings in the element type, and the logic for propagating specials is left as implementation-defined by the standard. In our implementation, the four types of special behaviour from the OCP FP8 specification can be used with any MXFP format, or all special encodings can be disabled at the element level. In these cases where the element format does not support special encodings, if the input scale is a NaN, the output scale is set to the NaN encoding. All conversions propagate NaNs by setting output elements to the NaN encoding, either by setting an element to NaN if possible or by setting the shared scale to the NaN encoding if the element format does not support NaNs. Infs are propagated according to the OCP FP8 specification if the element format has an encoding for Inf, otherwise it is treated as a NaN. Specials are propagated similarly in dot product circuits. ### III-B Dot Product Circuits The Dot standard-defined operation computes the dot product between a pair of blocks (Equation 4). The standard leaves the internal precision of this process as implementation-defined. As this operation factors out shared scales and is expected to be used with formats with small dynamic range, error-free integer accumulation is a viable option in hardware using Kulisch accumulation [13]. Our implementation uses this error-free accumulation, and arranges adders in a binary tree structure to perform pairwise summation [14], for low latency. Our implementation is shown in Figure 1, and Table III shows the internal precision used for floating-point and integer element types. The triangular block represents pairwise summation in hardware. If the element format has special encodings, the dot product circuit also checks input elements for specials, and sets the NaN or Inf flags. Figure 1: Our implementation of the Dot standard-defined operation, grey symbols are used for formats with special encodings. Table III shows widths of signals. Multiplier inputs can be FP or INT but outputs are always INT. TABLE III: Dot product circuit configurations. $B$ denotes width of an integer, $E,M$ denote widths of the exponent and mantissa fields of a floating-point number. Width | MXFP | MXINT ---|---|--- $b_{i}$ | $1+E+M$ | $B$ $b_{int}$ | $2(1+2^{E}+(M-1))$ | $2B$ $b_{o}$ | $2(1+2^{E}+(M-1))+\log_{2}(k)$ | $2B+\log_{2}(k)$ While the Dot standard-defined operation performs dot products within the boundaries of a shared scale, the Dot General standard-defined operation performs dot products across boundaries. Our implementation of this operation replicates the Dot operation for each block within the input vector, then accumulates the outputs of the Dot operations using adders with normalisation. The standard does not define the internal precision to be used for this accumulation with normalisation, so our implementation uses a component similar to a floating-point adder as shown in Figure 2. This adder uses ”round to nearest even” where precision is lost, the label $b+3$ in the figure represents the bit width of the input concatenated with the guard, round and sticky bits used for rounding. Figure 2: An adder that normalises operands, similar to a floating-point adder. ### III-C Conversion and Normalisation Our open-source library provides conversions from IEEE 754 FP32/BFloat16 and back, to facilitate integration of MX arithmetic into existing designs. The BFloat16 to MX converter can also normalise outputs of dot product circuits. 
The converters take a block of floating-point values along with their pre-computed scale. This is used to convert the input to a block of MX values, using "round to nearest even". The MX standard recommends computing the scale during inference, while most previous work computes the scale during training, keeping it constant during inference [15] [16]. The method recommended by the standard requires extra area. In particular, area is required to compute statistics during normalisation after dot product operations. Normalisation will also require variable shifts rather than constant shifts because scales are no longer constant at inference. Our IP cores can be used with constant pre-computed scales if desired, but our library also provides a component to compute the scale, implemented using a tree of comparators. The block size used by all concrete formats is $k=32$. Our IP cores implement the block size as a parameter which can be modified and set to any power of two according to the constraint in Table II. If the block size of the converter is modified, the depth of the comparator tree is set to $\lceil\log_{2}(k)\rceil$ and new pipeline stages are added to keep the critical path two comparators long. This preserves timing characteristics. ## IV Exploration Infrastructure To facilitate developing models for the MX standard, we developed an open-source infrastructure [17] to allow Quantization-Aware Training (QAT) and evaluation of MX-quantized models on GPU. To allow mixing MX with existing quantization schemes, this infrastructure has been integrated into Brevitas [18]. Integration with Brevitas gives the ability to change any aspect of quantization schemes and conduct design space exploration on Pytorch models. It also allows QAT with a quantized forward pass and FP32 backward pass. The MX standard was released along with a Pytorch library that allows exploration of MX formats [19]. Our infrastructure expands the exploration space to provide many more choices such as scale types and other scale computation methods, including choices not supported by the MX standard such as floating-point scales. In addition, our infrastructure allows mixing quantization schemes such as MX and per-tensor. ### IV-A Minifloat Quantization Most MX concrete formats use floating-point. At the time of writing, the latest version of Brevitas [18] does not support quantization schemes with low-precision floating-point elements. As a result, we have developed quantizers for floating-point element types and integrated these into Brevitas to complement the existing fixed-point, binary and ternary types. Brevitas supports several ways to collect statistics and compute scales. The method recommended by the MX standard sets the scale to the largest power-of-two present in the input divided by the largest power-of-two representable in the target format, and is already in Brevitas. Other Brevitas methods use the mean or a histogram when calculating the scale. These were designed for per-tensor and per-channel quantization where statistics are calculated along a set of dimensions. As MX could have multiple blocks in a single dimension, an input tensor needs to be reshaped before statistics can be computed (Equation 5). The principal dimension (input channels) is split into $\lceil\frac{C}{k}\rceil$ blocks of $k$. This reshaping is paired with zero padding such that $C\mod k=0$ and allows statistics to be computed along the innermost dimension, ensuring compatibility with existing Brevitas scale implementations.
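To illustrate the reshaping and padding just described (a simplified NumPy stand-in, not Brevitas' actual implementation), the sketch below splits the input-channel dimension of a convolution weight tensor into blocks of $k$, computes a per-block E8M0 scale, and applies an MXINT8-style quantise-dequantise step of the kind used for QAT:

```python
import numpy as np

def mx_int_fake_quant(w, k=32, bits=8):
    """Per-block quantise-dequantise of a conv weight tensor w[K, C, H', W'],
    with blocks of k elements along the input-channel (principal) dimension."""
    K, C, H, W = w.shape
    pad = (-C) % k                                     # zero-pad so that C mod k == 0
    wp = np.pad(w, ((0, 0), (0, pad), (0, 0), (0, 0)))
    blocks = wp.reshape(K, (C + pad) // k, k, H, W)    # split C into ceil(C/k) blocks of k
    blocks = np.moveaxis(blocks, 2, -1)                # statistics along the innermost axis

    emax_elem = bits - 2                               # largest power of two an INT<bits> element can hold
    amax = np.max(np.abs(blocks), axis=-1, keepdims=True)
    amax = np.where(amax == 0, 2.0 ** emax_elem, amax) # avoid log2(0) for all-zero blocks
    scale = 2.0 ** (np.floor(np.log2(amax)) - emax_elem)   # E8M0 shared scale per block

    qmax = 2 ** (bits - 1) - 1
    q = np.clip(np.round(blocks / scale), -qmax, qmax) # quantised integer elements
    deq = q * scale                                    # dequantise back to float (QAT-style)
    return np.moveaxis(deq, -1, 2).reshape(K, C + pad, H, W)[:, :C]

w = np.random.randn(4, 70, 3, 3).astype(np.float32)   # C = 70 is not a multiple of k
print(np.abs(w - mx_int_fake_quant(w)).max())         # per-element quantisation error
```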
$S\in\mathbb{R}^{K\times C\times H^{\prime}\times W^{\prime}}\ \ \text{to}\ \ S\in\mathbb{R}^{K\times\lceil\frac{C}{k}\rceil\times H^{\prime}\times W^{\prime}\times k}$ (5) The MX standard recommends that the scale is computed during inference. All current Brevitas methods compute a scale during training and keep it constant during inference, as this is what most previous work has used [15] [16]. Using the standard’s recommendation will add extra area to compute the scale during normalisation. Our exploration infrastructure allows the choice between the standard’s recommended method and a constant scale during inference. Our implementation of MX scaling is another important addition to Brevitas. In the original Brevitas, scales are stored as tensors consisting of a single element (per-tensor) or $C$ elements (per-channel). This allows the scale to be applied using the element-wise multiplication operator in Pytorch where the scale is broadcast along the missing dimensions. In the case of MX scaling, the scale sharing granularity is finer than one scale per vector, which is incompatible with this element-wise multiplication and broadcasting. In the memory-efficient case, the length of the principal dimension of the scale tensor would be the number of shared scales per vector. However, this would be incompatible with the current element-wise multiplication with broadcasting. Our implementation preserves compatibility by making a scale tensor with the same dimensions as the input tensor, with shared scales repeated for each element that uses them. While this is inefficient for memory, the scale tensor can be compressed by removing repeated elements before deployment. Figure 3 shows the quantization process in our MX quantizer. Figure 3: Features of our quantizer, each block here can be customised or replaced to implement other quantization schemes. The $2^{emax_{elem}}$ term refers to the largest exponent possible in the element format and $A^{\prime}$ represents a real valued tensor formed by applying the scale on $A_{q}$. ### IV-B Dot Products Equation 3 calculates a 2D convolution. As the boundaries between shared scales change, the sums over $c$, $h^{\prime}$ and $w^{\prime}$ are of interest because the boundaries determine where scales can be factored out of additions. If scales are factored out, error-free addition can be used (denoted by operator $\Sigma$), implemented efficiently using Kulisch accumulators [9] [20] for all supported formats (Table II). If scales could not be factored out, normalisation is required before and after addition (denoted by operator $\Xi$). Full multipliers are denoted by $\times$ while multiplications that can be reduced to additions/shifts are denoted by $\otimes$. In the case of per-tensor scaling, scales can be completely factored out. With per-channel scaling, the sum over $c$ crosses block boundaries (Equation 6) while the scales can be factored out of sums over $h^{\prime}$ and $w^{\prime}$. Our implementation of MX scaling (Equation 7) sets the $C$ dimension as the principal dimension, because this is typically the longest dimension used for reduction during dot products in CNNs and gives more choices for $k$. This means that sums over $h^{\prime}$ and $w^{\prime}$ use normalisation while sums over $C$ use a mixture of both addition types. The ratio of normalisation to integer adders is controlled by the block size. 
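The split between the two kinds of addition is easiest to see in a small behavioural model of the standard-defined DotGeneral operation (Equation 4). The NumPy sketch below (a software model only, not the RTL of Section III) accumulates exactly, with integers, inside each block of $k$ elements where the power-of-two scales factor out, and combines the per-block results with ordinary floating-point additions across block boundaries:

```python
import numpy as np

def mx_dot_general(xq, yq, sx, sy):
    """DotGeneral over C blocks of k elements (Equation 4).
    xq, yq: integer element arrays of shape (C, k); sx, sy: power-of-two scales of shape (C,)."""
    acc = 0.0
    for c in range(xq.shape[0]):
        # inside a block the shared scales factor out, so the sum is exact integer
        # accumulation (the role played by the Kulisch accumulator in hardware)
        block_sum = int(np.dot(xq[c].astype(np.int64), yq[c].astype(np.int64)))
        # across block boundaries the partial results carry different scales,
        # so they are combined with a normalising (floating-point style) addition
        acc += (sx[c] * sy[c]) * block_sum
    return acc

rng = np.random.default_rng(0)
C, k = 4, 32
xq = rng.integers(-127, 128, size=(C, k))
yq = rng.integers(-127, 128, size=(C, k))
sx = 2.0 ** rng.integers(-8, 0, size=C)   # E8M0 scales are pure powers of two
sy = 2.0 ** rng.integers(-8, 0, size=C)
print(mx_dot_general(xq, yq, sx, sy))
```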
$\begin{aligned} &A^{\prime}_{n,h,w,l}=\operatorname*{\text{\Huge$\Xi$}}_{c=1}^{C}\ (s_{c}\otimes t_{c})\\\ &\otimes\sum_{h^{\prime}=1}^{H^{\prime}}\sum_{w^{\prime}=1}^{W^{\prime}}A_{q}(n,h+h^{\prime},w+w^{\prime},c)\times F_{q}(l,c,h^{\prime},w^{\prime})\end{aligned}$ (6) ## V Evaluation Our library of IP cores and our exploration infrastructure were evaluated on image classification on ImageNet using ResNet-18 [21]. A reference model was trained using FP32 and this pre-trained model was used as a starting point for all quantization schemes tested. MX formats were tested alongside per-tensor and per-channel schemes by applying quantization to all weight and activation tensors in the model. In all cases, the scale was restricted to E8M0 and computed at inference. The variables between schemes were the granularity of the shared scale and element format which was restricted to 8 bits or less per element as this is what the MX standard is aimed at. Synthesis was done using Vivado 2023.1, targetting a Xilinx Zynq UltraScale+ xczu7ev-ffvc1156-2-e device. $\begin{aligned} &A^{\prime}_{n,,h,w,l}=\operatorname*{\text{\Huge$\Xi$}}_{c=1}^{\lceil\frac{C}{k}\rceil}\operatorname*{\text{\Huge$\Xi$}}_{h^{\prime}=1}^{H^{\prime}}\operatorname*{\text{\Huge$\Xi$}}_{w^{\prime}=1}^{W^{\prime}}\ (s_{l,c,h^{\prime},w^{\prime}}\otimes t_{l,c,h^{\prime},w^{\prime}})\\\ &\otimes\sum_{p=1}^{k}(A_{q}(n,h+h^{\prime},w+w^{\prime},ck+p)\\\ &\quad\quad\quad\quad\times F_{q}(l,ck+p,h^{\prime},w^{\prime}))\end{aligned}$ (7) ### V-A Network-level Accuracy The accuracy of each quantization scheme was evaluated under both post- training quantization (PTQ) and quantization-aware training (QAT) using our exploration infrastructure introduced in Section IV. PTQ was performed by rounding the parameters of the FP32 model to the target scheme using ”round to nearest even”. Only linear and convolutional layers in the ResNet-18 model had quantization applied, all other operations such as batchnorm operations and ReLU activations continued to use single-precision [22], because convolutional layers and linear layers consist of more parameters and perform far more operations than the other layers. QAT was performed by rounding the reference model to a target scheme, then training further with a fine-tuning set. A variety of MX formats have been evaluated, including all the concrete formats defined in the standard. The error on the test dataset after PTQ and QAT is plotted in Figure 4. For the MXINT family of formats, bit width has the largest impact on error, increasing bitwidth decreases error. Aside from that, block size also has an impact on error where decreasing block size decreases error. This is also reflected in the MXFP family, but block size has a smaller impact on accuracy. ### V-B Core-level Hardware Results For rapid evaluation of MX formats, we create area models by profiling with out of context synthesis. The models are on a per IP block basis, for each of: multiplier arrays, adder trees and normalisation circuits. Multiplier arrays and FP adder trees have linearly increasing area with respect to the number of multipliers/adders. The coefficients in the linear model are found by least- squares fitting to a subset of synthesised designs. Normalisation circuits are similar, with a linear increase in area with the number of values to be normalised (number of output activations in a layer). For integer adder trees, our model calculates the sum of area of all adders in the tree. 
The logarithmic increase of adder size with tree depth is taken into account. The area for each individual adder and multiplier was found by synthesising while sweeping mantissa/exponent widths. This area model was used to estimate the area that would be required to unroll all of the linear and convolutional layers in the model, ignoring the cost of other operations as these will use a relatively small amount of area. Placing such an unrolled model on a single device is not feasible due to the large area required, however, this measure captures the effect of changing block size across a range of layers with varying $C$. The same model was also used to estimate the area of per-channel and per-tensor schemes. As for latency and throughput, our individual cores are pipelined; there is no significant change in the clock frequency, and hence the core-level throughput achievable across the range of parameters we have explored. The same would be true for a fully-unrolled implementation. The number of pipeline stages does increase with $k$ in our implementation, due to the depth of the comparator tree as detailed in Section III. This effect was negligible in our experiments due to its logarithmic complexity and the small number of pipeline stages relative to other components. The number of pipeline stages in a FP32 implementation would be much larger as FP32 multipliers/adders (IP from Vivado) require more pipeline stages to match the frequency of our quantized implementation. Figure 4 shows the test error and estimated area of quantization schemes before and after QAT. The plots omit schemes with more than 40% error and schemes with high area utilization that provided little error improvement. The MXFP8 formats (E4M3/E5M2) are omitted because the error-free accumulation within our Dot implementation scales with $O(2^{e})$ and uses large area. In our PTQ results, the Pareto optimal points which offer the lowest area cost are the MXINT5 formats. The MXINT6/7/8 formats with coarse grained scales provide accuracy close to the FP32 baseline. The MXFP6/7 formats E2M3 and E2M4 provide a marginal improvement over MXINT6/7. After QAT, the MXINT4 formats become the most desirable for low area, with MXINT5/6/7 formats providing near baseline accuracy. Generally across both PTQ and QAT results, if low area is desired (left side of plots), it is desirable to use a narrow MXINT format and the main design choice is the block size which can be used to trade area with accuracy. On the other hand if near baseline accuracy (grey dotted line) is desired, the wider MXINT formats are better suited, with bit width being the most impactful design choice. The results also show the effect of our exploration infrastructure. Notably, QAT brings down the error cost of 4-bit formats such as MXFP4 (E2M1) and MXINT4 and makes them feasible for implementations with limited area. QAT also brings the MXINT5/6 formats to near baseline accuracy. However, the effect of QAT on MXFP formats was not as significant. MX formats considerably improve on the area/error tradeoff for FPGA implementation, compared to per-tensor and per-channel scaling, however in this application it is mainly the MXINT formats, i.e. block floating point, that provide the best results, rather than narrow-width MXFP. Figure 4: Error vs. estimated area of quantization schemes. Marker shape shows scale sharing regime. The grey dotted line is the FP32 baseline, other dotted lines show Pareto fronts. Pareto-optimal points are labelled with format and block size. 
Only schemes that offered more than 60% accuracy are shown. ## VI Conclusion In this paper, we have introduced an open-source MX compatible library of arithmetic components, which can be used to implement ML accelerator designs on FPGAs. Our library fully supports all concrete formats introduced by the MX standard, and is fully parameterised to support a wide range of element formats and block sizes beyond the concrete formats, too. Alongside this library, we have developed a software exploration infrastructure for MX which facilitates training and evaluating MX quantized models on GPUs. Our exploration infrastructure is fully integrated with Brevitas, allowing MX to be included in design space exploration alongside other traditional quantization schemes. Finally, we explored the trade-off between inference accuracy and FPGA area for a range of formats introduced under the MX standard. Our findings show the benefit of using narrow formats such as MXINT4/5 and MXFP6/7 (E2M3, E2M4) over traditional per-tensor or per-channel quantization. Our experiments also show that MXINT6/7 are more desirable in this trade-off than the concrete MXINT8 format at times. Our exploration infrastructure opens up a lot of interesting design choices. In the future, it could be used to explore mixed-precision models with different quantization schemes MX or non-MX. The scale also opens up some areas for exploration such as different scale formats (FP/INT) or other scale computation methods. Following this, IP Cores could be created for the best schemes from this exploration. ## References * [1] J. Kaplan, S. McCandlish, T. J. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, and D. Amodei, “Scaling Laws for Neural Language Models,” _ArXiv_ , vol. abs/2001.08361, 2020. [Online]. Available: https://api.semanticscholar.org/CorpusID:210861095 * [2] B. D. Rouhani, N. Garegrat, T. Savell, R. Zhao, A. More, K.-N. Han, M. Hall, J. Klar, E. Chung, Y. Yu, M. Schulte, R. Wittig, I. Bratt, N. Stephens, J. Milanovic, J. Brothers, P. Dubey, M. Cornea, A. Heinecke, M. L. Andres Rodriguez, S. Deng, M. Naumov, P. Micikevicius, M. Siu, and C. Verrilli, “OCP Microscaling Formats (MX) Specification.” * [3] IEEE, “IEEE Standard for Floating-Point Arithmetic,” _IEEE Std 754-2019 (Revision of IEEE 754-2008)_ , pp. 1–84, 2019. * [4] R. DiCecco, L. Sun, and P. Chow, “FPGA-based training of convolutional neural networks with a reduced precision floating-point library,” in _2017 International Conference on Field Programmable Technology (ICFPT)_ , 2017, pp. 239–242. * [5] M. Courbariaux, Y. Bengio, and J.-P. David, “Training deep neural networks with low precision multiplications,” _arXiv: Learning_ , 2014. [Online]. Available: https://api.semanticscholar.org/CorpusID:16349374 * [6] M. Nagel, M. Fournarakis, R. A. Amjad, Y. Bondarenko, M. van Baalen, and T. Blankevoort, “A White Paper on Neural Network Quantization,” _ArXiv_ , vol. abs/2106.08295, 2021. [Online]. Available: https://api.semanticscholar.org/CorpusID:235435934 * [7] S. Dai, R. Venkatesan, H. Ren, B. Zimmer, W. J. Dally, and B. Khailany, “VS-Quant: Per-vector Scaled Quantization for Accurate Low-Precision Neural Network Inference,” _MLSys 2021_ , vol. abs/2102.04503, 2021. [Online]. Available: https://api.semanticscholar.org/CorpusID:231855747 * [8] B. D. Rouhani, R. Zhao, A. More, M. Hall, A. Khodamoradi, S. Deng, D. Choudhary, M. Cornea, E. Dellinger, K. Denolf, S. Dusan, V. Elango, M. Golub, A. Heinecke, P. James-Roxby, D. Jani, G. Kolhe, M. 
Langhammer, A. Li, L. Melnick, M. Mesmakhosroshahi, A. Rodriguez, M. Schulte, R. Shafipour, L. Shao, M. Siu, P. Dubey, P. Micikevicius, M. Naumov, C. Verrilli, R. Wittig, D. Burger, and E. Chung, “Microscaling Data Formats for Deep Learning,” _ArXiv_ , 2023. * [9] S. Fox, S. Rasoulinezhad, J. Faraone, D. Boland, and P. Leong, “A Block Minifloat Representation For Training Deep Neural Networks,” in _ICLR 2021_ , 2021. * [10] C. Zhang, J. Cheng, I. Shumailov, G. Constantinides, and Y. Zhao, “Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?” in _Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics, 2023. [Online]. Available: http://dx.doi.org/10.18653/v1/2023.emnlp-main.617 * [11] P. Micikevicius, S. Oberman, P. Dubey, M. Cornea, A. Rodriguez, I. Bratt, R. Grisenthwaite, N. Jouppi, C. Chou, A. Huffman, M. Schulte, R. Wittig, D. Jani, and S. Deng, “OCP 8-bit Floating Point Specification (OFP8).” * [12] E. Samson, “MX-for-FPGA,” 2023. [Online]. Available: https://github.com/ebby-s/MX-for-FPGA * [13] U. Kulisch, _Computer Arithmetic and Validity_. Berlin, Boston: De Gruyter, 2012. [Online]. Available: https://doi.org/10.1515/9783110301793 * [14] N. J. Higham, _Accuracy and Stability of Numerical Algorithms_ , 2nd ed. Society for Industrial and Applied Mathematics, 2002. [Online]. Available: https://epubs.siam.org/doi/abs/10.1137/1.9780898718027 * [15] S. R. Jain, A. Gural, M. Wu, and C. Dick, “Trained Uniform Quantization for Accurate and Efficient Neural Network Inference on Fixed-Point Hardware,” _CoRR_ , vol. abs/1903.08066, 2019. [Online]. Available: http://arxiv.org/abs/1903.08066 * [16] J. Choi, Z. Wang, S. Venkataramani, P. I.-J. Chuang, V. Srinivasan, and K. Gopalakrishnan, “PACT: Parameterized Clipping Activation for Quantized Neural Networks,” _ArXiv_ , vol. abs/1805.06085, 2018. [Online]. Available: https://api.semanticscholar.org/CorpusID:21721698 * [17] E. Samson, “Brevitas-MX,” 2023. [Online]. Available: https://github.com/ebby-s/brevitas * [18] A. Pappalardo, “Xilinx/brevitas,” 2023. [Online]. Available: https://doi.org/10.5281/zenodo.3333552 * [19] Microsoft, “microsoft/microxcaling,” 2023. [Online]. Available: https://github.com/microsoft/microxcaling * [20] U. Kulisch, “Very fast and exact accumulation of products,” _Computing_ , vol. 91, no. 4, pp. 397–405, 2011. [Online]. Available: https://doi.org/10.1007/s00607-010-0131-y * [21] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” _2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , pp. 770–778, 2015. [Online]. Available: https://api.semanticscholar.org/CorpusID:206594692 * [22] S. Zhou, Y. Wu, Z. Ni, X. Zhou, H. Wen, and Y. Zou, “DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients,” 2018.
Elliptic Quantum Groups Hitoshi Konno Department of Mathematics, Tokyo University of Marine Science and Technology, Etchujima, Koto, Tokyo 135-8533, Japan <EMAIL_ADDRESS> ###### Abstract We expose the elliptic quantum groups in the Drinfeld realization associated with both the affine Lie algebra ${g}$ and the toroidal algebra ${g}_{tor}$. There the level-0 and level $\not=0$ representations appear in a unified way so that one can define the vertex operators as intertwining operators of them. The vertex operators are key for many applications such as a derivation of the elliptic weight functions, integral solutions of the (elliptic) $q$-KZ equation and a formulation of algebraic analysis of the elliptic solvable lattice models. Identifying the elliptic weight functions with the elliptic stable envelopes we make a correspondence between the level-0 representation of the elliptic quantum group and the equivariant elliptic cohomology. We also emphasize a characterization of the elliptic quantum groups as $q$-deformations of the $W$-algebras. Keywords : Elliptic quantum group, Affine Lie algebra, Toroidal algebra, deformed $W$-algebra, $q$-KZ equation, Hypergeometric integral, Quantum integrable system, Quiver variety, Elliptic cohomology, Stable envelope Key points : * • The elliptic quantum groups $U_{q,p}({g})$ and $U_{q,\kappa,p}({g}_{tor})$ are formulated in terms of the Drinfeld generators in a uniform way111One may unify them as “quiver elliptic quantum groups”, where $U_{q,p}({g})$ is a finite Dynkin quiver type, whereas $U_{q,\kappa,p}({g}_{tor})$ is an affine Dynkin quiver type. and equipped with $H$-Hopf algebroid structure as a coalgebra structure. * • $U_{q,p}({g})$ is characterized both as a $p$-deformation of the quantum affine algebra $U_{q}({g})$ and as a $q$-deformation of the $W$-algebra of the coset type ${g}\oplus{g}\supset{g}_{\rm diag}$. In the same way, $U_{q,\kappa,p}({g}_{tor})$ is characterized both as a $p$-deformation of the quantum toroidal algebra $U_{q,\kappa}({g}_{tor})$ and as a $(q,\kappa)$-deformation of affine quiver $W$-algebra. * • The vertex operators defined as intertwiners of the tensor product of the level-0 and $\not=0$ modules are key objects for many applications. * • The Gelfand-Tsetlin basis for the level-0 representation of $U_{q,p}(\widehat{{sl}}_{N})$ is constructed explicitly by using the elliptic weight function as the change of basis matrix from the standard basis. * • The elliptic weight function of type ${sl}_{N}$ is identified with the elliptic stable envelope for the equivariant elliptic cohomology of the cotangent bundle to the partial flag variety $\mathrm{E}_{T}(T^{*}fl_{\lambda})$. This allows us to make a dictionary between $U_{q,p}(\widehat{{sl}}_{N})$ and $\mathrm{E}_{T}(T^{*}fl_{\lambda})$. ## 1 Introduction Quantum groups are algebraic systems associated with solutions of the Yang- Baxter equation (YBE). They are in general associative algebras classified by finite dimensional simple Lie algebras $\bar{{g}}$, affine Lie algebras ${g}$ or toroidal algebras ${g}_{tor}$, and equipped with certain coalgebra structure. Typical examples are Yangians $Y(\bar{{g}})$, Yangian doubles ${\mathcal{D}}Y(\bar{{g}})$, quantum groups $U_{q}(\bar{{g}})$, affine quantum groups $U_{q}({g})$ and quantum toroidal algebras $U_{q,\kappa}({g}_{tor})$. See for example [11, 60, 26]. 
Similarly, elliptic quantum groups (EQG) are quantum groups associated with elliptic solutions of the YBE classified by affine Lie algebras ${g}$ or toroidal algebras ${g}_{tor}$. In the elliptic setting, it is known at least for the cases classified by ${g}$ that there are two types of YBEs, the vertex type and the face type (Fig.1 and Fig.2). Accordingly, there are two types of elliptic quantum groups, the vertex type and the face type[29]. Both of them are dynamical quantum groups, whose universal $R$ matrices[29] satisfy the dynamical YBE (DYBE)[21]. (Obviously the face type YBE is dynamical[21]. The vertex type is also dynamical, because the elliptic nome $p$ is a dynamical parameter getting a shift by a central element $q^{-2c}$, which yields the new elliptic nome $p^{*}=pq^{-2c}$[29].) Figure 1: The vertex type Yang-Baxter equation Figure 2: The face type Yang-Baxter equation There are several formulations of the elliptic quantum groups. They can be classified by their generators and coalgebra structures. See Table 1: ${\mathcal{A}}_{q,p}(\widehat{{sl}}_{N})$ and ${{\mathcal{B}}_{q,\lambda}}({g})$ in terms of the Chevalley generators, $U_{q,p}({g})$ in terms of the Drinfeld generators and $E_{q,p}(\widehat{{gl}}_{N})$ in terms of the $L$ operators. Only ${\mathcal{A}}_{q,p}(\widehat{{sl}}_{N})$ is the vertex type. The others are the face type. The coalgebra structures are the quasi-Hopf algebra structure for ${\mathcal{A}}_{q,p}(\widehat{{sl}}_{N})$, ${{\mathcal{B}}_{q,\lambda}}({g})$, and the Hopf algebroid structure for $E_{q,p}(\widehat{{gl}}_{N})$, $U_{q,p}({g})$, respectively. For expositions of these formulations with historical remarks and arguments for their consistency, the reader can consult[45]. | co-algebra structure | generators ---|---|--- $\begin{matrix}{\mathcal{A}}_{q,p}(\widehat{{sl}}_{N})\ \mbox{(vertex type)}\\\\[8.53581pt] {{\mathcal{B}}_{q,\lambda}}({{g}})\ \mbox{(face type)}\\\\[8.53581pt] \end{matrix}$ | quasi-Hopf algebra | Chevalley $\begin{matrix}E_{q,p}({g})\ \mbox{(face type)}\\\\[8.53581pt] \end{matrix}$ | Hopf Algebroid | $L$-operator $\begin{matrix}U_{q,p}({{g}})\ \mbox{(face type)}\\\\[8.53581pt] \end{matrix}$ | Hopf Algebroid | Drinfeld Table 1: Three formulations of the elliptic quantum groups In this article, we expose the elliptic quantum group $U_{q,p}({g})$ and its toroidal version $U_{q,\kappa,p}({g}_{tor})$. They are generated by the Drinfeld generators, i.e. analogues of the loop generators of the affine Lie algebra ${g}$[33]. After giving definitions of $U_{q,p}({g})$ and some typical representations of $U_{q,p}(\widehat{{sl}}_{N})$, we introduce the vertex operators of $U_{q,p}(\widehat{{sl}}_{N})$ and describe their applications: derivations of the elliptic weight functions and integral solutions of the elliptic $q$-KZ equation, construction of the level-0 action on the Gelfand-Tsetlin basis, and algebraic analysis of the elliptic solvable lattice models.
We then describe a geometric interpretation of the level-0 representation of $U_{q,p}(\widehat{{sl}}_{N})$ in terms of the equivariant elliptic cohomology of the cotangent bundle to the partial flag variety $\mathrm{E}_{T}(T^{*}fl_{\lambda})$. We also emphasize a characterization of $U_{q,p}({g})$ as a $q$-deformation of the $W$-algebra of the coset type. In the final section we briefly describe a formulation of the elliptic quantum toroidal algebras, $U_{q,\kappa,p}({g}_{tor})$, $U_{q,t,p}({{gl}}_{1,tor})$ and expose their connections to the Macdonald theory, affine quiver $W$-algebra and application to instanton calculus of the super symmetric gauge theories. ## 2 Elliptic Algebra $U_{q,p}(\mbox{\fourteeneufm g})$ Through this paper, let $p,q$ be generic complex numbers satisfying $|p|,|q|<1$. Let $\bar{{g}}$ be a simple Lie algebra over ${\mathbb{C}}$ and ${g}=X^{(1)}_{l}$ the corresponding untwisted affine Lie algebra with the generalized Cartan matrix $A=(a_{ij})_{i,j\in I}$, $I=\\{0\\}\cup\bar{I},\ \bar{I}=\\{1,\dots,l\\}$, $\mathrm{rank}A=l$. We denote by $B=(b_{ij})_{i,j\in I},\,b_{ij}=d_{i}a_{ij}$ the symmetrization of $A$ and set $q_{i}=q^{d_{i}}$. We also use the notations for $n\in\mathbb{Z}$, $\displaystyle[n]_{q}=\frac{q^{n}-q^{-n}}{q-q^{-1}},\qquad[n]_{i}=\frac{q_{i}^{n}-q_{i}^{-n}}{q_{i}-q_{i}^{-1}}.$ We fix a realization $({h},\Pi,\Pi^{\vee})$ of $A$, i.e. ${h}$ is a $l+2$-dimensional $\mathbb{C}$-vector space, $\Pi=\\{\alpha_{0},\alpha_{1},\dots,\alpha_{l}\\}\subset{h}^{*}$ a set of simple roots, and $\Pi^{\vee}=\\{h_{0},h_{1},\dots,h_{l}\\}\subset{h}$ a set of simple coroots satisfying $\langle\alpha_{j},h_{i}\rangle=a_{ij}\ (i,j\in I)$ for a canonical pairing $\langle\,,\rangle:{h}^{*}\times{h}\to{\mathbb{C}}$[33]. Set also ${\mathcal{Q}}=\sum_{i\in I}{\mathbb{Z}}\alpha_{i}$. We take $\\{h_{1},\dots,h_{l},c,d\\}$ as the basis of ${h}$ and $\\{\bar{\Lambda}_{1},\cdots,\bar{\Lambda}_{l},\Lambda_{0},\delta\\}$ the dual basis satisfying $\displaystyle\langle\delta,d\rangle=1=\langle\Lambda_{0},c\rangle,\quad\langle\bar{\Lambda}_{i},h_{j}\rangle=\delta_{i,j},$ with the other pairings being 0. Let $H_{P}$ be a ${\mathbb{C}}$-vector space spanned by $P_{0},P_{1},\cdots,P_{l}$ and $H^{*}_{P}$ be its dual space spanned by $Q_{0},Q_{1},\cdots,Q_{l}$ with a pairing $\langle Q_{i},P_{j}\rangle=a_{ij}$. For $\alpha=\sum_{i\in I}c_{i}\alpha_{i}\in{h}^{*}$, we set $P_{\alpha}=\sum_{i\in I}c_{i}P_{i}$, $Q_{\alpha}=\sum_{i\in I}c_{i}Q_{i}$. In particular $P_{\alpha_{i}}=P_{i}$, $Q_{\alpha_{i}}=Q_{i}$. Define also an analogue of the root lattice ${\mathcal{R}}_{Q}=\sum_{i\in I}{\mathbb{Z}}Q_{i}$. Let us consider $H:={h}\oplus H_{P}$ and $H^{*}:={h}^{*}\oplus H^{*}_{P}$ with a pairing $\langle\ ,\ \rangle:H^{*}\times H\to{\mathbb{C}}$ by extending those on ${h}^{*}\times{h}$ and $H^{*}_{P}\times H_{P}$ with $\langle{h}^{*},H_{P}\rangle=0=\langle H^{*}_{P},{h}\rangle$. We denote by ${\mathbb{F}}={\mathcal{M}}_{H^{*}}$ the field of meromorphic functions on $H^{*}$. We regard a meromorphic function $g(h,P)$ of $h\in{h},P\in H_{P}$ as an element in ${\mathbb{F}}$ by $g(h,P)(\mu)=g(\langle\mu,h\rangle,\langle\mu,P\rangle)$ for $\mu\in H^{*}$. 
### 2.1 Definition The elliptic algebra $U_{q,p}({g})$ is a topological algebra over ${\mathbb{F}}[[p]]$ generated by the Drinfeld generators $\displaystyle\alpha_{i,m},\quad e_{i,n},\quad f_{i,n},\quad K_{i}^{\pm},\quad q^{\pm c/2},\quad q^{d}\quad(i\in\bar{I},\quad m\in\mathbb{Z}\backslash\\{0\\},\quad n\in\mathbb{Z}).$ In order to write down the defining relations, it is convenient to introduce their generating functions $e_{i}(z),f_{i}(z)$ and $\phi^{\pm}_{i}(z)$ called the elliptic currents. $\displaystyle e_{i}(z)=\sum_{n\in{\mathbb{Z}}}e_{i,n}z^{-n},\qquad f_{i}(z)=\sum_{n\in{\mathbb{Z}}}f_{i,n}z^{-n},$ $\displaystyle\phi^{+}_{i}(q^{-{c}/{2}}z)=K^{+}_{i}\exp\left(-(q_{i}-q_{i}^{-1})\sum_{n>0}\frac{\alpha_{i,-n}}{1-p^{n}}z^{n}\right)\exp\left((q_{i}-q_{i}^{-1})\sum_{n>0}\frac{p^{n}\alpha_{i,n}}{1-p^{n}}z^{-n}\right),$ $\displaystyle\phi^{-}_{i}(q^{{c}/{2}}z)=K^{-}_{i}\exp\left(-(q_{i}-q_{i}^{-1})\sum_{n>0}\frac{p^{n}\alpha_{i,-n}}{1-p^{n}}z^{n}\right)\exp\left((q_{i}-q_{i}^{-1})\sum_{n>0}\frac{\alpha_{i,n}}{1-p^{n}}z^{-n}\right).$ We also set $p^{*}=pq^{-2c}$. The defining relations are given as follows. For $g(h,P)\in{\mathbb{F}}$, $\displaystyle q^{\pm c/2}\ :\hbox{ central },$ $\displaystyle g(h,{P})e_{j}(z)=e_{j}(z)g(h+\langle\alpha_{j},h\rangle,P-\langle Q_{j},P\rangle),\quad g(h,{P})f_{j}(z)=f_{j}(z)g(h-\langle\alpha_{j},h\rangle,P),$ $\displaystyle[g(h,P),\alpha_{i,m}]=0,\qquad[g(h,P),q^{d}]=0,\quad g(h,{P})K^{\pm}_{j}=K^{\pm}_{j}g(h,P-\langle Q_{j},P\rangle),$ $\displaystyle[q^{d},\alpha_{j,m}]=q^{m}\alpha_{j,m},\quad[q^{d},x^{\pm}_{j}(z)]=x^{\pm}_{j}(q^{-1}z)$ $\displaystyle[K^{\pm}_{i},K^{\pm}_{j}]=[K^{\pm}_{i},K^{\mp}_{j}]=0=[K^{\pm}_{i},\alpha_{j,m}],$ $\displaystyle K^{\pm}_{i}e_{j}(z)(K^{\pm}_{i})^{-1}=q^{\mp\langle\alpha_{j},h_{i}\rangle}e_{j}(z),\quad K^{\pm}_{i}f_{j}(z)(K^{\pm}_{i})^{-1}=q^{\pm\langle\alpha_{j},h_{i}\rangle}f_{j}(z),$ $\displaystyle[\alpha_{i,m},\alpha_{j,n}]=\delta_{m+n,0}\frac{[a_{ij}m]_{i}}{m}\frac{q^{cm}-q^{-cm}}{q_{j}-q_{j}^{-1}}\frac{1-p^{m}}{1-p^{*m}}q^{-cm},$ $\displaystyle[\alpha_{i,m},e_{j}(z)]=\frac{[a_{ij}m]_{i}}{m}\frac{1-p^{m}}{1-p^{*m}}q^{-cm}z^{m}e_{j}(z),$ $\displaystyle[\alpha_{i,m},f_{j}(z)]=-\frac{[a_{ij}m]_{i}}{m}z^{m}f_{j}(z),$ $\displaystyle(z-q^{b_{ij}}w)g_{ij}(w/z;p^{*})e_{i}(z)e_{j}(w)=(q^{b_{ij}}z-w)g_{ij}(z/w;p^{*})e_{j}(w)e_{i}(z),{}$ $\displaystyle(z-q^{-b_{ij}}w)g_{ij}(w/z;p)^{-1}f_{i}(z)f_{j}(w)=(q^{-b_{ij}}z-w)g_{ij}(z/w;p)^{-1}f_{j}(w)f_{i}(z),{}$ $\displaystyle[e_{i}(z),f_{j}(w)]=\frac{\delta_{i,j}}{q_{i}-q_{i}^{-1}}\left(\delta\bigl{(}q^{c}{w}/{z}\bigr{)}\phi^{-}_{i}(q^{{c}/{2}}w)-\delta\bigl{(}q^{-c}{w}/{z}\bigr{)}\phi^{+}_{i}(q^{{c}/{2}}z)\right),$ \+ Serre relations. Here we set $\displaystyle g_{ij}(z;s)=\exp\left(-\sum_{m>0}\frac{1}{m}\frac{q^{b_{ij}m}-q^{-b_{ij}m}}{1-s^{m}}(sz)^{m}\right)\ \in\ {\mathbb{C}}[[s]][[z]].$ These relations are treated as formal Laurent series in the argument of the elliptic currents i.e. $z,w$ etc.. All the coefficients in $z,w$ etc. are well defined in the $p$-adic topology. It is easy to find that in the limit $p\to 0$ the above relations except for those indicating the non-commutativity of ${\mathbb{F}}$ and $e_{i}(z),f_{i}(z),K^{\pm}_{j}$ go to the defining relations of the quantum affine algebra $U_{q}({g})$[15]. One also finds that the non-commutativity of ${\mathbb{F}}$ can be realized as the one of $P_{j}$ and $Q_{i}$ by setting $[P_{i},Q_{j}]=a_{i,j}\ (i,j\in I)$. Hence by using the group algebra ${\mathbb{C}}[{\mathcal{R}}_{Q}]$ of ${\mathcal{R}}_{Q}$, i.e. 
$e^{Q_{\alpha}},e^{Q_{\beta}},e^{Q_{\alpha}}e^{Q_{\beta}}=e^{Q_{\alpha}+Q_{\beta}},e^{0}=1\in{\mathbb{C}}[{\mathcal{R}}_{Q}]$, one obtains the following isomorphism[17]. For generic $p,q$, $\displaystyle U_{q,p}({g})/pU_{q,p}({g})\ \cong\ (U_{q}({g})\otimes{\mathbb{F}}[[p]])\sharp{\mathbb{C}}[{\mathcal{R}}_{Q}].$ Here the smash product $\sharp$ expresses the non-commutativity between ${\mathbb{F}}$ and ${\mathbb{C}}[{\mathcal{R}}_{Q}]$. Remark. For representations, on which $q^{\pm c/2}$ take complex values e.g. $q^{\pm k/2}$ (see Sec.4), we treat $p$ and $p^{*}=pq^{-2k}$ as generic complex numbers satisfying $|p|<1$ and $|p^{*}|<1$. Then one has $\displaystyle g_{ij}(z;s)=\frac{(sq^{b_{ij}}z;s)_{\infty}}{(sq^{-b_{ij}}z;s)_{\infty}},\qquad(z;s)_{\infty}=\prod_{n=0}^{\infty}(1-zs^{n})$ for $|sq^{\pm b_{ij}}z|<1$, $s=p,p^{*}$. Hence in the sense of analytic continuation, one can rewrite the relations of $e_{i}(z)$ and $e_{j}(w)$ and of $f_{i}(z)$ and $f_{j}(w)$ as $\displaystyle z\theta_{p^{*}}(q^{b_{ij}}w/z)e_{i}(z)e_{j}(w)=-w\theta_{p^{*}}(q^{b_{ij}}z/w)e_{j}(w)e_{i}(z),$ (2.1) $\displaystyle z\theta_{p}(q^{-b_{ij}}w/z)f_{i}(z)f_{j}(w)=-w\theta_{p}(q^{-b_{ij}}z/w)f_{j}(w)f_{i}(z),$ (2.2) respectively. Here $\theta_{p}(z)$ denotes the odd theta function given by $\displaystyle\theta_{p}(z)=(z;p)_{\infty}(p/z;p)_{\infty}(p;p)_{\infty}.$ The additive notations are also often used. Introduce $r,r^{*}=r-k\ \in{\mathbb{R}}_{>0}$ and set $p=q^{2r},p^{*}=pq^{-2k}=q^{2r^{*}}$, $\displaystyle E_{i}(z):=e_{i}(z){z^{-\frac{P_{\alpha_{i}}-1}{r^{*}}}},\quad F_{i}(z):=f_{i}(z){z^{\frac{(P+h)_{\alpha_{i}}-1}{r}}}.$ Then from (2.1), (2.2) and the non-commutativity of $h,P$ with $e_{i}(z),f_{i}(z)$ one obtains $\displaystyle F_{i}(z_{1})F_{j}(z_{2})=\frac{[u_{1}-u_{2}-a_{ij}/2]}{[u_{1}-u_{2}+a_{ij}/2]}F_{j}(z_{2})F_{i}(z_{1}),$ $\displaystyle E_{i}(z_{1})E_{j}(z_{2})=\frac{[u_{1}-u_{2}+a_{ij}/2]^{*}}{[u_{1}-u_{2}-a_{ij}/2]^{*}}E_{j}(z_{2})E_{i}(z_{1}).$ Here we set $z_{i}=q^{2u_{i}}\ (i=1,2)$. The symbols $[u]$ and $[u]^{*}$ denote Jacobi’s odd theta functions given by $\displaystyle[u]=q^{\frac{u^{2}}{r}-u}\theta_{p}(q^{2u}),\quad[u]^{*}=q^{\frac{u^{2}}{r^{*}}-u}\theta_{p^{*}}(q^{2u}).$ They have the following quasi periodicity. $\displaystyle[u+r]=-[u],\quad[u+r\tau]=-e^{-\pi i\tau}e^{-2\pi i{u}/{r}}[u],$ $\displaystyle[u+r^{*}]^{*}=-[u]^{*},\quad[u+r^{*}\tau^{*}]^{*}=-e^{-\pi i\tau^{*}}e^{-2\pi i{u}/{r^{*}}}[u]^{*},$ where $p=e^{-2\pi i/\tau},\ p^{*}=e^{-2\pi i/\tau^{*}}$. ### 2.2 Coalgebra structure In order to formulate a coalgebra structure over ${\mathbb{F}}={\mathcal{M}}_{H^{*}}$, which does not commute with the Drinfeld generators, one needs to extend the Hopf algebra to the $H$-Hopf algebroid. For the elliptic algebra ${\mathcal{U}}=U_{q,p}({g})$, it is roughly sketched as follows. For expositions of the background concepts of $H$-Hopf algebroid, the reader can consult[16, 35, 41]. Let $P+h=\sum_{i\in I}c_{i}(P_{i}+h_{i})\in H,\ c_{i}\in{\mathbb{C}}$. 
The algebra ${\mathcal{U}}$ is a $H$-algebra by $\displaystyle{\mathcal{U}}=\bigoplus_{\alpha,\beta\in{h}^{*}}{\mathcal{U}}_{\alpha,\beta}$ $\displaystyle({\mathcal{U}})_{\alpha\beta}=\left\\{x\in U\left|\ q^{P+h}xq^{-(P+h)}=q^{\langle\alpha,P+h\rangle}x,\quad q^{P}xq^{-P}=q^{\langle Q_{\beta},P\rangle}x\quad\forall P+h,P\in H\right.\right\\}$ with the two moment maps $\mu_{l},\mu_{r}:{\mathbb{F}}\to{\mathcal{U}}_{0,0}$ defined by $\displaystyle\mu_{l}(\widehat{f})=f(h,P+h,p)\in{\mathbb{F}}[[p]],\qquad\mu_{r}(\widehat{f})=f(h,P,p^{*})\in{\mathbb{F}}[[p^{*}]]$ for $\widehat{f}=f(h,P,p^{*})\in{\mathbb{F}}[[p^{*}]]$. Here $p^{*}=pq^{-2c}$ as before. The tensor product ${\mathcal{U}}{\widetilde{\otimes}}\,{\mathcal{U}}$ is the ${h}^{*}$-bigraded vector space with $\displaystyle({\mathcal{U}}{\widetilde{\otimes}}\,{\mathcal{U}})_{\alpha\beta}=\bigoplus_{\gamma\in{h}^{*}}({\mathcal{U}}_{\alpha\gamma}\otimes_{{\mathcal{M}}_{H^{*}}}{\mathcal{U}}_{\gamma\beta}),$ where $\otimes_{{\mathcal{M}}_{H^{*}}}$ denotes the ordinary tensor product modulo the following relation. $\displaystyle\mu_{r}(\widehat{f})a\,\widetilde{\otimes}\,b=a\,\widetilde{\otimes}\,\mu_{l}(\widehat{f})b\qquad a,b\in{\mathcal{U}}.\ $ Let us regard $T_{\alpha}=e^{-Q_{\alpha}}\in{\mathbb{C}}[{\mathcal{R}}_{Q}]$ as a shift operator $\displaystyle(T_{\alpha}\widehat{f})=f(h,P+\langle Q_{\alpha},P\rangle,p).$ Then ${\mathcal{D}}=\\{\widehat{f}e^{-Q_{\alpha}}\ |\ \widehat{f}\in{\mathbb{F}},e^{-Q_{\alpha}}\in{\mathbb{C}}[{\mathcal{R}}_{Q}]\\}$ becomes the $H$-algebra having the property $\displaystyle{\mathcal{U}}\cong{\mathcal{U}}\,\widetilde{\otimes}\,{\mathcal{D}}\cong{\mathcal{D}}\,\widetilde{\otimes}\,{\mathcal{U}}$ by $a\cong a\,\widetilde{\otimes}\,T_{-\beta}\cong T_{-\alpha}\,\widetilde{\otimes}\,a$ for all $a\in{\mathcal{U}}_{\alpha\beta}$. Let $\pi_{V},V=\oplus_{i}{\mathbb{C}}v_{i}$ be the vector representation of $\bar{{g}}$ and consider the elliptic dynamical $R$-matrix $R^{+}(z,\Pi)\in\mathrm{End}(V\otimes V)$ of type ${g}$[29, 39]. 
For example, the $\widehat{{sl}}_{N}$ type is given by $\displaystyle R^{+}(z,\Pi)$ $\displaystyle=$ $\displaystyle{\rho}^{+}(z)\bar{R}(z,\Pi),$ (2.3) $\displaystyle\bar{R}(z,\Pi)$ $\displaystyle=$ $\displaystyle\sum_{j=1}^{N}E_{j,j}\otimes E_{j,j}+\sum_{1\leq j_{1}<j_{2}\leq N}\left(b(u,(P+h)_{j_{1},j_{2}})E_{j_{1},j_{1}}\otimes E_{j_{2},j_{2}}+\bar{b}(u)E_{j_{2},j_{2}}\otimes E_{j_{1},j_{1}}\right.$ $\displaystyle\qquad\left.+c(u,(P+h)_{j_{1},j_{2}})E_{j_{1},j_{2}}\otimes E_{j_{2},j_{1}}+\bar{c}(u,(P+h)_{j_{1},j_{2}})E_{j_{2},j_{1}}\otimes E_{j_{1},j_{2}}\right),{}$ where $E_{i,j}v_{k}=\delta_{j,k}v_{i}$, $z=q^{2u}$, $\Pi_{j,k}=q^{2(P+h)_{j,k}}$, $(P+h)_{j,k}=\sum_{i=j}^{k-1}(P_{i}+h_{i})$, $\displaystyle{\rho}^{+}(z)=q^{-\frac{N-1}{N}}z^{\frac{N-1}{rN}}\frac{\Gamma(z;p,q^{2N})\Gamma(q^{2N}z;p,q^{2N})}{\Gamma(q^{2}z;p,q^{2N})\Gamma(q^{2N-2}z;p,q^{2N})},$ $\displaystyle b(u,s)=\frac{[s+1][s-1][u]}{[s]^{2}[u+1]},\qquad\bar{b}(u)=\frac{[u]}{[u+1]},$ $\displaystyle c(u,s)=\frac{[1][s+u]}{[s][u+1]},\qquad\bar{c}(u,s)=\frac{[1][s-u]}{[s][u+1]}.$ The symbol $\Gamma(z;p,q^{2N})$ denotes the elliptic Gamma function defined by $\displaystyle\Gamma(z;p,q)=\frac{(pq/z;p,q)_{\infty}}{(z;p,q)_{\infty}},\qquad(z;p,q)_{\infty}=\prod_{m,n=0}^{\infty}(1-zp^{m}q^{n}),\quad|p|,|q|<1.$ (2.4) Let ${L}^{+}(z)=\sum_{i,j}E_{i,j}L^{+}_{i,j}(z)\in\mathrm{End}(V)\,\widetilde{\otimes}\,{\mathcal{U}}[[z,z^{-1}]]$ be the the $L$-operator satisfying the dynamical $RLL$-relation[29] $\displaystyle R^{+(12)}(z_{1}/z_{2},\Pi){L}^{+(1)}(z_{1}){L}^{+(2)}(z_{2})={L}^{+(2)}(z_{2}){L}^{+(1)}(z_{1})R^{+*(12)}(z_{1}/z_{2},\Pi^{*}).$ Here $R^{+*}(z,\Pi^{*})$ denotes the same elliptic $R$-matrix $R^{+}(z,\Pi^{*})$ with $\Pi^{*}_{i,j}=q^{2P_{i,j}}$ except for replacing the elliptic nome $p$ by $p^{*}$. Define two $H$-algebra homomorphisms, the counit $\varepsilon:{\mathcal{U}}\to{\mathcal{D}}$ and the comultiplication $\Delta:{\mathcal{U}}\to{\mathcal{U}}\widetilde{\otimes}\;{\mathcal{U}}$, and the antihomomorphism $S:{\mathcal{U}}\to{\mathcal{U}}$ by $\displaystyle\varepsilon(L^{+}_{i,j}(z))=\delta_{i,j}{T}_{Q_{\epsilon_{i}}}\quad(n\in{\mathbb{Z}}),\qquad\varepsilon(e^{Q})=e^{Q},$ $\displaystyle\varepsilon(\mu_{l}({\widehat{f}}))=\varepsilon(\mu_{r}(\widehat{f}))=\widehat{f}T_{0},$ $\displaystyle\Delta(L^{+}_{i,j}(z))=\sum_{k}L^{+}_{i,k}(z)\widetilde{\otimes}L^{+}_{k,j}(z),$ $\displaystyle\Delta(e^{Q})=e^{Q}\,\widetilde{\otimes}\,e^{Q},$ $\displaystyle\Delta(\mu_{l}(\widehat{f}))=\mu_{l}(\widehat{f})\widetilde{\otimes}1,\quad\Delta(\mu_{r}(\widehat{f}))=1\widetilde{\otimes}\mu_{r}(\widehat{f}),$ $\displaystyle S(L^{+}_{ij}(z))=(L^{+}(z)^{-1})_{ij},$ $\displaystyle S(e^{Q})=e^{-Q},\quad S(\mu_{r}(\hat{f}))=\mu_{l}(\hat{f}),\quad S(\mu_{l}(\hat{f}))=\mu_{r}(\hat{f}).$ Here $\alpha_{i}=\epsilon_{i}-\epsilon_{i+1}\ (i\in\bar{I})$. Then the set $(U_{q,p}({g}),H,{{\mathcal{M}}}_{H^{*}},\mu_{l},\mu_{r},\Delta,\varepsilon,S)$ becomes a $H$-Hopf algebroid[41, 44, 46, 45]. An explicit realization of $L^{+}(z)$ in terms of the elliptic currents of ${\mathcal{U}}$ was studied in[28, 36, 44, 46]. In general, the connection between $L^{+}(z)$ and the elliptic currents is given as follows. Let $\\{h^{1},\cdots,h^{l}\\}$ be the dual basis to $\\{h_{1},\cdots,h_{l}\\}$ of $\bar{{h}}$ and $(\pi_{V},V)$ be the vector representation of $\bar{{g}}$. 
Define[29] $\displaystyle L^{-}(z)=({\rm Ad}(q^{-2\theta_{V}(P)})\otimes\mathrm{id})(q^{2T_{V}}L^{+}(zpq^{-c}),$ $\displaystyle\theta_{V}(P)=-\frac{1}{2}\sum_{j}\left(\pi_{V}(h_{j})\pi_{V}(h^{j})+2P_{j}\pi_{V}(h^{j})\right),$ $\displaystyle T_{V}=\sum_{j}\pi_{V}(h_{j})\otimes h^{j}.$ One finds that $L^{\pm}(z)$ satisfy $\displaystyle R^{\pm(12)}(z_{1}/z_{2},\Pi){L}^{\pm(1)}(z_{1}){L}^{\pm(2)}(z_{2})={L}^{\pm(2)}(z_{2}){L}^{\pm(1)}(z_{1})R^{\pm*(12)}(z_{1}/z_{2},\Pi^{*}),$ (2.5) $\displaystyle R^{+(12)}(q^{c}z_{1}/z_{2},\Pi){L}^{+(1)}(z_{1}){L}^{-(2)}(z_{2})={L}^{-(2)}(z_{2}){L}^{+(1)}(z_{1})R^{+*(12)}(q^{-c}z_{1}/z_{2},\Pi^{*}).$ (2.6) Here $R^{\pm}(z,\Pi),R^{\pm*}(z,\Pi^{*})$ and $L^{+}(z)$ are related to $R^{\pm}_{VV}(z,\lambda+h),R^{\pm}_{VV}(z,\lambda)$ and $L^{+}_{V}(z,\lambda)$ in [29] by $\displaystyle R^{\pm}(z,\Pi)=R^{\pm}_{VV}(z,\lambda+h),\qquad R^{\pm*}(z,\Pi^{*})=R^{\pm}_{VV}(z,\lambda),$ $\displaystyle L^{+}(z)=L^{+}_{V}(z,\lambda)e^{-\sum_{i}E_{i,i}Q_{\epsilon_{i}}}$ with $\lambda=(r-k+h^{\vee})d+\sum_{i}(P_{\alpha^{\prime}_{i}}+1)\bar{h}^{i}$, $\lambda+h=(r+h^{\vee})d+\sum_{i}(P_{\alpha^{\prime}_{i}}+h_{\alpha^{\prime}_{i}}+1)\bar{h}^{i}$, $[P_{i,j},Q_{\epsilon_{k}}]=\delta_{i,k}-\delta_{j,k}$ and $\alpha^{\prime}_{i}$ being the simple root of the Langlands dual Lie algebra of $\bar{{g}}$[39]. Then consider the following Gauss decomposition of $L^{\pm}(z)$. $\displaystyle L^{\pm}(z)=F^{\pm}(z)K^{\pm}(z)E^{\pm}(z).$ Here $K^{\pm}(z)$ are diagonal matrices with entries $K^{\pm}_{j}(z)$, whereas $F^{\pm}(z)$ (resp. $E^{\pm}(z)$) are upper (resp. lower) triangular matrices with entries $F^{\pm}_{i,j}(z)$ (resp. $E^{\pm}_{j,i}(z)$) $(i<j)$ and all diagonal entries 1. By combining the relations for these entries obtained from (2.5)-(2.6), one finds the following identification with the elliptic currents $E_{j}(z),F_{j}(z)$ and $\phi^{\pm}_{j}(z)$. $\displaystyle E_{j}(zq^{j-c/2})=\mu^{*}\left(E^{+}_{j+1,j}(zq^{c/2})-E^{-}_{j+1,j}(zq^{-c/2})\right),$ $\displaystyle F_{j}(zq^{j-c/2})=\mu\left(F^{+}_{j,j+1}(zq^{-c/2})-F^{-}_{j,j+1}(zq^{c/2})\right),$ $\displaystyle\phi^{\pm}_{j}(zq^{-c/2}q^{j})=\kappa K^{\pm}_{j}(z)K^{\pm}_{j+1}(z)^{-1}.$ Here $\mu,\mu^{*},\kappa$ are constants satisfying $\displaystyle\mu\mu^{*}=-\kappa\frac{q}{q-q^{-1}}\frac{(p^{*};p^{*})_{\infty}}{\theta_{p^{*}}(q^{2})}.$ See for example [44, 45]. Note also that to formulate a $H$-Hopf algebroid structure one may also use the Drinfeld comultiplication for the elliptic currents[47, 48] instead of the standard comultiplication for $L^{+}(z)$ given in the above. The resultant $H$-Hopf algebroid is completely different coalgebra structure from the above. These two coalgebra structures have their own appropriate applications. In particular, they define different vertex operators as intertwining operators of $U_{q,p}({g})$-modules, even if the modules are the same. The standard one is appropriate to discuss the problems exposed in the subsequent sections Sec.5-10, whereas the Drinfeld comultiplication is appropriate to discuss problems related to the deformed $W$-algebras. See Sec.3 and Sec.11. One should also note that for the quantum toroidal algebras, whatever they are trigonometric or elliptic, only the coalgebra structures associated with the Drinfeld comultiplication are available at the moment, because of a lack of expressions of the (elliptic) $R$-matrices and the $L$-operators. 
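As the simplest illustration of the Gauss decomposition above, for ${g}=\widehat{{sl}}_{2}$ ($N=2$) each triangular factor has a single nontrivial entry, $F^{\pm}(z)=1+E_{1,2}F^{\pm}_{1,2}(z)$ and $E^{\pm}(z)=1+E_{2,1}E^{\pm}_{2,1}(z)$, and the identification reduces to $\displaystyle E_{1}(zq^{1-c/2})=\mu^{*}\left(E^{+}_{2,1}(zq^{c/2})-E^{-}_{2,1}(zq^{-c/2})\right),\quad F_{1}(zq^{1-c/2})=\mu\left(F^{+}_{1,2}(zq^{-c/2})-F^{-}_{1,2}(zq^{c/2})\right),\quad\phi^{\pm}_{1}(zq^{1-c/2})=\kappa K^{\pm}_{1}(z)K^{\pm}_{2}(z)^{-1},$ i.e. a single set of elliptic currents, as expected for $\widehat{{sl}}_{2}$.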
## 3 $U_{q,p}(\mbox{\fourteeneufm g})$ as Deformed $W$-algebra The elliptic algebra $U_{q,p}({g})$ has another characterization as a $q$-deformation of the $W$-algebra ${\cal W}\bar{{g}}$ of the coset type $({g})_{r-h^{\vee}-k}\oplus({g})_{k}\supset({g}_{\rm diag})_{r-h^{\vee}}$[27, 9, 12], or more precisely as an algebra of screening currents of the deformation of ${\cal W}\bar{{g}}$. Here we assume that ${g}$ is an untwisted affine Lie algebra realized as a central extension of the loop algebra[33]. The generating functions of ${g}$ are called the currents. The symbol $({g})_{k}$ means ${g}$ considered in a level-$k$ representation. A key for understanding this characterization of $U_{q,p}({g})$ is the (generalized) Feigin-Fuchs (FF) construction of ${\cal W}\bar{{g}}$ from ${g}$[19, 12]. There one starts from the level-$k$ currents of ${g}$ in the Lepowsky-Wilson (LW) realization[53] and the corresponding energy-momentum (EM) tensor of the Wess-Zumino-Witten (WZW) model, i.e. the generating function of the Virasoro algebra. The LW realization of ${g}$ is a realization of the currents in terms of $\mathrm{rk}\;{{g}}$ free bosons and the level-$k$ $Z$-algebra. The EM tensor is obtained by the Sugawara construction in terms of the currents. Then the FF construction deforms the EM tensor by introducing the so-called background charge term, which depends on a parameter $r$, and at the same time deforms the free boson part of the currents of ${g}$ by $r$ and the $Z$-algebra part by adding the dynamical parameters. The deformed EM tensor becomes the generating function of the Virasoro generators of the coset $W$-algebra ${\cal W}\bar{{g}}$, whereas the deformed currents of ${g}$ become the so-called screening currents, i.e. primary fields of conformal dimension one w.r.t. these new Virasoro generators. All the generating functions of ${\cal W}\bar{{g}}$, the Virasoro as well as the other generators of higher conformal dimensions, are characterized as the intersection of the kernels of the screening operators, i.e. certain contour integrals of the screening currents, on $U({g})$[18]. In this sense, we refer to the resultant $r$-deformation (+ modification by the dynamical parameters) of the affine Lie algebra ${g}$ as the $W$-algebra ${\cal W}\bar{{g}}$. When one considers a $q$-deformation of this picture, ${g}$ and its $r$-deformation ${\cal W}\bar{{g}}$ turn out to give $U_{q}({g})$ and its $p$-deformation $U_{q,p}({g})$ with $p=q^{2r}$, respectively. For example, the currents of the level $k(\not=0)$ affine Lie algebra $\widehat{{sl}}_{2}$ have the following LW realization. $\displaystyle e(z)=Z^{+}(z):\exp\left\\{\sqrt{2k}\sum_{m\not=0}\frac{a_{m}}{km}z^{-m}\right\\}:,\quad f(z)=Z^{-}(z):\exp\left\\{-\sqrt{2k}\sum_{m\not=0}\frac{a_{m}}{km}z^{-m}\right\\}:,$ where $Z^{\pm}(z)$ are the generating functions of the level-$k$ $Z$-algebra, and $a_{m}\ (m\in{\mathbb{Z}}\backslash\\{0\\})$ generate the Heisenberg subalgebra of $\widehat{{sl}}_{2}$ satisfying $\displaystyle[\sqrt{{2k}}a_{m},\sqrt{{2k}}a_{n}]=2mk\delta_{m+n,0}.$ (3.1) The generating function $\widetilde{\phi(z)}=\sum_{m\not=0}\frac{a_{m}}{m}z^{-m}$ is called the free boson. The $Z$-algebra is realized in terms of the ${\mathbb{Z}}_{k}$-parafermions $\Psi(z),\Psi^{\dagger}(z)$ and the zero mode operators $\alpha$, $h$ satisfying $[h,\alpha]=2$ as follows.
$\displaystyle Z^{+}(z)=\Psi(z)e^{\alpha}z^{h/k},\qquad Z^{-}(z)=\Psi^{\dagger}(z)e^{-\alpha}z^{-h/k}.$ (3.2) The corresponding 2d conformal field theory (CFT) is the $\widehat{{sl}}_{2}$ WZW model generated by the EM tensor $\displaystyle T(z)=\frac{1}{2}(\partial\widetilde{\phi(z)})^{2}+\mbox{(zero-mode term)}+T_{PF}(z).$ Here $T_{PF}(z)$ denotes the EM tensor of the ${\mathbb{Z}}_{k}$-parafermions. The central charge of the model is $c_{WZW}=\frac{3k}{k+2}$. Then the FF construction is the following procedure. One deforms $T(z)$ by adding a background charge term, $\displaystyle T(z)\ \mapsto\ T_{FF}(z)=T(z)+\sqrt{\frac{k}{2r(r-k)}}\partial^{2}\widetilde{\phi(z)}.$ This makes $T_{FF}(z)$ the generating function of the coset Virasoro algebra $(\widehat{{sl}}_{2})_{r-2-k}\oplus(\widehat{{sl}}_{2})_{k}\supset(\widehat{{sl}}_{2})_{r-2}$, whose central charge is $c_{Vir}=\frac{3k}{k+2}\left(1-\frac{2(k+2)}{r(r-k)}\right)$. At the same time, one deforms the currents of $\widehat{{sl}}_{2}$ by scaling the coefficients of the free boson and by adding the dynamical parameter $P$ and its conjugate $Q$ satisfying $[P,Q]=2$ to the $Z$-algebra part: $\displaystyle e(z)$ $\displaystyle\mapsto$ $\displaystyle S^{+}(z)=Z^{+}(z)e^{-Q}z^{-\frac{P-1}{r-k}}:\exp\left\\{\sqrt{\frac{2kr}{r-k}}\sum_{m\not=0}\frac{a_{m}}{km}z^{-m}\right\\}:,$ (3.3) $\displaystyle f(z)$ $\displaystyle\mapsto$ $\displaystyle S^{-}(z)=Z^{-}(z)z^{\frac{P+h-1}{r}}:\exp\left\\{-\sqrt{\frac{2k(r-k)}{r}}\sum_{m\not=0}\frac{a_{m}}{km}z^{-m}\right\\}:.$ (3.4) It then turns out that $S^{\pm}(z)$ become the screening currents of the Virasoro algebra generated by $T_{FF}(z)$. Now let us consider a $q$-deformation of these constructions. It is known that the quantum affine algebra $U_{q}({g})$ has an analogue of the LW realization; see for example [17]. In our case, the level-$k$ currents of $U_{q}(\widehat{{sl}}_{2})$ are realized in terms of the Heisenberg generators $\tilde{a}_{m}$ and the level-$k$ $q$-deformed $Z$ operators $Z^{\pm}_{q}(z)$. The former satisfy $\displaystyle[\tilde{a}_{m},\tilde{a}_{n}]=\frac{[2m]_{q}[km]_{q}}{m}q^{-k|m|}\delta_{m+n,0}.$ (3.5) This is a $q$-deformation of (3.1), i.e. in the limit $q\to 1$ one recovers (3.1). The operators $Z^{\pm}_{q}(z)$ are realized in the same way as (3.2) by replacing $\Psi(z),\Psi^{\dagger}(z)$ with appropriate $q$-parafermions[38, 17]. The level-$k$ currents of $U_{q}(\widehat{{sl}}_{2})$ are hence realized as $\displaystyle e_{q}(z)=Z_{q}^{+}(z):\exp\left\\{\sum_{m\not=0}\frac{\tilde{a}_{m}}{[km]_{q}}z^{-m}\right\\}:,\quad f_{q}(z)=Z_{q}^{-}(z):\exp\left\\{-\sum_{m\not=0}\frac{\tilde{a}_{m}}{[km]_{q}}z^{-m}\right\\}:.$ Then a $q$-analogue of the FF construction (3.3)-(3.4) is obtained by noting the following observations. * • In (3.3) and (3.4), the $Z$-algebra part remains the same as in $e(z)$ and $f(z)$ except for a modification by the dynamical parameters. * • The commutation relations of the Heisenberg generators $a_{m}$ with the scaled coefficients can be read off as, from $S^{+}(z)$, $\displaystyle[\sqrt{\frac{2kr}{r-k}}a_{m},\sqrt{\frac{2kr}{r-k}}a_{n}]=2mk\frac{r}{r-k}\delta_{m+n,0},$ (3.6) and from $S^{-}(z)$, $\displaystyle[\sqrt{\frac{2k(r-k)}{r}}a_{m},\sqrt{\frac{2k(r-k)}{r}}a_{n}]=2mk\frac{r-k}{r}\delta_{m+n,0}.$ (3.7) The first point indicates that the $q$-$Z$-algebra operators $Z^{\pm}_{q}(z)$ in $e_{q}(z)$ and $f_{q}(z)$ should remain the same, up to a modification by the dynamical parameters $P$ and $Q$, in a $q$-deformed FF process.
The second point suggests using the Heisenberg generators $\alpha_{m}$ or $\alpha^{\prime}_{m}$ satisfying the following $p$-deformed commutation relations obtained from (3.5). $\displaystyle[{\alpha}_{m},{\alpha}_{n}]=\frac{[2m]_{q}[km]_{q}}{m}\frac{1-p^{m}}{1-p^{*m}}q^{-km}\delta_{m+n,0},$ (3.8) $\displaystyle[{\alpha}^{\prime}_{m},{\alpha}^{\prime}_{n}]=\frac{[2m]_{q}[km]_{q}}{m}\frac{1-p^{*m}}{1-p^{m}}q^{km}\delta_{m+n,0}.$ (3.9) Here we set $p=q^{2r}$, $p^{*}=pq^{-2k}=q^{2(r-k)}$, and $\alpha_{m}$, $\alpha^{\prime}_{m}$ are not independent: $\displaystyle{\alpha}^{\prime}_{m}=\frac{1-p^{*m}}{1-p^{m}}q^{km}{\alpha}_{m}.$ These are $q$-deformations of (3.6) and (3.7). Note that (3.8) is nothing but the relation for the Heisenberg subalgebra of $U_{q,p}(\widehat{{sl}}_{2})$ at level $k$ given in Sec. 2.1. One hence obtains the following $q$-analogue of the FF construction. $\displaystyle e_{q}(z)$ $\displaystyle\mapsto$ $\displaystyle S^{+}_{q}(z)=Z_{q}^{+}(z)e^{-Q}z^{-\frac{P-1}{r-k}}:\exp\left\\{\sum_{m\not=0}\frac{{\alpha}_{m}}{[km]_{q}}z^{-m}\right\\}:,$ $\displaystyle f_{q}(z)$ $\displaystyle\mapsto$ $\displaystyle S^{-}_{q}(z)=Z_{q}^{-}(z)z^{\frac{P+h-1}{r}}:\exp\left\\{-\sum_{m\not=0}\frac{{\alpha}^{\prime}_{m}}{[km]_{q}}z^{-m}\right\\}:.$ Then the currents $S^{+}_{q}(z)$ and $S^{-}_{q}(z)$ turn out to give a realization of the level-$k$ elliptic currents $e(z)$ and $f(z)$ of $U_{q,p}(\widehat{{sl}}_{2})$, respectively[38, 17]. Hence, regarding $e(z)$ and $f(z)$ as screening currents, one can define a deformation of the $W$-algebra ${\cal W}{sl}_{2}$ of the coset type at level $k$ as the kernel of the screening operators associated with either $e(z)$ or $f(z)$. In general, in the FF process the $Z$-algebra structure of ${g}$ remains the same as in the LW realization to realize the screening currents of ${\cal W}\bar{{g}}$ of the coset type $({g})_{r-h^{\vee}-k}\oplus({g})_{k}\supset({g}_{\rm diag})_{r-h^{\vee}}$ except for a modification by adding the dynamical parameters. Correspondingly, the $q$-$Z$-algebra structure of $U_{q}({g})$ remains the same to realize the elliptic currents $e_{i}(z),f_{i}(z)$ of $U_{q,p}({g})$ as a $p$-deformation of $U_{q}({g})$ except for a modification by the dynamical parameters[38, 17]. Then by using $e_{i}(z)$ and $f_{i}(z)$ as screening currents, one defines a deformation of ${\cal W}\bar{{g}}$ as the intersection of the kernels of the screening operators associated with either $e_{i}(z)$’s or $f_{i}(z)$’s. We denote the resultant deformed $W$-algebra by ${\mathcal{W}}_{p,p^{*}}\bar{{g}}$. The two routes to $U_{q,p}({g})$ are summarized in the following diagram. $\displaystyle{g}\ \xrightarrow{\ q\mbox{-deform.}\ }\ U_{q}({g})\ \xrightarrow{\ p=q^{2r}\mbox{-deform.}\,+\,\mbox{dynamical param.}\ }\ U_{q,p}({g}),$ $\displaystyle{g}\ \xrightarrow{\ r\mbox{-deform.}\,+\,\mbox{dynamical param. (the Feigin-Fuchs construction)}\ }\ {\cal W}\bar{{g}}=({g})_{r-h^{\vee}-k}\oplus({g})_{k}\supset({g})_{r-h^{\vee}}\ \xrightarrow{\ q\mbox{-deform.}\ }\ U_{q,p}({g}).$ Figure 3: Two characterizations of $U_{q,p}({g})$. Note that here we have obtained two elliptic nomes $p$ and $p^{*}$ naturally in a $q$-deformation of the FF construction.
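As a simple consistency check, in the limit $q\to 1$ with $r,k$ fixed one has $\frac{[2m]_{q}[km]_{q}}{m}\to 2mk$, $\frac{1-p^{m}}{1-p^{*m}}=\frac{1-q^{2rm}}{1-q^{2(r-k)m}}\to\frac{r}{r-k}$ and $q^{\mp km}\to 1$, so that the relations (3.8) and (3.9) indeed reduce to the scaled Heisenberg relations (3.6) and (3.7), respectively.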
The parameters $p,p^{*}$ correspond to $q,t$ in Frenkel-Reshetikhin’s formulation of the deformed $W$-algebra $W_{q,t}(\bar{{g}})$[24]333 Frenkel-Reshetikhin’s deformed $W$-algebras are the Hamiltonian reduction type, which are isomorphic to the coset type only for the simply laced $\bar{{g}}$. For example, for the case $B_{l}$ the coset construction gives Fateev-Lukyanov’s $WB_{l}$ algebra[54], which is different from $W(B_{l})$ obtained by the quantum Hamiltonian reduction of $B^{(1)}_{l}$. See for example [9]. . It is remarkable that the appearance of two elliptic nomes is also consistent to the quasi-Hopf formulation of the face type EQG ${{\mathcal{B}}_{q,\lambda}}({g})$[29], where $p$ is treated as a dynamical parameter and $p^{*}$ is nothing but a dynamical shift of $p$ by $q^{-2c}$. The generating function of ${\mathcal{W}}_{p,p^{*}}\bar{{g}}$ can also be constructed explicitly in terms of the vertex operators of $U_{q,p}({g})$ w.r.t. the Drinfeld comultiplication. The same construction is valid also in the elliptic quantum toroidal algebras, which give a realization of the deformed affine quiver $W$-algebras. See Sec.11 for the case ${{gl}}_{1,tor}$. Moreover in Sec.5, we discuss the vertex operators of $U_{q,p}(\widehat{{sl}}_{N})$ derived as intertwining operators of $U_{q,p}(\widehat{{sl}}_{N})$-modules w.r.t. the standard comultiplication. Constructing correlation functions of them, one finds that they play the role of the vertex operators of the corresponding deformed $W$-algebra ${\mathcal{W}}{}_{p,p^{*}}{{sl}_{N}}$. See Sec.7 and 8. ## 4 Representations A representation of $H$-algebra is called a dynamical representation. We skip all formal descriptions of it and expose only the level-0 and the level-1 dynamical representations of $U_{q,p}(\widehat{{sl}}_{N})$. Here one says that a $U_{q,p}({g})$-module has level $k$, if $q^{c/2}$ acts as the scalar $q^{k/2}$ on it. For detailed description of the dynamical representations, the reader can consult[16, 35, 41]. ### 4.1 The evaluation representation Let $\displaystyle{{{\mathcal{V}}}=\oplus_{i=1}^{N}{\mathbb{F}}v_{i}\otimes 1}$ and set ${{\mathcal{V}}}_{z}={{\mathcal{V}}}[z,z^{-1}]$. The following gives the level-0 dynamical action of $U_{q,p}(\widehat{{sl}}_{N})$ on ${{\mathcal{V}}}_{z}$. $\displaystyle\pi_{z}(q^{c/2})=1,\quad\pi_{z}(d)=-z\frac{d}{dz},$ $\displaystyle\pi_{z}(\alpha_{j,m})=\frac{[m]_{q}}{m}(q^{j-N+1}z)^{m}(q^{-m}E_{j,j}-q^{m}E_{j+1,j+1}),$ $\displaystyle\pi_{z}(e_{j}(w))=\frac{(pq^{2};p)_{\infty}}{(p;p)_{\infty}}E_{j,j+1}\delta\left(q^{j-N+1}{z}/{w}\right)e^{-Q_{\alpha_{j}}},$ $\displaystyle\pi_{z}(f_{j}(w))=\frac{(pq^{-2};p)_{\infty}}{(p;p)_{\infty}}E_{j+1,j}\delta\left(q^{j-N+1}{z}/{w}\right),$ $\displaystyle\pi_{z}(\phi_{j}^{+}(w))=q^{-\pi(h_{j})}e^{-Q_{\alpha_{j}}}\frac{\theta_{p}(q^{-j+N-1+2\pi(h_{j})}\frac{w}{z})}{\theta_{p}(q^{-j+N-1}{w}/{z})},$ $\displaystyle\pi_{z}(\phi_{j}^{-}(w))=q^{\pi(h_{j})}e^{-Q_{\alpha_{j}}}\frac{\theta_{p}(q^{j-N+1-2\pi(h_{j})}\frac{z}{w})}{\theta_{p}(q^{j-N+1}{z}/{w})},\qquad j=1,\cdots,N-1.$ Here $\pi(h_{j})=E_{j,j}-E_{j+1,j+1}$, and the element $e^{Q}\in{\mathbb{C}}[{\mathcal{R}}_{Q}]$ acts on $f(h,P)v\otimes 1$ by $e^{Q}\cdot(f(h,P)v\otimes 1)=f(h,P-\langle P,Q\rangle)v\otimes e^{Q}$. This is called the evaluation representation associated with the vector representation with the evaluation parameter $z$. 
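As a quick check of the defining relations in this representation, using $E_{j,j}E_{j,j+1}=E_{j,j+1}=E_{j,j+1}E_{j+1,j+1}$ and $w^{m}\,\delta(q^{j-N+1}z/w)=(q^{j-N+1}z)^{m}\,\delta(q^{j-N+1}z/w)$ one finds $\displaystyle[\pi_{z}(\alpha_{j,m}),\pi_{z}(e_{j}(w))]=\frac{[m]_{q}}{m}(q^{m}+q^{-m})\,w^{m}\,\pi_{z}(e_{j}(w))=\frac{[2m]_{q}}{m}\,w^{m}\,\pi_{z}(e_{j}(w)),$ which reproduces the relation $[\alpha_{i,m},e_{j}(z)]$ of Sec.2.1 for $i=j$ (with $a_{jj}=2$, $d_{j}=1$) at $c=0$, where $p^{*}=p$.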
In particluar, one has $\displaystyle\pi_{z}({L^{+}_{i,j}(w)})_{k,l}=R^{+}(w/z,\Pi^{*})_{ik}^{jl}.$ ### 4.2 The level-1 highest weight representation Let $\Lambda_{0}$ and $\Lambda_{a}=\Lambda_{0}+\bar{\Lambda}_{a}\ (a=1,\cdots,N-1)$ be the fundamental weights of $\widehat{{sl}}_{N}$. For generic $\nu\in{h}^{*}$, set $\displaystyle{{\mathcal{V}}}(\Lambda_{a}+\nu,\nu)$ $\displaystyle=$ $\displaystyle{\mathbb{F}}\otimes_{\mathbb{C}}({\mathcal{F}}_{\alpha}\otimes e^{{\Lambda}_{a}}{\mathbb{C}}[{{\mathcal{Q}}}])\otimes e^{Q_{{\nu}}}{\mathbb{C}}[{{\mathcal{R}}}_{Q}],$ where ${\mathcal{F}}_{\alpha}={\mathbb{C}}[\alpha_{j,-m}\ (j=1,\cdots,N-1,\ m\in{\mathbb{N}}_{>0})]$ denotes the Fock module of the Heisenberg algebra. We here consider the group algebra ${\mathbb{C}}[{{\mathcal{Q}}}]\otimes{\mathbb{C}}[{\mathcal{R}}_{Q}]$ with the following central extension. $\displaystyle e^{\alpha_{j}}e^{\alpha_{k}}=(-1)^{(\alpha_{j},\alpha_{k})}q^{\frac{1}{r}(\delta_{j,k+1}-\delta_{k,j-1})}e^{\alpha_{k}}e^{\alpha_{j}},$ $\displaystyle e^{Q_{\alpha_{j}}}e^{Q_{\alpha_{k}}}=q^{\left(\frac{1}{r}-\frac{1}{r^{*}}\right)(\delta_{j,k+1}-\delta_{j,k-1})}e^{Q_{\alpha_{k}}}e^{Q_{\alpha_{j}}},$ $\displaystyle e^{Q_{\alpha_{j}}}e^{\alpha_{k}}=q^{\frac{1}{r}(\delta_{j,k+1}-\delta_{k,j-1})}e^{\alpha_{k}}e^{Q_{\alpha_{j}}}.$ Then the space ${{\mathcal{V}}}(\Lambda_{a}+\nu,\nu)$ is the level-1 irreducible highest weight $U_{q,p}(\widehat{{sl}}_{N})$-module with the highest weight $(\Lambda_{a}+\nu,\nu)$ w.r.t. the set $(P+h,P)$ by the action $\displaystyle E_{j}(z)=\;:\exp\left\\{-\sum_{n\neq 0}\frac{1}{[n]_{q}}\alpha_{j,n}z^{-n}\right\\}:e^{{\alpha}_{j}}e^{-Q_{\alpha_{j}}}z^{h_{\alpha_{j}}+1}(q^{N-j}z)^{-\frac{P_{\alpha_{j}}-1}{r^{*}}},$ $\displaystyle F_{j}(z)=\;:\exp\left\\{\sum_{n\neq 0}\frac{1}{[n]_{q}}\alpha^{\prime}_{j,n}z^{-n}\right\\}:e^{-{\alpha}_{j}}z^{-h_{\alpha_{j}}+1}(q^{N-j}z)^{\frac{(P+h)_{\alpha_{j}}-1}{r}},$ $\displaystyle\phi^{+}_{j}(q^{-{1}/{2}}z)=q^{-h_{j}}e^{-Q_{\alpha_{j}}}:\exp\left\\{(q-q^{-1})\sum_{m\not=0}\frac{\alpha_{j,m}}{1-p^{m}}p^{m}z^{-m}\right\\}:,$ $\displaystyle\phi^{-}_{j}(q^{{1}/{2}}z)=q^{h_{j}}e^{-Q_{\alpha_{j}}}:\exp\left\\{(q-q^{-1})\sum_{m\not=0}\frac{\alpha_{j,m}}{1-p^{m}}z^{-m}\right\\}:\quad(1\leq j\leq N-1).$ The highest weight vector is given by $1\otimes e^{\Lambda_{a}}\otimes e^{Q_{{\nu}}}$. Note that one has the following natural decomposition. $\displaystyle{{\mathcal{V}}}(\Lambda_{a}+\nu,\nu)=\bigoplus_{\xi,\eta\in{\mathcal{Q}}}{\mathcal{F}}_{a,\nu}(\xi,\eta),\qquad{\mathcal{F}}_{a,\nu}(\xi,\eta)={\mathbb{F}}\otimes_{\mathbb{C}}({\mathcal{F}}_{\alpha}\otimes e^{{\Lambda}_{a}+{\xi}})\otimes e^{Q_{{\nu}+\eta}}.$ It turns out that ${\mathcal{F}}_{a,\nu}(\xi,\eta)$ is isomorphic to the Verma module of the $W$-algebra ${\mathcal{W}}{sl}_{N}$ with the central charge $(N-1)\left(1-\frac{N(N+1)}{r(r-1)}\right)$ and the highest weight $\frac{1}{r(r-1)}\bigl{|}r(\nu+\eta+\rho)-(r-1)(\Lambda_{a}+\nu+\xi+\eta+\rho)\bigr{|}^{2}$[17]. ## 5 Vertex Operators The vertex operators defined for the infinite dimensional quantum groups such as $U_{q}({g})$, $U_{q,p}({g})$ and $U_{q,\kappa,p}({g}_{tor})$ have proven to be key objects in many applications[23, 30, 37, 10, 47]. Here we expose the vertex operators of $U_{q,p}(\widehat{{sl}}_{N})$[28, 36], which are used in a derivation of the elliptic weight functions (Sec.6), integral solutions of the elliptic $q$-KZ equation and the vertex function (Sec.7) and also in the algebraic analysis of the elliptic lattice models (Sec.8). 
Let ${{\mathcal{V}}}_{z}$ be as in Sec.4.1 and ${{\mathcal{V}}}(\Lambda_{a}+\nu,\nu)$ denote the irreducible level-$1$ highest weight $U_{q,p}(\widehat{{sl}}_{N})$-module with the highest weight $(\Lambda_{a}+\nu,\nu)$ in Sec.4.2. Let $\Lambda_{a+N}=\Lambda_{a}$. The (type I) vertex operator $\Phi(z)$ is the intertwinner of the $U_{q,p}(\widehat{{sl}}_{N})$-modules $\displaystyle{\Phi}(z)$ $\displaystyle:$ $\displaystyle{{\mathcal{V}}}(\Lambda_{a}+\nu,\nu)\to{{\mathcal{V}}}_{z}\,\widetilde{\otimes}\,{{\mathcal{V}}}(\Lambda_{a-1}+\nu,\nu)$ satisfying the intertwining relations $\displaystyle\Delta(x){\Phi}(z)={\Phi}(z)x\qquad\forall x\in U_{q,p}(\widehat{{sl}}_{N}).$ (5.1) The components of the vertex operator are defined by ${\Phi}(zq^{-1})u=\sum_{\mu=1}^{N}v_{\mu}\,\widetilde{\otimes}\,\Phi_{\mu}\left(z\right)u,\quad\qquad\forall u\in{{\mathcal{V}}}(\lambda,\nu).$ One can solve the linear equation (5.1) by using the representations in the last section uniquely up to an overall constant factor. The result is summarized as follows. $\displaystyle\Phi_{\mu}(z)$ $\displaystyle=$ $\displaystyle a_{\mu,N}\oint_{{\mathbb{T}}^{N-\mu}}\prod_{m=\mu}^{N-1}\frac{dt_{m}}{2\pi it_{m}}\Phi_{N}(z)F_{N-1}(t_{N-1})F_{N-2}(t_{N-2})\cdots F_{\mu}(t_{\mu})\varphi_{\mu}(z,t_{\mu},\cdots,t_{N-1};\Pi),{}$ $\displaystyle\Phi_{N}(z)$ $\displaystyle=$ $\displaystyle:\exp\left\\{\sum_{m\neq 0}(q^{m}-q^{-m}){{\mathcal{E}}}_{m}^{{}^{\prime}N}z^{-m}\right\\}:e^{-{\bar{\epsilon}}_{N}}(-z)^{-h_{\bar{\epsilon}_{N}}}z^{\frac{1}{r}(P+h)_{\bar{\epsilon}_{N}}},$ (5.2) where we set $\bar{\epsilon}_{i}=\epsilon_{i}-\sum_{j=1}^{N}\epsilon_{j}/N$, $\Pi=\\{\Pi_{\mu,m}\ (m=\mu+1,\cdots,N)\\}$, $\displaystyle{\mathbb{T}}^{N-\mu}=\\{t\in{\mathbb{C}}^{N-\mu}\ |\ |t_{\mu}|=\cdots=|t_{N-1}|=1\\},$ and $\displaystyle{\varphi_{\mu}(z,t_{\mu},\cdots,t_{N-1};\Pi)=\prod_{m=\mu}^{N-1}\frac{[v_{m+1}-v_{m}+(P+h)_{\mu,m+1}-\frac{1}{2}][1]}{[v_{m+1}-v_{m}+\frac{1}{2}][(P+h)_{\mu,m+1}]}}$ with $z=q^{2u},t_{m}=q^{2v_{m}}$, $v_{N}=u$. We assume $|p|<|z|<1$. The symbol ${\mathcal{E}}_{m}^{{}^{\prime}j}$ denotes a linear combination of $\alpha^{\prime}_{j,m}$ obtained by solving $\displaystyle\alpha^{\prime}_{j,m}=[m]_{q}^{2}(q-q^{-1})({\mathcal{E}}_{m}^{{}^{\prime}j}-q^{-m}{\mathcal{E}}_{m}^{{}^{\prime}j+1}),\qquad\sum_{j=1}^{N}q^{(j-1)m}{\mathcal{E}}^{{}^{\prime}j}_{m}=0.$ The most important property of $\Phi_{\mu}(z)$ is the following commutation relation. $\displaystyle\Phi_{\mu_{2}}(z_{2})\Phi_{\mu_{1}}(z_{1})$ $\displaystyle=$ $\displaystyle\sum_{\mu_{1}^{\prime},\mu_{2}^{\prime}=1}^{N}R(z_{1}/z_{2},\Pi)_{\mu_{1}\mu_{2}}^{\mu_{1}^{\prime}\mu_{2}^{\prime}}\ \Phi_{\mu_{1}^{\prime}}(z_{1})\Phi_{\mu_{2}^{\prime}}(z_{2}),$ (5.3) where $\displaystyle R(z,\Pi)=\mu(z){\overline{R}}(z,\Pi),\qquad\mu(z)=z^{-\frac{r-1}{r}\frac{N-1}{N}}\frac{\Gamma(pz;p,q^{2N})\Gamma(q^{2N}z;p,q^{2N})}{\Gamma(q^{2}z;p,q^{2N})\Gamma(pq^{2N-2}z;p,q^{2N})}.$ For example, this yields the transition property of the elliptic weight function (Sec.6) as well as the $R$-matrix coefficients in the elliptic $q$-KZ connection (Sec.7). ## 6 Elliptic Weight Functions The weight functions are objects in the theory of ($q$-) hypergeometric integrals. They play the role of a basis of the (twisted) de Rham cohomology[4, 56, 59, 70]. 
The elliptic version of the weight function was first introduced for the ${sl}_{2}$ type in [70]444Strictly speaking, the elliptic weight functions in [70] were used as a pole subtraction matrix, in the terminology of [2], in the $q$-hypergeometric integral solution of the $q$-KZ equation. They specify the cycles of the integral. On the other hand, there the role of a basis of the cocycle was played by the trigonometric weight functions. We here expose the elliptic weight functions of the ${sl}_{N}$ type, which can be derived by considering a composition of the vertex operators given in the last section. In the next section, we show that they play this role in elliptic hypergeometric integral solutions of the elliptic $q$-KZ equation. Moreover, the elliptic weight functions are identified with the elliptic stable envelopes[2] associated with the Nakajima quiver variety $X$. This makes clear a connection between the representation theory of EQGs and the geometry of elliptic cohomology (Sec.10). Let $\Phi_{\mu}(z)$ be the vertex operators of $U_{q,p}(\widehat{{sl}}_{N})$ in the last section and consider their composition $\displaystyle\phi_{\mu_{1}\cdots\mu_{n}}(z_{1},\cdots,z_{n}):=\Phi_{\mu_{1}}(z_{1})\cdots\Phi_{\mu_{n}}(z_{n})\ :\ {\mathcal{F}}_{a,\nu}(\xi,\eta)\ \to\ {\mathcal{F}}_{a^{\prime},\nu}(\xi,\eta).$ Here $a^{\prime}\equiv a-n$ mod $N$. Let $[1,n]=\\{1,\cdots,n\\}$ and define the index sets $I_{l}:=\\{i\in[1,n]\ |\ \mu_{i}=l\\}$ $(l=1,\cdots,N)$. Set also $\lambda_{l}:=|I_{l}|$, $\lambda:=(\lambda_{1},\cdots,\lambda_{N})$. Then $I=(I_{1},\cdots,I_{N})$ is a partition of $[1,n]$, i.e. $\displaystyle I_{1}\cup\cdots\cup I_{N}=[1,n],\quad I_{k}\cap I_{l}=\emptyset\quad\mbox{$(k\not=l)$}.$ The partition $I$ thus obtained is often denoted by $I_{\mu_{1}\cdots\mu_{n}}$. For $\lambda=(\lambda_{1},\cdots,\lambda_{N})\in{\mathbb{N}}^{N}$, let ${\mathcal{I}}_{\lambda}$ be the set of all partitions $I=(I_{1},\cdots,I_{N})$ satisfying $|I_{l}|=\lambda_{l}\ (l=1,\cdots,N)$. Set also $\lambda^{(l)}:=\lambda_{1}+\cdots+\lambda_{l}$, $I^{(l)}:=I_{1}\cup\cdots\cup I_{l}$ and let $I^{(l)}=:\\{i^{(l)}_{1}<\cdots<i^{(l)}_{\lambda^{(l)}}\\}$. Note that ${\mathcal{I}}_{\lambda}$ specifies the coordinate flags of the partial flag variety $fl_{\lambda}$: $0=V_{0}\subset V_{1}\subset\cdots\subset V_{N-1}\subset V_{N}={\mathbb{C}}^{n}$ with $\dim V_{l}=\lambda^{(l)}$. Now let us substitute the realization (5.2) of $\Phi_{\mu}(z)$ into $\phi_{\mu_{1}\cdots\mu_{n}}(z_{1},\cdots,z_{n})$. In each $\Phi_{\mu}(z)$, we assign the integration variable $t^{(l)}_{a}$ to the elliptic current $F_{l}(\ast)$ $(l=1,\cdots,N-1)$ as its argument if $\Phi_{\mu}(z)$ is the $i^{(l)}_{a}$-th vertex operator, i.e. $z=z_{i^{(l)}_{a}}$.
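To illustrate the book-keeping, for $N=2$, $n=3$ and $(\mu_{1},\mu_{2},\mu_{3})=(1,2,1)$ one has $I_{1}=\\{1,3\\}$, $I_{2}=\\{2\\}$, hence $\lambda=(2,1)$, $\lambda^{(1)}=2$ and $I^{(1)}=\\{i^{(1)}_{1}=1<i^{(1)}_{2}=3\\}$, so that the integration variables $t^{(1)}_{1}$ and $t^{(1)}_{2}$ are assigned to the currents $F_{1}$ appearing in the first and the third vertex operators $\Phi_{1}(z_{1})$ and $\Phi_{1}(z_{3})$, respectively.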
After rearranging the order of the elliptic currents $F_{l}$’s and $\Phi_{N}$’s and taking their normal ordering as specified in $\widetilde{\Phi}(t,z)$ in the below, one obtains the following expression for $I=I_{\mu_{1}\cdots\mu_{n}}$, $\displaystyle\phi_{\mu_{1}\cdots\mu_{n}}(z_{1},\cdots,z_{n})=\oint_{{\mathbb{T}}^{M}}\prod_{l=1}^{N-1}\prod_{a=1}^{\lambda^{(l)}}\frac{dt^{(l)}_{a}}{2\pi it^{(l)}_{a}}\ \widetilde{\Phi}(t,z)\widetilde{W}_{I}(t,z,\Pi),\qquad|p|<|z_{1}|,\cdots,|z_{n}|<1,{}$ (6.1) $\displaystyle\widetilde{\Phi}(t,z)=:\Phi_{N}(z_{1})\cdots\Phi_{N}(z_{n})::F_{N-1}(t^{(N-1)}_{1})\cdots F_{N-1}(t^{(N-1)}_{\lambda^{(N-1)}}):\cdots:F_{1}(t_{1}^{(1)})\cdots F_{1}(t^{(1)}_{\lambda^{(1)}}):{}$ $\displaystyle\qquad\qquad\times\prod_{1\leq k<l\leq n}<\Phi_{N}(z_{k})\Phi_{N}(z_{l})>^{Sym}\prod_{l=1}^{N-1}\prod_{1\leq a<b\leq\lambda^{(l)}}<F_{l}(t^{(l)}_{a})F_{l}(t^{(l)}_{b})>^{Sym},$ $\displaystyle\widetilde{W}_{I}(t,z,\Pi)={\rm Sym}_{t^{(1)}}\cdots{\rm Sym}_{t^{(N-1)}}\widetilde{U}_{I}(t,z,\Pi),$ $\displaystyle\widetilde{U}_{I}(t,z,\Pi)$ $\displaystyle=$ $\displaystyle\prod_{l=1}^{N-1}\prod_{a=1}^{\lambda^{(l)}}\left(\frac{[v^{(l+1)}_{b}-v^{(l)}_{a}+(P+h)_{\mu_{s},l+1}-C_{\mu_{s},l+1}(s)][1]}{[v^{(l+1)}_{b}-v^{(l)}_{a}+1]\left[(P+h)_{\mu_{s},l+1}-C_{\mu_{s},l+1}(s)\right]}\right|_{i^{(N)}_{s}=i^{(l+1)}_{b}=i^{(l)}_{a}}$ $\displaystyle\qquad\qquad\qquad\times\prod_{b=1\atop i^{(l+1)}_{b}>i^{(l)}_{a}}^{\lambda^{(l+1)}}\frac{[v^{(l+1)}_{b}-v^{(l)}_{a}]}{[v^{(l+1)}_{b}-v^{(l)}_{a}+1]}\prod_{b=a+1}^{\lambda^{(l)}}\frac{[v^{(l)}_{a}-v^{(l)}_{b}-1]}{[v^{(l)}_{a}-v^{(l)}_{b}]}\Biggr{)},$ where we set $t^{(l)}_{a}=q^{2v^{(l)}_{a}}$, $t^{(N)}_{s}=z_{s}=q^{2u_{s}}$, $M=\sum_{l=1}^{N-1}\lambda^{(l)}$ and $C_{\mu_{s},l+1}(s)=\sum_{j=s+1}^{n}\langle\bar{\epsilon}_{\mu_{j}},h_{\mu_{s},l+1}\rangle$. The symbol $<\ \ >^{Sym}$ denotes the symmetric part of the operator product expansion coefficient $<\ \ >$ defined by for ${\mathcal{O}}=\Phi_{N},F_{l}$, $\displaystyle{\mathcal{O}}(w_{a}){\mathcal{O}}(w_{b})=<{\mathcal{O}}(w_{a}){\mathcal{O}}(w_{b})>:{\mathcal{O}}(w_{a}){\mathcal{O}}(w_{b}):.$ Hence $\widetilde{\Phi}(t,z)$ is an operator valued symmetric function in $z=(z_{1},\cdots,z_{n})$ as well as in $t^{(l)}=(t^{(l)}_{1},\cdots,t^{(l)}_{\lambda^{(l)}})$ for each $l$. The function $\widetilde{W}_{I}(t,z,\Pi)$ is the elliptic weight function of type ${sl}_{N}$. It is obtained by collecting all non-symmetric part of $<\ \ >$’s and factors arisen from the exchange among $\Phi_{N}$’s and $F_{l}$’s and by making symmetrizations ${\rm Sym}_{t^{(l)}}$ of all entries in $t^{(l)}$. Hence $\widetilde{W}_{I}(t,z,\Pi)$ is a symmetric function in $v^{(l)}_{a}\ (a=1,\cdots,\lambda^{(l)})$ for each $l$. The weight function $\widetilde{W}_{I}(t,z,\Pi)$ has several nice properties such as * • The triangular property: For $I,J\in{\mathcal{I}}_{\lambda}$, * (1) $\widetilde{W}_{J}(z_{I},z,\Pi)=0$ unless $I\leqslant J$. * (2) ${\displaystyle\widetilde{W}_{I}(z_{I},z,\Pi)=\prod_{1\leq k<l\leq N}\prod_{a\in I_{k}}\prod_{b\in I_{l}\atop a<b}\frac{[u_{b}-u_{a}]}{[u_{b}-u_{a}+1]}}$ Here $\leqslant$ denotes the partial ordering defined by $\displaystyle I\leqslant J\Leftrightarrow i^{(l)}_{a}\leq j^{(l)}_{a}\qquad\forall l,a.$ for $I^{(l)}=\\{i^{(l)}_{1}<\cdots<i^{(l)}_{\lambda^{(l)}}\\}$ and $J^{(l)}=\\{j^{(l)}_{1}<\cdots<j^{(l)}_{\lambda^{(l)}}\\}$ $(l=1,\cdots,N)$. We also denote by $t=z_{I}$ the specialization $t^{(l)}_{a}=z_{i^{(l)}_{a}}$ $(l=1,\cdots,N-1,a=1,\cdots,\lambda^{(l)})$[66]. 
* • The transition property: $\displaystyle\widetilde{W}_{I_{\cdots\ \mu_{i+1}\mu_{i}\cdots}}(t,\cdots,z_{i+1},z_{i},\cdots,\Pi){}$ $\displaystyle=\sum_{\mu_{i}^{\prime},\mu_{i+1}^{\prime}}{\overline{R}}(z_{i}/z_{i+1},\Pi q^{-2\sum_{j=i}^{n}\langle\bar{\epsilon}_{\mu_{j}},h\rangle})_{\mu_{i}\mu_{i+1}}^{\mu_{i}^{\prime}\mu_{i+1}^{\prime}}\ \widetilde{W}_{I_{\cdots\ \mu^{\prime}_{i}\mu^{\prime}_{i+1}\cdots}}(t,\cdots,z_{i},z_{i+1},\cdots,\Pi)$ as well as the orthogonality, quasi-periodicity, the shuffle product formula compatible to the wheel conditions etc.[42]. These properties are used to identify $\widetilde{W}_{I}(t,z,\Pi)$ with the elliptic stable envelope for the cotangent bundle to the partial flag variety (Sec.10). ## 7 Integral Solution of the Elliptic $q$-KZ equation Let us consider the $n$-point function $\phi_{\mu_{1}\cdots\mu_{n}}(z_{1},\cdots,z_{n})$ with the zero weight condition $\sum_{i=1}^{n}\bar{\epsilon}_{\mu_{i}}=0$, which is equivalent to consider $\lambda=(m^{N})$ with $mN=n$. In this case, one can take a trace over the space ${\mathcal{F}}_{a,\nu}(\xi,\eta)$. It then turns out that the trace gives a solution of the elliptic $q$-KZ equation due to the cyclic property of the trace and the commutation relation of the vertex operators (5.3)[22, 42]. From (6.1), one gets the following elliptic hypergeometric integral. $\displaystyle{\rm tr}_{{\mathcal{F}}_{a,\nu}}\left(Q^{-d}\Phi_{\mu_{1}}(z_{1})\cdots\Phi_{\mu_{N}}(z_{N})\right)=\oint_{{\mathbb{T}}^{M}}\prod_{l=1}^{N-1}\prod_{a=1}^{\lambda^{(l)}}\frac{dt^{(l)}_{a}}{2\pi it^{(l)}_{a}}\ e(t,\Pi){\Phi}(t,z)\widetilde{W}_{I}(t,z,\Pi),$ $\displaystyle e(t,\Pi)=\exp{\left\\{\frac{\sum_{l=1}^{N-1}\log(\Pi_{l}/\Pi_{l+1})\sum_{a}\log(t^{(l)}_{a})}{\log p}\right\\}},$ $\displaystyle{\Phi}(t,z)=\prod_{l=1}^{N-1}\left[\prod_{a=1}^{\lambda^{(l)}}\prod_{b=1}^{\lambda^{(l+1)}}\frac{\Gamma(t^{(l)}_{a}/t^{(l+1)}_{b};p,Q)}{\Gamma(p^{*}t^{(l)}_{a}/t^{(l+1)}_{b};p,Q)}\prod_{1\leq a<b\leq\lambda^{(l)}}\frac{\Gamma(p^{*}t^{(l)}_{a}/t^{(l)}_{b},p^{*}t^{(l)}_{b}/t^{(l)}_{a};p,Q)}{\Gamma(t^{(l)}_{a}/t^{(l)}_{b},t^{(l)}_{b}/t^{(l)}_{a};p,Q)}\right].$ Note that only the weight function $\widetilde{W}_{I}(t,z,\Pi)$ carries the tensor indices through $I$. We expect that $\widetilde{W}_{I}(t,z,\Pi),{I\in{\mathcal{I}}_{(m^{N})}}$ form a basis of the twisted de Rham cohomology associated with $T_{Q,z_{i}}\Phi(t,z)/\Phi(t,z)$, $i=1,\cdots,n$, where $T_{Q,z_{i}}$ is the shift operator $T_{Q,z_{i}}f(\cdots,z_{i},\cdots)=f(\cdots,Qz_{i},\cdots)$. Furthermore, to specify the cycle of the integral one may insert to the integral an additional elliptic weight function $\widetilde{W}_{J}(t,z,\Pi)$, $J\in{\mathcal{I}}_{(m^{N})}$, whose elliptic nome is $Q$ instead of $p$. Thus obtained elliptic hypergeometric integrals labeled by $I,J\in{\mathcal{I}}_{(m^{N})}$ are conjectured to give fundamental solutions of the elliptic $q$-KZ equation. See [70] for the trigonometric ${sl}_{2}$ case. In the trigonometric limit $Q\to 0$, the trace goes to the following vacuum expectation value. 
$\displaystyle\langle 0|\Phi_{\mu_{1}}(z_{1})\cdots\Phi_{\mu_{N}}(z_{N}){|0\rangle}=\oint_{{\mathbb{T}}^{M}}\prod_{l=1}^{N-1}\prod_{a=1}^{\lambda^{(l)}}\frac{dt^{(l)}_{a}}{2\pi it^{(l)}_{a}}\ e(t,\Pi){\Phi}^{trig.}(t,z)\widetilde{W}_{I}(t,z,\Pi),$ $\displaystyle{\Phi}^{trig.}(t,z)=\prod_{l=1}^{N-1}\left[\prod_{a=1}^{\lambda^{(l)}}\prod_{b=1}^{\lambda^{(l+1)}}\frac{(p^{*}t^{(l)}_{a}/t^{(l+1)}_{b};p)_{\infty}}{(t^{(l)}_{a}/t^{(l+1)}_{b};p)_{\infty}}\prod_{1\leq a<b\leq\lambda^{(l)}}\frac{(t^{(l)}_{a}/t^{(l)}_{b},t^{(l)}_{b}/t^{(l)}_{a};p)_{\infty}}{(p^{*}t^{(l)}_{a}/t^{(l)}_{b},p^{*}t^{(l)}_{b}/t^{(l)}_{a};p)_{\infty}}\right].$ This gives the integral formula for the vertex function for the equivariant $\mathrm{K}$-theory $\mathrm{K}_{T}(T^{*}fl_{\lambda})$ with $T=({\mathbb{C}}^{\times})^{n}\times{\mathbb{C}}^{\times}$, which is the generating function counting the quasi-maps from ${\mathbb{P}}^{1}$ to $T^{*}fl_{\lambda}$[49]. Here the elliptic weight function $\widetilde{W}_{I}$ plays the role of the pole subtraction matrix[2]. In particular, for the full flag variety case, i.e. $\lambda^{(l)}=l\ (l=1,\cdots,N-1)$ with $n=N$, it gives the integral formula for the Macdonald symmetric function[59], which is also known as the 3d vortex partition function of the $\mathrm{T}[U(N)]$ theory, if one drops the weight function $\widetilde{W}_{I}$ from the integrand. Indeed, the first and the second blocks in ${\Phi}(t,z)$ are elliptic analogues of the Cauchy-type reproducing kernel and the Macdonald weight function, respectively, reading $(p,p^{*})$ as $(q,t)$ in the Macdonald theory. Hence the vertex operators of $U_{q,p}(\widehat{{sl}}_{N})$ play the role of the vertex operators of the deformed $W$-algebra ${\mathcal{W}}_{p,p^{*}}{sl}_{N}$[3, 25]. ## 8 Algebraic Analysis of the Elliptic Lattice Models The algebraic analysis [30] is a scheme for the mathematical formulation of solvable 2d lattice models defined by $R$-matrices. It formulates the model on the infinite lattice directly by using both finite and infinite dimensional representations of the corresponding affine quantum groups $U_{q}({g})$ or elliptic quantum groups $U_{q,p}({g})$. In particular, the infinite dimensional representations are identified with the spaces of states of the model specified by the ground state boundary condition, where the configurations at sites sufficiently far from the center of the lattice are given by the ground state configurations. In addition, the two types of intertwining operators of the quantum groups, the type I and type II vertex operators, realize the local operators of the model, such as spin operators acting on the infinite dimensional representations (type I), and the creation operators of the physical excitations (type II), respectively. Then the traces of compositions of the vertex operators give correlation functions as well as form factors of the models. Hence these are characterized as solutions of the (elliptic) $q$-KZ equations[23, 22, 30] as discussed in the last section for the type I case. In this sense, the algebraic analysis is an off-critical extension of CFT [8]. There are series of elliptic solvable lattice models defined by the elliptic solutions of the face-type YBE, i.e. the elliptic dynamical $R$-matrices, associated with affine Lie algebras[5, 32, 13, 50, 51]. The critical behavior of such models is described by the $W$-algebra ${\mathcal{W}}\bar{{g}}$ of the coset type $({g})_{r-h^{\vee}-k}\oplus({g})_{k}\supset({g}_{\rm diag.})_{r-h^{\vee}}$[14, 13].
Here $r$ is taken as a real positive parameter and called the restriction height. Accordingly the elliptic quantum group $U_{q,p}({g})$ plays a central role in the algebraic analysis of these models as a deformation of the $W$-algebra ${\cal W}_{p,p^{*}}\bar{{g}}$. See Sec.3. The spaces of states with the ground state boundary condition are identified with the irreducible ${\cal W}_{p,p^{*}}\bar{{g}}$-modules, whose direct sum are isomorphic to the $U_{q,p}({g})$-modules (Sec.4.2). The type I and II vertex operators of $U_{q,p}({g})$ provide correlation functions and form factors of the model. See [22, 55, 37]. In the case ${g}=\widehat{{sl}}_{N}$, one also has the vertex type elliptic lattice models[6, 7]. These are related to the face models by the so-called vertex-face correspondence[6, 31]. The algebraic analysis of the vertex models is carried out based on the one of the face models by applying the vertex-face correspondence[52, 37, 10]. ## 9 Level-0 Action on the Gelfand-Tsetlin Basis Let $(\pi_{z},{{\mathcal{V}}}_{z})$ be the evaluation representation of $U_{q,p}(\widehat{{sl}}_{N})$ in Sec.4.1. The action of the $L$-operator $L^{+}(1/w)$ on ${{\mathcal{V}}}_{w}\,\widetilde{\otimes}\,{{\mathcal{V}}}_{z}$ is extended to the tensor space ${{\mathcal{V}}}_{w}\,\widetilde{\otimes}\,{{\mathcal{V}}}_{z_{1}}\,\widetilde{\otimes}\,\cdots\,\widetilde{\otimes}\,{{\mathcal{V}}}_{z_{n}}$ by the comultiplication $\displaystyle(\pi_{z_{1}}\otimes\cdots\otimes\pi_{z_{n}}){\Delta^{\prime}}^{(n-1)}(L^{\pm}(1/w))$ $\displaystyle=\bar{R}^{(0n)}(z_{n}/w,\Pi^{*}q^{2\sum_{j=1}^{n-1}h^{(j)}}){{\overline{R}}}^{(0n-1)}(z_{n-1}/w,\Pi^{*}q^{2\sum_{j=1}^{n-2}h^{(j)}})\cdots\bar{R}^{(01)}(z_{1}/w,\Pi^{*}).$ Here we take the opposite comultiplication $\Delta^{\prime}$ following [43]. This is the action on the standard basis $\displaystyle v_{I}:=v_{\mu_{1}}\,\widetilde{\otimes}\,\cdots\,\widetilde{\otimes}\,v_{\mu_{n}},\qquad I=I_{\mu_{1}\cdots\mu_{n}}\in{\mathcal{I}}_{\lambda},\ \lambda\in{\mathbb{N}}^{N},\ |\lambda|=n.$ Let us consider the following change of basis[43]. $\displaystyle\xi_{I}=\sum_{J\in{\mathcal{I}}_{\lambda}}\widetilde{W}_{J}(z^{-1}_{I},z^{-1},\Pi q^{2\sum_{j=1}^{n}<\bar{\epsilon}_{\mu_{j}},h>})v_{J}.$ (9.1) The transition matrix is given by the specialized elliptic weight function so that it is lower triangular (Sec.6). It turns out that the new vectors $\xi_{I}$ $(I\in{\mathcal{I}}_{\lambda},\lambda\in{\mathbb{N}}^{N},|\lambda|=n)$ form the Gelfand-Tsetlin (GT) basis of ${{\mathcal{V}}}_{z_{1}}\,\widetilde{\otimes}\,\cdots\,\widetilde{\otimes}\,{{\mathcal{V}}}_{z_{n}}$, on which the commuting subalgebra generated by $\alpha_{j,m}$ $(j=1,\cdots,N-1,\ m\in{\mathbb{Z}}\backslash\\{0\\})$ is diagonalized simultaneously. 
In fact, the level-0 action of the elliptic currents of $U_{q,p}(\widehat{{sl}}_{N})$ on $\xi_{I}$ is given by $\displaystyle\phi_{j}^{\pm}(w)\xi_{I}=\left.\prod_{a\in I_{j}}\frac{[u_{a}-v+1]}{[u_{a}-v]}\right|_{\pm}\left.\prod_{b\in{I_{j+1}}}\frac{[u_{b}-v-1]}{[u_{b}-v]}\right|_{\pm}\ e^{-Q_{\alpha_{j}}}\xi_{I}$ $\displaystyle E_{j}(w)\xi_{I}=a^{*}\sum_{i\in I_{j+1}}\delta(z_{i}/w)\prod_{k\in I_{j+1}\atop\not=i}\frac{[u_{i}-u_{k}+1]}{[u_{i}-u_{k}]}\ e^{-Q_{\alpha_{j}}}\xi_{I^{i^{\prime}}}$ $\displaystyle F_{j}(w)\xi_{I}=a\sum_{i\in I_{j}}\delta(z_{i}/w)\prod_{k\in I_{j}\atop\not=i}\frac{[u_{k}-u_{i}+1]}{[u_{k}-u_{i}]}\ \xi_{I^{{}^{\prime}i}}$ where ${aa^{*}=-\frac{1}{q-q^{-1}}\frac{[0]^{\prime}}{[1]}}$, and $I=(I_{1},\cdots,I_{N})$, $I^{i^{\prime}}\in{\mathcal{I}}_{(\lambda_{1},\cdots,\lambda_{j}+1,\lambda_{j+1}-1,\cdots,\lambda_{N})}$, $I^{{}^{\prime}i}\in{\mathcal{I}}_{(\lambda_{1},\cdots,\lambda_{j}-1,\lambda_{j+1}+1,\cdots,\lambda_{N})}$ with $\displaystyle(I^{i^{\prime}})_{j}=I_{j}\cup\\{i\\},\quad(I^{i^{\prime}})_{j+1}=I_{j+1}-\\{i\\},\quad(I^{i^{\prime}})_{k}=I_{k}\ \ (k\not=j,j+1),$ $\displaystyle(I^{{}^{\prime}i})_{j}=I_{j}-\\{i\\},\quad(I^{{}^{\prime}i})_{j+1}=I_{j+1}\cup\\{i\\},\quad(I^{{}^{\prime}i})_{k}=I_{k}\ \ (k\not=j,j+1).$ Note that this action is given in terms of the partitions in a completely combinatorial way. In the next section, this is identified with a geometric action of $U_{q,p}(\widehat{{sl}}_{N})$ on the equivariant elliptic cohomology ${\displaystyle\bigoplus_{\lambda\in{\mathbb{N}}^{N},|\lambda|=n}\mathrm{E}_{T}(T^{*}fl_{\lambda})}$ under the identification of the elliptic weight function with the elliptic stable envelope. ## 10 Geometric Interpretation The notion of stable envelopes was initiated in [57, 65]. They form a good basis of equivariant cohomology and $\mathrm{K}$-theory of Nakajima quiver varieties[61, 62] and have nice applications to enumerative geometry, geometric representation theory and quantum integrable systems. In particular, the transition matrices between stable envelopes defined for different chambers give the $R$-matrices satisfying the YBE, so that they provide a new geometric formulation of quantum groups as well as quantum integrable systems associated with the quiver varieties. The elliptic version of stable envelopes was defined in [2] for the equivariant elliptic cohomology $\mathrm{E}_{T}(X)$ of the Nakajima quiver varieties $X$. Remarkably, they provide the elliptic dynamical $R$-matrix as their transition matrix; for the case $X=T^{*}fl_{\lambda}$ it coincides with the $R$-matrix (2.3). We here expose an identification of the elliptic stable envelopes for $\mathrm{E}_{T}(T^{*}fl_{\lambda})$ with the elliptic weight functions for $U_{q,p}(\widehat{{sl}}_{N})$ described in Sec.6. This makes the correspondence between the geometry of quiver varieties and the representation theory of EQGs more transparent. For the definition of the elliptic cohomology and the elliptic stable envelopes, the reader can consult[2]. The elliptic stable envelope $\mathrm{Stab}_{\mathfrak{C}}(F_{I})$ for $\mathrm{E}_{T}(T^{*}fl_{\lambda})$ at the torus fixed point $F_{I}$ labeled by the partition $I\in{\mathcal{I}}_{\lambda}$ can be constructed explicitly by the abelianization procedure[2, 43].
On the other hand, let us consider the elliptic weight function modified as $\displaystyle{\mathcal{W}}_{I}(t,z,\Pi)=\frac{H_{\lambda}(t,z)\widetilde{W}_{I}(t,z,\Pi)}{E_{\lambda}(t)},$ $\displaystyle H_{\lambda}(t,z)=\prod_{l=1}^{N-1}\prod_{a=1}^{\lambda^{(l)}}\prod_{b=1}^{\lambda^{(l+1)}}\left[v^{(l+1)}_{b}-v^{(l)}_{a}+1\right],\quad E_{\lambda}(t)=\prod_{l=1}^{N-1}\prod_{a=1}^{\lambda^{(l)}}\prod_{b=1}^{\lambda^{(l)}}[v^{(l)}_{b}-v^{(l)}_{a}+1].$ This yields the expresion $\displaystyle{\mathcal{W}}_{I}(t,z,\Pi)$ $\displaystyle=$ $\displaystyle{\rm Sym}_{t^{(1)}}\cdots{\rm Sym}_{t^{(N-1)}}\ {U}_{I}(t,z,\Pi),$ $\displaystyle U_{I}(t,z,\Pi)$ $\displaystyle=$ $\displaystyle\prod_{l=1}^{N-1}\frac{\prod_{a=1}^{\lambda^{(l)}}u^{(l)}_{I}(t^{(l)}_{a},t^{(l+1)},\Pi_{\mu_{\mbox{\tiny${i^{(l)}_{a}}$}},l+1}q^{-2C_{\mu_{\mbox{\tiny${i^{(l)}_{a}}$}},l+1}(i^{(l)}_{a})})}{\prod_{1\leq a<b\leq\lambda^{(l)}}{[v^{(l)}_{a}-v^{(l)}_{b}]}{[v^{(l)}_{b}-v^{(l)}_{a}-1]}},$ $\displaystyle u^{(l)}_{I}(t^{(l)}_{a},t^{(l+1)},\Pi_{j,k})$ $\displaystyle=$ $\displaystyle\left.\frac{\left[v^{(l+1)}_{b}-v^{(l)}_{a}+(P+h)_{j,k}\right]}{[(P+h)_{j,k}]}\right|_{i^{(l+1)}_{b}=i^{(l)}_{a}}{}$ $\displaystyle\qquad\times\prod_{b=1\atop i^{(l+1)}_{b}>i^{(l)}_{a}}^{\lambda^{(l+1)}}{\left[v^{(l+1)}_{b}-v^{(l)}_{a}\right]}\prod_{b=1\atop i^{(l+1)}_{b}<i^{(l)}_{a}}^{\lambda^{(l+1)}}{\left[v^{(l+1)}_{b}-v^{(l)}_{a}+{1}\right]}.$ Comparing these expressions, one finds $\mathrm{Stab}_{\mathfrak{C}}(F_{I})$ and ${\mathcal{W}}_{I}(z,t,\Pi)$ are identical as $\displaystyle\mathrm{Stab}_{\mathfrak{C}}(F_{I})={\mathcal{W}}_{\sigma_{0}(I)}({\tilde{t}},\sigma_{0}(z^{-1}),\Pi^{-1}).$ (10.1) Here for the longest element $\sigma_{0}$ in $\mathfrak{S}_{n}$ and $I=I_{\mu_{1}\cdots\mu_{n}}$, we define ${\sigma}_{0}(I)=I_{\mu_{{\sigma}_{0}(1)}\cdots\mu_{{\sigma}_{0}(n)}}$ and set ${\sigma}_{0}(I)=(\tilde{I}_{1},\dots,\tilde{I}_{N})$, $\tilde{I}^{(l)}=\tilde{I}_{1}\cup\cdots\cup\tilde{I}_{l}$ and $\tilde{I}^{(l)}=\\{\tilde{i}^{(l)}_{1}<\cdots<\tilde{i}^{(l)}_{\lambda^{(l)}}\\}$. Then ${\tilde{t}}$ denotes the set of all $t^{(l)}_{a}$ corresponding to $\tilde{i}^{(l)}_{a}$. The chamber $\mathfrak{C}$ is taken as $|z_{1}|<\cdots<|z_{n}|$. In addition, the restriction $t=z_{J}$ defined in Sec.6 can be identified with the restriction of the stable class $\mathrm{Stab}_{\mathfrak{C}}(F_{I})$ to the fixed point $F_{J}$. One then finds $\displaystyle{\rm Stab}_{\mathfrak{C}}(F_{I})|_{F_{J}}={\mathcal{W}}_{\sigma_{0}(I)}(z^{-1}_{J},\sigma_{0}(z^{-1}),\Pi^{-1}).$ (10.2) Such coincidence as well as the integral formula for the vertex function obtained as correlation function of the vertex operators in Sec.7 lead us to the following dictionary from the level-0 representation of $U_{q,p}(\widehat{{sl}}_{N})$ to the equivariant elliptic cohomology $\mathrm{E}_{T}(T^{*}fl_{\lambda})$. 
$\displaystyle\mbox{evaluation parameters}\ z_{1},\cdots,z_{n}\ \leftrightarrow\ \mbox{equivariant parameters}\in({\mathbb{C}}^{\times})^{n},$ $\displaystyle\mbox{dynamical parameters}\ \Pi_{j}\ \leftrightarrow\ \mbox{K\"{a}hler parameters}\in\mathrm{Pic}_{T}(X)\otimes_{\mathbb{Z}}E,$ $\displaystyle\mbox{integration variables}\ t^{(l)}_{a}\ (a=1,\cdots,\lambda^{(l)})\ \leftrightarrow\ \mbox{Chern roots of the tautological vector bundle ${\mathcal{V}}_{l}$ of $\mathrm{rk}\;\lambda^{(l)}$ on the $l$-th vertex of the quiver diagram},$ $\displaystyle\mbox{the weight}\ (\mu_{1},\cdots,\mu_{n})\ \leftrightarrow\ \mbox{the fixed point}\ F_{I_{\mu_{1},\cdots,\mu_{n}}}.$ Furthermore, denoting the fixed point classes by $[I]$, one has the localization formula for the stable envelope: $\displaystyle{\rm Stab}_{\mathfrak{C}}(F_{K})=\sum_{I\in{\mathcal{I}}_{\lambda}}\frac{{\rm Stab}_{\mathfrak{C}}(F_{K})|_{F_{I}}}{N(z_{I})}\ [I],$ where $N(z_{I})$ is a certain normalization. Under the identification (10.2), one can compare this with the change of basis relation from the GT basis $\xi_{I}$ to the standard basis $v_{K}$, i.e. the inverse of the relation (9.1), obtained by applying the orthogonality property of the elliptic weight functions. Then one finds that these two are identical under the correspondence[43] $\displaystyle\mbox{standard bases}\ v_{I}\ \leftrightarrow\ \mbox{stable classes}\ \mathrm{Stab}_{\mathfrak{C}}(F_{I}),$ $\displaystyle\mbox{the Gelfand-Tsetlin bases}\ \xi_{I}\ \leftrightarrow\ \mbox{fixed point classes}\ [I].$ This allows us to interpret the level-0 action of $U_{q,p}(\widehat{{sl}}_{N})$ on the GT basis of ${\mathcal{V}}_{z_{1}}\otimes\cdots\otimes{\mathcal{V}}_{z_{n}}$ in Sec.9 as an action on the fixed point classes in $\bigoplus_{\lambda\in{\mathbb{N}}^{N},|\lambda|=n}\mathrm{E}_{T}(T^{*}fl_{\lambda})$. ## 11 Elliptic Quantum Toroidal Algebras A toroidal algebra ${g}_{tor}$ associated with a complex simple Lie algebra $\bar{{g}}$ is a canonical two-dimensional central extension of the double loop Lie algebra of maps ${\mathbb{C}}^{\times}\times{\mathbb{C}}^{\times}\to\bar{{g}}$. Its quantum version, i.e. the quantum toroidal algebra, was introduced in [26]. We denote it by $U_{q,\kappa}({g}_{tor})$. It is formulated as an extension of the quantum affine algebra $U_{q}({{g}})$ in the Drinfeld realization[15] by introducing new generators, i.e. the 0-th generators w.r.t. the index set $I$, and replacing the finite-type Cartan matrix $(a_{i,j}),\ i,j\in\bar{I}$, appearing in the defining relations of $U_{q}({{g}})$ with the affine-type generalized Cartan matrix $(a_{i,j}),\ i,j\in I$. This procedure is called the quantum affinization. In particular, for the case $\bar{{g}}={{gl}}_{N}$ one can introduce an extra parameter $\kappa$ due to a cyclic property of the affine Dynkin diagram of type $A^{(1)}_{N-1}$. In the same way, the elliptic quantum toroidal algebra $U_{q,\kappa,p}({g}_{tor})$ is formulated as the quantum affinization of the elliptic algebra $U_{q,p}({g})$ by adding the 0-th generators $\alpha_{0,m},e_{0,n},f_{0,n},K^{\pm}_{0}\ (m\in{\mathbb{Z}}\backslash\\{0\\},n\in{\mathbb{Z}})$[48]. As a result, the element $\prod_{i\in I}(K^{+}_{i})^{a_{i}^{\vee}}$ becomes a central element. Here $a^{\vee}_{i}\ (i\in I)$ denote the colabels of the affine Dynkin diagram[33].
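For type $A^{(1)}_{N-1}$, for example, all the colabels are equal to one, so that this central element is simply $\prod_{i\in I}K^{+}_{i}$.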
In addition, ${U}_{q,\kappa,p}({g}_{tor})$ possesses two subalgebras, both of which are isomorphic to the elliptic algebra $U_{q,p}({g})$: one generated by $\alpha_{i,l},k_{i}^{\pm},e_{i,n},f_{i,n},q^{\pm{c}/{2}},q^{d}$ $(i\in\bar{I},l\in{\mathbb{Z}}\backslash\\{0\\},n\in{\mathbb{Z}})$ and the other by $e_{i,0},f_{i,0},K^{\pm}_{i},q^{d}$ $(i\in I)$. These are elliptic analogues of those in $U_{q,\kappa}({g}_{tor})$[26]. On the other hand, the ${{gl}}_{1}$ version of the quantum toroidal algebra $U_{q,t}({{gl}}_{1,tor})$ was introduced in [58] as a $q,t$-deformation of the $W_{1+\infty}$ algebra. Remarkably, $U_{q,t}({{gl}}_{1,tor})$ is isomorphic to the elliptic Hall algebra[68]. Representations of $U_{q,t}({{gl}}_{1,tor})$ also have a deep connection to the Macdonald theory[58, 20, 69, 25]. In addition, representations of $U_{q,t}({{gl}}_{1,tor})$ provide a relevant scheme for calculating the instanton partition functions[63], as well as for studying the Alday-Gaiotto-Tachikawa (AGT) correspondence[1], for the 5d and 6d lifts of the 4d ${{\mathcal{N}}=2}$ SUSY gauge theories, which are known as linear quiver gauge theories. The elliptic quantum toroidal algebra $U_{q,t,p}({{gl}}_{1,tor})$ was formulated in [47]. It has several nice representations, such as the level-(0,0) representation realized in terms of the elliptic Ruijsenaars difference operators[67], as well as an elliptic analogue of the $q$-Fock representation of $U_{q,t}({{gl}}_{1,tor})$, which is expected to give a geometric representation on the equivariant elliptic cohomology of the Hilbert scheme of points on ${\mathbb{C}}^{2}$, $\bigoplus_{n}\mathrm{E}_{T}(\mathrm{Hilb}_{n}({\mathbb{C}}^{2}))$. These suggest a deep connection of $U_{q,t,p}({{gl}}_{1,tor})$ to an expected elliptic analogue of the theory of Macdonald symmetric functions. It is also remarkable that $U_{q,t,p}({{gl}}_{1,tor})$ realizes a deformation of the Jordan quiver $W$-algebra ${\mathcal{W}}_{p,p^{*}}(\Gamma(\widehat{A}_{0}))$ introduced in [34], in the same way as the elliptic algebra $U_{q,p}({g})$ realizes the deformed $W$-algebra ${\cal W}_{p,p^{*}}(\bar{{g}})$ discussed in Sec.3. Namely, the level-$(1,N)$ elliptic currents of $U_{q,t,p}({{gl}}_{1,tor})$ give a realization of the screening currents, and a composition of the vertex operators w.r.t. the Drinfeld comultiplication gives a realization of the generating functions of ${\mathcal{W}}_{p,p^{*}}(\Gamma(\widehat{A}_{0}))$. Moreover, the realization thus obtained turns out to provide a relevant scheme for the instanton calculus of the 5d and 6d lifts of the 4d ${\mathcal{N}}=2^{*}$ SUSY gauge theories, i.e. the ${\mathcal{N}}=2$ SUSY gauge theories coupled with the adjoint matter[63], known as the Jordan quiver gauge theories (also called the ADHM quiver gauge theories). There, one of the essential features is that $U_{q,t,p}({{gl}}_{1,tor})$ at level (1,$N$) possesses the four parameters $q,t,p,p^{*}$ satisfying the single constraint $p/p^{*}=t/q$, which play the role of the $SU(4)$ $\Omega$-deformation parameters[64].
## 12 Summary
We have presented the EQGs $U_{q,p}({g}),U_{q,\kappa,p}({g}_{tor})$ and $U_{q,t,p}({{gl}}_{1,tor})$ in the Drinfeld realization. Their main features are summarized as follows.
* • They allow one to treat both the level-0 and level $\not=0$ representations in a unified way. In particular, the level $\not=0$ representations yield the two elliptic nomes $p,p^{*}$, which are identified with $q,t$ in the Macdonald theory and the deformed $W$-algebras.
In the same way, $U_{q,t,p}({{gl}}_{1,tor})$ possesses the $SU(4)$ $\Omega$-deformation parameters $q,t,p,p^{*}$ in the level $(1,N)$ representation.
* • Their coalgebra structure allows one to formulate the vertex operators as intertwining operators of modules, in which the level-0 and level $\not=0$ representations are mixed as a tensor product.
* • The vertex operators are important objects with many applications to the formulation of mathematical and physical quantities such as elliptic weight functions, integral solutions of the elliptic $q$-KZ equation, vertex functions, and correlation functions and form factors in the algebraic analysis of elliptic solvable lattice models.
* • They realize a deformation of the $W$-algebras including the affine quiver types. In particular, the vertex operators of EQGs provide those of the deformed $W$-algebras.
We have also shown a correspondence between the level-0 representation of $U_{q,p}(\widehat{{sl}}_{N})$ and the equivariant elliptic cohomology $\mathrm{E}_{T}(T^{*}fl_{\lambda})$ by deriving the elliptic weight functions, the integral solutions of the (elliptic) $q$-KZ equation as well as the level-0 action on the Gelfand-Tsetlin basis. In this process, the vertex operators of $U_{q,p}(\widehat{{sl}}_{N})$ play an essential role. However, on the geometry side neither the non-zero-level representations nor the vertex operators intertwining them have yet been understood. It remains an outstanding problem to formulate the vertex operators of EQGs geometrically. In addition, since the formulations of elliptic quantum groups and equivariant elliptic cohomologies possess the equivariant and Kähler parameters in a unified way, one of their promising applications appears to be the 3d-mirror symmetry for a pair of supersymmetric gauge theories with ${\mathcal{N}}=4$ supersymmetry[2]. This provides an important example of the deeper mathematical problem of symplectic duality for conical symplectic resolutions. In such gauge theories, Nakajima quiver varieties are known to give moduli spaces of vacua called the Higgs branch. The 3d-mirror symmetry states that the Higgs branch of the dual theory conjecturally coincides with the Coulomb branch of the original one, in such a way that the roles of the equivariant and the Kähler parameters of the dual theories are exchanged. Establishing the duality at the level of physical quantities such as vertex functions remains an outstanding problem.
## References
* [1] L.F. Alday, D. Gaiotto and Y. Tachikawa, Liouville Correlation Functions from Four Dimensional Gauge Theories, Lett.Math.Phys. 91 (2010), 167–197.
* [2] M. Aganagic and A. Okounkov, Elliptic Stable Envelopes, Preprint 2016, arXiv:1604.00423.
* [3] M. Aganagic and S. Shakirov, Gauge/vortex duality and AGT, Math. Phys. Stud. Springer, Cham (2016), 419–448.
* [4] K. Aomoto and M. Kita, Theory of Hypergeometric Functions, 1994, Maruzen Pub. (in Japanese); English Edition, 2011, Springer Monographs in Mathematics, Springer Japan.
* [5] G.E. Andrews, R.J. Baxter and P.J. Forrester, Eight vertex SOS model and generalized Rogers-Ramanujan-type identities, J.Stat.Phys. 35 (1984), 193–266.
* [6] R.J. Baxter, Exactly solved models in statistical mechanics, Academic Press, London (1982).
* [7] A.A. Belavin, Dynamical symmetry of integrable quantum systems, Nucl. Phys. B180 [FS2] (1981), 189-200.
* [8] A.A. Belavin, A.M. Polyakov and A.B. Zamolodchikov, Infinite conformal symmetry in two-dimensional quantum field theory, Nuclear Phys. B 241 (1984), 333–380.
* [9] P. Bouwknegt and K.
Schoutens, $W$ Symmetry in Conformal Field Theory, Phys.Rep. 223 (1993), 183–276. * [10] J.-S. Caux, H. Konno, M. Sorrell and R. Weston, Exact Form-factor Results for the Longitudinal Structure Factor of the Massless $XXZ$ Model in Zero Field, J. Stat. Mech. (2012), P01007 (40pages). * [11] V. Chari and A. Pressley, A Guide to Quantum Groups, 1994, Cambridge Univ. Press. * [12] P. Christe and F. Ravanini, $G_{N}\otimes G_{L}/G_{N+L}$ Conformal Field Theories and Their Modular Invariant Partition Functions. Int.J.Mod.Phys. A4, 897-920 (1989) * [13] E. Date, M. Jimbo, A. Kuniba, T. Miwa and M. Okado. Exactly solvable SOS models. Nucl. Phys. B, 290 [FS20]: 231–273, 1987 * [14] E. Date and M. Jimbo and M. Okado, Crystal Base and $q$-Vertex Operators, Comm. Math. Phys., 155, 1993, 47–69. * [15] V.G. Drinfeld, A New Realization of Yangians and Quantized Affine Algebras. Soviet Math. Dokl. 36 (1988) 212-216. * [16] P. Etingof and A. Varchenko, Exchange Dynamical Quantum Groups, Comm.Math.Phys. 205 (1999), 19–52. * [17] R.M. Farghly, H. Konno, K. Oshima, Elliptic Algebra $U_{q,p}(\widehat{{g}})$ and Quantum Z-algebras, Alg. Rep. Theory 18 (2014), 103-135. * [18] B. Feigin and E. Frenkel, Affine Kac-Moody Algebras at the Critical Level and Gelfand-Dikii Algebras, Int.J.Mod.Phys. A 7, Suppl.1A, Proceedings of the RIMS Project 1991, “Infinite Analysis” (1992), 197–215. * [19] B. Feigin and D. Fuchs, Skew-symmetric invariant differential operators on the line and Verma modules over the Virasoro algebra, Funktsional. Anal. i Prilozhen. 16 (1982), 47–63. English translation: Funct.Annal.Appl. 16 (1982), 114 –126; Verma modules over a Virasoro algebra. Funktsional. Anal. i Prilozhen. 17 (1983), 91– 92. English translation: Funct.Annal.Appl. 17 (1983), 241– 242. * [20] B. Feigin and A. Tsymbaliuk, Bethe Subalgebras of $U_{q}(\widehat{{{gl}}}_{n})$ via Shuffle Algebras, Selecta Math. (N.S.) 22 (2016), no. 2, 979–1011. * [21] G. Felder, Elliptic Quantum Groups, Proc. ICMP Paris-1994 (1995), 211–218. * [22] O. Foda, M. Jimbo, T. Miwa, K. Miki and A. Nakayashiki, Vertex Operators in Solvable Lattice Models, J.Math.Phys. 35 (1994) 13–46. * [23] I. B. Frenkel and N. Reshetikhin, Quantum Affine Algebras and Holonomic Difference Equations, Comm.Math.Phys. 146 (1992) 1–60. * [24] E. Frenkel and N. Reshetikhin, Deformation of $W$-Algebras Associated to Simple Lie Algebras, Comm.Math.Phys. 197 (1998) 1-32. * [25] M. Fukuda, Y. Ohkubo and J. Shiraishi, Generalized Macdonald Functions on Fock Tensor Spaces and Duality Formula for Changing Preferred Direction, arXiv:1903.05905. * [26] V. Ginzburg, M. Kapranov and E. Vasserot, Elliptic Algebras and Equivariant Elliptic Cohomology I, Preprint (1995), arXiv:q-alg/9505012. * [27] P. Goddard and A. Kent and D. Olive. Virasoro algebras and coset space models Phys. Lett. B,152:88–,1985; Unitary representations of the Virasoro and super-Virasoro algebras Comm. Math. Phys.,103:105–119,1986. * [28] M. Jimbo, H. Konno, S. Odake and J. Shiraishi, Elliptic Algebra $U_{q,p}(\widehat{\mathfrak{sl}}_{2})$: Drinfeld Currents and Vertex Operators, Comm. Math. Phys. 199 (1999), 605–647 * [29] M. Jimbo, H. Konno, S. Odake and J. Shiraishi, Quasi-Hopf Twistors for Elliptic Quantum Groups, Transformation Groups 4 (1999), 303–327. * [30] M. Jimbo and T. Miwa. Algebraic Analysis of Solvable Lattice Models. CBMS Regional Conference Series in Mathematics vol. 85, AMS, 1994. * [31] M. Jimbo and T. Miwa and M. 
Okado, Solvable Lattice Models whose States are Dominant Integral Weights of $A^{(1)}_{n-1}$, Lett. Math. Phys. 14 (1987), 123–131. * [32] M. Jimbo and T. Miwa and M. Okado, Solvable Lattice Models Related to the Vector Representation of Classical Simple Lie Algebras, Comm. Math. Phys. 116 (1988), 507–525. * [33] V. G. Kac, Infinite Dimensional Lie algebras, 3rd. ed. Cambridge University Press, 1990\. * [34] T. Kimura and V. Pestun, Quiver $W$-algebras Lett.Math.Phys. 108 1351-1381 (2018) * [35] E. Koelink and H. Rosengren, Harmonic Analysis on the $SU(2)$ Dynamical Quantum Group, Acta.Appl.Math., 69 (2001), 163–220. * [36] T. Kojima and H. Konno, The Elliptic Algebra $U_{q,p}(\widehat{{sl}}_{N})$ and the Drinfeld Realization of the Elliptic Quantum Group ${{\mathcal{B}}_{q,\lambda}}(\widehat{{sl}}_{N})$, Comm. Math. Phys. 239 (2003), 405-447. * [37] T. Kojima, H. Konno and R. Weston, The Vertex-Face Correspondence and Correlation Functions of the Fusion Eight-Vertex Models I: The General Formalism, Nucl. Phys. B720 (2005), 348-398. * [38] H. Konno, An Elliptic Algebra $U_{q,p}(\widehat{{sl}}_{2})$ and the Fusion RSOS Models, Comm. Math. Phys. 195 (1998), 373–403. * [39] H. Konno, Dynamical $R$ Matrices of Elliptic Quantum Groups and Connection Matrices for the $q$-KZ Equations, SIGMA, 2 (2006), Paper 091, 25 pages. * [40] H. Konno, Elliptic Quantum Group $U_{q,p}(\widehat{{sl}}_{2})$ and Vertex Operators, J.Phys.A41 (2008) 194012. * [41] H. Konno, Elliptic Quantum Group $U_{q,p}(\widehat{{sl}}_{2})$, Hopf Algebroid Structure and Elliptic Hypergoemetric Series, J. Geom. Phys. 59 (2009), 1485-1511. * [42] H. Konno, Elliptic Weight Functions and Elliptic $q$-KZ Equation, Journal of Integrable Systems 2 (2017) 1-43. doi: 10.1093/integr/xyx011. * [43] H. Konno, Elliptic Stable Envelopes and Finite-dimensional Representations of Elliptic Quantum Group, Journal of Integrable Systems 3 (2018) 1-43. doi: 10.1093/integr/xyy012. * [44] H. Konno, Elliptic Quantum Groups $U_{q,p}(\widehat{{gl}}_{N})$ and $E_{q,p}(\widehat{{gl}}_{N})$, Adv.Studies in Pure Math. 76 (2018), 347-417. * [45] H. Konno, Elliptic Quantum Groups: Representations and Related Geometry, Springer Briefs in Mathematical Physics (2020) Springer. * [46] H. Konno and K. Oshima, Elliptic Quantum Group $U_{q,p}(B^{(1)}_{N})$ and Vertex Operators, RIMS Kokyuroku Bessatsu , B62 (2017) 97-148. * [47] H. Konno and K. Oshima, Elliptic Quantum Toroidal Algebra $U_{q,t,p}({{gl}}_{1,tor})$ and Affine Quiver Gauge Theories, Lett.Math.Phys. 113 (2023) 32, 64 pages. * [48] H. Konno and K. Oshima, Elliptic Quantum Toroidal Algebras, $Z$-algebra Structure and Representations, preprint (2023), Alg. Rep. Theory (2024) https://doi.org/10.1007/s10468-024-10251-3. * [49] P. Koroteev, P. Pushkar, A. Smirnov and A. Zeitlin, Quantum $\mathrm{K}$-theory of Quiver Varieties and Many-body Systems, Preprint 2017, arXiv:1705.10419. * [50] A. Kuniba, Exact Solution of Solid-on-solid Models for Twisted Affine Lie Algebras $A^{(2)}_{2n}$ and $A^{(2)}_{2n-1}$, Nucl. Phys., B355, 1991, 801–821. * [51] A. Kuniba and J. Suzuki, Exactly solvable $G^{(1)}_{2}$ solid-on-solid models. , Phys. Lett., A160, 1991, 216–222. * [52] M. Lashkevich and Y. Pugai. Free field construction for correlation functions of the eight-vertex model. Nucl. Phys. B516 (1998), 623-651. * [53] J. Lepowsky and R.L. Wilson, A New Family of Algebras Underlying the Rogers-Ramanujan Identities and Generalizations, Proc. Natl. Acad. Sci. 
USA 78 (1981) 7254-7258; The Structure of Standard Modules, I: Universal Algebras and the Roger-Ramanujan Identities, Invent.Math.77 (1984) 199–290. * [54] S.L. Lukyanov and V.A. Fateev , Additional Symmetries and Exactly-Soluble Models in Two-Dimensional Conformal Field Theory, Sov. Sci. Rev. A. Phys.15 (1990), 1-117. * [55] S. Lukyanov and Y. Pugai, Multi-point Local Height Probabilities in the Integrable RSOS Model, Nucl.Phys. B473(1996), 631-658. * [56] A. Matsuo, Jackson Integrals of Jordan Pochhammer Type and Quantum Knizhnik Zamolodchikov Equations, Comm. Math.Phys. 151 (1993) 263–273; Quantum Algebra Structure of Certain Jackson Integrals, Comm. Math.Phys. 157 (1993) 479–498. * [57] D. Maulik and A. Okounkov, Quantum Groups and Quantum Cohomology , Preprint 2012, arXiv:1211.1287, * [58] K. Miki, A $(q,\gamma)$ analog of the $W_{1+\infty}$ algebra. J. Math. Phys. 48, 123520, 35pp. (2007) * [59] K. Mimachi, A Solution to Quantum Knizhnik-Zamolodchikov Equations and Its Application to Eigenvalue Problems of the Macdonald Type, Duke Math.J. 85 (1996) 635–658. * [60] A. Molev, Yangians and Classical Lie Algebras, Mathematical Surveys and Monographs 143 (2007) AMS. * [61] H. Nakajima, Instantons on ALE Spaces, Quiver Varieties and Kac-Moody Algebras, Duke Math. J. 76 (1994) 365–416. * [62] H. Nakajima, Quiver Varieties and Kac-Moody Algebras, Duke Math. J. 91 (1998) 515–560. * [63] N. Nekrasov, Seiberg - Witten Prepotential From Instanton Counting. Adv. Theor. Math. Phys. 7 831-864 (2004) * [64] N. Nekrasov, BPS/CFT Correspondence: Non-perturbative Dyson-Schwinger Equations and $qq$-Characters. J.High Energy Phys. 03 181 (2016) * [65] A. Okounkov, Lectures on K-theoretic Computations in Enumerative Geometry. (2015), arXiv: 1512.07363. * [66] R. Rimányi, V. Tarasov and A. Varchenko, Trigonometric Weight Functions as $K$-theoretic Stable Envelope Maps for the Cotangent Bundle of a Flag Variety, J.Geom.Phys., 94 (2015) 81–119. * [67] S.N.M. Ruijsenaars, First order analytic difference equations and integrable quantum systems, J. Math. Phys., 38 (1997), 1069–1146. * [68] O. Schiffmann, Drinfeld Realization of the Elliptic Hall Algebra. arXiv:1004.2575. * [69] O. Schiffmann and E. Vasserot, The Elliptic Hall Algebra, Cherednik Hecke Algebras and Macdonald Polynomials. Composito Math. 147 188-234 (2011) * [70] V. Tarasov and A. Varchenko, Geometry of $q$-Hypergeometric Functions, Quantum Affine Algebras and Elliptic Quantum Groups, Astérisque 246 (1997) Société Mathématique de France.
# Mean path length inside non-scattering refractive objects
Matt Majic <EMAIL_ADDRESS> Walter R. C. Somerville <EMAIL_ADDRESS> Eric C. Le Ru <EMAIL_ADDRESS> The MacDiarmid Institute for Advanced Materials and Nanotechnology, School of Chemical and Physical Sciences, Victoria University of Wellington, PO Box 600 Wellington, New Zealand
###### Abstract
It has recently been argued that the geometric-optics mean path length of rays inside a refractive object under Lambertian illumination is independent of the scattering strength of the medium [Savo et al., Science 358, 765 (2017)]. We here show that it is in fact different in the case of zero-scattering. We uncover and explain the role of trapped ray trajectories in creating this unexpected discontinuity from zero- to low-scattering. This allows us to derive new analytic results for the zero-scattering mean path length of simple refractive shapes. This work provides a fresh perspective on the study of path length inside refractive objects, with possible applications, for example, in the study of scattering by large particles or in the design of optical systems.
Finding the mean chord length for a random distribution of lines in a given object is a natural question in many areas of physics. It is a seemingly complex task from a mathematical perspective, since one should consider the spatial and angular distribution of lines as well as how they intersect the surface of the object. For convex bodies, however, the answer is surprisingly simple: it is given by the mean chord length theorem, which has been known for more than a century [1]. It states that the mean chord length $\langle C\rangle$ is independent of the shape of the object, and only depends on the ratio of volume $V$ to surface area $\Sigma$ as $\langle C\rangle=4V/\Sigma$. Proofs from various perspectives have been given [2, 3, 4]. It was only fairly recently shown that this theorem can be generalized further to the study of random walks in diffusive objects. The mean path length theorem [5] states that the mean path length is still simply $\langle L\rangle=4V/\Sigma$; this is independent of both the shape and the scattering/diffusive properties of the medium. The theorem applies across many fields, as it holds for any random walk inside an object, and is particularly relevant to geometric optics within a closed scattering medium. One important condition for this theorem is that the entrance point and initial direction are uniformly and isotropically distributed, which in optics is equivalent to a Lambertian illumination [2]. Path length distributions and mean path length are central to the design of many optical systems where a ray optics description can be used. They are used for calculating the optical properties of absorbing and scattering media [6, 7], refractive granular media in pharmaceutical powders [8], for solar cell design [9, 10, 11], random lasing [12], and integrating spheres [13, 14]. Ray tracing can also be combined with diffraction effects to calculate the electromagnetic scattering properties of large particles in models such as the geometric optics approximation and physical optics model [15, 16, 17, 18, 19, 20], or anomalous diffraction theory [21, 22, 23]. These have for example been applied to the study of ice crystals [17, 18, 20] for climate modeling. The zero-scattering mean path length is directly related to the orientation-averaged absorption cross-sections of large absorbing particles [24, 25, 26].
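As a concrete illustration of the mean chord length theorem quoted above, here is a minimal sketch (assuming Python with NumPy; the sphere radius is an arbitrary example) that estimates $\langle C\rangle$ for a sphere under Lambertian illumination by Monte Carlo sampling and compares it with $4V/\Sigma=4a/3$.

```python
import numpy as np

def mean_chord_sphere(a=1.0, n_rays=200_000, seed=0):
    """Monte Carlo estimate of the mean chord length of a sphere of radius a
    under Lambertian illumination, p(theta) = sin(2*theta)."""
    rng = np.random.default_rng(seed)
    # Inverse-CDF sampling: CDF(theta) = sin(theta)^2, so cos(theta) = sqrt(1 - u).
    cos_theta = np.sqrt(1.0 - rng.random(n_rays))
    # Every surface point of a sphere is equivalent; a ray entering at angle
    # theta to the inward normal traverses a chord of length 2*a*cos(theta).
    return np.mean(2.0 * a * cos_theta)

a = 1.0
print("Monte Carlo  <C> :", mean_chord_sphere(a))
print("Theorem 4V/Sigma :", 4.0 * (4.0 / 3.0) * np.pi * a**3 / (4.0 * np.pi * a**2))
```

Both values agree with $4a/3\approx 1.33a$ to within the Monte Carlo noise, as the theorem predicts for any convex body.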
Path length distributions have also been used to derive the scattering phase function of an object analytically [27]. In most of these applications, the object has a refractive index different from that of the surrounding medium. Rays are then refracted at the boundary and may also be reflected internally or externally, which may increase the path length of some internal rays. Even then, it has been argued recently [28, 29] that the mean path length invariance remains valid for scattering samples, and is simply modified by a factor $s^{2}$: $\displaystyle\langle L\rangle=\frac{4V}{\Sigma}s^{2}$ (1) for any 3D convex body, independent of the scattering properties of the sample. A justification of Eq. (1) was given assuming a thermodynamic equilibrium/equipartition [28, 29], and following energy conservation arguments drawing on the discussion in Ref. [30]. In this letter, we focus on the mean path length in the low-scattering limit. The equipartition assumption and Eq. (1) also apply, but we will show that the zero-scattering case is different, resulting in a discontinuous transition between low- and zero-scattering for some geometries. This is related to the existence of trapped paths (similar to the whispering gallery trajectories inside a sphere), which cannot be populated in the strict absence of scattering. To further support this argument, we have derived a number of new analytic expressions for the zero-scattering mean path length inside simple 2D and 3D refractive objects. These demonstrate the discontinuity at zero-scattering, and their derivations support the physical interpretations in terms of trapped rays. Less symmetric geometries are also studied using Monte Carlo ray tracing simulations [29], allowing us to discuss the generality of this discontinuity, its sensitivity to imperfections, and its relevance to applications.
Mean path length in a refractive sample.
Figure 1: A light ray being refracted as it passes from medium 1 to medium 2. The darker region in medium 2 is not accessible from rays entering from medium 1.
We consider, as shown in Fig. 1, a non-absorbing convex body $V_{2}$ embedded in an outside medium $V_{1}$ with refractive indices $n_{2}>n_{1}$, respectively, and define $s=n_{2}/n_{1}>1$. At this stage, we do not exclude the possibility that medium 2 is a scattering medium. We study the trajectory of light rays within the geometric-optics approximation, as they undergo stochastic refraction/reflection at the interface. Consider a ray incident on the surface, with an angle of incidence $\theta_{1}$ to the surface normal, and rotated by $\phi$ around it. The ray may be refracted at an angle $\theta_{2}$ with a probability $T_{12}(\theta_{1})$, with $\sin\theta_{1}=s\sin\theta_{2}$ (Snell’s law), while the azimuthal angle $\phi$ is unaffected. By optical reciprocity, the probabilities of transmission at Snell-matched angles are identical: $T_{12}(\theta_{1})=T_{21}(\theta_{2})$. Similar laws apply to internal rays hitting the object surface with internal $\theta^{\prime}_{2}$ and external $\theta^{\prime}_{1}$ angles. Since $n_{2}>n_{1}$, angles $\theta^{\prime}_{2}$ above the critical angle $\theta_{c}=\mathrm{asin}(1/s)$ have zero transmission: there is total internal reflection (TIR). As for the mean path (or chord) length theorems, we assume that the illumination of the object from the outside is Lambertian.
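To make the stochastic refraction/reflection rule concrete, here is a minimal sketch of the transmission probability, assuming unpolarized Fresnel coefficients for $T_{12}$, together with a numerical check of the reciprocity relation $T_{12}(\theta_{1})=T_{21}(\theta_{2})$ at Snell-matched angles; the values of $s$ and $\theta_{1}$ are arbitrary examples.

```python
import numpy as np

def fresnel_T(n_in, n_out, theta_in):
    """Unpolarized Fresnel power transmittance from index n_in to n_out at
    incidence angle theta_in (radians); returns 0 beyond the critical angle."""
    sin_out = n_in * np.sin(theta_in) / n_out
    if sin_out >= 1.0:               # total internal reflection
        return 0.0
    theta_out = np.arcsin(sin_out)
    ci, co = np.cos(theta_in), np.cos(theta_out)
    r_s = (n_in * ci - n_out * co) / (n_in * ci + n_out * co)
    r_p = (n_out * ci - n_in * co) / (n_out * ci + n_in * co)
    return 1.0 - 0.5 * (r_s**2 + r_p**2)

s = 1.5                                   # example relative index n2/n1
theta1 = np.radians(40.0)                 # example external incidence angle
theta2 = np.arcsin(np.sin(theta1) / s)    # Snell's law: sin(theta1) = s*sin(theta2)
print("T12(theta1) =", fresnel_T(1.0, s, theta1))
print("T21(theta2) =", fresnel_T(s, 1.0, theta2))   # equal, by reciprocity
```

In a ray-tracing implementation, a uniform random number $u\in[0,1)$ then decides whether a given ray is transmitted ($u<T$) or reflected at the interface.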
Under Lambertian illumination, the surface irradiance is uniform and the incident angles follow the probability distribution $p(\theta_{1})=2\cos\theta_{1}\sin\theta_{1}=\sin(2\theta_{1})$ ($0\leq\theta_{1}<\pi/2$) in 3D [2]. Because Lambertian illumination maximizes entropy and energy is conserved in the scattering process, the ensemble of external rays reflecting and internal rays exiting must also follow a Lambertian distribution (otherwise entropy would decrease). Then, since incident and outgoing rays have the same distribution, for every internal ray making an angle $\theta^{\prime}_{2}<\theta_{c}$ to the normal (and reflected with a probability $p_{2}=1-T_{21}(\theta^{\prime}_{2})$), there is an incident external ray that is externally reflected at the Snell-matching $\theta^{\prime}_{1}$ with the same probability $p_{1}=1-T_{12}(\theta^{\prime}_{1})=p_{2}$, i.e. there is a one-to-one correspondence between these rays (see Sec. S.I [31] for more detail). This allows us to ignore all internally reflected rays with $\theta^{\prime}_{2}<\theta_{c}$ in mean path length calculations, if we also ignore any reflection from the outside (dashed rays in Fig. 1). These externally reflected rays would normally have zero path length, but if they were to transmit inwards, they would have exactly the path length and angular distribution of the rays that reflect from the inside for $\theta^{\prime}_{2}<\theta_{c}$. See also Sec. S.II for an explicit example of this cancellation in simple geometries. This is a crucial result for refractive objects, as it dramatically simplifies the calculations. Note however that the contribution of rays with $\theta^{\prime}_{2}>\theta_{c}$ (TIRs), whose distribution is not specified, must still be accounted for. This result also highlights that Eq. (1) for scattering media assumes that these externally reflected rays with $L=0$ are included in the statistics, an important point that was not made explicit in Refs. [28, 29]. The mean path length not counting $L=0$ rays can be simply deduced as $\langle L_{L>0}\rangle={\langle L\rangle}/{\bar{T}_{12}}$, where $\bar{T}_{12}$ is the Lambertian-averaged transmission [32]. Following these considerations, we may express the mean path length in the object as $\displaystyle\langle L\rangle=\int_{\Sigma}\frac{\mathrm{d}\Sigma}{\Sigma}\int\frac{\mathrm{d}\phi}{2\pi}\int_{0}^{\frac{\pi}{2}}\mathrm{d}\theta_{1}L(\mathbf{r},\theta_{1},\phi)\sin(2\theta_{1}),$ (2) where $L(\mathbf{r},\theta_{1},\phi)$ denotes the total ray path length for a given entry point $\mathbf{r}$ and incidence angles, including possible total internal reflections until it reaches the surface with $\theta^{\prime}_{2}<\theta_{c}$ (thanks to the cancellation between internal/external reflections). In the absence of scattering and total internal reflections, $L$ then coincides with the chord length $C$. In the presence of scattering, $L$ should be understood as an average over all possible scattering paths, which renders this ray approach difficult. We have applied this method to several standard geometries in the non-scattering case and obtained analytical results for simple shapes and numerical results for more complex shapes. The most surprising outcome is that the zero-scattering mean path length $\langle L^{0}\rangle$ differs from (is smaller than) the scattering mean path length $\langle L\rangle$ for some geometries, which results in a discontinuous transition at zero-scattering.
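The Lambertian-averaged transmission $\bar{T}_{12}$ entering the relation $\langle L_{L>0}\rangle=\langle L\rangle/\bar{T}_{12}$ is straightforward to evaluate; a minimal sketch follows, again assuming unpolarized Fresnel coefficients, with $s$ and the sphere radius chosen as arbitrary examples. It computes $\bar{T}_{12}=\int_{0}^{\pi/2}T_{12}(\theta_{1})\sin(2\theta_{1})\,\mathrm{d}\theta_{1}$ by a simple midpoint rule and applies it to the value $\langle L\rangle=(4a/3)s^{2}$ obtained from Eq. (1) for a sphere.

```python
import numpy as np

def T12(s, theta1):
    """Unpolarized transmittance from outside (n1 = 1) into the body (n2 = s > 1)."""
    theta2 = np.arcsin(np.sin(theta1) / s)      # always defined for s > 1
    c1, c2 = np.cos(theta1), np.cos(theta2)
    r_s = (c1 - s * c2) / (c1 + s * c2)
    r_p = (s * c1 - c2) / (s * c1 + c2)
    return 1.0 - 0.5 * (r_s**2 + r_p**2)

def lambertian_avg_T12(s, n=100_000):
    """Midpoint-rule average of T12 over the Lambertian distribution sin(2*theta1)."""
    theta1 = (np.arange(n) + 0.5) * (np.pi / 2) / n
    return np.sum(T12(s, theta1) * np.sin(2.0 * theta1)) * (np.pi / 2) / n

s, a = 1.5, 1.0
T_bar = lambertian_avg_T12(s)
L_mean = 4.0 * a * s**2 / 3.0        # Eq. (1) for a sphere of radius a
print("Lambertian-averaged transmission:", T_bar)
print("<L> =", L_mean, "  <L_{L>0}> = <L>/T_bar =", L_mean / T_bar)
```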
We first discuss this counter-intuitive zero-scattering result further, as it provides physical insight into its origin and an alternative method of calculating $\langle L^{0}\rangle$ in special cases.
Transition between the low- and zero-scattering regimes.
To understand how this discontinuous transition arises, we will attempt to connect the two different approaches for the scattering (thermodynamic/equipartition) and non-scattering (ray optics) cases. Specifically, we here derive the mean path length $\langle L\rangle$ in the low-scattering limit from $\langle L^{0}\rangle$ using a ray-optics argument. At the center of this discussion is the existence of trapped rays. These undergo successive TIRs and cannot escape, similar to propagating modes in an optical fiber or whispering gallery modes in dielectric spheres. Note that these trajectories may be repeating (as the optical modes) or chaotic. Because of reciprocity, these rays cannot be excited from outside in the ray-optics framework and therefore are irrelevant to $\langle L^{0}\rangle$. (Note that arbitrarily long path lengths may still exist, for example in the 2D ellipse, where rays that refract in at almost the critical angle at the tips will undergo a large number of total internal reflections before inevitably refracting out. In terms of the elliptical billiard table [33], the rays would enter at a trough on the phase portrait that touches the line at the critical angle. However, these are not strictly trapped.) But if scattering is present, then there is a probability that some rays are scattered into and out of trapped trajectories. For very low scattering, this probability is small and one might expect that it does not affect the mean path length. However, trapped rays exhibit very long path lengths precisely because scattering is low. It is this product of a small probability by a large path length that may result in a finite, non-zero contribution to $\langle L\rangle$ in the limit of vanishing scattering, but not for strictly zero scattering, hence the discontinuity. To be more quantitative, we denote the scattering coefficient $\alpha$ and assume that the scattering mean free path $l_{\alpha}=1/\alpha$ is much greater than $\langle L^{0}\rangle$, i.e. the low-scattering limit. For simplicity, we will here consider special cases where the probability of scattering into a trapped trajectory, denoted $P_{T}$, is independent of the position of the scattering event. A more general case is discussed in Sec. S.III. The average probability of a ray scattering is $\alpha\langle L^{0}\rangle\ll 1$. Since the average path length for non-trapped rays is of order $\langle L^{0}\rangle$, scattering into them results in negligible changes to the path length (of order $\alpha\langle L^{0}\rangle^{2}$). In contrast, a ray will only escape a trapped path if it is scattered again, which results in an average path length $l_{\alpha}\gg\langle L^{0}\rangle$ for trapped rays. Moreover, scattering may occur into another trapped path with probability $P_{T}$, which increases the path length by $l_{\alpha}$ again until the next scattering event. Summing, we obtain the mean path length for trapped rays as: $\displaystyle\langle L^{T}\rangle=l_{\alpha}+P_{T}l_{\alpha}+P_{T}^{2}l_{\alpha}+\ldots=\frac{l_{\alpha}}{1-P_{T}}.$ (3) $\langle L^{T}\rangle\gg\langle L^{0}\rangle$, but this is compensated by the small probability $\alpha\langle L^{0}\rangle P_{T}$ of scattering into a trapped trajectory.
We can now add this contribution to $\langle L^{0}\rangle$ to obtain the low-scattering mean path-length: $\displaystyle\langle L\rangle\approx\langle L^{0}\rangle+\left[\alpha\langle L^{0}\rangle P_{T}\right]\langle L^{T}\rangle=\frac{\langle L^{0}\rangle}{1-P_{T}}.$ (4) This derivation provides an explanation for the discontinuity at zero- scattering, which is due to the second term, related to trapped trajectories. Eq. (4) moreover provides a simple method of deducing $\langle L^{0}\rangle$ analytically for objects where $P_{T}$ is independent of position and angle, which includes many objects with faceted sides. An important special case is for objects where no trapped rays can be supported ($P_{T}=0$), for which $\langle L^{0}\rangle=\langle L\rangle$. Amongst these are objects with a smaller refractive index than the embedding medium ($s<1$). The consideration of trapped paths also suggests a link between this problem and the theory of “billiards” in classical mechanics [33]. In particular, ergodic shapes will also automatically have $P_{T}=0$ (since every ray samples the entire phase space) and therefore $\langle L^{0}\rangle=\langle L\rangle$. Analytic results. To further illustrate this discussion, we now provide a collection of newly derived analytic results for $\langle L^{0}\rangle$ in simple 3D and 2D geometries. The main 3D geometries that we considered are summarized in Fig. 2, where their parameters are defined. The advantage of these ideal geometries is that the derivations illustrate how concepts such as trapped rays affect the mean path length. The derived $\langle L^{0}\rangle$ as a function of $s$ for all 3D geometries are summarized and compared to the scattering case in Fig. 3. To calculate $\langle L^{0}\rangle$, we use Eq. (2), rewritten in terms of the inside angle as: $\displaystyle\langle L^{0}\rangle=s^{2}\int_{\Sigma}\frac{\mathrm{d}\Sigma}{\Sigma}\int\frac{\mathrm{d}\phi}{2\pi}\int_{0}^{\theta_{c}}\mathrm{d}\theta_{2}L(\mathbf{r},\theta_{2},\phi)\sin(2\theta_{2}).$ (5) Figure 2: Main geometries considered in this work, (a) the 2D strip and 3D slab, (b) the circle and sphere, (c) the infinite cylinder and (d) the cube and cuboid. We also treat the square, rectangle and infinite square rod. We start with the simplest case of an infinite slab of width $a$, Fig. 2(a). In this case, $L(\mathbf{r},\theta_{2},\phi)$ only depends on $\theta_{2}$. Moreover, since we can ignore probabilistic reflections and it is not possible to excite TIRs from the outside, $L$ is the same as the chord length so we have $L=a/\cos\theta_{2}$ and the mean is calculated as $\displaystyle\langle L_{\text{slab}}^{0}\rangle=2as^{2}(1-\cos\theta_{c})=2as^{2}\left(1-\sqrt{1-\frac{1}{s^{2}}}\right).$ (6) This could also have been deduced from Eq. (4) since the trapping probability is uniform: $P_{T}=\cos\theta_{c}$. $\langle L_{\text{slab}}^{0}\rangle$ decreases with $s$ (see Fig. 3) and is less than the mean path length for a slab with scattering, $\langle L_{\text{slab}}\rangle=2as^{2}$. For a sphere of radius $a$, Fig. 2(b), $L(\mathbf{r},\theta_{2},\phi)$ again depends on $\theta_{2}$ only and it is not possible to excite TIRs from the outside, so we have $L=2a\cos\theta_{2}$ (the chord length) and integrating Eq. (5): $\displaystyle\langle L_{\text{sphere}}^{0}\rangle=$ $\displaystyle\frac{4a}{3}s^{2}\left[1-\left(1-\frac{1}{s^{2}}\right)^{\nicefrac{{3}}{{2}}}\right].$ (7) This expression appears for example in the absorption cross section for large weakly absorbing spheres [25, 19]. 
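As a quick consistency check, the sketch below evaluates the angular integral of Eq. (5) numerically for the slab and the sphere, where the internal path depends only on $\theta_{2}$, and compares the results with the closed forms (6) and (7); it is a minimal illustration only, and the values of $s$ and $a$ are arbitrary examples.

```python
import numpy as np

def L0_numeric(chord, s, n=100_000):
    """Midpoint-rule evaluation of Eq. (5),
    <L0> = s^2 * integral_0^theta_c chord(theta2) * sin(2*theta2) dtheta2,
    valid when the internal path depends only on theta2 (slab, sphere)."""
    theta_c = np.arcsin(1.0 / s)
    t = (np.arange(n) + 0.5) * theta_c / n
    return s**2 * np.sum(chord(t) * np.sin(2.0 * t)) * theta_c / n

s, a = 1.5, 1.0
# Slab of thickness a: internal path a/cos(theta2); closed form Eq. (6).
slab_num = L0_numeric(lambda t: a / np.cos(t), s)
slab_ana = 2.0 * a * s**2 * (1.0 - np.sqrt(1.0 - 1.0 / s**2))
# Sphere of radius a: internal path 2*a*cos(theta2); closed form Eq. (7).
sph_num = L0_numeric(lambda t: 2.0 * a * np.cos(t), s)
sph_ana = (4.0 * a / 3.0) * s**2 * (1.0 - (1.0 - 1.0 / s**2) ** 1.5)
print("slab  :", slab_num, "vs", slab_ana)
print("sphere:", sph_num, "vs", sph_ana)
```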
For comparison, the mean path length for a sphere with scattering is $\langle L_{\text{sphere}}\rangle=(4/3)as^{2}$. We cannot here use Eq. (4) because the probability of trapping $P_{T}$ depends on position: trapping is more likely for scattering events close to the sphere surface. Figure 3: Comparison of the $s$ dependence of the zero-scattering mean path length $\langle L^{0}\rangle$ for 3D objects for which analytical expressions were derived. All values are normalized to the mean chord length $\langle C\rangle=4V/\Sigma$. The scattering case $\langle L\rangle=s^{2}\langle C\rangle$ is shown as a dashed line. For the cube, Fig. 2(d), there are three regimes depending on $s$. First, we can show that for $s\leq\sqrt{3/2}$, no trapped rays exist, hence $P_{T}=0$ and $\displaystyle\langle L_{\text{cube}}^{0}(s\leq\sqrt{3/2})\rangle=\langle L_{\text{cube}}\rangle=\frac{2}{3}as^{2}.$ (8) For $s\geq\sqrt{2}$, we may use Eq. (5); the problem is simplified by the fact that all rays exit the opposite face of the cube. Some will total-internally reflect on an adjacent face (see Fig. 2(d)), some will exit straight through, but in both cases, the path length is given as $L=a/\cos\theta_{2}$, therefore we obtain $\displaystyle\langle L^{0}_{\text{cube}}(s\geq\sqrt{2})\rangle=2as^{2}\bigg{(}1-\sqrt{1-\frac{1}{s^{2}}}\bigg{)}.$ (9) In the intermediate case, $\sqrt{3/2}<s<\sqrt{2}$, calculating $\langle L_{\text{cube}}^{0}\rangle$ via Eq. (5) is rather technical, see Sec. S.IV. We present in Sec. S.V a simpler derivation using Eq. (4), which applies because the trapping probability $P_{T}$ is again independent of the location of the scattering event. Both result in: $\displaystyle\langle L_{\text{cube}}^{0}(\sqrt{3/2}<s<\sqrt{2})\rangle=$ (10) $\displaystyle{\frac{4as^{2}}{\pi}}\left(\sin^{-1}(s^{2}-1)-\sqrt{1-\frac{1}{s^{2}}}~{}\sin^{-1}(2s^{2}-3)\right).$ A similar derivation can be carried out for a cuboid of edges $a,b,c$ and the results are the same with the transformation: $\displaystyle a\rightarrow\frac{3abc}{ab+bc+ca},$ (11) i.e. rescaled by the relative factor $V/\Sigma$ for each shape. The cases of an infinite circular cylinder (Fig. 2(c)) and an infinite square rod are also derived and discussed in Secs. S.VI and S.VII. Finally, 2D objects can be treated with a similar approach. We have obtained analytic expressions for $\langle L^{0}\rangle$ for an infinite strip, a circle, a square, and a rectangle. The results and derivations are provided in Sec. S.VIII, along with a graphical summary. The conclusions are similar to those for 3D objects. Figure 4: Comparison of the $s$ dependence of the zero-scattering mean path length $\langle L^{0}\rangle$ for less symmetric objects. All values are normalized to the mean chord length $\langle C\rangle$, and the scattering case ($\langle L\rangle$) is shown as a solid line. (a) 2D objects: ellipse and stadium of aspect ratio 2, Pascal’s Limaçon of polar equation $r(\theta)=b+a\cos\theta$, with $b/a=3$. (b) 3D objects: same as 2D objects with symmetry of revolution around $z$. Application to physical systems. To investigate the generality of these results, we now consider less symmetric geometries, for which numerical calculations (Monte Carlo ray tracing [29]) can be used to derive $\langle L^{0}\rangle$. Figure 4 summarizes these results. We consider explicitly in Fig. 4(a) 2D shapes with different refractive index and decreasing symmetry: ellipse, stadium, and a convex Limaçon. 
The latter two do not show any discontinuity within our numerical accuracy, i.e. $\langle L^{0}\rangle=\langle L\rangle$, even at high refractive index, which suggests that the probability of scattering into a trapped trajectory is zero. Note that trapped rays may still exist (such as the one depicted for the stadium in Fig. 4(a)), but they correspond to unstable orbits with vanishingly small probabilities of being scattered into. The situation is different for ellipses, where a larger number of rays may be trapped, in agreement with the theory of elliptical billiards [33]. Interestingly, the corresponding 3D objects with symmetry of revolution all show $\langle L^{0}\rangle<\langle L\rangle$, likely because of the trapped trajectories in the planes perpendicular to the revolution axis. It would be interesting to further link these results to the theory of classical billiards and chaos/ergodicity, but this is outside the scope of this letter. Figure 4 overall suggests that the mean path length discontinuity is a special property of geometries with higher symmetry. These special shapes are nevertheless commonly used as model systems in many applications. As an example, ice crystals are often taken as high-symmetry objects to derive their optical properties for atmospheric models [17, 18, 20]. Within the geometric-optics approximation [17, 19], the absorption cross-section $C_{\mathrm{abs}}$ of weakly absorbing objects is directly proportional to the zero-scattering mean path length. The expressions we obtained (and more that could be derived using the same approach) can then be used to obtain analytic expressions for $C_{\mathrm{abs}}$. For example, for a 5 $\mu$m-wide ice cube at $\lambda=1\,\mu$m ($s=s^{\prime}+is^{\prime\prime}=1.3+1.6\times 10^{-6}i$), we find that the analytic prediction: $\displaystyle\langle C_{\mathrm{abs}}\rangle\approx\frac{4\pi s^{\prime\prime}}{\lambda}\langle L_{\text{cube}}^{0}(s^{\prime})\rangle$ $\displaystyle\approx\frac{16as^{\prime 2}s^{\prime\prime}}{\lambda}\left(\sin^{-1}(s^{\prime 2}-1)-\sqrt{1-\frac{1}{s^{\prime 2}}}~{}\sin^{-1}(2s^{\prime 2}-3)\right)$ (12) agrees within $\pm 10\%$ with numerical calculations. This approach is valid for particle sizes much larger than the wavelength, but smaller than the characteristic absorption length, i.e. $1\ll(2\pi/\lambda)a\lessapprox(1/s^{\prime\prime})$. Together with the approximate extinction cross-section $\langle C_{\mathrm{ext}}\rangle\approx 3a^{2}$, these provide simple analytical inputs for atmospheric models over a large size range, replacing the time-consuming ray-tracing simulations otherwise required. This approach could be generalized to more realistic ice-crystal shapes and to other weakly absorbing atmospheric aerosols. Apart from such applications, whilst the zero-scattering discontinuity is interesting from a fundamental point of view, we should also consider its relevance to real physical systems. Firstly, all physical media are imperfect and should exhibit a small, but non-zero, scattering coefficient. Secondly, surface imperfections are unavoidable, be it roughness or small deviations from ideal shapes (likely to make the object non-convex). Thirdly, the ray-optics description is only an approximation and wave effects can affect reflection/refraction, in particular resulting in a small probability of outcoupling during TIR events, which would preclude the existence of strict trapped trajectories.
Because of these effects, one could argue that the zero-scattering discontinuity is irrelevant and the general formula (Eq. (1)) applies instead. However, one should also consider that any physical medium has a non-zero absorption coefficient. This small absorption negates the contribution of extremely long-lived trapped rays for low scattering, so that the relevant experimental mean path length is in fact the zero-scattering one, as long as the scattering is smaller than the absorption coefficient. This argument is developed more qualitatively in Sec. S.IX.
Conclusion.
We have examined how shape affects the mean path length of rays in non-scattering refractive objects, providing the theoretical groundwork to derive the mean path length analytically, and applying it to simple shapes. Crucial to being able to derive these results was the fact that all probabilistic reflections below the critical angle can be discounted if they are ignored from both the inside and the outside. We believe that other geometries can be treated using the same approach. These analytic results also highlight explicitly the discontinuous transition from non-scattering to scattering media and demonstrate that it is due to the existence of trapped trajectories that cannot be occupied without scattering. We believe this work is an important contribution to the resurgent study of path length invariance in media. The derived analytic expressions will also be useful in other theoretical contexts where mean path lengths or path length distributions are studied, as refractive non-scattering objects are central to many applications.
###### Acknowledgements.
The authors are grateful to the MacDiarmid Institute, NZ, for financial support.
SUPPLEMENTARY INFORMATION
## I Derivation of reflection cancellation
In the main text we used the fact that all probabilistic (non-TIR) reflections can be ignored when calculating the mean path length. Here we provide a more detailed explanation of this phenomenon with diagrams (Fig. 5). First, we noted that, because entropy is conserved, there is a one-to-one correspondence between ingoing and outgoing rays: for each ingoing external ray, there is an outgoing ray at the same angle at that surface point. This in turn means that for each ray incoming at an angle $\theta_{1}$ to the normal, there is an internal ray headed toward the surface at the Snell-matching angle $\theta_{2}$, where $\sin\theta_{1}=s\sin\theta_{2}$. In terms of Fig. 5, this means that the red and blue solid lines represent the same density or number of rays. As shown in the top diagrams in Fig. 5, for each externally reflected ray at an angle $\theta_{1}$, which was reflected with a probability $p_{1}=1-T_{12}(\theta_{1})$, there is an internally reflected ray, which was reflected inward at $\theta_{2}$ with the same probability $p_{2}=1-T_{21}(\theta_{2})=p_{1}$. This is because optical reciprocity ensures $T_{12}(\theta_{1})=T_{21}(\theta_{2})$. These cases are added together in the center diagram of Fig. 5, showing that the outgoing rays (red with blue dots) are composed of external rays undergoing reflection (with probability $p_{1}=1-T_{12}(\theta_{1})$) or internal rays undergoing refraction (with probability $T_{21}(\theta_{2})$), while the ingoing rays (blue with red dots) are composed similarly. In total there is a relative density of 1 outgoing at $\theta_{1}$ and a density of 1 ingoing at $\theta_{2}$. As shown in the lower diagrams of Fig.
5, the situation is equivalent to a scenario where both the internal and external reflections are replaced with refractions, because this results in the same density of 1 for outgoing and ingoing rays. We say equivalent meaning that the number density of rays is conserved, as well as ray angles and therefore also the mean path length. So in analytic derivations only reflections for internal rays with $\theta^{\prime}_{2}>\theta_{c}$ (TIR) need to be considered, as these reflections have no correspondence with external reflections. Figure 5: Schematics illustrating the effective cancellation of external and internal probabilistic (non-TIR) reflections. Top: the possible paths of external (blue) and internal (red) rays incident on the surface, where $\theta_{1}$ and $\theta_{2}$ are Snell-matching angles. The dashed lines represent the fractional probability of the outcomes. Center: the addition of these two cases, where the origins of the rays are indicated by the coloring of the dashes. Bottom: equivalent scenario where probabilistic reflections are ignored, which leads to the same total number of rays leaving at each angle, so preserves the mean path length. ## II Explicit example of reflection cancellation We can demonstrate the cancellation between internal and external reflections explicitly for simple geometries where the chord length and inside angle $\theta_{2}$ of a light ray after each internal reflection is identical – these include the 2D infinite strip, circle, 3D infinite slab, infinite cylinder, and sphere. In these shapes, a light ray may internally reflect $n$ times, each time with a probability $1-T_{21}(\theta_{2})$ so the path length is increased by a factor $\displaystyle f=\sum_{n=0}^{\infty}\left[1-T_{21}(\theta_{2})\right]^{n}=\frac{1}{T_{21}(\theta_{2})}$ (13) The fraction of rays incident with an angle $\theta_{1}$ transmitted into the sample is $T_{12}(\theta_{1})=T_{21}(\theta_{2})$, thereby canceling the factor in Eq. (13). ## III Effect of non-uniform probability of trapping $P_{T}$ In an object like a sphere, the density of trapped trajectories varies and is larger closer to the surface. The trapping probability $P_{T}$ then depends on the location of the scattering event. We may still define an average trapping probability $\bar{P}_{T,1}$. But for any ray that gets “trapped”, there is also a probability that the next scattering event will result in another trapped ray, and so on. If $P_{T}$ is non-uniform, these subsequent average probabilities $\bar{P}_{T,n}$ may be different to that of the first trapping event. For a sphere for example, one expects $\bar{P}_{T,2}>\bar{P}_{T,1}$ as the scattering event for a ray already in a trapped trajectory is more likely to occur closer to the surface. Let $P_{n}$ be the average probability that a ray gets trapped exactly $n$ times, with corresponding average path length $nl_{\alpha}$. Then the mean path length may be approximated in the low scattering limit by $\displaystyle\langle L\rangle=\sum_{n\geq 0}P_{n}\langle L_{n}\rangle.$ (14) We are ignoring as in the main text any terms of order $\alpha\langle L_{0}\rangle$ or less. In the low scattering limit the probability of scattering is $P_{S}\approx\alpha\langle L_{0}\rangle$. Then the probability of a trapping event occurring is $P_{S}\bar{P}_{T,1}$. The probability of no trapping is then $P_{0}=1-\alpha\langle L_{0}\rangle\bar{P}_{T,1}\approx 1$. The probability of exactly one trapping event is $P_{1}=\alpha\langle L_{0}\rangle\bar{P}_{T,1}(1-\bar{P}_{T,2})$. 
Following this logic for $n$ trappings gives $\displaystyle P_{n}=\alpha\langle L_{0}\rangle(1-\bar{P}_{T,n+1})\prod_{k=1}^{n}\bar{P}_{T,k}.$ (15) Plugging all this into Eq. 14, using $\alpha l_{\alpha}=1$, and assuming that the probability of trapping converges to $\bar{P}_{T,\infty}$ as the number of scatterings approaches $\infty$: $\displaystyle\langle L\rangle$ $\displaystyle=\langle L_{0}\rangle+\langle L_{0}\rangle(1-\bar{P}_{T,\infty})\sum_{n=1}^{\infty}n\prod_{k=1}^{n}\bar{P}_{T,k}.$ (16) As expected, this expression reduces to Eq. (4) in the main text when $\bar{P}_{T,n}=P_{T}$. ## IV Explicit integral calculation of $\langle L^{0}\rangle$ for a cube Figure 6: A light ray refracting into the “front” face of a cube, reflecting off the top face, and refracting out the opposite face. We start from Eq. 5 in the main text. We integrate over the two angles of entry and over a vertically oriented face of the cube, which covers $0\leq z\leq 1$ and $0\leq x\leq 1$, where $z=0$ at the top and $x=0$ on the right, as per Fig. 6. The integral may be split into four: $\displaystyle\langle L_{\text{cube}}^{0}\rangle=\frac{4s^{2}}{\pi}(I_{t}+I_{o1}+I_{o2}+I_{r}),$ (17) where $I_{t}$ counts the rays that refract towards the top face then refract out: $\displaystyle I_{t}=$ $\displaystyle\int_{a_{1}}^{b_{1}}\int_{0}^{\xi_{\rm TRE}}\int_{0}^{1}\int_{0}^{1}L_{t}(\theta_{2},\phi,z)p(\theta_{2})\mathrm{d}z\mathrm{d}x\mathrm{d}\phi\mathrm{d}\theta_{2},$ (18) $I_{o1}$ counts the rays that refract towards the top face then reflect towards the opposite face (depicted in Fig. 6), or leave the opposite face directly: $\displaystyle I_{o1}=$ $\displaystyle\int_{a_{2}}^{b_{2}}\int_{0}^{\xi_{\rm TRE}}\int_{0}^{1}\int_{0}^{1}L_{o}(\theta_{2})p(\theta_{2})\mathrm{d}z\mathrm{d}x\mathrm{d}\phi\mathrm{d}\theta_{2},$ (19) $I_{o2}$ counts the rays that refract towards the top face, reflect off both the top and right faces, and leave out the opposite face: $\displaystyle I_{o2}=$ $\displaystyle\int_{a_{3}}^{b_{3}}\int_{0}^{\xi_{\rm TRE}}\int_{0}^{1}\int_{0}^{1}L_{o}(\theta_{2})p(\theta_{2})\mathrm{d}z\mathrm{d}x\mathrm{d}\phi\mathrm{d}\theta_{2},$ (20) and $I_{r}$ counts the rays that refract towards the top face, reflect off the top face, and leave out the right face: $\displaystyle I_{r}=$ $\displaystyle\int_{a_{4}}^{b_{4}}\int_{0}^{\xi_{\rm TRE}}\int_{0}^{1}\int_{0}^{1}L_{r}(\theta_{2},\phi,x)p(\theta_{2})\mathrm{d}z\mathrm{d}x\mathrm{d}\phi\mathrm{d}\theta_{2}.$ (21) $p(\theta_{2})=2\cos\theta_{2}\sin\theta_{2}$, and the distances to the top face $(L_{t})$, opposite face $(L_{o})$ and right face $(L_{r})$ are $\displaystyle L_{o}=$ $\displaystyle\frac{a}{\cos\theta_{2}}$ $\displaystyle L_{t}=$ $\displaystyle\frac{z}{\sin\theta_{2}\cos\phi}$ $\displaystyle L_{r}=$ $\displaystyle\frac{x}{\sin\theta_{2}\sin\phi}.$ $\phi$ is measured clockwise from the vertical as shown in Fig. 6. By symmetry we can assume that $\phi\geq 0$ and that all rays head towards the top face ($\phi<\xi_{\rm TRE}$), where $\displaystyle\xi_{\rm TRE}=\tan^{-1}\dfrac{x}{z}$ (22) is the top-right edge. 
The integral bounds for $\theta_{2}$ are $\displaystyle a_{1}=$ $\displaystyle\min(\theta_{c},\max(\xi_{\rm TOE},\xi_{\rm TIRT}))$ $\displaystyle b_{1}=$ $\displaystyle\theta_{c}$ $\displaystyle a_{2}=$ $\displaystyle 0$ $\displaystyle b_{2}=$ $\displaystyle\min(\theta_{c},\min(\xi_{\rm ORE},\max(\xi_{\rm TOE},\xi_{\rm TIRT})))$ $\displaystyle a_{3}=$ $\displaystyle\min(\theta_{c},\xi_{\rm ORE})$ $\displaystyle b_{3}=$ $\displaystyle\min(\theta_{c},\max(\xi_{\rm ORE},\xi_{\rm TIRR}))$ $\displaystyle a_{4}=$ $\displaystyle\min(\theta_{c},\max(\xi_{\rm ORE},\xi_{\rm TIRR}))$ $\displaystyle b_{4}=$ $\displaystyle\min(\theta_{c},\max(\xi_{\rm ORE},\xi_{\rm TIRT})),$ where $\xi_{\rm TOE}$ is the top-opposite edge: $\displaystyle\xi_{\rm TOE}=\tan^{-1}\frac{z}{\cos\phi},$ (23) $\xi_{\rm ORE}$ is the opposite-right edge: $\displaystyle\xi_{\rm ORE}=\tan^{-1}\frac{x}{\cos\phi},$ (24) $\xi_{\rm TIRT}$ is the condition for total internal reflection off the top face: $\displaystyle\xi_{\rm TIRT}=\sin^{-1}\frac{\cos\theta_{c}}{\cos\phi},$ (25) $\xi_{\rm TIRR}$ is the condition for total internal reflection off the right face (this can only occur if the ray first undergoes TIR off the top face): $\displaystyle\xi_{\rm TIRR}=\min\bigg{(}\xi_{\rm TIRT},\sin^{-1}\frac{\cos\theta_{c}}{\sin\phi}\bigg{)}.$ (26) In order to evaluate these integrals analytically, they can be broken down into regions of $x,z,\phi$ where the max and min functions are not necessary. This results in about 50 4-dimensional integrals, most of which can be evaluated as sums of trigonometric and logarithmic functions and elliptic integrals. We do not provide details here, but a similar simpler derivation is given in the case of the square in Sec. VIII. On adding all integrals together the result simplifies to Eq. (17). This remarkable simplification highlights the importance of the alternative approach based on trapping probability $P_{T}$, developed in the next section. ## V Calculation of $\langle L^{0}\rangle$ for a cube using Eq. (4) We here use instead Eq. (4), which applies because the trapping probability $P_{T}$ is independent of the location of the scattering event. We can formulate the condition for these trapped rays by requiring TIR off all six faces. Without loss of generality, let our trapped ray bounce off the front face with $\theta>\theta_{c}$ and some $\phi$. Then the condition for TIR off the top face (and also the bottom face) is $\sin\theta<\frac{\cos\theta_{c}}{\cos\phi}$. And similarly the condition for TIR off the right and left faces is $\sin\theta<\frac{\cos\theta_{c}}{\sin\phi}$. Finding trapped trajectories is equivalent to finding $\theta\geq\theta_{c}$ and $\phi$ fulfilling these two constraints. The boundary of solutions lies at $\theta=\theta_{c}$. The other two conditions are satisfied if $\tan\theta_{c}$ is less than the maximum value that $1/\cos\phi$ and $1/\sin\phi$ can have simultaneously. This occurs at $\phi=\pi/4$ where $1/\cos\phi=1/\sin\phi=\sqrt{2}$. So trapped trajectories exist for $\tan\theta_{c}<\sqrt{2}$, or equivalently $s>\sqrt{3/2}$. Moreover, the probability of trapping a ray is given by (using the 8-fold symmetry about $\phi$): $\displaystyle P_{T}=$ $\displaystyle\frac{2}{\pi}\int_{\cos^{-1}\cot\theta_{c}}^{\pi/4}\mathrm{d}\phi\int_{\theta_{c}}^{\sin^{-1}\tfrac{\cos\theta_{c}}{\cos\phi}}\sin\theta\mathrm{d}\theta.$ (27) Then Eq. 
(4) gives, after integration: $\displaystyle\langle L_{\text{cube}}^{0}(\sqrt{3/2}<s<\sqrt{2})\rangle=$ (28) $\displaystyle{\frac{4as^{2}}{\pi}}\left(\sin^{-1}(s^{2}-1)-\sqrt{1-\frac{1}{s^{2}}}~{}\sin^{-1}(2s^{2}-3)\right).$
## VI Infinite circular cylinder
Figure 7: A ray refracting through a cylinder. The angle $\phi$ is not shown as it comes out of the page; it is measured from the vertical to the projection of the ray onto the plane coming out of the page.
For a cylinder of radius $a$, every incident point is equivalent, and the angle to the normal between internal bounces is invariant, a consequence of the reflectional and rotational symmetries of the cylinder. As a result, it is not possible to excite total internal reflections from the outside, and hence the path length of any given ray is identical to its chord length. The chord length $L$ can be derived from the cosine rule $c^{2}=L^{2}+a^{2}-2aL\cos\theta_{2}$, where $c^{2}=h^{2}+a^{2}$, $h$ being the height along the cylinder that the ray travels; see Fig. 7. $h$ is related to $L$ through $h=L\sin\theta_{2}\cos\phi$. Putting this together gives $\displaystyle L_{\rm cylinder}=\frac{2a\cos\theta_{2}}{1-\sin^{2}\theta_{2}\cos^{2}\phi},$ (29) and for the mean: $\displaystyle\langle L^{0}_{\text{cylinder}}\rangle=$ $\displaystyle\frac{s^{2}}{\pi}\int_{0}^{\theta_{c}}\int_{0}^{2\pi}\frac{2a\cos^{2}\theta_{2}\sin\theta_{2}}{1-\sin^{2}\theta_{2}\cos^{2}\phi}~{}\mathrm{d}\phi\mathrm{d}\theta_{2}$ $\displaystyle=$ $\displaystyle 4as^{2}\int_{0}^{\theta_{c}}\sin\theta_{2}\cos\theta_{2}~{}\mathrm{d}\theta_{2}$ $\displaystyle=$ $\displaystyle 2a,$ (30) which, interestingly, is independent of $s$. The cylinder qualitatively combines a slab along the axial direction and a circle along the radial one. The opposite effects of these shapes on the $s$ dependence seem to cancel perfectly. For comparison, the scattering mean path length is $\langle L_{\text{cylinder}}\rangle=2as^{2}$. The discrepancy between the scattering and non-scattering cases may be attributed to trapped rays which run down the length of the cylinder.
## VII Infinite square rod
For the infinite square rod, the derivations are similar to those of the cube and the 2D square (see below) and so are not repeated; there are two regimes: $\displaystyle\langle L_{\text{square rod}}^{0}(s<\sqrt{2})\rangle=$ (31) $\displaystyle\frac{as^{2}}{\pi}\left(\cos^{-1}\left(1-s^{2}\right)-\sqrt{1-\frac{1}{s^{2}}}~{}\cos^{-1}\left(3-2s^{2}\right)\right),$ $\displaystyle\langle L_{\text{square rod}}^{0}(s\geq\sqrt{2})\rangle=$ $\displaystyle as^{2}\bigg{(}1-\sqrt{1-\frac{1}{s^{2}}}\bigg{)}.$ (32) These results are both less than the mean path length with scattering, $\langle L_{\text{square rod}}\rangle=as^{2}$, due to trapped rays which run down the length of the rod.
## VIII 2D objects
We consider the problem of calculating the mean path length for a convex 2D object with relative refractive index $s$ and no scattering. The diffuse external radiation creates a Lambertian incidence at all points on the surface, where, in the 2D problem, the angles of incident rays are distributed as $\tfrac{1}{2}\cos\theta_{1}$. As in the 3D case, the average path length is obtained by integrating over all possible incident points and angles: $\displaystyle\langle L_{2D}^{0}\rangle$ $\displaystyle=\frac{1}{2P}\int_{P}\int_{-\pi/2}^{\pi/2}L(\theta_{1},r)\cos\theta_{1}\mathrm{d}\theta_{1}\mathrm{d}r$ (33) where $P$ is the object’s perimeter, and $\int_{P}$ is the integral around the perimeter, so that $P=\int_{P}\mathrm{d}r$.
Since $L$ is more naturally expressed in terms of $\theta_{2}$, it will be convenient to parametrize the integral as $\displaystyle\langle L_{2D}^{0}\rangle$ $\displaystyle=\frac{s}{2P}\int_{P}\int_{-\theta_{c}}^{\theta_{c}}L(\theta_{2},r)\cos\theta_{2}\mathrm{d}\theta_{2}\mathrm{d}r.$ (34) In the following we derive analytic expressions for simple 2D geometries, namely the infinite strip, the circle, and the square. The results are summarized and compared to the scattering case in Fig. 8. Figure 8: Comparison of the $s$ dependence of the zero-scattering mean path length $\langle L^{0}\rangle$ for 2D objects for which analytical expressions were derived. All values are normalized to the mean chord length $\langle C\rangle=\pi A/P$. The scattering case $\langle L\rangle=s\langle C\rangle$ is shown as a dashed line. ### VIII.1 Infinite strip We start with the simplest case of an infinite strip of width $a$. The derivation and results are very similar to the infinite slab presented in the main text. The chord length is $\displaystyle L_{\rm strip}=\frac{a}{\cos\theta_{2}}.$ (35) Like the slab, there is no TIR and all surface points are identical. The integral (34) then reduces to $\displaystyle\langle L_{\text{strip}}^{0}\rangle$ $\displaystyle=\frac{as}{2}\int_{-\theta_{c}}^{\theta_{c}}\mathrm{d}\theta_{2}$ $\displaystyle=as\theta_{c}.$ (36) This is less than the mean path length including scattering: $\displaystyle\langle L_{\text{strip}}\rangle=as\frac{\pi}{2}.$ (37) The result (36) can also be obtained via Eq. (4), since the probability of trapping is simply the fraction of angles greater than $\theta_{c}$: $P_{T}=1-2\theta_{c}/\pi$. ### VIII.2 Circle Figure 9: Light ray refracting through a circle. $L_{\rm min}$ is the minimum path length due to refraction. For a circle of radius $a$, which is analogous to the 3D sphere in many respects, the chord length is again independent of the point of entry: $\displaystyle L_{\rm circle}=2a\cos\theta_{2},$ (38) and the mean is then: $\displaystyle\langle L_{\text{circle}}^{0}\rangle$ $\displaystyle=as\int_{-\theta_{c}}^{\theta_{c}}\cos^{2}\theta_{2}\mathrm{d}\theta_{2}$ $\displaystyle=a\left[s\theta_{c}+\cos\theta_{c}\right],$ (39) which is less than the scattering mean path length: $\displaystyle\langle L_{\text{circle}}\rangle=\frac{\pi as}{2}.$ (40) ### VIII.3 Square For a square of side length $a$ there are two cases, $s\leq\sqrt{2}$ and $s\geq\sqrt{2}$. For $s\leq\sqrt{2}$, we have $\theta_{c}\geq\pi/4$ and there are no trapped rays; if a ray with angle $\theta_{2}$ undergoes TIR off one face - i.e. $\theta_{2}>\theta_{c}$ relative to that face's normal - the ray then hits an adjacent side with an angle $\pi/2-\theta_{2}<\theta_{c}$. The mean path length is therefore identical to that for the scattering case, i.e. $\displaystyle\langle L_{\text{square}}^{0}(s\leq\sqrt{2})\rangle=\langle L_{\text{square}}\rangle=\frac{\pi}{4}as.$ (41) For $s\geq\sqrt{2}$, since, as explained in the main text, reflections for $\theta<\theta_{c}$ may be ignored, the problem simplifies in that all rays that enter the square leave through the opposite face, either by hitting the opposite face directly or by totally internally reflecting off an adjacent face, which is guaranteed if $\theta_{c}\leq\pi/4$. Then $L=a/\cos\theta_{2}$ regardless of the incident angle or point of entry, and the integral (34) simplifies to $\displaystyle\langle L_{\text{square}}^{0}(s\geq\sqrt{2})\rangle=as\theta_{c}.$ (42) This expression coincides with (41) when $s=\sqrt{2}$.
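To reproduce the comparison of Fig. 8, the closed-form results (36), (39), (41) and (42) can simply be tabulated against the scattering case $\langle L\rangle/\langle C\rangle=s$; a short illustrative sketch (assuming NumPy):

```python
# Tabulate the zero-scattering mean path lengths of Sec. VIII, normalized to
# the mean chord length <C> = pi*A/P, as in Fig. 8.  Pure evaluation of the
# closed-form results (36), (39), (41)-(42).
import numpy as np

def strip(s):                      # Eq. (36), <C> = pi*a/2
    return 2.0 * s * np.arcsin(1.0 / s) / np.pi

def circle(s):                     # Eq. (39), <C> = pi*a/2
    tc = np.arcsin(1.0 / s)
    return 2.0 * (s * tc + np.cos(tc)) / np.pi

def square(s):                     # Eqs. (41)-(42), <C> = pi*a/4
    tc = np.arcsin(1.0 / s)
    return s if s <= np.sqrt(2.0) else 4.0 * s * tc / np.pi

print("  s   scattering   strip   circle   square")
for s in (1.0, 1.2, np.sqrt(2.0), 1.8, 2.5):
    print(f"{s:4.2f}   {s:8.3f}   {strip(s):6.3f}   {circle(s):6.3f}   {square(s):6.3f}")
```

At $s=1$ all shapes coincide with the scattering value, and the square expression switches branches continuously at $s=\sqrt{2}$.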
Figure 10: Three different cases of light rays (in red) crossing a square, broken down into different angular regions. This square has $s\leq\sqrt{2}$, ($\theta_{c}\geq\frac{\pi}{4}$). Alternatively, these results can be found, albeit with more difficulty, from the integral (34) for $s\leq\sqrt{2}$ as follows. Consider a test ray entering from the right along $0\leq z\leq a$. By symmetry we may let $0\leq\theta_{2}\leq\theta_{c}$. As shown in Fig. 10, the ray can either hit the opposite face directly, with a length $L=a/\cos\theta_{2}$, hit the top face and TIR and leave out the opposite face (giving a total path of $L={a}/{\cos\theta_{2}}$ again), or hit the top face and leave with $L={z}/{\sin\theta_{2}}$. This depends on the angle from the point of entry to the opposite top corner, $\theta_{t}=\tan^{-1}({z}/{a})$. Note that $\theta_{t}$ is always less than $\theta_{c}$, but $\theta_{t}$ may be greater or less than ${\pi}/{2}-\theta_{c}$. Explicitly: $\displaystyle L_{\rm square}=\left\\{\begin{aligned} &\frac{a}{\cos\theta_{2}}\quad&0\leq\theta_{2}\leq\theta_{t},\\\ &\frac{a}{\cos\theta_{2}}&\theta_{t}\leq\theta_{2}\leq\max(\tfrac{\pi}{2}-\theta_{c},\theta_{t}),\\\ &\frac{z}{\sin\theta_{2}}&\max(\tfrac{\pi}{2}-\theta_{c},\theta_{t})\leq\theta_{2}\leq\theta_{c}.\end{aligned}\right.$ (43) Inserting this into the integral (34) gives $\displaystyle\langle L_{\text{square}}^{0}(s\leq\sqrt{2})\rangle=$ $\displaystyle\frac{s}{a}\int_{0}^{a}\bigg{[}\int_{0}^{\max(\tfrac{\pi}{2}-\theta_{c},\theta_{t})}a\mathrm{d}\theta_{2}$ $\displaystyle+\int_{\max(\tfrac{\pi}{2}-\theta_{c},\theta_{t})}^{\theta_{c}}\frac{z}{\sin\theta_{2}}\cos\theta_{2}\mathrm{d}\theta_{2}\bigg{]}\mathrm{d}z.$ (44) This can be evaluated analytically by splitting the integrals at the point $\theta_{t}=\pi/2-\theta_{c}$, which is equivalent to $z=a\cot\theta_{c}$. If $z<a\cot\theta_{c}$, then $\theta_{t}<\pi/2-\theta_{c}$, and vice versa. Then $\displaystyle\langle L_{\text{square}}^{0}(s\leq\sqrt{2})\rangle=$ $\displaystyle\frac{s}{a}\bigg{\\{}\int_{0}^{a\cot\theta_{c}}\bigg{[}\int_{0}^{\tfrac{\pi}{2}-\theta_{c}}a\mathrm{d}\theta_{2}$ $\displaystyle+\int_{\tfrac{\pi}{2}-\theta_{c}}^{\theta_{c}}\frac{z}{\sin\theta_{2}}\cos\theta_{2}\mathrm{d}\theta_{2}\bigg{]}\mathrm{d}z$ $\displaystyle+\int_{a\cot\theta_{c}}^{a}\bigg{[}\int_{0}^{\theta_{t}}a\mathrm{d}\theta_{2}$ $\displaystyle+\int_{\theta_{t}}^{\theta_{c}}\frac{z}{\sin\theta_{2}}\cos\theta_{2}\mathrm{d}\theta_{2}\bigg{]}\mathrm{d}z\bigg{\\}}.$ (45) The integrals individually evaluate to logarithmic and trigonometric functions, which together cancel to give $\langle L_{\text{square}}^{0}(s\leq\sqrt{2})\rangle=\pi as/4$. ### VIII.4 Rectangle For a rectangle of sides $a,b$, we can use the same arguments as for the square. For $s\leq\sqrt{2}$ there are no trapped rays: $\displaystyle\langle L^{0}_{\text{rect}}(s<\sqrt{2})\rangle=\langle L_{\text{rect}}\rangle=\frac{\pi abs}{2(a+b)}.$ (46) And for $s\geq\sqrt{2}$, again all rays exit the opposite face, so the integral (34) reduces to $\displaystyle\langle L^{0}_{\text{rect}}(s\geq\sqrt{2})\rangle=\frac{2abs\theta_{c}}{a+b}.$ (47) ## IX Absorption in a low scattering sample One important application of the study of path length is to study the optical absorption of a refractive object. 
If we define the absorption $A$ as the fraction of rays that are absorbed compared to the total number of incident rays, it can be expressed in terms of the path length distribution as [5, 29]: $\displaystyle A=1-\int_{0}^{\infty}e^{-\alpha_{a}L}p(L)\mathrm{d}L.$ (48) where $p(L)$ is the probability distribution of path length in the absence of absorption and $\alpha_{a}$ is the absorption coefficient. In the low absorption limit, $\alpha_{a}\rightarrow 0$, the exponential may be expanded in a Taylor series, giving approximately $\displaystyle A=\alpha_{a}\langle L\rangle+\mathcal{O}(\alpha_{a}^{2}\langle L\rangle^{2}).$ (49) This common approximation highlights the importance of the mean path length, but would also suggest that the measured absorption could be discontinuous at zero scattering when $\langle L^{0}\rangle\neq\langle L\rangle$. In practical situations ideal zero-scattering is not possible, so we here analyze the absorption in the limit of low scattering. An implicit assumption in deriving Eq. (49) is that there are no significant contributions in $p(L)$ from $L$ comparable or larger than $1/\alpha_{a}$, since $\alpha_{a}L\ll 1$ is not valid for those $L$. This is no longer true in cases where there exists trapped, or even just long-lived, trajectories. For objects with refractive index and a very low scattering coefficient $\alpha_{s}$, it is appropriate to split the path length distribution $p(L)$ into: * • Rays that are directly reflected off the surface with probability $\bar{R}_{12}=1-\bar{T}_{12}$ (with zero path length), where $\bar{R}_{12}$ and $\bar{T}_{12}$ are the average (over a Lambertian distribution) reflection and transmission coefficients. * • Rays that enter, propagate inside, but do not scatter before exiting, with probability $P_{0}^{\prime}$ and (renormalized) path length distribution $p_{0}(L)$. * • Rays that enter, scatter and exit without undergoing total internal reflection, with probability $P_{S}^{\prime}$ and path length distribution $p_{S}(L)$. * • Rays that get trapped in long-lived trajectories, with probability $P_{T}^{\prime}$ and path length distribution $p_{T}(L)$. We can write explicitly: $\displaystyle p(L)=$ $\displaystyle\bar{R}_{12}\delta(L)+P_{0}^{\prime}p_{0}(L)+P_{S}^{\prime}p_{S}(L)+P_{T}^{\prime}p_{T}(L).$ (50) The probabilities in the low scattering limit are, for a general object: $\displaystyle P_{0}^{\prime}=$ $\displaystyle\bar{T}_{12}-\alpha_{s}\langle L_{0}\rangle$ $\displaystyle P_{S}^{\prime}=$ $\displaystyle(1-P_{T})\alpha_{s}\langle L_{0}\rangle$ $\displaystyle P_{T}^{\prime}=$ $\displaystyle\alpha_{s}\langle L_{0}\rangle P_{T},$ (51) where $\langle L_{0}\rangle$ is the mean path length for zero-scattering and $P_{T}$ is the average probability of a scattered ray entering a trapped trajectory. Note that: $\displaystyle\bar{R}_{12}+P_{0}^{\prime}+P_{S}^{\prime}+P_{T}^{\prime}=1.$ (52) To derive low order approximate expressions, the absorption may be split into contributions from different types of ray trajectories, following Eq. 
50: $\displaystyle A=P_{0}^{\prime}A_{0}+P_{S}^{\prime}A_{S}+P_{T}^{\prime}A_{T},$ (53) where $\displaystyle A_{0}=1-\int_{0}^{\infty}p_{0}(L)e^{-\alpha_{a}L}\mathrm{d}L$ $\displaystyle A_{S}=1-\int_{0}^{\infty}p_{S}(L)e^{-\alpha_{a}L}\mathrm{d}L$ $\displaystyle A_{T}=1-\int_{0}^{\infty}p_{T}(L)e^{-\alpha_{a}L}\mathrm{d}L.$ (54) In the low absorbance limit, $A_{0}$ and $A_{S}$ are simplified by the fact that the path length distributions $p_{0}(L)$ and $p_{S}(L)$ are confined mostly to small $L$ relative to the absorption mean free path $1/\alpha_{a}$, so $\displaystyle A_{0}\approx$ $\displaystyle\alpha_{a}\frac{\langle L_{0}\rangle}{\bar{T}_{12}},$ $\displaystyle A_{S}\approx$ $\displaystyle\alpha_{a}\frac{\langle L_{S}\rangle}{\bar{T}_{12}}+\mathcal{O}\Big{(}\frac{\alpha_{s}}{\alpha_{a}}\Big{)}$ (55) where $\langle L_{0}\rangle/\bar{T}_{12}$ is the mean path length of rays that enter, and $L_{S}/\bar{T}_{12}$ is the mean path length of rays that enter and scatter but do not get trapped into long-lived trajectories. In order to simplify further, we now consider the limit of low absorption and lower scattering, i.e. the scattering coefficient $\alpha_{s}$ is much less than the absorption coefficient, and both are small, $\alpha_{s}\ll\alpha_{a}\ll 1/\langle L_{0}\rangle$. $A_{T}$ can then be simplified to first order by recognizing that all trapped rays get absorbed since the absorption coefficient is much higher than the scattering coefficient, therefore: $\displaystyle A_{T}\approx 1-\mathcal{O}(\alpha_{s}/\alpha_{a}).$ (56) Then altogether the absorption is $\displaystyle A\approx$ $\displaystyle\bigg{(}1-\alpha_{s}\frac{\langle L_{0}\rangle}{\bar{T}_{12}}\bigg{)}\alpha_{a}\langle L_{0}\rangle$ $\displaystyle+(1-P_{T})\alpha_{s}\langle L_{0}\rangle\alpha_{a}\frac{\langle L_{S}\rangle}{\bar{T}_{12}}$ $\displaystyle+\alpha_{s}\langle L_{0}\rangle P_{T}.$ (57) In fact the terms containing the product $\alpha_{a}\alpha_{s}$ are second order, and neglecting them leaves $\displaystyle A$ $\displaystyle\approx\alpha_{a}\langle L_{0}\rangle+\alpha_{s}\langle L_{0}\rangle P_{T}\qquad\left[\alpha_{s}\ll\alpha_{a}\ll 1/\langle L_{0}\rangle\right].$ If $\alpha_{s}\ll\alpha_{a}$, it then reduces to $A\approx\alpha_{a}\langle L_{0}\rangle$. This demonstrates that for realistic objects with residual scattering, the zero-scattering mean path length is the relevant quantity in terms of optical absorption, providing that absorption dominates over scattering. Although outside the scope of this work, the effect of other imperfections, such as the wave nature of light or surface scattering from imperfection, can be incorporated using similar arguments. In these cases we also deduce that a small non-zero absorption coefficient will dramatically limit the contribution of trapped and long-lived trajectories, resulting again in the zero-scattering mean path length being the relevant quantity. ## References * Czuber [1884] A. Czuber, Zur theorie der geometrischen wahrscheinlihkeiten, Sitzungsber. Akad. Wiss. Wien 90, 719 (1884). * Kellerer [1971] A. M. Kellerer, Considerations on the random traversal of convex bodies and solutions for general cylinders, Radiation Res. 47, 359 (1971). * Coleman [1969] R. Coleman, Random paths through convex bodies, J. Appl. Proba. , 430 (1969). * De Kruijf and Kloosterman [2003] W. J. M. De Kruijf and J. L. Kloosterman, On the average chord length in reactor physics, Ann. Nucl. Energy 30, 549 (2003). * Blanco and Fournier [2003] S. Blanco and R. 
Fournier, An invariance property of diffusive random walks, Europhys. Lett. 61, 168 (2003). * Mupparapu _et al._ [2015] R. Mupparapu, K. Vynck, T. Svensson, M. Burresi, and D. S. Wiersma, Path length enhancement in disordered media for increased absorption, Opt. Express 23, A1472 (2015). * Tommasi _et al._ [2020a] F. Tommasi, L. Fini, F. Martelli, and S. Cavalieri, Invariance property in scattering media and absorption, Opt. Comm. 458, 124786 (2020a). * Scheibelhofer _et al._ [2018] O. Scheibelhofer, P. R. Wahl, B. Larchevêque, F. Chauchard, and J. G. Khinast, Spatially resolved spectral powder analysis: experiments and modeling, Appl. Spectrosc. 72, 521 (2018). * Sprafke and Wehrspohn [2015] A. N. Sprafke and R. B. Wehrspohn, Current concepts for optical path enhancement in solar cells, in _Photon Management in Solar Cells_ (Wiley, 2015) Chap. 1, pp. 1–20. * Sychugov [2019] I. Sychugov, Analytical description of a luminescent solar concentrator, Optica 6, 1046 (2019). * Sychugov [2020] I. Sychugov, Geometry effects on luminescence solar concentrator efficiency: analytical treatment, Appl. Opt. 59, 5715 (2020). * Wiersma [2008] D. S. Wiersma, The physics and applications of random lasers, Nature Phys. 4, 359 (2008). * Nelson and Prézelin [1993] N. B. Nelson and B. B. Prézelin, Calibration of an integrating sphere for determining the absorption coefficient of scattering suspensions, Appl. Opt. 32, 6710 (1993). * Villanueva _et al._ [2016] Y. Villanueva, C. Veenstra, and W. Steenbergen, Measuring absorption coefficient of scattering liquids using a tube inside an integrating sphere, Appl. Opt. 55, 3030 (2016). * Ravey and Mazeron [1982] J.-C. Ravey and P. Mazeron, Light scattering in the physical optics approximation; application to large spheroids, J . Opt. 13, 273 (1982). * Chowdhury _et al._ [1992] D. Q. Chowdhury, P. W. Barber, and S. C. Hill, Energy-density distribution inside large nonabsorbing spheres by using mie theory and geometrical optics, Appl. Opt. 31, 3518 (1992). * Macke [1993] A. Macke, Scattering of light by polyhedral ice crystals, Appl. Opt. 32, 2780 (1993). * Bi and Yang [2013] L. Bi and P. Yang, Physical-geometric optics hybrid methods for computing the scattering and absorption properties of ice crystals and dust aerosols, in _Light Scattering Reviews 8_ (Springer, 2013) pp. 69–114. * Kokhanovsky and Zege [1995] A. A. Kokhanovsky and E. P. Zege, Local optical parameters of spherical polydispersions: simple approximations, Appl. Opt. 34, 5513 (1995). * Sun _et al._ [2017] B. Sun, P. Yang, G. W. Kattawar, and X. Zhang, Physical-geometric optics method for large size faceted particles, Opt. Express 25, 24044 (2017). * Ackerman and Stephens [1987] S. A. Ackerman and G. L. Stephens, The absorption of solar radiation by cloud droplets: An application of anomalous diffraction theory, J. Atmosph. Sci. 44, 1574 (1987). * Mitchell [2000] D. L. Mitchell, Parameterization of the Mie extinction and absorption coefficients for water clouds, J. Atmosph. Sci. 57, 1311 (2000). * Xu _et al._ [2003] M. Xu, M. Lax, and R. R. Alfano, Anomalous diffraction of light with geometrical path statistics of rays and a Gaussian ray approximation, Opt. Lett. 28, 179 (2003). * van de Hulst [1981] H. C. van de Hulst, _Light scattering by small particles_ (Dover, New York, 1981). * Bohren and Huffman [2008] C. F. Bohren and D. R. Huffman, _Absorption and scattering of light by small particles_ (John Wiley & Sons, 2008). * Kokhanovsky and Macke [1997] A. A. Kokhanovsky and A. 
Macke, Integral light-scattering and absorption characteristics of large, nonspherical particles, Appl. Opt. 36, 8785 (1997). * Gille [1999] W. Gille, The small-angle scattering correlation function of the cuboid, J. Appl. Cryst. 32, 1100 (1999). * Savo _et al._ [2017] R. Savo, R. Pierrat, U. Najar, R. Carminati, S. Rotter, and S. Gigan, Observation of mean path length invariance in light-scattering media, Science 358, 765 (2017). * Tommasi _et al._ [2020b] F. Tommasi, L. Fini, F. Martelli, and S. Cavalieri, Invariance property in inhomogeneous scattering media with refractive-index mismatch, Phys. Rev. A 102, 043501 (2020b). * Yablonovitch [1982] E. Yablonovitch, Statistical ray optics, J. Opt. Soc. Am. 72, 899 (1982). * [31] See Supplemental Material at [URL will be inserted by publisher] for additional technical details. * Duntley [1942] S. Q. Duntley, The optical properties of diffusing materials, J. Opt. Soc. Am. 32, 61 (1942). * Berry [1981] M. V. Berry, Regularity and chaos in classical mechanics, illustrated by three deformations of a circular ’billiard’, Eur. J. Phys. 2, 91 (1981).
B.Comp. Dissertation Fair Multi-party Machine Learning - a game theoretic approach By Chen Zhiliang Department of Computer Science School of Computing National University of Singapore 2019/2020 B.Comp. Dissertation Fair Multi-party Machine Learning - a game theoretic approach By Chen Zhiliang Department of Computer Science School of Computing National University of Singapore 2019/2020 Project no.: H148310 Advisor: Prof Bryan Kian Hsiang Low Deliverables: Report - 1 volume Abstract High performance machine learning models have become highly dependent on the availability of large quantity and quality of training data. To achieve this, various central agencies such as the government have suggested for different data providers to pool their data together to learn a unified predictive model, which performs better. However, these providers are usually profit- driven and would only agree to participate in the data sharing process if the process is deemed both profitable and fair for themselves. Due to the lack of existing literature, it is unclear whether a fair and stable outcome is possible in such data sharing processes. Hence, we wish to investigate the outcomes surrounding these scenarios and study if data providers would even agree to collaborate in the first place. Tapping on cooperative game concepts in Game Theory, we introduce the data sharing process between a group of agents as a new class of cooperative games with modified definition of stability and fairness. Using these new definitions, we then theoretically study the optimal and suboptimal outcomes of such data sharing processes and their sensitivity to perturbation. Through experiments, we present intuitive insights regarding theoretical results analysed in this paper and discuss various ways in which data can be valued reasonably. Subject Descriptors: Game Theory Mechanism Design Machine Learning Keywords: Multi-party Machine Learning, Cooperative Games, Game Theory, Shapley Value, Characteristic Function Acknowledgement I would like to thank my advisor, Prof. Bryan Low, for guiding me throughout this project. He showed me numerous traits a good researcher should have and never hesitated to point me in the right direction whenever I appeared unsure of myself. I would also like to thank my parents, who have always supported me in my endeavours. Lastly, I thank Ong Min for offering me many words of encouragement and being my emotional pillar of support throughout the years. ###### Contents 1. 1 Introduction 1. 1.1 A motivating example behind fair multi-party machine learning 2. 1.2 Our contribution and research focus 2. 2 Related Work 3. 3 Background on cooperative games 1. 3.1 Defining a cooperative game with a characteristic function 1. 3.1.1 Outcomes of cooperative games 2. 3.1.2 Stability and Core 3. 3.1.3 Shapley value 4. 3.1.4 Limitations in context of data sharing 4. 4 Deviating from conventional cooperative games 1. 4.1 Overview 2. 4.2 Alternative definition of stability 3. 4.3 Definition of proportionality 5. 5 Desirable properties of outcomes in modified cooperative game 1. 5.1 Assumptions of modified cooperative game 6. 6 Optimal outcome 7. 7 Suboptimal Outcomes 1. 7.1 Stable but not Proportional outcome 2. 7.2 Proportional but not stable outcome 8. 8 Sensitivity Analysis 1. 8.1 Inclusion of an new agent 2. 8.2 Perturbation of contribution of one particular player 9. 9 Experiments 1. 9.1 Introducing two different characteristic function 1. 9.1.1 Fisher Information 2. 9.1.2 Mutual Information 2. 9.2 Experiment results 1. 
9.2.1 Shapley value 2. 9.2.2 Characteristic functions affect optimality of outcomes 3. 9.2.3 Settling for a suboptimal outcome 4. 9.2.4 Perturbation of contribution of one particular player 3. 9.3 Extensions on other machine learning models 1. 9.3.1 Logistic Regression for classification problems 2. 9.3.2 Neural Networks 4. 9.4 Extension on real life data 10. 10 Future Work 1. 10.1 Choosing reasonable characteristic functions 2. 10.2 Desirable suboptimal outcomes 11. 11 Conclusion ## 1 Introduction In recent years, there has been an increasing amount of work done in the field of collaborative machine learning. As massive amount of data is held by different data providers, it becomes worthwhile for them to work together, possibly combining these datasets to learn a unified predictive model. Each data provider can then ideally make use of this model, which arguably performs better than any model created from small individual dataset. In fact, the importance of pooling data together from multiple sources has been underscored by various public initiatives such as the Ocean Protocol framework (created jointly by Pricewaterhousecoopers Singapore and Singapore-based startup DEX). From the point of view of a central agency, such as the government, data sharing is desirable because citizens, businesses and society stand to benefit from better predictive models in general. Most work on collaborative machine learning in a multi-agent setting then focuses on parallelizing the process of learning a predictive model from decentralized data sources. However, there has been little to no work focused on evaluating the relative contribution of agents and investigating the outcomes surrounding such collaborative processes. In particular, we wish to investigate if we can measure each agent’s relative contribution reasonably and find a sufficiently fair reward to allocate to each player so that they are satisfied with the collaborative process. Figure 1: Overview of data sharing process and this paper’s focus The first challenge arises in terms of evaluating the contribution of each agent in a multi-party machine learning process - when data is jointly pooled from multiple sources to create a unified model, what is the relative contribution of each agent in the collaborative effort? Answering this question is paramount because the reward each agent walks away from collaboration heavily depends on his relative contribution. The next challenge arises after one has established the relative contribution of each agent in the collaborative process - given this contribution measure, how can we reward the participating agents appropriately after collaboration (the concept of ’reward’ here refers to the value of the resulting model that each agent receives after collaborating)? While the reward given to an agent should clearly commensurate his contribution (fair), it should also be attractive enough to ensure agents are incentivised to continue collaborating (stable). We need to investigate if such fair and stable allocation of reward is even possible given the data contributed by each agent, and if not, what possible alternatives are available. To contextualise the above challenges, we introduce the following motivating example. ### 1.1 A motivating example behind fair multi-party machine learning Artificial intelligence has shown great promise in healthcare for many applications such as tumour detection and the diagnosis of eye diseases. 
However, much of its performance is dependent on the availability of a sufficient amount of high-quality data. Figure 2: BioMind, developed by the AI Research Centre for Neurological Disorders and Capital Medical University, beat human experts in a brain tumour detection competition in June 2018. Assume a government healthcare agency wants hospitals to use patients' MRI (magnetic resonance imaging) scans to create a model for automatic tumour detection. However, the data held by an individual hospital may be insufficient to produce a good model. From the point of view of the government, it is desirable for all the hospitals to pool their data to create a unified model - this implies that patients enjoy better predictive results. However, in reality, hospitals are profit-driven and will only agree to participate if the following two criteria are met: first, a hospital wants the reward gained from the collaborative effort to be higher than any possible reward derived when it chooses to work alone or with a smaller group of hospitals. Second, a hospital expects the reward received at the end of the collaboration to be fair and proportional to its relative contribution. The differing level of reward here refers to the predictive model (of varying performance) that is given back to the hospitals. The first criterion stems directly from the profit-driven aim of hospitals - a better predictive model implies that a hospital can make more profits, and thus it clearly prefers to gain more profits from collaboration than from other actions (one may find this similar to the idea of opportunity cost). On the other hand, the second criterion implies that hospitals holding high-quality data, worried that smaller hospitals with relatively inferior data may become freeloaders in the collaborative process, expect the reward given to be fair. As a result, the central agency needs to find a sound way to measure the contribution of each hospital in the collaborative process and decide the value of the model that should be given to each hospital after collaborating. In particular, it is important to investigate what defines an optimal outcome, whether an optimal outcome is even possible and, if not, what alternatives are available. This motivates our paper. ### 1.2 Our contribution and research focus This paper taps heavily on concepts related to cooperative games in Game Theory. Our research focus lies in the theoretical representation of a multi-party machine learning process as a new class of cooperative game with new definitions of stability and fairness. We then analyse various optimal and suboptimal outcomes in such games and show how some of these outcomes can be derived. These outcomes provide insights into how data providers behave in a multi-party machine learning process and allow us to theoretically investigate if the collaboration will be successful and give desirable results. In Section 3, we provide a brief overview of cooperative games in Game Theory and show that conventional cooperative games are unable to account for some unique subtleties present in data sharing and other similar processes. In Sections 4 and 5, we modify conventional assumptions in cooperative games to account for this and introduce a new class of cooperative game with the production of non-rivalrous goods; we also redefine the concepts of stability and fairness in such games by tapping on the conventional concept of the Shapley value in Game Theory.
In Section 6, we introduce a computationally efficient way to check if an outcome of a modified game is stable and investigate how an outcome can be optimal. In Section 7, we also analyse various suboptimal outcomes in a data sharing process. Lastly, in Section 8, we perform sensitivity analysis to analyse how sensitive an optimal outcome is towards marginal changes in the data sharing process. We also perform a series of experiments in Section 9 using two different model valuation measures and demonstrate some results from previous sections. ## 2 Related Work A literature review reveals that while the study of fairness in the domain of data sharing and collaborative learning has become popular in recent years, most work has focused solely on evaluating the contribution of data providers in a data sharing process or on treating the process as non-cooperative. In addition, none of the existing works analyses how a fair and stable outcome can be derived after evaluating the data providers' contribution. The use of the Shapley value [11], a classic concept stemming from cooperative games, as an evaluation measure of one's contribution in data sharing processes [13, 3] has become increasingly popular in recent years. However, these works focus on alleviating the computational complexities in calculating the Shapley value with respect to agents' data; while the usage of the Shapley value in these works is similar to ours, they do not comment on how one can derive an optimal reward payoff for participating data providers after finding the Shapley value, which our work does. Moreover, these works also fail to point out that a data sharing process contains subtleties which deviate from conventional cooperative games, which our paper examines in Section 4. Furthermore, works such as [14, 12] attempt to formulate the data sharing process as a non-cooperative game, where each participating player contributes data to create a unified predictive model. However, such a non-cooperative game rewards every participating player with the same model at the end of the game, which violates our aim to endow players with a reward commensurate with their contribution. Moreover, non-cooperative games fail to account for the possibility of group cooperation between players, which not only is more realistic in real life, but also raises the question of fair division of reward at the end of the game. Some existing literature [13, 9] also remarked that participating players could receive the same model and be remunerated with varying amounts of money after data sharing. However, it is uncertain how to determine the "exchange rate" between data contribution and money, and some surveys [1] mention that people feel that data and money are not exchangeable. On the contrary, our work focuses on distributing a subset of the learnt model (with value commensurate with the player's contribution to this learnt model) back to participating players directly; this is desirable because, as long as we define the "value" of a learnt model appropriately using some theoretical measures, the players' reward (in the form of a learnt model) can be directly pegged to this "value". ## 3 Background on cooperative games In Game Theory, cooperative games provide a framework for economists and mathematicians to study how a group of self-interested players chooses to behave when they are allowed to cooperate with each other to generate some resources of value.
Despite being self-interested, these players may still choose to cooperate because there may be positive externalities created when larger groups are formed. However, it is not always the case that all players choose to cooperate in one single large group; depending on the resources created and the amount of positive externalities generated, it is entirely possible that we observe multiple smaller coalition of players forming. In fact, this is what we observe in real life: business cooperation usually exists between only a few corporations. However, our paper considers it desirable for all players to cooperate and show that under certain optimality conditions, they will choose to do so. Much analysis in cooperative games then focuses on what coalitions will form and how the resources generated in such coalitions can be divided amongst the participating players (termed as outcomes of the game). In the following sections, we will formally define such cooperative games [7] and investigate how such definitions can be interpreted in the context of a multi-party machine learning process. Subsequently, we identify how conventional cooperative game concepts fail to represent the data sharing process adequately and suggest modifications. ### 3.1 Defining a cooperative game with a characteristic function Assume that $N$ represents a set of $n$ players. Define $v\mathrel{\mathop{\mathchar 58\relax}}2^{n}\xrightarrow{}\mathcal{R}$ as a characteristic function. A characteristic function maps any coalition $C\subseteq N$, or a subset of players, to a real number which represents the value that is generated when this coalition of player chooses to work together. In the context of multi-party machine learning, we can imagine that when players choose to work together (in the form of data sharing) to create a model from the pooled data, this model represents the value created by a coalition of players; the characteristic function $v(.)$ can be certain statistical or information-centric metric used to indicate the value of such models. Furthermore, one can even use the model’s performance over a test data set as an indicator of its value. In fact, various analyses in this paper make no assumptions of the exact characteristic function used in a data sharing process. However, towards the end of the paper, we justify the use of certain functions over others. As we will see from experiments in section 9, there are many different interpretations of characteristic functions and they lead to different outcomes. #### 3.1.1 Outcomes of cooperative games An outcome of a cooperative game with $n$ players and a given characteristic function $v(.)$ is characterised entirely by the following two parts: 1. 1. a partition of players into different coalitions, called a coalition structure; and 2. 2. a payoff vector $<x_{1},...,x_{n}>$, which distributes the value of each coalition among its members $\sum_{i\in C}x_{i}\leq v(C)$ for all coalition $C$ formed in the coalition structure. The first part tells us which coalitions will be formed; the second part tells us that for each coalition formed in the first part, how value created in this coalition is divided amongst the players in it. In particular, we notice that second part implies that the sum of payoff distributed to members of a coalition cannot exceed the value generated by this coalition; for example, when a group of firms work together to generate some profits, the payment given to all the firms in total cannot exceed the profit generated in the first place. 
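To make these definitions concrete, a characteristic function over a small set of players can be stored as a dictionary keyed by coalitions, and the feasibility condition $\sum_{i\in C}x_{i}\leq v(C)$ checked by enumeration. The following Python sketch is purely illustrative; the toy valuation $v(C)=|C|$ and the helper names are ours, not from the dissertation:

```python
# Illustrative sketch: a characteristic function stored as a dict over
# frozensets, and a check of the conventional outcome condition
#   sum_{i in C} x_i <= v(C)   for every coalition C in the structure.
from itertools import combinations

players = ["A", "B", "C"]
v = {frozenset(c): len(c)            # toy monotonic valuation: v(C) = |C|
     for r in range(len(players) + 1)
     for c in combinations(players, r)}

def is_feasible_outcome(coalition_structure, payoff, v):
    """coalition_structure: list of sets of players; payoff: dict player -> x_i."""
    return all(sum(payoff[i] for i in C) <= v[frozenset(C)] + 1e-9
               for C in coalition_structure)

# The grand coalition with an equal split of v(N) = 3 is a feasible outcome.
print(is_feasible_outcome([set(players)], {"A": 1, "B": 1, "C": 1}, v))   # True
print(is_feasible_outcome([set(players)], {"A": 2, "B": 2, "C": 2}, v))   # False
```

This brute-force representation is only practical for small $n$, but it is convenient for experimenting with the definitions in the remainder of this section.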
For any given cooperative game, there are many possible outcomes. Fundamental research in cooperative game theory focuses on finding outcomes with certain desirable or logical properties. In the following section, we review some desirable properties in which an outcome of a conventional cooperative game can have, assuming that all players choose to cooperate together. While the definition of these properties will be modified later on in our paper, the intuitive meaning behind these properties is still relevant. #### 3.1.2 Stability and Core Assuming that the grand coalition $N$ forms and all participating players choose to cooperate, a stable outcome ensures that any player does not have incentive to leave this grand coalition. In the context of multi-party machine learning, this property is desirable because it encourages players to share data in a larger coalition to create a unified model instead of breaking off into smaller groups. ###### Definition 1. Let $<x_{1},x_{2},...,x_{n}>$ be a payoff vector to players 1,2,…,n. Then this payoff vector is stable if and only if $\sum_{i\in C}x_{i}\geq v(C)$ for all $C\subseteq N$. Intuitively, an outcome is stable if the reward received by each player at the end is such that no subset of players can simultaneously break off into smaller sub-coalitions which generate higher rewards than the sum of payment each player earns from the current reward. As the name suggests, one can interpret stability as some kind of equilibrium for the players, where there is no incentive for anyone to take deviating actions. In conventional cooperative games, the set of all stable solution payoffs is called the Core. Notice that deriving the stable set of solution consists of finding the feasible region satisfying $2^{n}$ constraints (by Definition 1). Hence, the Core does not necessarily exist. #### 3.1.3 Shapley value On the other hand, when we divide the value created in the grand coalition $N$ to the participating players in a payoff vector, we also wish to capture the notion of fairness in this outcome. While the notion is ”fairness” is not captured in any one single definition, it can be represented reasonably by a few different properties. In cooperative games, the shapley value derives a payoff vector $<\phi_{1},\phi_{2},\dots,\phi_{n}>$ based on each player’s marginal contribution to all possible sub-coalitions in the following equation, given that $v(.)$ is the characteristic function: $\phi_{i}=\sum_{S\subseteq N\setminus\\{i\\}}{\frac{|S|!\;(N-|S|-1)!}{N!}}(v(S\cup\\{i\\})-v(S))$ (1) This payoff vector satisfies the following properties which capture the notion of ”fairness” intuitively: 1. 1. Symmetry; if $v(C\cup i)-v(C)=v(C\cup j)-v(C)$ for all $C\subseteq N$, then $\phi_{i}=\phi_{j}$. That is, if player $i$ and player $j$ has the same marginal contribution to every possible coalition of players, then $\phi_{i}=\phi_{j}$ 2. 2. Null player; if $v(C\cup i)=v(C)$ for all $C\subseteq N$, then $\phi_{i}=0$. That is, if player $i$ has zero marginal contribution to any subcoalition, then $\phi_{i}=0$ 3. 3. Deservedness if $v(C\cup i)\leq v(C\cup j)$ for all $C\subseteq N$, then $\phi_{i}\leq\phi_{j}$. That is, if player $j$ has a larger marginal contribution than player $i$ for all possible subcoalitions, then $\phi_{i}\leq\phi_{j}$. 4. 4. Efficiency; $\sum^{n}_{i}\phi_{i}=v(N)$. 
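As a concrete illustration of Eq. (1) and the properties above, the Shapley value of a small game can be computed by direct enumeration over subsets; a minimal sketch (the toy valuation $v(C)=|C|$ is ours, not from the dissertation):

```python
# Direct enumeration of the Shapley value in Eq. (1); an illustrative sketch,
# exponential in the number of players.
from itertools import combinations
from math import factorial

players = ["A", "B", "C"]
v = {frozenset(c): len(c)                      # toy monotonic valuation v(C) = |C|
     for r in range(len(players) + 1)
     for c in combinations(players, r)}

def shapley_value(players, v):
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        phi_i = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                phi_i += weight * (v[frozenset(S) | {i}] - v[frozenset(S)])
        phi[i] = phi_i
    return phi

phi = shapley_value(players, v)
print(phi)                                                    # {'A': 1.0, 'B': 1.0, 'C': 1.0}
print(abs(sum(phi.values()) - v[frozenset(players)]) < 1e-9)  # Efficiency: shares sum to v(N)
```

For this symmetric toy game every player receives $\phi_{i}=1$ and the shares sum to $v(N)$, illustrating the Symmetry and Efficiency properties.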
The last property is important because it allows one to interpret the shapley value as a measure of relative contribution with respect to the value of resources created by the grand coalition. In particular, it is not difficult to see that the shapley value gives us one particular outcome (as defined in Section 3.1.1) of a cooperative game by allocating the value created by the grand coalitions to a player with the properties above. Unfortunately, the allocation given by the Shapley value may not be stable in conventional cooperative games (since a stable solution may not even exist). In later sections, we will observe the recurring theme where fairness and stability may not be achievable together. #### 3.1.4 Limitations in context of data sharing Even though we are tempted to model the data sharing process as a cooperative game and regard the combined learnt model as the value generated by a group of players, we observe that under conventional game theory, the ideal outcomes (based on conventional concepts of stability and fairness) of such games are less than desirable and do not promote sharing of data. For example, imagine if two agents A and B whose datasets are valued in the following manner: $\begin{split}&v(A)=1\\\ &v(B)=1\\\ &v(A\cup B)=2\end{split}$ (2) Based on conventional outcomes studied in cooperative game theory, if the players choose to collaborate, the only stable solution payoff is ${x_{A},x_{B}}={1,1}$. Furthermore, the shapley value gives us $\phi_{A},\phi_{B}=1,1$ as well. But clearly, since players are producing predictive models, which can be duplicated for free, we should be rewarding both players with the payoff ${x_{A},x_{B}}={2,2}$ (notice we did not create resources from the thin air, but merely duplicated them). We first try to understand why this happens. In example (2), the shapley value explains the contribution of each player well: it seems correct that both players have contributed to $v(A\cup B)$ equally. However, we observe that, unlike conventional cooperative games, predictive models created from the data sharing process can be duplicated in part or entirely for negligible cost. As such, we do not have to restrict our total payoff to a coalition of players such that it sums to the value of one data model. In fact, in the extreme case, we can distribute the same entire learnt model to each player, regardless of how much they contributed to it (of course, this payoff is not fair). As such, the shapley value should merely give us an idea of what the relative contribution of each player is, and not dictate the final division of reward to each player. In the following section, we will modify conventional assumptions surrounding cooperative games to suit a multi-party machine learning data-sharing process and analyse what can be ideal outcomes in such modified games. In particular, we observe that the model created by coalitions in a data-sharing process can be categorised as a non-rivalrous good, where players can enjoy part of it without diminishing its availability to other players. Furthermore, we use the Shapley value merely as a measure of contribution between the players and not as an indicator of the reward that should be allocated to each of them. ## 4 Deviating from conventional cooperative games ### 4.1 Overview We wish to capture the notion that resources created are non-rivalrous in our modified game. In fact, each player can receive as high as the total value created by the group. 
However, the central agency mediating the collaborative process is able to control the level of reward given to any player. To do this, we need to redefine what a payoff vector is. ###### Definition 2. In a modified game, a payoff vector $<x_{1},x_{2},...,x_{n}>$ to a coalition structure is such that for all coalitions $C$ belonging to the coalition structure formed, $x_{i}\leq v(C)$ for any player $i\in C$. Compared to the definition of payoff vector in Section 3.1.1, this new definition restricts the payment to a coalition of players such that each player cannot receive more than the value created in this coalition. Notice this is different from conventional cooperative games, which dictate that the sum of payments received by all the players cannot exceed the value created in a coalition. The data sharing process can now be represented entirely by this modified game with a new definition of payoff vector in an outcome. However, changing this single definition leads to a myriad of effects on the concept of stability, fairness and outcome, which will be covered in the subsequent sections. ### 4.2 Alternative definition of stability Under this modified game, the original definition of stability (Definition 1) no longer ensures that a payoff vector is stable. For example, for a three- player game with the following characteristic function defining the value of models created in various coalitions: $\begin{split}&v(A)=1,v(B)=1,v(C)=1\\\ &v(A\cup B)=2,v(A\cup C)=2,v(B\cup C)=2\\\ &v(A\cup B\cup C)=3\end{split}$ (3) A payoff vector $<x_{A},x_{B},x_{C}>=<1.5,1.5,1.5>$ would have been stable according to Definition 1 because it is easily verifiable the sum of payoffs to any subset of players exceed the value created by them. However, in a modified game where resources created by a coalition can be duplicated for free, player $A$ and player $B$, for example, can choose to leave the grand coalition and work together to both earn a payoff of higher than 1.5 (Since they can pool their data together in subcoalition $A\cup B$ to potentially create a model of value 2 and earn a payoff $<x_{A},x_{B}>=<2,2>$). Thus, the original definition of stability fails for our modified game. We redefine, in an intuitive way, the definition of stability in a modified game: ###### Definition 3. In a modified game where the grand coalition $N$ of $n$ players forms, a payoff vector $<x_{1},x_{2},...,x_{n}>$ is stable if and only if for any subcoalitions $C\subseteq N$, there exists a player $k\in C$ such that $x_{k}\geq v(C)$ An intuitive way to understand this definition is that players in the grand coalition are only satisfied with their current reward if there are no subcoalitions where a subset of players can choose to form privately and simultaneously earn a higher reward. As such, considering that the maximum payoff one can earn in any subcoalition is the value created by the subcoalition itself, there must be at least one player in every possible subcoalition $C$ who is contented with his current payoff under the grand coalition (i.e $x_{k}\geq v(C)$), preventing that subcoalition of players from deviating privately (since the refusal of a single player is sufficient to prevent a group of players from deviating together). 
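Definition 3 can be verified by brute force over all $2^{n}-1$ non-empty subcoalitions. The sketch below (illustrative only, using a dictionary-based valuation as in the earlier snippets) applies it to the three-player game of Eq. (3): the equal split $<1.5,1.5,1.5>$ is rejected, while a payoff such as $<1,2,3>$ passes:

```python
# Brute-force check of the modified stability notion (Definition 3): every
# non-empty subcoalition C must contain at least one player k with x_k >= v(C).
from itertools import combinations

players = ["A", "B", "C"]
v = {frozenset(c): len(c)                      # Eq. (3): singletons 1, pairs 2, grand 3
     for r in range(len(players) + 1)
     for c in combinations(players, r)}

def is_stable_modified(payoff, v, players):
    for r in range(1, len(players) + 1):
        for C in combinations(players, r):
            if max(payoff[k] for k in C) < v[frozenset(C)]:
                return False                   # every member of C gains by deviating to C
    return True

print(is_stable_modified({"A": 1.5, "B": 1.5, "C": 1.5}, v, players))  # False
print(is_stable_modified({"A": 1.0, "B": 2.0, "C": 3.0}, v, players))  # True
```

The equal split fails because any pair of players could defect and award each of its members the full pair value of 2, which is exactly the deviation described above.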
### 4.3 Definition of proportionality In Section 3, we introduced how the Shapley value derives a payoff vector $\phi(N)=<\phi_{1},\phi_{2},...,\phi_{n}>$ $=<x_{1},x_{2},\dots,x_{n}>$ (assuming all $n$ players work together in coalition $N$) which dictates how the value generated from the grand coalition should be divided amongst its members to maintain some fair properties. In fact, the shapley value indicates the level of contribution of each member with respect to the model created of value $v(N)$. However, in a modified game, Definition 2 places bounds on the payoff to individual players instead on the sum of payoffs. We can scale the vector $\phi(N)$ by a positive constant $\alpha$ such that the resulting payoff vector $<x_{1},\dots,x_{n}>=\alpha\phi(N)$ preserves the ratio of contribution between each player. Here, we introduce the concept of proportionality to categorise any payoff vector satisfying the following property: ###### Definition 4. A payoff vector $<x_{1},x_{2},\dots,x_{n}>$ is proportional for a given contribution measure vector $\mathcal{C}\in\mathcal{R}^{n}$ if and only if there exists a positive constant $\alpha$ such that $<x_{1},x_{2},\dots,x_{n}>=\alpha\mathcal{C}$. In the rest of the paper, we will assume the shapley value as our contribution measure because a payoff vector proportional to the shapley value preserves its desirable properties covered in Section 3.1.3. ## 5 Desirable properties of outcomes in modified cooperative game Recall that an outcome of a cooperative game is defined by the resulting coalition structure formed and payoff vector defined over the coalitions in the coalition structure (Section 3.1.1). Some outcomes satisfy certain desirable properties in data sharing. Here, we summarise these properties. Given a modified game with a group of $n$ players, a characteristic function $v(.)$ used to measure the value of predictive model created by the data sharing process and the subsequent shapley value $<\phi_{1},\phi_{2},...,\phi_{n}>$ derived as the contribution measure, we want an outcome consisting of a coalition structure $\mathcal{S}$ and payoff vector $<x_{1},x_{2},\dots,x_{n}>$ to have the following properties: 1. 1. Formation of grand coalition coalition structure $\mathcal{S}$ consists of a single grand coalition with $n$ players. That is, all players choose to cooperate in the data sharing process. 2. 2. Stability (Definition 3) All players continue to collaborate in a large group. 3. 3. Fairness 1. (a) Proportionality (Definition 4) Players receive reward proportional to his relative contribution. 2. (b) Null Player If $\phi_{i}=0$, then $x_{i}=0$; a player with no contribution gets zero reward. 3. (c) Symmetry If $\phi_{i}=\phi_{j}$, then $x_{i}=x_{j}$; two players with equal contribution receives equal reward. 4. (d) Order Preserving If $\phi_{i}>\phi_{j}$, then $x_{i}>x_{j}$; if player $i$ has a lower contribution than player $j$, then player $i$’s reward is lower than that of player $j$. The rest of this paper investigates whether an outcome satisfying all the above properties (regarded as optimal) exists and other suboptimal outcomes. ### 5.1 Assumptions of modified cooperative game Before we formalise the definition of an optimal outcome, we make the following basic assumptions regarding our modified game: 1. 1. We assume that characteristic function $v(.)$ used to value predictive models is monotonic. 
That is, if $C\subseteq C^{\prime}$, then $v(C)\leq v(C^{\prime})$; this assumption implies that the value (as valued by the characteristic function) of the model generated by the largest group of players $N$ is also the largest. This assumption implies that larger coalition generates more value than smaller ones. 2. 2. We use the shapley value, as defined in Section 3.1.3, as the relative contribution measure $\mathcal{C}$ to gauge a player’s contribution in the data sharing process. As mentioned, the shapley value has many desirable properties. With these assumptions, we introduce a general algorithm to determine the reward allocated to each player in a modified cooperative game. This algorithm will be used by a central agency who receives the data from all participating players. Input: Datasets of $n$ players $D_{1},D_{2},\dots,D_{n}$, characteristic function $v\mathrel{\mathop{\mathchar 58\relax}}2^{n}\mapsto\mathcal{R}$ Output: rewards $x_{1},x_{2},\dots,x_{n}$ 1$\phi\leftarrow$ shapley($v(.),D_{1},D_{2},\dots,D_{n}$); 2if _$getOptimalOutcome(\phi,v)\neq None$_ then $x_{1},x_{2},\dots,x_{n}\leftarrow getOptimalOutcome(\phi,v)$; return $x_{1},x_{2},\dots,x_{n}$ 3else $x_{1},x_{2},\dots,x_{n}\leftarrow getSuboptimalOutcome(\phi,v)$; return $x_{1},x_{2},\dots,x_{n}$ Algorithm 1 Data-sharing and reward division In the following sections, we analyse how functions $getOptimalOutcome$ and $getSuboptimalOutcome$ derive optimal and suboptimal outcomes. The central agency can use these results to allocate rewards for the participating players. ## 6 Optimal outcome To contextualise an optimal outcome in our modified game to a description of a real-life data-sharing process, imagine $n$ data providers come together to create a predictive model from the pooled data. Then using a monotonic characteristic function $v(.)$ to measure the value of the predictive model, we are able to derive the relative contributions of the players with the Shapley value. Subsequently, an optimal outcomes corresponds to a reward payoff for the players such that they have no incentives to break away from the grand coalition and the reward also commensurates each player’s relative contribution. In this section, we investigate how we can find this optimal outcome efficiently. We consider an outcome to a modified game optimal if they satisfy all properties in the Section 5. Hence, the task of finding an optimal outcome becomes one of finding the feasible region of the division of rewards to the players $<x_{1},x_{2},\dots,x_{n}>$ such that all desirable properties are satisfied. However, this is computationally inefficient because the definition of stability alone implies that we need to check $2^{n}$ constraints. Fortunately, we show, in the following proposition, that the set of stable solution payoffs can be found by checking only $n$ constraints, given that the characteristic function $v(.)$ is monotonic. ###### Proposition 5. Stable Solution Set Given a monotonic valuation function $v(.)$, let $<x_{1},x_{2},...,x_{n}>$ be a payoff vector to $n$ agents, arranged in ascending order. Then this payoff vector is stable if and only if $\forall i\in\\{1,2,\dots,n\\}$, $x_{i}\geq v(\bigcup_{j\leq i}j)$ (4) ###### Proof. if $\forall i,x_{i}\geq v(\bigcup_{j\leq i}j)$, then for any sub coalition $C^{{}^{\prime}}$ containing a subset of agents, the agent in $C^{{}^{\prime}}$ with the largest index (we refer to this index as $k$) would be such that $x_{k}\geq v(\bigcup_{j\leq k}j)$. 
Since $k$ is the largest index in $C^{,}$, it follows that $C^{{}^{\prime}}\subseteq\bigcup_{j\leq k}j$ and because of our assumption that $v(.)$ is monotonic, we have $x_{k}\geq v(\bigcup_{j\leq k}j)\geq v(C^{{}^{\prime}})$. Thus by definition of stability, the payoff vector $<x_{1},...,x_{n}>$ is stable. Conversely, if the payoff vector is stable, then by definition, for all index $k$ and for every subcoalition $\bigcup_{j\leq k}j$, there exists at least one agent with payoff larger than $v(\bigcup_{j\leq k}j)$. Since, $x_{k}$ is the highest payoff received in this subcoalition (because we rearranged the payoff index in ascending order), we have $x_{k}\geq v(\bigcup_{j\leq k}j)$ for all $k$. ∎ This theorem implies that to ensure players are rewarded such that they have no incentive to break off and form smaller private coalitions, their reward must be larger than the value of model created by the union of all players with less or equal contribution as him. This is a simple definition but also an extremely important one because it allows us to define the space of rewards such that no players are incentivised to break off from the grand coalition. Figure 3: Visualisation of Proposition 5. Lining up the players in ascending contribution, the payoff $x_{i}$ to each player must be larger than the value created by the union of previous players to ensure stability. With the above proposition on finding the stable solution set, it is then natural for us refine this stable set with properties related to fairness to define an optimal outcome: ###### Theorem 6. Optimal outcome Given a monotonic valuation function $v(.)$ with the value created by grand coalition equals $v(N)$, let $<x_{1},x_{2},...,x_{n}>$ be a solution payoff to $n$ agents, arranged in ascending order and $<\phi_{1},\phi_{2},...,\phi_{n}>$ be the relative contribution of the agents of the same index. Then this solution payoff is stable and proportional if and only if $\forall i\in\\{1,2,\dots,n\\}$, $x_{i}=\phi_{i}\frac{v(N)}{\phi_{n}}\geq v(\bigcup_{j\leq i}j)$ (5) An outcome with such a solution payoff is then said to be optimal. ###### Proof. In a stable solution, the player with the highest index receives a payoff $x_{n}\geq v(\bigcup_{j\leq n}j)=v(N)$ by definition of stability. This implies that the agent receiving the highest payoff must receive $v(N)$. By observation, for the solution to be proportional, this agent also has the highest contribution $\phi_{n}$ and thus every other agent $i$ has its contribution $\phi_{i}$ scaled by a factor of $\frac{v(N)}{\phi_{n}}$. The inequality follows naturally in order to abide to the condition of stability. Now, since the shapley value is used as the relative contribution measure, the proportional and stable solution payoff inherits other fairness properties: Null Player, Symmetry, Order Preserving. As such, an outcome with such a solution payoff satisfies all properties in Section 5 and is optimal. ∎ Theorem 6 gives us an optimal outcome of a multi-agent data sharing process formulated as a modified cooperative game. Intuitively, this theorem tells us that an optimal reward allocation to each player is such that it is proportional to the contribution measure and also large enough to prevent them from breaking away from the grand coalition. The central agency mediating the process could use the theorem to check if an optimal outcome is possible and if so, use it to allocate rewards to the participating players appropriately. 
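A possible implementation of this check, in the spirit of the $getOptimalOutcome$ step in Algorithm 1, is sketched below; the function and variable names are ours. Players are sorted by ascending Shapley value, the vector is scaled so that the top contributor receives $v(N)$, and the $n$ constraints of Proposition 5 are then verified:

```python
# Sketch of the optimality check in Proposition 5 / Theorem 6 (names are ours).
# Returns the stable-and-proportional payoff if it exists, else None.
from itertools import combinations

players = ["A", "B", "C"]
v = {frozenset(c): len(c)                      # toy monotonic valuation of Eq. (3)
     for r in range(len(players) + 1)
     for c in combinations(players, r)}
phi = {"A": 1.0, "B": 1.0, "C": 1.0}           # Shapley values of this symmetric game

def get_optimal_outcome(players, phi, v):
    order = sorted(players, key=lambda p: phi[p])       # ascending contribution
    grand = frozenset(players)
    scale = v[grand] / phi[order[-1]]                   # top contributor receives v(N)
    x = {p: scale * phi[p] for p in order}
    prefix = set()
    for p in order:                                     # only n constraints (Proposition 5)
        prefix.add(p)
        if x[p] < v[frozenset(prefix)] - 1e-9:
            return None                                 # no stable and proportional payoff
    return x

print(get_optimal_outcome(players, phi, v))             # {'A': 3.0, 'B': 3.0, 'C': 3.0}
```

For the symmetric game of Eq. (3) every player ends up with the full value $v(N)=3$, which is precisely the non-rivalrous behaviour the modified game is designed to allow; if any constraint failed, the function would return None and the agency would have to settle for one of the suboptimal outcomes discussed next.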
It is also not difficult to see that the theorem implies that an optimal outcome, if it exists, is unique. ## 7 Suboptimal Outcomes The optimal outcome for a group of $n$ players may not exist given a particular characteristic function and contribution measure indicated by the Shapley value. That is, there may be no outcomes satisfying Theorem 6. This implies that in a data sharing process, the central mediator may not find an allocation of the final reward such that all players regard it as fair and stable at the same time. Nevertheless, there are still suboptimal outcomes which are of interest to us; in particular, since having a stable and proportional solution payoff is necessary and sufficient for an optimal outcome, we can restrict our analysis on suboptimal outcomes with the following solution payoffs: * • a solution payoff which is stable but not proportional * • a solution payoff which is proportional but not stable Analysis of these two classes of suboptimal outcomes gives us an idea on how to derive a suboptimal outcome which is amenable to the players when an optimal one is not achievable. As a mediator promoting the data sharing process between players, a government agency can derive a suboptimal outcome with some but not all desirable properties and perhaps still convince the players to continue collaborating in the data-sharing process, especially if the suboptimal outcome does not violate stability or proportionality too much. ### 7.1 Stable but not Proportional outcome This outcome implies that each player will receive a reward that ensures he is not incentivised by larger rewards elsewhere to leave the grand coalition. However, some players may receive a reward that is not proportional to his relative contribution and if players can tolerate some disproportionality, then this outcome is still acceptable. Let $<\phi_{1},...,\phi_{n}>$ be the shapley value of the agents arranged in ascending order. Proportionality holds between the payoffs $x_{i},x_{j}$ of any two players $i$ and $j$ if $\frac{\phi_{i}}{\phi_{j}}=\frac{x_{i}}{x_{j}}$. Hence, if proportionality cannot be achieved in a stable outcome, we need a metric to measure the degree of proportionality violation in a given payoff vector and select the stable outcome which violates this metric the least. The concept of proportionality deviation is widely studied in proportional representation systems such as allocation of parliament seats[2]. For example, we can measure the sum of pairwise absolute proportionality violations between the reward received by all players: $Deviation_{sum}=\sum_{i,j\in N}\mathinner{\\!\left\lVert\frac{\phi_{i}}{\phi_{j}}-\frac{x_{i}}{x_{j}}\right\rVert}_{p}$ (6) This certainly is not the only available deviation measure. One alternative can be the max individual deviation from proportionality. 
The max-deviation measure is useful if we assume agents are able to tolerate some deviation from proportionality (a certain level of unfairness): $Deviation_{max}=max\left(\mathinner{\\!\left\lVert\frac{\phi_{i}}{\phi_{j}}-\frac{x_{i}}{x_{j}}\right\rVert}_{p}\right)_{i,j}$ (7) Hence, finding a stable solution with the lowest deviation measure can be formulated as the following optimisation problem: $\displaystyle\\!\min_{x_{1},...,x_{n}}$ $\displaystyle DeviationMeasure$ (8a) subject to $\displaystyle x_{i}\geq v\left(\bigcup_{j\leq i}j\right),$ (8b) $\displaystyle\text{additional constraints}.$ (8c) Objective function (8a) represents a deviation measure such as (6) or (7); constraint (8b) represents the solution payoff’s stability constraint, since we need the outcome to be stable; constraint (8c) represents additional constraints which may be required if one wishes to enforce certain additional Fairness properties (Symmetry, Order Preserving, Null Player), since without them a non-proportional solution may not necessarily guarantee these properties. We acknowledge that finding an appropriate deviation measure is a non-trivial task as there are drawbacks to different deviation measures ([2] mentions 19 different kinds of proportionality deviation indices!); for example, measure (6) may assign a large disproportional reward to a single player while measure (7) yields non-unique solutions. As such, the central agency in the data sharing process needs to design an appropriate deviation measure in order to arrive at a reasonable non-proportional outcome. However, empirical results at the end of the paper show that $Deviation_{sum}$ gives reasonable outcomes. ### 7.2 Proportional but not Stable outcome A proportional but not stable outcome implies that while the reward given to each player is fair, some players may be incentivised to break away from the grand coalition for better rewards. To tackle this realistically, the central agency must first find out how ”far away” the proportional solution is from each player’s lower bound of stability. ###### Definition 7. $\epsilon$-Stability Let $<x_{1},...,x_{n}>$ be the solution payoff to the agents, then this solution payoff is $\epsilon$-stable if for all $C\subseteq N$, there exists $i\in C$ such that $x_{i}\geq(v(C)-\epsilon)$ Notice Definition 7 is identical to the stability definition apart from the deduction of $\epsilon$ in the lower bound. If $\epsilon>0$, then we can understand $\epsilon$ as the penalty each player pays for deviating from the grand coalition to form a sub-coalition. As such, when a solution is $\epsilon$-stable, if players have to pay a penalty larger than $\epsilon$ to break off from the grand coalition, then these players will remain in the grand coalition because they will be unable to reap higher rewards elsewhere after paying the penalty to leave. Instead of being viewed as a penalty, $\epsilon$ can also be viewed as additional compensation to incentivise players to stay in the grand coalition. The central agency could try to give additional benefits equivalent to $\epsilon$ to ensure players remain in the grand coalition (of course, this requires agencies and players to discuss what denomination these benefits come in). The following corollary allows one to find this $\epsilon$. ###### Corollary 7.1. Let $<x_{1},x_{2},...,x_{n}>$ be a proportional but not stable payoff vector. 
Define $d_{i}=\begin{cases}0&\text{if $x_{i}\geq v(\bigcup_{j\leq i}j)$}\\\ v(\bigcup_{j\leq i}j)-x_{i}&\text{otherwise}\\\ \end{cases}$ Then the payoff vector is $\epsilon$-stable if $\epsilon\geq\max\limits_{i}(d_{i})$. ###### Proof. Given any solution payoff $<x_{1},...,x_{n}>$, note that for any subcoalition $C$ there exists $i$ such that $C\subseteq\bigcup_{j\leq i}j$, and thus $v(C)\leq v(\bigcup_{j\leq i}j)$ by the monotonicity of $v(.)$. It follows directly that for every agent $k$ and every subcoalition $C$ with $k$ being the highest indexed agent belonging to this subcoalition, $\epsilon\geq\max\limits_{i}(d_{i})\geq v(C)-x_{k}\xrightarrow{}x_{k}\geq v(C)-\epsilon$. ∎ ## 8 Sensitivity Analysis Given that an optimal outcome is decided for a modified game where players have received a learnt model of a certain value, there can be many elements that influence the process retrospectively. Sensitivity analysis allows us to investigate how sensitive the optimal outcome is towards such changes. In particular, we wish to investigate if an outcome will remain optimal under such changes. Even if such perturbations have not occurred, the central agency can perform sensitivity analysis pre-emptively to investigate how robust the current optimal outcome is towards possible marginal changes in the future. Figure 4: Sensitivity analysis of two possible scenarios For example, one may realise retrospectively that a segment of a player’s data is unusable due to privacy concerns and needs to be removed from the model created (leading to a marginal decrease in the contribution of that player); on the other hand, a player could have his relative contribution increased marginally by a constant amount because he was able to offer some non-data related help in the data sharing process (e.g. offering GPU processing power). In addition, new players may enter the collaboration process. While we can set up the entire game again with an additional player, the process of calculating the Shapley value of all players again using $v(.)$ requires us to pool the data into $2^{n}$ subsets, making the process time consuming. As such, if an expert can estimate the marginal contribution of the new player, we show in the following section that we can estimate if an optimal outcome is still possible without complex recalculations. ### 8.1 Inclusion of a new agent First, we estimate the new agent’s contribution relative to existing agents using some expert opinion (making an educated guess based on some features surrounding the data set or the data provider). Second, we estimate how much each coalition’s value changes due to the inclusion of the new agent. For the first estimate, we let the index of the new agent be $new$ and introduce $\phi_{new}$ into the existing contribution vector (whilst preserving the ascending order of the vector): $<\phi_{1},\phi_{2},...,\phi_{n}>$ becomes $<\phi_{1},\phi_{2},...,\phi_{new},...,\phi_{n}>$. For the second estimate, it is computationally expensive to derive the value of each subcoalition containing new agent $new$; for ease of computation, we assume that for every subcoalition $C$ in the original group of agents, $v^{\prime}(C\cup new)\approx v(C)+\phi_{new}$. This implies that the marginal increase in the value of a subcoalition due to agent $new$ is the same for every subcoalition, and leads to an overall relative contribution of $\phi_{new}$, as calculated by the Shapley value formula. Let the new grand coalition, $N^{\prime}=(N\cup new)$, contain $n+1$ players. 
Hence, $v^{\prime}(N^{\prime})=v^{\prime}(N\cup new)\approx v(N)+\phi_{new}$. Also assume the current outcome containing the payoff vector $<x_{1},...,x_{n}>$ is stable, proportional and thus optimal. We use $w$ to indicate the indices of agents with lower contribution than $new$ and $s$ to indicate the indices of agents with higher contribution than $new$. ###### Theorem 8. A stable and proportional solution, and thus an optimal outcome, is still possible with a new player with marginal contribution $\phi_{new}$ to all existing subcoalitions if and only if $\phi_{new}$ satisfies the following conditions: $\displaystyle\begin{split}&\forall s,\hskip 5.69054pt\phi_{s}\frac{v(N)+\phi_{new}}{\phi_{max}}\geq v\left(\bigcup_{j\leq s}j\middle\backslash new\right)+\phi_{new},\\\ &\phi_{new}\frac{v(N)+\phi_{new}}{\phi_{max}}\geq v(\bigcup_{j<new}j)+\phi_{new}\end{split}$ (9) ###### Proof. Theorem 8 stems directly from the optimality conditions in Theorem 6. For the first inequality in (9), in a game with a new player with $\phi_{new}$, an optimal outcome is possible if and only if $\begin{split}\forall s,&\hskip 5.69054ptx_{s}=\phi_{s}\frac{v^{\prime}(N^{\prime})}{\phi_{max}}\geq v^{\prime}(\bigcup_{j\leq s}j)\hskip 150.79959pt\text{(Theorem 6)}\\\ &\iff\phi_{s}\frac{v(N)+\phi_{new}}{\phi_{max}}\geq v^{\prime}(\bigcup_{j\leq s}j)\hskip 113.81102pt\text{(Estimation of $v^{\prime}$ on the left)}\\\ &\iff\phi_{s}\frac{v(N)+\phi_{new}}{\phi_{max}}\geq v\left(\bigcup_{j\leq s}j\middle\backslash new\right)+\phi_{new}\hskip 28.45274pt\text{(Estimation of $v^{\prime}$ on the right)}\end{split}$ (10) The proof is identical for the second inequality in (9). ∎ Notice that all terms except $\phi_{new}$ in the above theorem are already computed in the original game. Hence, all we have to do is check if $\phi_{new}$ satisfies the inequalities. Furthermore, it is interesting to note that if an outcome is already optimal, the entry of a new player does not influence the optimality conditions on existing players with lower relative contributions than the new player. ### 8.2 Perturbation of contribution of one particular player Next, we analyse the case where the value of every subcoalition containing a particular player $i$ is changed by a constant amount. As mentioned, this may occur when the central agency retrospectively needs to perturb the value of one player’s data due to privacy concerns or if we retrospectively realise that a segment of a player’s data is corrupted. Again, we make an estimation of the new value generated by all coalitions containing player $i$ in the following manner: let $v^{\prime}(C)=v(C)+\delta$ for every coalition $C$ containing agent $i$. Then this implies a linear shift in player $i$’s Shapley value: $\phi_{i}^{new}=\phi_{i}+\delta$ (by definition of the Shapley value). Also assume that $\delta$ is such that the order of relative contribution of agents does not change and the current outcome containing the payoff vector $<x_{1},...,x_{n}>$ is already optimal. For a particular agent of index $k$, let $d_{k}=\frac{\phi_{k}}{\phi_{max}}v(N)-v(\bigcup_{j\leq k}j)\geq 0$, which always holds because the current solution is optimal. Again, we use $w$ to indicate the indices of agents with lower contribution than $i$ and $s$ to indicate the indices of agents with higher contribution than $i$. ###### Theorem 9. 
A stable and proportional solution, and thus an optimal outcome, is still possible under $\phi_{i}^{new}=\phi_{i}+\delta$ for player $i$ if and only if $\delta$ satisfies the following conditions: $\begin{split}&\forall w,\hskip 5.69054pt\delta\geq-\frac{\phi_{max}}{\phi_{w}}d_{w},\\\ &\forall s,\hskip 5.69054pt\delta\leq\frac{d_{s}\phi_{max}}{\phi_{max}-\phi_{s}},\\\ &\delta^{2}+(\phi_{i}+v(N)-\phi_{max})\delta+d_{i}\phi_{max}\geq 0,\end{split}$ (11) ###### Proof. We prove all three inequalities in Theorem 9 in order. An optimal outcome is possible under the new conditions, where player $i$’s contribution is perturbed by $\delta$, if for the first inequality: $\begin{split}\forall w,&\hskip 5.69054ptx_{w}=\phi_{w}\frac{v(N^{\prime})}{\phi_{max}}\geq v^{\prime}(\bigcup_{j\leq w}j)\hskip 150.79959pt\text{(Theorem 6)}\\\ &\iff\phi_{w}\frac{v(N)+\delta}{\phi_{max}}\geq v(\bigcup_{j\leq w}j)\hskip 113.81102pt\text{(Estimation of $v^{\prime}$)}\\\ &\iff\phi_{w}\frac{v(N)}{\phi_{max}}-v(\bigcup_{j\leq w}j)\geq-\frac{\phi_{w}}{\phi_{max}}\delta\hskip 28.45274pt\\\ &\iff d_{w}\geq-\frac{\phi_{w}}{\phi_{max}}\delta\hskip 147.95433pt\text{(By definition of $d_{w}$)}\\\ &\iff\delta\geq-\frac{\phi_{max}}{\phi_{w}}d_{w}\end{split}$ (12) For the second inequality: $\begin{split}\forall s,&\hskip 5.69054ptx_{s}=\phi_{s}\frac{v(N^{\prime})}{\phi_{max}}\geq v^{\prime}(\bigcup_{j\leq s}j)\hskip 179.25235pt\text{(Theorem 6)}\\\ &\iff\phi_{s}\frac{v(N)+\delta}{\phi_{max}}\geq v(\bigcup_{j\leq s}j)+\delta\hskip 113.81102pt\text{(Estimation of $v^{\prime}$)}\\\ &\iff\phi_{s}\frac{v(N)}{\phi_{max}}-v(\bigcup_{j\leq s}j)\geq-\frac{\phi_{s}}{\phi_{max}}\delta+\delta\\\ &\iff d_{s}\geq(-\frac{\phi_{s}}{\phi_{max}}+1)\delta\hskip 139.4185pt\text{(By definition of $d_{s}$)}\\\ &\iff\delta\leq\frac{\phi_{max}}{\phi_{max}-\phi_{s}}d_{s}\end{split}$ (13) Lastly, for the third inequality: $\begin{split}&x_{i}=(\phi_{i}+\delta)\frac{v(N^{\prime})}{\phi_{max}}\geq v^{\prime}(\bigcup_{j\leq i}j)\hskip 179.25235pt\text{(Theorem 6)}\\\ &\iff(\phi_{i}+\delta)\frac{v(N)+\delta}{\phi_{max}}\geq v(\bigcup_{j\leq i}j)+\delta\hskip 108.12047pt\text{(Estimation of $v^{\prime}$)}\\\ &\iff\phi_{i}\frac{v(N)}{\phi_{max}}-v(\bigcup_{j\leq i}j)+\frac{\phi_{i}\delta+\delta v(N)+\delta^{2}}{\phi_{max}}\geq\delta\\\ &\iff d_{i}+\frac{\phi_{i}\delta+\delta v(N)+\delta^{2}}{\phi_{max}}\geq\delta\hskip 122.34685pt\text{(By definition of $d_{i}$)}\\\ &\iff d_{i}\phi_{max}+\phi_{i}\delta+\delta v(N)+\delta^{2}\geq\delta\phi_{max}\\\ &\iff\delta^{2}+(\phi_{i}+v(N)-\phi_{max})\delta+d_{i}\phi_{max}\geq 0\end{split}$ (14) ∎ To make sense of the inequalities presented in Theorem 9, we first notice that in the trivial case where $\delta=0$, each inequality holds because $d_{k}\geq 0$ for any $k$. Furthermore, when $\delta>0$ (when the contribution of a player is increased), the first inequality is true because the right expression in the last line of (12) is always negative; the third inequality also holds true because the quadratic inequality on the last line of (14) has non-negative coefficients. This implies that we only have to check whether the second inequality holds true when $\delta>0$. Lastly, when $\delta<0$ (when the contribution of a player is decreased), the second inequality holds true; that is, we do not have to worry about infeasibility of payoffs to players with higher contribution than the perturbed player. 
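In practice, all three conditions can be checked mechanically from quantities already computed for the original game. The sketch below (our own function, with 0-based indexing; divisions by zero are skipped for a null player and for the top-contributing player, for whom the corresponding condition is vacuous) encodes the conditions of Theorem 9.

```python
# Hedged sketch: check whether perturbing player i's contribution by delta preserves
# the optimal outcome, following the three conditions of Theorem 9.
def perturbation_keeps_optimality(phi, prefix_values, i, delta):
    """phi: Shapley values in ascending order; prefix_values[k] = v({1,...,k+1});
    i: 0-based index of the perturbed player; delta: constant change in phi_i."""
    v_N, phi_max = prefix_values[-1], phi[-1]
    d = [phi[k] / phi_max * v_N - prefix_values[k] for k in range(len(phi))]

    for w in range(i):                        # players with lower contribution than i
        if phi[w] > 0 and delta < -phi_max / phi[w] * d[w]:
            return False
    for s in range(i + 1, len(phi)):          # players with higher contribution than i
        if phi_max - phi[s] > 0 and delta > d[s] * phi_max / (phi_max - phi[s]):
            return False
    # Quadratic condition for the perturbed player itself.
    return delta**2 + (phi[i] + v_N - phi_max) * delta + d[i] * phi_max >= 0
```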
As a sanity check, we also notice that there exists some $\delta<0$ such that the first inequality holds true in the last line of (12), since the right term is negative. Lastly, it is easily verifiable that the third inequality, the quadratic inequality $\delta^{2}+(\phi_{i}+v(N)-\phi_{max})\delta+d_{i}\phi_{max}\geq 0$, holds true for some values of $\delta<0$ because the coefficients are all non-negative. ## 9 Experiments Let $D_{i}$ be the dataset held by player $i$. We emulate a multi-party machine learning process through the use of one-dimensional synthetic datasets $D_{1},D_{2},\dots,D_{7}$ held by 7 data providers and demonstrate the properties and outcomes mentioned in our paper. In particular, we demonstrate the following: 1. 1. Desirable properties of the Shapley value in measuring the contribution of each player’s dataset based on two different characteristic functions - Fisher Information and Mutual Information. 2. 2. Optimality of outcomes induced by these characteristic functions (i.e. is there a fair and stable way to reward the players?) 3. 3. Deriving a suboptimal outcome when an optimal outcome is not achievable. 4. 4. Will an optimal outcome still be achievable if we perturb the contribution of one player marginally? In addition to demonstrating the above, we also offer some intuitive and useful insights into various observations made in the experiments with regard to the data sharing process. We also explain why we select Fisher Information and Mutual Information as characteristic functions over other alternatives. ### 9.1 Introducing two different characteristic functions As mentioned previously, the characteristic function $v(.)$ used to measure the value of the predictive model created should be reasonable and context dependent. One immediate idea with regards to evaluating models is to evaluate their performance with a test data set. While this seems promising in evaluating the power of each data provider’s data set, it is not always achievable in reality because it is non-trivial to gather a test data set that is truly representative of the inputs that future predictions will be based on. Furthermore, the central agency may not have access to such data. Instead, it may be more favourable to rely on certain statistical measures to evaluate the level of uncertainty associated with the learnt predictive models. We introduce two different characteristic functions, Fisher Information and Mutual Information, which are both deeply rooted in probability. First, we give a brief introduction to these functions and discuss their appropriateness in valuing predictive models. #### 9.1.1 Fisher Information Fisher Information [6] measures the amount of information that observed data carry about the parameter $\theta$ used to model the data. In particular, the Fisher Information of a set of data points gives us an idea about the variance of an unbiased parameter estimate generated from these data points, directly allowing us to evaluate the uncertainty surrounding the predictive model (which comprises these parameters). Traditionally, statisticians have used Fisher Information to design experiments to ensure estimated parameters lie within a small confidence interval. ###### Definition 10. Let $f(X;\theta)$ be the probability density function for random variable(s) $X$ conditional on a true parameter $\theta$. 
Then provided $f(X;\theta)$ is twice differentiable and under some regularity conditions, the Fisher Information is defined as $\mathcal{I}(\theta;X)=-\operatorname{E}\left[\left.{\frac{\partial^{2}}{\partial\theta^{2}}}\log f(X;\theta)\right|\theta\right]$ The more interesting result follows from the derivation of the Cramer Rao bound after $\mathcal{I}(\theta)$ is derived, which lower bounds the precision with which we can estimate the parameter $\theta$. ###### Definition 11. Let $\hat{\theta}$ be any unbiased estimator of $\theta$ derived from observed data $X$. Then the Cramer Rao bound states that $Var(\hat{\theta})\geq\frac{1}{\mathcal{I}(\hat{\theta};X)}$, where the equality holds when the estimated parameter is efficient. This bound implies that when estimating parameters, maximising the Fisher Information is equivalent to minimising the smallest attainable variance of an estimated parameter. In fact, in many estimators, the equality holds. In the context of a multi-party machine learning process, we can derive the Fisher Information of a set of data to obtain the smallest attainable variance of the estimated parameter; a higher Fisher Information for a group of players’ data implies possibly less uncertainty. This allows us to define, for any coalition $C$ of players with combined dataset $D_{C}$, $v_{1}(C)=\mathcal{I}(\theta;D_{C})$. In Linear Regression of the form $y=x^{T}\theta+\epsilon$, where $\epsilon\sim\mathcal{N}(0,\sigma^{2}I)$, for a dataset $D_{C}$ containing $n$ observed input and output values $(X_{i};Y_{i})$ with noise $\sigma_{i}$: $v_{1}(C)=\mathcal{I}(\theta;D_{C})=\sum_{i=1}^{n}\frac{E(X_{i}^{T}X_{i})}{\sigma_{i}}$ (15) We notice that $v_{1}(D_{1})+v_{1}(D_{2})=v_{1}(D_{1}\cup D_{2})$ for any two datasets $D_{1}$ and $D_{2}$. This implies that $v_{1}(.)$ is an additive and monotonic characteristic function. In fact, the variance of the estimated parameter equals the reciprocal of $v_{1}(.)$ in Linear Regression. #### 9.1.2 Mutual Information Mutual Information measures the reduction in uncertainty regarding model parameters when one has access to a set of data $D$. Even though it shares a similar idea with Fisher Information in that both gauge the value of data based on the uncertainty of parameters, Mutual Information accounts for the prior uncertainty regarding the parameters. One may prefer to use Mutual Information if one has some idea what the ”inherent uncertainty” associated with the model parameters is. ###### Definition 12. The Mutual Information between two continuous variables $(X,Y)$ is defined as $M.I(X,Y)={\displaystyle\int_{\mathcal{Y}}\int_{\mathcal{X}}{p_{(X,Y)}(x,y)\log{\left({\frac{p_{(X,Y)}(x,y)}{p_{X}(x)\,p_{Y}(y)}}\right)}}\;dx\,dy,}$ In a Bayesian Linear Regression setting, let a dataset $D_{C}$ contain $n$ observed input and output values $(X_{i},Y_{i})$ (represented by $X\in\mathcal{R}^{n\times p}$, where $p$ is the number of parameters) with noise $\delta_{i}$ associated with each observation $i$. Assume $\delta\in\mathcal{R}^{n\times n}$ is a diagonal matrix with diagonal entries $\delta_{i}$ and $\Sigma_{prior}$ is the prior covariance of the parameters. Then we have: $v_{2}(C)=M.I(\theta,D_{C})=0.5\text{log}|\Sigma_{prior}||\Sigma_{prior}^{-1}+X^{T}\delta^{-1}X|$ (16) We notice that $v_{2}(.)$, unlike $v_{1}(.)$, is not additive but still monotonic. ### 9.2 Experiment results We first synthesise one-dimensional data held by seven different players, shown in Figure 5. 
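Before turning to the results, the sketch below illustrates the kind of computation involved: it synthesises small one-dimensional datasets, evaluates the two characteristic functions of equations (15) and (16) for a coalition, and obtains exact Shapley values by enumerating all coalitions. The distributions, noise levels and player count are our own illustrative placeholders and do not reproduce the exact configuration of Figure 5.

```python
# Hedged sketch: synthetic one-dimensional player data, the characteristic functions
# v1 (Fisher Information, eq. 15) and v2 (Mutual Information, eq. 16) for a scalar
# parameter, and a brute-force Shapley value over all coalitions.
from itertools import combinations
from math import factorial, log
import numpy as np

rng = np.random.default_rng(0)
players = [rng.normal(loc, 1.0, size=20) for loc in (-3, -1, 0, 1, 3)]  # inputs X per player
noise = [1.0, 1.0, 0.5, 0.5, 2.0]                                       # noise per player


def v1(coalition):
    """Fisher Information of the pooled data (scalar-theta version of eq. 15)."""
    return sum(np.sum(players[i] ** 2) / noise[i] for i in coalition)


def v2(coalition, prior_var=1.0):
    """Mutual Information for Bayesian linear regression (scalar-theta version of eq. 16)."""
    precision = sum(np.sum(players[i] ** 2) / noise[i] for i in coalition)
    return 0.5 * log(prior_var * (1.0 / prior_var + precision))


def shapley(v, n):
    """Exact Shapley value by enumerating every coalition that excludes player i."""
    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for size in range(n):
            for C in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                total += weight * (v(set(C) | {i}) - v(set(C)))
        phi.append(total)
    return phi


print(shapley(v1, len(players)))
print(shapley(v2, len(players)))
```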
The legend also shows the distribution of the inputs $X$ of the data held by each player, along with the output noise $\delta$ associated with each player. Notice that player 1 holds a single datapoint at the origin and players 3 and 4 hold datapoints in similar input spaces. Figure 5: Data held by different players #### 9.2.1 Shapley value We then derive the Shapley value $\phi_{i}$, our contribution metric, for each player $i$, based on $v_{1}(.)$ and $v_{2}(.)$ in Table 1. Because $v_{1}(.)$ and $v_{2}(.)$ hold different interpretations of the value of data, we observe that even though both valuations preserve the players’ order of contribution, $v_{1}$ gives a much larger disparity between the players’ contributions than $v_{2}$. This can be inferred directly from the functions’ definitions. As observed from Equation (15), $v_{1}(.)$ regards 3 datapoints (sampled from similar locations and with equal noise level) as having 3 times the value of 1 datapoint; this leads to large ratios between the players’ contributions. On the other hand, $v_{2}(.)$’s formulation in Equation (16) gives diminishing returns when more data is included; this leads to smaller ratios between the players’ contributions. | $\phi_{1}$ | $\phi_{2}$ | $\phi_{3}$ | $\phi_{4}$ | $\phi_{5}$ | $\phi_{6}$ | $\phi_{7}$ ---|---|---|---|---|---|---|--- $v_{1}$ | 0 | 162 | 500 | 501 | 1007 | 3003 | 3868 $v_{2}$ | 0.002 | 1.404 | 1.613 | 1.617 | 1.884 | 2.314 | 2.327 Table 1: Shapley value based on two different valuation functions $v_{1}$ and $v_{2}$ Despite the differences in interpretation and contribution values obtained from both characteristic functions, we notice some similarities when they derive the Shapley value $\phi$: 1. 1. Player 1 is a null player with a single data point $(0,0)$, which is generally useless in estimating the model parameter. Hence, both characteristic functions calculate $\phi_{1}$ as close to zero. 2. 2. Players 3 and 4 hold almost identical data (sampled from the same distribution) and thus $\phi_{3}$ and $\phi_{4}$ are almost identical for both characteristic functions. 3. 3. The order of the players’ contributions is the same for both characteristic functions. As such, despite the differences between $v_{1}(.)$ and $v_{2}(.)$, they both offer some intuitive and desirable properties when deriving the contribution measure (these properties generally relate to how fairly we evaluate players’ contributions). When generating an appropriate level of rewards to the players based on this contribution measure later, we will see that such desirable properties will be passed on to the outcomes. #### 9.2.2 Characteristic functions affect optimality of outcomes Definition 4 and Proposition 5 define the space of proportional and stable solutions respectively. Using these definitions, we show the lower bound for stability for each player (recall that stability is defined by lower bounds on the reward given to each player in Proposition 5) and the proportional solution for $v_{1}(.)$ and $v_{2}(.)$ respectively in Figure 6. Moreover, Theorem 6 directly allows us to check if an optimal outcome is possible based on whether the proportional solution is stable; hence, Figure 6 also showcases whether the outcome is optimal for both characteristic functions. The choice of an appropriate characteristic function should not be motivated by whether said function is able to achieve an optimal outcome. The outcome, whether optimal or not, is simply the result of a function choice. 
The choice of a particular function to value a model should still depend on its theoretical or practical appropriateness. Figure 6: Stability bound and Proportional solution for $v_{1}$ and $v_{2}$ We immediately notice the following differences between the solutions induced by the two different characteristic functions: 1. 1. For $v_{1}(.)$, we see that for each player, the proportional reward given (red bars) is larger than or equal to his own lower stability bound (blue dotted lines). Hence, by Theorem 6, the payoff vector indicated by the red bars is an optimal outcome; notice that players 3 and 4 enjoy an equal amount of reward because they contributed data of similar value, and player 1 receives no reward. 2. 2. For $v_{2}(.)$, we notice that the proportional solution payoff for players 2, 3, 4 and 5 is lower than the stability lower bound defined by Proposition 5. Hence, the proportional solution is not stable and an optimal outcome is not achievable. As such, if the data providers and central agency agree to use $v_{1}(.)$ as the characteristic function to evaluate the value of a model, then an optimal outcome can be achieved in the data sharing process. However, only suboptimal outcomes can be achieved with $v_{2}(.)$. To intuitively explain why $v_{2}(.)$ does not allow an optimal outcome, notice that the inequality in Theorem 6 (which checks if an optimal outcome is possible): $\phi_{i}\frac{v(N)}{\phi_{n}}\geq v(\bigcup_{j\leq i}j)$ most likely holds if the contribution of player $i$, $\phi_{i}$, is not much less than the combined value of the model created by players with contribution lower than or equal to that of player $i$. In other words, if a player’s data is generally useful (dictated by how much the player’s data marginally increases the value of different coalitions of players) to every coalition, then his relative contribution should not be too small. In the larger picture, we need this inequality to hold for every player, suggesting that an optimal outcome is possible only when the players’ data is generally equally useful to all different coalitions. Notice that $v_{2}(.)$, Mutual Information, in Bayesian Linear Regression is a metric that values the addition of data less when there is already a lot of data present; thus, the marginal addition of a player’s data to larger groups of players is generally not very useful, causing the inequality to be violated and the optimal outcome to be not achievable. On the other hand, Fisher Information values datapoints additively without diminishing contribution, making players’ data generally useful for every coalition and allowing the optimal outcome to be achievable. #### 9.2.3 Settling for a suboptimal outcome To further demonstrate the analysis on suboptimal outcomes in Section 7, we focus on the case where the central agency, upon finding that an optimal solution cannot be achieved, attempts to seek a suboptimal outcome which is still stable. This coincides with the outcome raised in Section 7.1, where players still receive a ”somewhat proportional” reward (defined by a deviation measure). Since $v_{2}(.)$ cannot achieve an optimal outcome, we demonstrate the suboptimal outcome based on $Deviation_{sum}$ minimisation (Equation 8a) in Figure 7. Figure 7: Suboptimal outcome (green line) based on $Deviation_{sum}$ minimisation From Figure 7, one notices that the suboptimal outcome (green line) is stable and preserves a certain level of proportionality between players. 
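The optimisation behind the green line is a constrained minimisation of the form (8a)–(8b). A rough sketch of how it could be set up is given below (our own code: the solver, starting point and small denominator guard are arbitrary choices, and the nonsmooth objective means a general-purpose solver is only illustrative here, not a guaranteed global optimiser).

```python
# Hedged sketch: stability-constrained minimisation of Deviation_sum (objective (6),
# constraints (8b)), using scipy's SLSQP as an illustrative solver.
import numpy as np
from itertools import permutations
from scipy.optimize import minimize


def suboptimal_stable_payoff(phi, prefix_values):
    phi = np.asarray(phi, dtype=float)
    lower = np.asarray(prefix_values, dtype=float)      # stability lower bounds (8b)

    def deviation_sum(x):                                # objective (6), p = 1 norm
        return sum(abs(phi[i] / phi[j] - x[i] / (x[j] + 1e-12))
                   for i, j in permutations(range(len(x)), 2) if phi[j] > 0)

    constraints = [{"type": "ineq", "fun": lambda x, k=k: x[k] - lower[k]}
                   for k in range(len(phi))]
    x0 = np.maximum(phi / phi[-1] * lower[-1], lower)    # proportional guess, lifted to feasibility
    result = minimize(deviation_sum, x0, constraints=constraints, method="SLSQP")
    return result.x
```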
For example, player 1, a null player, continues to receive zero reward, whereas players 3 and 4 still receive the same level of reward in the suboptimal outcome. However, the rewards given to the other players are no longer proportional to their relative contributions. It is easy to see that a suboptimal outcome which compromises proportionality in return for stability will always yield higher rewards for the players. This is because the proportional reward allocation is not stable and thus we have to increase the rewards given in order to achieve stability. #### 9.2.4 Perturbation of contribution of one particular player We would also like to demonstrate how Theorem 9 allows us to check if an optimal outcome is still achievable given that the contribution of one player (with index $i$) changes marginally. Recall that this implies the following: $\begin{split}&v^{\prime}(C)=v(C)+\delta\text{ for all coalition }C\text{ containing agent }i\\\ &\phi^{new}_{i}=\phi_{i}+\delta\end{split}$ (17) For the rest of the section, we assume that $\delta>0$, implying that the contribution of a data provider has marginally increased (due to the addition of better data or denoising). From Theorem 9, we only have to check if the following holds true for all players with higher contribution ($\phi$) than player $i$: $\delta\leq\frac{d_{s}\phi_{max}}{\phi_{max}-\phi_{s}}\hskip 5.69054pt\forall s\in\hskip 2.84526pt\\{s\mathrel{\mathop{\mathchar 58\relax}}\phi_{s}>\phi_{i}\\}$ (18) In the given example, assume that we would like to marginally change player 4’s contribution according to expression (17) based on characteristic function $v_{1}(.)$. Then, we simply check if the following three inequalities (for players 5, 6 and 7) hold: $\begin{split}&\delta\leq\frac{d_{5}\phi_{max}}{\phi_{max}-\phi_{5}}\\\ &\delta\leq\frac{d_{6}\phi_{max}}{\phi_{max}-\phi_{6}}\\\ &\delta\leq\frac{d_{7}\phi_{max}}{\phi_{max}-\phi_{7}}\end{split}$ (19) Omitting the arithmetic derivation, the three inequalities can be summarised into the single condition $0<\delta\leq 249.4$ for the given synthetic dataset we created and $v_{1}(.)$. Hence, the optimal outcome is still achievable if and only if the marginal contribution of player 4 is increased by not more than 249.4. We set $\delta$ to 200 and 400 and display the outcomes in Figure 8 below. Figure 8: Outcomes when player 4’s contribution is increased by $\delta=200$ and $\delta=400$ As expected, since $\delta=200$ satisfies the inequality condition, the outcome on the left is still optimal, whereas $\delta=400$ violates the inequality condition and the outcome on the right is no longer optimal (the red circle highlights that player 5’s proportional payoff is not stable, and hence an optimal outcome is not achievable). ### 9.3 Extensions on other machine learning models #### 9.3.1 Logistic Regression for classification problems Similar to Linear Regression, the Fisher Information for a Logistic Regression model is well defined and indicative of the variance of the estimated parameters as well. The derivation [16] is well known and we summarise the results below. Let $\theta=(\theta_{1},\dots,\theta_{p})^{T}\in\mathcal{R}^{p}$ be the model parameters. 
For the dataset $D=\\{(x_{t},y_{t})\\}$ with $n$ data points, the likelihood of a Logistic Regression model is as follows: $f(y|x,\theta)=P(y=+1|x,\theta)=\frac{1}{1+e^{-{\theta^{T}x}}}$ (20) Then the Fisher Information (in matrix form, since we have more than one parameter now) has entries: $\mathcal{I}(\theta)=X^{T}WX$ (21) where $X$ is the data set in matrix form and we have defined the $n\times n$ diagonal matrix: $W=\text{diag}\left(\frac{e^{\sum^{p}_{j=0}\theta_{j}x_{1j}}}{(1+e^{\sum^{p}_{j=0}\theta_{j}x_{1j}})^{2}},\dots,\frac{e^{\sum^{p}_{j=0}\theta_{j}x_{nj}}}{(1+e^{\sum^{p}_{j=0}\theta_{j}x_{nj}})^{2}}\right)$ (22) We can use the sum of diagonals, $(X^{T}WX)_{jj}$, of the Fisher Information matrix as the characteristic function associated with a set of data, since large diagonal entries correspond to smaller uncertainty about one of the parameters by the Cramer Rao Bound (Definition 11). #### 9.3.2 Neural Networks For neural networks with softmax or sigmoid output layers (whose output can be viewed as a likelihood), the Fisher Information and Mutual Information are also well defined for each parameter within the network. However, due to the large number of parameters in modern neural networks, it may be impractical to perform large matrix arithmetic for these measures. Agencies can resolve this by adopting a pre-trained neural network and only training the parameters of the last few layers, greatly reducing the number of parameters in question (of course, this requires the pre-trained model to have been trained on a similar task). ### 9.4 Extension on real-life data We extend the data sharing process to a real-life data set of breast cancer tumour diagnoses [15]. Such data can be held in small sets separately by a few hospitals, and our data sharing process investigates their contributions and rewards when they work together. The data is labeled with a binary value indicating whether a tumour is malignant, based on various attributes of its appearance such as perimeter and smoothness. We first train a neural network using a small subset of the data to simulate a pre-trained neural network with two hidden layers of 6 hidden units each. Our data sharing process learns 4 parameters (3 parameters, one for each unit in the exposed layer, and 1 offset parameter) associated with the last layer in a Logistic Regression setting. Figure 9 highlights the neural network structure. Figure 9: Pre-trained neural network with the data sharing process done over the last layer Hence, although the breast cancer data set has 10 features, the neural network transforms it into a 3-dimensional data set in the first 2 layers. This transformed data is then trained over a Logistic Regression model in the final layer. To further demonstrate that some data points in the 3-dimensional space are more useful than others, we use the trace of the Fisher Information matrix as our characteristic function in a Logistic Regression setting, with $\mathcal{I}(\theta)=X^{T}WX$ defined in Equation (21). The expression tells us that $\mathcal{I}(\theta)$ has large diagonal entries when $W$, a diagonal matrix, has large diagonal entries. Equation (22) further implies that each diagonal entry corresponds to $\hat{p}(1-\hat{p})$ of a contributed data point, where $\hat{p}$ is the output of the Logistic Regression model for that data point. Thus, the diagonal entry is large when the corresponding data point lies close to the decision boundary of the learnt Logistic Regression model. 
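As a small illustration (our own code, not from the paper's repository), the trace-based characteristic function described above can be evaluated directly from equations (21)–(22):

```python
# Hedged sketch: trace of the logistic-regression Fisher Information X^T W X,
# with W holding p_hat * (1 - p_hat) for each data point (equations (21)-(22)).
import numpy as np


def logistic_fisher_trace(X: np.ndarray, theta: np.ndarray) -> float:
    """X: n x p design matrix; theta: length-p parameter vector of the learnt model."""
    p_hat = 1.0 / (1.0 + np.exp(-X @ theta))   # model output for each data point
    W = np.diag(p_hat * (1.0 - p_hat))         # large near the decision boundary
    return float(np.trace(X.T @ W @ X))        # sum of diagonals of the Fisher matrix
```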
As such, if Fisher Information is used as the characteristic function to gauge the value of data, data points which lie close to the learnt decision boundary will have higher value (Figure 10). To demonstrate this, we assume 3 hospitals, each with the same number of data points. They hold data points in 3 different regions: * • Hospital A holds only data points which lie near the boundary ($0.25<\hat{p}<0.75$) * • Hospital B holds data points which can be moderately far from the boundary ($0.1<\hat{p}<0.9$) * • Hospital C holds data points which can be very far from the boundary ($0.05<\hat{p}<0.95$) Figure 10: Varying importance (based on Fisher Information) of real-life data after the neural network transformation Figure 11: Shapley value of each hospital based on Fisher Information We see that randomly sampling data points for each player from different regions of the feature space does indeed give different levels of contribution in the data sharing process. To determine whether the outcome is optimal in terms of the model awarded to each hospital, one applies Theorem 6 based on the given Shapley value of each hospital. In this case, we assume the data set is divided amongst the three hospitals (around 200 data points each) such that each holds an equal number of data points but of different quality. It turns out, fortunately, that the outcome is optimal (proportional and stable): | Hospital A | Hospital B | Hospital C ---|---|---|--- $\phi$ (contribution) | 317 | 1369 | 2801 Reward | 508 | 2194 | 4488 Table 2: Shapley value (contribution) and reward of each hospital based on Fisher Information We observe that each hospital receives a better model than if it had worked alone. This implies the data sharing process creates a higher-quality model for each participating hospital and is desirable for society at large. ## 10 Future Work ### 10.1 Choosing reasonable characteristic functions Choosing a sound characteristic function to measure players’ contributions in a data-sharing process is crucial in calculating contributions and generating fair rewards for each player. In corporate practice, players and agencies may find it inconvenient to use complicated measures (such as Fisher Information and Mutual Information) due to a lack of understanding of their theoretical interpretations. This may require some domain experts to be involved in such data sharing projects. From a theoretical standpoint, it would be desirable to categorise characteristic functions into classes based on certain function properties (e.g. monotonicity, submodularity, superadditivity). We can then study what outcomes are possible for different classes of functions. This is useful because when faced with a new characteristic function, we can try to see if such a function falls into any of these classes and immediately reach some known conclusions about the outcome. ### 10.2 Desirable suboptimal outcomes In Section 7, this paper offered a few suboptimal outcomes for when optimality cannot be achieved. However, it is not clear which outcomes are actually desired by participating data providers in real life. One may need to rely on some empirical studies of the behaviour of players in terms of preference and rationality. ## 11 Conclusion This paper aimed to provide investigative insights into the data sharing process with regard to players’ contribution evaluation and outcomes, which have not been studied much previously. 
In this paper, we formulated the data sharing process as a new class of cooperative game with new definitions of stability and fairness. We then provided theoretical analyses of the various optimal and suboptimal outcomes of such games in relation to an arbitrary characteristic function, and explained how these outcomes may be useful to a central mediator in the data sharing process in terms of explaining the behaviours of participating players. Finally, we performed sensitivity analysis to study how sensitive an optimal outcome is towards marginal changes. Through experiments on synthetic and real-life datasets, we demonstrated the theoretical results introduced in the paper, offered intuitive meaning behind them and explained how these results can be extended towards different machine learning models. ## References * Alessandro and Jens, [2005] Alessandro, A. and Jens, G. (2005). Privacy and rationality in individual decision making. IEEE Security Privacy, 3(1), 26-33. * Alexander, [2012] Alexander, K. (2012). Measurement of disproportionality in proportional representation systems. Mathematical and Computer Modelling 48 (p. 1421-1438). * Amirata and James, [2019] Amirata, G. and James, Z. (2019). Data Shapley: Equitable valuation of data for machine learning. Proceedings of the 36th ICML. * Chessa et al., [2019] Chessa, M., Grossklags, J., and Loiseau, P. (2019). A game-theoretic study on non-monetary incentives in data analytics projects with privacy implications. In 28th IEEE Computer Security Foundation Symposium (p. 90-104). * Efron and Hinkley, [1978] Efron, B. and Hinkley, D.V. (1978). Assessing the accuracy of the maximum likelihood estimator: Observed versus expected Fisher information. Biometrika, Volume 65, Issue 3, December 1978, p. 457–483. * Erich and George, [2012] Erich, L. and George, C. (2012). Theory of point estimation. Second Edition, Springer texts in statistics. (p. 115). * Georgios et al., [2012] Georgios, C., Edith, E., and Michael, W. (2012). Computational aspects of cooperative game theory. Ronald, J.B.; Thomas, G.D. (eds.) Synthesis lectures on Artificial Intelligence and Machine Learning (p. 11-35). * Kaiming et al., [2016] Kaiming, H., Xiangyu, Z., Shaoqing, R., and Jian, S. (2016). Deep residual learning for image recognition. Proceedings of 31st CVPR. * Koutsopoulos et al., [2015] Koutsopoulos, I., Gionis, A., and Halkidi, M. (2015). Auctioning data for learning. Proceedings of 15th ICDMW. * Linda, [2019] Linda, S. (2019). Information gain. Washington University. Lecture notes. EEP 596: Machine Vision. * Lloyd, [1953] Lloyd, S. (1953). A value for n-person games. Kuhn, H. W.; Tucker, A. W. (eds.). Contributions to the Theory of Games. Annals of Mathematical Studies (p. 307-317). * Nicolas et al., [2013] Nicolas, G., Stratis, I., Patrick, L., and Benjamin, R. (2013). Linear regression from strategic data sources. CoRR, abs/1309.7824. * Ruoxi et al., [2019] Ruoxi, J., David, D., and Boxin, W. (2019). Towards efficient data valuation based on the Shapley value. Proceedings of the 22nd AISTATS. * Stratis and Patrick, [2013] Stratis, I. and Patrick, L. (2013). Linear regression as a non-cooperative game. In Y. Chen and N. Immorlica (Eds.), Web and Internet Economics (p. 277-290). * William et al., [1993] William, H., Nick, S., and Olvi, L. (1993). Nuclear feature extraction for breast tumor diagnosis. SPIE 1993 International Symposium on Electronic Imaging: Science and Technology, volume 1905 (p. 861-870). * Zhou, [2016] Zhou, F. (2016). 
Introduction to statistical inference. Stanford University. Lecture notes. STATS 200, Autumn 2016. All code in the experiments can be found at: https://github.com/chenzhiliang94/multi-agent-data-sharing
# Radio spectra of pulsars fitted with the spectral distribution function of the emission from their current sheet Houshang Ardavan Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, United Kingdom Email address: <EMAIL_ADDRESS> (Accepted XXX. Received YYY; in original form ZZZ) ###### Abstract In their catalogue of pulsars’ radio spectra, Swainston et al. (2022, PASA, 39, e056) distinguish between five different forms of these spectra: those that can be fitted with (i) a simple power law, (ii) a broken power law, (iii) a low-frequency turn-over, (iv) a high-frequency turn-over or (v) a double turn-over spectrum. Here, we choose two examples from each of these categories and fit them with the spectral distribution function of the caustics that are generated by the superluminally moving current sheet in the magnetosphere of a non-aligned neutron star. In contrast to the prevailing view that the curved features of pulsars’ radio spectra arise from the absorption of the observed radiation in high-density environments, our results imply that these features are intrinsic to the emission mechanism. We find that all observed features of pulsar spectra (including those that are normally fitted with simple or broken power laws) can be described by a single spectral distribution function and regarded as manifestations of a single emission mechanism. From the results of an earlier analysis of the emission from a pulsar’s current sheet and the values of the fit parameters for each spectrum, we also determine the physical characteristics of the central neutron star of each considered example and its magnetosphere. ###### keywords: pulsars: general – stars: neutron – methods: data analysis – radiation mechanisms: non-thermal ## 1 Introduction Attempts at explaining the radiation from pulsars have so far been focused mainly on mechanisms of acceleration of charged particles (see, e.g., the references in Melrose et al., 2021): an approach spurred by the fact that, once the relevant version of this mechanism is identified, one can calculate the electric current density associated with the accelerating charged particles involved and thereby evaluate the classical expression for the retarded potential that describes the looked-for radiation. In the present paper, however, we evaluate the retarded potential, and hence the generated radiation field, using the macroscopic distribution of electric charge-current density that is already provided by the numerical computations of the structure of a non-aligned pulsar magnetosphere (Ardavan, 2021, Section 2). Both the radiation field thus calculated and the electric and magnetic fields that pervade the pulsar magnetosphere are solutions of Maxwell’s equations for the same charge-current distribution. These two solutions are completely different, nevertheless, because they satisfy different boundary conditions: the far-field boundary conditions with which the structure of the pulsar magnetosphere is computed are radically different from the corresponding boundary conditions with which the retarded solution of these equations (i.e. 
the solution describing the radiation from the charges and currents in the pulsar magnetosphere) is derived (see Section 3 and the last paragraph in Section 6 of Ardavan 2021). Numerical computations based on the force-free and particle-in-cell formalisms have now firmly established that the magnetosphere of a non-aligned neutron star entails a current sheet outside its light cylinder whose rotating distribution pattern moves with linear speeds exceeding the speed of light in vacuum (see Spitkovsky 2006; Kalapotharakos et al. 2012; Tchekhovskoy et al. 2016; and the references in Philippov & Kramer 2022). However, the role played by the superluminal motion of this current sheet in generating the multi-wavelength, focused pulses of radiation that we receive from neutron stars is not generally acknowledged. Given that the superluminally moving distribution pattern of this current sheet is created by the coordinated motion of aggregates of subluminally moving charged particles (see Ginzburg, 1972; Bolotovskii & Bykov, 1990), the motion of any of its constituent particles is too complicated to be taken into account individually. Only the densities of charges and currents enter Maxwell’s equations, on the other hand, so that the macroscopic charge-current distribution associated with the magnetospheric current sheet takes full account of the contributions toward the radiation that arise from the complicated motions of the charged particles comprising it. The radiation field generated by a uniformly rotating volume element of the distribution pattern of the current sheet in the magnetosphere of a non-aligned neutron star embraces a synergy between the superluminal version of the field of synchrotron radiation and the vacuum version of the field of Čerenkov radiation. Once superposed to yield the emission from the entire volume of the source, the contributions from the volume elements of this distribution pattern that approach the observation point with the speed of light and zero acceleration at the retarded time interfere constructively and form caustics in certain latitudinal directions relative to the spin axis of the neutron star. The waves that embody these caustics are more focused the farther they are from their source: as their distance from their source increases, two nearby stationary points of their phases draw closer to each other and eventually coalesce at infinity. By virtue of their narrow peaks in the time domain, the resulting focused pulses thus procure frequency spectra whose distributions extend from radio waves to gamma-rays (Ardavan, 2021, Table 1 and Section 5.4). This paper is concerned with the radio spectra of pulsars. Its task is to ascertain whether the spectrum of the caustics generated by a pulsar’s current sheet (Section 2) can account for all five categories of spectral shapes catalogued by Swainston et al. (2022) (https://all-pulsar-spectra.readthedocs.io/en/latest/). To this end, it presents fits to two examples (with the largest number of known data points) of the catalogued spectra in each category (Section 3) and relates the values of their fit parameters to the physical characteristics of the central neutron star of the corresponding pulsar and its magnetosphere (Section 4). 
## 2 Radio spectrum of the caustics generated by the superluminally moving current sheet The frequency spectrum of the radiation that is generated as a result of the superluminal motion of the current sheet in the magnetosphere of a non-aligned neutron star was presented, in its general form, in equation (177) of Ardavan (2021, Section 5.3). In a case where the magnitudes of the vectors denoted by $\boldsymbol{\cal P}_{l}$ and $\boldsymbol{\cal Q}_{l}$ in equation (177) of Ardavan (2021) are appreciably larger than those of their counterparts, ${\bar{\boldsymbol{\cal P}}}_{l}$ and ${\bar{\boldsymbol{\cal Q}}}_{l}$, and the dominant contribution towards the Poynting flux $S_{\nu}$ of the radiation is made by only one of the two terms corresponding to $l=1$ and $l=2$, e.g. $l=2$, that equation can be written as $S_{\nu}=\kappa_{0}\,k^{-2/3}\left|\boldsymbol{\cal P}_{2}\,{\rm Ai}(-k^{2/3}\sigma_{21}^{2})-{\rm i}k^{-1/3}\boldsymbol{\cal Q}_{2}\,{\rm Ai}^{\prime}(-k^{2/3}\sigma_{21}^{2})\right|^{2},$ (1) where Ai and ${\rm Ai}^{\prime}$ are the Airy function and the derivative of the Airy function with respect to its argument, respectively, $k=2\pi\nu/\omega$ is the frequency $\nu$ of the radiation in units of the rotation frequency $\omega/2\pi$ of the central neutron star, and $\kappa_{0}$ and $\sigma_{21}$ are two positive scalars. The coefficients of the Airy functions in the above expression stand for $\boldsymbol{\cal P}_{2}=k^{-1/2}\boldsymbol{\cal P}_{2}^{(2)}$ and $\boldsymbol{\cal Q}_{2}=k^{-1/2}\boldsymbol{\cal Q}_{2}^{(2)}$ when $k\geq k_{2}$ and for $\boldsymbol{\cal P}_{2}=\boldsymbol{\cal P}_{2}^{(0)}$ and $\boldsymbol{\cal Q}_{2}=\boldsymbol{\cal Q}_{2}^{(0)}$ when $k<k_{2}$, in which the complex vectors $\boldsymbol{\cal P}_{2}^{(0)}$, $\boldsymbol{\cal Q}_{2}^{(0)}$, $\boldsymbol{\cal P}_{2}^{(2)}$ and $\boldsymbol{\cal Q}_{2}^{(2)}$ are defined by equations (138)-(146) of Ardavan (2021) and $k_{2}$ designates a threshold frequency. The variable $\sigma_{21}$ determines the separation between two nearby stationary points of the phases of the received waves: the smaller the value of $\sigma_{21}$, the more focused is the observed radiation and the higher is its frequency content (Ardavan, 2021, Section 4.5). The radio component of the present radiation is mostly generated by values of $\sigma_{21}$ that range from $10^{-4}$ to $10^{-2}$. In this paper, we replace $\boldsymbol{\cal P}_{2}^{(0)}$, $\boldsymbol{\cal Q}_{2}^{(0)}$, $\boldsymbol{\cal P}_{2}^{(2)}$ and $\boldsymbol{\cal Q}_{2}^{(2)}$, which only have weak dependences on $\sigma_{21}$, by their values for $\sigma_{21}=10^{-3}$ and treat them as constant parameters. 
Evaluation of the right-hand side of equation (2) results in $\displaystyle S_{\nu}$ $\displaystyle=$ $\displaystyle\kappa_{1}\,k^{-2/3-j/2}\Big{[}{\rm Ai}^{2}(-k^{2/3}\sigma_{21}^{2})+\zeta_{1}^{2}k^{-2/3}{{\rm Ai}^{\prime}}^{2}(-k^{2/3}\sigma_{21}^{2})$ (2) $\displaystyle+2\zeta_{1}\cos\beta\,k^{-1/3}{\rm Ai}(-k^{2/3}\sigma_{21}^{2}){\rm Ai}^{\prime}(-k^{2/3}\sigma_{21}^{2})\Big{]},$ where $j=0$ when $k<k_{2}$ and $j=2$ when $k\geq k_{2}$, $\kappa_{1}=\kappa_{0}\left|\boldsymbol{\cal P}^{(j)}_{2}\right|^{2},\,\,\,\zeta_{1}=\frac{\left|{\boldsymbol{\cal Q}^{(j)}_{2}}\right|}{\left|{\boldsymbol{\cal P}^{(j)}_{2}}\right|},\,\,\,\cos\beta=\frac{\Im\left(\boldsymbol{\cal Q}^{(j)}_{2}\cdot\boldsymbol{\cal P}^{(j)*}_{2}\right)}{\left|\boldsymbol{\cal Q}^{(j)}_{2}\right|\left|\boldsymbol{\cal P}^{(j)}_{2}\right|},$ (3) and $\Im{}$ and $*$ denote an imaginary part and the complex conjugate, respectively. The above spectrum is emblematic of any radiation that entails caustics (see Stamnes, 1986). To take account of the fact that the parameter $\sigma_{21}$ assumes a non- zero range of values across the (non-zero) latitudinal width of the detected radiation beam (Ardavan, 2021, Section 4.5), we must integrate $S_{\nu}$ with respect to $\sigma_{21}$ over a finite interval $\rho\sigma_{0}\leq\sigma_{21}\leq\sigma_{0}$ with $\sigma_{0}\ll 1$ and $0\leq\rho<1$. Performing the integration of the Airy functions in equation (2) with respect to $\sigma_{21}$ by means of Mathematica, we thus obtain $\displaystyle{\cal F}_{\nu}$ $\displaystyle=$ $\displaystyle\int_{\rho\sigma_{0}}^{\sigma_{0}}S_{\nu}\,{\rm d}\sigma_{21}$ $\displaystyle=$ $\displaystyle\kappa\,\chi^{-j/2}\left[f_{1}(\chi,\rho)+\frac{\zeta^{2}}{4\sqrt{3}}f_{2}(\chi,\rho)-\frac{\zeta\cos\beta}{2\sqrt{3}}f_{3}(\chi,\rho)\right],$ where $\kappa=\frac{2^{j/2}\sigma_{0}^{3(1+j/2)}}{3^{(j+3)/2}\pi^{3/2}}\kappa_{1},\qquad\zeta=\sigma_{0}\zeta_{1},\qquad\chi=\frac{2}{3}{\sigma_{0}}^{3}k,$ (5) $\displaystyle f_{1}$ $\displaystyle=$ $\displaystyle\bigg{[}3\Gamma\left(\frac{7}{6}\right)\eta\chi^{-2/3}{}_{2}F_{3}\left(\begin{matrix}1/6&1/6&{}\\\ 1/3&2/3&7/6\end{matrix};-\eta^{6}\chi^{2}\right)$ $\displaystyle+\pi^{1/2}\eta^{3}{}_{2}F_{3}\left(\begin{matrix}1/2&1/2&{}\\\ 2/3&4/3&3/2\end{matrix}\,;-\eta^{6}\chi^{2}\right)$ $\displaystyle+\frac{9}{20}\Gamma\left(\frac{5}{6}\right)\eta^{5}\chi^{2/3}{}_{2}F_{3}\left(\begin{matrix}5/6&5/6&{}\\\ 4/3&5/3&11/6\end{matrix}\,;-\eta^{6}\chi^{2}\right)\bigg{]}_{\eta=\rho}^{\eta=1},$ (6) $f_{2}=\eta\chi^{-4/3}\,{}_{24}G^{31}\left(-\eta^{2}\chi^{2/3},\frac{1}{3}\left|\,\begin{matrix}5/6&7/6&{}&{}\\\ 0&2/3&4/3&-1/6\end{matrix}\right)\right.\bigg{|}_{\eta=\rho}^{\eta=1},$ (7) $f_{3}=\eta\chi^{-1}\,{}_{24}G^{31}\left(-\eta^{2}\chi^{2/3},\frac{1}{3}\left|\,\begin{matrix}5/6&1/2&{}&{}\\\ 0&1/3&2/3&-1/6\end{matrix}\right)\right.\bigg{|}_{\eta=\rho}^{\eta=1},$ (8) and ${}_{2}F_{3}$ and ${}_{24}G^{31}$ are respectively the generalised hypergeometric function (see Olver et al., 2010) and the generalised Meijer G-Function 222https://mathworld.wolfram.com/MeijerG-Function.html. The variable $\chi$ that appears in the above expressions is related to the frequency $\nu$ of the radiation via $\chi=4\pi{\sigma_{0}}^{3}\nu/(3\omega)$. The scale and shape of the spectrum described by equation (LABEL:E4) depend on whether $j$ equals $0$ or $2$ (i.e. 
on whether the dimensionless frequency $k$ lies below or above the threshold frequency $k_{2}$) and on the five parameters $\kappa$, $\zeta$, $\beta$, $\sigma_{0}$ and $\rho$: parameters whose values are dictated by the characteristics of the magnetospheric current sheet (see Section 4). The parameters $\zeta$, $\beta$ and $\rho$ determine the shape of the spectral distribution while the parameters $\kappa$ and $\sigma_{0}$ respectively determine the position of this distribution along the flux-density (${\cal F}_{\nu}$) and the frequency ($\nu$) axes. ## 3 Fits to the data on examples of various forms of radio spectra In this section, we choose from each of the five galleries of pulsar spectral shapes catalogued by Swainston et al. (2022) two examples with the largest number of known data points and fit them with the spectral distribution function described by equation (LABEL:E4). In the case of each example, we use Mathematica’s ‘NonlinearModelFit’ procedure333 https://reference.wolfram.com/language/ref/NonlinearModelFit.html and the statistical information that it provides to determine the values of the fit parameters in equation (LABEL:E4) and their standard errors. Where, owing to the complexity of the expression in equation (LABEL:E4), this procedure fails to work and so the fits to the data are obtained by elementary iteration, only the values of these parameters are specified. The results are presented in Figs. (1)–(10) and their captions. The horizontal and vertical axes in these logarithmic plots are marked with the values of ${\cal F}_{\nu}$ and $\nu_{\rm MHz}$, respectively, where $\nu_{\rm MHz}$ stands for the frequency of the radiation in units of MHz. It can be seen from Figs. (1)–(10) that the fit residuals in the case of each pulsar are smaller than the corresponding observational errors for the majority of the data points. Since there are no two values of any of the fit parameters for which the spectrum described by equation (LABEL:E4) has the same shape and position, the specified values of the fit parameters are moreover unique. Figure 1: Spectrum of J1915+1009: an example from the simple-power-law gallery of Swainston et al. (2022). The curve is a plot of the flux density ${\cal F}_{\nu}$ given by equation (LABEL:E4) for $j=2$, $\kappa=8.39\times 10^{3}$ mJy, $\sigma_{0}=8.22\times 10^{-3}$, $\zeta=\beta=\rho=0$ and $\chi=0.150\,\nu_{\rm MHz}$. Figure 2: Spectrum of J0206-4028: a second example from the simple-power-law gallery of Swainston et al. (2022). The curve is a plot of the flux density ${\cal F}_{\nu}$ given by equation (LABEL:E4) for $j=2$, $\kappa=2.58\times 10^{3}$ mJy, $\sigma_{0}=7.75\times 10^{-3}$, $\zeta=0.668$, $\beta=\rho=0$ and $\chi=0.112\,\nu_{\rm MHz}$. Figure 3: Spectrum of J1024-0719: an example from the broken-power-law gallery of Swainston et al. (2022). The curve is a plot of the flux density ${\cal F}_{\nu}$ given by equation (LABEL:E4) for $j=2$, $\kappa=2.81\times 10^{-2}$ mJy, $\sigma_{0}=2.81\times 10^{-3}$, $\zeta=~{}0.507$, $\beta=0.5$, $\rho=0$ and $\chi=7.67\times 10^{-5}\,\nu_{\rm MHz}$. Figure 4: Spectrum of J0452-1759: a second example from the broken-power-law gallery of Swainston et al. (2022). The curve is a plot of the flux density ${\cal F}_{\nu}$ given by equation (LABEL:E4) for $j=2$, $\kappa=30.6\pm 10.6$ mJy, $\sigma_{0}=(1.88\pm 0.18)\times 10^{-3}$, $\zeta=1.75\pm 0.59$, $\beta=0.64\pm 0.18$, $\rho=0.58\pm 0.34$ and $\chi=(2.42\pm 0.62)\times 10^{-3}\nu_{\rm MHz}$. 
Figure 5: Spectrum of J0332+5434: an example from the low-frequency-turn-over gallery of Swainston et al. (2022). The curve is a plot of the flux density ${\cal F}_{\nu}$ given by equation (4) for $j=2$, $\kappa=(1.55\pm 1.42)\times 10^{3}$ mJy, $\sigma_{0}=(1.49\pm 0.06)\times 10^{-3}$, $\zeta=0.592\pm 0.033$, $\beta=(3.99\pm 2.00)\times 10^{-2}$, $\rho=0.927\pm 0.077$ and $\chi=(1.58\pm 0.20)\times 10^{-3}\,\nu_{\rm MHz}$.

Figure 6: Spectrum of J1845-0743: a second example from the low-frequency-turn-over gallery of Swainston et al. (2022). The curve is a plot of the flux density ${\cal F}_{\nu}$ given by equation (4) for $\kappa=2.22\pm 0.12$ mJy, $\sigma_{0}=(2.94\pm 0.08)\times 10^{-3}$, $\zeta=2.08\pm 0.29$, $\beta=0$, $\rho=0.418\pm 0.056$ and $\chi=(1.77\pm 0.14)\times 10^{-3}\,\nu_{\rm MHz}$.

Figure 7: Spectrum of J1829-1751: an example from the high-frequency-cut-off gallery of Swainston et al. (2022). The curve is a plot of the flux density ${\cal F}_{\nu}$ given by equation (4) for $j=0$, $\kappa=3.16\times 10^{-4}$ mJy, $\sigma_{0}=5.93\times 10^{-4}$, $\zeta=10^{3}$, $\beta=0$, $\rho=0.999$ and $\chi=4.27\times 10^{-5}\,\nu_{\rm MHz}$.

Figure 8: Spectrum of J1835-0643: a second example from the high-frequency-cut-off gallery of Swainston et al. (2022). The curve is a plot of the flux density ${\cal F}_{\nu}$ given by equation (4) for $j=0$, $\kappa=1.58\times 10^{-4}$ mJy, $\sigma_{0}=7.88\times 10^{-4}$, $\zeta=10^{3}$, $\beta=0$, $\rho=0.999$ and $\chi=10^{-4}\,\nu_{\rm MHz}$.

Figure 9: Spectrum of J1932+1059: an example from the double-turn-over-spectrum gallery of Swainston et al. (2022). The curve is a plot of the flux density ${\cal F}_{\nu}$ given by equation (4) for $j=2$, $\kappa=8.38\pm 7.35$ mJy, $\sigma_{0}=(1.49\pm 0.36)\times 10^{-3}$, $\zeta=0.354\pm 0.069$, $\beta=\rho=0$ and $\chi=(5.02\pm 2.84)\times 10^{-4}\,\nu_{\rm MHz}$.

Figure 10: Spectrum of J0141+6009: a second example from the double-turn-over-spectrum gallery of Swainston et al. (2022). The curve is a plot of the flux density ${\cal F}_{\nu}$ given by equation (4) for $j=2$, $\kappa=5.01\times 10^{2}$ mJy, $\sigma_{0}=7.88\times 10^{-4}$, $\zeta=0.225$, $\beta=0$, $\rho=0.999$ and $\chi=3.98\times 10^{-4}\,\nu_{\rm MHz}$.

## 4 The connection between the parameters of a fitted spectrum and the physical characteristics of the source of the observed radiation

### 4.1 Derivation of the connecting relations

From equations (5), (3) and (1), it follows that the parameters $\kappa$, $\zeta$, $\rho$ and $\sigma_{0}$ in equation (4) are related to the characteristics of the source of the observed radiation via the quantities $\omega$, $\kappa_{0}$, $\boldsymbol{\cal P}_{2}^{(j)}$ and $\boldsymbol{\cal Q}_{2}^{(j)}$ that appear in the expression for the flux density ${\cal F}_{\nu}$. In this section we use the results of the analysis presented in Ardavan (2021) to express these quantities in terms of the inclination angle of the central neutron star, $\alpha$, the magnitude of the star's magnetic field at its magnetic pole, $B_{0}=10^{12}{\hat{B}}_{0}$ Gauss, the radius of the star, $r_{s0}=10^{6}d$ cm, the rotation frequency of the star, $\omega=10^{2}{\hat{P}}^{-1}$ rad/s, and the spherical polar coordinates, $R_{P}=D\,\,{\rm kpc}=3.085\times 10^{21}D$ cm, $\varphi_{P}$ and $\theta_{P}$, of the observation point $P$ in a frame whose centre and $z$-axis coincide with the centre and spin axis of the star.
From equations (177), (138)–(146) and (136) of Ardavan (2021) it follows that $\kappa_{0}=4.15\times 10^{18}w_{1}^{2}w_{3}^{2}{\hat{B}}_{0}^{2}d^{4}D^{-2}{\hat{P}}^{-1}\quad{\rm Jy},$ (9) $\left|\boldsymbol{\cal P}_{2}^{(0)}\right|=\frac{b}{6}\left|{\tilde{\bf P}}_{2}\right|\left(\sqrt{\frac{2\sigma_{21}}{\partial^{2}f_{2C}/\partial\tau^{2}|_{\tau=\tau_{0{\rm min}}}}}+\sqrt{\frac{-2\sigma_{21}}{\partial^{2}f_{2C}/\partial\tau^{2}|_{\tau=\tau_{0{\rm max}}}}}\right),$ (10) and $\left|\boldsymbol{\cal P}_{2}^{(2)}\right|=\left(\frac{2}{\pi a^{3}}\right)^{1/2}\left|\boldsymbol{\cal P}_{2}^{(0)}\right|,$ (11) where $w_{1}=\left|1-2\alpha/\pi\right|,\qquad w_{3}=1+0.2\sin^{2}\alpha,$ (12) $\displaystyle f_{2C}$ $\displaystyle=$ $\displaystyle({\hat{r}}_{P}^{2}{\hat{r}}_{sC}^{2}\sin^{2}\theta-1)^{1/2}-{\hat{R}}_{P}-\arccos\left({\hat{r}}_{P}^{-1}{\hat{r}}_{sC}^{-1}\csc\theta\right)$ (13) $\displaystyle-\arccos(\cot\alpha\cot\theta)+{\hat{r}}_{sC}+\varphi_{P}-{\hat{r}}_{s0}-2\pi,$ $\displaystyle a$ $\displaystyle=$ $\displaystyle({\hat{r}}_{P}^{2}{\hat{r}}_{sC}^{2}\sin^{2}\theta-1)\left[\frac{({\hat{r}}_{P}^{2}{\hat{r}}_{sC}^{2}\sin^{2}\theta-1)^{1/2}+{\hat{r}}_{sC}}{{\hat{r}}_{sC}({\hat{r}}_{P}^{2}-1)^{1/2}({\hat{R}}_{P}^{2}\sin^{2}\theta-1)^{1/2}}\right.$ (14) $\displaystyle\left.-\frac{1}{({\hat{r}}_{P}^{2}{\hat{r}}_{sC}^{2}\sin^{2}\theta-1)^{1/2}}\right],$ $b=\frac{{\hat{r}}_{P}^{2}{\hat{r}}_{sC}^{2}\sin^{2}\theta-1}{{\hat{r}}_{sC}({\hat{r}}_{P}^{2}-1)^{1/2}({\hat{R}}_{P}^{2}\sin^{2}\theta-1)^{1/2}},$ (15) ${\hat{r}}_{sC}=\frac{({\hat{r}}_{P}^{2}-1)^{1/2}({\hat{R}}_{P}^{2}\sin^{2}\theta-1)^{1/2}-{\hat{z}}_{P}\cos\theta}{{\hat{r}}_{P}^{2}\sin^{2}\theta-1},$ (16) $\sigma_{21}=[{\textstyle\frac{3}{4}}(f_{2C}|_{\tau=\tau_{2{\rm max}}}-f_{2C}|_{\tau=\tau_{2{\rm min}}})]^{1/3},$ (17) $\theta=\arccos(\sin\alpha\cos\tau),$ (18) $\displaystyle\left|{\tilde{\bf P}}_{2}\right|$ $\displaystyle=$ $\displaystyle\cos\alpha[{\hat{r}}_{sC}^{2}(1+\cos^{2}\theta_{P}-2\cos\theta\cos\theta_{P})+\tan^{2}\alpha$ $\displaystyle-\cot^{2}\theta+2{\hat{r}}_{sC}\sin\theta\cos\theta_{P}(\tan^{2}\alpha-\cot^{2}\theta)^{1/2}]^{1/2},\quad\,\,$ (19) and the variables $\tau_{2{\rm min}}$ and $\tau_{2{\rm max}}$ stand for the minimum and maximum of the function $f_{2C}$ (see also equations 7, 9, 174, 175, 93–95, 97 and 88 of Ardavan 2021). The caret on $R_{P}$ and $r_{s0}$ (and $r_{P}=R_{P}\sin\theta_{P}$, $z_{P}=R_{P}\cos\theta_{P}$) is used here to designate a variable that is rendered dimensionless by being measured in units of the light-cylinder radius $c/\omega$. (Note the following two corrections: the vector ${\bf P}_{2}$ in equation 145 and the numerical coefficient $1.54\times 10^{19}$ in equation 177 of Ardavan 2021 have been corrected to read ${\tilde{\bf P}}_{2}$ and $4.15\times 10^{18}$, respectively.) The expression for $|{\tilde{\bf P}}_{2}|$ in equation (19) is derived from that for ${\bf P}_{2}$ given by equations (98), (80), (78) and (62) of Ardavan (2021). In this derivation, we have set the observation point on the cusp locus of the bifurcation surface where ${\hat{r}}_{s}={\hat{r}}_{sC}$, have approximated $(p_{1},p_{2},p_{3})$ by its far-field value $2^{1/3}{\hat{R}}_{P}^{-1}({\hat{R}}_{P}^{-1},-1,1)$ and have let ${\bf P}_{2}={\hat{R}}_{P}^{-1}{\tilde{\bf P}}_{2}$. The factor ${\hat{R}}_{P}^{-2}$ that would have otherwise appeared in the resulting expression for $|\boldsymbol{\cal P}^{(2)}_{2}|^{2}$ is thus incorporated in the coefficient $\kappa_{0}$ in equation (9).
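For readers who wish to reproduce the geometric quantities entering equations (14)–(16) and (18) numerically, the following Python sketch (not part of the paper; it assumes $\omega=2\pi/P$ for the rotation frequency) transcribes them directly, with all hatted lengths measured in units of the light-cylinder radius $c/\omega$:

```python
# A sketch (not from the source paper's code) of the geometric quantities entering
# equations (14)-(16) and (18), for an observation point written in light-cylinder
# units: hatted variables are lengths measured in units of c/omega.
import numpy as np

C_CM_S = 2.998e10          # speed of light in cm/s
KPC_CM = 3.085e21          # 1 kpc in cm


def dimensionless_observer(D_kpc, theta_P, P_s):
    """Return (R_hat, r_hat, z_hat) of the observation point in units of c/omega."""
    omega = 2.0 * np.pi / P_s               # rotation frequency in rad/s (assumption)
    light_cylinder_cm = C_CM_S / omega
    R_hat = D_kpc * KPC_CM / light_cylinder_cm
    return R_hat, R_hat * np.sin(theta_P), R_hat * np.cos(theta_P)


def r_sC_hat(r_hat, R_hat, z_hat, theta):
    """Equation (16)."""
    root = np.sqrt(r_hat**2 - 1.0) * np.sqrt(R_hat**2 * np.sin(theta)**2 - 1.0)
    return (root - z_hat * np.cos(theta)) / (r_hat**2 * np.sin(theta)**2 - 1.0)


def b_coeff(r_hat, R_hat, rsC, theta):
    """Equation (15)."""
    num = r_hat**2 * rsC**2 * np.sin(theta)**2 - 1.0
    den = rsC * np.sqrt(r_hat**2 - 1.0) * np.sqrt(R_hat**2 * np.sin(theta)**2 - 1.0)
    return num / den


def a_coeff(r_hat, R_hat, rsC, theta):
    """Equation (14)."""
    q = r_hat**2 * rsC**2 * np.sin(theta)**2 - 1.0
    den = rsC * np.sqrt(r_hat**2 - 1.0) * np.sqrt(R_hat**2 * np.sin(theta)**2 - 1.0)
    return q * ((np.sqrt(q) + rsC) / den - 1.0 / np.sqrt(q))


def theta_of_tau(alpha, tau):
    """Equation (18)."""
    return np.arccos(np.sin(alpha) * np.cos(tau))
```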
For certain values of $\theta_{P}$, denoted by $\theta_{P2S}$, the function $f_{2C}(\tau)$ has an inflection point (see Ardavan 2021, Section 4.4, and Ardavan 2023c, Section 2). For any given inclination angle $\alpha$, the position $\tau_{2S}$ of this inflection point and the colatitude $\theta_{P2S}$ of the observation points for which $f_{2C}(\tau)$ has an inflection point follow from the solutions to the simultaneous equations $\partial f_{2C}/\partial\tau=0$ and $\partial^{2}f_{2C}/\partial\tau^{2}=0$. (Explicit expressions for the derivatives that appear in these equations can be found in Appendix A of Ardavan 2021.) For values of $\theta_{P}$ sufficiently close to $\theta_{P2S}$, the separation between the maximum $\tau=\tau_{2{\rm max}}$ and minimum $\tau=\tau_{2{\rm min}}$ of $f_{2C}$ and hence the value of $\sigma_{21}$ are small. In other words, the focused radiation beam is centred on the colatitude $\theta_{P2S}$ plotted in Fig. 11 (Ardavan, 2021, Section 5.5). Since the fit parameter $\sigma_{0}$, which denotes the maximum value assumed by $\sigma_{21}$, lies between $10^{-2}$ and $10^{-4}$ for most of the examples considered in Section 3, we have chosen the separation between $\tau=\tau_{2{\rm max}}$ and $\tau=\tau_{2{\rm min}}$ in each example such that the value of $\sigma_{21}$ in equation (17) is of the order of $10^{-3}$. Equations (5), (3), and (9)–(11) jointly yield $\kappa=1.59\times 10^{16}\sigma_{0}^{3}{\hat{B}}_{0}^{2}d^{4}D^{-2}{\hat{P}}^{-1}{\tilde{\kappa}}_{\rm th}\qquad{\rm Jy},$ (20) with $\displaystyle{\tilde{\kappa}}_{\rm th}$ $\displaystyle=$ $\displaystyle w_{1}^{2}w_{3}^{2}b^{2}\left|{\tilde{\bf P}}_{2}\right|^{2}\sigma_{21}\left(\frac{1}{\partial^{2}f_{2C}/\partial\tau^{2}|_{\tau=\tau_{2{\rm min}}}}\right.$ (21) $\displaystyle\left.-\frac{1}{\partial^{2}f_{2C}/\partial\tau^{2}|_{\tau=\tau_{2{\rm max}}}}\right),$ when $j=0$, and $\kappa=6.76\times 10^{15}\sigma_{0}^{6}{\hat{B}}_{0}^{2}d^{4}D^{-2}{\hat{P}}^{-1}{\hat{\kappa}}_{\rm th}\qquad{\rm Jy},$ (22) with ${\hat{\kappa}}_{\rm th}=a^{-3}{\tilde{\kappa}}_{\rm th},$ (23) when $j=2$. Equations (20) and (22) can be written as ${\tilde{\kappa}}_{\rm th}={\tilde{\kappa}}_{\rm obs}\quad{\rm and}\quad{\hat{\kappa}}_{\rm th}={\hat{\kappa}}_{\rm obs},$ (24) respectively, where ${\tilde{\kappa}}_{\rm obs}=6.27\times 10^{-17}\sigma_{0}^{-3}({\hat{B}}_{0}d^{2})^{-2}D^{2}{\hat{P}}\kappa$ (25) and ${\hat{\kappa}}_{\rm obs}=1.48\times 10^{-16}\sigma_{0}^{-6}({\hat{B}}_{0}d^{2})^{-2}D^{2}{\hat{P}}\kappa.$ (26) While ${\tilde{\kappa}}_{\rm obs}$ and ${\hat{\kappa}}_{\rm obs}$ only contain the observed parameters of the pulsar and its emission, the values of ${\tilde{\kappa}}_{\rm th}$ and ${\hat{\kappa}}_{\rm th}$ are determined by the physical characteristics of the magnetospheric current sheet that acts as the source of the observed emission. For a given value of $\sigma_{21}$, the right-hand sides of equations (21) and (23) are functions of the inclination angle $\alpha$ and the observer’s distance ${\hat{R}}_{P}$ only (see Fig. 12). The values of the fit parameters together with the relations in equation (24) and the plots of ${\tilde{\kappa}}_{\rm th}$ and ${\hat{\kappa}}_{\rm th}$ in Fig. 12 thus enable us to connect the parameters of the fitted spectra to the physical characteristics of their sources. Figure 11: The relationship between the colatitude $\theta_{P2S}$ on which the focused radiation beam is centred and the star’s inclination angle $\alpha$. 
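To illustrate how equations (24)–(26) connect the fitted spectra to their sources, the following short sketch (an illustration, not code from the paper) evaluates the combination ${\hat{B}}_{0}d^{2}\,\kappa_{\rm th}^{1/2}$ implied by a given set of fit parameters. It assumes that $\kappa$ is supplied in Jy (the fitted values quoted in mJy must be converted) and that ${\hat{P}}=100/\omega$ with $\omega=2\pi/P$; the numbers in the example call are placeholders.

```python
# A sketch (not from the source paper): the right-hand sides of equations (25) and
# (26), combined with equation (24), give B0_hat * d**2 * sqrt(kappa_th) as a
# function of the fit parameters (sigma_0, kappa) and the pulsar's D and P.
# Assumptions: kappa in Jy; P_hat = 100/omega with omega = 2*pi/P.
import numpy as np


def b0hat_d2_times_sqrt_kappa_th(sigma0, kappa_jy, D_kpc, P_s, j=2):
    P_hat = 100.0 * P_s / (2.0 * np.pi)
    coeff, power = (6.27e-17, 3) if j == 0 else (1.48e-16, 6)   # eqs. (25)/(26)
    return np.sqrt(coeff * sigma0**(-power) * D_kpc**2 * P_hat * kappa_jy)


# Example with placeholder numbers (kappa = 1 Jy, sigma_0 = 1e-2, D = 1 kpc, P = 1 s):
print(b0hat_d2_times_sqrt_kappa_th(sigma0=1e-2, kappa_jy=1.0, D_kpc=1.0, P_s=1.0))
```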
Figure 12: Plots of the dependences of the functions ${\hat{\kappa}}_{\rm th}$ and ${\tilde{\kappa}}_{\rm th}$, defined in equations (21) and (23), on the inclination angle $\alpha$ for $\sigma_{21}\simeq 10^{-3}$ and an observation point in the far zone. The red dots and the blue dots depict the values assumed by ${\hat{\kappa}}_{\rm th}$ and ${\tilde{\kappa}}_{\rm th}$, respectively. ### 4.2 Application of the connecting relations to the fitted spectra In this section, we use the relations derived in Section 4.1, the values of the fit parameters given in the captions to Figs. 1–10 and the data listed in the ATNF Pulsar Catalogue (Manchester et al., 2005) to determine (or set limits on) certain attributes of the central neutron stars of the pulsars considered in Section 3 and their magnetospheres. Once the values of the fit parameters $\sigma_{0}$ and $\kappa$ given in the caption to Fig. 1, the period ($0.405$ s) and the distance ($7$ kpc) of the pulsar J1915+1009 are inserted in equation (26), the resulting value of ${\hat{\kappa}}_{\rm obs}$ and the second member of equation (24) yield ${\hat{B}}_{0}d^{2}=1.13\,{\hat{\kappa}}_{\rm th}^{-1/2}\quad{\rm for}\quad J1915+1009.$ (27) If the central neutron star of this pulsar has the radius $10^{6}$ cm (i.e. $d=1$) and the magnetic field $2.51\times 10^{12}$ Gauss at its magnetic pole (i.e. ${\hat{B}}_{0}=2.51$) as predicted by the formula for magnetic dipole radiation, then equation (27) and the curve delineated by the red dots in Fig. 12 imply that the value of the angle $\alpha$ between the rotation and magnetic axes of J1915+1009 is either $3.8^{\circ}$ or $42.3^{\circ}$. (Here, and in other similar cases, a choice between the two possible values of $\alpha$ can be made by comparing the pulse profile of the pulsar in question with the theoretically predicted ones presented in Section 5.1 of Ardavan 2021.) Depending on whether $\alpha=3.8^{\circ}$ or $42.3^{\circ}$, the direction along which the radiation is observed forms the angles $\theta_{P2S}=3.8^{\circ}$ or $35.1^{\circ}$ with the spin axis of this pulsar (see Fig. 11). Within the framework of the present emission mechanism, the value of ${\hat{B}}_{0}$ can be significantly different from that given by the formula for magnetic dipole radiation, in which case equation (27) and Fig. 12 merely determine the required value of ${\hat{B}}_{0}d^{2}$ as a function of $\alpha$. In the same way, the values of $\sigma_{0}$ and $\kappa$ in the caption to Fig. 2 together with the period $0.361$ s and the distance $1.26$ kpc yield ${\hat{B}}_{0}d^{2}=0.127\,{\hat{\kappa}}_{\rm th}^{-1/2}\quad{\rm for}\quad J0206-4028.$ (28) If $d$ is set equal to $1$ and the value of ${\hat{B}}_{0}$ is assumed to be that given by the formula for magnetic dipole radiation, i.e. $0.879$, then equation (28) and the red curve in Fig. 12 imply that $\alpha$ equals either $0.29^{\circ}$ or $62.8^{\circ}$. According to Fig. 11, the values of $\theta_{P2S}$ corresponding to $\alpha=0.29^{\circ}$ and $62.8^{\circ}$ are $0.29^{\circ}$ and $97.2^{\circ}$, respectively. In the case of the broken-power-law example in Fig. 3, the fit parameters $\sigma_{0}=2.81\times 10^{-3}$ and $\kappa=2.81\times 10^{-2}$ mJy together with the period $5.16\times 10^{-3}$ s and the distance $1.22$ kpc yield ${\hat{B}}_{0}d^{2}=1.02\times 10^{-3}\,{\hat{\kappa}}_{\rm th}^{-1/2}\quad{\rm for}\quad J1024-0719.$ (29) Since ${\hat{\kappa}}_{\rm th}^{-1/2}\geq 1.42$ (see Fig. 
12), equation (29) implies that either $d\geq 2.15$, if ${\hat{B}}_{0}$ is given by its magnetic-dipole-radiation value $3.13\times 10^{-4}$, or ${\hat{B}}_{0}\geq 1.45\times 10^{-3}$, if $d=1$. In contrast, the fit parameters for the broken-power-law example in Fig. 4 together with the period $0.549$ s and the distance $0.4$ kpc yield ${\hat{B}}_{0}d^{2}=0.38\,{\hat{\kappa}}_{\rm th}^{-1/2}\quad{\rm for}\quad J0452-1759,$ (30) an equation that is satisfied by the magnetic-dipole-radiation value of ${\hat{B}}_{0}$ (i.e. $1.8$) and $d=1$ if $\alpha=57.7^{\circ}$ and so $\theta_{P2S}=84.6^{\circ}$. The values of the fit parameters $\sigma_{0}$ and $\kappa$ used for plotting Fig. 5 together with the period $0.715$ s and the distance $1.67$ kpc yield ${\hat{B}}_{0}d^{2}=25.8\,{\hat{\kappa}}_{\rm th}^{-1/2}\quad{\rm for}\quad J0332+5434.$ (31) The constraint ${\hat{\kappa}}_{\rm th}^{-1/2}\geq 1.42$ (see Fig. 12) therefore implies that either $d\geq 5.58$, if ${\hat{B}}_{0}$ has its magnetic-dipole-radiation value $1.22$, or ${\hat{B}}_{0}\geq 36.6$, if $d=1$. Next, the values of $\sigma_{0}$ and $\kappa$ in the caption to Fig. 6 together with the period $0.105$ s and the distance $7.11$ kpc yield ${\hat{B}}_{0}d^{2}=0.207\,{\hat{\kappa}}_{\rm th}^{-1/2}\quad{\rm for}\quad J1845-0743.$ (32) In this case, too, the constraint ${\hat{\kappa}}_{\rm th}^{-1/2}\geq 1.42$ (see Fig. 12) implies that either $d\geq 1.22$, if ${\hat{B}}_{0}$ has its magnetic-dipole-radiation value $0.198$, or ${\hat{B}}_{0}\geq 0.294$, if $d=1$. Unlike the examples in Figs. 1–6 for which the value of $j$ in equation (4) equals $2$, the example in Fig. 7 is fitted with a flux density for which $j=0$. The first member of equation (24), the values of the fit parameters $\sigma_{0}$ and $\kappa$ in the caption to Fig. 7 and the period $0.307$ s and the distance $5.92$ kpc jointly yield ${\hat{B}}_{0}d^{2}=4.03\times 10^{-6}\,{\tilde{\kappa}}_{\rm th}^{-1/2}\quad{\rm for}\quad J1829-1751.$ (33) The resulting value of ${\tilde{\kappa}}_{\rm th}^{-1/2}$ for ${\hat{B}}_{0}=1.32$ and $d=1$, i.e. $3.28\times 10^{5}$, implies that $\alpha\simeq 90^{\circ}$ in this case (see the curve delineated by the blue dots in Fig. 12). In the case of the example shown in Fig. 8, too, $j$ equals zero so that the corresponding values of the fit parameters $\sigma_{0}$ and $\kappa$ together with the period $0.306$ s and the distance $5.03$ kpc yield ${\hat{B}}_{0}d^{2}=1.58\times 10^{-6}\,{\tilde{\kappa}}_{\rm th}^{-1/2}\quad{\rm for}\quad J1835-0643.$ (34) If ${\hat{B}}_{0}$ has the value $3.56$ that is obtained from the formula for magnetic dipole radiation, then ${\tilde{\kappa}}_{\rm th}^{-1/2}=2.25\times 10^{6}$ for $d=1$ and so $\alpha\simeq 90^{\circ}$ according to Fig. 12. For the double-turn-over spectrum shown in Fig. 9, $j$ equals $2$ and the second member of equation (24) in conjunction with the values of the fit parameters $\sigma_{0}$ and $\kappa$, the period $0.277$ s and the distance $0.31$ kpc yield ${\hat{B}}_{0}d^{2}=0.198\,{\hat{\kappa}}_{\rm th}^{-1/2}\quad{\rm for}\quad J1932+1059.$ (35) If $d$ is set equal to $1$ and the value of ${\hat{B}}_{0}$ is assumed to be that given by the formula for magnetic dipole radiation, i.e. $0.518$, then equation (35) and the red curve in Fig. 12 imply that $\alpha$ equals $46.5^{\circ}$. Moreover, the colatitude along which this pulsar is observed has the value $\theta_{P2S}=60.8^{\circ}$ according to Fig. 11. Finally, the values of the fit parameters in the caption to Fig.
10, the period $1.22$ s and the distance $2.3$ kpc yield ${\hat{B}}_{0}d^{2}=1.78\times 10^{2}\,{\hat{\kappa}}_{\rm th}^{-1/2}\quad{\rm for}\quad J0141+6009.$ (36) Given the constraint ${\hat{\kappa}}_{\rm th}^{-1/2}\geq 1.42$ (see Fig. 12), this implies that either $d\geq 19.0$, if ${\hat{B}}_{0}$ has its magnetic-dipole-radiation value $0.70$, or ${\hat{B}}_{0}\geq 2.53\times 10^{2}$, if $d=1$.

## 5 Concluding remarks

No emission mechanism has yet been identified in the published literature on pulsars whose spectral distribution function can fit the data on all five categories of spectral shapes depicted in Figs. 1–10 (see Jankowski et al., 2018; Swainston et al., 2022, and the references therein). Curved or gigahertz-peaked spectra are generally thought to reflect the free-free absorption of the pulsar radiation in ionised high-density environments rather than being intrinsic to the emission mechanism (see Rajwade et al., 2016; Kijak et al., 2021, and the references therein). As we have seen, however, the spectral distribution function of the caustics that are generated by the superluminally moving current sheet in the magnetosphere of a non-aligned neutron star single-handedly accounts for all observed features of pulsar spectra (including those that are normally fitted with simple or broken power laws). A study of the characteristics of the radiation that is generated by this superluminally moving current sheet has already provided an all-encompassing explanation for the salient features of the radiation received from pulsars: its brightness temperature, polarization, spectrum, its profile with microstructure and with a phase lag between the radio and gamma-ray peaks (Ardavan, 2021, 2022), and the discrepancy between the energetic requirements of its radio and gamma-ray components (Ardavan, 2023b). Fits to the exceptionally broad gamma-ray spectra of the Crab, Vela and Geminga pulsars, for example, are provided by the spectral energy distribution of this radiation over the entire range of photon energies so far detected from them (Ardavan, 2023c, a). Detailed analyses of the structure of the magnetospheric current sheet and the coherent emission mechanism by which this sheet creates the caustics underlying the present spectral distribution function can be found in Ardavan (2021). A heuristic account of the mathematical results of those analyses in more transparent physical terms is presented in Ardavan (2022) and Ardavan (2023c, Section 2). Finally, the following cautionary remark concerning a common misconception is in order: it is often presumed that the plasma equations used in the numerical simulations of the magnetospheric structure of an oblique rotator should, at the same time, predict any radiation that the resulting structure would be capable of emitting (see, e.g. Spitkovsky, 2006; Kalapotharakos et al., 2012). This presumption stems from disregarding the role of boundary conditions in the solution of Maxwell's equations. As we have already pointed out, the far-field boundary conditions with which the structure of the pulsar magnetosphere is computed are radically different from the corresponding boundary conditions with which the retarded solution of these equations (i.e. the solution describing the radiation from the charges and currents in the pulsar magnetosphere) is derived (see Section 3 and the last paragraph in Section 6 of Ardavan 2021).

## Acknowledgements

I thank N. A. Swainston for helpful correspondence.
## Data availability

The data used in this paper are available in the public domain.

## References

* Ardavan (2021) Ardavan H., 2021, MNRAS, 507, 4530
* Ardavan (2022) Ardavan H., 2022, arXiv:2206.02729
* Ardavan (2023a) Ardavan H., 2023a, arXiv:2312.03471
* Ardavan (2023b) Ardavan H., 2023b, J. High Energy Astrophys., 37, 62
* Ardavan (2023c) Ardavan H., 2023c, A&A, 672, A154
* Bolotovskii & Bykov (1990) Bolotovskii B. M., Bykov V. P., 1990, Sov. Phys-Usp., 33, 477
* Ginzburg (1972) Ginzburg V. L., 1972, Sov. Phys-JETP, 35, 92
* Jankowski et al. (2018) Jankowski F., van Straten W., Keane E. F., Bailes M., Barr E. D., Johnston S., Kerr M., 2018, MNRAS, 473, 4436
* Kalapotharakos et al. (2012) Kalapotharakos C., Contopoulos I., Kazanas D., 2012, MNRAS, 420, 2793
* Kijak et al. (2021) Kijak J., Basu R., Lewandowski W., Rożko K., 2021, ApJ, 923, id.211
* Manchester et al. (2005) Manchester R. N., Hobbs G. B., Teoh A., Hobbs M., 2005, AJ, 129, 1993
* Melrose et al. (2021) Melrose D. B., Rafat M. Z., Masterano A., 2021, MNRAS, 500, 4530
* Olver et al. (2010) Olver F. W. J., Lozier D. W., Boisvert R. F., Clark C. W., 2010, NIST Handbook of Mathematical Functions. Cambridge Univ. Press, Cambridge
* Philippov & Kramer (2022) Philippov A., Kramer M., 2022, ARA&A, 60, 495
* Rajwade et al. (2016) Rajwade K., Lorimer D. R., Anderson L. D., 2016, MNRAS, 455, 493
* Spitkovsky (2006) Spitkovsky A., 2006, ApJ, 648, L51
* Stamnes (1986) Stamnes J. J., 1986, Waves in Focal Regions. Hilgar, Boston
* Swainston et al. (2022) Swainston N. A., Lee C. P., McSweeney S. J., Bhat N. D. R., 2022, PASA, 39, e056
* Tchekhovskoy et al. (2016) Tchekhovskoy A., Philippov A., Spitkovsky A., 2016, MNRAS, 457, 3384
# Specialization in Hierarchical Learning Systems: A Unified Information-theoretic Approach for Supervised, Unsupervised and Reinforcement Learning

Heinke Hihn and Daniel A. Braun
Institute for Neural Information Processing, Ulm University, Ulm, Germany

###### Abstract Joining multiple decision-makers together is a powerful way to obtain more sophisticated decision-making systems, but requires addressing the questions of division of labor and specialization. We investigate to what extent information constraints in hierarchies of experts not only provide a principled method for regularization but also enforce specialization. In particular, we devise an information-theoretically motivated on-line learning rule that allows partitioning of the problem space into multiple sub-problems that can be solved by the individual experts. We demonstrate two different ways to apply our method: (i) partitioning problems based on individual data samples and (ii) based on sets of data samples representing tasks. Approach (i) equips the system with the ability to solve complex decision-making problems by finding an optimal combination of local expert decision-makers. Approach (ii) leads to decision-makers specialized in solving families of tasks, which equips the system with the ability to solve meta-learning problems. We show the broad applicability of our approach on a range of problems including classification, regression, density estimation, and reinforcement learning, both in the standard machine learning setup and in a meta-learning setting.

###### Keywords: Meta-Learning, Information Theory, Bounded Rationality

Journal: Neural Processing Letters

## 1 Introduction

Intelligent agents are often conceptualized as decision-makers that learn probabilistic models of their environment and optimize utilities or disutilities like cost or loss functions VonNeumann2007 . In the general case we can think of a utility function as a black-box oracle that provides a numerical score that rates any proposed solution to a supervised, unsupervised or reinforcement learning problem. In the context of decision-making, naïvely enumerating all possibilities and searching for an optimal solution is usually prohibitively expensive. Instead, intelligent agents must invest their limited resources in such a way that they achieve an optimal trade-off between expected utility and resource costs in order to enable efficient learning and acting. This trade-off is the central issue in the fields of bounded or computational rationality, with repercussions across other disciplines including economics, psychology, neuroscience and artificial intelligence payne1993adaptive ; Simon1955 ; aldrich1999organizations ; manson2006bounded ; gigerenzer2009homo ; damasio2009neuroscience ; Gershman2015 ; Genewein2015 ; Hihn2018 . The information-theoretic approach to bounded rationality is a particular instance in which the resource limitations are modeled by information constraints Edward2014 ; McKelvey1995 ; Tishby2011 ; wolpert2006information ; Ortega2011a ; Ortega2013 ; gottwald2019bounded ; Schach2018 ; lindig2019analyzing , closely related to Jaynes' maximum entropy or minimum relative entropy principle jaynes1996probability .
At the heart of information-theoretic models of bounded rationality lies the information-utility trade-off for lossy compression, abstraction and hierarchy formation Genewein2015 . The optimal utility-information trade-off leads to an optimal arrangement of decision-makers and encourages the emergence of specialized agents, which in turn facilitates an optimal division of labor that reduces computational effort Hihn2018 ; Gottwald2019 . In the context of machine learning, hierarchical inference can also be regarded as an example of bounded-rational decision-making with information constraints, where different models correspond to different experts that are bound by their respective priors over parameters and try to optimize the marginal likelihood given data Genewein2015 . The priors that are acquired by the different expert models can be regarded as efficient abstractions that facilitate generalization to novel problems. Hierarchical inference models have been used successfully, for example, to model human behavior when learning with very limited resources from a handful of examples while excelling at adapting quickly jankowski2011meta ; genewein2015structure ; braun2010structure ; kemp2007learning . We can study sample-efficient adaptation to new problems as an instance of "learning to learn" or meta-learning thrun2012learning ; schmidhuber1997shifting ; caruana1997multitask , which is an ongoing and active field of research koch2015siamese ; vinyals2016matching ; Finn2017model ; ravi2017optimization ; ortega2019meta ; botvinick2019reinforcement ; yao2019hierarchically . While we can define meta-learning in different ways lemke2015metalearning ; vilalta2002perspective ; giraud2008metalearning ; brazdil2008metalearning ; hutter2019automated , a common denominator is that systems capable of meta-learning learn on two levels, each with a different time scale: slow learning across different tasks (the meta-level), and fast learning to adapt to each task individually. Efficient meta-learning remains a challenging machine learning problem: naïvely applying pre-trained models to new tasks usually leads to poor performance, because with each new incoming batch of data the agent has to perform expensive and slow re-learning of its policy.

In this study, we aim to harness the efficient adaptation and meta-learning ability of hierarchical inference processes for learning decision-making hierarchies formulated in terms of arbitrary utility functions (see Figure 1 and Table 1). The formulation in terms of general utility functions makes this approach applicable to a broad range of machine learning problems that can be formulated as optimization problems, including supervised and unsupervised learning, and reinforcement learning. To this end, we extend our previous work on specialization in hierarchical decision-making systems introduced in Hihn et al. (2019) hihn2019 to the problem of meta-learning. After introducing information-theoretic constraints for learning and decision-making in Section 2, we explain our hierarchical online learning approach to classification, regression, unsupervised learning and reinforcement learning in Section 3 for the case of within-task specialization. We extend our mixture-of-experts learning experiments from Hihn et al. (2019) hihn2019 for supervised and reinforcement learning in Sections 4.1 and 4.3 and devise a novel application to density estimation in Section 4.2.
The extended experiments in the classification and reinforcement learning setting provide new insights into how the number of experts influences the information processing and classification error and how expert policies partition the reinforcement learning problem space amongst themselves. In Sections 5 and 6 we extend the approach from state-based specialization to the case of across-task specialization. We show that this task specialization gives rise to a novel meta-learning approach where the meta-learner assigns previously unseen tasks to experts specialized on similar tasks. In order to split the task space and to assign the partitions to experts, we learn to represent tasks through a common latent embedding, which a gating network uses to distribute tasks among the experts, similar to the posterior over models in a hierarchical inference setup. In Section 7, we discuss novel aspects of the current study in the context of previous literature and conclude with a final summary.

## 2 Information Constraints in Learning and Decision-making

Figure 1: The hierarchical expert model: after observing a world state $x$, the selector samples an expert $m$ according to a selection policy $p(m|x)$ and then the expert samples an action $y$ from the expert's posterior action policy $p(y|m,x)$. This can be seen as a Gibbs sampling process, $x\thicksim p(x)$, $m\thicksim p(m|x)$, $y\thicksim p(y|m,x)$, with the priors given by the marginals $p(m)=\sum_{x}p(x)p(m|x)$ and $p(y|m)=\sum_{x}p(x|m)p(y|m,x)$. Each posterior is the result of a trade-off between maximizing the utility and minimizing the $\operatorname{\text{D}_{\text{KL}}}$ to the respective prior.

### 2.1 Decision-making with information constraints

Given a utility function $\mathbf{U}(x,y)$ that indicates the desirability of each action $y\in\altmathcal{Y}$ taken in each context $x\in\altmathcal{X}$, a fully rational agent picks the action $y^{*}_{x}=\operatorname*{arg\,max}_{y}\mathbf{U}(x,y)$. A bounded-rational decision-maker with information constraints Ortega2013 is modeled by an upper bound $B$ on the Kullback-Leibler divergence $\operatorname{\text{D}_{\text{KL}}}(p(y|x)||p(y))=\sum_{y}{p(y|x)\log{\frac{p(y|x)}{p(y)}}}$ between the agent's prior $p(y)$ and posterior policy $p(y|x)$ to express the limitation on the decision-maker's information-processing resources for reducing uncertainty when optimizing $\mathbf{U}(x,y)$. This results in the following optimization problem: $\max_{p(y|x)}\mathbb{E}_{p(y|x)}\left[\mathbf{U}(x,y)\right]-\frac{1}{\beta}\mathbb{E}_{p(x)}\left[\operatorname{\text{D}_{\text{KL}}}(p(y|x)||p(y))\right],$ (1) where $\beta\in\mathbb{R}^{+}$ is a Lagrange multiplier that is determined by $B$. For $\beta\rightarrow\infty$ we recover the maximum utility solution and for $\beta\rightarrow 0$ the agent can only act according to the prior. In the case of a known state distribution $p(x)$, the optimal prior is given by the marginal $p(y)=\sum_{x}{p(x)p(y|x)}$ and the expected Kullback-Leibler divergence becomes equal to the mutual information $I(X;Y)$. When aggregating bounded-rational agents into hierarchical decision-making systems that split the action space into soft partitions Genewein2015 , an expert selection policy $p(m|x)$ can be introduced that selects, for a given state $x$, an expert $m$ that chooses its action according to a stochastic policy $p(y|x,m)$.
Similar to Equation (1), such a hierarchical decision-making system with information constraints can be expressed by the following optimization problem: $\max_{p(y|x,m),p(m|x)}\operatorname{\mathbb{E}}[\mathbf{U}(x,y)]-\frac{1}{\beta_{1}}I(X;M)-\frac{1}{\beta_{2}}I(X;Y|M),$ (2) where $\beta_{1}$ is the resource parameter for the expert selection stage and $\beta_{2}$ for the experts, and $I(\cdot;\cdot|M)$ denotes the conditional mutual information. The optimal solution is a set of coupled equations $\begin{cases}\begin{array}[]{rcl}p(m|x)&=&\frac{1}{Z(x)}p(m)\exp(\beta_{1}\operatorname*{\Delta\text{F}_{\text{par}}}(x,m))\\\\[2.0pt] p(y|x,m)&=&\frac{1}{Z(x,m)}p(y|m)\exp(\beta_{2}\mathbf{U}(x,y))\\\\[2.0pt] \end{array}\end{cases}$ (3) with the marginals $p(m)=\sum_{x}p(x)p(m|x)$ and $p(y|m)=\sum_{x}p(x|m)p(y|x,m)$, and the free-energy difference $\operatorname*{\Delta\text{F}_{\text{par}}}(x,m)=\operatorname{\mathbb{E}}_{p(y|x,m)}[\mathbf{U}]-\frac{1}{\beta}\operatorname{\text{D}_{\text{KL}}}(p(y|x,m)||p(y|m))$. If we assume $\mathbf{U}(x,y)$ to represent the log-likelihood and $m$ to represent different statistical models with parameters $y$ to explain observables $x$, then Equation (2) describes the problem of hierarchical inference in terms of an optimization problem to produce the posteriors $p(y|x,m)$ and $p(m|x)$.

### 2.2 Learning with Information Constraints

In learning problems, the true utility $\mathbf{U}(x,y)$ is unknown; instead, we are given a function $\mathbf{U}_{D}(x,y)$ after having observed $n$ data samples $D=\\{(x_{i},y_{i})\\}_{i=1}^{n}$, and we know that $\mathbf{U}_{D}(x,y)\rightarrow\mathbf{U}(x,y)$ for $n\rightarrow\infty$. Assuming that we know $p(x)$, we are looking for the best strategy $p(y|x)$ that optimizes $\mathbf{U}(x,y)$. If we were to treat this as an inference problem, we could index different candidate solutions $p(y|x,\theta)$ with a parameter $\theta$ and place a prior $\pi(\theta)$ over this parameter. From PAC-Bayes analyses it is well-known that the Gibbs distribution $\pi_{D}(\theta)\propto\pi(\theta)\exp\left(\gamma\mathbf{U}_{D}(\theta)\right)$ minimizes the generalization error mcallester2003pac ; mcallester1999pac $\mathbb{E}_{\pi_{D}(\theta)}\left[\mathbf{U}(\theta)-\mathbf{U}_{D}(\theta)\right]\leq\sqrt{\frac{\operatorname{\text{D}_{\text{KL}}}(\pi_{D}(\theta)||\pi(\theta))+\log\frac{n}{\delta}}{2(n-1)}}$ where $\mathbf{U}_{D}(\theta)=\frac{1}{n}\sum_{x\in D}\sum_{y}p(y|x,\theta)\mathbf{U}_{D}(x,y)$ and $\mathbf{U}(\theta)=\sum_{x,y}p(x)p(y|x,\theta)\mathbf{U}(x,y)$. In a classification problem, for example, we could choose the utility $\mathbf{U}(x,y)=\mathbb{I}_{y=h(x)}$, where $\mathbb{I}$ is the indicator function and $h(x)$ is the mapping from $x$ to the true labels $y_{true}$. Just as the prior $\pi(\theta)$ can be seen to regularize the search for the parameter $\theta$, we can regard the $\operatorname{\text{D}_{\text{KL}}}$ as a principled regularizer in the space of probability distributions over $\theta$. Regularization techniques kukavcka2017regularization ; srivastava2014dropout ; ioffe2015batch are typically introduced to overcome the problem of overfitting (i.e., large generalization error), such that we can regard information constraints in $\theta$-space as a particular instance of regularization in the space of hypotheses.
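The role of the Gibbs distribution above can be illustrated with a toy computation. The following sketch (not part of the paper) assumes a finite hypothesis set and shows how the posterior $\pi_{D}(\theta)\propto\pi(\theta)\exp(\gamma\mathbf{U}_{D}(\theta))$ concentrates as $\gamma$ grows, together with the $\operatorname{\text{D}_{\text{KL}}}(\pi_{D}||\pi)$ term that enters the bound:

```python
# A small illustration (assumption: finite hypothesis set) of the Gibbs posterior
# pi_D(theta) proportional to pi(theta) * exp(gamma * U_D(theta)), together with
# the KL term D_KL(pi_D || pi) that enters the PAC-Bayes bound.
import numpy as np


def gibbs_posterior(prior, utilities, gamma):
    """prior: shape (K,) probabilities; utilities: empirical utilities U_D(theta_k)."""
    log_post = np.log(prior) + gamma * utilities
    log_post -= log_post.max()               # for numerical stability
    post = np.exp(log_post)
    return post / post.sum()


def kl_divergence(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))


prior = np.full(4, 0.25)                      # uniform prior over 4 hypotheses
U_D = np.array([0.6, 0.9, 0.7, 0.3])          # toy empirical utilities
for gamma in (0.0, 5.0, 50.0):
    post = gibbs_posterior(prior, U_D, gamma)
    print(gamma, post.round(3), round(kl_divergence(post, prior), 3))
```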
Instead of studying information constraints in hypothesis space governing the update from prior $\pi(\theta)$ to posterior $\pi_{D}(\theta)$, we could also directly study information constraints in the output space governing the update between the prior and posterior predictive distributions $p(y|x)=\sum_{\theta}\pi(\theta)p(y|x,\theta)$ and $p_{D}(y|x)=\sum_{\theta}\pi_{D}(\theta)p(y|x,\theta)$. If the update from $\pi(\theta)$ to $\pi_{D}(\theta)$ is bound by $\operatorname{\text{D}_{\text{KL}}}(\pi_{D}(\theta)||\pi(\theta))\leq C_{1}$, then the update from $p(y|x)$ to $p_{D}(y|x)$ will be bound by $\operatorname{\text{D}_{\text{KL}}}(p_{D}(y|x)||p(y|x))\leq C_{2}$. This suggests that one could try to use the $\operatorname{\text{D}_{\text{KL}}}$ in output space directly as a regularizer. Instead of limiting our search for distributions $p(y|x)$ by imposing a prior in $\theta$-space, we limit our search for $p(y|x)$ directly through a prior $p(y)$ for some suitable $C_{2}$. Such output regularization techniques have indeed been proposed in the literature. In an unregularized classifier, for example, the probabilities assigned to incorrect class labels are often pushed close to zero, such that $p_{\theta}(y|x)$ collapses to a delta function over the correct class label, which is a sign of overfitting szegedy2016rethinking . To avoid this, it has been suggested pereyra2017regularizing to view the probabilities assigned to incorrect labels as knowledge the learner has extracted from the dataset, which can be achieved by encouraging the learner to produce output distributions that balance high entropy and minimal loss: $\theta^{*}=\operatorname*{arg\,min}_{\theta}\sum\altmathcal{L}(x,y)-\frac{1}{\beta}H(p_{\theta}(y|x)),$ (4) where $x,y$ are training data, $\altmathcal{L}(x,y)$ is an error function and $H(p)=-\sum_{x\in\altmathcal{X}}p(x)\log p(x)$ is the entropy of $p$. This technique introduced by Pereyra et al. (2017) pereyra2017regularizing has immediate connections to our approach through the following observation: adding the $\operatorname{\text{D}_{\text{KL}}}$ between the agent's posterior $p_{\theta}(y|x)$ and prior $p(y)$ recovers the confidence penalty if the agent's prior policy is uniform. A similar idea has also been suggested in the context of using label smoothing as an output regularization technique muller2019does . In reinforcement learning, encouraging the agent to learn policies with high entropy is a widely applied technique known as maximum entropy reinforcement learning (RL) haarnoja2017reinforcement . Maximum entropy RL typically penalizes deviation from a fixed uniformly distributed prior to promote exploration, but in a more general setting we can discourage deviation from an arbitrary prior policy by optimizing for $\max_{p}\mathbb{E}_{p}\left[\sum_{t=0}^{\infty}\gamma^{t}\left(r(x_{t},a_{t})-\frac{1}{\beta}\log\frac{p(a_{t}|x_{t})}{p(a)}\right)\right],$ (5) where $\beta$ trades off between reward and entropy, such that $\beta\rightarrow\infty$ recovers the standard RL value function and $\beta\rightarrow 0$ recovers the value function under a random policy. While the entropy term is often regarded as a simple method to encourage exploration, we could similarly regard it as an information constraint for regularization to prevent overfitting, as in the case of supervised learning.
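As a concrete example of such output-space regularization, the following sketch (a toy illustration, not the authors' implementation) combines a cross-entropy loss with a $\operatorname{\text{D}_{\text{KL}}}$ term towards a prior $p(y)$; with a uniform prior this reduces, up to an additive constant, to the confidence penalty of equation (4):

```python
# A sketch (not the authors' implementation) of the output-space regularizer
# discussed above: cross-entropy plus (1/beta) * D_KL between the predicted class
# distribution and a prior p(y); a uniform prior recovers the confidence penalty
# of Pereyra et al. (2017) up to an additive constant.
import numpy as np


def kl_regularized_loss(probs, labels, prior, beta):
    """probs: (N, C) predicted class probabilities, labels: (N,) int class ids."""
    eps = 1e-12
    ce = -np.mean(np.log(probs[np.arange(len(labels)), labels] + eps))
    kl = np.mean(np.sum(probs * np.log((probs + eps) / (prior + eps)), axis=1))
    return ce + kl / beta


probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
prior = np.full(3, 1.0 / 3.0)
print(kl_regularized_loss(probs, labels, prior, beta=2.0))
```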
## 3 Within Task Specialization in Hierarchical Multi-Agent Policies

| Learning Setup | Utility | $x$ | $m$ | $y$ |
| --- | --- | --- | --- | --- |
| Supervised Learning | neg. MSE / XEnt | data | expert index | label |
| Unsupervised Learning | log-likelihood | data | model index | parameters |
| Reinforcement Learning | reward | state | policy index | action |

Table 1: Our method exhibits flexibility in that we can address diverse problems by defining different utility functions and expert networks. Note that in the Meta-Learning setup $x$ is a dataset $D$ defining a task and $m$ is an expert over tasks. To assign datasets we compute feature representations $h(D)$, as we illustrate in Figure 13 in the Appendix.

In the following we introduce the building blocks of our novel gradient-based algorithm to learn the components of a hierarchical multi-agent policy with information constraints. In particular, we leverage the hierarchical model introduced earlier to learn a utility-driven partitioning of the state and action spaces. We will demonstrate experimentally how limiting the amount of information each agent can process leads to specialization. First we will show how to transform this principle into a general on-line learning algorithm, and afterwards we will derive applications to supervised, unsupervised, and reinforcement learning. In Table 1 we summarize the different setups in supervised, unsupervised and reinforcement learning with the corresponding utility functions.

The model consists of two stages: an expert selection stage followed by an action selection stage. The first stage learns a soft partitioning of the state space and assigns each partition optimally to the experts according to a parametrized stochastic policy $p_{\theta}(m|x)$ with parameters $\theta$, such that under an information-theoretic constraint we can maximize the free energy $\operatorname*{\Delta\text{F}_{\text{par}}}(x,m)$. We start by rewriting Equation (2) as: $\max_{p_{\vartheta}(y|x,m),p_{\theta}(m|x)}\sum_{x,m,y}p(x)p_{\theta}(m|x)p_{\vartheta}(y|x,m)J(x,m,y)$ (6) where we define the objective $J(x,m,y)$ as $J(x,m,y)=\mathbf{U}(x,y)-\frac{1}{\beta_{1}}\log\frac{p_{\theta}(m|x)}{p(m)}-\frac{1}{\beta_{2}}\log\frac{p_{\vartheta}(y|x,m)}{p(y|m)},$ (7) and $\theta,\vartheta$ are the parameters of the selection policy and the expert policies. Note that each expert policy has a distinct set of parameters $\vartheta=\\{\vartheta_{m}\\}_{m}$, but we drop the $m$ index for readability. As outlined in Section 2.1, the optimal prior for an optimal utility-information trade-off is the marginal of the posterior policy, given by $p(y)=\sum_{x}p(x)p(y|x)$. It would be prohibitive to compute the prior in each step, as it would require marginalizing over large action and state spaces. Instead, we approximate $p(m)$ and $p(y|m)$ by exponential running mean averages of the posterior policies with momentum terms $\lambda_{1}$ and $\lambda_{2}$: $\displaystyle p_{t+1}(y|m)$ $\displaystyle=$ $\displaystyle\lambda_{1}p_{t}(y|m)+(1-\lambda_{1})p_{\vartheta}(y|x,m)$ (8) $\displaystyle p_{t+1}(m)$ $\displaystyle=$ $\displaystyle\lambda_{2}p_{t}(m)+(1-\lambda_{2})p_{\theta}(m|x).$ (9)

### 3.1 Specialization in Supervised Learning

We set the negative loss as the utility $\operatorname{\mathbf{U}}(y,\hat{y})=-\altmathcal{L}(y,\hat{y})$, where $\hat{y}$ represents the expert's response (predicted class label, regressed value) and $y$ is the true label.
In our implementation we use the cross-entropy loss $\altmathcal{L}(y,\hat{y})=\sum_{i}y_{i}\log\frac{1}{\hat{y}_{i}}=-\sum_{i}y_{i}\log\hat{y}_{i}$ as a performance measure for classification tasks, and for regression the mean squared error $\altmathcal{L}(y,\hat{y})=\sum_{i}||\hat{y}_{i}-y_{i}||^{2}_{2}$ between the prediction $\hat{y}$ and the ground truth values $y$. The selection policy thus optimizes $\max_{\theta}\operatorname{\mathbb{E}}_{p_{\theta}(m|x)}\left[\hat{f}(m,x)-\frac{1}{\beta_{1}}\log\frac{p_{\theta}(m|x)}{p(m)}\right],$ (10) where $\hat{f}(m,x)\coloneqq\mathbb{E}_{p_{\vartheta}(\hat{y}|m,x)}\big{[}-\altmathcal{L}(\hat{y},y)-\frac{1}{\beta_{2}}\log\frac{p_{\vartheta}(\hat{y}|x,m)}{p(\hat{y}|m)}\big{]}$ is the free energy of expert $m$. Note that this introduces a double expectation, which we can estimate by Monte Carlo sampling. The experts thus simply optimize their free energy objective defined by $\max_{\vartheta}\operatorname{\mathbb{E}}_{p_{\vartheta}(\hat{y}|m,x)}\left[-\altmathcal{L}(y,\hat{y})-\frac{1}{\beta_{2}}\log\frac{p_{\vartheta}(\hat{y}|x,m)}{p(\hat{y}|m)}\right].$ (11)

### 3.2 Specialization in Unsupervised Learning

Unsupervised learning barlow1989unsupervised deals with learning from data in the face of missing labels, as in clustering xu2008clustering and density estimation silverman2018density algorithms. The density estimation method we propose is similar to RBF Networks schwenker2001three or Gaussian Mixture Models biernacki2000assessing . In the following we will show how our method can handle unsupervised learning, where we interpret each expert as a Normal-Wishart distribution. We model this by learning a distribution $p(\bm{\mu},\bm{\Lambda}|\bm{\omega},\lambda,\bm{W},\nu)$ over means $\bm{\mu}$ and precision matrices $\bm{\Lambda}$ as Normal-Wishart distributions: $p(\bm{\mu},\bm{\Lambda}|\bm{\omega},\lambda,\bm{W},\nu)=\altmathcal{N}\left(\bm{\mu}|\bm{\omega},(\lambda\bm{\Lambda})^{-1}\right)\altmathcal{W}(\bm{\Lambda}|\bm{W},\nu),$ (12) where $\altmathcal{W}$ is a Wishart distribution, $\altmathcal{N}$ a Normal distribution, $\bm{\omega}\in\mathbb{R}^{D}$ is the mean of the normal distribution, $\bm{W}\in\mathbb{R}^{D\times D}$ is the scale matrix, $\nu>D-1$ is the degree of freedom, $\lambda>0$ is a scaling factor, and $D$ denotes the dimensionality of the data. Sampling is straightforward: we first sample $\bm{\Lambda}$ from a Wishart distribution with parameters $\mathbf{W}$ and $\nu$. Next we sample $\bm{\mu}$ from a multivariate normal distribution with mean $\bm{\omega}$ and covariance $(\lambda\bm{\Lambda})^{-1}$. We assume the data $x$ follows a normal distribution $x\thicksim\altmathcal{N}(\bm{\mu},(\lambda\bm{\Lambda})^{-1})$. The parameters $\nu$, $\lambda$ are hyper-parameters we set beforehand, so we are interested in finding the parameters $\bm{\omega}^{*}$ and $\bm{W}^{*}$ maximizing the likelihood of the data: $\omega^{*},\bm{W}^{*}=\operatorname*{arg\,max}_{\bm{W},\omega}\altmathcal{N}(x|\mu,(\lambda\bm{\Lambda})^{-1})p(\mu,\bm{\Lambda}|\omega,\bm{W},\lambda,\nu)$ (13) Thus, in this setting the expert's task is to find parameters $\omega^{*}$ and $\bm{W}^{*}$ in order to select a tuple $(\bm{\mu},\bm{\Lambda})$ that models the likelihood of the data well.
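The sampling scheme just described can be written down in a few lines. The following sketch (assuming NumPy and SciPy; not code from the paper) draws $\bm{\Lambda}\thicksim\altmathcal{W}(\bm{W},\nu)$ and then $\bm{\mu}\thicksim\altmathcal{N}(\bm{\omega},(\lambda\bm{\Lambda})^{-1})$:

```python
# A minimal sketch (assumption: SciPy available) of the sampling scheme described
# above: draw Lambda ~ Wishart(W, nu), then mu ~ N(omega, (lambda * Lambda)^{-1}).
import numpy as np
from scipy.stats import wishart, multivariate_normal


def sample_normal_wishart(omega, lam, W, nu, rng=None):
    rng = np.random.default_rng(rng)
    Lambda = wishart(df=nu, scale=W).rvs(random_state=rng)      # precision matrix
    cov = np.linalg.inv(lam * Lambda)
    mu = multivariate_normal(mean=omega, cov=cov).rvs(random_state=rng)
    return mu, Lambda


omega = np.zeros(2)
W = np.eye(2)
mu, Lam = sample_normal_wishart(omega, lam=1.0, W=W, nu=3.0)    # nu > D - 1
print(mu, Lam)
```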
The objective of the selector is to assign data to the experts that not only have a set of parameters that yield high likelihood on the assigned data, but also have low statistical complexity as measured by the $\operatorname{\text{D}_{\text{KL}}}$ between the expert's posterior and prior distributions. We can now define the free energy difference for each expert as $\hat{f}(x,m)=\mathbb{E}_{p_{\vartheta}(\bm{\mu},\bm{\Lambda}|m)}\left[\ell(x|\bm{\mu},(\lambda\bm{\Lambda})^{-1})-\frac{1}{\beta_{2}}\operatorname{\text{D}_{\text{KL}}}(p(\bm{\mu},\bm{\Lambda})||p_{0}(\bm{\omega}_{0},\bm{\Lambda}_{0}))\right],$ (14) where $p(\bm{\mu},\bm{\Lambda})$ is the expert's posterior Normal-Wishart distribution over the parameters $\bm{\mu}$ and $\bm{\Lambda}$, $p_{0}(\bm{\omega}_{0},\bm{\Lambda}_{0})$ is the expert's prior, and $\ell(x|\bm{\mu},(\lambda\bm{\Lambda})^{-1})$ is the Gaussian log-likelihood $\ell(x|\bm{\mu},(\lambda\bm{\Lambda})^{-1})=-\frac{1}{2}\left[\log(|(\lambda\bm{\Lambda})^{-1}|)+(x-\bm{\mu})^{T}(\lambda\bm{\Lambda})(x-\bm{\mu})+D\log(2\pi)\right]$ (15) of a data point $x$ given the distribution parameters $\bm{\mu},(\lambda\bm{\Lambda})^{-1}$. This serves as the basis for the selector's task of assigning data to the expert with maximum free energy by optimizing $\max_{\theta}\operatorname{\mathbb{E}}_{p_{\theta}(m|x)}\left[\hat{f}(x,m)-\frac{1}{\beta_{1}}\log\frac{p_{\theta}(m|x)}{p(m)}\right].$ (16) We can compute the $\operatorname{\text{D}_{\text{KL}}}$ between two Normal-Wishart distributions $p$ and $q$ as $\begin{split}\operatorname{\text{D}_{\text{KL}}}\left[p(\bm{\mu},\bm{\Lambda})\|q(\bm{\mu},\bm{\Lambda})\right]=\frac{\lambda_{q}}{2}\left(\bm{\mu}_{q}-\bm{\mu}_{p}\right)^{\top}\nu_{p}\mathbf{W}_{p}\left(\bm{\mu}_{q}-\bm{\mu}_{p}\right)-\\\ \frac{\nu_{q}}{2}\log|\bm{W}_{q}^{-1}\bm{W}_{p}|+\frac{\nu_{p}}{2}(\text{tr}(\bm{W}_{q}^{-1}\bm{W}_{p})-D)+C,\end{split}$ (17) where $C$ is a term that does not depend on the parameters we optimize, so we can omit it, as we are only interested in relative changes in the $\operatorname{\text{D}_{\text{KL}}}$ caused by changes to $\bm{W}$ and $\bm{\omega}$ (see Appendix B for details on the derivation).

### 3.3 Specialization in RL Agents

In reinforcement learning we model sequential decision problems by defining a Markov Decision Process (MDP) as a tuple $(\altmathcal{S},\altmathcal{A},P,r)$, where $\altmathcal{S}$ is the set of states, $\altmathcal{A}$ the set of actions, $P:\altmathcal{S}\times\altmathcal{A}\times\altmathcal{S}\rightarrow[0,1]$ is the transition probability, and $r:\altmathcal{S}\times\altmathcal{A}\rightarrow\mathbb{R}$ is a reward function. The aim is to find the parameter $\theta^{*}=\operatorname*{arg\,max}_{\theta}J(p_{\theta})$ of a policy $p_{\theta}$ that maximizes the expected discounted reward $J(p_{\theta})=\mathbb{E}_{p_{\theta}}\left[\sum_{t=0}^{T}\gamma^{t}r(x_{t},a_{t})\right]$. In case of an infinite horizon, we have $T\rightarrow\infty$. We define $r(\tau)=\sum_{t=0}^{T}\gamma^{t}r(x_{t},a_{t})$ as the cumulative reward of trajectory $\tau=\\{(x_{t},a_{t})\\}_{t=0}^{T}$, where we generate the trajectory according to the policy $p(a_{t}|x_{t})$ and the environmental dynamics $P(x_{t+1}|x_{t},a_{t})$. Reinforcement learning Sutton2018 models assume that an agent interacts with an environment over a number of discrete time steps $t$.
At each time step $t$, the agent finds itself in a state $x_{t}$ and selects an action $a_{t}$ according to the policy $p(a_{t}|x_{t})$. In return, the environment transitions to the next state $x_{t+1}$ and generates a scalar reward $r_{t}$. Here, we consider policy gradient methods Sutton2000 , which are a popular choice to tackle continuous reinforcement learning problems. The main idea is to directly manipulate the parameters $\theta$ of the policy in order to maximize the objective $J(p_{\theta})$ by taking steps in the direction of the gradient $\nabla_{\theta}J(p_{\theta})$. In the following we will derive our algorithm for specialization in hierarchical reinforcement learning agents. Note that in the reinforcement learning setup the reward function $r(x,a)$ defines the utility $\mathbf{U}(x,a)$. As in Equation (5), maximum entropy RL (see, e.g., Haarnoja et al. (2017) haarnoja2017reinforcement ) penalizes deviation from a fixed uniformly distributed prior, but in a more general setting we can discourage deviation from an arbitrary prior policy by optimizing for: $\max_{p}\mathbb{E}_{p}\left[\sum_{t=0}^{T}\gamma^{t}\left(r(x_{t},a_{t})-\frac{1}{\beta}\log\frac{p(a_{t}|x_{t})}{p(a)}\right)\right],$ (18) where $\beta$ trades off between reward and entropy, such that $\beta\rightarrow\infty$ recovers the standard RL value function and $\beta\rightarrow 0$ recovers the value function under a random policy. To optimize the objective (18) we define two separate kinds of value functions: $V_{\phi}$ for the selector and one value function $V_{\varphi}$ for each expert. Thus, each expert is an actor-critic with separate actor and critic networks. Similarly, the selector has an actor-critic architecture, where the actor network selects experts and the critic learns to predict the expected free energy of the experts depending on a state variable. The selector's policy is represented by $p_{\theta}$, while each expert's policy is represented by a distribution $p_{\vartheta}$.

#### 3.3.1 Value Functions

In standard reinforcement learning the discounted reward is defined as $R_{t}=\sum_{l=0}^{T}\gamma^{l}r(x_{t+l},a_{t+l}),$ (19) which is usually learned through a parameterized value function $V_{\psi}$ by regressing $\psi^{*}=\operatorname*{arg\,min}_{\psi}\frac{1}{|\altmathcal{D}|T}\sum_{\tau\in\altmathcal{D}}\sum_{t=0}^{T}(V_{\psi}(x_{t})-R_{t})^{2}.$ (20) Here $\psi$ are some arbitrary parameters of the value representation, $V_{\psi}(x_{t})$ is the predicted value estimate for state $x_{t}$, and $\altmathcal{D}$ is a set of trajectories $\tau$ up to horizon $T$ collected by roll-outs of the policies. Similar to the standard discounted reward $R_{t}$, we can now define a discounted free energy $F_{t}$ as $F_{t}=\sum_{l=0}^{T}\gamma^{l}f(x_{t+l},m_{t+l},a_{t+l}),$ (21) where $f(x,m,a)=r(x,a)-\frac{1}{\beta_{2}}\log\frac{p_{\vartheta}(a|x,m)}{p(a|m)}$. Accordingly, we can learn a value function $V_{\varphi}$ for each expert by parameterizing the value function with a neural network and performing regression on $F_{t}$. Similarly, we can define a discounted free energy $\bar{F}_{t}$ for the selector $\bar{F}_{t}=\sum_{l=0}^{T}\gamma^{l}\bar{f}(x_{t+l},m_{t+l}),$ (22) with $\bar{f}(x,m)=\mathbb{E}_{p_{\vartheta}(a|x,m)}[r(x,a)-\frac{1}{\beta_{2}}\log\frac{p(a|x,m)}{p(a|m)}]$ that is learned through the selector's value function $V_{\phi}$ by regressing $\bar{F}_{t}$.
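For a single roll-out of one expert, the discounted free energy of equation (21) can be computed backwards over the trajectory. The following sketch (not the authors' code; array names are illustrative) returns the targets $F_{t}$ used to regress the expert critic $V_{\varphi}$:

```python
# A sketch (not the authors' code) of the discounted free energy of equation (21):
# KL-penalized returns computed backwards over a single trajectory of one expert m.
import numpy as np


def discounted_free_energy(rewards, logp_expert, logp_prior, beta2, gamma):
    """rewards, logp_expert, logp_prior: arrays of length T for one roll-out.

    logp_expert[t] = log p(a_t | x_t, m), logp_prior[t] = log p(a_t | m).
    Returns F_t for every t, the regression target of the expert critic V_varphi.
    """
    f = rewards - (logp_expert - logp_prior) / beta2   # per-step free energy f(x, m, a)
    F = np.zeros_like(f)
    running = 0.0
    for t in reversed(range(len(f))):                  # backwards accumulation
        running = f[t] + gamma * running
        F[t] = running
    return F


rewards = np.array([1.0, 0.0, 1.0])
logp_expert = np.log(np.array([0.9, 0.5, 0.8]))
logp_prior = np.log(np.array([0.5, 0.5, 0.5]))
print(discounted_free_energy(rewards, logp_expert, logp_prior, beta2=5.0, gamma=0.99))
```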
#### 3.3.2 Policy Learning

In standard reinforcement learning a common technique to update a parametric policy representation $p_{\omega}(a|x)$ with parameters $\omega$ is to use policy gradients that optimize the cumulative reward $J(\omega)=\mathbb{E}\left[p_{\omega}(a|x)V_{\psi}(x)\right]$ (23) expected under the critic's prediction $V_{\psi}(x)$, by following the gradient $\nabla_{\omega}J(\omega)=\mathbb{E}\left[\nabla_{\omega}\log p_{\omega}(a|x)V_{\psi}(x)\right].$ (24) This policy gradient formulation Sutton2000 is prone to producing high-variance gradients. A common technique to reduce the variance is to formulate the updates using the advantage function instead of the reward arulkumaran2017deep . The advantage function $A(x_{t},a_{t})$ is a measure of how well a certain action $a$ performs in a state $x$ compared to the average performance in that state, i.e., $A(a,x)=Q(x,a)-V_{\psi}(x)$. Here, $V(x)$ is the value function and is a measure of how well the agent performs in state $x$, and $Q(x,a)$ is an estimate of the cumulative reward achieved in state $x$ when the agent executes action $a$. Thus, the advantage is an estimate of how advantageous it is to pick $a$ in state $x$ in relation to a baseline performance $V_{\psi}(x)$. Instead of learning the value and the Q function, we can define the advantage function solely based on the critic's estimate $V_{\psi}(x)$ in the following way $A(x_{t},a_{t})=\underbrace{r(x_{t},a_{t})+\gamma V_{\psi}(x_{t+1})}_{\approx Q(x_{t},a_{t})}-V_{\psi}(x_{t}),$ (25) giving the following gradient estimates for the policy parameters $\omega\leftarrow\omega+\frac{\alpha}{|\altmathcal{D}|}\sum_{\tau\in\altmathcal{D}}\sum_{t=0}^{T}\nabla_{\omega}\log p_{\omega}(a_{t}|x_{t})A(x_{t},a_{t}),$ (26) where $\alpha$ is a learning rate and $\altmathcal{D}$ is a set of trajectories $\tau$ produced by the policies. Similar to the standard policy update based on the advantage function, the expert selection stage can be formulated by optimizing the expected advantage $\mathbb{E}_{p(a|x,m)}\left[A_{m}(x,a)\right]$ for expert $m$ with $A_{m}(x_{t},a_{t})=f(x_{t},m,a_{t})+\gamma V_{\varphi}(x_{t+1})-V_{\varphi}(x_{t}).$ Accordingly, we can define an expected advantage function $\mathbb{E}_{p(m|x)}\left[\bar{A}(x,m)\right]$ for the selector with $\bar{A}(x,m)=\mathbb{E}_{p_{\vartheta}(a|x,m)}\left[A_{m}(x,a)\right].$ We estimate the double expectation by Monte Carlo sampling, where in practice we use a single $(x,m,a)$ tuple for $\hat{f}(x,m)$, which enables us to employ our algorithm in an on-line optimization fashion.

Figure 2: Results for three synthetic classification tasks. Our system successfully enables the linear experts to classify their assigned samples correctly by learning a soft partition of the sample space. As expected, the accuracy improves and the information processed by each expert increases as it specializes on the assigned region. We report all quantities presented in this plot on a 20% hold-out set, averaged over a 10-fold cross-validation scheme.

Figure 3: Baseline comparison for the "Circles" task. Partitioning of the $x_{1},x_{2}$-space is shown for a single decision tree (left column) with depths $1,2,4$, a random forest (middle left column) with $1,2,4$ decision trees of depth $2$, and an Adaboost ensemble (middle right column) of $1,2,4$ decision trees of depth $2$. Our method with $1,2,4$ linear experts is shown in the rightmost column.
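Before turning to the experiments, the following toy sketch (not the authors' implementation) makes the advantage estimate of equation (25) and the score-function ascent step of equation (26) concrete for a tabular softmax policy; the state and action counts are placeholders:

```python
# A sketch of the advantage estimate of equation (25) and the resulting
# score-function update of equation (26) for a tabular softmax policy.
import numpy as np


def advantages(rewards, values, gamma):
    """A(x_t, a_t) = r_t + gamma * V(x_{t+1}) - V(x_t); values has length T + 1."""
    return rewards + gamma * values[1:] - values[:-1]


def policy_gradient_step(logits, states, actions, adv, lr):
    """One ascent step on a tabular softmax policy p(a|x) = softmax(logits[x])."""
    grad = np.zeros_like(logits)
    for x, a, A in zip(states, actions, adv):
        p = np.exp(logits[x] - logits[x].max())
        p /= p.sum()
        score = -p
        score[a] += 1.0                       # d log p(a|x) / d logits[x]
        grad[x] += score * A
    return logits + lr * grad / len(states)


values = np.array([0.5, 0.6, 0.4, 0.0])
rewards = np.array([1.0, 0.0, 1.0])
adv = advantages(rewards, values, gamma=0.99)
logits = np.zeros((4, 2))                     # 4 states, 2 actions (toy sizes)
states, actions = np.array([0, 1, 2]), np.array([1, 0, 1])
print(policy_gradient_step(logits, states, actions, adv, lr=0.1))
```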
## 4 Experiments and Results: Within Task Specialization

Figure 4: Here we show the influence of the information processing in the expert selection stage and in the expert stage (measured by $I(X;M)$ and $I(X;Y|M)$) on the classification accuracy for the synthetic dataset “circles” from Figure 2. The grey area is beyond the efficiency frontier of the system. We created this surface by training the system with varying numbers of experts and different settings for $\beta_{1}$ and $\beta_{2}$, and approximate intermediate values by linear interpolation. The right figure shows the information processing of the whole system given by $I(X;Y)$, which we obtain by marginalizing over the experts. To obtain a particular information-processing rate one must choose both parameters correctly, which can be difficult, as we sweep through these parameters in discrete steps. This causes the bumps and irregularly spaced points on the rate-utility grid. Some points are almost impossible to reach even though they are theoretically possible, as the convergence of the learning algorithm may suffer under certain $\beta_{1},\beta_{2}$ configurations.

In the following we show applications to different learning tasks where the overall complexity of the problem exceeds the processing power of the individual (linear) experts. In particular, we look at classification, density estimation, and reinforcement learning. See Appendix A for implementation details. In Section 7.2 we discuss how to choose proper values for $\beta_{1}$ and $\beta_{2}$ and their effects on learning.

### 4.1 Classification with Linear Decision-Makers

When dealing with complex data for classification (or regression) it is often beneficial to combine multiple classifiers (or regressors). In the framework of ensemble learning, for example, multiple classifiers join together to stabilize learning and improve the results Kuncheva2004 , such as Mixture-of-Experts Yuksel2012 and Multiple Classifier Systems Bellmann2018 . The method we propose, when applied to classification problems, falls within this scope of algorithms. The application to regression is an example of local linear regression atkeson1997locally . We evaluate our method on three synthetic datasets for classification – see Figure 2. Our method is able to partition the problem set into subspaces (third column from the left) and fit a linear decision-maker on each subset, achieving acceptable classification accuracy on synthetic non-linear datasets. The results on the ”Half Moons” dataset show an example where the quality of the classification does not improve with the number of experts, essentially because a satisfactory performance can already be achieved with a single expert. We can see that a single expert classifies the data reasonably well and adding more experts improves accuracy only marginally, whereas in the remaining two datasets the accuracy increases with the number of experts. Regarding the information processed by each expert, $I(X;Y|M)$, a single expert on ”Half Moons” achieves a competitive score compared to the system with two and four experts, which in turn results in high accuracy. This also manifests in the selection prior $p(m)$, which for this dataset shows a non-uniform division of labor between the experts. In contrast to this, the results on ”Circles” and ”Blobs” show how adding more experts is beneficial if the underlying dataset has a highly non-linear structure.
In both settings the information processed by a single expert is close to zero bits and classification accuracy is at chance level. Adding more experts allows for specialization and thus increased processing power $I(X;Y|M)$, which in turn achieves higher classification accuracy.

Figure 5: Here we show density estimation results on an artificial dataset. We sampled the data from four bivariate Gaussian distributions ($\mu=\{[-1,-1],[-1,1],[1,1],[1,-1]\}$ and $\Sigma=0.15I$) and fit experts on the data. As a reference we show the density recovered by Gaussian kernel density estimation. Blue indicates high data likelihood. We are able to recover this solution with four experts, but our method allows for additional flexibility by setting the number of experts. As we will show in Section 5, this abstraction gives rise to meta-learning. We set $\beta_{1}=20$ and $\beta_{2}=1.0$.

In Figure 3 we compare the performance on the ”Circles” dataset to different baselines constructed from decision tree learning: a single decision tree with varying depth, multiple decision trees as part of a random forest, and multiple decision trees within Adaboost. In Figure 4 we report how information processing in the selection and the action stage influences classification accuracy. We can see that under a certain threshold the accuracy is random at best and increases with processing power, which is not surprising. Additionally, we can see that a high selection processing power compensates for low processing power in the decision-makers up to a certain degree. If the processing power of the experts is too low, adding more experts does not increase the system's performance indefinitely, as it becomes harder for the selection stage to pick a certain expert. This happens because the experts can only specialize on a small subspace of the task due to their low information-processing capabilities.

### 4.2 Unsupervised Learning

We report unsupervised learning results in Figure 5, where we show how the system deals with unlabeled data. In this experiment the synthetic dataset contains four clusters, and our algorithm is able to perform the clustering as we add more and more experts to partition the state space further. If we provide more experts (in this case 8) than there are clusters (in this case 4), the selector neglects the additional experts and only assigns data to four of them. This indicates that the optimization process aims to find the optimal number of experts in this case.

### 4.3 Reinforcement Learning with Linear Decision-Makers

Figure 6: Results for the inverted double pendulum problem. The upper row gives the episodic reward for different system configurations. We show the performance of a system with one linear expert and with five linear experts, and compare it to Trust Region Policy Optimization (TRPO) Schulman2015 (discussed further in Section 7.3). We set $\beta_{1}=25$ and $\beta_{2}=2.5$.

Figure 7: Here we show the pairwise state partition learned by the selection policy on the Cart Pole environment. The task is to keep a pole on a moving cart balanced by applying a control signal $a\in\{-1,+1\}$ which moves the cart to the left or the right. On the diagonal we show the histogram (10 bins) of each state dimension over 100 time steps. Note that the matrix is symmetric.

In the following we show how our hierarchical system is able to solve a continuous reinforcement learning problem using an optimal arrangement of linear control policies.
We evaluate on a task known as Acrobot Sutton1996 , more commonly referred to as the inverted double pendulum. The task is to swing a double-linked pendulum up and keep it balanced as long as possible, where the agent receives a reward of 10 minus a distance penalty between its current state and the goal state. Each episode terminates as soon as the environment reaches a predefined terminal state (hanging downwards) or after 1000 time steps. To balance the pendulum the agent is able to apply a force to the base joint in the range of $a\in[-1,+1]$, thus creating a movement to the left or to the right. This environment poses a non-linear control problem, and thus a single linear controller cannot give an optimal solution. We show how our approach enables a committee of linear experts to solve this non-linear task efficiently. We report the results in Figure 6. We allowed for five experts ($\beta_{2}=2.5$), but our system learns that three experts are sufficient to solve the task. The priors of the experts (lower right figure, each color represents an expert) center on -1, 0, and 1, which correspond to swinging the double pendulum to the left, applying no force, and swinging it to the right. The remaining two experts overlap accordingly. We can see that the average information processing in the five-expert setup decreases, while in the selection stage it increases to $\log 3$. Both indicate that the system has learned an optimal arrangement of three experts and is thus able to achieve maximum reward, eventually catching up to the performance of a non-linear neural network controller trained with TRPO Schulman2015 , which does not have to struggle with the restriction to linear controllers as our algorithm does. Our method successfully learns a partitioning of the double-pendulum state space without any prior information about the system dynamics or the state space. In Figure 7 we show how a model with two linear experts and a selector learns to control the cart-pole problem. Our method recovers the well-known solution in which the pole can be balanced by two linear controllers, where one (dark purple) focuses on keeping the pole upright and the other (dark yellow) on moving the cart such that the first controller can take over.

## 5 Across Task Specialization for Hierarchical Meta-Learning

The methods presented in Section 3 can be easily extended to achieve meta-learning by changing the way the selection mechanism is trained. Instead of assigning the individual states that occur within a task, the selector assigns a complete dataset of a task to an expert. To do so, we must find a feature vector $z(d)$ of the dataset $d$. This feature vector must fulfill the following desiderata: 1) invariance against permutation of the data in $d$, 2) high representational capacity, 3) efficient computability, and 4) constant dimensionality regardless of the sample size $K$. In the following we propose methods to extract such features for image classification, regression, and reinforcement learning problems – see Figure 13. While the experts are trained on the training dataset $D_{\text{meta-train}}$, their performance used to optimize the selection policy is measured on the validation dataset $D_{\text{meta-val}}$. The validation dataset contains previously unseen samples that are similar to those in the training set, thus providing a measure of the generalization of each expert. In effect, the selector operates on complete datasets, while the experts operate on single samples.
### 5.1 Specialization in Meta-Supervised Learning

Algorithm 1: Expert Networks for Supervised Meta-Learning
1: Input: data distribution $p(\mathcal{D})$, number of samples $K$, batch-size $M$, training episodes $N$
2: Hyper-parameters: resource parameters $\beta_{1}$, $\beta_{2}$, learning rates $\eta_{\theta}$, $\eta_{\vartheta}$ for selector and experts
3: Initialize parameters $\theta,\vartheta$
4: for $i$ = 0, 1, 2, …, $N$ do
5:  Sample batch of $M$ datasets $D_{i}\sim p(\mathcal{D})$, each consisting of a training dataset $D_{\text{meta-train}}$ and a meta-validation dataset $D_{\text{meta-val}}$ with $2K$ samples each
6:  for $D\in D_{i}$ do
7:   Find latent embedding $z(D_{\text{meta-train}})$
8:   Select expert $m\sim p_{\theta}(m|z(D_{\text{meta-train}}))$
9:   Compute $\hat{f}(m,D_{\text{meta-val}})$
10:  end for
11:  Update selection parameters $\theta$ with $\hat{f}(m,D_{\text{meta-val}})$
12:  Update autoencoder with positive samples in $D_{i}$
13:  Update experts $m$ with assigned $D_{\text{meta-train}}$
14: end for
15: return $\theta$, $\vartheta$

In a supervised learning task we are usually interested in a dataset consisting of multiple input and output pairs $D=\{(x_{i},y_{i})\}^{N}_{i=1}$, and the learner's task is to find a function $f(x)$ that maps from input to output, for example through a deep neural network. To do this, we split the dataset into training and test sets, fit a set of parameters $\theta$ on the training data, and evaluate on the test data using the learned function $f_{\theta}(x)$. In meta-learning, we instead work with meta-datasets $\mathcal{D}$, each containing regular datasets split into training and test sets. We thus have different sets for meta-training, meta-validation, and meta-test, i.e., $\mathcal{D}=\{D_{\text{meta-train}},D_{\text{meta-val}},D_{\text{meta-test}}\}$. The goal is to train a learning procedure (the meta-learner) that can take as input one of its training sets $D_{\text{meta-train}}$ and produce a classifier (the learner) that achieves low prediction error on its corresponding test set $D_{\text{meta-test}}$. The meta-learner is then updated using a performance measure based on the learner's performance on $D_{\text{meta-val}}$, compare Algorithm 1. This may not always be the case, but our work (among others, e.g., Finn et al. (2017) Finn2017model ) follows this paradigm, the rationale being that the meta-learner is trained such that it implicitly optimizes the base learner's generalization capabilities. The dataset-generating distribution $p(\mathcal{D})$ is unknown to the learner but remains fixed over the course of training. The case where $p(\mathcal{D})$ changes is studied in the field of (meta) continual learning, but is not the focus of this work. For image classification, we propose to pass the images with positive labels in the dataset through a convolutional autoencoder and use the outputs of the bottleneck layer. Convolutional autoencoders are generative models that learn to reconstruct their inputs by minimizing the Mean-Squared-Error between the input and the reconstructed image (see e.g., Vincent et al. vincent2008extracting ). In this way we get similar embeddings $z(d)$ for similar inputs belonging to the same class. We compute the latent representation for each positive sample in $d$ and pass it through a pooling function $h(z(d))$ to find a single embedding for the complete dataset – see Figure 8 for an overview of our proposed model.
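A minimal sketch of this embedding step is given below; the `encode` function stands in for the bottleneck of the convolutional autoencoder (replaced here by a random projection purely for illustration), and the element-wise pooling over samples yields the permutation-invariant dataset feature.

```python
import numpy as np

def dataset_embedding(positive_images, encode, pool=np.max):
    """Permutation-invariant embedding of a training set D_meta-train:
    encode each positive sample to z_i, then pool element-wise over samples.
    `encode` is a stand-in for the autoencoder bottleneck (not shown here)."""
    Z = np.stack([encode(img) for img in positive_images])   # (K, latent_dim)
    return pool(Z, axis=0)                                    # (latent_dim,)

# toy usage with a hypothetical random-projection "encoder"
rng = np.random.default_rng(0)
P = rng.normal(size=(28 * 28, 16))
encode = lambda img: img.reshape(-1) @ P
K_shots = [rng.random((28, 28)) for _ in range(5)]
z_d = dataset_embedding(K_shots, encode)
print(z_d.shape)   # (16,)
```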
We found that max pooling yields the best results, though one could use other pooling functions, such as mean or min pooling. Yao et al. (2019) yao2019hierarchically propose a similar feature set. For regression, we transform the training data into a feature vector $z(d)$ by binning the points into $N$ bins according to their $x$ value and collecting the corresponding $y$ values. If more than one point falls into the same bin we average the $y$ values, thus providing invariance against the order of the data in $D_{\text{meta-train}}$. We use this feature vector to assign each dataset to an expert according to $p_{\theta}(m|h(z(D_{\text{meta-train}})))$, which we abbreviate to $p_{\theta}(m|D_{\text{meta-train}})$.

Figure 8: Left: The selector assigns the new input encoding to one of the three experts $\theta_{0}$, $\theta_{1}$ or $\theta_{2}$, depending on the similarity of the input to previous inputs seen by the experts. Right: Our proposed method consists of three main stages. First, we feed the training dataset $D_{\text{train}}$ through a convolutional autoencoder to find a latent representation $z(d_{i})$ for each $d_{i}\in D_{\text{train}}$, which we get by flattening the preceding convolutional layer (“flattening layer”). We apply a pooling function to the resulting set of image embeddings, which serves as input to the selection network.

In contrast to the objective defined by Equation (10), the selection policy now selects experts based on their free energy computed over the datasets $D_{\text{meta-val}}$, while the selection policy depends on the training datasets $D_{\text{meta-train}}$:
$\max_{\theta}\operatorname{\mathbb{E}}_{p_{\theta}(m|D_{\text{meta-train}})}\left[\hat{f}(m,D_{\text{meta-val}})-\frac{1}{\beta_{1}}\log\frac{p_{\theta}(m|D_{\text{meta-train}})}{p(m)}\right],$ (27)
where $\hat{f}(m,D_{\text{meta-val}})\coloneqq\mathbb{E}_{p_{\vartheta}(\hat{y}|m,x)}\big[-\mathcal{L}(\hat{y},y)-\frac{1}{\beta_{2}}\log\frac{p_{\vartheta}(\hat{y}|x,m)}{p(\hat{y}|m)}\big]$ is the free energy of expert $m$ on dataset $D_{\text{meta-val}}$, $\mathcal{L}(\hat{y},y)$ is a loss function, and $(x,y)\in D_{\text{meta-val}}$. The experts optimize their free-energy objective on the training dataset $D_{\text{meta-train}}$, defined by
$\max_{\vartheta}\operatorname{\mathbb{E}}_{p_{\vartheta}(\hat{y}|m,x)}\left[-\mathcal{L}(\hat{y},y)-\frac{1}{\beta_{2}}\log\frac{p_{\vartheta}(\hat{y}|x,m)}{p(\hat{y}|m)}\right],$ (28)
where $(x,y)\in D_{\text{meta-train}}$.
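The binning-based regression feature described above can be sketched as follows; the number of bins and the assumed $x$ range are illustrative choices rather than the values used in our experiments.

```python
import numpy as np

def regression_dataset_feature(xs, ys, n_bins=10, x_range=(-5.0, 5.0)):
    """Order-invariant feature z(d) for a regression task: bin the K points by their
    x value and average the y values falling into the same bin (empty bins stay 0)."""
    z = np.zeros(n_bins)
    counts = np.zeros(n_bins)
    edges = np.linspace(x_range[0], x_range[1], n_bins + 1)
    idx = np.clip(np.digitize(xs, edges) - 1, 0, n_bins - 1)
    for i, y in zip(idx, ys):
        z[i] += y
        counts[i] += 1
    nonempty = counts > 0
    z[nonempty] /= counts[nonempty]
    return z

# toy usage on a K=10 sample of a sine task y = a*sin(x + b)
rng = np.random.default_rng(0)
a, b = rng.uniform(0.1, 5.0), rng.uniform(0.0, 2 * np.pi)
xs = rng.uniform(-5.0, 5.0, size=10)
ys = a * np.sin(xs + b)
print(regression_dataset_feature(xs, ys))
```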
### 5.2 Specialization in Meta-Reinforcement Learning

Algorithm 2: Expert Networks for Meta-Reinforcement Learning
1: Input: environment distributions $p(T)$ and $p(T^{\prime})$, number of roll-outs $K$, batch-size $M$, training episodes $N$, number of tuples $L$ used for expert selection
2: Hyper-parameters: resource parameters $\beta_{1}$, $\beta_{2}$, learning rates $\eta_{\theta}$, $\eta_{\vartheta}$ for selector and experts
3: Initialize parameters $\theta,\vartheta$
4: for $i$ = 1, 2, 3, …, $N$ do
5:  Sample batch of $M$ environments $E_{i}^{\text{meta-train}}\sim p(T)$
6:  for $E\in E_{i}^{\text{meta-train}}$ do
7:   for $k$ = 1, 2, 3, …, $K$ do
8:    Collect $\tau=\{(x_{t},a_{t},r_{t},t)\}_{t=1}^{L}$ tuples by following a random expert
9:    Select expert $m\sim p_{\theta}(m|\tau)$ with RNN policy
10:   Collect trajectory $\tau_{k}=\{(x_{t},a_{t},r_{t},t)\}_{t=L}^{T}$ by following $p_{\vartheta}(a|x,m)$
11:   end for
12:   Compute $F_{t}=\sum_{l=0}^{T}\gamma^{l}f(x_{t+l},m_{t+l},a_{t+l})$ for trajectories $\tau$,
13:   where $f(x,m,a)=r_{\text{meta-train}}(x,a)-\frac{1}{\beta_{2}}\log\frac{p_{\vartheta}(a|x,m)}{p(a|m)}$
14:  end for
15:  Compute $\bar{F}_{t}=\sum_{l=0}^{T}\gamma^{l}\bar{f}(x_{t+l},m_{t+l})$ with
16:  $\bar{f}(x,m)=\mathbb{E}_{p_{\vartheta}(a|x,m)}\left[r_{\text{meta-train}}(x,a)-\frac{1}{\beta_{2}}\log\frac{p(a|x,m)}{p(a|m)}\right]$
17:  Update selection parameters $\theta$ with $\bar{F}$ collected in batch $i$
18:  Update experts $m$ with roll-outs collected in batch $i$
19: end for
20: return $\theta$, $\vartheta$

In meta reinforcement learning we extend the problem to a set of tasks $t_{i}\in T$, where an MDP $t_{i}=(\mathcal{S},\mathcal{A},P_{i},r_{i})$ defines each task $t_{i}$. We are now interested in finding a set of policies $\Theta$ that maximizes the average cumulative reward across all tasks in $T$ and generalizes well to new tasks sampled from a different set of tasks $T^{\prime}$. In this setting we use a dynamic recurrent neural network (RNN) with independent recurrent units li2018independently to classify trajectories and a second RNN to learn the value function (see Appendix A for details). We feed the RNN with $L$ $(x_{t},a_{t},r_{t},x_{t+1})$ tuples to describe the Markov Decision Process underlying the task. At $t=0$ we sample the expert according to the learned prior distribution $p(m)$, as there is no other information available until we have collected $L$ samples, at which point an expert is selected – see Algorithm 2. Lan et al. (2019) lan2019meta propose a similar feature set. Importantly, the expert policies are trained on the meta-training environments, but evaluated on unseen but similar validation environments. In this setting we define the discounted free energy $\bar{F}_{t}$ for the selector as
$\bar{F}_{t}=\sum_{l=0}^{T}\gamma^{l}\bar{f}(x_{t+l},m_{t+l}),$ (29)
with $\bar{f}(x,m)=\mathbb{E}_{p_{\vartheta}(a|x,m)}\left[r_{\text{meta-val}}(x,a)-\frac{1}{\beta_{2}}\log\frac{p(a|x,m)}{p(a|m)}\right]$, where $r_{\text{meta-val}}$ is the reward function defined by a validation environment (see Figure 10 for details).

## 6 Experimental Results: Across Task Specialization and Meta-Learning

In this section we present our experimental results in the meta-learning domain. To show the flexibility of our method we evaluate on regression, classification, and reinforcement learning problems. In regression, we evaluate how our method adapts to changing sine functions; for classification we look at the Omniglot dataset lake2015human . To evaluate on reinforcement learning we introduce a range of versions of the double pendulum task Sutton1996 .
We provide all experimental details such as network architectures and hyper-parameters in Appendix B.

Figure 9: Here we show how the system is able to adapt to new problems as the number of experts increases. The single-expert system is not able to learn the underlying structure of the sine wave, whereas the two-expert system is already able to capture the periodic structure. Adding more experts improves adaptation further, as the results show. Each expert is a shallow neural network with a single hidden layer and an output layer (see Appendix A for details). In the bottom row we show the rate-utility curve describing the trade-off between information processing and expected utility (the transparent area represents one standard deviation), where increasing $I(X;M)$ improves adaptation. To obtain these results we set $\beta_{1}=25$ and $\beta_{2}=1.25$.

### 6.1 Sinusoid Regression

We adopt this task from Finn et al. (2017) Finn2017model . In this $K$-shot problem, each task consists of learning to predict a function of the form $y=a\cdot\sin(x+b)$, with both $a\in[0.1,5]$ and $b\in[0,2\pi]$ chosen uniformly, and the goal of the learner is to find $y$ given $x$ based on only $K$ pairs of $(x,y)$. Given that the underlying function changes in each iteration, it is impossible to solve this problem with a single learner. As a loss function we use the Mean-Squared-Error, and the dataset embedding is described in Algorithm 1. Each expert is a shallow neural network consisting of a single hidden layer connected to an output layer (see Appendix A for details). Our results show that by combining expert networks, we are able to reduce the generalization error iteratively as we add more experts to our system – see Figure 9 for the $K=5$ and $K=10$ settings. In Figure 9 we show how the system is able to capture the underlying problem structure as we add more experts, and in Figure 11 we visualize what the selector's partition of the problem space looks like. In Appendix A, Figure 15, we show additional results and give an overview of our algorithm in Algorithm 1.

### 6.2 Few-Shot Classification

A special case of meta-learning for classification are $K$-shot $N$-way tasks, where a training set consists of $K$ labeled examples of each of the $N$ classes ($K\cdot N$ examples per dataset) and a corresponding test set is also provided. In our study, we focus on the following variation of $K$-shot $N$-way tasks: $2K$ samples ($K$ positive and $K$ negative examples) define a training dataset which the meta-learner must assign to an expert learner that has a specialized policy to classify the $2K$ samples. To evaluate the experts' performance we use a meta-validation set consisting of a different set of $2K$ samples. Note that we draw the negative examples from any of the remaining classes. The Omniglot dataset lake2011one consists of over 1600 characters from 50 alphabets (see Figure 12 for examples). As each character has merely 20 samples, each drawn by a different person, this forms a difficult learning task and is thus often referred to as the ”transposed MNIST” dataset. The Omniglot dataset is a standard meta-learning benchmark dataset Finn2017model ; vinyals2016matching ; ravi2017optimization .
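The episode construction described above can be sketched as follows; `class_to_images` is a hypothetical container mapping class ids to their (20) samples, and loading the Omniglot data itself is not shown.

```python
import numpy as np

def sample_binary_task(class_to_images, K, rng):
    """Build one binary K-shot episode: 2K distinct positive samples of a target class,
    split between D_meta-train and D_meta-val, each half completed with K negatives
    drawn from the remaining classes. Requires 2K <= samples per class."""
    classes = list(class_to_images)
    target = classes[rng.integers(len(classes))]
    others = [c for c in classes if c != target]
    pos_idx = rng.choice(len(class_to_images[target]), size=2 * K, replace=False)

    def episode(idx):
        pos = [(class_to_images[target][i], 1) for i in idx]
        neg = []
        for _ in range(K):
            c = others[rng.integers(len(others))]
            imgs = class_to_images[c]
            neg.append((imgs[rng.integers(len(imgs))], 0))
        return pos + neg

    return episode(pos_idx[:K]), episode(pos_idx[K:])

# toy usage with random "images" (20 samples per class, as in Omniglot)
rng = np.random.default_rng(0)
data = {c: [rng.random((28, 28)) for _ in range(20)] for c in range(30)}
d_train, d_val = sample_binary_task(data, K=5, rng=rng)
print(len(d_train), len(d_val))   # 10 10
```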
Omniglot Few-Shot Results. The Pre-Training, MAML, and Matching Nets columns on the left use a single convolution block (“One Conv. Block”); the two rightmost columns (“Baselines”) use the architectures suggested in the respective studies. All entries are % accuracy.

| K | Pre-Training | MAML | Matching Nets | MAML (Baseline) | Matching Nets (Baseline) |
|---|---|---|---|---|---|
| 1 | 50.6 ($\pm$ 0.03) | 81.2 ($\pm$ 0.03) | 52.7 ($\pm$ 0.05) | 95.2 ($\pm$ 0.03) | 95.0 ($\pm$ 0.01) |
| 5 | 54.1 ($\pm$ 0.09) | 88.0 ($\pm$ 0.01) | 55.3 ($\pm$ 0.04) | 99.0 ($\pm$ 0.01) | 98.7 ($\pm$ 0.01) |
| 10 | 55.8 ($\pm$ 0.02) | 89.2 ($\pm$ 0.01) | 60.9 ($\pm$ 0.06) | 99.2 ($\pm$ 0.01) | 99.4 ($\pm$ 0.01) |

Our Method, by number of experts (% accuracy and $I(M;X)$ in bits):

| K | 2 experts: % Acc | 2 experts: I(M;X) | 4 experts: % Acc | 4 experts: I(M;X) |
|---|---|---|---|---|
| 1 | 66.4 ($\pm$ 0.02) | 0.99 ($\pm$ 0.01) | 75.8 ($\pm$ 0.02) | 1.96 ($\pm$ 0.01) |
| 5 | 67.3 ($\pm$ 0.01) | 0.93 ($\pm$ 0.01) | 75.5 ($\pm$ 0.01) | 1.95 ($\pm$ 0.10) |
| 10 | 76.2 ($\pm$ 0.04) | 0.95 ($\pm$ 0.30) | 86.7 ($\pm$ 0.01) | 1.90 ($\pm$ 0.03) |

| K | 8 experts: % Acc | 8 experts: I(M;X) | 16 experts: % Acc | 16 experts: I(M;X) |
|---|---|---|---|---|
| 1 | 77.3 ($\pm$ 0.01) | 2.5 ($\pm$ 0.02) | 82.8 ($\pm$ 0.01) | 3.2 ($\pm$ 0.03) |
| 5 | 78.4 ($\pm$ 0.01) | 2.7 ($\pm$ 0.01) | 85.2 ($\pm$ 0.01) | 3.3 ($\pm$ 0.02) |
| 10 | 90.1 ($\pm$ 0.01) | 2.8 ($\pm$ 0.02) | 95.9 ($\pm$ 0.01) | 3.1 ($\pm$ 0.02) |

Table 2: Classification accuracy after 10 gradient steps on the validation data. Adding experts consistently improves performance; the best results are obtained with an ensemble of 16 experts. Pre-training refers to a single expert system trained on the complete dataset. Our method outperforms the pre-training, Matching Nets, and the MAML baseline (see Appendix A for experimental details) when the network architecture is reduced to a single convolution block, which corresponds to our expert network architecture. Using the architectures suggested by the respective studies, we achieve classification accuracy $\geq 95\%$. In this experiment we set $\beta_{1}=20.0$ and $\beta_{2}=2.5$ for 2 and 4 experts, and $\beta_{1}=50.0$ and $\beta_{2}=1.25$ for 8 and 16 experts.

Task Distribution:

| Parameter | $T$ | $T^{\prime}$ |
|---|---|---|
| Distance Penalty | $[10^{-3},10^{-1}]$ | $[10^{-3},10^{-2}]$ |
| Goal Position | [0.3, 0.4] | [0, 3] |
| Start Position | [-0.15, 0.15] | [-0.25, 0.25] |
| Motor Torques | $[0,5]$ | $[0,3]$ |
| Motor Actuation | [185, 215] | [175, 225] |
| Inverted Control | $p=0.5$ | $p=0.5$ |
| Gravity | [0.01, 4.9] | [4.9, 9.8] |

Figure 10: We sample all parameters uniformly from the specified range for each environment, where we use $T$ for training and $T^{\prime}$ for meta-evaluation. During training we draw environments from $T$, but evaluate on a different environment also drawn from $T$ to measure generalization. The agent achieves higher reward when adding more experts, while the information processing of the selection and of the expert stage increases, indicating that the added experts specialize successfully. We achieve comparable results to MAML Finn2017model , Proximal Meta-Policy Search (Pro-MPS) rothfuss2018promp , and GrBAL nagabandi2018learning . Shaded areas and error bars represent one standard deviation. See Appendix A for experimental details.

We consider three experimental setups: 1) how does a learner with only a single hidden layer perform when trained naïvely, compared to when trained with sophisticated methods such as MAML Finn2017model and Matching Nets vinyals2016matching as baselines? 2) does the system benefit from adding more experts and, if so, at what rate? and 3) how does our method compare to the aforementioned algorithms? Regarding 1), we note that introducing constraints by reducing the representational power of the models does not facilitate specialization as explicit information-processing constraints do.
In the bottom row of Figure 9 we address question 2). We can interpret this curve as the rate-utility curve showing the trade-off between information processing and expected utility (the transparent area represents one standard deviation), where increasing $I(X;M)$ improves adaptation. The improvement grows logarithmically, which is consistent with what rate-distortion theory would suggest. In Table 2 we present empirical results addressing question 3). We train the learner on a subset of the dataset ($\approx 80\%$, $\approx$ 1300 classes) and evaluate on the remaining $\approx 300$ classes, thus investigating the ability to generalize to unseen data. In each round we build the datasets $D_{\text{meta-train}}$ and $D_{\text{meta-test}}$ by selecting a target class $c_{t}$ and sampling $K$ positive and $K$ negative samples. To generate negative samples we draw $K$ images randomly out of the remaining $N-1$ classes. We present the selection network with the feature representation of the $K$ positive training samples (see Figure 8), but evaluate the experts' performance on the $2K$ test samples in $D_{\text{meta-test}}$. We can interpret the free energy of the experts in this setting as a measure of how well an expert is able to generalize to new samples of the target class. Using this optimization scheme, we train the expert networks to become experts in recognizing a subset of classes. We train the experts using the $2K$ samples from the training dataset that the selection network assigned to an expert – see Table 2 for results. We followed the design of Vinyals et al. vinyals2016matching for our experts, but reduced the number of blocks to one to introduce effects of resource limitation, whereas the original study used four blocks. The single convolutional block consists of 32 3$\times$3 filters with strided convolutions, followed by a batch normalization layer and a ReLU non-linearity. The output is fed into a softmax layer giving a distribution over the classes (see also Appendix A). This reduced representational capacity drives specialization, as an expert cannot reliably classify all characters from the training data, but a subset is feasible (see also Figure 12). To evaluate our method we compare different ensemble sizes against three baselines: pre-training, MAML Finn2017model , and Matching Networks vinyals2016matching . In the pre-training setting we train a single convolutional neural network on batches drawn from the training data and evaluate on validation data, allowing 10 gradient steps for fine-tuning. Note that when using the architecture of 4 blocks as suggested in the original papers vinyals2016matching ; Finn2017model , we are able to achieve $\geq$95% accuracy on the test data with both MAML and Matching Nets, but not in the pre-training setting.
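For reference, a minimal Keras sketch of such a single-conv-block expert is given below; the input resolution and the binary output are assumptions for illustration, while the layer composition and the Adam learning rate of $3\cdot 10^{-4}$ follow the description above and Appendix A.

```python
import tensorflow as tf

def make_expert(input_shape=(28, 28, 1), n_classes=2):
    """Single-conv-block expert: 32 strided 3x3 filters -> batch norm -> ReLU -> softmax."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, kernel_size=3, strides=2, padding="same",
                               input_shape=input_shape),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.ReLU(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

expert = make_expert()
expert.compile(optimizer=tf.keras.optimizers.Adam(3e-4),
               loss="sparse_categorical_crossentropy",
               metrics=["accuracy"])
expert.summary()
```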
### 6.3 Meta Reinforcement Learning

In meta reinforcement learning the goal is to find a policy that performs well over a set of tasks. We create a set of RL tasks by extending the “Inverted Double Pendulum” problem Sutton1996 implemented in OpenAI Gym Brockman2016 by allowing a broad range of task parameters. Each time we create a new environment we sample from a specified distribution, where we modify inertia, motor torques, the reward function, and the goal position, and possibly invert the control signal – see Figure 10 for details. We create one environment per training episode; during a single training episode the parameters remain unchanged. We measure the free energy of an expert on a second task with parameters also drawn from $T$. To evaluate the agents' meta-learning capabilities we define a second set of tasks $T^{\prime}$ where the parameter distributions are different, providing new but similar reinforcement learning problems. In each episode we sample $M$ environments from $T$ and update the system batch-wise. After training we evaluate on tasks from $T^{\prime}$, thus testing the agent's generalization. We trained the system for 1000 episodes with 64 tasks from $T$ and evaluate for 100 system updates on tasks from $T^{\prime}$. We report the results in Figure 10, where we can see improving performance as we add more experts, and the mutual information characterizing the selection stage indicates that the selector is able to identify suitable experts for the tasks.

Figure 11: Here we show the soft partition found by the selection policy for the sine prediction problem $y=a\cdot\sin(x+b)$, where we sample $a,b$ uniformly at each trial and each color represents an expert. To generate these plots we train a system on $K=1,5$ or $10$, sample $a,b$ and $K$ points, and feed the dataset to the selection policy. We can see that the selection policy becomes increasingly more precise as we provide more points per dataset (denoted by $K$) to the system.

## 7 Discussion

### 7.1 Analyzing the Discovered Meta-Knowledge

Figure 12: Upper row: The system learns to assign characters from the same alphabet to the same expert. This happens without any prior information about the concept of alphabets or any other label. Lower row: Here we report selection results on 15 characters sampled from 3 alphabets. To illustrate how the system operates, the lower left figure shows characters from alphabets that the system has difficulties assigning to experts (here alphabets 0, 36, and 46). In the lower right figure we show characters from alphabets that the selector assigns with high confidence (10, 11, and 12).

In contrast to monolithic approaches that train a single agent as a black box, we can analyze and interpret the meta-knowledge discovered by our hierarchical method. In the following we discuss this in the supervised and reinforcement learning settings. In meta-regression the problem space defined by a set of sine functions $y=a\cdot\sin(x+b)$ is split among the ensemble of expert regressors based on only $K\in\{1,5,10\}$ samples. As expected, the assignment becomes more accurate the more samples we have – see Figure 11, where we report how the selection network partitions the sine task space. The shape of the emerging clusters indicates that the selection is mainly based on the amplitude $a$ of the current sine function, suggesting that from an adaptation point of view it is more efficient to group sine functions by amplitude $a$ rather than by phase $b$. We can also see that one expert specializes on low values of $b$, as it covers the upper region of the $a\times b$ space. The selection network splits this region among multiple experts if we increase the number of experts to 8 or more. We can also analyze the class assignment policy learned by the selector network. The assignment map in Figure 12 suggests that the selector learns to group characters by their alphabet based only on features. The selector's policy spreads different characters from the same alphabet (e.g., alphabets 0, 36, and 46) across multiple experts while assigning similar characters from different alphabets to the same experts.
This specialization gives rise to the meta-learning ability, as the system is able to adapt expert parameters within only a few gradient steps. We generated this plot by passing images of characters through the system (i.e., computing their latent embedding and assigning them to experts) after training is complete, to obtain an overview of the class distribution. For reinforcement learning we demonstrate this idea by looking at how the selection network splits the state space into linearly controllable regions in Figure 7. We can see that one expert receives the states around zero (dark purple) and the other expert sees only states near the boundary (dark yellow). We conclude from this that the purple expert specializes on balancing the pole and the yellow expert specializes on moving the cart such that the other expert can easily balance the pole. This is consistent with the fact that a linear policy can balance the pole when it is close to standing upright.

### 7.2 Critical Issues

A limitation of our method is low sample efficiency in the RL domain. To alleviate this problem one could combine our system with model-based RL methods, which promise to reduce the number of samples the agent needs to see in order to learn efficiently. Another research direction would be to investigate our system's performance in continual adaptation tasks, such as in the work of Yao et al. (2019) yao2019hierarchically , where the agent is continuously exposed to new datasets (e.g., additional classes and samples). The restriction to binary meta-classification tasks is another limitation of our method, which we leave for future work. Another open question remains the tuning of the $\beta$ values. As the utility function can in principle be unbounded whereas the information-processing quantities are obviously bounded, the agent may be faced with learning two values that differ greatly in magnitude. This becomes especially challenging in reinforcement learning scenarios, where a value function of varying magnitude has to be learned. This poses a difficult learning problem, and there have been several proposals to tackle it. A method dubbed ”Pop-Art”, proposed by van Hasselt et al. (2016) vanHasselt2016learning , treats the incoming utility values as a stream of data and normalizes them to a given range. In the reinforcement learning setup we also tried a cooling schedule for $\beta$, as suggested by Fox et al. (2015) fox2016taming . In their work the authors propose to change the weight of the entropy penalty in MaxEnt RL as time progresses, thus encouraging exploration in the beginning and penalizing it the more information an agent has gathered. We did not observe any significant performance gains. The specific value of $\beta$ depends on the scale of the utility function. As the value of $\beta$ strongly influences the outcome of the experiments, it must be chosen with care and comes with the same limitations as any other regularization technique. If it is chosen too small, the regularization term dominates the utility term and the experts are not able to learn anything. On the other hand, if it is set to a large value, the regularization term vanishes, such that no specialization can occur and the selector has a hard time assigning data to experts. To remedy this, there are in principle two ways to choose $\beta$: one is to set an information-processing limit for each expert and then (manually or with some optimization technique) tune $\beta$ such that this constraint is satisfied.
This has the advantage that the value can be interpreted, e.g., ”the experts can process 1 bit of information, i.e., distinguish between two options”. The other way is to run a grid search over a pre-defined range and choose the value that fits best. In this work, we used the second strategy.

### 7.3 Related Work

Investigating information-theoretic cost functions and constraints in learning systems has recently enjoyed increasing interest in machine learning grau2018soft ; galashov2019information ; grover2019uncertainty ; tschannen2019mutual . The method we propose in this study falls into a wider class of algorithms that aim to deal more efficiently with learning and decision-making problems Daniel2012 ; Neumann2013 ; Martius2013 ; Leibfried2015 ; Grau-Moya2016 ; Peng2017 ; Grau-Moya2017 ; Ghosh2017 ; Hihn2018 ; Schach2018 ; Gottwald2019 ; gottwald2019bounded . Applying such constraints to methods for reinforcement learning is often motivated by the aim of stabilizing learning and reducing sample complexity. One such approach is Trust Region Policy Optimization (TRPO) introduced by Schulman et al. (2015) Schulman2015 , where a $\operatorname{\text{D}_{\text{KL}}}$ penalty between the old and the new policy is imposed to limit update steps, providing a theoretical guarantee for monotonic policy improvement. In our approach we define this region by the $\operatorname{\text{D}_{\text{KL}}}$ between the agent's posterior and prior policy, thus allowing the agent to learn this region and to adapt it over time. This basic idea has been extended to meta-learning by rothfuss2018promp , which we compare our method against in the meta-RL experiments. Daniel et al. (2012) follow a similar idea with relative entropy policy search methods Daniel2012 . Their algorithm builds on learning a gating policy that decides which sub-policy to choose. The authors impose a $\operatorname{\text{D}_{\text{KL}}}$ constraint between the data distribution and the next policy to achieve this. Both approaches smooth the updates, as large steps are discouraged. Our method follows the same principle, but we enforce small updates by discouraging deviation from the agent's prior and by limiting the representational power (e.g., by linear decision-makers). The hierarchical structure we employ is related to Mixture of Experts (MoE) models. Jacobs et al. (1991) Jacobs1991 introduced MoE as tree-structured models for complex classification and regression problems, where the underlying approach is the divide-and-conquer paradigm. As in our approach, three main building blocks define MoEs: gates, experts, and a probabilistic weighting to combine expert predictions. Learning proceeds by finding a soft partitioning of the input space and assigning partitions to experts that perform well on them. In this setting, the model response is a sum of the experts' outputs, weighted by how confident the gate is in each expert's opinion. Yuksel et al. (2012) Yuksel2012 provide an extensive overview of recent developments in MoEs. The approach we propose allows learning such models, but also applies to more general decision-making settings such as reinforcement learning. Ghosh et al. (2017) Ghosh2017 recently applied the divide-and-conquer principle to reinforcement learning. They argue that dividing a central policy into sub-policies improves the learning phase by improving sample efficiency. To evaluate this approach they assume _pre-defined_ partitions of the action and state space on which they train local policies.
The information-theoretic constraints during training enforce similarity between the local policies such that a single central policy arises as a weighted combination of all local policies. In contrast, in our approach all posterior expert policies remain close to their priors, thereby minimizing their informational surprise. This mechanism leads to the emergence of specialized policies. In effect, this enforces the local policies to be as diverse as possible. Crucially, in our method the partitioning is not predefined but a result of the optimization process itself. Untangling the underlying structure in control systems usually requires a priori knowledge of the system dynamics, e.g., Abramova2012 ; Randlov2000 ; Yoshimoto2005 . The algorithm proposed by Abramova et al. (2012) Abramova2012 splits the state space of the inverted pendulum into predefined bins and fits a linear control policy to stabilize each bin individually. The authors then suggest controlling the whole system by learning a selection policy over the given linear controllers. In contrast to this, our approach relies only on the reward signal to learn the selection policy and the linear control policies simultaneously. This alone poses a difficult learning problem, as both parts of the system have to adjust to one another on different timescales. Other decentralized approaches (e.g., Allamraju & Chowdhary (2017) allamraju2017communication ) have trained separate decentralized models and fused them into a single model. In contrast, our method learns sub-policies that act on their own. Most other methods for meta-learning, such as the work of Finn2017model and ravi2017optimization , find an initial parametrization of a single learner such that the agent can adapt quickly to new problems. This initialization represents prior knowledge and can be regarded as an abstraction over related tasks; our method takes this idea one step further by finding a possibly disjunct set of such compressed task properties. Another way of thinking of such abstractions by lossy compression is to go from a task-specific posterior to a task-agnostic prior strategy. By having a set of priors, the task-specific information is available more locally than with a single prior, as in MAML Finn2017model and the work of ravi2017optimization . In principle, this can help to adapt within fewer iterations. Thus our method can be seen as the general case of such monolithic meta-learning algorithms. Instead of learning similarities within a problem, we can also try to learn similarities between different problems (e.g., different classification datasets), as described in the work of yao2019hierarchically . There, the partitioning is governed by different tasks, whereas our study focuses on discovering meta-information within the same task family, where the meta-partitioning is determined solely by the optimization process and can thus potentially discover unknown dynamics and relations within a task family.

## 8 Conclusion

In summary, we introduce and evaluate a promising novel on-line learning paradigm for hierarchical multi-agent systems. The main idea of our approach is an optimal soft partitioning obtained by considering the agents' information constraints. The partitioning is automatic, without relying on any prior information about the underlying problem structure or control dynamics in the case of model-free learning. This makes our model abstract and principled.
We apply it to a variety of tasks including multi-agent decision-making, mixture-of-experts regression, and divide-and-conquer reinforcement learning. We have extended this idea to a novel information-theoretic approach to meta-learning. In effect, the hierarchical structure equips the system with optimal initializations covering the input space, which facilitates quick adaptation to new, similar tasks. To build this hierarchical structure, we have proposed feature extraction models for classification, regression, and reinforcement learning that are able to extract task-relevant information efficiently, invariant to the order of the inputs. The main strength of our approach is that it follows from simple principles that give rise to a large range of applications. Moreover, we can interpret the system's performance by studying the information processing both at the selection stage and at the expert level, as we have shown by analyzing the discovered meta-knowledge. This can help to alleviate the problems inherent to black-box approaches, for example those based on deep neural networks.

Contributions: H.H. and D.A.B. conceived the project; H.H. designed and implemented the algorithms and experiments and evaluated the results. Both authors discussed the results and wrote the manuscript. Both authors read and approved the manuscript.

Funding: This research was supported by the European Research Council, grant number ERC-StG-2015-ERC, Project ID: 678082, “BRISC: Bounded Rationality in Sensorimotor Coordination”.

Conflicts of Interest: The authors declare no conflicts of interest.

## References

* [1] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265–283, 2016.
* [2] Ekaterina Abramova, Luke Dickens, Daniel Kuhn, and Aldo Faisal. Hierarchical, heterogeneous control of non-linear dynamical systems using reinforcement learning. In European Workshop On Reinforcement Learning at ICML, 2012.
* [3] Howard Aldrich. Organizations evolving. Sage, 1999.
* [4] Rakshit Allamraju and Girish Chowdhary. Communication efficient decentralized gaussian process fusion for multi-uas path planning. In Proceedings of the 2017 American Control Conference (ACC), pages 4442–4447. IEEE, 2017.
* [5] Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage, and Anil Anthony Bharath. Deep reinforcement learning: A brief survey. IEEE Signal Processing Magazine, 34(6):26–38, 2017.
* [6] Christopher G Atkeson, Andrew W Moore, and Stefan Schaal. Locally weighted learning for control. In Lazy learning, pages 75–113. Springer, 1997.
* [7] S Balasundaram and Yogendra Meena. Robust support vector regression in primal with asymmetric huber loss. Neural Processing Letters, 49(3):1399–1431, 2019.
* [8] Horace B Barlow. Unsupervised learning. Neural computation, 1(3):295–311, 1989.
* [9] Peter Bellmann, Patrick Thiam, and Friedhelm Schwenker. Multi-classifier-systems: Architectures, algorithms and applications. In Computational Intelligence for Pattern Recognition, pages 83–113. Springer, 2018.
* [10] Christophe Biernacki, Gilles Celeux, and Gérard Govaert. Assessing a mixture model for clustering with the integrated completed likelihood. IEEE transactions on pattern analysis and machine intelligence, 22(7):719–725, 2000.
* [11] Mathew Botvinick, Sam Ritter, Jane X Wang, Zeb Kurth-Nelson, Charles Blundell, and Demis Hassabis. Reinforcement learning, fast and slow. Trends in cognitive sciences, 2019. * [12] Daniel A Braun, Carsten Mehring, and Daniel M Wolpert. Structure learning in action. Behavioural brain research, 206(2):157–165, 2010. * [13] Pavel Brazdil, Christophe Giraud Carrier, Carlos Soares, and Ricardo Vilalta. Metalearning: Applications to data mining. Springer Science & Business Media, 2008. * [14] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016. * [15] Rich Caruana. Multitask learning. Machine learning, 28(1):41–75, 1997. * [16] Antonio Damasio. Neuroscience and the emergence of neuroeconomics. In Neuroeconomics, pages 207–213. Elsevier, 2009. * [17] Christian Daniel, Gerhard Neumann, and Jan Peters. Hierarchical relative entropy policy search. In Artificial Intelligence and Statistics, pages 273–281, 2012\. * [18] Vul Edward, Goodman Noah, Griffiths Thomas L., and Tenenbaum Joshua B. One and done? optimal decisions from very few samples. Cognitive Science, 38(4):599–637, 2014. * [19] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1126–1135. JMLR. org, 2017. * [20] Roy Fox, Ari Pakman, and Naftali Tishby. Taming the noise in reinforcement learning via soft updates. In Proceedings of the Thirty-Second Conference on Uncertainty in Artificial Intelligence, pages 202–211, 2016. * [21] Alexandre Galashov, Siddhant M Jayakumar, Leonard Hasenclever, Dhruva Tirumala, Jonathan Schwarz, Guillaume Desjardins, Wojciech M Czarnecki, Yee Whye Teh, Razvan Pascanu, and Nicolas Heess. Information asymmetry in kl-regularized rl. In Proceedings of the International Conference on Representation Learning, 2019. * [22] Tim Genewein, Eduard Hez, Zeynab Razzaghpanah, and Daniel A Braun. Structure learning in bayesian sensorimotor integration. PLoS Comput Biol, 11(8):e1004369, 2015. * [23] Tim Genewein, Felix Leibfried, Jordi Grau-Moya, and Daniel Alexander Braun. Bounded rationality, abstraction, and hierarchical decision-making: An information-theoretic optimality principle. Frontiers in Robotics and AI, 2:27, 2015. * [24] Samuel J Gershman, Eric J Horvitz, and Joshua B Tenenbaum. Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science, 349(6245):273–278, 2015. * [25] Dibya Ghosh, Avi Singh, Aravind Rajeswaran, Vikash Kumar, and Sergey Levine. Divide-and-conquer reinforcement learning. In Proceedings of the International Conference on Representation Learning, 2018. * [26] Gerd Gigerenzer and Henry Brighton. Homo heuristicus: Why biased minds make better inferences. Topics in cognitive science, 1(1):107–143, 2009. * [27] Christophe Giraud-Carrier. Metalearning-a tutorial. In Tutorial at the 7th international conference on machine learning and applications (ICMLA), San Diego, California, USA, 2008. * [28] Sebastian Gottwald and Daniel A. Braun. Bounded rational decision-making from elementary computations that reduce uncertainty. Entropy, 21(4), 2019. * [29] Sebastian Gottwald and Daniel A Braun. Systems of bounded rational agents with information-theoretic constraints. Neural computation, 31(2):440–476, 2019. * [30] Jordi Grau-Moya, Matthias Krüger, and Daniel A Braun. 
Non-equilibrium relations for bounded rational decision-making in changing environments. Entropy, 20(1):1, 2017. * [31] Jordi Grau-Moya, Felix Leibfried, Tim Genewein, and Daniel A Braun. Planning with information-processing constraints and model uncertainty in markov decision processes. In Proceeedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 475–491. Springer, 2016\. * [32] Jordi Grau-Moya, Felix Leibfried, and Peter Vrancx. Soft q-learning with mutual-information regularization. In Proceedings of the International Conference on Learning Representations, 2019. * [33] Aditya Grover and Stefano Ermon. Uncertainty autoencoders: Learning compressed representations via variational information maximization. In Proceedings of the The 22nd International Conference on Artificial Intelligence and Statistics, pages 2514–2524, 2019. * [34] Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1352–1361. JMLR. org, 2017. * [35] Heinke Hihn, Sebastian Gottwald, and Daniel A Braun. Bounded rational decision-making with adaptive neural network priors. In IAPR Workshop on Artificial Neural Networks in Pattern Recognition, pages 213–225. Springer, 2018. * [36] Heinke Hihn, Sebastian Gottwald, and Daniel A Braun. An information-theoretic on-line learning principle for specialization in hierarchical decision-making systems. In Proceedings of the 2019 IEEE Conference on Decision-Making and Control (CDC), 2019. * [37] Frank Hutter, Lars Kotthoff, and Joaquin Vanschoren. Automated Machine Learning. Springer. * [38] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. * [39] Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton. Adaptive mixtures of local experts. Neural computation, 3(1):79–87, 1991. * [40] Norbert Jankowski, Wlodzislaw Duch, and Krzysztof Grkabczewski. Meta-learning in computational intelligence, volume 358. Springer Science & Business Media, 2011. * [41] Edwin T Jaynes. Probability theory: the logic of science. Washington University St. Louis, MO, 1996. * [42] Charles Kemp, Amy Perfors, and Joshua B Tenenbaum. Learning overhypotheses with hierarchical bayesian models. Developmental science, 10(3):307–321, 2007. * [43] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Representation Learning, 2014. * [44] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In Proceedings of the International Conference on Representation Learning, 2013. * [45] Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. Siamese neural networks for one-shot image recognition. In ICML deep learning workshop, volume 2, 2015. * [46] Jan Kukačka, Vladimir Golkov, and Daniel Cremers. Regularization for deep learning: A taxonomy. arXiv preprint arXiv:1710.10686, 2017. * [47] Ludmila I Kuncheva. Combining pattern classifiers: methods and algorithms. John Wiley & Sons, 2004. * [48] Brenden Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua Tenenbaum. One shot learning of simple visual concepts. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 33, 2011. * [49] Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. 
Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015. * [50] Lin Lan, Zhenguo Li, Xiaohong Guan, and Pinghui Wang. Meta reinforcement learning with task embedding and shared policy. In Proceedings of the International Joint Conference on Artificial Intelligence, 2019. * [51] Felix Leibfried and Daniel A Braun. A reward-maximizing spiking neuron as a bounded rational decision maker. Neural computation, 27(8):1686–1720, 2015. * [52] Christiane Lemke, Marcin Budka, and Bogdan Gabrys. Metalearning: a survey of trends and technologies. Artificial intelligence review, 44(1):117–130, 2015. * [53] Shuai Li, Wanqing Li, Chris Cook, Ce Zhu, and Yanbo Gao. Independently recurrent neural network (indrnn): Building a longer and deeper rnn. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5457–5466, 2018. * [54] Cecilia Lindig-Leon, Sebastian Gottwald, and Daniel Alexander Braun. Analyzing abstraction and hierarchical decision-making in absolute identification by information-theoretic bounded rationality. Frontiers in Neuroscience, 13:1230, 2019. * [55] Steven M Manson. Bounded rationality in agent-based models: experiments with evolutionary programs. International Journal of Geographical Information Science, 20(9):991–1012, 2006. * [56] Georg Martius, Ralf Der, and Nihat Ay. Information driven self-organization of complex robotic behaviors. PloS one, 8(5):e63400, 2013. * [57] David A McAllester. Pac-bayesian model averaging. In Proceedings of the twelfth annual conference on Computational learning theory, pages 164–170, 1999. * [58] David A McAllester. Pac-bayesian stochastic model selection. Machine Learning, 51(1):5–21, 2003. * [59] Richard D. McKelvey and Thomas R. Palfrey. Quantal response equilibria for normal form games. Games and Economic Behavior, 10(1):6 – 38, 1995. * [60] Rafael Müller, Simon Kornblith, and Geoffrey E Hinton. When does label smoothing help? In Advances in Neural Information Processing Systems, pages 4694–4703, 2019. * [61] Anusha Nagabandi, Ignasi Clavera, Simin Liu, Ronald S Fearing, Pieter Abbeel, Sergey Levine, and Chelsea Finn. Learning to adapt in dynamic, real-world environments through meta-reinforcement learning. In International Conference on Learning Representations, 2018. * [62] Gerhard Neumann, Christian Daniel, Andras Kupcsik, Marc Deisenroth, and Jan Peters. Information-theoretic motor skill learning. In Proceedings of the AAAI Workshop on Intelligent Robotic Systems, 2013. * [63] Pedro Ortega and Daniel Braun. Information, utility and bounded rationality. Lecture Notes in Artificial Intelligence, 6830:269–274, 2011. * [64] Pedro A. Ortega and Daniel A. Braun. Thermodynamics as a theory of decision-making with information-processing costs. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 469(2153), 2013. * [65] Pedro A Ortega, Jane X Wang, Mark Rowland, Tim Genewein, Zeb Kurth-Nelson, Razvan Pascanu, Nicolas Heess, Joel Veness, Alex Pritzel, Pablo Sprechmann, et al. Meta-learning of sequential strategies. arXiv preprint arXiv:1905.03030, 2019. * [66] John W Payne, John William Payne, James R Bettman, and Eric J Johnson. The adaptive decision maker. Cambridge university press, 1993. * [67] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 
Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011. * [68] Zhen Peng, Tim Genewein, Felix Leibfried, and Daniel A Braun. An information-theoretic on-line update principle for perception-action coupling. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 789–796. IEEE, 2017. * [69] Gabriel Pereyra, George Tucker, Jan Chorowski, Łukasz Kaiser, and Geoffrey Hinton. Regularizing neural networks by penalizing confident output distributions. In Proceedings of the International Conference on Learning Representations (ICLR), 2017. * [70] Jette Randløv, Andrew G Barto, and Michael T Rosenstein. Combining reinforcement learning with a local control algorithm. In Proceedings of the International Conference on Machine Learning, 2000. * [71] Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In Proceedings of the International Conference on Learning Representations, 2017. * [72] Jonas Rothfuss, Dennis Lee, Ignasi Clavera, Tamim Asfour, and Pieter Abbeel. Promp: Proximal meta-policy search. In International Conference on Learning Representations, 2018. * [73] Sonja Schach, Sebastian Gottwald, and Daniel A Braun. Quantifying motor task performance by bounded rational decision theory. Frontiers in neuroscience, 12, 2018. * [74] Jürgen Schmidhuber, Jieyu Zhao, and Marco Wiering. Shifting inductive bias with success-story algorithm, adaptive levin search, and incremental self-improvement. Machine Learning, 28(1):105–130, 1997. * [75] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In Proceedings of the International Conference on Machine Learning, pages 1889–1897, 2015. * [76] Friedhelm Schwenker, Hans A Kestler, and Günther Palm. Three learning phases for radial-basis-function networks. Neural networks, 14(4-5):439–458, 2001. * [77] Bernard W Silverman. Density estimation for statistics and data analysis. Routledge, 2018. * [78] Herbert A. Simon. A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1):99–118, 1955. * [79] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929–1958, 2014\. * [80] Richard S Sutton. Generalization in reinforcement learning: Successful examples using sparse coarse coding. In Advances in neural information processing systems, pages 1038–1044, 1996. * [81] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 2018. * [82] Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, pages 1057–1063, 2000. * [83] C Szegedy, V Vanhoucke, S Ioffe, J Shlens, and Z Wojna. Rethinking the inception architecture for computer vision. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2818–2826, 2016. * [84] Sebastian Thrun and Lorien Pratt. Learning to learn. Springer Science & Business Media, 2012. * [85] Naftali Tishby and Daniel Polani. Information theory of decisions and actions. In Perception-Action Cycle: Models, Architectures, and Hardware. Springer, 2011. * [86] Michael Tschannen, Josip Djolonga, Paul K Rubenstein, Sylvain Gelly, and Mario Lucic. 
On mutual information maximization for representation learning. In Proceedings of the International Conference on Representation Learning, 2020. * [87] Hado P van Hasselt, Arthur Guez, Matteo Hessel, Volodymyr Mnih, and David Silver. Learning values across many orders of magnitude. In Advances in Neural Information Processing Systems, pages 4287–4295, 2016. * [88] Ricardo Vilalta and Youssef Drissi. A perspective view and survey of meta-learning. Artificial intelligence review, 18(2):77–95, 2002. * [89] Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning, pages 1096–1103. ACM, 2008. * [90] Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Advances in neural information processing systems, pages 3630–3638, 2016. * [91] John Von Neumann and Oskar Morgenstern. Theory of games and economic behavior (commemorative edition). Princeton university press, 2007. * [92] David H Wolpert. Information theory – the bridge connecting bounded rational game theory and statistical physics. In Complex Engineered Systems, pages 262–290. Springer, 2006. * [93] Rui Xu and Don Wunsch. Clustering, volume 10. John Wiley & Sons, 2008. * [94] Huaxiu Yao, Ying Wei, Junzhou Huang, and Zhenhui Li. Hierarchically structured meta-learning. In Proceedings of the International Conference on Machine Learning, pages 7045–7054, 2019. * [95] Junichiro Yoshimoto, Masaya Nishimura, Yoichi Tokita, and Shin Ishii. Acrobot control by learning the switching of multiple controllers. Artificial Life and Robotics, 9(2):67–71, 2005. * [96] Seniha Esen Yuksel, Joseph N Wilson, and Paul D Gader. Twenty years of mixture of experts. IEEE transactions on neural networks and learning systems, 23(8):1177–1193, 2012.

## Appendix A Experimental Details

We implemented all experiments in TensorFlow [1], together with Scikit-learn [67] and OpenAI Gym [14], and ran the image experiments on a single NVIDIA Tesla V100-SXM2 32GB GPU.

### A Non-Meta-Learning Setting

For classification (Section 4.1) we use linear predictors as experts, such that an expert's response is a linear combination of the inputs weighted by some learned parameters $\omega$: $y=\omega^{T}x$, and the selector is a two-layer MLP with 10 units per layer and tanh non-linearities. In reinforcement learning (Section 4.3) the selection network is a two-layer network with 32 units per layer and tanh non-linearities. Experts are two-layer networks with tanh non-linearities that learn the log-variance and mean of a Gaussian distribution used to predict the control signal. The critic networks have the same architecture but directly learn the value function. As the action space is continuous in the interval [-1,1], we learn $\mu$ and $\log(\sigma)$ of a Gaussian by parameterizing the distribution with a neural network. We sample actions by re-parameterizing the distribution as $a=\mu+\sigma\epsilon$, where $\epsilon\sim\mathcal{N}(0,1)$, so that the sample is differentiable w.r.t. the network outputs (the "re-parametrization trick" introduced by Kingma and Welling (2013) [44]). We train all networks using Adam [43] with a learning rate of $3\cdot 10^{-4}$ and mini-batch size 32, and sample 1024 data points from each task ("Half Moon" etc.) for 10000 episodes. The reported results are averaged over 10 random seeds.
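The reparameterized sampling step can be made concrete with a short sketch. The snippet below is a minimal NumPy illustration, not the TensorFlow code used for the experiments, and the function name `sample_action` is ours:

```python
import numpy as np

def sample_action(mu, log_sigma, rng):
    """Reparameterized Gaussian sample a = mu + sigma * eps, with eps ~ N(0, 1).

    Because eps is drawn independently of the parameters, the same expression
    written inside an autodiff framework is differentiable w.r.t. mu and log_sigma.
    """
    eps = rng.standard_normal(np.shape(mu))
    # In practice the sample may additionally be squashed or clipped to [-1, 1].
    return mu + np.exp(log_sigma) * eps

rng = np.random.default_rng(0)
print(sample_action(np.zeros((4, 1)), np.full((4, 1), -1.0), rng))
```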
### B Meta-Learning Setting

Figure 13: Dataset features in the meta-learning setup. In supervised meta-learning we pass $K$ samples through a convolutional autoencoder producing a $K\times N$ feature matrix, where $N$ is the innermost feature dimension of the autoencoder's bottleneck layer. To find a single feature vector, we apply a max operator along the columns of the feature matrix. For regression we compute a histogram over the data points. In reinforcement learning, at the start of a trial we sample an expert according to the prior $p(m)$, run the policy for $K$ time steps and use these $K$ $(s,a,r,s^{\prime})$ tuples as input to a recurrent neural network that acts as the selection network to find an expert, which remains active until the environment resets.

The features used by the selector in the different meta-learning scenarios are depicted in Figure 13. For regression (Section 6.1) we use a two-layer selection network with 16 units per layer followed by tanh non-linearities. The experts are shallow neural networks with a single hidden layer that learn the log-variance and mean of a Gaussian distribution which they use for prediction. We use the "Huber loss" instead of the MSE as it is more robust [7]. We optimize all networks using Adam [43]. We set $\beta_{1}=25$ and $\beta_{2}=1.25$. For Omniglot (Section 6.2) we followed the design of Vinyals et al. [90] but reduced the number of blocks to one. We used a single convolutional block consisting of 32 3$\times$3 filters with strided convolutions followed by a batch normalization layer and a ReLU non-linearity. The output is fed into a softmax layer giving a distribution over classes. During training we used a meta-batch size of 16. The convolutional autoencoder is a 3-layer network consisting of 16, 16, and 4 filters, each of size 3$\times$3, with strided convolutions followed by a leaky ReLU non-linearity. The layers are mirrored by de-convolutional layers to reconstruct the image. This results in an image embedding of dimensionality 64. The selection network is a two-layer network with 32 units per layer, each followed by a ReLU non-linearity and a dropout layer [79], and is fed into a softmax normalization to produce a distribution over the experts. For 8 and more experts we add a third layer with 32 units. To improve generalization we add MaxNorm regularization on the weights. We augment the dataset by rotating each image by 90, 180, and 270 degrees, resulting in 80 images per class. We also normalize the images to lie in the (0,1) range. We evaluate our method by resetting the system to the state after training, allowing 10 gradient updates, and reporting the final accuracy. We train all networks using Adam [43] with a learning rate of $3\cdot 10^{-4}$. In this experiment we set $\beta_{1}=20.0$ and $\beta_{2}=2.5$ for 2 and 4 experts and $\beta_{1}=50.0$ and $\beta_{2}=1.25$ for 8 and 16 experts. In meta reinforcement learning (Section 6.3) the selector's actor and critic networks are built from RNNs with 200 hidden units each. The critic is trained to minimize the Huber loss between the prediction and the cumulative reward. The experts are two-layer networks with 64 units per layer followed by ReLU non-linearities and are used to learn the parameters of a Gaussian distribution. The critics have the same architecture (except for the output dimensionality). The actor's learning rate is set to $10^{-4}$ and the critic's to $10^{-3}$. We optimize all networks using Adam [43].
We set $\beta_{1}=25.0$ and $\beta_{2}=2.5$. To evaluate MAML on 2-way $N$-shot omniglot dataset we used a inner learning rate of $\alpha=0.05$ and one inner update step per iteration for all settings. We used a single convolutional block followed by a fully connected layer with 64 units and a ReLU non-linearity. For matching networks we used the same architecture. Note, that we reduce the number of layers to make the tests comparable. Using the suggested architectures [19, 90] we achieve classification accuracy $\geq 95\%$. ## Appendix B DKL between two Normal Wishart Distributions KL divergence from $Q$ to $P$ is defined as $\operatorname{\text{D}_{\text{KL}}}(P\|Q)=\int p(m)\log\frac{p(m)}{q(m)}dx$ and a Normal-Wishart distribution is defined as $f(\bm{\mu},\bm{\Lambda}|\bm{\omega},\lambda,\bm{W},\nu)=\altmathcal{N}\left(\bm{\mu}|\bm{\omega},(\lambda\bm{\Lambda})^{-1}\right)\altmathcal{W}(\bm{\Lambda}|\bm{W},\nu),$ where $\bm{\omega}\in\mathbb{R}^{D},\bm{W}\in\mathbb{R}^{D\times D},\nu>D-1,\lambda>0$ are the parameters of the distribution. We optimize over $\bm{\omega}$ and $\bm{W}$, which makes part of the $\operatorname{\text{D}_{\text{KL}}}$ terms constant. So we have: $\displaystyle D_{\mathrm{KL}}\left[\altmathcal{N}_{0}(\bm{\mu})\altmathcal{W}_{0}(\bm{\Lambda})\|\altmathcal{N}_{1}(\bm{\mu})\altmathcal{W}_{1}(\bm{\Lambda})\right]$ $\displaystyle=$ $\displaystyle\int\altmathcal{N}_{0}(\bm{\mu})\altmathcal{W}_{0}(\bm{\Lambda})\log\frac{\altmathcal{N}_{0}(\bm{\mu})\altmathcal{W}_{0}(\bm{\Lambda})}{\altmathcal{N}_{1}(\bm{\mu})\altmathcal{W}_{1}(\bm{\Lambda})}d\bm{\mu}d\bm{\Lambda}$ Now let $\displaystyle p(\bm{\mu},\bm{\Lambda})$ $\displaystyle=$ $\displaystyle p(\bm{\mu}|\bm{\Lambda})p(\bm{\Lambda})=\altmathcal{N}\left(\bm{\mu}|\bm{\mu}_{0},(\lambda_{0}\bm{\Lambda})^{-1}\right)\altmathcal{W}\left(\bm{\Lambda}|\bm{W}_{0},\nu_{0}\right)$ $\displaystyle q(\bm{\mu},\bm{\Lambda})$ $\displaystyle=$ $\displaystyle q(\bm{\mu}|\bm{\Lambda})q(\bm{\Lambda})=\altmathcal{N}\left(\bm{\mu}|\bm{\mu}_{1},(\lambda_{1}\bm{\Lambda})^{-1}\right)\altmathcal{W}\left(\bm{\Lambda}|\bm{W}_{1},\nu_{1}\right).$ We can find the $\operatorname{\text{D}_{\text{KL}}}$ as follows: $\displaystyle D_{\mathrm{KL}}\left[p(\bm{\mu},\bm{\Lambda})\|q(\bm{\mu},\bm{\Lambda})\right]$ $\displaystyle=$ $\displaystyle\int_{\bm{\mu}}\int_{\bm{\Lambda}}p(\bm{\mu},\bm{\Lambda})\log\frac{p(\bm{\mu},\bm{\Lambda})}{q(\bm{\mu},\bm{\Lambda})}d\bm{\mu}d\bm{\Lambda}$ $\displaystyle=$ $\displaystyle\int_{\bm{\mu}}\int_{\bm{\Lambda}}p(\bm{\mu}|\bm{\Lambda})p(\bm{\Lambda})\log\frac{p(\bm{\mu}|\bm{\Lambda})p(\bm{\Lambda})}{q(\bm{\mu}|\bm{\Lambda})q(\bm{\Lambda})}d\bm{\mu}d\bm{\Lambda}$ $\displaystyle\stackrel{{\scriptstyle\text{indep.}}}{{=}}$ $\displaystyle\int_{\bm{\mu}}\int_{\bm{\Lambda}}p(\bm{\mu}|\bm{\Lambda})p(\bm{\Lambda})\log\frac{p(\bm{\mu}|\bm{\Lambda})}{q(\bm{\mu}|\bm{\Lambda})}d\bm{\mu}d\bm{\Lambda}$ $\displaystyle+\int_{\bm{\mu}}\int_{\bm{\Lambda}}p(\bm{\mu}|\bm{\Lambda})p(\bm{\Lambda})\log\frac{p(\bm{\Lambda})}{q(\bm{\Lambda})}d\bm{\mu}d\bm{\Lambda}$ $\displaystyle=$ $\displaystyle\int_{\bm{\Lambda}}p(\bm{\Lambda})\left[\int_{\bm{\mu}}p(\bm{\mu}|\bm{\Lambda})\log\frac{p(\bm{\mu}|\bm{\Lambda})}{q(\bm{\mu}|\bm{\Lambda})}d\bm{\mu}\right]d\bm{\Lambda}$ $\displaystyle+\int_{\bm{\Lambda}}p(\bm{\Lambda})\log\frac{p(\bm{\Lambda})}{q(\bm{\Lambda})}d\bm{\Lambda}$ $\displaystyle\stackrel{{\scriptstyle\text{def.}}}{{=}}$ 
$\displaystyle\mathbb{E}_{p(\bm{\Lambda})}\left[\operatorname{\text{D}_{\text{KL}}}\left[p(\bm{\mu}|\bm{\Lambda})\|q(\bm{\mu}|\bm{\Lambda})\right]\right]+\operatorname{\text{D}_{\text{KL}}}\left[p(\bm{\Lambda})\|q(\bm{\Lambda})\right].$ where $\mathbb{E}_{p(\bm{\Lambda})}[X]=\nu\bm{W}$ and $\operatorname{\text{D}_{\text{KL}}}$ between two Normal Distributions $p(\bm{\mu}|\bm{\Lambda})$ and $q(\bm{\mu}|\bm{\Lambda})$ is $\displaystyle\operatorname{\text{D}_{\text{KL}}}\left[p(\bm{\mu}|\bm{\Lambda})\|q(\bm{\mu}|\bm{\Lambda})\right]$ $\displaystyle=$ $\displaystyle\frac{1}{2}\bigg{[}\mathrm{tr}\left((\lambda_{q}\bm{\Lambda})(\lambda_{p}\bm{\Lambda})^{-1}\right)+\left(\bm{\mu}_{q}-\bm{\mu}_{p}\right)^{\top}(\lambda_{q}\bm{\Lambda})\left(\bm{\mu}_{q}-\bm{\mu}_{p}\right)$ $\displaystyle-D+\log\frac{\|\lambda_{p}\bm{\Lambda}\|}{\|\lambda_{q}\bm{\Lambda}\|}\bigg{]}$ $\displaystyle=$ $\displaystyle\frac{1}{2}\left[D\frac{\lambda_{q}}{\lambda_{p}}+\left(\bm{\mu}_{q}-\bm{\mu}_{p}\right)^{\top}\lambda_{q}\bm{\Lambda}\left(\bm{\mu}_{q}-\bm{\mu}_{p}\right)-D+D\log\frac{\lambda_{q}}{\lambda_{p}}\right],$ so we have $\displaystyle\mathbb{E}_{p(\bm{\Lambda})}\left[D_{\mathrm{KL}}\left[p(\bm{\mu}|\bm{\Lambda})\|q(\bm{\mu}|\bm{\Lambda})\right]\right]$ $\displaystyle=$ $\displaystyle\frac{\lambda_{q}}{2}\left(\bm{\mu}_{q}-\bm{\mu}_{p}\right)^{\top}\nu_{p}\mathbf{W}_{p}\left(\bm{\mu}_{q}-\bm{\mu}_{p}\right)$ $\displaystyle+\underbrace{\frac{D}{2}\left(\frac{\lambda_{q}}{\lambda_{p}}-\log\frac{\lambda_{q}}{\lambda_{p}}-1\right)}_{\text{constant term}}.$ The $\operatorname{\text{D}_{\text{KL}}}$ between two Wishart Distributions $p(\bm{\Lambda})$ and $q(\bm{\Lambda})$ is $\displaystyle D_{KL}[p(\bm{\Lambda})$ $\displaystyle vertq(\bm{\Lambda})]$ $\displaystyle=$ $\displaystyle H(p(\bm{\Lambda}),q(\bm{\Lambda}))-H(p(\bm{\Lambda}))$ $\displaystyle=$ $\displaystyle-\frac{\nu_{q}}{2}\log|\bm{W}_{q}^{-1}\bm{W}_{p}|+\frac{\nu_{p}}{2}(\text{tr}(\bm{W}_{q}^{-1}\bm{W}_{p})-D)$ $\displaystyle+\underbrace{\log\frac{\Gamma_{D}\left(\frac{\nu_{q}}{2}\right)}{\Gamma_{D}\left(\frac{\nu_{p}}{2}\right)}+\tfrac{\nu_{p}-\nu_{q}}{2}\psi_{D}\left(\frac{\nu_{p}}{2}\right)}_{\text{constant term}}$ where $\Gamma_{D}(m)$ is the multivariate Gamma distribution and $\psi_{D}(m)$ is the derivative of the log of the multivariate Gamma distribution, each parameterized by the dimensionality of $\bm{\mu}$ and $\bm{W}$ denoted by $D$. As we keep $\nu$ and $\lambda$ fixed, we can use an estimate, which is only off by constant factor $C$: $\begin{split}\operatorname{\text{D}_{\text{KL}}}\left[p(\bm{\mu},\bm{\Lambda})\|q(\bm{\mu},\bm{\Lambda})\right]=\frac{\lambda_{q}}{2}\left(\bm{\mu}_{q}-\bm{\mu}_{p}\right)^{\top}\nu_{p}\mathbf{W}_{p}\left(\bm{\mu}_{q}-\bm{\mu}_{p}\right)-\\\ \frac{\nu_{q}}{2}\log|\bm{W}_{q}^{-1}\bm{W}_{p}|+\frac{\nu_{p}}{2}(\text{tr}(\bm{W}_{q}^{-1}\bm{W}_{p})-D)+C\end{split}$ (30) where $C=\frac{D}{2}\left(\frac{\lambda_{q}}{\lambda_{p}}-\log\frac{\lambda_{q}}{\lambda_{p}}-1\right)+\log\frac{\Gamma_{D}\left(\frac{\nu_{q}}{2}\right)}{\Gamma_{D}\left(\frac{\nu_{p}}{2}\right)}+\tfrac{\nu_{p}-\nu_{q}}{2}\psi_{D}\left(\frac{\nu_{p}}{2}\right)$ ## Appendix C Additional Results Figure 14: Here we show the soft-partition found by the selection policy for the sine prediction problem $y=a\cdot\sin(x+b)$ for 16 experts, where we sample $a,b$ uniformly at each trial and each color represents an expert. 
Figure 15: Additional results showing how the system adapts to new problems as the number of experts increases, for the $K=1$ (upper row) and $K=5$ (lower row) settings.
# Ellipsephic harmonic series revisited

Jean-Paul Allouche CNRS, IMJ-PRG, Sorbonne, 4 Place Jussieu F-75252 Paris Cedex 05, France <EMAIL_ADDRESS> Yining Hu Institute for Advanced Study in Mathematics Harbin Institute of Technology Harbin, PR China <EMAIL_ADDRESS> Claude Morin <EMAIL_ADDRESS>

###### Abstract Ellipsephic or Kempner-like harmonic series are series of inverses of integers whose expansion in base $B$, for some $B\geq 2$, contains no occurrence of some fixed digit or some fixed block of digits. A prototypical example was proposed by Kempner in 1914, namely the sum of inverses of integers whose expansion in base $10$ contains no occurrence of a given nonzero digit. Results about such series address their convergence as well as closed expressions for their sums (or approximations thereof). Another direction of research is the study of sums of inverses of integers that contain only a given finite number, say $k$, of some digit or some block of digits, and the limits of such sums when $k$ goes to infinity. Generalizing partial results in the literature, we give a complete result for any digit or block of digits in any base. Keywords: Kempner-like series. Ellipsephic numbers. Sum of digits. Counting blocks of digits. MSC Classification: 11A63, 11B85, 68R15.

## 1 Introduction

While the harmonic series $\sum\frac{1}{n}$ is divergent, restricting the indices in the sum to integers satisfying innocent-looking conditions can yield convergent series. One of the first such examples is probably the 1914 result of Kempner [21] stating that the sum of inverses of integers whose expansion in base $10$ contains no occurrence of a given digit ($\neq 0$) converges. After Kempner’s paper and the 1916 paper of Irwin [20], several papers addressed extensions or generalizations of this result, as well as closed forms or numerical computations of the sums of the corresponding series: see, e.g., [2, 9, 10, 11, 12, 14, 15, 16, 17, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33] and the references therein. We will revisit harmonic series with missing digits or missing blocks of digits: these series are called Kempner-like harmonic series or ellipsephic harmonic series in the literature (for the origin of the term ellipsephic coined by C. Mauduit, one can look at [13, p. 12] and [18, Footnote, p. 6]; also see the discussion in [6]). More precisely, let $B$ be an integer $\geq 2$ and let $a_{w,B}(n)$ denote the number of occurrences of the word (string) $w$ in the base $B$ expansion of the integer $n$ (where $|w|$ is the length of the string $w$), and $s_{B}(n)$ the sum of the digits of the base $B$ expansion of $n$.
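To fix notation, the sketch below computes the two counting functions just introduced for a small example; the function names are ours, and the expansion of $n$ is taken without leading zeros, matching the convention used later for blocks $w=0^{j}$.

```python
def digits(n, B):
    """Base-B digits of n, most significant first (n >= 1)."""
    ds = []
    while n > 0:
        ds.append(n % B)
        n //= B
    return ds[::-1]

def s(n, B):
    """s_B(n): sum of the base-B digits of n."""
    return sum(digits(n, B))

def a(n, w, B):
    """a_{w,B}(n): number of (possibly overlapping) occurrences of the
    digit block w in the base-B expansion of n."""
    d = digits(n, B)
    return sum(1 for i in range(len(d) - len(w) + 1) if d[i:i + len(w)] == list(w))

# 2023 = 11111100111_2 has digit sum 9 and contains the block (1, 1) seven times.
print(s(2023, 2), a(2023, (1, 1), 2))
```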
It was proven by Farhi [15] that, for any digit $j$ in $\\{0,1,\dots,9\\}$, one has $\lim_{k\to\infty}\sum_{\begin{subarray}{c}n\geq 1\\\ a_{j,10}(n)=k\end{subarray}}\frac{1}{n}=10\log 10.$ As explained in [6], a post on mathfun asked for the value of $\displaystyle\lim_{k\to\infty}\sum_{\begin{subarray}{c}n\geq 1\\\ s_{2}(n)=k\end{subarray}}\frac{1}{n}\cdot$ It was proved in [6] that $\lim_{k\to\infty}\sum_{\begin{subarray}{c}n\geq 1\\\ s_{B}(n)=k\end{subarray}}\frac{1}{n}=\frac{2\log B}{B-1}\cdot$ (1.1) Thus we have of course $\lim_{k\to\infty}\sum_{\begin{subarray}{c}n\geq 1\\\ a_{1,2}(n)=k\end{subarray}}\frac{1}{n}=\lim_{k\to\infty}\sum_{\begin{subarray}{c}n\geq 1\\\ s_{2}(n)=k\end{subarray}}\frac{1}{n}=2\log 2.$ Here we will evaluate all such series, where $a_{1,B}(n)$ is replaced with $a_{w,B}(n)$, where $w$ is any block of digits in base $B$, by proving that $\lim_{k\to\infty}\sum_{\begin{subarray}{c}n\geq 1\\\ a_{w,B}(n)=k\end{subarray}}\frac{1}{n}=B^{|w|}\log B.$ (1.2) To prove this result, we will replace $1/n$ with the seemingly more complicated function $\log_{B}(\frac{n}{n+1})$, and make use of results proved in (or inspired by) [4, 19]. In passing we will generalize [4, 6] and re-prove [19]. ## 2 “Reducing” the problem Let $B$ be an integer $\geq 2$. Let $w$ be a string of letters in $\\{0,1,\dots,B-1\\}$. Let $a_{w,B}(n)$ be the number of (possibly overlapping) occurrences of of the string $w$ in the base $B$ expansion of $n$. First we note that the series $\displaystyle\sum_{\begin{subarray}{c}n\geq 1\\\ a_{w,B}(n)=k\end{subarray}}\frac{1}{n}$ converges: the proof is the same as in [4, Lemma 1, p. 194], namely one uses a counting argument for the case of a single digit, and one replaces the base with some of its powers for the case of a block of digits. Now, to evaluate the series, the idea is to replace it with a convergent series $\displaystyle\sum_{\begin{subarray}{c}n\geq 1\\\ a_{w,B}(n)=k\end{subarray}}b_{w}(n)$ whose sum, say $A_{w}(k)$, tends to a limit, say $A_{w}$ when $k\to\infty$. Furthermore, if we have the property $b_{w}(n)-1/(B^{|w|}n)={\mathcal{O}}_{w}(1/n^{2})$ when $n$ tends to infinity, then we obtain * $\sum_{\begin{subarray}{c}n\geq 1\\\ a_{w,B}(n)=k\end{subarray}}\frac{1}{n}\ \text{converges, and \ \ }$ * $\lim_{k\to\infty}\sum_{\begin{subarray}{c}n\geq 1\\\ a_{w,B}(n)=k\end{subarray}}\frac{1}{n}=B^{|w|}A_{w}.$ (Note that if $a_{w,B}(n)$ tends to infinity, then $n$ must also tend to infinity.) Inspired by [4, 19], we define $L(n)$ by $L(0):=0$ and $L(n):=\log_{B}\left(\dfrac{n}{n+1}\right)$ for $n\geq 1$. For a string $w$ over the alphabet $[0,B-1]$, let $v(w)$ denote the integer whose expansion in base $B$ is $w$ (with possible leading $0$’s if $w\neq 0^{j}$). ###### Proposition 1. Let $w$ be a nonempty string over the alphabet $[0,B-1]$, $g=B^{|w|-1},\quad h=\left\lfloor\frac{v(w)}{B}\right\rfloor.$ Then, for all $k\geq 0$, $\sum_{\begin{subarray}{c}n\\\ a_{w,B}(gn+h)=k\end{subarray}}L(Bgn+v(w))=-1,$ (2.1) where the sum is over $n\geq 1$ if $w=0^{j}$ and $n\geq 0$ otherwise. ###### Proof. Let $c$ be the last letter of $w$. Let $d_{w}(k)$ be defined by $d_{w}(k)=\sum_{\begin{subarray}{c}n\geq 0\\\ a_{w,B}(n)=k\end{subarray}}L(Bn+c).$ (Note that this series converges since $L(n)\sim\frac{1}{n\log B}$ when $n$ goes to infinity.) 
By writing $n=gr+m$, with $r\geq 0$ and $0\leq m\leq g-1$, we see that $d_{w}(k)=\sum_{m=0}^{g-1}\sum_{\begin{subarray}{c}r\geq 0\\\ a_{w,B}(gr+m)=k\end{subarray}}L(Bgr+Bm+c).$ Similarly, if we let $e_{w}(k)=\sum_{\begin{subarray}{c}n\geq 0\\\ a_{w,B}(Bn+c)=k\end{subarray}}L(Bn+c)$ (which is convergent, like $d_{w}(k)$), then we have $e_{w}(k)=\sum_{m=0}^{g-1}\sum_{\begin{subarray}{c}r\geq 0\\\ a_{w,B}(Bgr+Bm+c)=k\end{subarray}}L(Bgr+Bm+c).$ Note that $a_{w,B}(Bgr+Bm+c)-a_{w,B}(gr+m)=\begin{cases}1&\mbox{if }m=h\\\ 0&\mbox{otherwise}\end{cases}$ for $r\geq 0$ if $w\neq 0^{j}$ and for $r\geq 1$ if $w=0^{j}$. For $w=0^{j}$ and $r=0$, the above difference is $0$ for all $m$ because we do not pad leading $0$’s in this case. Therefore $d_{w}(k)-e_{w}(k)=\sum_{\begin{subarray}{c}r\\\ a_{w,B}(gr+h)=k\end{subarray}}L(Bgr+v(w))\\\ -\sum_{\begin{subarray}{c}r\\\ a_{w,B}(gr+h)=k-1\end{subarray}}L(Bgr+v(w))$ (2.2) where the sum is over $r\geq 1$ if $w=0^{j}$ and $r\geq 0$ otherwise. If we could show that $d_{w}(k)=e_{w}(k)$ for $k>0$, then it would follow from equation (2.2) that the value of the sum $\sum_{\begin{subarray}{c}r\\\ a_{w,B}(gr+h)=k\end{subarray}}L(Bgr+v(w))$ is independent of $k$ and hence equal to $d_{w}(0)-e_{w}(0)$. To prove this, notice that $L(n)-\sum_{j=0}^{B-1}L(Bn+j)=\begin{cases}0&\mbox{ if }n>0\\\ 1&\mbox{ if }n=0,\end{cases}$ (2.3) and $\displaystyle\quad\sum_{\begin{subarray}{c}n\geq 0\\\ a_{w,B}(n)=k\end{subarray}}L(n)$ $\displaystyle=\sum_{j=0}^{B-1}\sum_{\begin{subarray}{c}n\geq 0\\\ a_{w,B}(Bn+j)=k\end{subarray}}L(Bn+j)$ $\displaystyle=\\!\\!\\!\sum_{\begin{subarray}{c}n\geq 0\\\ a_{w,B}(Bn+c)=k\end{subarray}}L(Bn+c)+\sum_{\begin{subarray}{c}j=0\\\ j\neq c\end{subarray}}^{B-1}\sum_{\begin{subarray}{c}n\geq 0\\\ a_{w,B}(n)=k\end{subarray}}L(Bn+j).$ Hence $\displaystyle e_{w}(k)$ $\displaystyle=\sum_{\begin{subarray}{c}n\geq 0\\\ a_{w,B}(Bn+c)=k\end{subarray}}L(Bn+c)$ $\displaystyle=\sum_{\begin{subarray}{c}n\geq 0\\\ a_{w,B}(n)=k\end{subarray}}(L(n)-\sum_{\begin{subarray}{c}j=0\\\ j\neq c\end{subarray}}^{B-1}L(Bn+j))$ $\displaystyle=\sum_{\begin{subarray}{c}n\geq 0\\\ a_{w,B}(n)=k\end{subarray}}(L(n)-\sum_{\begin{subarray}{c}j=0\end{subarray}}^{B-1}L(Bn+j))+\sum_{\begin{subarray}{c}n\geq 0\\\ a_{w,B}(n)=k\end{subarray}}L(Bn+c)$ By (2.3), the first sum is $1$ if $k=0$, and $0$ if $k>0$. The second sum is the definition of $d_{w}(k)$. ∎ ###### Lemma 2. Let $t$ be an integer whose expansion in base $B$ is $t=b_{1}b_{2}\ldots b_{s}$. For $1\leq r\leq s$, If $b_{1}\ldots b_{r}$ is not a suffix of $w$, then $\sum_{\begin{subarray}{c}n\\\ a_{w,B}(B^{r}n+v(b_{1}\ldots b_{r}))=k\end{subarray}}L(B^{s}n+t)=\sum_{\begin{subarray}{c}n\\\ a_{w,B}(B^{r-1}n+v(b_{1}\ldots b_{r-1}))=k\end{subarray}}L(B^{s}n+t).$ If $b_{1}\ldots b_{r}$ is a suffix of $w$, then $\sum_{\begin{subarray}{c}n\\\ a_{w,B}(B^{r}n+v(b_{1}\ldots b_{r}))=k\end{subarray}}L(B^{s}n+t)=\\\ \sum_{\begin{subarray}{c}n\\\ a_{w,B}(B^{r-1}n+v(b_{2}\ldots b_{r}))=k\end{subarray}}L(B^{s-1}n+t^{\prime})\\\ -\sum_{\begin{subarray}{c}j=0\\\ j\neq b_{1}\end{subarray}}^{B-1}\sum_{\begin{subarray}{c}n\\\ a_{w,B}(B^{r-1}n+v_{j})=k\end{subarray}}L(B^{s}n+t_{j})$ (2.4) where $t^{\prime}=v(b_{2}\cdots b_{s})$, $t_{j}=v(jb_{2}\cdots b_{s})$, and $v_{j}=v(jb_{2}\ldots b_{r-1})$ if $r\geq 2$ and $v_{j}=0$ if $r=1$. The proof is the same as in [4]. ###### Theorem 3. 
There is a rational function $b_{w}(n)$ such that for all $k\geq 0$ we have $\sum_{\begin{subarray}{c}n\\\ a_{w,B}(n)=k\end{subarray}}\log_{B}(b_{w}(n))=-1.$ (2.5) (The summation is over $n\geq 1$ for $w=0^{j}$ and $n\geq 0$ otherwise.) and $\log(b_{w}(n))=-\frac{1}{B^{|w|}n}+\mathcal{O}(1/n^{2}).$ (2.6) ###### Proof. The existence of $b_{w}(n)$ follows from Proposition 1 and iterated applications of Lemma 2. The process of obtaining $b_{w}(n)$ can be visualized by a tree $T$ whose root is $\sum_{\begin{subarray}{c}n\\\ a_{w,B}(gn+h)=k\end{subarray}}L(Bgn+v(w))$ and a node is a leaf if the condition of $n$ for sum is $a_{w,B}(n)=k$, has a child corresponding to the right side if we are in the first case in Lemma 2, and has $B$ children corresponding to the $B$ terms (minus signs are included in the terms) of the right side if we are in the second case. Then $b_{w}(n)$ is the sum of the summands of the leaves of this tree. To prove (2.6), we first notice that $\log B\cdot L(an+b)=\log\left(1-\frac{1}{an+b+1}\right)=-\frac{1}{an}+\mathcal{O}(1/n^{2})$ where $a$ and $b$ are positive constants. In particular, in (2.1), $L(Bgn+v(w))=-\frac{1}{\log B\cdot B^{|w|}n}+\mathcal{O}(1/n^{2}).$ Then, we note that, in $T$, the first order term in the summand of each node that is not a leaf is the sum of the first order term in its children. For example, when a node has $B$ children and the first order term of the summand is $-\dfrac{1}{\log B\cdot B^{s}n}$, then the sum of first order terms of the summands of its children is $-\frac{1}{\log B\cdot B^{s-1}n}-\sum_{\begin{subarray}{c}j=0\\\ j\neq b_{1}\end{subarray}}^{B-1}\left(-\frac{1}{\log B\cdot B^{s}n}\right)=-\frac{1}{\log B\cdot B^{s}n}.$ By induction we conclude that $\log_{B}(b_{w}(n))=-\frac{1}{\log B\cdot B^{|w|}n}+\mathcal{O}(1/n^{2}).\qed$ ###### Remark 1. Theorem 3 above generalizes the case $B=2$ in [4] (also see [5]). It can also give another proof of [19, Theorem 3]. ###### Remark 2. Actually the same “reducing trick” can be used to re-prove Equation (1.1) by using a result in [3]. Namely, up to notation, it was proved in [3, Lemme, p. 142] that, for all $k\geq 0$, $\sum_{s_{B}(n)=k}\log\left(\frac{n+1}{B\lfloor n/B\rfloor+B}\right)=-\log B.$ (2.7) Define the fractional part of $n/B$ by $\\{n/B\\}:=n/B-\lfloor n/B\rfloor$. 
Then, we have when $n$ tends to infinity, $\begin{array}[]{lll}\displaystyle\log\left(\frac{n+1}{B\lfloor n/B\rfloor+B}\right)&=&\displaystyle\log\left(1+\frac{1-B+B\\{n/B\\}}{n+B(1-\\{n/B\\})}\right)\\\ &=&\displaystyle\log\left(1+\frac{1-B+B\\{n/B\\}}{n}\right)+O(1/n^{2})\\\ &=&\displaystyle\frac{1-B}{n}+\left(\frac{B\\{n/B\\}}{n}\right)+O(1/n^{2}).\\\ \end{array}$ Thus (convergences are consequences of, e.g., Equation 2.7, see [3]): $\displaystyle-\log B$ $\displaystyle=\sum_{s_{B}(n)=k}\log\left(\frac{n+1}{B\lfloor n/B\rfloor+B}\right)$ $\displaystyle=(1-B)\sum_{s_{B}(n)=k}\frac{1}{n}\ +\sum_{s_{B}(n)=k}\frac{B\\{n/B\\}}{n}+O(1/n^{2}).$ Hence (note that the term $O(1/n^{2})$ below can be chosen independent of $k$), $\displaystyle-\log B$ $\displaystyle=(1-B)\sum_{s_{B}(n)=k}\frac{1}{n}+\ \sum_{0\leq j\leq B-1}\sum_{s_{B}(n)=k-j}\frac{j}{Bn+j}\ +O(1/n^{2})$ $\displaystyle=(1-B)\sum_{s_{B}(n)=k}\frac{1}{n}+\ \sum_{0\leq j\leq B-1}\sum_{s_{B}(n)=k-j}\frac{j}{Bn}\ +O(1/n^{2}).$ Now, if $k$ tends to infinity, we have that $k-j$ tends to infinity for $j\in[0,B-1]$, and also that $n$ must tend to infinity, hence, letting $\lim_{k\to\infty}\sum_{s_{B}(n)=k}\frac{1}{n}:=\ell$, $-\log B=(1-B)\ \ell+\sum_{0\leq j\leq B-1}\frac{j}{B}\ \ell=-\frac{B-1}{2}\ \ell,\ \ \ \text{thus \ \ $\ell=\frac{2\log B}{B-1}\cdot$}$ ## 3 Statements and Declarations The authors have no competing interests. No funds, grants, or other support were received. ## References * [1] * [2] Alexander, R.: Remarks about the digits of integers. J. Austral. Math. Soc. 12, 239–241 (1971) * [3] Allouche, J.-P., Cohen, H., Mendès France, M., Shallit, J.: De nouveaux curieux produits infinis. Acta Arith. 49, 141–153 (1987) * [4] Allouche, J.-P., Shallit, J. O.: Infinite products associated with counting blocks in binary strings. J. London Math. Soc. 39, 193–204 (1989) * [5] Allouche, J.-P., Hajnal, P., Shallit, J. O.: Analysis of an infinite product algorithm. SIAM J. Discrete Math. 2, 1–15 (1989) * [6] Allouche, J.-P., Morin, C.: Kempner-like harmonic series. Preprint at https://arxiv.org/abs/2305.18180 (2023) * [7] Aloui, K.: Sur les entiers ellipséphiques : somme des chiffres et répartition dans les classes de congruence. Period. Math. Hungar. 70, 171–208 (2015) * [8] Aloui, K., Mauduit, C., Mkaouar, M.: Somme des chiffres et répartition dans les classes de congruence pour les palindromes ellipséphiques. Acta Math. Hungar. 151, 409–455 (2017) * [9] Baillie, R.: Sums of reciprocals of integers missing a given digit. Amer. Math. Monthly 86, 372–374 (1979). Also see ERRATA. Amer. Math. Monthly 87, 866 (1980) * [10] Baillie, R.: Summing the curious series of Kempner and Irwin. Preprint at https://arxiv.org/abs/0806.4410 (2023) * [11] Behforooz, G. H.: Thinning out the harmonic series. Math. Mag. 68, 289–293 (1995) * [12] Boas, R. P.: Some remarkable sequences of integers. In: Honsberger, R. (ed.) Mathematical Plums. The Dolciani Mathematical Expositions, 4, pp. 38–61. Mathematical Association of America, Washington, D.C. (1979) * [13] Col, S.: Propriétés multiplicatives d’entiers soumis à des conditions digitales. Thèse, Nancy (2006) * [14] Craven, B. D.: On digital distribution in some integer sequences. J. Austral. Math. Soc. 5, 325–330 (1965) * [15] Farhi, B.: A curious result related to Kempner’s series. Amer. Math. Monthly 115, 933–938 (2008) * [16] Fischer, H.-J.: Die Summe der Reziproken der natürlichen Zahlen ohne Ziffer $9$. Elem. Math. 48, 100–106 (1993) * [17] Gordon, R. A.: Comments on “Subsums of the harmonic series”. 
Amer. Math. Monthly 126, 275–279 (2019) * [18] Hu, N.: Fractal Uncertainty Principles for Ellipsephic Sets. M.Sc. Thesis, University of British Columbia (2021) http://dx.doi.org/10.14288/1.0396939 * [19] Hu, Y.: Patterns in numbers and infinite sums and products. J. Number Theory 162, 589–600 (2016) * [20] Irwin, F.: A curious convergent series. Amer. Math. Monthly 23, 149–152 (1916) * [21] Kempner, A. J.: A curious convergent series. Amer. Math. Monthly 21, 48–50 (1914) * [22] Kløve, T.: Power sums of integers with missing digits. Math. Scand. 28, 247–251 (1971) * [23] Köhler, G., Spilker, J.: Dirichlet-Reihen zu Kempners merkwürdiger konvergenter Reihe. Math. Semesterber. 56, 187–199 (2009) * [24] Lubeck, B., Ponomarenko, V.: Subsums of the harmonic series. Amer. Math. Monthly 125, 351–355 (2018) * [25] Mukherjee, R., Sarkar, N.: A short note on a curious convergent series. Asian-Eur. J. Math. 14, Paper No. 2150158 (2021) * [26] Nathanson, M. B.: Dirichlet series of integers with missing digits. J. Number Theory 222, 30–37 (2021) * [27] Nathanson, M. B.: Convergent series of integers with missing digits. Ramanujan J. 58, 667–676 (2022) * [28] Nathanson, M. B.: Curious convergent series of integers with missing digits. Integers 21A, Ron Graham Memorial Volume, Paper No. A18 (2021) * [29] Schmelzer, T., Baillie, R.: Summing a curious, slowly convergent series. Amer. Math. Monthly 115, 525–540 (2008) * [30] Segal, A. C., Lepp, B., Fine, N. J.: A limit problem [E 2204]. Amer. Math. Monthly 77, 1009–1010 (1970) * [31] Wadhwa, A. D.: An interesting subseries of the harmonic series. Amer. Math. Monthly 82, 931–933 (1975) * [32] Wadhwa, A. D.: Some convergent subseries of the harmonic series. Amer. Math. Monthly 85, 661–663 (1978) * [33] A. Walker, A. Walker, Arithmetic progressions with restricted digits. Amer. Math. Monthly 127, 140–150 (2020)
# A preference elicitation interface for collecting dense recommender datasets with rich user information Demo

Pantelis P. Analytis Cornell University<EMAIL_ADDRESS>, Tobias Schnabel Cornell University<EMAIL_ADDRESS>, Stefan Herzog MPI for Human Development<EMAIL_ADDRESS>, Daniel Barkoczi MPI for Human Development<EMAIL_ADDRESS>and Thorsten Joachims Cornell University<EMAIL_ADDRESS>

###### Abstract. We present an interface that can be leveraged to quickly and effortlessly elicit people’s preferences for visual stimuli, such as photographs, visual art and screensavers, along with rich side-information about its users. We plan to employ the new interface to collect dense recommender datasets that will complement existing sparse industry-scale datasets. The new interface and the collected datasets are intended to foster integration of research in recommender systems with research in the social and behavioral sciences. For instance, we will use the datasets to assess the diversity of human preferences in different domains of visual experience. Further, using the datasets we will be able to measure crucial psychological effects, such as preference consistency, scale acuity and anchoring biases. Last, the datasets will facilitate evaluation in counterfactual learning experiments.

Keywords: preference elicitation, recommender system datasets, visual art. CCS Concepts: Human-centered computing — Collaborative filtering; Social media; Collaborative and social computing devices.

## 1. Introduction

Over the last three decades the recommender systems community has made immense progress in the way we represent, understand and learn people’s preferences as a function of previously collected explicit or implicit evaluations. Research in recommender systems has substantially increased the quality of the curated and recommended content in the online world. Several large datasets have been a crucial component of this success, as they have commonly functioned as test-beds on which new theories and algorithms have been compared (Movielens, LastFM and Netflix to name just a few). Most of these datasets, however, are very sparse. They contain thousands of items, and even the most popular items have been evaluated by only a small subset of the users. Given the large fraction of missing ratings, it is challenging to accurately estimate even simple quantities like the average quality of an item, especially since the patterns of missing data are subject to strong selection biases (Pradel/etal/12, ). This presents fundamental challenges when evaluating recommendation algorithms on sparse datasets. Further, it becomes an obstacle for scholars in the social and behavioral sciences, as workarounds have to be developed for dealing with missing values. To the best of our knowledge, the only dense collaborative filtering dataset was the outcome of the Jester interface (goldberg2001eigentaste, ). The interface curated 100 jokes of various styles and topics. People used a slider to evaluate 5 jokes that were presented to them sequentially. The first evaluations were used to estimate people’s preferences and to recommend them the remaining jokes. The users continued to read and evaluate jokes until the pool of 100 items was exhausted. In total, more than 70,000 people have evaluated at least some of the jokes, and more than 14,000 have evaluated all the jokes, resulting in a fully evaluated subset of the dataset. Figure 1.
The design of the preference elicitation interface. We replicate the design of the _Jester_ interface, using a continuous bar that people can use to express how much they liked or disliked an item. Participants have to wait for at least 5 seconds before they can proceed to the next item.

## 2. The interface and data collection

We plan to collect new datasets in different domains of people’s visual experience, ranging from photographs and paintings to designs for screensavers. Our interface replicates the design of the Jester interface, adding new elements that counteract its limitations. At the outset, people are provided with instructions about how to use the interface. Then, before the presentation of the stimuli, we collect demographic information about the users. To reduce possible order effects, the visual stimuli are presented in random order. As in Jester, users are asked to evaluate items using a slider bar; they can move the marker of the slider bar to the left to indicate that they did not like the item, or to the right to indicate that they liked it. We implement a continuous scale, which allows a fine-grained evaluation of the presented items. Finally, to limit anchoring bias, the slider bar is initially semi-transparent and the colors become vivid only when the user has clicked on it. (The interface can be accessed at http://abc-webstudy.mpib-berlin.mpg.de/recstrgs/study_simulator.php. Both the code for the interface and the collected data will be publicly available.) Once all the items have been evaluated, we collect further psychologically relevant information about the users. Numerous studies have shown that side information can substantially improve estimates of people’s preferences and that it complements first-hand evaluations (park2009pairwise, ). In the first experiments we will deploy the visual-art expertise questionnaire developed by Chatterjee et al. (chatterjee2010assessment, ) to gauge people’s familiarity with the visual arts, and a succinct version of the big-five questionnaire to quickly assess people’s personalities (rammstedt2007measuring, ) (see Figure 2). It takes about 20 minutes to complete the current version of the interface, including the instructions, questionnaires and evaluation phase. We intend to conduct the first experiments on Amazon’s Mechanical Turk labor market. Several studies have shown that for effortless tasks the results produced on mTurk are comparable to laboratory studies (paolacci2010running, ). The visual stimuli used in this interface evoke immediate aesthetic judgments, and thus can quickly be transformed into evaluations. Eventually, we intend to develop a data visualization tool that will reward people who complete the study with information about their preference profiles and how they relate to those of other individuals. In this way, we intend to create an inherently motivating interface that uses as a reward the informational value generated by the collected data, thereby reducing the cost of data collection while also introducing the basic ideas behind collaborative filtering and recommender systems to the wider public. Figure 2. At the end of the evaluation phase we collect additional information about the users. We invite users to complete a questionnaire about their expertise in the visual arts and a 10-question version of the big-five questionnaire.

## 3. Potential applications

We envisage several new applications for the developed datasets.
Here we foreshadow a few of these potential uses, keeping in mind that the community that will have access to the produced datasets will certainly come up with more. First, they will facilitate cross-fertilization with the cognitive and behavioral sciences. For instance, social and cognitive psychologists have extensively studied simple strategies for inference and estimation where different features are used to predict an objective truth. The new datasets will open the way to study strategies for social preference learning in domains where no objective truth exists (analytis2015you, ). Also, we can manipulate the design of the interface to study relevant behavioral effects, such as to study the consistency of evaluations or to investigate the effect of the granularity of the evaluation scale on the predictions. To sum up, the datasets will allow us to better understand preference diversity and its implications for different recommender systems algorithms as well as for psychological social learning strategies. Moreover, we believe that the new datasets can fuel existing streams of research in recommender systems and machine learning. For instance, dealing with selection-biases and with data missing not at random is a growing research stream in recommender systems and machine learning (schnabel2016unbiased, ). To evaluate algorithms tuned to deal with such problems, we can impose selection biases ex-ante and remove data from the dense dataset accordingly. This set-up could complement existing sparse datasets for learning, with the difference that selection biases can be controlled and varied in order to test robustness. Moving on to the broader class of counterfactual simulations, dense datasets greatly simplify evaluation since they can serve as ground-truth when conducting simulations (salganik2006experimental, ). ## References * [1] Pantelis P Analytis, Daniel Barkoczi, and Stefan M Herzog. You’re special, but it doesn’t matter if you’re a greenhorn: Social recommender strategies for mere mortals. In Cognitive Science Society, pages 1799–1804. Cognitive Science Society, 2015. * [2] Anjan Chatterjee, Page Widick, Rebecca Sternschein, William B Smith, and Bianca Bromberger. The assessment of art attributes. Empirical Studies of the Arts, 28(2):207–222, 2010. * [3] Ken Goldberg, Theresa Roeder, Dhruv Gupta, and Chris Perkins. Eigentaste: A constant time collaborative filtering algorithm. Information Retrieval, 4(2):133–151, 2001. * [4] Gabriele Paolacci, Jesse Chandler, and Panagiotis G Ipeirotis. Running experiments on Amazon Mechanical Turk. Judgment and Decision Making, 5(5), 2010. * [5] Seung-Taek Park and Wei Chu. Pairwise preference regression for cold-start recommendation. In Proceedings of the third ACM conference on Recommender systems, pages 21–28. ACM, 2009. * [6] B. Pradel, N. Usunier, and P. Gallinari. Ranking with non-random missing ratings: influence of popularity and positivity on evaluation metrics. In RecSys, pages 147–154, 2012. * [7] Beatrice Rammstedt and Oliver P John. Measuring personality in one minute or less: A 10-item short version of the big five inventory in english and german. Journal of Research in Personality, 41(1):203–212, 2007. * [8] Matthew J Salganik, Peter Sheridan Dodds, and Duncan J Watts. Experimental study of inequality and unpredictability in an artificial cultural market. Science, 311(5762):854–856, 2006. * [9] Tobias Schnabel, Adith Swaminathan, Peter I Frazier, and Thorsten Joachims. Unbiased comparative evaluation of ranking functions. 
In ICTIR, pages 109–118, 2016.
# An analytical reconstruction formula with efficient implementation for a modality of Compton Scattering Tomography with translational geometry

Cécilia Tarpaua,b,c,∗, Javier Cebeirod, Geneviève Rolleta, Mai K. Nguyenb and Laurent Dumasc — a LPTM (UMR 8089), CY Cergy Paris Université, CNRS, Cergy-Pontoise, France; b ETIS (UMR 8051), CY Cergy Paris Université, ENSEA, CNRS, Cergy-Pontoise, France; c LMV (UMR 8100), Université de Versailles Saint-Quentin, CNRS, Versailles, France; d CEDEMA, Universidad Nacional de San Martín, San Martín, Argentina. ∗ Corresponding author

###### Abstract In this paper, we address an alternative formulation for the exact inverse formula of the Radon transform on circle arcs arising in a modality of Compton Scattering Tomography in translational geometry proposed by Webber and Miller (Inverse Problems (36)2, 025007, 2020). The original study proposes a first reconstruction method based on the theory of Volterra integral equations. The numerical realization of such a type of inverse formula may exhibit some difficulties, mainly due to stability issues. Here, we provide a suitable formulation for exact inversion that can be straightforwardly implemented in the Fourier domain. Simulations are carried out to illustrate the efficiency of the proposed reconstruction algorithm. Keywords: Analytic inversion, Compton Scattering Tomography, Image formation, Image reconstruction, Radon transform on double circle arcs

## 1 Introduction

Compton Scattering Tomography (CST) is an imaging technique whose objective is to exploit the photons Compton-scattered by the scanned object in order to reconstruct its electron density map. Since the early propositions of this type of imaging by Lale [1], Clarke [2] and Farmer [3], studies of CST systems have already shown promising results in medical imaging, for instance for the identification of lung tumours [4, 5], but also for earthquake engineering [6], cultural heritage imaging [7, 8], landmine detection [9] and agricultural measurements [10]. In fact, CST has made it possible to widen the fields of application of tomography to the imaging of large one-sided objects, because, in some configurations, sources and detectors can be placed on the same side of the object. The proposition of CST systems is strongly related to the study of the associated integral transform, which models data acquisition [11, 12]. These integral transforms are generalizations of the classical Radon transform on lines studied by Radon [13] and Cormack [14]. While in two-dimensional CST the manifolds are circle arcs or double circle arcs, in three dimensions data is acquired on toric surfaces. These circular geometries for the considered manifolds originate from the Compton effect. This physical phenomenon occurs when a photon emitted by the source with energy $E_{0}$ collides with an electron as it passes through matter. The photon is scattered, i.e. deviated by an angle $\omega$ from its original direction, and loses part of its energy. The Compton formula gives the one-to-one correspondence between the energy of scattered photons $E(\omega)$ and the related scattering angle $\omega$ $E(\omega)=\frac{E_{0}}{1+\frac{E_{0}}{m\,c^{2}}\left(1-\cos(\omega)\right)}.$ (1.1) Here $m$ is the electron mass and $c$ the speed of light. This relation also ensures that the scattering sites of photons recorded with the same energy $E(\omega)$ are located on a circle arc labelled by the scattering angle $\omega$.
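As a quick numerical check of (1.1), the sketch below evaluates the scattered energy for a few angles and inverts the relation back to the angle. The 140 keV source energy is an illustrative value of ours, not taken from the paper, and the function names are hypothetical.

```python
import numpy as np

MC2 = 511.0  # electron rest energy m c^2 in keV

def scattered_energy(E0, omega):
    """Compton formula (1.1): energy of a photon of initial energy E0 (keV)
    scattered by an angle omega (radians)."""
    return E0 / (1.0 + (E0 / MC2) * (1.0 - np.cos(omega)))

def scattering_angle(E0, E):
    """Invert (1.1): scattering angle associated with a detected energy E."""
    return np.arccos(1.0 - (MC2 / E0) * (E0 / E - 1.0))

E0 = 140.0  # keV, illustrative monochromatic source
for omega_deg in (30, 90, 150):
    E = scattered_energy(E0, np.radians(omega_deg))
    print(omega_deg, round(E, 1), round(np.degrees(scattering_angle(E0, E)), 1))
```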
Figure 1 illustrates the general functioning principle of a CST system. Figure 1: General functioning principle of a CST system. Photons are emitted by source $S$, interact at sites $M$, and are recorded at site $D$. When a photon is detected carrying an energy $E(\omega_{1})$ (resp. $E(\omega_{2})$), the possible interaction sites lie on the upper (resp. lower) circle arc which subtends the angle $(\pi-\omega_{1})$ (resp. $(\pi-\omega_{2})$). Several issues around the study of these generalized Radon transforms are then of interest, such as existence and uniqueness of the solution, its stability, or the range conditions. The most important problem remains the reconstruction of the image, mostly addressed through the derivation of analytical inverse formulas.

Figure 2: Previously proposed CST modalities (a–f). (a): Fixed source and detectors placed on a line. (b): Rotating source-detector pair, diametrically opposed. (c): Rotating source-detector pair. (d): Fixed source and detectors placed on a ring. (e): Detector rotating around a fixed source. (f): Source and detector translating simultaneously along two parallel lines. In all figures: the source $S$ is represented by a red point; the detector(s) $D$ is (are) represented by blue point(s); the points $M$, $M_{i}$ or $M^{\prime}_{i}$, in black, are running points and examples of scattering sites. An example of the trajectory of a photon whose scattering site is $M$ is shown in purple; the corresponding scattering angle is denoted $\omega$. The object to scan is represented in grey. The red continuous curves are examples of scanning circle arcs. For (c), (e) and (f), the dashed circles (resp. lines) represent the circular (resp. linear) paths on which the sensors move.

Accordingly, the first two-dimensional CST system studied consisted of a fixed line of detectors containing a fixed source [15]. See Figure 2a. The associated scanning circle arcs form a family of semicircles with a common fixed end point (the point source) and the other extremity (the considered detector) on the line. Circular geometries for CST were then proposed. The first considers a source-detector pair, diametrically opposed, in rotation around the object [16, 17, 18]; data acquisition is modelled by a Radon transform on circle arcs having a fixed source-detector chord length. See Figure 2b. The second also consists of a source-detector pair moving on a circle, but the distance between the two is no longer constant and depends on the scattering angle [19] (Fig. 2c). With this modality, data measurement is performed on a family of circles orthogonal to the fixed circular path of the source-detector pair. In a third modality (see Fig. 2d), a set of detectors placed on a ring containing the source was considered instead, yielding a completely fixed CST modality [20, 21, 22, 23]. The corresponding Radon transform measures the contribution of the photons scattered at points located on circle arcs having a common extremity (the point source) and the other (the considered detector) on the detector ring. Some of the configurations mentioned above employ collimators to separate photons coming from different circle arcs. Geometries without collimation have also been studied. Consequently, for a given detector position, the acquisition is performed on a family of double circle arcs and the amount of registered data is potentially doubled.
This feature may also reduce the acquisition time and, ultimately, radiation exposure in comparison with a similar geometry with collimation. Among the proposed CST systems without collimation, one assumes a fixed source and a detector rotating around this source [24]. See Figure 2e. The other one, proposed in [25], is made of a source and a detector which translate along a line (Fig. 2f). The direct reconstruction of volumes is also of interest, with the proposition of three-dimensional CST systems. These modalities use uncollimated sources and detectors, and data acquisition is performed on toric surfaces. In many cases, these 3D modalities correspond to extensions of 2D systems: circular geometries thus become spherical or cylindrical [26, 27, 28] and linear geometries become planar [25]. Here, we are interested in the two-dimensional modality proposed in [25]. The purpose of this article is to derive an alternative formulation suitable for a faster and more efficient reconstruction algorithm. The associated reconstruction algorithm uses only classical tools such as the Fast Fourier Transform (FFT). The paper is outlined as follows. Section 2 recalls the general setup of the system and the model for data acquisition for the modality. Section 3 introduces the main result of the paper, that is, the alternative formulation for the inverse Radon transform on double circle arcs. Section 4 gives the discrete formulation of the forward operator, as well as the proposed strategy to reconstruct the object under study. Section 5 discusses the obtained simulation results with a study of the influence of some parameters on reconstruction quality.

## 2 Setup and measurement model of the CST system under study

### 2.1 Setup

The system under study is made of a source, assumed to be monochromatic, and a detector separated from it by a fixed distance. The source and the detector move respectively on the horizontal lines of equation $z=3$ and $z=1$. The horizontal position of the source-detector pair is labelled by $x_{0}$ (see Figure 3). Alternatively, this system may be sketched with fixed lines of sources and detectors that are used in pairs. The object, placed below the detector path, is scanned transversely. No collimation is used at the detector, hence the acquisition is performed on a family of double circle arcs (called toric sections in the original publication [25]). (As in the original article, we adopt the same working assumptions: first-order Compton scattering is the only source of attenuation for the radiation, and data acquisition is performed with a source-detector pair assumed to be point-like. These conditions are common in the literature [16, 26, 29, 30, 31, 20, 24] and have already been discussed in [15, 19, 26, 24].) We parameterize these circle arcs and define the corresponding Radon transform in the next paragraph. Figure 3: Setup and parameterization of the CST modality proposed in [25]. The source $S$ and the detector are respectively represented by a red and a blue point. To distinguish the four half-arcs, $S_{1},S_{3}$ and $S_{2},S_{4}$ are respectively depicted in red and green. $\Omega_{1,2,3,4}$ denote the centres of the circles supporting the half-arcs $S_{1,2,3,4}$. The point $M$ is an example of a scattering site.
### 2.2 Modelling of data acquisition using the CST system Given a scattering angle $\omega$, data acquisition is performed on a family of double circle arcs of radius $r$ where $r=1/\sin(\pi-\omega)$ (or, equivalently, $\omega=\pi-\arcsin{(1/r)}$). For parameterization, these double circle arcs are obtained with the union of four half arcs denoted $S_{j}(x_{0},r)$, $j\in\\{1,2,3,4\\}$ of respective equation $\displaystyle x_{1}$ $\displaystyle=x_{0}+\sqrt{r^{2}-1}+\sqrt{r^{2}-(z-2)^{2}},\;\;x_{2}=x_{0}+\sqrt{r^{2}-1}-\sqrt{r^{2}-(z-2)^{2}},$ $\displaystyle x_{3}$ $\displaystyle=x_{0}-\sqrt{r^{2}-1}+\sqrt{r^{2}-(z-2)^{2}},\;\;x_{4}=x_{0}-\sqrt{r^{2}-1}-\sqrt{r^{2}-(z-2)^{2}}$ and $z\in]2-r,1[$. See Figure 3. The Radon transform which mathematically models data measurement with this CST system is then defined as follows : ###### Definition 1. Let $f$ be an unknown function, non-negative, continuous and compactly supported in the half plane $z<1$. The Radon transform on double circle arcs $\mathcal{R}_{\mathcal{D}}$ maps $f$ into the set of its integrals over the family of double circle arcs as $\mathcal{R}_{\mathcal{D}}f(x_{0},r)=\int_{\bigcup_{j=1}^{4}S_{j}(x_{0},r)}f(x,y)ds.$ (2.1) where $ds$ refers to the elementary arc length measure on the considered double circle arc. Then, after computation of the arc length measure, we have the explicit reformulation for $\mathcal{R}_{\mathcal{D}}$ [25, Proposition 3.1] : $\mathcal{R}_{\mathcal{D}}f(x_{0},r)=\int_{1}^{r}\frac{1}{\sqrt{1-\left(\frac{z}{r}\right)^{2}}}\left(\sum_{j=1}^{2}f_{1}\left(\sqrt{r^{2}-1}+(-1)^{j}r\sqrt{1-\left(\frac{z}{r}\right)^{2}}+x_{0},z\right)\right.+\\\ \left.f_{1}\left(-\sqrt{r^{2}-1}+(-1)^{j}r\sqrt{1-\left(\frac{z}{r}\right)^{2}}+x_{0},z\right)\right)dz,$ (2.2) where $f_{1}(x,z)=f(x,2-z)$. In the original study of this modality, the invertibility of the corresponding Radon transform as well as its analytical inversion formula has been established. The invertibility was proven using the theory of integral equations and resulted in a Volterra integral equation with a weakly singular kernel in the Fourier domain. This study also leads to a formulation for inversion formula as an integral transformation with a kernel computed iteratively. The numerical calculation of this kind of kernel may require high computational time and/or memory. Furthermore, as mentioned in the Remark 3.4 of the original paper, the proposed approach by Webber is severely ill-posed, particularly in terms of stability. Implementing such a method can lead to large instabilities, even when these are due to small changes in the data. ## 3 An alternative formulation for the inversion formula of the Radon transform on double circle arcs In this section, we state the main result of the paper, a different formulation for the associated inversion formula, that will be easier to implement numerically. Let us introduce before some notations that will be used in the proofs. ### 3.1 Notations It is useful to define the following transform pairs. ###### Definition 2 (Fourier transform). Let $f$ be a compactly supported function in $\mathbb{R}^{n}$. The n-dimensional Fourier transform of $f$, denoted $\widehat{f}$, is given by $\widehat{f}(\boldsymbol{\xi})=\int_{\mathbb{R}^{n}}f(x)e^{-i\boldsymbol{x}\cdot\boldsymbol{\xi}}d\boldsymbol{x}$ (3.1) with $\boldsymbol{\xi}\in\mathbb{R}^{n}$. 
The inverse Fourier transform is $f(\boldsymbol{x})=\frac{1}{(2\pi)^{n}}\int_{\mathbb{R}^{n}}\widehat{f}(\boldsymbol{\xi})e^{i\boldsymbol{x}\cdot\boldsymbol{\xi}}d\boldsymbol{\xi}.$ (3.2) ###### Definition 3 (Fourier cosine transform [32]). Let $f$ be a compactly supported function in $\mathbb{R}^{+}$. The Fourier cosine transform of $f$, denoted $\widehat{f}^{c}$, is given by $\widehat{f}^{c}(\xi)=\sqrt{\frac{2}{\pi}}\int_{0}^{\infty}f(x)\cos(x\xi)dx$ (3.3) with $\xi\in\mathbb{R}$. The inverse Fourier cosine transform is $f(x)=\sqrt{\frac{2}{\pi}}\int_{0}^{\infty}\widehat{f}^{c}(\xi)\cos(x\xi)d\xi.$ (3.4) We also define the Hankel transform. ###### Definition 4 (Hankel transform [33]). Let $f$ be a compactly supported function in $\mathbb{R}^{+}$. The zero-order Hankel transform of $f$ is defined as $\mathcal{H}_{0}f(\eta)=\int_{0}^{\infty}f(r)J_{0}(\eta r)rdr$ (3.5) where $J_{0}$ stands for the Bessel function of the first kind of order $0$. Finally, we recall the integral representation of the Bessel function $J_{0}$: $J_{0}(x)=\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{ix\sin{\theta}}d\theta.$ (3.6) ### 3.2 Inversion formula ###### Proposition 1. Denote by $\mathcal{G}f(x_{0},r)$ the operator whose Fourier transform with respect to the first variable is $\widehat{\mathcal{G}}f(\xi,r)=\frac{\widehat{\mathcal{R}}_{\mathcal{D}}f(\xi,r)}{2r\cos{(\xi\sqrt{r^{2}-1})}},$ (3.7) if $r>1$, and $0$ when $r\in[0,1]$. The unknown function $f$ is then completely recovered from $\widehat{\mathcal{G}}f$ as follows: $f(x,z)=\frac{1}{4\pi}\int_{-\infty}^{\infty}e^{ix\xi}\int_{0}^{\infty}\mathcal{H}_{0}\widehat{\mathcal{G}}f(\xi,\sqrt{\xi^{2}+\sigma^{2}})\cos{(\sigma(2-z))}\sigma d\sigma d\xi.$ (3.8) ###### Proof. With the change of variables $s=z/r$ in (2.2) and taking the Fourier transform of $\mathcal{R}_{\mathcal{D}}$ with respect to the variable $x_{0}$, one gets $\widehat{\mathcal{R}}_{\mathcal{D}}f(\xi,r)=\int_{-\infty}^{\infty}dx_{0}\int_{1/r}^{1}ds\frac{re^{-ix_{0}\xi}}{\sqrt{1-s^{2}}}\cdot\\ \left(\sum_{j=1}^{2}f_{1}\left(\sqrt{r^{2}-1}+(-1)^{j}r\sqrt{1-s^{2}}+x_{0},rs\right)+f_{1}\left(-\sqrt{r^{2}-1}+(-1)^{j}r\sqrt{1-s^{2}}+x_{0},rs\right)\right).$ (3.9) With the second change of variables $x=x_{0}\pm\sqrt{r^{2}-1}+(-1)^{j}r\sqrt{1-s^{2}}$, one gets $\displaystyle\widehat{\mathcal{R}}_{\mathcal{D}}f(\xi,r)=4\int_{1/r}^{1}ds\frac{r}{\sqrt{1-s^{2}}}\widehat{f}_{1}(\xi,rs)\cos{(\xi\sqrt{r^{2}-1})}\cos{(\xi r\sqrt{1-s^{2}})}$ (3.10) where $\widehat{f}_{1}$ stands for the one-dimensional Fourier transform with respect to the variable $x$. Using relation (3.7), multiplying both sides of (3.10) by $r\cdot J_{0}(\eta r)$ with $\eta\geq 1$ and integrating with respect to the variable $r$, for $r>1$, one recognizes the Hankel transform of $\widehat{\mathcal{G}}f$, denoted $\mathcal{H}_{0}\widehat{\mathcal{G}}f$: $\mathcal{H}_{0}\widehat{\mathcal{G}}f(\xi,\eta)=2\int_{1}^{\infty}dr\int_{1/r}^{1}ds\frac{r}{\sqrt{1-s^{2}}}\widehat{f}_{1}(\xi,rs)\cos{(\xi r\sqrt{1-s^{2}})}J_{0}(\eta r).$ (3.11) Then, with the double substitution $(r=\sqrt{z^{2}+b^{2}},s=z/\sqrt{z^{2}+b^{2}})$, one gets $\mathcal{H}_{0}\widehat{\mathcal{G}}f(\xi,\eta)=2\int_{1}^{\infty}dz\widehat{f}_{1}(\xi,z)\int_{0}^{\infty}db\cos{(\xi b)}J_{0}(\eta\sqrt{z^{2}+b^{2}}).$ (3.12) The result of the $b$-integral is given in the table [34, p. 55, eq. (35)]. 
Finally, one gets for $0<\xi<\eta$ $\mathcal{H}_{0}\widehat{\mathcal{G}}f(\xi,\eta)=2\int_{1}^{\infty}dz\widehat{f}_{1}(\xi,z)\frac{1}{\sqrt{\eta^{2}-\xi^{2}}}\cos{(z\sqrt{\eta^{2}-\xi^{2}})}$ (3.13) and $\mathcal{H}_{0}\widehat{\mathcal{G}}f(\xi,\eta)=0$ if $\eta\leq\xi$. Let $0<\xi<\eta$. Then, using the fact that $f_{1}(x,z)=0$ for $z\in[0,1]$, $\int_{0}^{\infty}dz\widehat{f}_{1}(\xi,z)\cos{(z\sqrt{\eta^{2}-\xi^{2}})}=\frac{\sqrt{\eta^{2}-\xi^{2}}}{2}\mathcal{H}_{0}\widehat{\mathcal{G}}f(\xi,\eta).$ (3.14) The left-hand side is the Fourier cosine transform of $\widehat{f}_{1}(\xi,z)$ with respect to the variable $z$. We can then extract $\widehat{f}_{1}(\xi,z)$ by applying the inverse cosine transform to (3.14): $\widehat{f}_{1}(\xi,z)=\frac{1}{2}\int_{0}^{\infty}d\sigma\mathcal{H}_{0}\widehat{\mathcal{G}}f(\xi,\sqrt{\xi^{2}+\sigma^{2}})\cos{(z\sigma)}\sigma$ (3.15) where $\sigma=\sqrt{\eta^{2}-\xi^{2}}$. The final equation (3.8) is obtained by going back to the function $f$ and applying the inverse Fourier transform. ∎ ###### Remark 1. The projections $\mathcal{\widehat{G}}f$ (3.7) contain zeros in the denominator, since the cosine function vanishes when $\xi\sqrt{r^{2}-1}=2k\pi\pm\frac{\pi}{2}$, $k\in\mathbb{Z}$. From (3.11) to the end of the proof, it was supposed that $r$ differs from $\sqrt{1+\left(\frac{\pi}{2\xi}(2k+1)\right)^{2}}$. Furthermore, this may be a source of instability in the simulations. The addition of a regularization parameter for the simulations is discussed in Section 4.2 to prevent this. ###### Remark 2. Another reconstruction algorithm is also possible from the projections $\mathcal{G}f$. This approach relies on geometric inversion. Geometric inversion is a mapping converting a point $X$ into a point $\tilde{X}$ such that $\tilde{X}X^{T}=q^{2}$, where $q\in\mathbb{R}_{+}^{*}$ is a constant value. The mapped point $\tilde{X}$ has the same direction as the original point $X$ but lies at a distance $q^{2}/||X||$ from the origin of the considered coordinate system. As an example, geometric inversion converts circles passing through the origin into straight lines. In the present case, the Radon transform on double circle arcs is converted into a Radon transform on an apparent family of circle arcs with a geometry similar to the one studied in [19, 35]. Although the inverse problem can alternatively be solved using geometric inversion, the approach we employ here is more straightforward. ## 4 Numerical formulations for the forward and inverse transform ### 4.1 Image formation Let $N_{SD}$ be the number of positions of the source-detector pair and $N_{r}$ the number of double scanning circle arcs per sensor position. We denote by $x_{0,k}$, $k\in\{1,...,N_{SD}\}$ and $r_{l}$, $l\in\{1,...,N_{r}\}$ the discrete variables corresponding respectively to $x_{0}$ and $r$. The matrix of projection data $\mathcal{R}_{\mathcal{D}}f(x_{0,k},r_{l})$ is then computed by writing (2.2) in discrete form, with the change of variables $z=r\sin{\theta}$: $\mathcal{R}_{\mathcal{D}}f(x_{0,k},r_{l})=r_{l}\,\Delta_{\theta}\;\cdot\\ \sum_{\theta\in\left[\arcsin{\left(\frac{1}{r_{l}}\right)},\frac{\pi}{2}\right]}\left(\sum_{j=1}^{2}f_{1}(x_{0,k}+\sqrt{r_{l}^{2}-1}+(-1)^{j}r_{l}\cos{\theta},r_{l}\sin{\theta})+f_{1}(x_{0,k}-\sqrt{r_{l}^{2}-1}-(-1)^{j}r_{l}\cos{\theta},r_{l}\sin{\theta})\right),$ (4.1) where $\Delta_{\theta}$ is the angular sampling step for $\theta$. 
The above Cartesian parameterization ensures a constant distance between successive running points of the scanning circle arcs during the simulations. ### 4.2 Image reconstruction For image reconstruction, we need to compute the projections $\mathcal{G}f$ in the Fourier domain according to (3.7). This expression contains zeros in the denominator, which may induce instabilities in the reconstruction. For the simulations, we therefore add a small regularization parameter denoted $\epsilon$: $\widehat{\mathcal{G}}f(\xi,r)=\frac{\widehat{\mathcal{R}}_{\mathcal{D}}f(\xi,r)}{2r}\frac{\cos{(\xi\sqrt{r^{2}-1})}}{\epsilon^{2}+\cos{(\xi\sqrt{r^{2}-1})}^{2}}.$ (4.2) In terms of computational cost, the most demanding step in the implementation of (3.8) is the calculation of $\mathcal{H}_{0}\widehat{\mathcal{G}}f$. The idea is to establish a relation between this operator and the Fourier transform of $\mathcal{G}^{\ddagger}f$, defined as follows: $\mathcal{G}^{\ddagger}f(x,z)=\int_{-\infty}^{\infty}\mathcal{G}f(x_{0},\sqrt{(x-x_{0})^{2}+z^{2}})\,dx_{0}.$ (4.3) We now have the following proposition. ###### Proposition 2. Let $(\sigma,\xi)\in[0,\infty[\times\mathbb{R}$. $\mathcal{H}_{0}\widehat{\mathcal{G}}f(\xi,\sqrt{\xi^{2}+\sigma^{2}})$ is related to the two-dimensional Fourier transform of the operator $\mathcal{G}^{\ddagger}$ as $\mathcal{H}_{0}\widehat{\mathcal{G}}f(\xi,\sqrt{\xi^{2}+\sigma^{2}})=2\pi\widehat{\mathcal{G}}^{\ddagger}f(\xi,\sigma).$ (4.4) Consequently, from the inversion formula (3.8), it follows in the Fourier domain that $\widehat{f}_{1}(\xi,\sigma)=2\pi^{2}|\sigma|\widehat{\mathcal{G}}^{\ddagger}f(\xi,\sigma).$ (4.5) where $\widehat{f}_{1}$ is the two-dimensional Fourier transform of $f_{1}$. ###### Proof. From the definitions of the Fourier and Hankel transforms and with the integral representation of the Bessel function, one gets $\displaystyle\mathcal{H}_{0}\widehat{\mathcal{G}}f(\xi,\sqrt{\xi^{2}+\sigma^{2}})$ $\displaystyle=\frac{1}{2\pi}\int_{-\infty}^{\infty}dx_{0}\int_{0}^{\infty}dr\,r\,\mathcal{G}f(x_{0},r)\left(\int_{-\pi}^{\pi}e^{ir\sqrt{\xi^{2}+\sigma^{2}}\sin(\theta)}d\theta\right)\,e^{-i\xi x_{0}}.$ (4.6) There is an angle $\phi{(\sigma,\xi)}\in[0,2\pi[$ which corresponds to the angular coordinate of the point $(\sigma,\xi)$, such that $\sigma=\sqrt{\xi^{2}+\sigma^{2}}\cos{(\phi{(\sigma,\xi)})}$ and $\xi=\sqrt{\xi^{2}+\sigma^{2}}\sin{(\phi{(\sigma,\xi)})}$. Using the periodicity of the trigonometric functions, it follows that $\mathcal{H}_{0}\widehat{\mathcal{G}}f(\xi,\sqrt{\xi^{2}+\sigma^{2}})=\frac{1}{2\pi}\int_{-\infty}^{\infty}dx_{0}\int_{0}^{\infty}dr\,r\,\mathcal{G}f(x_{0},r)\left(\int_{-\pi}^{\pi}e^{ir(\xi\sin{\theta}+\sigma\cos{\theta})}d\theta\right)\,e^{-i\xi x_{0}}.$ (4.7) Changing variables $x=r\cos{\theta}$ and $z=r\sin{\theta}$, $\mathcal{H}_{0}\widehat{\mathcal{G}}f(\xi,\sqrt{\xi^{2}+\sigma^{2}})=\frac{1}{2\pi}\int_{\mathbb{R}^{2}}dxdz\left(\int_{-\infty}^{\infty}dx_{0}\mathcal{G}f(x_{0},\sqrt{(x-x_{0})^{2}+z^{2}})\right)e^{-i(x\xi+z\sigma)}$ (4.8) The right-hand side of (4.8) is the two-dimensional Fourier transform of $\mathcal{G}^{\ddagger}$, weighted by $2\pi$ (4.4). We are now able to reformulate the inversion formula (3.15) as $\widehat{f}_{1}(\xi,z)=2\pi\int_{-\infty}^{\infty}d\sigma\widehat{\mathcal{G}}^{\ddagger}f(\xi,\sigma)e^{iz\sigma}|\sigma|.$ (4.9) Taking the Fourier transform of (4.9) with respect to the variable $z$ leads to (4.5). ∎ This leads to the reconstruction algorithm summed up in Algorithm 1. 
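Before the formal listing, here is a compact numerical sketch of this pipeline, assuming numpy only; the interpolation in step 3 is simplified to linear interpolation in $r$, the coordinate-flip convention $f_{1}(x,z)=f(x,2-z)$ is glossed over, and the helper name `reconstruct` is ours.

```python
import numpy as np

def reconstruct(RDf, x0, r, x_grid, z_grid, eps=0.01):
    """Sketch of Algorithm 1. RDf has shape (len(x0), len(r)) and contains
    R_D f(x0_k, r_l); r is assumed increasing with r >= 1."""
    dx0 = x0[1] - x0[0]
    # Step 1: 1D FFT of the data with respect to x0.
    xi = 2.0 * np.pi * np.fft.fftfreq(len(x0), d=dx0)       # frequencies dual to x0
    R_hat = np.fft.fft(RDf, axis=0)
    # Step 2: regularized filter (4.2), then inverse FFT to get Gf(x0, r).
    c = np.cos(np.outer(xi, np.sqrt(r**2 - 1.0)))
    G_hat = R_hat / (2.0 * r[None, :]) * c / (eps**2 + c**2)
    Gf = np.real(np.fft.ifft(G_hat, axis=0))
    # Step 3: back-projection (4.3): sum Gf(x0, sqrt((x - x0)^2 + z^2)) over x0,
    # interpolating linearly in the radial variable.
    X, Z = np.meshgrid(x_grid, z_grid, indexing="ij")
    G_bp = np.zeros_like(X)
    for k, x0k in enumerate(x0):
        rho = np.sqrt((X - x0k) ** 2 + Z ** 2)
        G_bp += np.interp(rho.ravel(), r, Gf[k], left=0.0, right=0.0).reshape(X.shape)
    G_bp *= dx0
    # Steps 4-5: 2D FFT, weighting by 2*pi^2*|sigma| (sigma dual to z), inverse FFT.
    sigma = 2.0 * np.pi * np.fft.fftfreq(len(z_grid), d=z_grid[1] - z_grid[0])
    F = np.fft.fft2(G_bp) * (2.0 * np.pi**2) * np.abs(sigma)[None, :]
    return np.real(np.fft.ifft2(F))
```

In this sketch, steps 1, 2, 4 and 5 are dominated by FFTs, so the overall cost is driven by the back-projection loop in step 3.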
Data: $\mathcal{R}_{\mathcal{D}}f(x_{0},r)$, projections of the function $f$ on double circular arcs Result: $f(x,y)$ 1 Compute the one-dimensional Fourier transform of $\mathcal{R}_{\mathcal{D}}f(x_{0},r)$ with respect to the first variable using the FFT; 2 Compute $\widehat{\mathcal{G}}f(\xi,r)$ according to (4.2) and perform the inverse FFT to recover $\mathcal{G}f(x_{0},r)$; 3 For each $x_{0}$, interpolate the obtained data and sum over all values of $x_{0}$ to obtain the back-projected data $\mathcal{G}^{\ddagger}f(x,z)$; 4 Perform the 2D FFT of $\mathcal{G}^{\ddagger}f(x,z)$ and weight by $2\pi^{2}|\sigma|$; 5 Compute the inverse FFT of the result to recover $f$; Algorithm 1 Reconstruction of object $f$ ## 5 Simulation results The original object used for the simulations is the Derenzo phantom, an object made of multiple circles of different sizes. The circles in the object also allow the study of the performance of the algorithm for different contrasts and spatial resolutions, as well as its ability to reconstruct features locally tangent to lines of any slope. The unit length used here is the pixel. We thus suppose that the distance between the source and the detector paths of the modality is two pixels. The size of the object is $N\times N=256\times 256$ pixels in all simulations. Furthermore, given the linear geometry of this modality, there is no loss of generality in considering the object centred on the $z$-axis. ### 5.1 Data acquisition Data measurement is calculated according to (4.1). Figure 4 shows an example of data obtained for the Derenzo phantom. Figure 4: (a) Original object: Derenzo phantom. (b) Corresponding acquired data for $N_{SD}=2048$ and $N_{r}=1024$. A distance of one pixel is left between the upper part of the image and the detector path ($\delta=1$, see 5.2.1). ### 5.2 Influence of some parameters on reconstruction quality In the following paragraphs, we study the influence of the other general parameters of the system, such as the position of the object, the number of required sensor positions, and the number of scanning circle arcs. In addition to a visual comparison of the reconstruction quality, we propose to quantify the error between the original object $f_{0}$ and the reconstruction $f$ with the Normalized Mean Squared Error (NMSE) $=||f-f_{0}||_{2}^{2}/N^{2}$, where $||.||_{2}$ refers to the $2$-norm. The regularization parameter $\epsilon$, which has to be small, was arbitrarily set to $0.01$. #### 5.2.1 Position of the object relative to the detector path We analyse here the influence of the position of the object on reconstruction quality. Firstly, the number of positions $N_{SD}$ of the source-detector pair and the number of scanning circles per position $N_{r}$ were chosen to largely satisfy the well-known condition [33] $N_{SD}\times N_{r}\geq N^{2}$ and were arbitrarily set to $N_{SD}=1024$ and $N_{r}=1024$. A convenient choice for these parameters will be discussed later. We performed several acquisitions, modifying the gap $\delta$ between the detector path and the upper part of the object (see Figure 3). Consequently, the object occupies the domain of Cartesian coordinates $x\in\left[-\frac{N}{2}+1,\frac{N}{2}\right]$, $z\in\left[-N-\delta,-\delta+1\right]$. Figure 5 shows the reconstruction results for $\delta=1,26$ and $51$ pixels, which correspond respectively to object positions in the domains $[-N-1,0]$, $[-N-26,-25]$ and $[-N-51,-50]$ along the $z$-axis. 
The difference between the three reconstructions lies at the top of the object: if the object is close to the line of movement of the detector, this part is less well reconstructed. Indeed, increasing the distance between the object and the detector path allows scanning arcs to be horizontally tangent to this part of the object. An offset of $\delta=51$ appears to be a good trade-off between the quality of reconstruction and the applicability of such a measurement in practice. For the rest of the simulations, $\delta$ is set to $51$. (a) Reconstruction for $\delta=1$ NMSE = 0.0112 (b) Reconstruction for $\delta=26$ NMSE = 0.0074 (c) Reconstruction for $\delta=51$ NMSE = 0.0061 Figure 5: Reconstruction results of the Derenzo phantom 4a for $\delta=1$ (a), $\delta=26$ (b) and $\delta=51$ (c) pixel(s). #### 5.2.2 Number of necessary positions for the source-detector pair We then studied the number of different positions required for a good reconstruction quality. The influence of two parameters is analysed: first, the farthest position $x_{0,max}$ of the source-detector pair from the object (that is, source and detector paths spanning $[-x_{0,max},x_{0,max}]$), and second, the distance $\Delta_{x_{0}}$ between two adjacent positions of the pair. Figures 6a, 6b and 6c show the reconstruction results when the farthest position of the source-detector pair from the object is respectively $2N,3N$ and $4N$, with a common $\Delta_{x_{0}}$ set to $1$. The reconstruction from the domain $[-x_{0,max},x_{0,max}]=[-2N,2N]$ appears blurred, with strong artefacts in the upper parts of the image. For $x_{0,max}=3N$ and $4N$, the reconstruction quality seems visually equivalent, even if the NMSE for $x_{0,max}=4N$ is higher. This may be due to numerical approximations. For the rest of the simulations, $x_{0,max}$ is set to $3N$. The influence of the distance $\Delta_{x_{0}}$ between two adjacent positions of the pair is now evaluated. Figures 6d, 6e and 6f show the results for $\Delta_{x_{0}}=2,1$ and $0.5$ pixels. This represents respectively $0.5,1$ and $2$ detectors per unit length. In Figure 6d $(\Delta_{x_{0}}=2)$, we can see streaks suggesting a lack of data for reconstructing the object. On the contrary, doubling the number of detectors between Fig. 6e and Fig. 6f does not bring a better quality of reconstruction. Consequently, the use of one detector per unit length seems to be a good trade-off, and this value is kept constant in the rest of the paper. (a) Reconstruction for $x_{0,max}=2N$. NMSE = 0.0084 (b) Reconstruction for $x_{0,max}=3N$. NMSE = 0.0049 (c) Reconstruction for $x_{0,max}=4N$. NMSE = 0.0061 (d) Reconstruction for $0.5$ detectors per unit length. NMSE = 0.0046 (e) Reconstruction for $1$ detector per unit length. NMSE = 0.0049 (f) Reconstruction for $2$ detectors per unit length. NMSE = 0.0049 Figure 6: Evaluation of the number of source-detector positions on reconstruction quality. First row: Reconstruction results of the Derenzo phantom 4a for $x_{0,max}=2N$ 6a, $3N$ 6b and $4N$ 6c where $\Delta_{x_{0}}=1$. Second row: Reconstruction results for $0.5$ 6d, $1$ 6e and $2$ 6f detectors per unit length, where $x_{0,max}=3N$ remains constant. #### 5.2.3 Number of scanning circles per position of the source-detector pair The number of scanning circles necessary for reconstruction is now under study. 
In the same way, two parameters are of interest: the maximum radius $r_{max}$ of the scanning double circle arcs and the discretization step $\Delta_{r}$. We first evaluate the consequences of the value of $r_{max}$ with three examples in Fig. 7a, 7b and 7c, where $r_{max}$ is set respectively to $2N,3N$ and $4N$ and $\Delta_{r}=1$. For $r_{max}=2N$, the reconstruction suggests a lack of data compared with the results obtained for $r_{max}=3N$ and $r_{max}=4N$. Notice the higher NMSE for $r_{max}=4N$, probably due to numerical approximations. Finally, we looked for the appropriate discretization step for $r$, setting $\Delta_{r}$ to $1,2$ and $4$. Reconstruction results are shown respectively in Fig. 7d, 7e and 7f. Reconstructions with a large discretization step exhibit blur. (a) Reconstruction for $r_{max}=2N$. NMSE = 0.0058 (b) Reconstruction for $r_{max}=3N$. NMSE = 0.0040 (c) Reconstruction for $r_{max}=4N$. NMSE = 0.0049 (d) Reconstruction for $\Delta_{r}=1$. NMSE = 0.0040 (e) Reconstruction for $\Delta_{r}=2$. NMSE = 0.0043 (f) Reconstruction for $\Delta_{r}=4$. NMSE = 0.0043 Figure 7: Evaluation of the number of scanning circles on reconstruction quality. First row: Reconstruction results of the Derenzo phantom 4a for $r_{max}=2N$ 7a, $3N$ 7b and $4N$ 7c where $\Delta_{r}=1$. Second row: Reconstruction results for $\Delta_{r}=1$ 7d, $2$ 7e and $4$ 7f, where $r_{max}=3N$ remains constant. #### 5.2.4 Discussion The above simulation results highlight some interesting properties of the CST modality and of the reconstruction quality that can be expected with such a system. First, the system is able to scan objects whose depth is largely oversized relative to the source-detector distance. However, this seems to be counterbalanced by the need for long source and detector linear paths, since satisfactory reconstruction results are obtained for source and detector paths about six times longer than the object. The necessary number of scanning circle arcs is also very important: if we relate the values of $r$ to the scattering angles, the reconstruction quality is largely improved when $r_{max}$ is high, even though the angular distance between two successive values of $r$ is then very small. Considering a larger distance between the source and the detector paths of the system may partially reduce the required amount of projection data. Moreover, the geometry offers a sufficient reconstruction quality for features tangent to lines of arbitrary slope. However, one can notice that vertical slopes are slightly less well reconstructed than the others. This can be seen by paying careful attention to the left and right sides of the reconstruction. This may be problematic if the object to be scanned is essentially made of vertical features. One way to avoid this is to perform additional scans by rotating the object, if possible. Some artefacts that look like shadows around the circles which compose the object remain clearly visible. The issue of artefacts in CST has already been addressed in different ways, for instance in [36], where microlocal analysis was employed to alleviate artefacts in reconstructions from limited data. Moreover, in [37], a penalized iterative algorithm was developed for a mixed modality. Regarding our approach, it can be combined in a pipeline with post-processing stages based on machine learning, as we did in [38] for limited data issues in classical computed tomography. 
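To make the remark on scattering angles concrete, a short sanity check (using the relation $\omega=\pi-\arcsin(1/r)$ from Section 2.2 and the values $N=256$, $r_{max}=3N$, $\Delta_{r}=1$ used above) shows how quickly the angular spacing between consecutive radii collapses:

```python
import numpy as np

N = 256
r = np.arange(1.0, 3 * N + 1.0)        # radii up to r_max = 3N, step Delta_r = 1
omega = np.pi - np.arcsin(1.0 / r)     # corresponding scattering angles
gap = np.degrees(np.diff(omega))       # angular spacing between consecutive radii
print(gap[0], gap[-1])                 # ~60 degrees between r = 1 and r = 2, ~1e-4 degrees near r = 3N
```

Most of the arcs added at large $r$ thus correspond to nearly identical scattering angles, which illustrates how demanding the sampling requirement noted above is.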
Some work is on the way. ## 6 Concluding remarks In this article, we proposed a new reconstruction algorithm for a recently proposed CST modality with translational geometry. The algorithm can be numerically implemented efficiently and reconstructions exhibit good quality. A quantitative study of the required data has also been carried out. This study proved the ability for such a CST system to reconstruct larger objects than the system itself. This advantage is moderated by the fact that it is necessary to have an important amount of source-detector positions, which can make the acquisition time longer. An interesting issue of this work concerns a reconstruction algorithm for the three-dimensional extension of this system with planar paths for the source and the detector. ## 7 Acknowledgements We would like to thank Prof. T. T. Truong for stimulating discussions. C. Tarpau research work is supported by grants from Région Île-de-France (in Mathematics and Innovation) 2018-2021 and LabEx MME-DII (Modèles Mathématiques et Économiques de la Dynamique, de l’Incertitude et des Interactions) (No. ANR-11-LBX-0023-01). J. Cebeiro research work is supported by a postdoctoral grant from the University of San Martín. He is also partially supported by SOARD-AFOSR (grant number FA9550-18-1-0523). ## References * [1] P. Lale, “The examination of internal tissues, using gamma-ray scatter with a possible extension to megavoltage radiography,” _Physics in Medicine & Biology_, vol. 4, no. 2, p. 159, 1959. * [2] R. Clarke and G. Van Dyk, “Compton-scattered gamma rays in diagnostic radiography,” in _Medical Radioisotope Scintigraphy. VI Proceedings of a Symposium on Medical Radioisotope Scintigraphy_ , 1969. * [3] F. Farmer and M. P. Collins, “A new approach to the determination of anatomical cross-sections of the body by compton scattering of gamma-rays,” _Physics in Medicine & Biology_, vol. 16, no. 4, p. 577, 1971. * [4] G. Redler, K. C. Jones, A. Templeton, D. Bernard, J. Turian, and J. C. Chu, “Compton scatter imaging: A promising modality for image guidance in lung stereotactic body radiation therapy,” _Medical physics_ , vol. 45, no. 3, pp. 1233–1240, 2018. * [5] K. C. Jones, G. Redler, A. Templeton, D. Bernard, J. V. Turian, and J. C. Chu, “Characterization of Compton-scatter imaging with an analytical simulation method,” _Physics in Medicine & Biology_, vol. 63, no. 2, p. 025016, 2018\. * [6] S. Gautam, F. Hopkins, R. Klinksiek, and I. Morgan, “Compton interaction tomography I. Feasibility studies for applications in earthquake engineering,” _IEEE Transactions on Nuclear Science_ , vol. 30, no. 2, pp. 1680–1684, 1983. * [7] P. G. Prado, M. K. Nguyen, L. Dumas, and S. X. Cohen, “Three-dimensional imaging of flat natural and cultural heritage objects by a Compton scattering modality,” _Journal of Electronic Imaging_ , vol. 26, no. 1, p. 011026, 2017. * [8] G. Harding and E. Harding, “Compton scatter imaging: A tool for historical exploration,” _Applied Radiation and Isotopes_ , vol. 68, no. 6, pp. 993–1005, 2010. * [9] E. M. Hussein, M. Desrosiers, and E. J. Waller, “On the use of radiation scattering for the detection of landmines,” _Radiation Physics and Chemistry_ , vol. 73, no. 1, pp. 7–19, 2005. * [10] P. E. Cruvinel and F. A. Balogun, “Compton scattering tomography for agricultural measurements,” _Engenharia Agricola_ , vol. 26, no. 1, pp. 151–160, 2006. * [11] T. T. Truong and M. K. 
Nguyen, “Recent developments on Compton scatter tomography: theory and numerical simulations,” in _Numerical Simulation-From Theory to Industry_. IntechOpen, 2012. * [12] M. K. Nguyen and T. T. Truong, _Imagerie par rayonnement gamma diffusé_. Hermès Science, 2006\. * [13] J. Radon, “Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser Mannigfaltigkeiten,” _Akad. Wiss._ , vol. 69, pp. 262–277, 1917. * [14] A. M. Cormack, “Representation of a function by its line integrals, with some radiological applications,” _Journal of Applied Physics_ , vol. 34, no. 9, pp. 2722–2727, 1963. * [15] S. J. Norton, “Compton scattering tomography,” _Journal of applied physics_ , vol. 76, no. 4, pp. 2007–2015, 1994. * [16] M. K. Nguyen and T. T. Truong, “Inversion of a new circular-arc Radon transform for Compton scattering tomography,” _Inverse Problems_ , vol. 26, no. 6, p. 065005, 2010. * [17] G. Rigaud, M. K. Nguyen, and A. K. Louis, “Novel numerical inversions of two circular-arc Radon transforms in Compton scattering tomography,” _Inverse Problems in Science and Engineering_ , vol. 20, no. 6, pp. 809–839, 2012. * [18] G. Rigaud, R. Régnier, M. K. Nguyen, and H. Zaidi, “Combined modalities of Compton scattering tomography,” _IEEE Transactions on Nuclear Science_ , vol. 60, no. 3, pp. 1570–1577, 2013. * [19] T. T. Truong and M. K. Nguyen, “Radon transforms on generalized Cormack’s curves and a new Compton scatter tomography modality,” _Inverse Problems_ , vol. 27, no. 12, p. 125001, 2011. * [20] C. Tarpau, J. Cebeiro, M. Morvidone, and M. K. Nguyen, “A new concept of Compton Scattering tomography and the development of the corresponding circular Radon transform,” _IEEE Transactions on Radiation and Plasma Medical Sciences_ , vol. (accepted for publication), 2019, [10.1109/TRPMS.2019.2943555]. * [21] C. Tarpau and M. K. Nguyen, “Compton scattering imaging system with two scanning configurations,” _Journal of Electronic Imaging_ , vol. 29, no. 1, p. 013005, 2020. * [22] J. Cebeiro, M. K. Nguyen, M. Morvidone, and C. Tarpau, “An interior Compton Scatter Tomography,” in _25th IEEE Nuclear Science Symposium and Medical Imaging Conference 2018 (IEEE NSS/MIC’18)_ , Sydney, Australia, Nov. 2018\. * [23] G. Rigaud, “Compton Scattering Tomography: Feature Reconstruction and Rotation-Free Modality,” _SIAM Journal on Imaging Sciences_ , vol. 10, no. 4, pp. 2217–2249, 2017. [Online]. Available: https://doi.org/10.1137/17M1120105 * [24] C. Tarpau, J. Cebeiro, M. K. Nguyen, G. Rollet, and M. A. Morvidone, “Analytic inversion of a Radon transform on double circular arcs with applications in Compton Scattering Tomography,” _IEEE Transactions on Computational Imaging_ , vol. 6, pp. 958–967, 2020. * [25] J. Webber and E. L. Miller, “Compton scattering tomography in translational geometries,” _Inverse Problems_ , vol. 36, no. 2, p. 025007, 2020. * [26] J. W. Webber and W. R. Lionheart, “Three dimensional Compton scattering tomography,” _Inverse Problems_ , vol. 34, no. 8, p. 084001, 2018. * [27] G. Rigaud and B. N. Hahn, “3D Compton scattering imaging and contour reconstruction for a class of Radon transforms,” _Inverse Problems_ , vol. 34, no. 7, p. 075004, 2018. * [28] J. Cebeiro, C. Tarpau, M. A. Morvidone, D. Rubio, and M. K. Nguyen, “On a three dimensional Compton scattering tomography system with fixed source,” _Inverse Problems_ , vol. 37, no. 5, p. 054001, 2021. * [29] J. W. Webber and S. Holman, “Microlocal analysis of a spindle transform,” _AIMS Inverse Problems and Imaging_ , vol. 13, no. 
2, pp. 231–261, 2019. [Online]. Available: http://aimsciences.org//article/id/7ad5560c-e076-4384-9e9d-1dab4121da6d * [30] J. Cebeiro, M. K. Nguyen, M. Morvidone, and A. Noumowé, “New “improved” Compton scatter tomography modality for investigative imaging of one-sided large objects,” _Inverse Problems in Science and Engineering_ , vol. 25, no. 11, pp. 1676–1696, 2017. * [31] T. Truong and M. Nguyen, “Compton scatter tomography in annular domains,” _Inverse Problems_ , vol. 35, no. 5, p. 054005, 2019. * [32] I. S. Gradshteyn, I. M. Ryzhik, D. Zwillinger, and V. Moll, _Table of integrals, series, and products; 8th ed._ Amsterdam: Academic Press, Sep 2014. [Online]. Available: https://cds.cern.ch/record/1702455 * [33] R. N. Bracewell, “Numerical transforms,” _Science_ , vol. 248, no. II May, pp. 697–704, 1990. * [34] H. Bateman, _Tables of Integral Transforms_. New York: McGraw-Hill Book Compagny, 1954, vol. 1. * [35] T. T. Truong, “Function reconstruction from reflection symmetric radon data,” _Symmetry_ , vol. 12, no. 6, 2020. [Online]. Available: https://www.mdpi.com/2073-8994/12/6/956 * [36] J. W. Webber and E. T. Quinto, “Microlocal analysis of a compton tomography problem,” _SIAM Journal on Imaging Sciences_ , vol. 13, no. 2, pp. 746–774, 2020. * [37] J. W. Webber, E. T. Quinto, and E. L. Miller, “A joint reconstruction and lambda tomography regularization technique for energy-resolved x-ray imaging,” vol. 36, no. 7, p. 074002, jul 2020. [Online]. Available: https://doi.org/10.1088/1361-6420/ab8f82 * [38] I. Ayad, C. Tarpau, M. K. Nguyen, and N. S. Vu, “Deep morphological network-based artifact suppression for limited-angle tomography,” in _Proceedings of the 25th International Conference on Image Processing, Computer Vision and Pattern Recognition (IPCV’21)_ , Las Vegas, United States, Jul. 2021.
and hence \begin{align}\Prob_{\Policy}(\AOHistoryRV_{i,t}= \AOHistory_{i,t}) &= \sum_{\History_t\in\HistorySet^\DecP_t} \Prob_\Policy(\AOHistoryRV_{i,t}=\AOHistory_{i,t}\mid \HistoryRV_{t}=\History_t)\Prob_\Policy(\HistoryRV^\DecP_t=\History_t) \\&\overset{(\ref{eq:21})}{=} \sum_{\History_t\in\HistorySet^\DecP_t} \Prob_\Policy(\AOHistoryRV_{i,t}=\AOHistory_{i,t}\mid \HistoryRV_{t}=\History_t)\Prob_{\Isom^*\Policy}(\HistoryRV^\DecP_t=\Isom\History_t) \\&\overset{(\ref{eq:22})}{=} \sum_{\History_t\in\HistorySet^\DecP_t} \Prob_{\Isom^*\Policy}(\AOHistoryRV_{\Isom i,t}=\Isom\AOHistory_{i,t}\mid \HistoryRV_{t}=\Isom\History_t) \Prob_{\Isom^*\Policy}(\HistoryRV^\DecP_t=\Isom\History_t) \\&= \sum_{\History_t\in\Isom^{-1}\left(\HistorySet^\DecP_t\right)} \Prob_{\Isom^*\Policy}(\AOHistoryRV_{\Isom i,t}=\Isom\AOHistory_{i,t}\mid \HistoryRV_{t}=\History_t) \Prob_{\Isom^*\Policy}(\HistoryRV^\DecP_t=\History_t) \\&=\Prob_{\Isom^*\Policy}(\AOHistoryRV_{\Isom i,t}=\Isom\AOHistory_{i,t})\label{eq:23}. \end{align} In line (<ref>), we have used that, by Corollary <ref>, \(\Isom\) is a bijective map when applied to histories, and thus \(\Isom^{-1}\left(\HistorySet^\DecP_t\right)=\HistorySet^\DecPSecond_t\). This concludes the proof. It is an immediate corollary that the expected return of a policy is not changed by the pushforward. Let \(\DecP,\DecPSecond\) be Dec-POMDPs, let \(\Isom\in\Iso(\DecP,\DecPSecond)\), and let \(\Policy\in\PolicySet^\DecP\). Then \[J^{\DecP}(\Policy)=J^\DecPSecond(\Isom^*\Policy).\] By Corollary <ref> and Theorem <ref>, it is \begin{multline} \Prob_\Policy(\Isom\left(\HistoryRV^\DecP\right)=\History^\DecPSecond) \overset{\text{Corollary~\ref{corollary-action-isomorphism-is-bijective-histories}}}{=} \Prob_\Policy(\HistoryRV^\DecP=\Isom^{-1}\History^\DecPSecond) \\ \overset{\text{Theorem~\ref{lem-pull-back-isomorphism-compatibility}}}{=} \Prob_{\Isom^*\Policy}(\HistoryRV^\DecPSecond = \Isom(\Isom^{-1}\History^\DecPSecond)) =\Prob_{\Isom^*\Policy}(\HistoryRV^\DecPSecond = \History^\DecPSecond) \end{multline} for any history \(\History^\DecPSecond\in\HistorySet^\DecPSecond\). This shows that the random variable \(\Isom\left(\HistoryRV^\DecP\right)\) has the same image distribution under \(\Prob_\Policy\) as the variable \(\HistoryRV^\DecPSecond\) under \(\Prob_{\Isom^*\Policy}\). In particular, this means that for any \(t=0,\dotsc,\Tmax\), the variables \(\Isom(\RewardRV_t^\DecP)=\RewardRV^\DecP_t\) and \(\RewardRV^\DecPSecond_t\) have the same distribution in the respective probability spaces (*). Using the definition of the expected return, it follows that \begin{equation}J^{\DecP}(\Policy)=\E_{\Policy}\left[\sum_{t=0}^\Tmax \RewardRV^\DecP_t\right] \overset{\text{(*)}}{=} \E_{\Isom^*\Policy}\left[\sum_{t=0}^\Tmax \RewardRV^\DecPSecond_t\right]=J^\DecPSecond(\Isom^*\Policy).\end{equation} §.§ Relation to group theory The study of symmetries is a focus of group theory, and the concepts introduced above hence correspond to group-theoretic notions. For instance, as we show below, \(\Aut(\DecP)\) is a group, and its elements act on the elements of a Dec-POMDP in the sense of group actions. We discuss this here as we will need these results later, in the discussion of symmetric profiles of learning algorithms in Appendix <ref>, as well as in the discussion of random tie-breaking functions in Appendix <ref>. For a reference on the group-theoretic concepts discussed here, see [rotman2012introduction, ch. 3]. We begin by showing that \(\Aut(\DecP)\) is a group. Let \(\DecP\) be a Dec-POMDP. 
Then \((\Aut(\DecP),\circ)\) is a group, where \(\circ\) is the function composition. First, we show that the binary operation \[ \circ\colon \Aut(\DecP)\times\Aut(\DecP)\rightarrow\Aut(\DecP),(\Auto,\AutoSecond)\mapsto\Auto\circ\AutoSecond\] is well-defined. By Equation <ref>, an automorphism \(\Auto\in\Aut(\DecP)\) is a bijective self-map, so we can compose any two automorphisms \(\Auto,\AutoSecond\in\Aut(\DecP)\). Moreover, by Lemma <ref>, for any \(\Auto\in\Aut(\DecP),\AutoSecond\in\Aut(\DecP)\), we also have \(\AutoSecond\circ\Auto\in\Iso(\DecP,\DecP)=\Aut(\DecP)\). This shows that \(\Aut(\DecP)\) is closed under function composition. Second, note that \(\circ\) is an associative operation as function composition is associative. Moreover, for the identity map \(\Id\), it is \(\Id\circ \Auto=\Auto\) for any \(\Auto\in\Aut(\DecP)\), so \(\Aut(\DecP)\) has a neutral element. Lastly, by Lemma <ref>, it is also \(\Auto^{-1}\in \Aut(\DecP)\), and since \(\Auto^{-1}\circ\Auto=\Id\), this implies that \(\Auto\) has an inverse in \(\Aut(\DecP)\). This concludes the proof. Next, we turn to group actions, which formalize the idea that elements of groups can be applied to sets. In the case of symmetry groups, this connects the abstract group elements with their role as transformation of an underlying set. For instance, consider the set \(X\) of the vertices of an equilateral triangle in \(\mathbb{R}^2\) and the cyclic group \(\faktor{\mathbb{Z}}{3\mathbb{Z}}\). Each element of \(\faktor{\mathbb{Z}}{3\mathbb{Z}}\) can be regarded as a rotation of the vertices of the triangle, mapping one vertex to another. Let \((G,\cdot)\) be a group with identity \(\Id\) and let \(X\) be any set. A group action is defined as a map \(\alpha\colon G\times X\rightarrow X\) such that (i) Identity: \(\alpha(\Id,x)=x\) for any \(x\in X\) (ii) Compatibility: \(\alpha(f, \alpha(g,x)) = \alpha(f\cdot g,x)\) for any \(f,g\in G\), \(x\in X\). It is common to write \(\Auto x:=\alpha(\Auto,x)\) for \(\Auto\in G\), \(x\in X\), if it is clear which group action is referred to. We have already proven these two properties for isomorphisms and both their actions on joint actions and observations, as well as the pushforward of policies, in Lemma <ref> and Lemma <ref>, respectively. Hence, it follows that also \(\Aut(\DecP)\) acts on these sets in the sense of group actions. Let \(\DecP\) be a Dec-POMDP. The actions of \(\Aut(\DecP)\) on \(\ActionSet\) and \(\ObservationSet\), defined as \(\alpha_A\colon (\Auto,\Action)\mapsto\Auto_A(\Action)\) respectively \(\alpha_O\colon (\Auto,\Observation)\mapsto\Auto_O(\Observation)\) as in Equations <ref> and <ref> are group actions. Similarly, the pushforward of policies by automorphisms \((\Auto,\Policy)\mapsto\Auto^*\Policy\) as defined in Definition <ref> is an action of \(\Aut(\DecP)\) on \(\PolicySet^\DecP\). This follows directly from Lemma <ref> and Lemma <ref>. Of course, automorphisms also act on states and agents, and one can also easily see that they act on histories and action-observation histories. Some further results immediately follow from this, such as the fact that \(\PlayerSet\) decomposes into equivalence classes of orbits under \(\Aut(\DecP)\). The orbit of agent \(i\) is defined as the set of all agents \(j\) that can be obtained from \(i\) by applying automorphisms. Let \(\DecP\) be a Dec-POMDP and assume that \(\Aut(\DecP)\) acts on the set \(X\). 
Then for \(x\in X\), the set \[\Aut(\DecP)x:=\{\Auto x\mid\Auto\in\Aut(\DecP)\}\] is called the orbit of \(x\) under \(\Aut(\DecP)\). For instance, for \(i\in\PlayerSet\), the set \(\Aut(\DecP)i:=\{\Auto i\mid\Auto\in\Aut(\DecP)\}\) is the orbit of agent \(i\) under \(\Aut(\DecP)\). It is a standard result from group theory that the orbits form a partition \(\{\Aut(\DecP)i\mid i\in\PlayerSet\}\subseteq\PowerSet(\PlayerSet)\) of the set. This follows from the fact that, since group actions have an identity and are invertible, belonging to the same orbit is an equivalence relation. For example, in the case of the triangle in \(\mathbb{R}^2\), all the vertices in \(V\) can be reached from any other vertex by rotations, so they all belong to the same orbit. In general, though, the orbits may form any other partition of the set. §.§ Dec-POMDP labelings and relabeled Dec-POMDPs In the following, assume that a Dec-POMDP \(\DecP\) is given. Here, we want to define a set of isomorphic Dec-POMDPs \(\DecPSet\) as described in Section <ref>, in which the sets of states, actions, etc. are of the form \(\{1,2,\dotsc,k-1,k\}\subseteq\mathbb{N},k\in\mathbb{N}\). This set can then be used to define the LFC game for \(\DecP\) in a way that does not depend on labels. We begin by defining a labeling of \(\DecP\). A labeling \(\Isom\) is a special Dec-POMDP isomorphism from \(\DecP\) to another, relabeled Dec-POMDP, that can be constructed using \(\Isom\). A Dec-POMDP labeling is a tuple of bijective maps \[\Isom:=(\Isom_{N},\Isom_{S},(\Isom_{A_i})_{i\in\PlayerSet},(\Isom_{O_i})_{i\in\PlayerSet}),\] where \(\Isom_{N}\colon\PlayerSet\rightarrow\{1,\dotsc,|\PlayerSet|\}\), \(\Isom_{S}\colon\StateSet\rightarrow\{1,\dotsc,|\StateSet|\}\), \(\Isom_{A_i}\colon\ActionSet_i\rightarrow\{1,\dotsc,|\ActionSet_i|\}\) for all \(i\in\PlayerSet\), and \(\Isom_{O_i}\colon\ObservationSet_i\rightarrow\{1,\dotsc,|\ObservationSet_i|\}\) for all \(i\in\PlayerSet\). We denote by \(\Sym(\DecP)\) the set of labelings of \(\DecP\). Note that if \(X\) is some set, then \(\Sym(X)\) usually denotes the symmetric group of \(X\). The symmetric group is the set of permutations of \(X\), together with the operation of function composition. We use the same notation, as \(\Sym(\DecP)\) can be understood as containing all the permutations of the different sets that \(\DecP\) consists of, with the caveat that we first map those sets to subsets of the first \(k\) natural numbers. This is done for simplification, especially regarding the treatment of the individual action and observation sets of different agents. Next, we introduce the pushforward Dec-POMDP \(\Isom^*\DecP\) for a labeling \(\Isom\in\Sym(\DecP)\). This is a Dec-POMDP that is isomorphic to \(\DecP\), with isomorphism \(\Isom\). In the following, we let \(\Isom\in\Sym(\DecP)\) act on joint actions, observations, etc., in the same way as before for isomorphisms. For instance, for \(\Action\in\ActionSet\), it is \(\Isom\Action:=(\Isom_{A_{\Isom_N^{-1}(i)}}(\Action_{\Isom_N^{-1}(i)}))_{i\in \{1,\dotsc,|\PlayerSet|\}}\). The compatibility of these actions with function composition and function inversion trivially still holds. Let \(\Isom\in\Sym(\DecP)\). Let \(N\in\mathbb{N}\) such that \(\{1,\dotsc,N\}=\PlayerSet\). The pushforward of \(\DecP\) by \(\Isom\), called a relabeled Dec-POMDP, is the Dec-POMDP \[\Isom^*\DecP:=\left(\hat{\PlayerSet},\hat{\StateSet},(\hat{\ActionSet}_{i})_{i\in\hat{\PlayerSet}},\hat{P},\hat{\RewardFunction},(\hat{\ObservationSet}_{i})_{i\in\hat{\PlayerSet}},\hat{O},\hat{b}_{0},\hat{\Tmax}\right),\] where * \(\hat{\PlayerSet}:=\{1,\dotsc,N\}=\PlayerSet\). * \(\hat{\StateSet}:=\{1,\dotsc,|\StateSet|\}\). * $\hat{\ActionSet}_{i}:=\{1,\dotsc,|\ActionSet_{\Isom^{-1}i}|\}$ for \(i\in \hat{\PlayerSet}\). * $\hat{P}(s'\mid s,a):=P(\Isom^{-1}s'\mid \Isom^{-1}s,\Isom^{-1}a)$ for \(s',s\in\hat{\StateSet},\Action\in\hat{\ActionSet}\). * $\hat{\RewardFunction}(s,a):=\RewardFunction(\Isom^{-1}s,\Isom^{-1}a)$ for \(\State\in\hat{\StateSet},\Action\in\hat{\ActionSet}\). 
* $\hat{\ObservationSet}_{i}:=\{1,\dotsc,|\ObservationSet_{\Isom^{-1}i}|\}$ for \(i\in \hat{\PlayerSet}\). * $\hat{O}(o\mid s,a):=O(\Isom^{-1}o\mid \Isom^{-1}s,\Isom^{-1}a)$ for \(\Observation\in\hat{\ObservationSet},\State\in\hat{\StateSet},\Action\in\hat{\ActionSet}\). * $\hat{b}_{0}(s):=b_{0}(\Isom^{-1}s)$ for \(\State\in\hat{\StateSet}\). * \(\hat{\Tmax}:=\Tmax\). First, we have to check that this is well-defined, e.g., that \(\Isom^{-1}\Action\in\ActionSet\) for any \(\Action\in\hat{\ActionSet}\). For states, it is clear from the definition of \(\Sym(\DecP)\) that \(\Isom^{-1}(\hat{\StateSet})=\Isom^{-1}(\{1,\dotsc,|\StateSet|\})=\StateSet\). Moreover, the same applies to agents, i.e., \(\Isom^{-1}i\in \PlayerSet\) for any \(i\in\{1,\dotsc,|\PlayerSet|\}\). This leaves joint actions and observations. For $\hat{\ActionSet},\hat{\ObservationSet}$ as defined above, it is $\Isom^{-1}(\hat{\ActionSet})=\ActionSet$ and $\Isom^{-1}(\hat{\ObservationSet})=\ObservationSet$. Let $\hat{a}\in\hat{\ActionSet}$. Then for any \(i\in \hat{\PlayerSet}\), we can define \(j\in\PlayerSet\) and \(\Action_{j}\in\ActionSet_{j}\) such that \(\Isom j = i\) and \(\Isom \Action_j = \hat{\Action}_i\). It follows that \[\Isom^{-1}\hat{\Action}= \left(\Isom^{-1} \hat{\Action}_{\Isom j}\right)_{j\in \PlayerSet} = \left( \Isom^{-1}\Isom\Action_{j}\right)_{j\in\PlayerSet} = \left(\Action_{j}\right)_{j\in\PlayerSet} = \Action\in\ActionSet.\] The same argument works for $\hat{o}\in\hat{\ObservationSet}$. Importantly, it can be \(\Isom^*\DecP\neq\DecP\) for a labeling \(\Isom\in\Sym(\DecP)\). Nevertheless, it is easy to see from the definitions that \(\Isom^*\DecP\) is actually isomorphic to \(\DecP\), with isomorphism \(\Isom\). This also implies that it is \(\Isom^*\DecP=\DecP\) if and only if \(\Isom\) is an automorphism. For any \(\Isom\in \Sym(\DecP)\), it is \(\Isom\in\Iso(\DecP,\Isom^*\DecP)\). This follows directly from the definition of an isomorphism, together with Lemma <ref>. For instance, considering transition probabilities, it is \[P(s'\mid s,a)=P(\Isom^{-1}\Isom s'\mid \Isom^{-1}\Isom s, \Isom^{-1}\Isom a)=\hat{P}(\Isom s'\mid \Isom s, \Isom a) \] for any \(s',s\in\StateSet,\Action\in\ActionSet\). Similar calculations apply to all the other relevant functions. It follows as a corollary that \(\Isom^*\Policy\) is a policy for the Dec-POMDP \(\Isom^*\DecP\), where \(\Policy\in\PolicySet^\DecP,\Isom\in\Sym(\DecP)\). Note also that the results in Lemma <ref> still apply to the pushforward by labelings. Let \(\Isom\in\Sym(\DecP)\) and \(\Policy\in\PolicySet^\DecP\). Then \(\Isom^*\Policy\in\PolicySet^{\Isom^*\DecP}\). This follows from the definition of \(\Isom^*\Policy\) and Lemma <ref>. Lastly, we provide some further useful results about labelings. First, the set \(\Sym(\DecP)\) already contains all the isomorphisms in \(\Iso(\DecP,\Isom^*\DecP)\). We will need this result later to relate results about isomorphisms and automorphisms to the relabeled Dec-POMDPs used in an LFC game. Let \(\Isom\in\Sym(\DecP)\). Then \[\Iso(\DecP,\Isom^*\DecP)=\{\IsomSecond\in\Sym(\DecP)\mid \IsomSecond^*\DecP=\Isom^*\DecP\}.\] “\(\supseteq\)”: for any \(\IsomSecond\in\Sym(\DecP)\) such that \(\IsomSecond^*\DecP=\Isom^*\DecP\), it is also \(\Iso(\DecP,\IsomSecond^*\DecP)=\Iso(\DecP,\Isom^*\DecP)\), and thus it follows from Lemma <ref> that \(\IsomSecond\in \Iso(\DecP,\IsomSecond^*\DecP)=\Iso(\DecP,\Isom^*\DecP)\). “\(\subseteq\)”: Let \(\IsomHat\in \Iso(\DecP,\Isom^*\DecP)\). 
Note that the set of agents, states, and the individual action and observation sets in \(\Isom^*\DecP\) are all of the form \(\{1,\dotsc,k\}\), where \(k\in\mathbb{N}\) depends on the respective set. Now consider, for instance, the map \(\IsomHat_{A_i}\) for \(i\in\PlayerSet\). Then by the definition of an isomorphism, \(\IsomHat_{A_i}\) is a bijective map, and its domain and codomain are \(\ActionSet_i\) and \(\{1,\dotsc,k\}\) for some \(k\in\mathbb{N}\). Moreover, since \(\IsomHat_{A_i}\) is bijective, it must be \(k=|\ActionSet_i|\). Mutatis mutandis, the same applies to all of the other maps that are part of the tuple \(\IsomHat\). Hence, \(\IsomHat\) satisfies the definition of a Dec-POMDP labeling, so \(\IsomHat\in\Sym(\DecP)\). Next, it follows that \(\IsomHat\in\Iso(\DecP,\IsomHat^*\DecP)\) by Lemma <ref>, and thus \(\Id=\IsomHat\circ\IsomHat^{-1}\in \Iso(\Isom^*\DecP,\IsomHat^*\DecP)\) by the assumption and Lemma <ref>. Hence, using the definition of an isomorphism, it follows that also \(\Isom^*\DecP=\IsomHat^*\DecP\). This shows that \[\IsomHat\in\{\IsomSecond\in\Sym(\DecP)\mid \IsomSecond^*\DecP=\Isom^*\DecP\},\] which concludes the proof. Second, we show that labelings and pushforwards are compatible with composition with isomorphisms. Let \(\DecP,\DecPSecond\) be isomorphic Dec-POMDPs with \(\Isom\in\Iso(\DecP,\DecPSecond)\). Then \[\Sym(\DecP)=\Sym(\DecPSecond)\circ \Isom.\] Moreover, it is \(\IsomSecond^*\DecPSecond=(\IsomSecond\circ\Isom)^*\DecP\) for any \(\IsomSecond\in\Sym(\DecPSecond)\). First, let \(\IsomSecond\in\Sym(\DecP)\) and define \(\IsomHat:=\IsomSecond\circ\Isom^{-1}\). Note that \(\IsomHat\) has as components bijective maps whose domains and codomains satisfy the definition of a labeling of \(\DecPSecond\). Hence, \(\IsomHat\in\Sym(\DecPSecond)\). Next, let \(\IsomSecond\in\Sym(\DecPSecond)\). Then similarly, \(\IsomSecond\circ\Isom\) fulfills the requirements for a labeling in \(\Sym(\DecP)\). To prove the second statement, let again \(\IsomSecond\in\Sym(\DecPSecond)\). Note that since \(\DecP\) and \(\DecPSecond\) are isomorphic, they must have the same set of players and sets of states with the same cardinalities. Now let \(i\in\PlayerSet^\DecP\). Using the definition of an isomorphism and of a labeling, it is then \[\ActionSet^{\IsomSecond^*\DecPSecond}_i=\{1,\dotsc,|\ActionSet_{\IsomSecond^{-1}i}^\DecPSecond|\} =\{1,\dotsc,|\ActionSet_{\Isom^{-1}(\IsomSecond^{-1}i)}^\DecP|\}=\{1,\dotsc,|\ActionSet_{(\IsomSecond\circ\Isom)^{-1}i}^\DecP|\}= \ActionSet^{(\IsomSecond\circ\Isom)^*\DecP}_i.\] A similar argument applies to the sets \(\ObservationSet_i\) for \(i\in\PlayerSet\). Finally, let \(\State,\State'\in\StateSet^{\IsomSecond^*\DecPSecond},\Action\in\ActionSet^{\IsomSecond^*\DecPSecond}\). Using again the definition of an isomorphism and a labeling, it follows that \begin{multline}P^{\IsomSecond^*\DecPSecond}(\State'\mid\State,\Action) = P^\DecPSecond(\IsomSecond^{-1}\State'\mid\IsomSecond^{-1}\State,\IsomSecond^{-1}\Action) \\ = P^\DecP(\Isom^{-1}\IsomSecond^{-1}\State'\mid\Isom^{-1}\IsomSecond^{-1}\State,\Isom^{-1}\IsomSecond^{-1}\Action) = P^{(\IsomSecond\circ\Isom)^*\DecP}(\State'\mid\State,\Action).\end{multline} Again, an analogous argument applies to the observation probability kernel and the reward function, as well as the initial state distribution. This concludes the proof. §.§ The label-free coordination game and problem Here, we recall the definitions of an LFC game and of the LFC problem. To begin, we define a measure space of policies and recall the definition of a learning algorithm. 
For any Dec-POMDP \(\DecP\), let \(\Delta(\PolicySet^\DecP)\) be the set of measures on the space \((\PolicySet^\DecP,\mathcal{F}^\DecP)\), where \(\mathcal{F}^\DecP:=\otimes_{i\in\PlayerSet}\mathcal{F}^\DecP_i\) is a product \(\sigma\)-algebra and the \(\mathcal{F}_i^\DecP\subseteq\PowerSet(\PolicySet_i^\DecP)\) are \(\sigma\)-algebras that make the random variables and sets discussed in this paper measurable. For instance, for \(i\in\PlayerSet\), this could be the Borel \(\sigma\)-algebra with respect to the standard topology on \(\PolicySet_i^\DecP\) that comes from regarding \(\PolicySet_i^\DecP\) as a subset of \([0,1]^{\ActionSet^\DecP_i\times\AOHistorySet^\DecP_i}\). Although we do not investigate this here, all the relevant functions and sets should be measurable in that sense. Let \(\DecPSet\) be a finite set of Dec-POMDPs. A learning algorithm for \(\DecPSet\) is a map \[\LA\colon \DecPSet\rightarrow \bigcup_{\DecP\in\DecPSet}\Delta(\PolicySet^\DecP)\] such that \(\LA(\DecP)\in\Delta(\PolicySet^\DecP)\) for all \(\DecP\in\DecPSet\). We write \(\LASet^\DecPSet\) for the set of learning algorithms for \(\DecPSet\). Note that this definition is general enough to include planning algorithms that construct a policy directly from the environment dynamics, instead of incrementally updating a policy from experience. Nevertheless, here, we imagine that \(\LA(\DecP)\) is a policy that was trained by an RL algorithm, using a simulator of \(\DecP\). Note also that a learning algorithm can learn different joint policies in different training runs, which we formalize as outputting a measure over joint policies. Similarly to the case of policies, for a distribution \(\Distr\in\Delta(\PolicySet^\DecP)\) and an isomorphism \(\Isom\in\Iso(\DecP,\DecPSecond)\), we can define a pushforward distribution \(\Isom^*\Distr:=\Distr \circ (\Isom^*)^{-1}\in\Delta(\PolicySet^\DecPSecond)\), which is the image measure of \(\Distr\) under \(\Isom^*\). It is apparent that for two isomorphisms \(\Isom\in\Iso(\DecP,\DecPSecond),\IsomSecond\in\Iso(\DecPSecond,\DecPThird)\), it is \(\IsomSecond^*(\Isom^*\Distr)=(\IsomSecond\circ\Isom)^*\Distr\). In the following, for distributions \(\Distr^{(i)}\in\Delta(\PolicySet^\DecP)\) for \(i\in\PlayerSet\) and a bounded measurable function \(\eta\colon \PolicySet^\DecP\times\dotsb\times\PolicySet^\DecP\rightarrow\mathbb{R}\), we will use the notational shorthands \[\E_{\Policy^{(i)}\sim\Distr^{(i)},\,i\in\PlayerSet} \left[\eta(\Policy^{(1)},\dotsc,\Policy^{(N)})\right] := \E_{\Policy^{(1)}\sim\Distr^{(1)}}\left[\dotsb\E_{\Policy^{(N)}\sim\Distr^{(N)}}\left[\eta(\Policy^{(1)},\dotsc,\Policy^{(N)})\right]\dotsb\right] \] and \[\E_{\Policy^{(i)}\sim\Distr^{(i)}} \left[ \eta(\Policy^{(1)},\dotsc,\Policy^{(N)})\right]:=\int_{\PolicySet^\DecP}\eta(\Policy^{(1)},\dotsc,\Policy^{(N)})\mathrm{d}\Distr^{(i)}(\Policy^{(i)}).\] Note that by Fubini's theorem [williams1991probability, ch. 8], the iterated expectation above equals the expectation with respect to the product measure \(\Distr^{(1)}\otimes\dotsb\otimes\Distr^{(N)}\), so that, in particular, the order of integration does not matter. Now we define the LFC game for a Dec-POMDP. Let \(\DecP\) be a Dec-POMDP and define \(\DecPSet:=\{\Isom^*\DecP\mid\Isom\in\Sym(\DecP)\}\). The label-free coordination (LFC) game for \(\DecP\) is defined as a tuple \(\Gamma^\DecP:=(\PlayerSet^\DecP,(\LASet^\DecPSet)_{i\in\PlayerSet},(U^{\DecP})_{i\in\PlayerSet})\) where * \(\PlayerSet^\DecP\) is the set of players, called principals. * \(\LASet^\DecPSet\) is the set of strategies for all principals \(i\in\PlayerSet^\DecP\). 
* the common payoff for the strategy profile \(\LAProfile_1,\dotsc,\LAProfile_N\in\LASet^\DecPSet\) is \begin{multline}U^\DecP(\LAProfile):= \E_{\DecP_i\sim \U(\DecPSet),\,i\in\PlayerSet}\Big[\E_{\Isom_j\sim\U(\Iso(\DecP_j,\DecP)),\,j\in\PlayerSet}\Big[\\ \E_{\Policy^{(k)}\sim\Isom_k^*\LAProfile_k(\DecP_k),\,k\in\mathcal{N}} \Big[ J^{\DecP}\big((\Policy^{(i)}_{i})_{i\in\PlayerSet}\big)\Big]\Big]\Big],\end{multline} where \(\mathcal{U}(\DecPSet)\) is a uniform distribution over \(\DecPSet\) and \(\U(\Iso(\DecP_j,\DecP))\) a uniform distribution over \(\Iso(\DecP_j,\DecP)\). Note that the set of strategies \(\LASet^\DecPSet\) is continuous. The game \(\Gamma^\DecP\) could hence be considered a continuous game, which is a generalization of the concept of a normal-form game to continuous strategy spaces [see glicksberg1952generalization]. In continuous games, it is usually assumed that the set of strategies is compact and that the payoffs are continuous functions, which we believe does apply in our case. Moreover, we believe that there is some other normal-form game with finite strategy space such that the mixed strategies in that game correspond to the set of strategies \(\LASet^\DecPSet\) in \(\Gamma^\DecP\) [for a reference on these concepts from game theory, see osborne1994course, gibbons1992game]. In particular, one can see that the set of strategies \(\LASet^\DecPSet\) is already convex. We do not need any further characterization of an LFC game in the following, so we do not investigate issues such as compactness or convexity of the set of strategies. The formalism at hand was chosen primarily to work well with an intuitive formulation of the LFC problem and to suit our discussion of the OP algorithm. Next, to recall the definition of the LFC problem, let any set \(\DecPSetSecond\) of Dec-POMDPs be given, and denote \(\overline{\DecPSetSecond}:=\bigcup_{\DecP\in\DecPSetSecond}\DecPSet^\DecP\), where \(\DecPSet^\DecP:=\{\Isom^*\DecP\mid\Isom\in\Sym(\DecP)\}\) is the set of all relabeled problems of \(\DecP\). The LFC problem for \(\DecPSetSecond\) is then defined as the problem of finding one learning algorithm \(\LA\in\LASet^{\overline{\DecPSetSecond}}\) to be used by the principals in a randomly drawn game \(\Gamma^\DecP\) for \(\DecP\sim\U(\DecPSetSecond)\). Let \(\DecPSetSecond\) be any set of Dec-POMDPs. Define the objective \(U^\DecPSetSecond\colon \LASet^{\overline{\DecPSetSecond}}\rightarrow\mathbb{R}\) via \begin{equation} U^\DecPSetSecond(\LA):=\E_{\DecP\sim\U(\DecPSetSecond)}\left[U^{\DecP}(\LA,\dotsc,\LA)\right] \end{equation} for \(\LA\in\LASet^{\overline{\DecPSetSecond}}\). Then we define the label-free coordination (LFC) problem for \(\DecPSetSecond\) as the optimization problem \begin{equation} \max_{\LA\in\LASet^{\overline{\DecPSetSecond}}}U^\DecPSetSecond(\LA)\end{equation} and we call \(U^\DecPSetSecond(\LA)\) the value of \(\LA\) in the LFC problem for \(\DecPSetSecond\). If \(\DecPSetSecond=\{\DecP\}\), we write \(U^\DecP:=U^{\{\DecP\}}\) in a slight abuse of notation and refer to this as the LFC problem for \(\DecP\). The aim of the LFC problem is to find a general learning algorithm to recommend to principals in any LFC game. For this reason, we defined the problem here for a distribution over LFC games. However, a learning algorithm is optimal in the problem for a set of Dec-POMDPs if and only if it is optimal in the problem for each Dec-POMDP in that set. That is because, as one can easily see, the sets \(\DecPSet^\DecP,\DecPSet^\DecPSecond\) never overlap for two non-isomorphic Dec-POMDPs \(\DecP,\DecPSecond\), and we will show in Corollary <ref> that the LFC problems for two isomorphic Dec-POMDPs are identical. 
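As a concrete (and deliberately tiny) illustration of this objective, the following sketch estimates \(U^\DecP(\LA,\dotsc,\LA)\) by Monte Carlo for a one-shot, two-agent common-payoff game used as a stand-in Dec-POMDP, via the expression in terms of labelings derived below. Player permutations are ignored for brevity, and all names in the code are ours, not part of the formalism.

```python
import numpy as np

# A toy stand-in for a Dec-POMDP: a one-shot, two-agent common-payoff matrix game
# (a single-state, horizon-1 problem). Labelings are per-agent action permutations.
payoff = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.9]])   # two symmetric "levers" and one distinguishable lever

def relabel(payoff, perms):
    """Pushforward of the game by a labeling: permute each agent's actions."""
    p1, p2 = perms
    return payoff[np.ix_(p1, p2)]

def learner(payoff, rng):
    """A toy 'learning algorithm': pick a payoff-maximizing joint action, breaking ties randomly."""
    best = np.argwhere(payoff == payoff.max())
    return tuple(best[rng.integers(len(best))])

def lfc_value(payoff, n_samples=20000, seed=0):
    """Monte-Carlo estimate of U^D(sigma, sigma): each principal trains on an
    independently relabeled copy; the learned policies are mapped back to the
    common problem and each principal contributes its own agent's component."""
    rng = np.random.default_rng(seed)
    n1, n2 = payoff.shape
    total = 0.0
    for _ in range(n_samples):
        joint = []
        for i in range(2):                           # one independent labeling per principal
            p1, p2 = rng.permutation(n1), rng.permutation(n2)
            a = learner(relabel(payoff, (p1, p2)), rng)
            joint.append((p1, p2)[i][a[i]])          # map agent i's action back through the labeling
        total += payoff[joint[0], joint[1]]
    return total / n_samples

print(lfc_value(payoff))   # ~0.5 for this naive learner
```

In this toy, the naive argmax learner attains a value of only about 0.5 across independently relabeled copies, whereas an algorithm that always selects the distinguishable 0.9 entry would attain 0.9; this is the kind of gap the LFC problem is designed to measure.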
So to evaluate a learning algorithm in the LFC problem for a set of Dec-POMDPs, we can decompose the set into equivalence classes of isomorphic Dec-POMDPs and evaluate the learning algorithm separately for each of these classes. In the following, we will thus simplify our analysis and restrict ourselves entirely to problems defined for single Dec-POMDPs. Note that the objective in the LFC problem is then simply \begin{equation}U^\DecP(\LA)=U^{\{\DecP\}}(\LA)=\E_{\DecPSecond\sim\U(\{\DecP\})}\left[U^\DecPSecond(\LA,\dotsc,\LA)\right]=U^\DecP(\LA,\dotsc,\LA). \end{equation} If we then prove, e.g., that a learning algorithm is optimal in any such problem, it follows that it is also optimal for the problem defined for any set of Dec-POMDPs. Now we will provide two different expressions of the payoff in an LFC game. To that end, let \(\DecP\) be a Dec-POMDP and define \(\DecPSet:=\{\Isom^*\DecP\mid\Isom\in\Sym(\DecP)\}\). First, we provide an expression in terms of labelings. Let \(\LAProfile_1,\dotsc,\LAProfile_N\in\LASet^\DecPSet\). Then \begin{equation} U^{\DecP}(\LAProfile_1,\dotsc,\LAProfile_N)= \E_{\SymProfile\sim \U(\Sym(\DecP)^\PlayerSet)}\left[ \E_{\Policy^{(i)}\sim(\SymProfile_i^{-1})^*\LAProfile_i(\SymProfile_i^*\DecP),\,i\in\mathcal{N}} \left[ J^{\DecP}\big((\Policy^{(i)}_{i})_{i\in\PlayerSet}\big)\right]\right]. \end{equation} 
Hence, it is \begin{equation} \label{eq:507} \{\IsomSecond\mid\IsomSecond\in\Iso(\DecP_j,\DecP)\}=\{\IsomSecond^{-1}\mid\IsomSecond\in\Iso(\DecP,\DecP_j)\}.\end{equation} Using the above, it follows that \begin{align}&U^{\DecPSet}(\LAProfile_1,\dotsc,\LAProfile_N)\label{eq:36c} \\ \E_{\DecP_i\sim \U(\DecPSet),\,i\in\PlayerSet}\left[ \E_{\IsoProfile_j\sim\Iso(\DecP_j,\DecP), \,j\in\PlayerSet}\left[ \E_{\Policy^{(k)}\sim\SymProfile_k^*\LAProfile_k(\DecP_k),\,k\in\mathcal{N}} \left[ \\ \E_{\DecP_i\sim \U(\DecPSet),\,i\in\PlayerSet}\left[ \E_{\IsoProfile_j\in\Iso(\DecP,\DecP_j),\,j\in\PlayerSet}\left[ \E_{\Policy^{(k)}\sim(\IsoProfile_k^{-1})^*\LAProfile_k(\DecP_k),\,k\in\mathcal{N}} \left[ %\E_{\DecP_i\sim U(\DecPSet),\,i\in\PlayerSet}\left[ % M^{-1}\prod_{j\in\PlayerSet}\sum_{\IsoProfile_j\in\Iso(\DecP_j,\DecP)}\left[ % \E_{\Policy^{(k)}\sim\SymProfile_k^*\LAProfile_k(\DecP_k),\,k\in\mathcal{N}} % \left[ %\E_{\DecP_i\sim U(\DecPSet),\,i\in\PlayerSet}\left[ % M^{-1}\prod_{j\in\PlayerSet}\sum_{\IsoProfile_j\in\Iso(\DecP,\DecP_j)}\left[ % \E_{\Policy^{(k)}\sim(\IsoProfile_k^{-1})^*\LAProfile_k(\DecP_k),\,k\in\mathcal{N}} % \left[ %% (M\cdot |\DecPSet|)^{-1}\sum_{\SymProfile_j\in\Sym(\DecP), \,j\in\PlayerSet}\left[ % \E_{\Policy^{(k)}\sim(\SymProfile_k^{-1})*\LAProfile_k(\SymProfile_i^*\DecP),\,k\in\mathcal{N}} % \left[ % (|\Sym(\DecP)|\cdot|\PlayerSet|)^{-1}\sum_{\SymProfile_j\in\Sym(\DecP), \,j\in\PlayerSet}\left[ % \E_{\Policy^{(k)}\sim(\SymProfile_k^{-1})^*\LAProfile_k(\SymProfile_i^*\DecP),\,k\in\mathcal{N}} % \left[ \\ \E_{\SymProfile_i\sim \U(\Sym(\DecPSecond)),\,i\in\PlayerSet}\left[ \E_{\Policy^{(j)}\sim(\SymProfile_j^{-1})^*\LAProfile_j(\SymProfile_j^*\DecP),\,j\in\mathcal{N}} \left[ \\ &=\E_{\SymProfile\sim \U(\Sym(\DecPSecond)^\PlayerSet)}\left[ \E_{\Policy^{(i)}\sim(\SymProfile_i^{-1})^*\LAProfile_i(\SymProfile_i^*\DecP),\,i\in\mathcal{N}} \left[ \end{align} This concludes the proof. Second, we can prove a useful decomposition of the payoff in an LFC game into isomorphisms and automorphisms. We can already see here the connection to the OP objective (we will recall the OP objective in Appendix <ref>). Recall the projection operator, \(\Proj_i(x):=x_i\) for \(x=(x_i)_{i\in\PlayerSet}\). Let \(\LAProfile_1,\dotsc,\LAProfile_N\in\LASet^\DecPSet\). For any \(\DecPSecond,\DecPThird\in\DecPSet\), choose \(\Isom_{\DecPSecond,\DecPThird}\in\Iso(\DecPSecond,\DecPThird)\) arbitrarily. Then \begin{align} % \\ &=\label{eq:36a} % \E_{\DecP_i\sim \U(\DecPSet),\SymProfile_i\sim \U(\Iso(\DecP,\DecP_i)),\,i\in\PlayerSet}\left[ % \E_{\Policy^{(j)}\sim(\SymProfile_j^{-1})^*\LAProfile_j(\DecP_j),\,j\in\mathcal{N}} % \left[ % J^\DecP((\Policy^{(k)}_k)_{k\in\PlayerSet})\right]\right] \\ &=\label{eq:36b} \E_{\DecP_i\sim \U(\DecPSet),\,i\in\PlayerSet}\left[ \E_{\Policy^{(j)}\sim\Isom_{\DecP_j,\DecP}^*\LAProfile_j(\DecP_j),\,j\in\mathcal{N}} \left[ \E_{\AutProfile\in\Aut(\DecP)^\PlayerSet}\left[ \end{align} Let \(i\in\PlayerSet,\DecP_i\in\DecPSet\). We know from Lemma <ref> that for any \(\Isom\in \Iso(\DecP_i,\DecP)\) there is a unique \(\Auto\in\Aut(\DecP)\) such that \(\Isom=\Auto\circ\Isom_{\DecP_i,\DecP}\). Also, for the pushforward measure, it is \((\Auto\circ\Isom_{\DecP_i,\DecP})^*\LAProfile_i(\DecP_i)=\Auto^*\Isom_{\DecP_i,\DecP}^*\LAProfile_i(\DecP_i)\) for any \(\Auto\in\Aut(\DecP)\). 
Using this, it follows that
\begin{align}
&\label{eq:36e}
\E_{\DecP_i\sim \U(\DecPSet),\,i\in\PlayerSet}\left[
\E_{\IsoProfile_j\sim\U(\Iso(\DecP_j,\DecP)), \,j\in\PlayerSet}\left[
\E_{\Policy^{(k)}\sim\IsoProfile_k^*\LAProfile_k(\DecP_k),\,k\in\mathcal{N}}
\left[
J^\DecP\left(\left(\Policy^{(k)}_k\right)_{k\in\PlayerSet}\right)
\right]\right]\right]
\\
&=\E_{\DecP_i\sim \U(\DecPSet),\,i\in\PlayerSet}\left[
\E_{\AutProfile\sim\U(\Aut(\DecP)^\PlayerSet)}\left[
\E_{\Policy^{(k)}\sim\AutProfile_k^*\Isom_{\DecP_k,\DecP}^*\LAProfile_k(\DecP_k),\,k\in\mathcal{N}}
\left[
J^\DecP\left(\left(\Policy^{(k)}_k\right)_{k\in\PlayerSet}\right)
\right]\right]\right]
\\
&=\E_{\DecP_i\sim \U(\DecPSet),\,i\in\PlayerSet}\left[
\E_{\Policy^{(k)}\sim\Isom_{\DecP_k,\DecP}^*\LAProfile_k(\DecP_k),\,k\in\mathcal{N}}
\left[
\E_{\AutProfile\sim\U(\Aut(\DecP)^\PlayerSet)}\left[
J^\DecP\left(\left(\Proj_k\left(\AutProfile_k^*\Policy^{(k)}\right)\right)_{k\in\PlayerSet}\right)
\right]\right]\right],
\end{align}
where we have used a change of variables for pushforward measures in the last line.
Finally, we can show that the payoff in an LFC game is equal for isomorphic Dec-POMDPs, up to a possible permutation of principals. Intuitively, this means that the game does not depend on labels for the problem.
Let \(\DecP,\DecPSecond\) be isomorphic and \(\Isom\in\Iso(\DecP,\DecPSecond)\) be arbitrary. Define \(\DecPSet:=\{\Isom^*\DecP\mid\Isom\in\Sym(\DecP)\}\) and \(\DecPSetSecond:=\{\Isom^*\DecPSecond\mid\Isom\in\Sym(\DecPSecond)\}\). Then \(\DecPSet=\DecPSetSecond\), and for any profile of algorithms \(\LAProfile=(\LAProfile_1,\dotsc,\LAProfile_N)\in\Sigma^\DecPSet\), it is
\[U^{\DecP}(\LAProfile_1,\dotsc,\LAProfile_N)=U^{\DecPSecond}(\LAProfile_{\Isom^{-1}1},\dotsc,\LAProfile_{\Isom^{-1}N}).
\]
First, note that by Lemma <ref>, it is
\begin{multline}\DecPSetSecond=\{\IsomSecond^*\DecPSecond\mid\IsomSecond\in\Sym(\DecPSecond)\}\overset{\text{Lemma~\ref{d-sym-closed-under-labeling}}}{=}
\{(\IsomSecond\circ\Isom)^*\DecP\mid\IsomSecond\in\Sym(\DecPSecond)\}
\\= \{\IsomHat^*\DecP\mid\IsomHat\in\Sym(\DecPSecond)\circ\Isom\}\overset{\text{Lemma~\ref{d-sym-closed-under-labeling}}}{=}\{\IsomHat^*\DecP\mid\IsomHat\in\Sym(\DecP)\}=\DecPSet.\end{multline}
Now let \(\LAProfile_1,\dotsc,\LAProfile_N\in\LASet^\DecPSet\) be arbitrary.
Then, using the expression of \(U^\DecP\) from Lemma <ref>, it is
\begin{align}
&U^{\DecP}(\LAProfile_{1},\dotsc,\LAProfile_N)
\\&=
\E_{\SymProfile\sim \U(\Sym(\DecP)^\PlayerSet)}\left[
\E_{\Policy^{(i)}\sim(\SymProfile_i^{-1})^*\LAProfile_i(\SymProfile_i^*\DecP),\,i\in\mathcal{N}}
\left[
J^\DecP\left(\left(\Policy^{(i)}_i\right)_{i\in\PlayerSet}\right)
\right]\right]
\\&=
\E_{\SymProfile\sim \U(\Sym(\DecP)^\PlayerSet)}\left[
\E_{\Policy^{(i)}\sim(\SymProfile_i^{-1})^*\LAProfile_i(\SymProfile_i^*\DecP),\,i\in\mathcal{N}}
\left[
J^\DecPSecond\left(\Isom^*\left(\left(\Policy^{(i)}_i\right)_{i\in\PlayerSet}\right)\right)
\right]\right]
\label{eq:25-1}
\\&=
\E_{\SymProfile\sim \U(\Sym(\DecP)^\PlayerSet)}\left[
\E_{\Policy^{(i)}\sim(\SymProfile_i^{-1})^*\LAProfile_i(\SymProfile_i^*\DecP),\,i\in\mathcal{N}}
\left[
J^\DecPSecond\left(\left(\Policy^{(\Isom^{-1}j)}_{\Isom^{-1}j}(\Isom^{-1}\cdot\mid\Isom^{-1}\cdot)\right)_{j\in\PlayerSet}\right)
\right]\right]
\label{eq:25-2}
\\&=
\E_{\SymProfile\sim \U(\Sym(\DecP)^\PlayerSet)}\left[
\E_{\Policy^{(i)}\sim(\SymProfile_i^{-1})^*\LAProfile_i(\SymProfile_i^*\DecP),\,i\in\mathcal{N}}
\left[
J^\DecPSecond\left(\left(\Proj_j\left(\Isom^*\Policy^{(\Isom^{-1}j)}\right)\right)_{j\in\PlayerSet}\right)
\right]\right]
\label{eq:25-3}
\\&=
\E_{\SymProfile\sim \U(\Sym(\DecP)^\PlayerSet)}\left[
\E_{\Policy^{(i)}\sim\Isom^*(\SymProfile_i^{-1})^*\LAProfile_i(\SymProfile_i^*\DecP),\,i\in\mathcal{N}}
\left[
J^\DecPSecond\left(\left(\Policy^{(\Isom^{-1}j)}_{j}\right)_{j\in\PlayerSet}\right)
\right]\right]
\label{eq:25-4}
\\&=
\E_{\SymProfile\sim \U(\Sym(\DecP)^\PlayerSet)}\left[
\E_{\Policy^{(i)}\sim((\SymProfile_i\circ \Isom^{-1})^{-1})^*\LAProfile_i(\SymProfile_i^*\DecP),\,i\in\mathcal{N}}
\left[
J^\DecPSecond\left(\left(\Policy^{(\Isom^{-1}j)}_{j}\right)_{j\in\PlayerSet}\right)
\right]\right]
\label{eq:25-41}
\\&=
\E_{\SymProfile\sim \U(\Sym(\DecP)^\PlayerSet)}\left[
\E_{\Policy^{(i)}\sim((\SymProfile_i\circ \Isom^{-1})^{-1})^*\LAProfile_i((\SymProfile_i\circ\Isom^{-1})^*\DecPSecond),\,i\in\mathcal{N}}
\left[
J^\DecPSecond\left(\left(\Policy^{(\Isom^{-1}j)}_{j}\right)_{j\in\PlayerSet}\right)
\right]\right]
\label{eq:25-5}
\\&=
\E_{\SymProfile\sim \U(\Sym(\DecPSecond)^\PlayerSet)}\left[
\E_{\Policy^{(i)}\sim(\SymProfile_i^{-1})^*\LAProfile_i(\SymProfile_i^*\DecPSecond),\,i\in\mathcal{N}}
\left[
J^\DecPSecond\left(\left(\Policy^{(\Isom^{-1}j)}_{j}\right)_{j\in\PlayerSet}\right)
\right]\right]
\label{eq:25-6}
\\&=
\E_{\SymProfile\sim \U(\Sym(\DecPSecond)^\PlayerSet)}\left[
\E_{\Policy^{(i)}\sim(\SymProfile_{\Isom^{-1}i}^{-1})^*\LAProfile_{\Isom^{-1}i}(\SymProfile_{\Isom^{-1}i}^*\DecPSecond),\,i\in\mathcal{N}}
\left[
J^\DecPSecond\left(\left(\Policy^{(j)}_{j}\right)_{j\in\PlayerSet}\right)
\right]\right]
\label{eq:25-7}
\\&=
\E_{\SymProfile\sim \U(\Sym(\DecPSecond)^\PlayerSet)}\left[
\E_{\Policy^{(i)}\sim(\SymProfile_{i}^{-1})^*\LAProfile_{ \Isom^{-1}i}(\SymProfile_{i}^*\DecPSecond),\,i\in\PlayerSet}
\left[
J^\DecPSecond\left(\left(\Policy^{(j)}_{j}\right)_{j\in\PlayerSet}\right)
\right]\right]
\label{eq:25-8}
\\&=U^{\DecPSecond}(\LAProfile_{\Isom^{-1}1},\dotsc,\LAProfile_{\Isom^{-1}N}).
\end{align}
Here, in (<ref>), we use Theorem <ref>; in (<ref>) and (<ref>), we use the definition of the pushforward policy; in (<ref>), we apply a change of variables for pushforward measures, applied to each of the measures \((\SymProfile_i^{-1})^*\LAProfile_i(\SymProfile_i^*\DecP)\) separately; in (<ref>), we use the associativity of the pushforward measure as well as the fact that \((\SymProfile_i\circ \Isom^{-1})^{-1}=\Isom\circ\SymProfile_i^{-1}\); in (<ref>) and in (<ref>), we use the second respectively first part of Lemma <ref>; in (<ref>), we again use a change of variables for pushforward measures, this time applied to the joint measure \(\otimes_{i\in\PlayerSet}(\SymProfile_i^{-1})^*\LAProfile_i(\SymProfile_i^*\DecPSecond)\); and in (<ref>) we use the symmetry of the set \(\Sym(\DecPSecond)^\PlayerSet\) with respect to player permutations, concluding the proof.
First, this result implies that the LFC problems for two isomorphic problems are identical.
Let \(\DecP,\DecPSecond\) be two isomorphic Dec-POMDPs and let \(\DecPSet:=\{\Isom^*\DecP\mid\Isom\in\Sym(\DecP)\}\) and \(\DecPSetSecond:=\{\Isom^*\DecPSecond\mid\Isom\in\Sym(\DecPSecond)\}\). Then \(\DecPSet=\DecPSetSecond\) and \(U^\DecP(\LA)= U^\DecPSecond(\LA)\) for any \(\LA\in\LASet^{\DecPSet}=\LASet^\DecPSetSecond\).
By Theorem <ref>, we have \(\DecPSet=\DecPSetSecond\). Now let \(\LA\in\LASet^\DecPSet=\LASet^\DecPSetSecond\).
Then again by Theorem <ref>, it is
\[U^\DecP(\LA) =U^\DecP(\LA,\dotsc,\LA) \overset{\text{Theorem~\ref{zero-shot-coordination-game-invariant-to-isomorphism}}}{=} U^\DecPSecond(\LA,\dotsc,\LA)=U^\DecPSecond(\LA).\]
Second, the theorem shows that symmetries between the agents in a Dec-POMDP are also symmetries between principals in \(\Gamma^\DecP\).
Let \(\DecP\) be a Dec-POMDP. Then it is
\[U^{\DecP}(\LA_1,\dotsc,\LA_N)=U^{\DecP}(\LA_{\Auto^{-1}1},\dotsc,\LA_{\Auto^{-1}N})
\]
for any \(\Auto\in\Aut(\DecP)\).
This follows from Theorem <ref>, using that \(\Aut(\DecP)=\Iso(\DecP,\DecP)\).
§.§ Optimal symmetric strategy profiles
Above, we have shown that the payoff in an LFC game is invariant with respect to symmetries of the agents in \(\DecP\). Similarly to the case of Dec-POMDPs and their symmetries, we can also apply the concept of symmetry to profiles of learning algorithms. We can then ask whether a profile of learning algorithms is invariant to symmetries of the principals, in which case we say that the profile is symmetric. In the following, we will show that optimal profiles among the ones that are symmetric are Nash equilibria of an LFC game. Since we will show in Appendix <ref> that a profile in which all principals choose OP with tie-breaking is an optimal symmetric profile, it will follow as a corollary that all principals using OP with tie-breaking is a Nash equilibrium of the game.
To begin, we define symmetric principals and profiles of learning algorithms. In the following, let again \(\DecP\) be a Dec-POMDP and \(\DecPSet:=\{\Isom^*\DecP\mid\Isom\in\Sym(\DecP)\}\).
We say that two principals \(i,j\in\PlayerSet\) are symmetric if there exists an automorphism \(\Auto\in\Aut(\DecP)\) such that \(i=\Auto j\). A profile of learning algorithms \(\LAProfile_1,\dotsc,\LAProfile_N\in\LASet^\DecPSet\) is called symmetric if it is \(\LAProfile_i=\LAProfile_{\Auto^{-1}i}\) for any automorphism \(\Auto\in\Aut(\DecP)\) and principal \(i\in\PlayerSet\). An optimal symmetric profile is then defined as a symmetric profile \(\LAProfile\) such that for all other symmetric profiles \(\tilde{\LAProfile}\), it is \(U^\DecP(\LAProfile)\geq U^\DecP(\tilde{\LAProfile})\).
Note that if there are non-symmetric principals in \(\PlayerSet\), then for a single learning algorithm \(\LA\in\LASet^\DecPSet\), the property that \(\LA,\dotsc,\LA\) is an optimal symmetric profile in the LFC game for \(\DecP\) is stronger than the property that the algorithm \(\LA\) is optimal in the LFC problem for \(\DecP\). The latter only requires that \(U^{\DecP}(\LA,\dotsc,\LA)\geq U^\DecP(\LA',\dotsc,\LA')\) for all \(\LA'\in\LASet^\DecPSet\), while \(\LA,\dotsc,\LA\) being an optimal symmetric profile means that \(U^\DecP(\LA,\dotsc,\LA)\geq U^\DecP(\LAProfile_1,\dotsc,\LAProfile_N)\) for all symmetric profiles \(\LAProfile_1,\dotsc,\LAProfile_N\in\LASet^\DecPSet\), where a profile could potentially include different learning algorithms for non-symmetric principals.
For a simple characterization of symmetric profiles, consider the orbit \(\Aut(\DecP)i\) of a principal \(i\in\PlayerSet\), as defined in Appendix <ref>. Clearly, saying that a profile is symmetric can equivalently be expressed as saying that all principals from the same orbit are assigned the same learning algorithm.
A profile \(\LAProfile_1,\dotsc,\LAProfile_N\in\LASet^\DecPSet\) is symmetric if and only if it is \(\LAProfile_i=\LAProfile_j\) for any two symmetric principals \(i,j\in\PlayerSet\).
This can be easily seen from the definition of the orbit and the properties of actions of automorphisms on agents and principals.
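To make the orbit-based characterization of symmetric profiles concrete, the following is a minimal Python sketch (not part of the paper's implementation). It represents each automorphism only by its action on the player indices, i.e., as a permutation of \(\{0,\dotsc,N-1\}\), groups the principals into orbits, and checks that all principals in the same orbit are assigned the same learning algorithm; the names \texttt{orbits} and \texttt{is\_symmetric\_profile} and the permutation-based encoding are illustrative assumptions.
\begin{verbatim}
# Illustrative sketch: symmetric profiles of learning algorithms.
# An automorphism is represented only by its action on player indices,
# as a tuple perm with perm[i] = image of player i. We assume the list
# `automorphisms` contains the whole group Aut(D), including the identity.

def orbits(num_players, automorphisms):
    """Partition the principals {0,...,N-1} into orbits under Aut(D)."""
    remaining = set(range(num_players))
    result = []
    while remaining:
        i = min(remaining)
        orbit = {perm[i] for perm in automorphisms} | {i}
        result.append(orbit)
        remaining -= orbit
    return result

def is_symmetric_profile(profile, automorphisms):
    """Symmetric iff all principals in one orbit use the same algorithm."""
    for orbit in orbits(len(profile), automorphisms):
        if len({profile[i] for i in orbit}) > 1:
            return False
    return True

if __name__ == "__main__":
    # Three principals; the only nontrivial automorphism swaps players 0 and 1.
    auts = [(0, 1, 2), (1, 0, 2)]  # identity and the swap
    print(is_symmetric_profile(("OP", "OP", "SP"), auts))  # True: orbit {0,1} agrees
    print(is_symmetric_profile(("OP", "SP", "SP"), auts))  # False: orbit {0,1} differs
\end{verbatim}
In the example, the orbits are \(\{0,1\}\) and \(\{2\}\), so a profile is symmetric exactly when principals \(0\) and \(1\) are assigned the same algorithm, while principal \(2\) may use a different one.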
Lastly, we define a Nash equilibrium of an LFC game.
A profile of learning algorithms \(\LAProfile_1,\dotsc,\LAProfile_N\in\LASet^\DecPSet\) is a Nash equilibrium of the LFC game for \(\DecP\) if, for any principal \(i\in\PlayerSet\) and learning algorithm \(\LAProfile_i'\in\LASet^\DecPSet\), it is
\[
U^\DecP(\LAProfile)\geq U^\DecP(\LAProfile_i',\LAProfile_{-i}).
\]
Now we show that any optimal symmetric profile is a Nash equilibrium. An analogous result for normal-form games was proven in emmons2021symmetry. Our proof closely follows that proof, adapted to our setting.
Any optimal symmetric strategy profile in an LFC game is a Nash equilibrium.
In the following, fix a Dec-POMDP \(\DecP\) and let \(\DecPSet:=\{\Isom^*\DecP\mid\Isom\in\Sym(\DecP)\}\). Let \(U:=U^\DecP\). Let \(\LAProfile_1,\dots,\LAProfile_N\) be an optimal symmetric strategy profile. Towards a contradiction, assume that \(\LAProfile\) is not a Nash equilibrium, i.e., that there is \(i\in\PlayerSet\) and \(\tilde{\LAProfile}_i\in\LASet^\DecPSet\) such that \(U(\tilde{\LAProfile}_i,\LAProfile_{-i})>U(\LAProfile)\). We show that then there is another symmetric strategy profile \(\hat{\LAProfile}\) that achieves a higher payoff than \(\LAProfile\), \(U(\hat{\LAProfile})>U(\LAProfile)\), contradicting the assumption that \(\LAProfile\) was optimal among the symmetric profiles.
To that end, for arbitrary \(p\in(0,1]\), define the profile \(\hat{\LAProfile}\) by \(\hat{\LAProfile}_j:=p \tilde{\LAProfile}_i + (1-p)\LAProfile_i\) for any \(j\in\Aut(\DecP)i\) and \(\hat{\LAProfile}_j:=\LAProfile_j\) for \(j\in\PlayerSet\setminus\Aut(\DecP)i\). Note that, since we jointly change all learning algorithms in one orbit \(\Aut(\DecP)i\), the resulting profile \(\hat{\LAProfile}\) is symmetric by Lemma <ref>.
Next, let \(K:=|\Aut(\DecP)i|\), choose Dec-POMDPs \(\DecP_1,\dotsc,\DecP_N\in\DecPSet\), and measurable sets \(\mathcal{Z}_1,\dotsc,\mathcal{Z}_N\subseteq\PolicySet^\DecP\). Then it is
\begin{align}
&\otimes_{j\in\PlayerSet}\hat{\LAProfile}_j(\DecP_j)\left(\prod_{j\in\PlayerSet}\mathcal{Z}_j\right)
\nonumber\\
&=\prod_{j\in\PlayerSet\setminus\Aut(\DecP)i}\LAProfile_j(\DecP_j)(\mathcal{Z}_j)
\prod_{l\in\Aut(\DecP)i}\left(p\tilde{\LAProfile}_i(\DecP_l)(\mathcal{Z}_l) + (1-p)\LAProfile_i(\DecP_l)(\mathcal{Z}_l)\right)
\\\nonumber &= (1-p)^K\prod_{j\in\PlayerSet}\LAProfile_j(\DecP_j)(\mathcal{Z}_j)
\\ \nonumber &\quad\quad+\sum_{j\in\Aut(\DecP)i}p(1-p)^{K-1}\tilde{\LAProfile}_i(\DecP_j)(\mathcal{Z}_j)\prod_{l\in\PlayerSet\setminus\{ j\}}\LAProfile_l(\DecP_l)(\mathcal{Z}_l)
\\&\quad\quad+ \sum_{k=2,\dotsc,K}p^k(1-p)^{K-k}\Mixture^{k,\DecP_1,\dotsc,\DecP_N}\left(\prod_{j\in\PlayerSet}\mathcal{Z}_j\right)
\\ \nonumber &= (1-p)^K\otimes_{j\in\PlayerSet}\LAProfile_j(\DecP_j)\left(\prod_{j\in\PlayerSet}\mathcal{Z}_j\right)
\\ \nonumber &\quad\quad+\sum_{j\in\Aut(\DecP)i}p(1-p)^{K-1}\left(\tilde{\LAProfile}_i(\DecP_j)\otimes_{l\in\PlayerSet\setminus\{ j\}}\LAProfile_l(\DecP_l)\right)\left(\prod_{j\in\PlayerSet}\mathcal{Z}_j\right)
\\&\quad\quad+\label{eq:84}
\sum_{k=2,\dotsc,K}p^k(1-p)^{K-k}\Mixture^{k,\DecP_1,\dotsc,\DecP_N}\left(\prod_{j\in\PlayerSet}\mathcal{Z}_j\right),
\end{align}
where \(\Mixture^{k,\DecP_1,\dotsc,\DecP_N}\) is some measure (not necessarily a probability measure) on the space \(\PolicySet^\DecP\times\dotsb\times\PolicySet^\DecP\) that depends on \(k\) and \(\DecP_1,\dotsc,\DecP_N\), but not on \(p\). This tells us that we can also decompose the integral with respect to the measure \(\otimes_{j\in\PlayerSet}\hat{\LAProfile}_j(\DecP_j)\) as in (<ref>).
It follows that
\begin{align}
&U(\hat{\LAProfile}_1,\dotsc,\hat{\LAProfile}_N)\label{eq:85aa}
\\&=\E_{\SymProfile\sim \U(\Sym(\DecP)^\PlayerSet)}\left[
\E_{\Policy^{(j)}\sim(\SymProfile_j^{-1})^*\hat{\LAProfile}_j(\SymProfile_j^*\DecP),\,j\in\mathcal{N}}
\left[
J^\DecP\left(\left(\Policy^{(j)}_j\right)_{j\in\PlayerSet}\right)
\right]\right]
\\&=\E_{\SymProfile\sim \U(\Sym(\DecP)^\PlayerSet)}\left[
\E_{\Policy^{(j)}\sim\hat{\LAProfile}_j(\SymProfile_j^*\DecP),\,j\in\mathcal{N}}
\left[
J^\DecP\left(\left(\Proj_j\left((\SymProfile_j^{-1})^*\Policy^{(j)}\right)\right)_{j\in\PlayerSet}\right)
\right]\right]
\\&=\E_{\SymProfile\sim \U(\Sym(\DecP)^\PlayerSet)}\Big[
(1-p)^K\,\E_{\Policy^{(j)}\sim\LAProfile_j(\SymProfile_j^*\DecP),\,j\in\mathcal{N}}
\left[
J^\DecP\left(\left(\Proj_j\left((\SymProfile_j^{-1})^*\Policy^{(j)}\right)\right)_{j\in\PlayerSet}\right)
\right]
\\\nonumber
&\quad\quad+\sum_{j\in\Aut(\DecP)i}p(1-p)^{K-1}
\E_{\Policy^{(m)}\sim\LAProfile_m(\SymProfile_m^*\DecP),\,m\in\mathcal{N}\setminus j}
\left[
\E_{\Policy^{(j)}\sim\tilde{\LAProfile}_i(\SymProfile_j^*\DecP)}
\left[
J^\DecP\left(\left(\Proj_k\left((\SymProfile_k^{-1})^*\Policy^{(k)}\right)\right)_{k\in\PlayerSet}\right)
\right]\right]
\\\nonumber
&\quad\quad+
\sum_{k=2,\dotsc,K}p^k(1-p)^{K-k}
\int
J^\DecP\left(\left(\Proj_l\left((\SymProfile_l^{-1})^*\Policy^{(l)}\right)\right)_{l\in\PlayerSet}\right)
\,\mathrm{d}\Mixture^{k,\SymProfile_1^*\DecP,\dotsc,\SymProfile_N^*\DecP}
\left(\Policy^{(1)},\dotsc,\Policy^{(N)}\right)
\Big]
\\&=
B(m=0,p)\,U(\LAProfile_1,\dotsc,\LAProfile_N)+
\sum_{j\in\Aut(\DecP)i}p(1-p)^{K-1}U(\LAProfile_1,\dotsc,\LAProfile_{j-1},\tilde{\LAProfile}_i,\LAProfile_{j+1},\dotsc,\LAProfile_N)
\\&\quad\quad\nonumber+
\sum_{k=2,\dotsc,K}p^k(1-p)^{K-k}C_k
\\&=\label{eq:85b}
B(m=0,p)\,U(\LAProfile)+B(m=1,p)\,U(\tilde{\LAProfile}_i,\LAProfile_{-i})+
\sum_{k=2,\dotsc,K}p^k(1-p)^{K-k}C_k
\end{align}
for some constants \(C_k\in\mathbb{R}\) for \(k=2,\dotsc,K\), and where we write \(B(m=0,p)\) to denote the probability that a binomial distribution with \(K\) trials and success chance \(p\) has \(0\) successful trials (and \(B(m=1,p)\) analogously). Here, in (<ref>), we use a change of variables for pushforward measures, separately for each measure \(\hat{\LAProfile}_j(\SymProfile_j^*\DecP)\); in (<ref>), we use the linearity of the expectation; and in (<ref>) we use that \(U\) and \(\LAProfile\) are both invariant to symmetries between principals, and thus for \(\Auto\in\Aut(\DecP)\) with \(\Auto^{-1}i=j\), it is
\begin{multline}
U(\LAProfile_1,\dotsc,\LAProfile_{j-1},\tilde{\LAProfile}_i,\LAProfile_{j+1},\dotsc,\LAProfile_N)
\\
=U(\LAProfile_{\Auto^{-1}1},\dotsc,\LAProfile_{\Auto^{-1}(i-1)},\tilde{\LAProfile}_i,\LAProfile_{\Auto^{-1}(i+1)},\dotsc,\LAProfile_{\Auto^{-1}N})
=U(\tilde{\LAProfile}_i,\LAProfile_{-i}).
\end{multline}
Now note that if we can show that
\[(1-B(m=0,p))U(\LAProfile)<B(m=1,p)U(\tilde{\LAProfile}_i,\LAProfile_{-i})+\sum_{k=2,\dotsc,K}p^k(1-p)^{K-k}C_k,\]
then it would also follow that
\begin{multline}
U(\hat{\LAProfile})
\overset{\text{(\ref{eq:85aa})--(\ref{eq:85b})}}{=}
B(m=0,p)U(\LAProfile)+B(m=1,p)U(\tilde{\LAProfile}_i,\LAProfile_{-i})+\sum_{k=2,\dotsc,K}p^k(1-p)^{K-k}C_k
\\
>B(m=0,p)U(\LAProfile)+(1-B(m=0,p))U(\LAProfile)=U(\LAProfile).
\end{multline}
Thus, this would show that \(\hat{\LAProfile}\) is a symmetric profile with higher payoff than \(\LAProfile\), proving the required contradiction. In the following, we show the equivalent condition
\[U(\LAProfile)<\frac{B(m=1,p)}{B(m>0,p)}U(\tilde{\LAProfile}_i,\LAProfile_{-i})+\frac{\sum_{k=2,\dotsc,K}p^k(1-p)^{K-k}C_k}{B(m>0,p)},\]
where \(B(m>0,p)=\sum_{k=1}^KB(m=k,p)=1-B(m=0,p)\).
To that end, note that since \(U(\LAProfile)<U(\tilde{\LAProfile}_i,\LAProfile_{-i})\) by assumption, we can choose some small \(\epsilon>0\) such that still
\[U(\LAProfile)<U(\tilde{\LAProfile}_i,\LAProfile_{-i}) - \epsilon.\]
Moreover, note that \(B(m>0,p)\) is a polynomial in \(p\), and the degree of its nonzero term with lowest degree is \(1\). Similarly, for \(\sum_{k=2,\dotsc,K}p^{k}(1-p)^{K-k}C_k\), that lowest degree is \(2\).
Hence, it is
\[\frac{\sum_{k=2,\dotsc,K}p^k(1-p)^{K-k}C_k}{B(m>0,p)}= \frac{\sum_{k=2,\dotsc,K}p^{k-1}(1-p)^{K-k}C_k}{C+Q(p)}\]
for some constant \(C\neq 0\) and polynomial \(Q\) in \(p\) with \(Q(0)=0\), and it follows that
\[\lim_{p\rightarrow 0}\frac{\sum_{k=2,\dotsc,K}p^k(1-p)^{K-k}C_k}{B(m>0,p)}=
\lim_{p\rightarrow 0}\frac{\sum_{k=2,\dotsc,K}p^{k-1}(1-p)^{K-k}C_k}{C+Q(p)}=0.\]
With the same argument, it is also \(\lim_{p\rightarrow0}\frac{B(m>1,p)}{B(m>0,p)}=0\) and thus
\[\lim_{p\rightarrow0}\frac{B(m=1,p)}{B(m>0,p)}=
1 - \lim_{p\rightarrow0}\frac{B(m>1,p)}{B(m>0,p)}=1.\]
Hence, we can find some \(p>0\) that is small enough such that both
\[\left|\frac{\sum_{k=2,\dotsc,K}p^k(1-p)^{K-k}C_k}{B(m>0,p)}\right|<\frac{\epsilon}{2}\]
and
\[1+\frac{\epsilon}{2|U(\tilde{\LAProfile}_i,\LAProfile_{-i})|}>\frac{B(m=1,p)}{B(m>0,p)}>1-\frac{\epsilon}{2|U(\tilde{\LAProfile}_i,\LAProfile_{-i})|}.\]
It follows that
\begin{multline}
U(\LAProfile)<U(\tilde{\LAProfile}_i,\LAProfile_{-i}) - \epsilon
\leq \frac{B(m=1,p)}{B(m>0,p)}U(\tilde{\LAProfile}_i,\LAProfile_{-i})+\frac{\epsilon}{2}-\epsilon
\\
= \frac{B(m=1,p)}{B(m>0,p)}U(\tilde{\LAProfile}_i,\LAProfile_{-i})-\frac{\epsilon}{2}
\leq \frac{B(m=1,p)}{B(m>0,p)}U(\tilde{\LAProfile}_i,\LAProfile_{-i})+\frac{\sum_{k=2,\dotsc,K}p^k(1-p)^{K-k}C_k}{B(m>0,p)},
\end{multline}
which is what we wanted to show. This concludes the proof.
§.§ Evaluating equivariant learning algorithms
In the following, let \(\DecP\) be a Dec-POMDP and define \(\DecPSet:=\{\Isom^*\DecP\mid\Isom\in\Sym(\DecP)\}\). In the special case in which a learning algorithm is, in a sense, independent of the labels used, we can find a simpler form of the algorithm's value in the LFC problem for \(\DecP\). To do so, we define the notion of an equivariant learning algorithm.
Let \(\LA\in\LASet^\DecPSet\). Then \(\LA\) is called equivariant if for any two labelings \(\Isom,\IsomSecond\in\Sym(\DecP)\), it is
\[(\Isom^{-1})^*\LA(\Isom^*\DecP)=(\IsomSecond^{-1})^*\LA(\IsomSecond^*\DecP).\]
We believe that a learning algorithm implemented via neural networks and one-hot encodings, as used in our experiments, should be equivariant. To see this, note that by a symmetry argument, the distribution over functions corresponding to a randomly initialized neural network is invariant with respect to coordinate permutations. Assume that actions, observations, and agents of a given problem are implemented as one-hot vectors, i.e., elements of a canonical basis \(\{e_1,\dotsc,e_k\}\subseteq\mathbb{R}^k\), where \(k\in\mathbb{N}\) is the cardinality of the respective set. Then the distribution over randomly initialized neural network policies will also not depend on the particular assignment of actions, etc., to one-hot vectors. We conjecture that, if the used optimizer is equivariant with respect to coordinate permutations (i.e., if the parameter dimensions are permuted, then the prescribed updates to the parameters are equally permuted), then the resulting learning algorithm is equivariant. We leave a rigorous exploration of this issue to future work.
An equivariant algorithm can be evaluated in the LFC problem for \(\DecP\) by evaluating its cross-play value in any Dec-POMDP \(\DecPSecond\in\DecPSet\). The resulting policies can be permuted by random automorphisms, or they can be evaluated as they are.
Let \(\LA\in\LASet^\DecPSet\) be equivariant. Then for any \(\Isom\in\Sym(\DecP)\) and \(\DecPSecond=\Isom^*\DecP\in \DecPSet\), it is
\begin{align}U^\DecP(\LA)&=
\E_{\Policy^{(i)}\sim\LA(\DecPSecond),\,i\in\mathcal{N}}\left[\E_{\AutProfile\sim\U(\Aut(\DecPSecond)^\PlayerSet)}
\left[
J^\DecPSecond\left(\left(\Proj_i\left(\AutProfile_i^*\Policy^{(i)}\right)\right)_{i\in\PlayerSet}\right)
\right]\right]
\\&=
\E_{\Policy^{(i)}\sim\LA(\DecPSecond),\,i\in\mathcal{N}}
\left[
J^\DecPSecond\left(\left(\Policy^{(i)}_i\right)_{i\in\PlayerSet}\right)
\right].
\end{align}
First, let \(\Isom\in\Sym(\DecP)\) and \(\DecPSecond:=\Isom^*\DecP\).
Using the expression of the payoff in the LFC game for \(\DecP\) from Lemma <ref>, it is
\begin{align}
U^\DecP(\LA)&=
\E_{\SymProfile\sim \U(\Sym(\DecP)^\PlayerSet)}\left[
\E_{\Policy^{(i)}\sim(\SymProfile_i^{-1})^*\LA(\SymProfile_i^*\DecP),\,i\in\mathcal{N}}
\left[
J^\DecP\left(\left(\Policy^{(i)}_i\right)_{i\in\PlayerSet}\right)
\right]\right]
\\&=\label{eq:102a}
\E_{\Policy^{(i)}\sim(\Isom^{-1})^*\LA(\DecPSecond),\,i\in\mathcal{N}}
\left[
J^\DecP\left(\left(\Policy^{(i)}_i\right)_{i\in\PlayerSet}\right)
\right]
\\&=\label{eq:102c}
\E_{\Policy^{(i)}\sim\LA(\DecPSecond),\,i\in\mathcal{N}}
\left[
J^\DecP\left(\left(\Proj_i\left((\Isom^{-1})^*\Policy^{(i)}\right)\right)_{i\in\PlayerSet}\right)
\right]
\\&=\label{eq:102b}
\E_{\Policy^{(i)}\sim\LA(\DecPSecond),\,i\in\mathcal{N}}
\left[
J^\DecPSecond\left(\left(\Policy^{(i)}_i\right)_{i\in\PlayerSet}\right)
\right],
\end{align}
where we use equivariance in (<ref>), a change of variables for pushforward measures in (<ref>), and Theorem <ref> in (<ref>).
Second, by Lemma <ref>, it is \(\Iso(\DecP,\DecPSecond)\subseteq\Sym(\DecP)\). Thus, using Lemma <ref>, it is \(\Aut(\DecPSecond)\circ\Isom=\Iso(\DecP,\DecPSecond)\subseteq\Sym(\DecP)\). Hence, it follows that
\begin{align}
U^\DecP(\LA)\overset{(\ref{eq:102a})}{=}&
\E_{\Policy^{(i)}\sim(\Isom^{-1})^*\LA(\DecPSecond),\,i\in\mathcal{N}}
\left[
J^\DecP\left(\left(\Policy^{(i)}_i\right)_{i\in\PlayerSet}\right)
\right]
\\&=\label{eq:101a}
\E_{\AutProfile\sim \U(\Aut(\DecPSecond)^\PlayerSet)}\left[
\E_{\Policy^{(i)}\sim(\Isom^{-1}\circ\AutProfile_i)^*\LA(\DecPSecond),\,i\in\mathcal{N}}
\left[
J^\DecP\left(\left(\Policy^{(i)}_i\right)_{i\in\PlayerSet}\right)
\right]\right]
\\&=\label{eq:101b}
\E_{\AutProfile\sim \U(\Aut(\DecPSecond)^\PlayerSet)}\left[
\E_{\Policy^{(i)}\sim\AutProfile_i^*\LA(\DecPSecond),\,i\in\mathcal{N}}
\left[
J^\DecPSecond\left(\left(\Policy^{(i)}_i\right)_{i\in\PlayerSet}\right)
\right]\right]
\\&=\label{eq:101c}
\E_{\AutProfile\sim \U(\Aut(\DecPSecond)^\PlayerSet)}\left[
\E_{\Policy^{(i)}\sim\LA(\DecPSecond),\,i\in\mathcal{N}}
\left[
J^\DecPSecond\left(\left(\Proj_i\left(\AutProfile_i^*\Policy^{(i)}\right)\right)_{i\in\PlayerSet}\right)
\right]\right],
\end{align}
where we again use equivariance in (<ref>), a change of variables and Theorem <ref> in (<ref>), and another change of variables in (<ref>). This concludes the proof.
If a learning algorithm is equivariant, we can use either of the expressions above to evaluate it in the LFC problem. One may choose the first expression, i.e., apply random automorphisms to policies, since this transformation maps different policies that are equivalent under OP to a unique automorphism-invariant policy and thus reduces the variance of the cross-play values across different samples \(\Policy\sim\LA(\DecP)\). We will introduce these notions of equivalence and automorphism-invariant policies in Appendix <ref>.
Note that an equivariant learning algorithm is not necessarily one that performs well in the LFC problem. For instance, an SP algorithm may be equivariant. However, if an algorithm is equivariant and it does well in cross-play, then the preceding shows that it will also do well in the LFC problem.
§ CHARACTERIZATION OF OTHER-PLAY AND OF LABEL-FREE COORDINATION GAMES
In this section, we define for a given policy \(\Policy\) a policy \(\Psi(\Policy)\) that corresponds to agents choosing local policies that are randomly permuted by automorphisms, and that is itself invariant to pushforward by automorphisms. We then use this self-map on policies \(\Psi\), called the symmetrizer, to characterize both the OP objective and the payoff in an LFC game. This characterization helps us to analyze the OP-optimal policies in the two-stage lever game in Appendix <ref>, and it allows us to prove a stronger result, showing that any OP algorithm that is not concentrated on only one equivalence class in the two-stage lever game is suboptimal in the corresponding LFC problem. We also use this notion of equivalence classes of policies to define OP with tie-breaking in Appendix <ref>, which allows us to then show that random tie-breaking functions exist.
In the following, in Appendix <ref>, we recall the definition of our generalization of OP, and we define policies that are invariant to automorphism.
Afterwards, in Appendix <ref>, we introduce the concept of a policy corresponding to a distribution over policies. In Appendix <ref>, we introduce the other-play distribution, in which each agent's local policy is chosen as pushforward by a random automorphism, and we define the symmetrizer. Using these concepts, in Appendix <ref>, we give a new expression for both the OP objective and the payoff in an LFC game. The OP objective can be understood of as transforming a policy into one that is invariant to automorphisms, and evaluating that policy in SP. Finally, in Appendix <ref>, we show that our generalized OP objective can in general not be understood of as the SP objective in a modified Dec-POMDP. §.§ Generalization of other-play In the following, fix a Dec-POMDP \(\DecP\). Recall that for a profile of automorphisms \(\AutProfile\in\Aut(\DecP)^\PlayerSet\) and a joint policy \(\Policy\in\PolicySet^\DecP\), we define the joint policy \(\AutProfile^*\Policy:=\hat{\Policy}\), where the local policy \(\hat{\Policy}_i\) of agent \(i\in\PlayerSet\) is given by the local policy of agent \(i\) in the pushforward policy \(\AutProfile^*_i\Policy\). That is, for \(i\in\PlayerSet\), we define \[\hat{\Policy}_i:= \Proj_i(\AutProfile^*_i\Policy)= \Policy_{\AutProfile_i^{-1}i}(\AutProfile_i^{-1}\cdot\mid \AutProfile_i^{-1}\cdot).\] Using this, we define the OP objective as the expected return of a policy that is randomly permuted by such profiles of automorphisms. Define \(J_\OP^\DecP\colon\PolicySet^\DecP\rightarrow\mathbb{R}\) via \begin{equation} J^{D}_\OP(\Policy):=\E_{\AutProfile\sim \U(\Aut(D)^\mathcal{N})}\left[J^{D}(\AutProfile^*\Policy) \right]\label{eq:1appendix} \end{equation} for \(\Policy\in\PolicySet^\DecP\), where \(\U(\Aut(D)^\mathcal{N})\) is a uniform distribution over \(\Aut(D)^\mathcal{N}\). We say that \(J^\DecP_\OP\) is the other-play (OP) objective of \(\DecP\), and \(J^\DecP_\OP(\Policy)\) is the OP value of \(\Policy\in\PolicySet^\DecP\). It is clear that this objective always admits a maximum. For instance, we can consider \(\PolicySet^\DecP\) as a subset of \(\prod_{i\in\PlayerSet}[0,1]^{\ActionSet_i\times\AOHistorySet_i}\) with its standard topology. As \(\PolicySet^\DecP\) is a Cartesian product of simplices, it is a compact subset of this space. Moreover, one can check that the objective \(J^\DecP(\Policy)\) is continuous in the policy \(\Policy\), and that for \(\AutProfile\in\Aut(\DecP)^\PlayerSet\), the map \(\Policy\mapsto\AutProfile^*\Policy\) is continuous as well. Thus, also \(J_\OP^\DecP(\Policy)\) is continuous in \(\Policy\), as it is a finite linear combination of continuous functions. By the extreme value theorem, it follows that the function always attains a maximum. Next, recall our formal definition of an OP learning algorithm as any algorithm that achieves an optimal OP value in expectation. Let \(\DecPSet\) be a finite set of Dec-POMDPs. A learning algorithm \(\sigma\in\Sigma^\DecPSet\) is called an OP learning algorithm if for any \(\DecP\in\DecPSet\), it is \[\E_{\Policy\sim\sigma^\OP(\DecP)}[J^\DecP_\OP(\Policy)]=\max_{\Policy\in\PolicySet^\DecP}J^\DecP_\OP(\Policy).\] Now fix again a Dec-POMDP \(\DecP\) and consider the notion of invariance to automorphism. A policy \(\Policy\in\PolicySet\) is called invariant to automorphism if \(f^*\Policy=\Policy\) for any \(f\in\Aut(\DecP)\). Clearly, if a policy is invariant to automorphism, then it has the same OP value and expected return. Let \(\Policy\in\PolicySet\) be invariant to automorphism. 
Then
\[J_\OP(\Policy)=J(\Policy).\]
It is
\begin{multline}J_\OP(\Policy)=
\E_{\AutProfile\sim \U(\Aut(D)^\mathcal{N})}\left[J(\AutProfile^*\Policy) \right]
=\E_{\AutProfile\sim \U(\Aut(D)^\mathcal{N})}\left[J((\Proj_i(\AutProfile_i^*\Policy))_{i\in\PlayerSet}) \right]
\\\overset{\text{(*)}}{=}\E_{\AutProfile\sim \U(\Aut(D)^\mathcal{N})}\left[J((\Proj_i(\Policy))_{i\in\PlayerSet}) \right]=J(\Policy),
\end{multline}
where we have used invariance to automorphism in (*).
In the following, we will show that if a policy \(\Policy\) is not already invariant to automorphism, then one can understand the OP objective as first transforming the policy into a policy \(\Psi(\Policy)\) that is invariant to automorphism by applying the symmetrizer \(\Psi\), and then evaluating the expected return of that policy. In that way, the OP objective ensures that policies cannot make use of arbitrary symmetry-breaking.
§.§ Policies corresponding to distributions over policies
In this section, let some Dec-POMDP \(\DecP\) be fixed. As a first step towards defining the symmetrizer \(\Psi\), we will define policies corresponding to distributions over policies for general distributions. Afterwards, we will turn to the particular policy \(\Psi(\Policy)\) that corresponds to the OP distribution of \(\Policy\).
Recall that we introduced the set of distributions over policies as \(\Delta(\PolicySet)\), the set of measures on the space \((\PolicySet,\mathcal{F})\). For a given distribution \(\Distr\in\Delta(\PolicySet)\) and agent \(i\in\PlayerSet\), the marginal distribution \(\Distr_i\) is defined as \(\Distr_i(\mathcal{Z}_i):=\Distr(\Proj_i^{-1}(\mathcal{Z}_i))\) for any measurable set of local policies \(\mathcal{Z}_i\subseteq\PolicySet_i\). We say that \(\Distr\) has independent local policies if \(\Distr=\otimes_i\Distr_i\), i.e., \(\Distr\) decomposes into independent marginal distributions over local policies for each agent. We denote such distributions by \(\Mixture\).
Now let \(\Mixture\in\Delta(\PolicySet)\) be a distribution with independent local policies. We want to construct a policy \(\Policy^\Mixture\) that represents each agent sampling a local policy \(\Policy_i\sim\Mixture_i\) in the beginning of an episode, and then choosing actions according to that policy until the end of the episode. This policy should be equivalent to \(\Mixture\) in the sense that it should yield the same expected return as \(\Mixture\), where we define the expected return of \(\Mixture\) as
\begin{equation} \label{expected-return-mixture}
J^\DecP(\Mixture):=\E_{\Policy\sim\Mixture}\left[J^\DecP(\Policy)\right].
\end{equation}
The statement that such a policy exists is analogous to, and more general than, a result by kuhn1953contributions, which says that in an extensive-form game, given some conditions on the game, for every mixed strategy there is an equivalent behavior strategy. Kuhn's theorem is relevant to Dec-POMDPs since there is a correspondence between Dec-POMDPs and extensive-form games oliehoek2006dec. For Dec-POMDPs, the analogous result states that for every distribution over deterministic policies, there is an equivalent stochastic policy. We cannot directly apply Kuhn's theorem here, as we require a result for distributions over stochastic policies instead of deterministic policies. This is because such a result fits better with our remaining setup—for instance, the domain of the OP objective is the set of stochastic policies, and learning algorithms are defined as distributions over stochastic policies.
Nevertheless, our proof is based on similar ideas as, for instance, the proof of Kuhn's theorem in maschler_solan_zamir_2013, after translating between the different formal frameworks of extensive-form games and Dec-POMDPs. Recall that \(\Omega\) is a set on which all of the random variables of the Dec-POMDP are defined; for concreteness, one could take \(\Omega:=\HistorySet\), the set of complete histories.
To define a policy \(\Policy^\Mixture\) that is equivalent to \(\Mixture\), we begin by defining a new measure space \((\PolicySet\times\Omega,\mathcal{F}\otimes\PowerSet(\Omega))\), which is the product space of the space of policies \((\PolicySet,\mathcal{F})\) with the Dec-POMDP environment \((\Omega,\PowerSet(\Omega))\). On this space, for a given distribution \(\Mixture\), we define a probability measure \(\Prob_{\Mixture}\) which represents the procedure outlined above, i.e., in which a policy \(\Policy\) is chosen according to \(\Mixture\), and then samples in \(\Omega\) are distributed according to \(\Prob_\Policy\). Formally, we define \(\Prob_\Mixture\) as the unique measure\footnote{\(\Prob_\Mixture\) is the semidirect product of \(\Mixture\) with the Markov kernel \(\kappa(\cdot\mid \Policy):=\Prob_\Policy(\cdot)\).} such that
\begin{equation}\label{defn-semidirect-product}
\Prob_\Mixture(\mathcal{Z}\times \mathcal{Q}):=\E_{\Policy\sim\Mixture}\left[\mathds{1}_{\mathcal{Z}}(\Policy)\Prob_\Policy(\mathcal{Q})\right]
\end{equation}
for any measurable sets \(\mathcal{Z}\subseteq\PolicySet\) and \(\mathcal{Q}\subseteq\Omega\). Note that the product sets \(\mathcal{Z}\times\mathcal{Q}\) for measurable \(\mathcal{Z}\subseteq\PolicySet\) and \(\mathcal{Q}\subseteq\Omega\) are a \(\pi\)-system and generate the product \(\sigma\)-algebra \(\mathcal{F}\otimes\PowerSet(\Omega)\). Hence, by Carathéodory's extension theorem, there is a unique measure satisfying this definition [see][ch. 1]williams1991probability.
On this new product space \(\PolicySet\times\Omega\), define the random variable \(\PolicyLatent\) as the projection onto \(\PolicySet\). We can define histories \(\HistoryRV\) and all other random variables defined on the space \((\Omega,\PowerSet(\Omega))\) by composing them with the projection onto \(\Omega\) (for notational convenience, we denote these random variables using the same symbols in both spaces). Note that the conditional probability of a particular trajectory \(\History\) given the policy \(\PolicyLatent\) is just the probability of that history under \(\Prob_\PolicyLatent\), that is,
\begin{equation}\label{conditional-probability-mixture}
\Prob_\Mixture(\HistoryRV=\History\mid \PolicyLatent)=\Prob_\PolicyLatent(\HistoryRV=\History)
\end{equation}
for any \(\History\in\HistorySet\). Here, the conditional probability is a random variable, defined via the conditional expectation
\[\Prob_\Mixture(\HistoryRV=\History\mid\PolicyLatent):=\E_\Mixture\left[\mathds{1}_{\HistoryRV=\History}\mid \PolicyLatent\right].\]
Intuitively, for a given sample \((\Policy,\omega)\), the value of that random variable is the best estimate of the probability of \(\{\HistoryRV=\History\}\) given \(\PolicyLatent=\Policy\), ignoring \(\omega\).
As, in general, \(\{\PolicyLatent=\Policy\}\) may have zero probability, it is impossible to define the conditional probability \(\Prob(\HistoryRV=\History\mid\PolicyLatent=\Policy)\) via \(\Prob(\HistoryRV=\History\mid\PolicyLatent=\Policy):=\frac{\Prob(\HistoryRV=\History,\PolicyLatent=\Policy)}{\Prob(\PolicyLatent=\Policy)}\). It is still possible to define the conditional expectation \(\E_\Mixture\left[\mathds{1}_{\HistoryRV=\History}\mid \PolicyLatent\right]\), though. Here, we briefly give the definition of the conditional expectation and show that (<ref>) is correct. For a reference on conditional expectations, refer to [][ch. 9]williams1991probability.
Applied to our setup, the conditional expectation of \(\mathds{1}_{\HistoryRV=\History}\) given \(\PolicyLatent\) is any random variable (which can be shown to be almost surely unique), denoted by \(\E_\Mixture\left[\mathds{1}_{\HistoryRV=\History}\mid \PolicyLatent\right]\), that is measurable with respect to
\[\sigma(\PolicyLatent):=\{\PolicyLatent^{-1}(\mathcal{Z})\mid \mathcal{Z}\in\mathcal{F}\}=\{\mathcal{Z}\times\Omega\mid\mathcal{Z}\in\mathcal{F}\}\]
such that
\begin{equation}\label{eq:46}\E_{\Mixture}\left[\mathds{1}_{\mathcal{X}}\E_\Mixture\left[\mathds{1}_{\HistoryRV=\History} \mid \PolicyLatent\right]\right]
=\E_{\Mixture}\left[\mathds{1}_{\mathcal{X}}\mathds{1}_{\HistoryRV=\History}\right]
=\Prob_\Mixture\left(\mathcal{X}\cap\{\HistoryRV=\History\}\right)
\end{equation}
for all \(\mathcal{X}\in\sigma(\PolicyLatent)\). The fact that the random variable is measurable with respect to \(\sigma(\PolicyLatent)\) can be equivalently expressed as saying that it can be written as a function of \(\PolicyLatent\). Moreover, Equation <ref> says that the conditional expectation should represent correct averages of the random variable \(\mathds{1}_{\HistoryRV=\History}\) over the sets in \(\sigma(\PolicyLatent)\).
To show that (<ref>) is correct, let \(\mathcal{X}\in\sigma(\PolicyLatent)\) be arbitrary. Since \(\mathcal{X}\) is of the form \(\mathcal{Z}\times\Omega\), and \(\{\HistoryRV=\History\}=\PolicySet\times\mathcal{Q}\) for some \(\mathcal{Q}\subseteq\Omega\), it is \((\mathcal{Z}\times\Omega)\cap\{\HistoryRV=\History\}=\mathcal{Z}\times\mathcal{Q}\). Using Equation <ref>, it follows that
\begin{multline}
\E_\Mixture\left[\mathds{1}_{\mathcal{X}}\Prob_\PolicyLatent(\HistoryRV=\History)\right]
=\E_\Mixture\left[\mathds{1}_{\mathcal{Z}\times\Omega}\Prob_\PolicyLatent(\mathcal{Q})\right]
=\E_{\Policy\sim \Mixture}\left[\mathds{1}_{\mathcal{Z}}(\Policy)\Prob_\Policy(\mathcal{Q})\right]
\\
\overset{(\ref{defn-semidirect-product})}{=}
\Prob_\Mixture(\mathcal{Z}\times \mathcal{Q})
=\Prob_\Mixture\left(\mathcal{X}\cap\{\HistoryRV=\History\}\right).
\end{multline}
Hence, setting \(\E_\Mixture[\mathds{1}_{\HistoryRV=\History}\mid \PolicyLatent]:=\Prob_\PolicyLatent(\HistoryRV=\History)\) satisfies condition (<ref>). \(\Prob_\PolicyLatent(\HistoryRV=\History)\) is also \(\sigma(\PolicyLatent)\)-measurable, since it is just a function of \(\PolicyLatent\).
We will repeatedly make use of (<ref>) in the following, together with the tower property, which, applied to our case, says that
\begin{equation}\label{eq:equation-tower-prop-z}\Prob_\Mixture(\HistoryRV=\History)=\E_\Mixture\left[\Prob_\Mixture\left(\HistoryRV=\History\mid\PolicyLatent\right)\right]
\overset{(\ref{conditional-probability-mixture})}{=}
\E_\Mixture\left[\Prob_\PolicyLatent\left(\HistoryRV=\History\right)\right]
\end{equation}
for any history \(\History\in\HistorySet\) [see][ch. 9.7]williams1991probability.
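To make the product-space construction concrete, the following minimal Python sketch (an illustrative example, not the paper's implementation) enumerates a toy setting with two "policies" and three outcomes. It builds the measure \(\Prob_\Mixture\) from the semidirect-product definition above as a finite table and checks the tower property, i.e., that the marginal probability of each history equals the \(\Mixture\)-average of the per-policy probabilities; the identities hold by construction here, so the sketch is only meant to make the definitions tangible.
\begin{verbatim}
# Toy illustration of the semidirect-product measure P_mu on (policies x outcomes)
# and of the tower property  P_mu(H = h) = E_{pi ~ mu}[ P_pi(H = h) ].
# Everything is finite and enumerable.

policies = ["always_left", "always_right"]
outcomes = ["h1", "h2", "h3"]

mu = {"always_left": 0.25, "always_right": 0.75}      # distribution over policies
P_pi = {                                              # per-policy history distributions
    "always_left":  {"h1": 0.7, "h2": 0.2, "h3": 0.1},
    "always_right": {"h1": 0.1, "h2": 0.3, "h3": 0.6},
}

# Semidirect product: P_mu({pi} x {h}) = mu(pi) * P_pi(h).
P_mu = {(pi, h): mu[pi] * P_pi[pi][h] for pi in policies for h in outcomes}
assert abs(sum(P_mu.values()) - 1.0) < 1e-12          # P_mu is a probability measure

# Tower property: marginal over histories = mu-average of per-policy probabilities.
for h in outcomes:
    marginal = sum(P_mu[(pi, h)] for pi in policies)
    averaged = sum(mu[pi] * P_pi[pi][h] for pi in policies)
    assert abs(marginal - averaged) < 1e-12

# Conditioning P_mu on the policy recovers P_pi, as in the display above.
for pi in policies:
    for h in outcomes:
        conditional = P_mu[(pi, h)] / sum(P_mu[(pi, g)] for g in outcomes)
        assert abs(conditional - P_pi[pi][h]) < 1e-12

print("tower property and conditional-probability identity verified")
\end{verbatim}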
Using the measure space defined above, we now define a local policy \(\Policy_i^{\Mixture_i}\) for an agent \(i\in\PlayerSet\), corresponding to a distribution \(\Mixture_i\in\Delta(\PolicySet_i)\). For an action \(\Action_i\in\ActionSet_i\) and an action-observation history \( \AOHistory_{i,t}\in\AOHistorySet_{i,t}\), we define \(\Policy_i^{\Mixture_i}(a_i\mid \AOHistory_{i,t})\) as the probability that agent \(i\), who follows a policy that is sampled from \(\Mixture_i\), plays action \(\Action_i\), conditional on $\{\AOHistoryRV_{i,t}=\AOHistory_{i,t}\}$. Let \(i\in\PlayerSet\) and let \(\Mixture_i\in\Delta(\PolicySet_i)\) be a distribution over local policies of agent \(i\). We define the local policy \(\Policy_i^{\Mixture_i}\) corresponding to \(\Mixture_i\) in the following way. For \(\Action_i\in\ActionSet_i\), \(t\in\{0,\dotsc,\Tmax\}\) and \(\AOHistory_{i,t}\in\AOHistorySet_{i,t}\), let \begin{equation} \Policy^{\Mixture_i}_i(a_i\mid \AOHistory_{i,t}) := \Prob_{\Mixture_i\otimes\Mixture_{-i}}(\ActionRV_{i,t}=a_i\mid \AOHistoryRV_{i,t}=\AOHistory_{i,t}),\end{equation} where \(\Mixture_{-i}\) is any distribution over \(\PolicySet_{-i}\) with independent local policies such that \[\Prob_{\Mixture_i\otimes\Mixture_{-i}}( \AOHistoryRV_{i,t}=\AOHistory_{i,t})>0.\] If no such distribution exists, we let \(\Policy^\Mixture_i(a_i\mid \AOHistory_{i,t}):=\frac{1}{|\ActionSet_i|}\). Note that if \(\Prob_{\Mixture_i\otimes\Mixture_{-i}}( \AOHistoryRV_{i,t}=\AOHistory_{i,t})=0\) for all the distributions \(\Mixture_{-i}\in\Delta(\PolicySet_{-i})\) over opponent policies, then agent \(i\)'s action-observation history \(\AOHistory_{i,t}\) is almost never reached, independent of the other agents' policies. In that case, we can define the policy arbitrarily and this will never matter for the distribution over histories under \(\Prob_{\Mixture_i\otimes\Mixture_{-i}}\), and as we will see, neither for the distribution under \(\Prob_{\Policy_i^{\Mixture_i},\Policy_{-i}}\) for arbitrary \(\Policy_{-i}\). First, we need to make sure that \(\Policy_i^{\Mixture_i}\) is well-defined, i.e., it does not depend on the chosen distribution \(\Mixture_{-i}\). Unfortunately, the proof for the following Lemma is somewhat technical. Let \(i\in\PlayerSet\), \(\Mixture_i\in\Delta(\PolicySet_i)\), \(t\in\{0,\dotsc,\Tmax\}\), \(\AOHistory_{i,t}\in\AOHistorySet_i\), and \(\Action_i\in\ActionSet_i\). Let \(\Mixture_{-i},\Mixture'_{-i}\in\Delta(\PolicySet_{-i})\) be any two distributions with independent local policies such that \(\Prob_{\Mixture_i\otimes\Mixture_{-i}}( \AOHistoryRV_{i,t}=\AOHistory_{i,t})>0\) and \(\Prob_{\Mixture_i\otimes\Mixture'_{-i}}( \AOHistoryRV_{i,t}=\AOHistory_{i,t})>0\). Then it is \[\Prob_{\Mixture_i\otimes\Mixture_{-i}}(\ActionRV_{i,t}=a_i\mid \AOHistoryRV_{i,t}=\AOHistory_{i,t}) =\Prob_{\Mixture_i\otimes\Mixture'_{-i}}(\ActionRV_{i,t}=a_i\mid \AOHistoryRV_{i,t}=\AOHistory_{i,t}).\] In the following, let \(i\in\PlayerSet, t\in\{0,\dotsc,\Tmax\}\) be fixed. Our goal is to find an expression for the distribution of \(\PolicyLatent_i\) given \(\AOHistoryRV_{i,t}\) that only depends on \(\Mixture_i\). If we can do that, we can also show that the probability of a particular action chosen by an agent is independent of the distributions over other agents' policies. To begin, we analyze the joint distribution of \(\PolicyLatent_i\) and \(\AOHistoryRV_{i,t}\) for an arbitrary distribution \(\Mixture\in\Delta(\PolicySet)\) with independent local policies. 
Note that by assumption, the \(Z_j\) are independent for \(j\in\PlayerSet\) under \(\Mixture\). We now show that conditioning on \(\AOHistoryRV_{i,t}\) still leaves \(Z_i\) independent from \(Z_{-i}\).
[Figure: Bayesian-network diagram of the random variables \(Z_i\), \(A_{i,0}\), \(O_{i,1}\), \(A_{i,1}\), \(O_{i,2}\), \(A_{i,2}\), \(\dotsc\) on the space \(\PolicySet\times\Omega\) (TikZ drawing omitted). Caption: Part of a Bayesian graph of the random variables on the space \(\PolicySet\times\Omega\). Here, the gray-marked nodes are part of the action-observation history \(\AOHistoryRV_{i,2}\). Conditional on \(\AOHistoryRV_{i,2}\), there is an unblocked path, marked in teal, from \(Z_i\) to \(A_{i,2}\) and to nodes below \(A_{i,2}\) not displayed here, making them \(d\)-connected and thus dependent. The path marked in a lighter magenta, on the other hand, is blocked, illustrating that the part of the graph below \(O_{i,2}\) may be independent of \(Z_i\).]
To see this, one can consider a Bayesian graph of the random variables on the space \(\PolicySet\times\Omega\). For a reference on Bayesian graphs and the concepts discussed below, refer to [][ch. 1.2]Pearl2009-bb. In Figure <ref>, we have displayed a part of this Bayesian graph with nodes for agent \(i\), indicating left-out parts of the graph with dots. The arrows in the graph illustrate the dependence relationships between the different variables: if there is an arrow from one node to another, this means that the other node depends on that node. Nodes belonging to the action-observation history \(\AOHistoryRV_{i,2}\) of agent \(i\) are marked in gray.
Given such a Bayesian graph, the \(d\)-separation criterion tells us which variables are dependent after conditioning on a set \(\mathcal{V}\) of variables. The criterion specifies valid, “unblocked” paths in the graph, depending on the graph structure and the nodes that are being conditioned on.
The \(d\)-separation criterion says that two variables are dependent conditional on the variables in \(\mathcal{V}\) if and only if there is an unblocked path in the graph between them; the variables are then said to be \(d\)-connected. If there is no unblocked path, then the variables are \(d\)-separated. An unblocked path can contain a chain \(X\rightarrow W\rightarrow Y\) or a fork \(X\leftarrow W\rightarrow Y\) if \(W\) is not being conditioned on. If it contains a collider \(X\rightarrow W\leftarrow Y\), on the other hand, i.e., the two incident edges to \(W\) in the path are both directed towards the node, then the path is blocked, unless \(W\) or a descendant of \(W\) in the graph is in \(\mathcal{V}\). For instance, the path that is marked in a lighter magenta in Figure <ref> is blocked—the path cannot go from \(Z_i\) over \(\ActionRV_{i,1}\) to \(\ObservationRV_{i,2}\). An unblocked path is marked in teal.
Using the \(d\)-separation criterion, one can tell that \(Z_i\) may be \(d\)-separated from left-out parts of the graph in Figure <ref> indicated by the dots below \(\ActionRV_{i,0}\), for instance, but that it is \(d\)-connected to some variables in the part below \(\ActionRV_{i,2}\). We do not work this out here completely, but, considering the entire graph, one can see that there is no unblocked path from \(Z_i\) to the \(Z_{-i}\), because such a path would inevitably have to traverse a collider that is not being conditioned on and that has no descendants that are being conditioned on (for instance, the observations of all other agents are colliders that connect the \(Z_{-i}\) with \(Z_i\)). Generalizing from the example of \(\AOHistory_{i,2}\), we can conclude that \(Z_i\) and \(Z_{-i}\) are independent given \(\AOHistoryRV_{i,t}\).
Now let \(\mathcal{Z}_j\subseteq \PolicySet_j\) be measurable sets for \(j\in\PlayerSet\), and let \(\AOHistory_{i,t}\in\AOHistorySet_{i,t}\) be arbitrary. Then, using the above, it follows that
\begin{align}
&\E_{\Policy_{-i}\sim\Mixture_{-i}}[\mathds{1}_{\Policy_{-i}\in \mathcal{Z}_{-i}}\E_{\Policy_i\sim\mu_i}[\mathds{1}_{\Policy_i\in \mathcal{Z}_i}\Prob_{\Policy_i,\Policy_{-i}}(\AOHistoryRV_{i,t}=\AOHistory_{i,t})]]
\\\label{eq:27b}
&=\Prob_{\Mixture}(Z_i\in \mathcal{Z}_i,\AOHistoryRV_{i,t}=\AOHistory_{i,t},\PolicyLatent_{-i}\in \mathcal{Z}_{-i})
\\
&=\Prob_{\Mixture}(Z_i\in \mathcal{Z}_i\mid\AOHistoryRV_{i,t}=\AOHistory_{i,t},\PolicyLatent_{-i}\in \mathcal{Z}_{-i})
\Prob_{\Mixture}(\AOHistoryRV_{i,t}=\AOHistory_{i,t},\PolicyLatent_{-i}\in \mathcal{Z}_{-i})
\\
&=\Prob_{\Mixture}(Z_i\in \mathcal{Z}_i\mid\AOHistoryRV_{i,t}=\AOHistory_{i,t})
\Prob_{\Mixture}(\AOHistoryRV_{i,t}=\AOHistory_{i,t},\PolicyLatent_{-i}\in \mathcal{Z}_{-i})
\\\label{eq:27c}
&=\E_{\Policy_{-i}\sim\Mixture_{-i}}[\mathds{1}_{\Policy_{-i}\in \mathcal{Z}_{-i}}\Prob_{\Mixture}(Z_i\in \mathcal{Z}_i\mid\AOHistoryRV_{i,t}=\AOHistory_{i,t})\E_{\Policy_i\sim\mu_i}[\Prob_{\Policy_i,\Policy_{-i}}(\AOHistoryRV_{i,t}=\AOHistory_{i,t})]],
\end{align}
where we use the argument about conditional independence in (<ref>) and the definition of \(\Prob_\Mixture\) from Equation <ref> in (<ref>) and (<ref>).
Since the sets \(\mathcal{Z}_{j}\) for \(j\in\PlayerSet\setminus\{i\}\) were arbitrary, it follows that \(\Mixture_{-i}\)-almost surely, it is \begin{multline}\label{equation-marginals-are-equal-1}\E_{\Policy_i\sim\mu_i}[\mathds{1}_{\Policy_i\in \mathcal{Z}_i}\Prob_{\Policy_i,Z_{-i}}(\AOHistoryRV_{i,t}=\AOHistory_{i,t})] \\= \Prob_{\Mixture}(Z_i\in \mathcal{Z}_i\mid\AOHistoryRV_{i,t}=\AOHistory_{i,t})\E_{\Policy_i\sim\mu_i}[\Prob_{\Policy_i,Z_{-i}}(\AOHistoryRV_{i,t}=\AOHistory_{i,t})].\end{multline} Moreover, since \(\Mixture\) was arbitrary, Equation <ref> holds for any distribution with independent local policies. If we divide by the term \(\E_{\Policy_i\sim\mu_i}[\Prob_{\Policy_i,Z_{-i}}(\AOHistoryRV_{i,t}=\AOHistory_{i,t})]\), this becomes \begin{equation}\label{eq:200} \Prob_{\Mixture}(Z_i\in \mathcal{Z}_i\mid\AOHistoryRV_{i,t}=\AOHistory_{i,t}) =\frac{\E_{\Policy_i\sim\mu_i}[\mathds{1}_{\Policy_i\in \mathcal{Z}_i}\Prob_{\Policy_i,Z_{-i}}(\AOHistoryRV_{i,t}=\AOHistory_{i,t})]}{ \E_{\Policy_i\sim\mu_i}[\Prob_{\Policy_i,Z_{-i}}(\AOHistoryRV_{i,t}=\AOHistory_{i,t})]},\end{equation} which gives us a formula for the distribution of \(\PolicyLatent_i\) given \(\AOHistoryRV_{i,t}\) under the measure \(\Prob_\Mixture\) that is independent of the distributions \(\Mixture_{-i}\). But to be able to do so, we have to find a value for \(\PolicyLatent_{-i}\) such that \(\E_{\Policy_i\sim\mu_i}[\Prob_{\Policy_i,Z_{-i}}(\AOHistoryRV_{i,t}=\AOHistory_{i,t})]\) is nonzero under \(\Mixture\). Now let \(\Mixture_i\in\Delta(\PolicySet_i)\) and let \(\Mixture_{-i},\Mixture'_{-i}\in\Delta(\PolicySet_{-i})\) be any two distributions with independent local policies such that \(\Prob_{\Mixture_i\otimes\Mixture_{-i}}( \AOHistoryRV_{i,t}=\AOHistory_{i,t})>0\) and \(\Prob_{\Mixture_i\otimes\Mixture'_{-i}}( \AOHistoryRV_{i,t}=\AOHistory_{i,t})>0\). Define \(\Mixture_i':=\Mixture_i\), \(\Mixture:=\Mixture_i\otimes\Mixture_{-i}\), and \(\Mixture':=\Mixture_i\otimes\Mixture'_{-i}\). Our goal is to show that \[\Prob_{\Mixture}(\ActionRV_{i,t}=a_i\mid \AOHistoryRV_{i,t}=\AOHistory_{i,t}) =\Prob_{\Mixture'}(\ActionRV_{i,t}=a_i\mid \AOHistoryRV_{i,t}=\AOHistory_{i,t}).\] To that end, we define a third distribution \(\MixtureHat:=\otimes_{j\in\PlayerSet}(\frac{1}{2}\Mixture_j+\frac{1}{2}\Mixture_j')\). Apparently, it is then \(\Mixture_i=\MixtureHat_i=\Mixture'_i\) and also \(\MixtureHat\) has independent local policies. We now prove separately for \(\Mixture\) and \(\Mixture'\) that the distribution of \(\PolicyLatent_i\) given \(\{\AOHistoryRV_{i,t}=\AOHistory_{i,t}\}\) under \(\Mixture\) respectively \(\Mixture'\) is equal to the one under \(\MixtureHat\). To do so, we use that \(\Mixture\) and \(\Mixture'\) are absolutely continuous with respect to \(\MixtureHat\) to find desired values for \(\PolicyLatent_{-i}\) such that we can apply Equation <ref>. First, let \(\mathcal{Z}_i\in\PolicySet_i\) be an arbitrary measurable set. 
Using Equation <ref>, we can find measurable sets \(\mathcal{Z}_{-i},\hat{\mathcal{Z}}_{-i}\subseteq\PolicySet_{-i}\) such that \(\Mixture_{-i}(\mathcal{Z}_{-i})=1=\MixtureHat_{-i}(\hat{\mathcal{Z}}_{-i})\), and such that
\begin{multline}\label{eq:28a}\E_{\Policy_i\sim\mu_i}[\mathds{1}_{\Policy_i\in \mathcal{Z}_i}\Prob_{\Policy_i,\Policy_{-i}}(\AOHistoryRV_{i,t}=\AOHistory_{i,t})]
\\= \Prob_{\Mixture}(Z_i\in \mathcal{Z}_i\mid\AOHistoryRV_{i,t}=\AOHistory_{i,t})\E_{\Policy_i\sim\mu_i}[\Prob_{\Policy_i,\Policy_{-i}}(\AOHistoryRV_{i,t}=\AOHistory_{i,t})]\end{multline}
for any \(\Policy_{-i}\in \mathcal{Z}_{-i}\), and
\begin{multline}\label{eq:28b}\E_{\Policy_i\sim\MixtureHat_i}[\mathds{1}_{\Policy_i\in \mathcal{Z}_i}\Prob_{\Policy_i,\PolicyHat_{-i}}(\AOHistoryRV_{i,t}=\AOHistory_{i,t})]
\\= \Prob_{\hat{\Mixture}}(Z_i\in \mathcal{Z}_i\mid\AOHistoryRV_{i,t}=\AOHistory_{i,t})\E_{\Policy_i\sim\MixtureHat_i}[\Prob_{\Policy_i,\PolicyHat_{-i}}(\AOHistoryRV_{i,t}=\AOHistory_{i,t})]\end{multline}
for any \(\PolicyHat_{-i}\in \hat{\mathcal{Z}}_{-i}\).
Next, it follows that \(\Prob_\Mixture(\{Z_{-i}\in \mathcal{Z}_{-i}\}\cap\{\AOHistoryRV_{i,t}=\AOHistory_{i,t}\})>0\), which by definition of \(\MixtureHat\) implies \(\Prob_{\MixtureHat}(\{Z_{-i}\in \mathcal{Z}_{-i}\}\cap\{\AOHistoryRV_{i,t}=\AOHistory_{i,t}\})>0\) and thus by definition of \(\hat{\mathcal{Z}}_{-i}\) also \(\Prob_{\MixtureHat}(\{Z_{-i}\in \mathcal{Z}_{-i}\cap\hat{\mathcal{Z}}_{-i}\}\cap\{\AOHistoryRV_{i,t}=\AOHistory_{i,t}\})>0\). Since
\begin{multline}\Prob_{\MixtureHat}(\{Z_{-i}\in \mathcal{Z}_{-i}\cap\hat{\mathcal{Z}}_{-i}\}\cap\{\AOHistoryRV_{i,t}=\AOHistory_{i,t}\})
\\
=\E_{\Policy_{-i}\sim\MixtureHat_{-i}}\left[\mathds{1}_{\Policy_{-i}\in\mathcal{Z}_{-i}\cap\hat{\mathcal{Z}}_{-i}}\E_{\Policy_i\sim\MixtureHat_i}\left[\Prob_{\Policy_i,\Policy_{-i}}(\AOHistoryRV_{i,t}=\AOHistory_{i,t})\right]\right]>0,
\end{multline}
there must be \(\Policy_{-i}\in \Proj_{\PolicySet_{-i}}(\{Z_{-i}\in \mathcal{Z}_{-i}\cap\hat{\mathcal{Z}}_{-i}\}\cap\{\AOHistoryRV_{i,t}=\AOHistory_{i,t}\})\) such that
\[\E_{\Policy_i\sim\MixtureHat_i}[\Prob_{\Policy_i,\Policy_{-i}}(\AOHistoryRV_{i,t}=\AOHistory_{i,t})]>0.\]
Lastly, using that \(\MixtureHat_i=\Mixture_i\), it follows that
\begin{equation}\E_{\Policy_i\sim\Mixture_i}[\Prob_{\Policy_i,\Policy_{-i}}(\AOHistoryRV_{i,t}=\AOHistory_{i,t})]>0.
\end{equation}
Hence, we can use Equations <ref> and <ref> to conclude that
\begin{align}
\Prob_{\Mixture}(Z_i\in \mathcal{Z}_i\mid\AOHistoryRV_{i,t}=\AOHistory_{i,t})
&=\frac{\E_{\Policy_i\sim\mu_i}[\mathds{1}_{\Policy_i\in \mathcal{Z}_i}\Prob_{\Policy_i,\Policy_{-i}}(\AOHistoryRV_{i,t}=\AOHistory_{i,t})]}
{\E_{\Policy_i\sim\mu_i}[\Prob_{\Policy_i,\Policy_{-i}}(\AOHistoryRV_{i,t}=\AOHistory_{i,t})]}
\\
&=\frac{\E_{\Policy_i\sim\MixtureHat_i}[\mathds{1}_{\Policy_i\in \mathcal{Z}_i}\Prob_{\Policy_i,\Policy_{-i}}(\AOHistoryRV_{i,t}=\AOHistory_{i,t})]}
{\E_{\Policy_i\sim\MixtureHat_i}[\Prob_{\Policy_i,\Policy_{-i}}(\AOHistoryRV_{i,t}=\AOHistory_{i,t})]}
\\
&=\Prob_{\MixtureHat}(Z_i\in \mathcal{Z}_i\mid\AOHistoryRV_{i,t}=\AOHistory_{i,t}).
\end{align}
Now note that one can make an exactly analogous argument for \(\Mixture'\) and \(\MixtureHat\), potentially using a different \(\Policy_{-i}\). Hence, it follows that
\begin{equation}\label{equation-marginals-are-equal-2}
\Prob_{\Mixture}(Z_i\in \mathcal{Z}_i\mid\AOHistoryRV_{i,t}=\AOHistory_{i,t})=\Prob_{\MixtureHat}(Z_i\in \mathcal{Z}_i\mid\AOHistoryRV_{i,t}=\AOHistory_{i,t})=\Prob_{\Mixture'}(Z_i\in \mathcal{Z}_i\mid\AOHistoryRV_{i,t}=\AOHistory_{i,t}).
\end{equation}
Since \(\mathcal{Z}_i\subseteq\PolicySet_i\) was arbitrary, it follows that the distribution of \(Z_i\) given \(\{\AOHistoryRV_{i,t}=\AOHistory_{i,t}\}\) is equal under \(\Prob_\Mixture\) and \(\Prob_{\Mixture'}\).
To conclude the proof, we can use this to show that also the distribution of actions under \(\Prob_\Mixture\) and \(\Prob_{\Mixture'}\) are equal, conditional on \(\{\AOHistoryRV_{i,t}=\AOHistory_{i,t}\}\). For \(\Action_i\in\ActionSet_i\), it is \begin{align} \label{eq:32a} \Prob_{\Mixture}(\ActionRV_{i,t}=\Action_i\mid \AOHistoryRV_{i,t}=\AOHistory_{i,t}) &=\E_{\Mixture}\left[\Prob_\Mixture(\ActionRV_{i,t}=\Action_i\mid\PolicyLatent, \AOHistoryRV_{i,t})\mid \AOHistoryRV_{i,t}=\AOHistory_{i,t}\right] \\\label{eq:32c} &=\E_{\Mixture}\left[\Prob_\PolicyLatent(\ActionRV_{i,t}=\Action_i\mid \AOHistoryRV_{i,t})\mid \AOHistoryRV_{i,t}=\AOHistory_{i,t}\right] \\&=\E_{\Mixture}\left[\PolicyLatent_i(\Action_i\mid \AOHistory_{i,t})\mid \AOHistoryRV_{i,t}=\AOHistory_{i,t}\right] \\ \E_{\Mixture'}\left[\PolicyLatent_i(\Action_i\mid \AOHistory_{i,t})\mid \AOHistoryRV_{i,t}=\AOHistory_{i,t}\right] \\&=\E_{\Mixture'}\left[\Prob_\PolicyLatent(\ActionRV_{i,t}=\Action_i\mid \AOHistoryRV_{i,t})\mid \AOHistoryRV_{i,t}=\AOHistory_{i,t}\right] \\ \label{eq:32d} &=\E_{\Mixture'}\left[\Prob_{\Mixture'}(\ActionRV_{i,t}=\Action_i\mid \PolicyLatent,\AOHistoryRV_{i,t})\mid \AOHistoryRV_{i,t}=\AOHistory_{i,t}\right] \\ \label{eq:32b} &=\Prob_{\Mixture'}(\ActionRV_{i,t}=\Action_i\mid \AOHistoryRV_{i,t}=\AOHistory_{i,t}), \end{align} where we have used the tower property in (<ref>) and (<ref>), and Equation <ref> in (<ref>) and (<ref>). This is what we wanted to show. In the following, we write \(\Policy^\Mixture:=(\Policy_i^{\Mixture_i})_{i\in\PlayerSet}\) for the joint policy corresponding to a distribution \(\Mixture\) with independent local policies. Our next goal is to prove that the distribution over histories is the same under \(\Prob_{\Policy^\Mixture}\) as under \(\Prob_{\Mixture}\). Consider any distribution \(\Mixture\in\Delta(\PolicySet)\) with independent local policies. Let \(\Policy^\Mixture\) be the joint policy corresponding to \(\Mixture\), as defined above. Then the history \(\HistoryRV\) has the same distribution under \(\Prob_\Mixture\) as under \(\Prob_{\Policy^\Mixture}\). In particular, it is \(J^D(\Mixture)=J^D(\Policy^\Mixture)\). Fix a distribution \(\Mixture\in\Delta(\PolicySet)\) with independent local policies. We show by induction that for all \(t\in\{0,\dotsc,\Tmax\}\), it is \(\Prob_\Mixture(\HistoryRV_t=\History_t)=\Prob_{\Policy^\Mixture}(\HistoryRV_t=\History_t)\) for any \(\History_t\in\HistorySet_t\). First, note that by Definition <ref>, for any \(i\in\PlayerSet,\Action_i\in\ActionSet_i, t\in\{0,\dotsc,h\}\), and \(\AOHistory_{i,t}\in\AOHistorySet_i\) such that \(\Prob_\Mixture( \AOHistoryRV_{i,t}=\AOHistory_{i,t})>0\), it is \begin{equation}\label{eq:5}\Prob_{\Policy^\Mixture}(\ActionRV_{i,t}=\Action_i\mid \AOHistoryRV_{i,t}=\AOHistory_{i,t})=\Policy^{\Mixture_i}_i(a_i\mid \AOHistoryRV_{i,t}=\AOHistory_{i,t}) = \Prob_\Mixture(\ActionRV_{i,t}=\Action_i\mid \AOHistoryRV_{i,t}=\AOHistory_{i,t}). \end{equation} In particular, this holds for \(t=0\), in which case it is \(\AOHistoryRV_{i,0}=\emptyset\) for \(i\in\PlayerSet\). 
Hence, for \(\Action\in\ActionSet, \State\in\StateSet\), it is
\begin{multline}\Prob_\Mixture(\StateRV_0=\State,\ActionRV_0=\Action)
\overset{(\ref{eq:equation-tower-prop-z})}{=}
\E_\Mixture[\Prob_Z(\StateRV_0=\State,\ActionRV_{0}=\Action)]
\\
\overset{\text{(i)}}{=}b_0(\State)\E_\Mixture[\Prob_Z(\ActionRV_0=\Action)]
=b_0(\State)\E_\Mixture[\prod_{i\in\PlayerSet}Z_i(\Action_{i}\mid \emptyset)]
\overset{\text{(ii)}}{=}b_0(\State)\prod_{i\in\PlayerSet}\E_\Mixture[Z_i(\Action_{i}\mid \emptyset)]
\\
\overset{\text{(\ref{eq:equation-tower-prop-z})}}{=}
b_0(\State)\prod_{i\in\PlayerSet}\Prob_\Mixture(\ActionRV_{i,0}=\Action_i\mid \AOHistoryRV_{i,0}=\emptyset)
\overset{(\ref{eq:5})}{=}
b_0(\State)\prod_{i\in\PlayerSet}\Policy^{\Mixture_i}_i(\Action_i\mid \emptyset)
\\
=\Prob_{\Policy^\Mixture}(\StateRV_0=\State,\ActionRV_0=\Action).
\end{multline}
Here, we also use (i) the fact that the initial state distribution does not depend on the policy, and (ii) the fact that \(\Mixture\) has independent local policies and thus also the \(Z_i\) are independent under \(\Prob_\Mixture\).
Next, assume that \(0\leq t-1\leq \Tmax\) and \(\Prob_\Mixture(\HistoryRV_{t-1}=\History_{t-1})=\Prob_{\Policy^\Mixture}(\HistoryRV_{t-1}=\History_{t-1})\) for any \(\History_{t-1}\in\HistorySet_{t-1}\). Let \(\History_{t-1}=(\State_0,\Action_0,\Reward_0,\State_1,\dotsc,\Reward_{t-1})\in\HistorySet_{t-1}\) be arbitrary such that \(\Prob_\Mixture(\HistoryRV_{t-1}=\History_{t-1})=\Prob_{\Policy^\Mixture}(\HistoryRV_{t-1}=\History_{t-1})>0\).
As in the proof of Lemma <ref>, it follows from considering the \(d\)-separation criterion on a Bayesian graph of the random variables defined on \(\PolicySet\times\Omega\) that conditioning on \(\HistoryRV_{t-1}\) does not make the \(Z_i\) dependent [see][ch. 1.2]Pearl2009-bb. The same criterion also says that, after conditioning on \(A_{i,t-1}\) and \(\AOHistoryRV_{i,t-1}\), one cannot gain additional information about \(Z_i\) from the other components of \(\HistoryRV_{t-1}\) or from \(\ObservationRV_{i,t}\) (that is, \(A_{i,t-1}\) and \(\AOHistoryRV_{i,t-1}\) \(d\)-separate \(Z_i\) from the rest of the history).
Using this in (<ref>), it follows for arbitrary \(\State_t,\Observation_t,\Action_t,\Reward_t\) that \begin{align}\label{eq:26c} &\Prob_\Mixture(\StateRV_t=\State_t,\ObservationRV_t=\Observation_t,\ActionRV_t=\Action_t,\RewardRV_t=\Reward_t\mid \HistoryRV_{t-1}=\History_{t-1}) \\ &=\E_\Mixture[\Prob_\Mixture(\StateRV_t=\State_t,\ObservationRV_t=\Observation_t,\ActionRV_t=\Action_t,\RewardRV_t=\Reward_t\mid \HistoryRV_{t-1},Z)\mid \HistoryRV_{t-1}=\History_{t-1}] \\ =\E_\Mixture[P(\State_t\mid \State_{t-1},\Action_{t-1})O(\Observation_t\mid \State_t,\Action_{t-1})\prod_{i\in\PlayerSet}Z_i(\Action_{i,t}\mid\AOHistory_{i,t})\mathds{1}_{\RewardFunction(\State,\Action)=\Reward}\mid \HistoryRV_{t-1}=\History_{t-1}] \\ \nonumber P(\State_t\mid \State_{t-1},\Action_{t-1})O(\Observation_t\mid \State,\Action_{t-1}) \\\label{eq:26a} &\quad\prod_{i\in\PlayerSet}\E_\Mixture[Z_i(\Action_{i,t}\mid \AOHistory_{i,t})\mid \ActionRV_{i,t-1}=\Action_{i,t-1}, \AOHistoryRV_{i,t-1}=\AOHistory_{i,t-1}]\mathds{1}_{\RewardFunction(\State_t,\Action_t)=\Reward_t} \\ &=P(\State_t\mid \State_{t-1},\Action_{t-1})O(\Observation_t\mid \State,\Action_{t-1})\prod_{i\in\PlayerSet}\Prob_\Mixture(\ActionRV_{i,t}=\Action_{i,t}\mid\AOHistoryRV_{i,t}=\AOHistory_{i,t})\mathds{1}_{\RewardFunction(\State_t,\Action_t)=\Reward_t} \\\label{eq:26b} &=P(\State_t\mid \State_{t-1},\Action_{t-1})O(\Observation_t\mid \State,\Action_{t-1})\prod_{i\in\PlayerSet}\Prob_{\Policy^\Mixture}(\ActionRV_{i,t}=\Action_{i,t}\mid\AOHistoryRV_{i,t}=\AOHistory_{i,t})\mathds{1}_{\RewardFunction(\State_t,\Action_t)=\Reward_t} \\ &= \Prob_{\Policy^\Mixture}(\StateRV_t=\State_t,\ObservationRV_t=\Observation_t,\ActionRV_t=\Action_t,\RewardRV_t=\Reward_t\mid \HistoryRV_{t-1}=\History_{t-1}).\label{eq:26d} \end{align} In (<ref>), we again use Equation <ref>, which is possible since \(\Prob_\Mixture(\HistoryRV_{t-1}=\History_{t-1})>0\) implies that also \(\Prob_\Mixture( \AOHistoryRV_{i,t-1}=\AOHistory_{i,t-1})>0\), where \(\AOHistory_{i,t-1}\) is defined as the projection of \(\History_{t-1}\) onto \(\AOHistorySet_{i,t-1}\). To conclude the inductive step, let \(\History_t\in\HistorySet_t\) and choose \(\History_{t-1}\) as the projection of \(\History_t\) onto \(\HistorySet_{t-1}\). If \[\Prob_\Mixture(\HistoryRV_{t-1}=\History_{t-1})=\Prob_{\Policy^\Mixture}(\HistoryRV_{t-1}=\History_{t-1})=0,\] necessarily also \(\Prob_\Mixture(\HistoryRV_{t}=\History_{t})=\Prob_{\Policy^\Mixture}(\HistoryRV_{t}=\History_{t})=0\) and there is nothing more to show. Assume now that this is not the case. Using the inductive hypothesis and Equations (<ref>)–(<ref>) in (*), it then follows that \begin{multline}\Prob_\Mixture( \HistoryRV_{t}=\History_t) =\Prob_\Mixture( \HistoryRV_{t}=\History_t\mid \HistoryRV_{t-1}=\History_{t-1})\Prob_\Mixture(\HistoryRV_{t-1}=\History_{t-1}) \\=\Prob_{\Mixture}(\StateRV_t=\State_t,\ObservationRV_t=\Observation_t,\ActionRV_t=\Action_t,\RewardRV_t=\Reward_t\mid \HistoryRV_{t-1}=\History_{t-1})\Prob_\Mixture(\HistoryRV_{t-1}=\History_{t-1}) \\\overset{\text{(*)}}{=}\Prob_{\Policy^\Mixture}(\StateRV_t=\State_t,\ObservationRV_t=\Observation_t,\ActionRV_t=\Action_t,\RewardRV_t=\Reward_t\mid \HistoryRV_{t-1}=\History_{t-1})\Prob_{\Policy^\Mixture}(\HistoryRV_{t-1}=\History_{t-1}) \\=\Prob_{\Policy^\Mixture}( \HistoryRV_{t}=\History_t). \end{multline} This concludes the induction. 
In particular, it follows that \[\Prob_\Mixture(\HistoryRV_\Tmax=\History_\Tmax)=\Prob_{\Policy^\Mixture}(\HistoryRV_\Tmax=\History_\Tmax).\] Hence, \(\HistoryRV=\HistoryRV_\Tmax\) has the same distribution under both \(\Prob_\Mixture\) and \(\Prob_{\Policy^\Mixture}\). Turning to the “in particular” statement, we can use the tower property (i) and Equation <ref> (ii) to follow that \begin{multline}J(\Policy^\Mixture)=\E_{\Policy^\Mixture}[\sum_{t=1}^hR_t]=\E_\Mixture[\sum_{t=1}^hR_t] \\ \overset{\text{(i)}}{=}\E_\Mixture\left[\E_\Mixture\left[\sum_{t=1}^hR_t\,\middle\vert\, Z\right]\right] \overset{\text{(ii)}}{=}\E_\Mixture\left[\E_Z\left[\sum_{t=1}^hR_t\right]\right] \\ =\E_{\Policy\sim \Mixture}[J(\Policy)] \end{multline} Before we turn to the OP distribution, we prove another useful Lemma, stating that the mapping from distributions to corresponding joint policies and the pushforward by isomorphisms commute. That is, the policy corresponding to the pushforward of a distribution and the pushforward of the policy corresponding to that distribution are the same. Let \(\DecP,\DecPSecond\) be isomorphic Dec-POMDPs with isomorphism \(\Isom\in\Iso(\DecP,\DecPSecond)\). Let \(\Policy\in\PolicySet^\DecP\) and let \(\Mixture\in \Delta(\PolicySet^\DecP)\) be any distribution with independent local policies. Then \[\Isom^*\Policy^\Mixture=\Policy^{\Isom^*\Mixture}.\] First, we have to find an expression for the marginal distributions \((\Isom^*\Mixture)_i\) and prove that \(\Isom^*\Mixture\) is a distribution with independent local policies. To that end, consider measurable sets \(\mathcal{Z}_i\subseteq\PolicySet^\DecPSecond_i\) for \(i\in\PlayerSet^\DecPSecond\) and define \(\mathcal{Z}:=\prod_{i\in\PlayerSet^\DecPSecond}\mathcal{Z}_i\). In the following, we adopt the notation \(\Policy_i\circ\Isom:= \Policy_i(\Isom\cdot\mid\Isom\cdot)\) and \(\mathcal{Z}_i\circ \Isom:= \{\Policy_i\circ\Isom\mid\Policy_i\in\mathcal{Z}_i\}\). Note that \begin{multline}\label{eq:29}(\Isom^*)^{-1}(\mathcal{Z}) =\{\Policy\mid \Isom^*\Policy\in\mathcal{Z}\} =\{\Policy\mid \forall i\in\PlayerSet^\DecPSecond\colon \Policy_{\Isom^{-1}i}\circ\Isom^{-1}\in\mathcal{Z}_i)\} \\ =\{\Policy\mid \forall j\in\PlayerSet^\DecP\colon \Policy_{j}\in\mathcal{Z}_{\Isom j}\circ\Isom\} =\prod_{j\in\PlayerSet^\DecP}\mathcal{Z}_{\Isom j}\circ\Isom. \end{multline} Hence, using in (i) that \(\Mixture\) has independent local policies, it follows that \begin{multline}\label{eq:30}(\Isom^*\Mixture)(\mathcal{Z}) \overset{(\ref{eq:29})}{=}\Mixture(\prod_{j\in\PlayerSet^\DecP}\mathcal{Z}_{\Isom j}\circ \Isom) \overset{\text{(i)}}{=}\prod_{j\in\PlayerSet^\DecP}\Mixture_j(\mathcal{Z}_{\Isom j}\circ \Isom) =\prod_{i\in\PlayerSet^\DecPSecond}\Mixture_{\Isom^{-1}i}(\mathcal{Z}_{i}\circ \Isom). \end{multline} Now let \(i\in\PlayerSet^\DecPSecond\) arbitrary. With the choice of \(\hat{\mathcal{Z}}_i:=\mathcal{Z}_i\) and \(\hat{\mathcal{Z}}_k:=\PolicySet_k^\DecPSecond\) for all \(k\in\PlayerSet^\DecPSecond\setminus\{i\}\), it is \begin{multline}\label{eq:70}(\Isom^*\Mixture)_i(\mathcal{Z}_i) \\ \overset{(\ref{eq:30})}{=} \prod_{k\in\PlayerSet^\DecPSecond}\Mixture_{\Isom^{-1}k}(\hat{\mathcal{Z}}_{k}\circ \Isom) =\Mixture_{\Isom^{-1}i}(\mathcal{Z}_{i}\circ \Isom) \(\Mixture_{\Isom^{-1}k}(\hat{\mathcal{Z}}_{k}\circ \Isom)=\Mixture_{\Isom^{-1}k}(\PolicySet_{ k}\circ\Isom)=\Mixture_{\Isom^{-1}k}(\PolicySet_{\Isom^{-1} k})=1\) for any \(k\in\PlayerSet^\DecPSecond\setminus\{i\}\). 
Since \(i\) was arbitrary, this shows that \(\Isom^*\Mixture(\mathcal{Z})=\prod_{i\in\PlayerSet^\DecPSecond}(\Isom^*\Mixture)_i(\mathcal{Z}_i)\). Since the sets \(\mathcal{Z}_i\) were arbitrary and the Cartesian products of these sets are a \(\pi\)-system and generate the product \(\sigma\)-Algebra \(\mathcal{F}\), this shows that \(\Isom^*\Mixture\) has independent local policies. Next, let \(i\in\PlayerSet^\DecPSecond\), \(t\in\{0,\dotsc,\Tmax\}\) and \(\AOHistory_{i,t}\in\AOHistorySet_{i,t}^\DecPSecond\). Note that \(\Proj_i(\Isom^*\Policy^\Mixture)=\Policy_{\Isom^{-1}i}^{\Mixture_{\Isom^{-1}i}}\circ\Isom^{-1}\) and \( \Proj_i(\Policy^{\Isom^*\Mixture}) Hence, it remains to prove that \(\Policy_{\Isom^{-1}i}^{\Mixture_{\Isom^{-1}i}}\circ \Isom^{-1}=\Policy_i^{(\Isom^*\Mixture)_i}\). To that end, let \(j\in\PlayerSet^\DecP\) such that \(\Isom j= i\). By Theorem <ref>, it is \begin{equation} \label{equation-histories-pullbacks} \Prob_{\Policy}(\AOHistoryRV_{j, t} =\Isom^{-1} \AOHistory_{i,t}) \Prob_{\Isom^*\Policy}(\AOHistoryRV_{ i,t}=\AOHistory_{i,t}) \end{equation} for any \(\Policy\in \PolicySet^\DecP\). Letting, \(\Mixture_{-j}\in\Delta(\PolicySet_{-j}^\DecP)\) with independent local policies arbitrary and defining \(\Mixture:=\Mixture_{i}\otimes\Mixture_{-i}\), this means that also \begin{multline} \label{eq:31} \Prob_\Mixture(\AOHistoryRV_{ j, t} =\Isom^{-1} \AOHistory_{i,t}) \overset{\text{(ii)}}{=}\E_\Mixture\left[ \Prob_\PolicyLatent(\AOHistoryRV_{ j, t} = \Isom^{-1}\AOHistory_{i,t}) \right] \Prob_{\Isom^*\PolicyLatent}(\AOHistoryRV_{ i, t} = \AOHistory_{i,t}) \right] \\ \Prob_{\PolicyLatent}(\AOHistoryRV_{i, t} = \AOHistory_{i,t}) \right] \overset{\text{(ii)}}{=}\Prob_{\Isom^*\Mixture}(\AOHistoryRV_{ i, t} = \AOHistory_{i,t}), \end{multline} where we use Equation <ref> in (ii). Since \(\Isom^*\) is a bijection on the space of policies, it is \[\{\Isom^*(\Mixture_j\otimes\Mixture_{-j})\mid \Mixture_{-j}\in\Delta(\PolicySet_{-j}^\DecP)\}=\{(\Isom^*\Mixture)_i\otimes\MixtureHat_{-i}\mid\MixtureHat_{-i}\in\Delta(\PolicySet_{-i}^\DecPSecond)\}.\] Hence, (<ref>) implies that there is some \(\Mixture_{-j}\in\Delta(\PolicySet_{-j}^\DecP)\) with independent local policies such that \(\Prob_{\Mixture_j\otimes\Mixture_{-j}}(\AOHistoryRV_{j, t} =\Isom^{-1} \AOHistory_{i,t})>0\) if and only if there is \(\MixtureHat_{-i}\in\Delta(\PolicySet_{-i}^\DecPSecond)\) with independent local policies such that \(\Prob_{(\Isom^*\Mixture)_i\otimes \MixtureHat_{-i}}(\AOHistoryRV_{i, t} =\AOHistory_{i,t})>0\). Using this fact, it suffices to distinguish the two cases where such a distribution does exist and where it does not exist. First, assume that it does not exist. Then by Definition <ref>, both \(\Policy_{j}^{\Mixture_j}(\cdot\mid \Isom^{-1} \AOHistory_{i,t})\) and \(\Policy_i^{(\Isom^*\Mixture)_i}(\cdot\mid\AOHistory_{i,t})\) are uniform distributions. Since \(\Isom^{-1}\) is a bijection on \(\ActionSet_i^\DecPSecond\), also \(\Policy_{j}^{\Mixture_{j}}(\Isom^{-1}\cdot \mid \Isom^{-1}\AOHistory_{i,t})\) is a uniform distribution, and hence \[\Policy_{j}^{\Mixture_{j}}(\Isom^{-1}\cdot \mid \Isom^{-1}\AOHistory_{i,t})=\Policy_i^{(\Isom^*\Mixture)_i}(\cdot\mid \AOHistory_{i,t}). \] Second, consider the case in which a distribution \(\Mixture_{-j}\in\Delta(\PolicySet_{-j}^\DecP)\) with independent local policies exists such that \[\Prob_{\Mixture_j\otimes\Mixture_{-j}}(\AOHistoryRV_{j, t} =\Isom^{-1} \AOHistory_{i,t})>0,\] and define \(\Mixture:=\Mixture_j\otimes\Mixture_{-j}\). 
Then for \(\Action_i\in\ActionSet^\DecPSecond_i\), it is \begin{align} \label{eq:33a} \Policy^{\Mixture_j}_{j}(\Isom^{-1} \Action_i\mid \Isom^{-1}\AOHistory_{i,t}) &= \Prob_\Mixture(\ActionRV_{j,t}=\Isom^{-1} \Action_i\mid \AOHistoryRV_{j,t}=\Isom^{-1}\AOHistory_{i,t}) \\ &= \E_\Mixture[\Prob_{\Mixture}(\ActionRV_{j,t}=\Isom^{-1} a_i\mid\AOHistoryRV_{j, t},Z)\mid \AOHistoryRV_{ j,t}=\Isom^{-1}\AOHistory_{i,t}] \\&= \E_\Mixture[Z_{j}(\Isom^{-1} \Action_i\mid\Isom^{-1}\AOHistory_{i,t})\mid \AOHistoryRV_{j,t}=\Isom^{-1}\AOHistory_{i,t}] \\ &=\frac{\int_{\PolicySet^\DecP}\int_{\Omega}(\Isom^*Z)_{i}( \Action_i\mid\AOHistory_{i,t})\mathds{1}_{\AOHistoryRV_{ j,t}={\Isom^{-1}}\AOHistory_{i,t}}\mathrm{d}\Prob_{Z}\mathrm{d}\Mixture \int_{\PolicySet^\DecP}\int_{\Omega}\mathds{1}_{\AOHistoryRV_{j,t}={\Isom^{-1}}\AOHistory_{i,t}} \mathrm{d}\Prob_{Z}\mathrm{d}\Mixture \\ \frac{\int_{\PolicySet^\DecP}(\Isom^*Z)_{ i}( \Action_i\mid\AOHistory_{i,t})\Prob_Z({\AOHistoryRV_{ j,t}={\Isom^{-1}}\AOHistory_{i,t}})\mathrm{d}\Mixture \int_{\PolicySet^\DecP}\Prob_Z(\AOHistoryRV_{j,t}={\Isom^{-1}}\AOHistory_{i,t})\mathrm{d}\Mixture \\ \frac{\int_{\PolicySet^\DecP}(\Isom^*Z)_{ i}( \Action_i\mid\AOHistory_{i,t})\Prob_{\Isom^*Z}({\AOHistoryRV_{i,t}=\AOHistory_{i,t}})\mathrm{d}\Mixture \int_{\PolicySet^\DecP}\Prob_{\Isom^*Z}(\AOHistoryRV_{ i,t}=\AOHistory_{i,t})\mathrm{d}\Mixture \\ \frac{\int_{\PolicySet^\DecPSecond}Z_{i}( \Action_i\mid\AOHistory_{i,t})\Prob_{Z}({\AOHistoryRV_{i,t}=\AOHistory_{i,t}})\mathrm{d}\Mixture\circ(\Isom^*)^{-1} \int_{\PolicySet^\DecPSecond}\Prob_{Z}(\AOHistoryRV_{ i,t}=\AOHistory_{i,t})\mathrm{d}\Mixture\circ(\Isom^*)^{-1} \\ = \E_{\Isom^*\Mixture}[Z_{i}( \Action_i\mid\AOHistory_{i,t})\mid \AOHistoryRV_{ i,t}=\AOHistory_{i,t}] \\ &= \Prob_{\Isom^*\Mixture}(\ActionRV_{i,t}= \Action_i\mid \AOHistoryRV_{ i,t}=\AOHistory_{i,t}) \\\label{eq:33c} &=\Policy_i^{(\Isom^*\Mixture)_i}(a_i\mid \AOHistory_{i,t}), \end{align} where we have used Definition <ref> in (<ref>) and (<ref>), and Theorem <ref> in (<ref>). This concludes the second case and thus the proof. note to myself: I could include a bit more explanation here, e.g. using "Equation <ref> and the tower property" etc. But not super important. low prio §.§ The other-play distribution and the symmetrizer Using the idea of a policy corresponding to a distributions over policies introduced above, we can now define a policy corresponding to the distribution over policies used in the OP objective. In the following, fix again a Dec-POMDP \(\DecP\). Let \(\Policy\in\PolicySet\). We define the OP distribution of \(\Policy\) as the distribution \begin{equation} \label{equation-other-play-mixture} \Mixture:=|\Aut(\DecP)|^{-N}\sum_{\AutProfile\in\Aut(\DecP)^\PlayerSet}\delta_{\AutProfile^*\Policy}, \end{equation} where \(\delta\) is the Dirac measure, i.e., for any measurable set \(\mathcal{Z}\subseteq\PolicySet\), it is \[\delta_{\AutProfile^*\Policy}(\mathcal{Z})= \begin{cases}1&\text{if }\AutProfile^*\Policy\in \mathcal{Z} \\ 0&\text{ otherwise.} \end{cases} \] Intuitively, agent \(i\) chooses one of the automorphisms \(\AutProfile_i\in\Aut(\DecP)\) uniformly at random in the beginning of an episode and then follows the local policy \(\Proj_i(\AutProfile_i^*\Policy)\). It can easily be shown that this distribution has independent local policies. Let \(\Policy\in\PolicySet\) and let \(\Mixture\) be the OP distribution of \(\Policy\). Then \(\Mixture\) has independent local policies. 
Let \(\mathcal{Z}_i\subseteq\PolicySet_i\) measurable for \(i\in\PlayerSet\) and let \(\mathcal{Z}:=\prod_{i\in\PlayerSet}\mathcal{Z}_i\). Note that for any \(i\in\PlayerSet\) and \(\AutProfile\in\Aut(\DecP)^\PlayerSet\), it is \begin{equation}\label{eq:300} \delta_{\AutProfile^*\Policy}(\Proj_i^{-1}(\mathcal{Z}_i))=\prod_{j\in\PlayerSet\setminus\{i\}}\delta_{\Proj_j(\AutProfile^*_j\Policy)}(\PolicySet_j)\delta_{\Proj_i(\AutProfile^*_i\Policy)}(\mathcal{Z}_i)=\delta_{\Proj_i(\AutProfile^*_i\Policy)}(\mathcal{Z}_i).\end{equation} Hence, it follows that \begin{multline} \Mixture(\mathcal{Z}) \\ \\ \overset{(\ref{eq:300})}{=} \prod_{i\in\PlayerSet}|\Aut(\DecP)|^{-N}\sum_{\AutProfile\in\Aut(\DecP)^\PlayerSet}\delta_{\AutProfile^*\Policy}(\Proj_i^{-1}(\mathcal{Z}_i)) This shows that \(\Mixture=\otimes_{i\in\PlayerSet}\Mixture_i\). Using the OP distribution of a policy, we can define the symmetrizer \(\Psi^\DecP\) for \(\DecP\), which maps a policy \(\Policy\) to a policy \(\Psi^\DecP(\Policy)\) that corresponds to the OP distribution of \(\Policy\). If it is clear which Dec-POMDP is considered, we also write \(\Psi(\Policy)\). We define the symmetrizer for the Dec-POMDP \(\DecP\) as the map \(\Psi^\DecP\colon\PolicySet^\DecP\rightarrow\PolicySet^\DecP\) such that for any policy \(\Policy\in\PolicySet\) and OP distribution \(\Mixture\) of \(\Policy\), it is \[\Psi^\DecP(\Policy):=\Policy^\Mixture.\] It is clear that if a policy is already invariant to automorphism, then \(\Policy_i=\Psi_i(\Policy)\) for \(i\in\PlayerSet\), excluding action-observation histories that can never be reached under \(\Policy_i\). We formulate a slightly weaker proposition below, which is easier to prove. Let \(\Policy\in\PolicySet\) be invariant to automorphism, and assume that, for all \(t\in\{0,\dotsc,\Tmax\}\) and \(\AOHistory_{i,t}\in\AOHistorySet_{i,t}\), it is \(\Prob_\Policy(\AOHistoryRV_{i,t}=\AOHistory_{i,t})>0\). Then it is \(\Psi(\Policy)=\Policy\). Let \(\Mixture\) be the OP distribution of \(\Policy\). Since \(\Policy\) is invariant to automorphism, it is \[\Mixture=|\Aut(\DecP)|^{-N}\sum_{\AutProfile\in\Aut(\DecP)^\PlayerSet}\delta_{\AutProfile^*\Policy}=\delta_{\Policy}.\] Now let \(i\in\PlayerSet\), \(t\in\{0,\dotsc,\Tmax\}\), \(\Action_i\in\ActionSet_i\) and \(\AOHistory_{i,t}\in\AOHistorySet_{i,t}\) arbitrary. Using Equation <ref>, it follows that \[\Prob_{\delta_\Policy}(\AOHistoryRV_{i,t}=\AOHistory_{i,t}) \overset{(\ref{eq:equation-tower-prop-z})}{=} \E_{\delta_\Policy}\left[\Prob_Z(\AOHistoryRV_{i,t}=\AOHistory_{i,t})\right] \begin{multline} \label{eq:40} \Prob_{\delta_\Policy}(\ActionRV_{i,t}=\Action_i\mid\AOHistoryRV_{i,t}=\AOHistory_{i,t}) \overset{(\ref{eq:equation-tower-prop-z})}{=} \E_{\delta_\Policy}\left[ \Prob_Z(\ActionRV_{i,t}=\Action_i\mid\AOHistoryRV_{i,t})\mid \AOHistoryRV_{i,t}=\AOHistory_{i,t} \right] \\ \end{multline} Hence, we can apply Definition <ref> and conclude that \begin{multline}\Psi_i(\Policy)(\Action_i\mid\AOHistory_{i,t})=\Policy^{\Mixture_i}_i(\Action_i\mid\AOHistory_{i,t}) \overset{\text{Definition~\ref{policy-mixture}}}{=}\Prob_{\delta_\Policy}(\ActionRV_{i,t}=\Action_i\mid\AOHistoryRV_{i,t}=\AOHistory_{i,t}) \\\overset{(\ref{eq:40})}{=}\Prob_{\Policy}(\ActionRV_{i,t}=\Action_i\mid\AOHistoryRV_{i,t}=\AOHistory_{i,t})=\Policy_i(\Action_i\mid\AOHistory_{i,t}).\end{multline} It will be helpful to refer to policies as equivalent if they have the same image under \(\Psi\). Let \(\Policy,\Policy'\in\PolicySet^\DecP\). 
We say that \(\Policy\) and \(\Policy'\) are equivalent, denoted as \(\Policy\equiv_\DecP\Policy\), if \(\Psi^\DecP(\Policy)=\Psi^\DecP(\Policy')\). Moreover, we write \([\Policy]:=\{\Policy'\mid \Policy'\equiv_D\Policy\}\) for the equivalence class of \(\Policy\). It is clear that \(\equiv_\DecP\) is an equivalence relation, since it is induced by the function \(\Psi^\DecP\). It follows that under \(\equiv_\DecP\), \(\PolicySet^\DecP\) decomposes into a partition of equivalence classes, denoted by \(\faktor{\PolicySet^\DecP}{\equiv_\DecP}\). Applying Lemma <ref> to the symmetrizer in particular, we can show that it commutes with isomorphisms, and that the policy \(\Psi(\Policy)\) is invariant to automorphism. Let \(\Isom\in\Iso(\DecP,\DecPSecond)\) and \(\Policy\in\PolicySet^\DecP\). Then it is \[\Isom^*\Psi^\DecP(\Policy)=\Psi^{\DecPSecond}(\Isom^*\Policy).\] If \(\DecPSecond=\DecP\), then \[\Isom^*\Psi^\DecP(\Policy)=\Psi^{\DecP}(\Policy),\] i.e., \(\Psi^\DecP(\Policy)\) is invariant to automorphism. Let \(\Mixture\) be the other-play distribution of \(\Policy\) and \(\MixtureHat\) the other-play distribution corresponding to \(\Isom^*\Policy\). Then, using the associativity of function composition and pushforward proven in Lemma <ref>, and using the “in particular” part of Lemma <ref>, it follows that \begin{multline}\label{equation-image-measure-pull-back} \MixtureHat \\ \overset{\text{Lemma~\ref{decompose-isomorphisms}}}{=} \\ |\Aut(\DecP)|^{-N}\sum_{\AutProfile\in\Aut(\DecP)^\PlayerSet}\delta_{\AutProfile^*\Policy}\circ (\Isom^*)^{-1} =\Mixture\circ (\Isom^*)^{-1}. \end{multline} Thus, using Lemma <ref>, it is \begin{equation}\label{equation-psi-isom-commute} \Isom^*\Psi^\DecP(\Policy) \overset{\text{Lemma~\ref{lemma-isomorphism-mixture-commute}}}{=} \Policy^{\Isom^*\Mixture} \overset{(\ref{equation-image-measure-pull-back})}{=} \Policy^{\MixtureHat} \end{equation} Finally, assume \(\DecPSecond=\DecP\). Then \(\Isom\) is an automorphism and \(\Aut(\DecPSecond)=\Aut(\DecP)=\Aut(\DecP)\circ\Isom\) by Lemma <ref>. Hence, \begin{multline}\label{equation-image-measure-pull-back-auto} \Mixture\circ (\Isom^*)^{-1} |\Aut(\DecP)|^{-N}\sum_{\AutProfile\in\Aut(\DecP)^\PlayerSet}\delta_{\AutProfile^*\Policy}\circ (\Isom^*)^{-1} \\ \\ \overset{\text{Lemma~\ref{decompose-isomorphisms}}}{=} \end{multline} By (<ref>), it follows that \[\Isom^*\Psi^\DecP(\Policy) \overset{(\ref{equation-psi-isom-commute})}{=} \Policy^{\Mixture\circ(\Isom^*)^{-1}} \overset{(\ref{equation-image-measure-pull-back-auto})}{=} \Policy^\Mixture=\Psi^\DecP(\Policy), \] which concludes the proof. A direct corollary is that we can define the pushforward purely in terms of equivalence classes of policies. This will also be useful later. Let \(\DecP,\DecPSecond\) be isomorphic Dec-POMDPs with \(\Isom\in\Iso(\DecP,\DecPSecond)\). Let \([\Policy]\in\faktor{\PolicySet^\DecP}{\equiv_\DecP}\). We define the pushforward equivalence class \(\Isom^*[\Policy]\in\faktor{\PolicySet_{\DecPSecond}}{\equiv_{\DecPSecond}}\) via \[f^*[\Policy]:=[f^*\Policy].\] The following corollary show that this is well-defined, that the pushforward of an equivalence class does not depend on the particular chosen isomorphism, and that it is compatible with function composition. (i) The pushforward of an equivalence class is well-defined, i.e., for any \(\Policy,\Policy'\in\PolicySet^\DecP\) such that \(\Policy\equiv_\DecP\Policy'\), it is \([\Isom^*\Policy]=[\Isom^*\Policy']\). 
(ii)Any two isomorphisms \(\Isom,\Isom'\in\Iso(\DecP,\DecPSecond)\) induce the same pushforward. (iii)Analogous results to those in Lemma <ref> apply to the pushforward of equivalence classes. First, let \(\Policy\equiv_\DecP\Policy'\in\PolicySet^\DecP\) and \(\Isom\in\Iso(\DecP,\DecPSecond)\) arbitrary. Then, using Corollary <ref> and the definition of \(\equiv_\DecP\), it is \[\Psi^{\DecPSecond}(\Isom^*\Policy)=\Isom^*\Psi^\DecP(\Policy)=\Isom^*\Psi^\DecP(\Policy')=\Psi^{\DecPSecond}(\Isom^*\Policy').\] Thus, \([\Isom^*\Policy]=[\Isom^*\Policy']\), which proves the first part. Second, let \(\Isom,\IsomSecond\in \Iso(\DecP,\DecPSecond)\) and \(\Policy\in\PolicySet^\DecP\) arbitrary. By Lemma <ref>, there then exists \(\Auto\in\Aut(\DecPSecond)\) such that \(\IsomSecond=\Auto\circ\Isom\). Hence, using the second and first part of Corollary <ref> and Lemma <ref>, it is \begin{equation} \label{eq:340}\Psi^{\DecPSecond}(\Isom^*\Policy) \overset{\text{Corollary~\ref{corollary-psi-isom-commute}}}{=} \Auto^*\Psi^{\DecPSecond}(\Isom^*\Policy) \overset{\text{Corollary~\ref{corollary-psi-isom-commute}}}{=} \Psi^{\DecPSecond}(\Auto^*(\Isom^*\Policy)) \overset{\text{Lemma~\ref{lemma-pull-back-function-composition-compatible}}}{=} \Psi^{\DecPSecond}((\Auto\circ\Isom)^*\Policy) Finally, it follows that \[\Isom^*[\Policy]=[\Isom^*\Policy]\overset{\ref{eq:340}}{=}[\IsomSecond^*\Policy]=\IsomSecond^*[\Policy], \] which concludes the second part. The third part follows directly from Lemma <ref> by using the definition of the pushforward of equivalence classes. In the following, we say that two equivalence classes \([\Policy],[\Policy']\) for \(\Policy\in\PolicySet^\DecP,\Policy'\in\PolicySet^\DecPSecond\) correspond to each other if there exists an isomorphism \(\Isom\in\Iso(\DecP,\DecPSecond)\) such that \(\Isom^*[\Policy]=[\Policy']\). In that case, in a slight abuse of the terms, we also say that \(\Policy\) and \(\Policy'\) are equivalent, extending the equivalence between policies defined above to policies for different Dec-POMDPs. Using Corollary <ref>, one can see that two policies \(\Policy,\Policy'\in\PolicySet^\DecP\) are equivalent in the sense that \([\Policy]=[\Policy']\) if and only if there exists an isomorphism \(\Isom\in\Iso(\DecP,\DecP)\) such that \(\Isom^*[\Policy]=[\Policy']\), so this extended notion is equivalent to the old one for two policies \(\Policy,\Policy'\in\PolicySet^\DecP\). We continue to reserve the notation \([\Policy]\) and \(\equiv\) for policies from the same Dec-POMDP. §.§ Main characterizations Having defined the symmetrizer \(\Psi\), we can now characterize the OP objective as transforming a policy \(\Policy\) into an invariant policy \(\Psi(\Policy)\) and evaluating the expected return of that policy. This means that we can “pass to the quotient” and consider the OP objective as a map \(\tilde{J}_\OP\) of equivalence classes, \(\tilde{J}_\OP([\Policy]):=J_\OP(\Policy)\) for \([\Policy]\in\faktor{\PolicySet}{\equiv}\), using the equivalence relation on policies introduced above. The result is essentially a rigorous version of hu2020other's Proposition 1 in our setup. It will help us later to analyze the OP-optimal policies in a given example, as it implies that we can restrict ourselves to considering representatives \(\Psi(\Policy)\) of equivalence classes. Let \(\DecP\) be a Dec-POMDP, let \(\Policy\in\PolicySet^\DecP\), and let \(\Psi\) be the symmetrizer for \(\DecP\). 
Then \(\Psi(\Policy)\) is invariant to automorphism, and it is
\begin{equation}\label{eq:350}
J_\OP(\Policy)=J(\Psi(\Policy)).
\end{equation}
In particular, we can consider the OP objective as a function of equivalence classes \([\Policy]\in\faktor{\PolicySet^\DecP}{\equiv_\DecP}\), and if there exists an optimal policy for the OP objective, then there also exists an optimal policy that is invariant to automorphism.

In the following, fix a Dec-POMDP \(\DecP\). Let \(\Policy\in\PolicySet\) and let \(\Mixture\) be the OP distribution of \(\Policy\), such that \(\Psi(\Policy)=\Policy^\Mixture\). Then by the second part of Corollary <ref>, \(\Psi(\Policy)\) is invariant to automorphism. Moreover, using Proposition <ref>, it is
\begin{multline}\label{eq:34}
J_\OP(\Policy)=\E_{\AutProfile\sim \U(\Aut(\DecP)^\PlayerSet)}\left[J(\AutProfile^*\Policy) \right]
= \sum_{\AutProfile\in\Aut(\DecP)^\PlayerSet}|\Aut(\DecP)|^{-N}\int_{\Policy'\in\PolicySet}J(\Policy') \mathrm{d}\left(\delta_{\AutProfile^*\Policy}\right) \\
= \int_{\Policy'\in\PolicySet}J(\Policy')\mathrm{d}\left(\sum_{\AutProfile\in\Aut(\DecP)^\PlayerSet}|\Aut(\DecP)|^{-N}\delta_{\AutProfile^*\Policy}\right)
\overset{(\ref{equation-other-play-mixture})}{=} \int_{\Policy'\in\PolicySet}J(\Policy')\mathrm{d}\Mixture
\overset{(\ref{expected-return-mixture})}{=}J(\Mixture)
\overset{\text{Proposition~\ref{mixture-lemma}}}{=}J(\Policy^\Mixture)=J(\Psi(\Policy)),
\end{multline}
which proves Equation <ref>. Turning to the “in particular” statement, let \(\Policy'\equiv\Policy\) for a second policy \(\Policy'\in\PolicySet\). By Definition <ref>, this means that \(\Psi(\Policy')=\Psi(\Policy)\). Hence, by Equation <ref>, it follows that \(J_\OP(\Policy)=J_\OP(\Policy')\), which shows that the function \(\tilde{J}_{\OP}\colon\faktor{\PolicySet}{\equiv}\rightarrow\mathbb{R},[\Policy]\mapsto J_\OP(\Policy)\) is well-defined. Lastly, assume that there is \(\Policy\in\argmax_{\Policy'\in\PolicySet}J_\OP(\Policy')\) (see Remark <ref> regarding the existence of such a policy). Then by Equation <ref>, it is also \(\Psi(\Policy)\in\argmax_{\Policy'\in\PolicySet}J_\OP(\Policy')\), so \(\Psi(\Policy)\) is an OP-optimal policy that is invariant to automorphism.

As a corollary, we can show that isomorphisms do not affect the OP value of a policy (we already know this about SP from Corollary <ref>). In the following, we define \(\PolicySet_\OP^\DecP:=\argmax_{\Policy\in\PolicySet^\DecP}J^\DecP_\OP(\Policy)\) for a Dec-POMDP \(\DecP\).

Let \(\DecP\), \(\DecPSecond\) be isomorphic Dec-POMDPs with \(\Isom\in\Iso(\DecP,\DecPSecond)\), and let \(\Policy\in\PolicySet^\DecP\). Then it is \[J_\OP^{\DecP}(\Policy)=J_\OP^\DecPSecond(\Isom^*\Policy).\] In particular, if \(\Policy\in\PolicySet_\OP^\DecP\), then also \(\Isom^*\Policy\in\PolicySet_\OP^{\DecPSecond}\).

Using Theorem <ref>, Corollary <ref>, and Corollary <ref>, it is
\begin{multline}\label{eq:35}
J_\OP^{\DecP}(\Policy)
\overset{\text{Theorem~\ref{thm-op-mixture}}}{=} J^\DecP(\Psi^\DecP(\Policy))
\overset{\text{Corollary~\ref{corollary-psi-isom-commute}}}{=} J^\DecP\left((\Isom^{-1})^*\Psi^{\DecPSecond}(\Isom^*\Policy)\right) \\
\overset{\text{Corollary~\ref{cor-sp-invariant-to-pullback}}}{=} J^{\DecPSecond}(\Psi^{\DecPSecond}(\Isom^*\Policy))
\overset{\text{Theorem~\ref{thm-op-mixture}}}{=} J_\OP^{\DecPSecond}(\Isom^*\Policy).
\end{multline}
Turning to the “in particular” statement, assume that \(\Policy\in\PolicySet^\DecP_\OP\). By Lemma <ref>, it is \(\Isom^{-1}\in\Iso(\DecPSecond,\DecP)\). Hence, for any \(\PolicyTilde\in\PolicySet^{\DecPSecond}\), it follows from the preceding that
\begin{equation}
J^{\DecPSecond}_\OP(\PolicyTilde)=J^{\DecP}_\OP((\Isom^{-1})^*\PolicyTilde)\leq J^{\DecP}_\OP(\Policy)=J^{\DecPSecond}_\OP(\Isom^*\Policy).
\end{equation}
This shows that \(\Isom^*\Policy\in \PolicySet^{\DecPSecond}_\OP\).
Finally, we turn to the connection between OP and the payoff in an LFC game. The following result will be helpful in both showing the inadequacy of OP and in proving that OP with tie-breaking is optimal. It shows that equivalent policies in \([\Policy]\in\faktor{\PolicySet}{\equiv}\) are all compatible when played against each other by different principals in the LFC game. The proof is based on Lemma <ref> and Proposition <ref>. Let \(\DecP\) be a Dec-POMDP, let \(\Psi\) be the symmetrizer for \(\DecP\), and define \(\DecPSet:=\{\Isom^*\DecP\mid \Isom\in\Sym(\DecP)\}\). Let \(\LAProfile_1,\dotsc,\LAProfile_N\in\LASet^\DecPSet\). For any \(\DecPSecond\in\DecPSet\), choose \(\Isom_{\DecP,\DecPSecond}\in\Iso(\DecP,\DecPSecond)\) arbitrarily. Then it is \begin{equation} U^\DecP(\LAProfile)=\E_{\DecP_i\sim U(\DecPSet),\,i\in\PlayerSet}\left[ \E_{\Policy^{(j)}\sim\Isom_{\DecP_j,\DecP}^*\LAProfile_j(\DecP_j),\,j\in\mathcal{N}} \left[ J^\DecP \left(\left(\Psi_k(\Policy^{(k)})\right)_{k\in\PlayerSet}\right) \right] \right]. \end{equation} First, consider arbitrary joint policies \(\Policy^{(1)},\dots,\Policy^{(N)}\in\PolicySet^\DecP\) and let \(\Mixture^{(i)}\) be the OP distribution of \(\Policy^{(i)}\), so that \(\Policy^{\Mixture^{(i)}}=\Psi(\Policy^{(i)})\) for \(i\in\PlayerSet\). Define the distribution \begin{equation}\label{eq:37} \MixtureHat(\Policy^{(1)},\dotsc,\Policy^{(N)}):=|\Aut(\DecP)|^{-N}\sum_{\AutProfile\in\Aut(\DecP)^\PlayerSet}\otimes_{i\in\PlayerSet}\delta_{\Proj_i(\AutProfile_i^*\Policy^{(i)})} \end{equation} as a function of \(\Policy^{(1)},\dotsc,\Policy^{(N)}\). It can easily be seen that \(\MixtureHat_(\Policy^{(1)},\dotsc,\Policy^{(N)})\in \Delta(\PolicySet^\DecP)\) and that it has independent local policies. Moreover, \(\MixtureHat(\Policy^{(1)},\dotsc,\Policy^{(N)})_i=\Mixture^{(i)}_i\), i.e., the marginal distribution for agent \(i\in\PlayerSet\) is equal in \(\MixtureHat(\Policy^{(1)},\dotsc,\Policy^{(N)})\) and \(\Mixture^{(i)}\). Hence, also the corresponding local policies are identical, that is, \begin{equation}\label{equation-mixture-hat-equal-psi} \Policy_i^{\MixtureHat(\Policy^{(1)},\dotsc,\Policy^{(N)})_i} \end{equation} for \(i\in\PlayerSet\). 
It follows that \begin{align}\label{eq:36} \\\label{eq:38a} \E_{\DecP_i\sim U(\DecPSet),\,i\in\PlayerSet}\left[ \E_{\Policy^{(j)}\sim\Isom_{\DecP_j,\DecP}^*\LAProfile_j(\DecP_j),\,j\in\mathcal{N}} \left[ \E_{\AutProfile\in\Aut(\DecP)^\PlayerSet}\left[ \\ \E_{\DecP_i\sim U(\DecPSet),\,i\in\PlayerSet}\Bigg[ \E_{\Policy^{(j)}\sim\Isom_{\DecP_j,\DecP}^*\LAProfile_j(\DecP_j),\,j\in\mathcal{N}} \Bigg[ \\&\quad\quad \int_{\Policy\in\PolicySet^\DecP} \right)\Bigg]\Bigg] \\ \E_{\DecP_i\sim U(\DecPSet),\,i\in\PlayerSet}\Bigg[ \E_{\Policy^{(j)}\sim\Isom_{\DecP_j,\DecP}^*\LAProfile_j(\DecP_j),\,j\in\mathcal{N}} \Bigg[ \\&\quad\quad\int_{\Policy\in\PolicySet^\DecP} \right)\Bigg]\Bigg] \\ \E_{\DecP_i\sim U(\DecPSet),\,i\in\PlayerSet}\left[ \E_{\Policy^{(j)}\sim\Isom_{\DecP_j,\DecP}^*\LAProfile_j(\DecP_j),\,j\in\mathcal{N}} \left[\int_{\Policy\in\PolicySet^\DecP} \\ \E_{\DecP_i\sim U(\DecPSet),\,i\in\PlayerSet}\left[ \E_{\Policy^{(j)}\sim\Isom_{\DecP_j,\DecP}^*\LAProfile_j(\DecP_j),\,j\in\mathcal{N}} \left[ \\\label{eq:38b} \E_{\DecP_i\sim U(\DecPSet),\,i\in\PlayerSet}\left[ \E_{\Policy^{(j)}\sim\Isom_{\DecP_j,\DecP}^*\LAProfile_j(\DecP_j),\,j\in\mathcal{N}} \left[ \\ \E_{\DecP_i\sim U(\DecPSet),\,i\in\PlayerSet}\left[ \E_{\Policy^{(j)}\sim\Isom_{\DecP_j,\DecP}^*\LAProfile_j(\DecP_j),\,j\in\mathcal{N}} \left[ \end{align} where we have used Lemma <ref> in (<ref>) and Proposition <ref> in (<ref>). This concludes the proof. note to self: I could change the way I explain this equation a bit, maybe add lemma etc to the equation signs. not super important Based on this result, the LFC game for \(\DecP\) can be understood in the following way. Principal \(i\in\PlayerSet\) observes a randomly relabeled problem \(\DecP_i\in\DecPSet\) and trains a joint policy \(\Policy^{(i)}\sim\LAProfile_i(\DecP_i)\) on this problem. The resulting policy \(\Policy^{(i)}\) is then translated back into a policy \(\Isom_{\DecP_i,\DecP}^*\Policy^{(i)}\) for the original problem, using any isomorphism \(\Isom_{\DecP_i,\DecP}\in\Iso(\DecP_i,\DecP)\). Finally, this joint policy is made invariant to automorphism by applying the symmetrizer, and agent \(i\) in the original problem \(\DecP\) is assigned the local policy \(\Psi_i(\Isom_{\DecP_i,\DecP}^*\Policy^{(i)})\). §.§ Other-play is not self-play in a different Dec-POMDP hu2020other show that the OP objective \(\tilde{J}^\DecP_\OP\) introduced by them can be understood of as the SP objective in a special Dec-POMDP. That is, for every Dec-POMDP \(\DecP\), there is a second Dec-POMDP \(\DecPSecond\) with \(\PolicySet^\DecPSecond=\PolicySet^\DecP\) such that for any \(\Policy\in\PolicySet^\DecP\), it is \(\tilde{J}^\DecP_\OP(\Policy)=\max_{\Policy'\in\PolicySet^\DecP}\tilde{J}^\DecP_\OP(\Policy')\) if and only if \(J^\DecPSecond(\Policy)=\max_{\Policy'\in\PolicySet^\DecPSecond}J^\DecPSecond(\Policy')\). Interestingly, when including player permutations, this is not the case anymore. We will prove this here, using the characterization of OP from the last section. Intuitively, if agents are symmetric, then under OP, they will always act according to the same local policy in the environment. In some Dec-POMDPs, this means that it is optimal for the agents to randomize their actions, to end up with different actions some of the time. For instance, consider the following game: There are two players with two actions \(\ActionSet_i:=\{\Action_{i,1},\Action_{i,2}\}\) for \(i=1,2\) each, and an episode lasts only one step, making this a simple normal-form game. 
Rewards for each joint action are displayed in Table <ref>.

\begin{table}
\centering
\begin{tabular}{c|cc}
 & \(a_{2,1}\) & \(a_{2,2}\) \\
\hline
\(a_{1,1}\) & \(-\frac{1}{2}\) & \(1\) \\
\(a_{1,2}\) & \(1\) & \(-1\)
\end{tabular}
\caption{Rewards for each joint action in Example <ref>.}
\end{table}

This example demonstrates that sometimes there does not exist a deterministic policy that is optimal under the OP objective.

In Example <ref>, for any deterministic policy \(\Policy\), it is \(J_\OP(\Policy)<\max_{\Policy'\in\PolicySet^\DecP}J_\OP(\Policy')\).

Let \(R\in\mathbb{R}^{2,2}\) denote a matrix containing the rewards as in Table <ref>. First, note that in this game, players are symmetric, but actions are not. Moreover, there are no observations and only one state. Hence, \(\Aut(\DecP)=\{\Auto,\Id\}\), where \(\Auto_N 1=2\), \(\Auto_N 2=1\), and \(\Id\) is the identity. Now consider any deterministic policy \(\Policy=(\Policy_1,\Policy_2)\), corresponding to two vectors of action-probabilities \[x,y\in \left\{\begin{bmatrix}1\\0\end{bmatrix},\begin{bmatrix}0\\1\end{bmatrix}\right\}\] for the two players. Due to the symmetry of both players, the OP distribution \(\Mixture\) of \(\Policy\) assigns each of the local policies \(\Policy_1,\Policy_2\) to either player with probability \(\frac{1}{2}\), so in \(\Psi(\Policy)=\Policy^\Mixture\), both players play the distribution \(z:=\frac{1}{2}x + \frac{1}{2}y\) and receive a reward of \[J_\OP(\Policy)\overset{\text{Theorem~\ref{thm-op-mixture}}}{=}J(\Psi(\Policy))=z^\top R z.\] It follows by the definition of \(x,y\) that \[z\in \left\{\begin{bmatrix}1\\0\end{bmatrix},\begin{bmatrix}0\\1\end{bmatrix},\begin{bmatrix}\frac{1}{2}\\\frac{1}{2}\end{bmatrix}\right\}.\] Among these three possibilities, \(z^\top Rz\) is maximized at \(z=[\frac{1}{2},\frac{1}{2}]^\top\), so for any deterministic policy it is
\begin{equation}\label{eq:510}
J_\OP(\Policy)=z^\top R z\leq\frac{1}{8}.
\end{equation}
Next, define \(x^*:=[\frac{4}{7},\frac{3}{7}]^\top\) and let \(\Policy^*\) be a policy such that \(\Policy^*_1=\Policy^*_2\) and the two action-probabilities of both local policies are given by the vector \(x^*\). Note that since \(\Auto^*\Policy^*=(\Policy^*_{\Auto i})_{i=1,2}=\Policy^*\), it follows from the above that \(\Policy^*\) is invariant to automorphism. It follows by Proposition <ref> that we can evaluate the OP value of \(\Policy^*\) by evaluating its expected return. That is,
\begin{equation}
J_\OP(\Policy^*)\overset{\text{Proposition~\ref{prop-invariant-policy-self-play-value-equals-other-play-value}}}{=}J(\Policy^*)=(x^*)^\top R x^*=-\frac{1}{2}\left(\frac{4}{7}\right)^2+2\cdot\frac{4}{7}\cdot\frac{3}{7} -\left(\frac{3}{7}\right)^2=\frac{1}{7}.
\end{equation}
It follows that for every deterministic policy \(\Policy\),
\begin{equation}
J_\OP(\Policy)\overset{(\ref{eq:510})}{\leq}\frac{1}{8}<\frac{1}{7}=J_\OP(\Policy^*)\leq \max_{\Policy'\in\PolicySet^\DecP}J_\OP(\Policy'),
\end{equation}
which concludes the proof.

The fact that there is no deterministic OP-optimal policy in Example <ref> stands in contrast to a canonical result about Dec-POMDPs.

In every Dec-POMDP \(\DecP\), there is a deterministic policy \(\Policy\in(\PolicySet^0)^\DecP\) such that \(J^\DecP(\Policy)=\max_{\Policy'\in\PolicySet^\DecP}J^\DecP(\Policy')\).

As a result, we can prove the following.

There exists a Dec-POMDP \(\DecP\) such that for any other Dec-POMDP \(\DecPSecond\) with \(\PolicySet^\DecPSecond=\PolicySet^\DecP\), there exists a policy \(\Policy\in\PolicySet^\DecPSecond\) that is optimal for the SP objective of \(\DecPSecond\), but not optimal for the OP objective of \(\DecP\).

Let \(\DecP\) be the Dec-POMDP as described in Example <ref>.
Assume, towards a contradiction, that there exists a Dec-POMDP \(\DecPSecond\) with \(\PolicySet^\DecPSecond=\PolicySet^\DecP\) such that any optimal policy in that Dec-POMDP is optimal under the OP objective of \(\DecP\). Then by Theorem <ref>, there exists a deterministic policy \(\Policy\in(\PolicySet^0)^\DecPSecond\) such that \(J^\DecPSecond(\Policy)\) is maximal. Hence, by the assumption, also \(J^\DecP_\OP(\Policy)\) is maximized. But by Lemma <ref>, it must be \(\max_{\PolicyTilde\in\PolicySet^\DecP}J^\DecP_\OP(\PolicyTilde)>J^\DecP_\OP(\Policy)\). This is a contradiction, which means that \(\DecP\) is an example of a Dec-POMDP that has the desired properties. This shows that to optimize the OP objective, we have to directly consider that objective and we cannot simply apply an RL algorithm to a different Dec-POMDP. Also, the fact that we need stochastic policies means that it is not immediately clear how to apply a Bellman equation to the objective. § OTHER-PLAY IS NOT OPTIMAL IN THE LABEL-FREE COORDINATION PROBLEM In this section, our goal is to prove a rigorous version of Theorem <ref> from the main text. Recall that in Appendix <ref>, we introduced the symmetrizer \(\Psi\colon\PolicySet\rightarrow\PolicySet\), which maps a joint policy \(\Policy\) to the policy corresponding to agents following randomly permuted local policies \((\AutProfile_i^*\Policy)_i\) where \(\AutProfile_i\sim\U(\Aut(\DecP))\). This represents the random permutations employed in the OP objective, and hence by Theorem <ref>, it is \(J_\OP(\Policy)=J(\Psi(\Policy))\), i.e., the OP value of \(\Policy\) is equal to the SP value of \(\Psi(\Policy)\). Moreover, we defined the equivalence classes \([\Policy]=\Psi^{-1}(\{\Psi(\Policy)\})\) of policies that get mapped to the same policy under \(\Psi\). As defined in Appendix <ref>, an OP learning algorithm is any learning algorithm such that the policies that it learns achieve optimal OP value in expectation. In particular, it can be a learning algorithm that learns different policies in different training runs, as long as it chooses OP-optimal policies with probability \(1\). As we have seen in Theorem <ref>, in an LFC game, it does not matter which policy from an equivalence class \([\Policy]\) is chosen. Unfortunately, though, there can also be different OP-optimal policies that are not equivalent. In this case, if an OP learning algorithm is not concentrated on only compatible policies, it is not optimal in the corresponding LFC problem.
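For concreteness, the two values computed in the example above, \(\frac{1}{8}\) as the largest OP value attainable by a deterministic policy and \(\frac{1}{7}\) for the symmetric stochastic policy \(\Policy^*\), are easy to verify numerically. The following is a minimal sketch; the use of numpy and all variable names are our own choices for illustration and are not part of the formal development.

```python
import numpy as np

# Reward matrix of the one-step game (rows: a_{1,1}, a_{1,2}; columns: a_{2,1}, a_{2,2}).
R = np.array([[-0.5, 1.0],
              [1.0, -1.0]])

def value(p1, p2):
    """Expected return when player 1 plays action distribution p1 and player 2 plays p2."""
    return p1 @ R @ p2

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# OP value of a deterministic policy (x, y): under the symmetrizer both players
# end up playing z = (x + y) / 2, so the OP value is z^T R z.
best_deterministic = max(
    value((x + y) / 2, (x + y) / 2) for x in (e1, e2) for y in (e1, e2)
)
print(best_deterministic)      # 0.125 = 1/8

# The symmetric stochastic policy with action probabilities (4/7, 3/7).
x_star = np.array([4 / 7, 3 / 7])
print(value(x_star, x_star))   # 0.14285714... = 1/7
```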
# The discrete flow category: structure and computation

Bjørnar Gullikstad Hem

###### Abstract.

In this article, we use concepts and methods from the theory of simplicial sets to study discrete Morse theory. We focus on the discrete flow category introduced by Vidit Nanda, and investigate its properties in the case where it is defined from a discrete Morse function on a regular CW complex. We design an algorithm to efficiently compute the Hom posets of the discrete flow category in this case. Furthermore, we show that in the special case where the discrete Morse function is defined on a simplicial complex, each Hom poset has the structure of a face poset of a regular CW complex. Finally, we prove that the spectral sequence associated to the double nerve of the discrete flow category collapses on page 2.

Email<EMAIL_ADDRESS>

###### Contents

1. 1 Introduction
2. 2 Preliminaries on simplicial sets
3. 3 Discrete Morse theory
4. 4 p-categories
5. 5 The discrete flow category
6. 6 Hom posets of discrete flow categories
7. 7 Simplicial collapse on simplicial sets
8. 8 Spectral sequences

## 1\. Introduction

Differentiable functions on smooth manifolds allow us to deduce properties of the manifolds’ topology through the use of Morse theory. For Morse theory to apply, the differentiable function has to satisfy the _Morse condition_. Such functions are called _Morse functions_, and they are in many ways abundant. A Morse function can be viewed as a “height function” on a manifold that allows you to decompose the manifold into smaller, more manageable parts. Morse theory has had vast applications, not only in topology, but also in other areas, such as in the study of dynamical systems [18].

The combinatorial counterpart of Morse theory, discrete Morse theory, was introduced by Robin Forman [6], and studies _discrete Morse functions_ on _simplicial complexes_. Like Morse theory, discrete Morse theory allows us to deduce information about the topology of a simplicial complex. As the name suggests, the data of a discrete Morse function is discrete, making discrete Morse theory suitable for computer applications. Discrete Morse theory has seen much use in computer applications in recent years, particularly in topological data analysis (TDA). As an example, in [17, pp. 13–14], Scoville explains how discrete Morse theory can be used to reduce the computational complexity of homology computations in TDA.

One of the main foundations for this article is a paper by Nanda [14] that introduces the _discrete flow category_, an analogue in discrete Morse theory of the _flow category_ of Cohen, Jones and Segal [4]. The flow category is a category consisting of the critical points and flow lines of a Morse function, whose classifying space captures the homotopy type of the manifold on which the Morse function is defined. Likewise, the discrete flow category consists of critical _cells_ and _gradient paths_ of a discrete Morse function, and has a classifying space with the homotopy type of the simplicial complex on which the discrete Morse function is defined. While the flow category is a _topological category_, meaning that the Hom sets are topological spaces, the discrete flow category is a _p-category_, meaning that the Hom sets are posets. Constructing the classifying space of either of these categories utilizes concepts from simplicial sets, inspiring the further study of discrete Morse theory from the perspective of simplicial sets.
Another inspiration for this article is a paper by Vaupel, Hermansen and Trygsland on section complexes of simplicial height functions [20]. In the paper, a bisimplicial set is constructed from a _height function_ on a simplicial set. This bisimplicial set gives a spectral sequence that can be used to compute the homology of the simplicial set, and it is shown that under certain conditions the spectral sequence collapses on the second page. In this article we explore Nanda’s discrete flow category (in the special case where it is defined from a discrete Morse function), and prove several properties of it. In particular, we show that in the case where the discrete Morse function is defined on a simplicial complex (not a general CW complex), then the Hom posets in the discrete flow category are CW posets, meaning that they have the structure of a face poset of a regular CW complex. We will refer to this as Theorem A. ###### Theorem A. Let $C$ be the discrete flow category of a discrete Morse function on a simplicial complex. Then the for all objects $w$ and $z$ in $C$, the poset $\operatorname{Hom}_{C}(w,z)^{\operatorname{op}}$ is the face poset of some regular CW complex. This result gives a simpler way of realizing the Hom posets as topological spaces: instead of taking the geometric realization of the nerve, one can take the corresponding regular CW complex. Furthermore, we construct an algorithm, Algorithm 1, to compute the Hom posets of the discrete flow category, where the input is a discrete Morse function defined on a regular CW complex. For the algorithm, we use several of the results needed for proving A. We provide a Python implementation for the algorithm, which can be found at https://github.com/bjornarhem/discrete_flow . Finally, we investigate the spectral sequence associated to the _double nerve_ of the discrete flow category, a bisimplicial set whose realization is the classifying space of the category. As in the paper by Vaupel et al. [20], this spectral sequence computes the homology of the regular CW complex from which we define the discrete flow category. Classifying spaces of p-categories are often complicated and hard to compute, and this is also the case for the discrete flow category. However, as we prove in this article, the spectral sequence associated to its double nerve has a particularly nice structure that causes it to collapse on page 2. We will refer to this as Theorem B. ###### Theorem B. The spectral sequence associated to the double nerve of a discrete flow category collapses on page 2. The definition of the mentioned spectral sequence will be made formal in the text (in particular, there are two possible choices). We also show how you can ignore degeneracies when computing the spectral sequence, which makes the computation relatively simple. Finally, we provide several examples of computing the spectral sequence, in which we make use of Algorithm 1 to compute the Hom sets. We also provide a Python implementation of the spectral sequence computation at https://github.com/bjornarhem/discrete_flow , that outputs the spectral sequence pages given any discrete Morse function. To prove B, we generalize the concept of _simplicial collapse_ to simplicial sets. We then introduce _unique factorization categories_ , a class of categories in which all morphisms have a unique decomposition into “indecomposable” morphisms. Finally, we apply our generalization of simplicial collapse to prove the following result. ###### Theorem C. 
Let $C$ be a unique factorization category. Then the nerve of $C$ deformation retracts to a simplicial set whose $n$–simplices are all degenerate for $n>1$. This theorem is then used in the proof of B. ### Outline Section 2 and 3 are dedicated to reviewing the theory of simplicial sets and of discrete Morse theory, respectively, which will be needed for the following sections. Section 4 contains the necessary preliminaries on p-categories, and Section 5 is a review of the discrete flow category of a discrete Morse function. In Section 6, we prove several results on the Hom posets of the discrete flow category, and conclude with constructing Algorithm 1 and proving A. In Section 7, we introduce simplicial collapse on simplicial sets and unique factorization categories, and prove C. Finally, in Section 8, we explain how the discrete flow category gives rise to a spectral sequence, and prove B. ### Acknowledgments This work is based on my master thesis, that I wrote for NTNU in Spring 2023. I would like to thank my master thesis supervisors, Marius Thaule and Melvin Vaupel, who have both been incredibly helpful and supportive. They have gone above and beyond, both in encouraging me to explore my ideas and in helping me seek answers to the countless questions I have asked them. Their feedback, which has consistently been both thorough and constructive, has been invaluable in this work. I would also like to thank my PhD supervisor, Kathryn Hess Bellwald, for providing feedback in the final steps of writing this article. ## 2\. Preliminaries on simplicial sets In this section we define our notation related to simplicial sets, and state some results that we will need. For a thorough exposition of simplicial sets, see e.g., [8] or [7]. A _simplicial set_ is a functor $\Delta^{\operatorname{op}}\to\mathsf{Set},$ where $\Delta$ is the simplex category and $\mathsf{Set}$ is the category of sets. A morphism of simplicial sets is a natural transformation between two such functors, and we denote the category of simplicial sets by $\mathsf{sSet}$. The $i$th face map is denoted by $d_{i}$, and the $i$th degeneracy map is denoted by $s_{i}$. For a simplicial set $X$, we denote its geometric realization by $|X|$. For a small category $C$, we denote its nerve by $\mathcal{N}(C)$. ### 2.1. Barycentric subdivision We here define what we mean by the barycentric subdivision of a regular CW complex. Recall that a regular CW complex is a CW complex where each attaching map is a homeomorphism onto its image. ###### Definition 2.1. Let $X$ be a regular CW complex. The barycentric subdivision of $X$, denoted $T(X)$, is a simplicial complex whose vertices are the cells of $X$ and whose $n$–simplices are sequences of distinct cells $(\sigma_{0},\dots,\sigma_{n})$ such that $\sigma_{0}\subseteq\dots\subseteq\sigma_{n}$. An example of a barycentric subdivision is given in Figure 1. Figure 1. The barycentric subdivision of the simplicial complex $\Delta^{2}$. ###### Theorem 2.2. If $X$ is a regular CW complex, then the geometric realization of the barycentric subdivision, $|T(x)|$, is homeomorphic to $X$. For a proof, see [10, pp. 80–81]. ### 2.2. Simplicial homology In this section we state some results on simplicial homology that will be needed in later sections. We get the following result by combining Theorem 2.1 and Theorem 2.4 in [8, Chapter III]. ###### Theorem 2.3. Let $C$ be the (unnormalized) simplicial chain complex associated to a simplicial set, and $D$ its subcomplex of degeneracies. 
Then, for all $n$ there is an isomorphism $H_{n}(C)\cong H_{n}(C/D),$ induced by the projection map $p\colon C\to C/D$.

The following corollary follows immediately from applying the long exact sequence of homology to the short exact sequence of chain complexes $0\to D\to C\to C/D\to 0$.

###### Corollary 2.4.

Let $C$ be the (unnormalized) simplicial chain complex associated to a simplicial set, and $D$ its subcomplex of degeneracies. Then $H_{n}(D)\cong 0$ for all $n$.

### 2.3. Bisimplicial sets

A _bisimplicial set_ is a functor $\Delta^{\operatorname{op}}\times\Delta^{\operatorname{op}}\to\mathsf{Set},$ or equivalently, $\Delta^{\operatorname{op}}\to\mathsf{sSet}.$ For a bisimplicial set $X$, the _diagonal_ of $X$, denoted $\operatorname{diag}X$, is the simplicial set given by:

* • $(\operatorname{diag}X)([n])=X([n],[n])$,
* • For $\theta\colon[m]\to[n]$, $(\operatorname{diag}X)(\theta)=X(\theta,\theta)$.

We define the _geometric realization_ of a bisimplicial set $X$, denoted $|X|$, as the realization of the diagonal, i.e., $|X|\coloneq|\operatorname{diag}X|$.

## 3\. Discrete Morse theory

In this section, we briefly summarize the core ideas and definitions from discrete Morse theory. In Morse theory, one studies certain differentiable functions, called _Morse functions_, on smooth manifolds. These Morse functions allow us to deduce information about the topology of the manifold. For more information about Morse theory, see e.g. [13]. Discrete Morse theory is a discrete analogue of this, where one studies real-valued functions on CW complexes. The functions assign a single real value to each cell of the CW complex, and hence we get a discrete set of values. We will only define discrete Morse functions for regular CW complexes. For the general definition, see [6]. Note also that one often considers only discrete Morse functions on simplicial complexes, which can be considered a special class of regular CW complexes.

###### Definition 3.1.

Let $X$ be a regular CW complex. A _discrete Morse function_ on $X$ is a function $f\colon X\to\mathbb{R}$ that assigns a real value to each cell of $X$ such that for each $k$–cell $x$, the following conditions hold.

1. (1) There is at most one $(k+1)$–cell $y$ such that $x\subseteq y$ and $f(x)\geq f(y)$.
2. (2) There is at most one $(k-1)$–cell $y$ such that $y\subseteq x$ and $f(y)\geq f(x)$.

###### Definition 3.2.

A $k$–cell $x$ is called _critical_ with respect to a discrete Morse function $f\colon X\to\mathbb{R}$ if the following conditions hold.

1. (1) There are no $(k+1)$–cells $y$ such that $x\subseteq y$ and $f(x)\geq f(y)$, and
2. (2) There are no $(k-1)$–cells $y$ such that $y\subseteq x$ and $f(y)\geq f(x)$.

Cells that are not critical are called _regular_. Note that for a discrete Morse function, it is impossible for both of these conditions to fail to hold for a single cell (this is often called the exclusion lemma) [6, Lemma 2.5]. This also implies that if $x\subseteq y$ and $\dim y>\dim x+1$, then $f(y)>f(x)$. An example of a discrete Morse function is given in Figure 2.

Figure 2. An example of a discrete Morse function. The critical simplices are underlined and colored red.

### 3.1. Gradient vector fields

In discrete Morse theory, the essential information in a discrete Morse function is the specification of those pairs of cells for which one of the conditions in 3.2 fails to hold: its _regular pairs_.
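These conditions are straightforward to check by machine. The following is a minimal sketch of how Definitions 3.1 and 3.2 and the regular pairs can be verified for a small simplicial complex; the complex, the function values, and all names are invented for illustration (they are not taken from Figure 2, and this is not the implementation referenced in the introduction).

```python
# A small illustrative example: the full triangle on vertices {0, 1, 2}, with
# cells encoded as frozensets of vertices and invented function values.
f = {
    frozenset({0}): 0, frozenset({1}): 1, frozenset({2}): 2,
    frozenset({0, 1}): 1, frozenset({0, 2}): 3, frozenset({1, 2}): 4,
    frozenset({0, 1, 2}): 4,
}
cells = list(f)

def cofacets(x):
    """Cells y with x contained in y and dim y = dim x + 1."""
    return [y for y in cells if x < y and len(y) == len(x) + 1]

def facets(x):
    """Cells y with y contained in x and dim y = dim x - 1."""
    return [y for y in cells if y < x and len(y) == len(x) - 1]

def is_discrete_morse(f):
    """Check conditions (1) and (2) of Definition 3.1 for every cell."""
    for x in cells:
        up = [y for y in cofacets(x) if f[x] >= f[y]]
        down = [y for y in facets(x) if f[y] >= f[x]]
        if len(up) > 1 or len(down) > 1:
            return False
    return True

def critical_cells(f):
    """Cells satisfying both conditions of Definition 3.2."""
    return [x for x in cells
            if not any(f[x] >= f[y] for y in cofacets(x))
            and not any(f[y] >= f[x] for y in facets(x))]

def regular_pairs(f):
    """Pairs (x, y) with x a facet of y and f(x) >= f(y)."""
    return [(set(x), set(y)) for x in cells for y in cofacets(x) if f[x] >= f[y]]

print(is_discrete_morse(f))   # True for the values above
print(critical_cells(f))      # the vertices {0}, {2} and the edge {0, 2}
print(regular_pairs(f))       # ({1}, {0, 1}) and ({1, 2}, {0, 1, 2})
```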
Therefore, instead of talking about a discrete Morse function $f$, we often talk about its induced _gradient vector field_ $V_{f}$. ###### Definition 3.3. Let $f\colon X\to\mathbb{R}$ be a discrete Morse function, let $x$ be a $k$–cell and let $y$ be a $(k+1)$–cell. Then $\\{x,y\\}$ is called a _regular pair_ if $x\subseteq y$ and $f(x)\geq f(y)$. ###### Definition 3.4. Let $f\colon X\to\mathbb{R}$ be a discrete Morse function. The _induced gradient vector field_ of $f$, denoted $V_{f}$, is defined as the set of _regular pairs_ , i.e., $V_{f}=\\{\\{x,y\\}:x\subsetneq y,f(x)\geq f(y)\\}.$ An example of a gradient vector field is given in Figure 3. Two discrete Morse functions that induce the same gradient vector field are said to be _Forman equivalent_. Figure 3. The gradient vector field for the discrete Morse function in Figure 2. Given a regular CW complex $X$, a _discrete vector field_ on $X$ is a set $V$ of mutually disjoint pairs $\\{x,y\\}$ such that $\dim y=\dim x+1$. Given a discrete vector field $V$, a _$V$ –path_ is a sequence of simplices $(x_{0},y_{0},x_{1},\dots,y_{m},x_{m+1})$ such that $x_{i}\in X_{k},y_{i}\in X_{k+1}$, $x_{i+1}\subseteq y_{i}$, $\\{x_{i},y_{i}\\}\in V$ and $x_{i+1}\neq x_{i}$. Discrete vector fields and gradient vector fields are related in the following way. ###### Theorem 3.5. [6, Theorem 9.3] A discrete vector field $V$ is a gradient vector field of some discrete Morse function if and only if there are no nontrivial $V$–paths that start and end on the same cell. ### 3.2. Simplicial collapse In this section, we restrict our attention to simplicial complexes, and define what is known as a _simplicial collapse_. A _free face_ in a simplicial complex is a $k$–simplex that is the face of exactly one $(k+1)$–simplex. If $x\in X^{k}$ is a free face, and $y$ is its $(k+1)$–dimensional coface, then $\\{x,y\\}$ is called a _free pair_. Removing a free pair from a simplicial complex is called an _elementary collapse_. A sequence of elementary collapses is called a _compound collapse_ or _simplicial collapse_. If the simplicial complex $Y$ can be produced from a simplicial complex $X$ through a compound collapse, we say that $X$ _collapses_ to $Y$ and write $X\searrow Y$. Of particular importance is the fact that elementary collapses, and as a consequence compound collapses, yield a deformation retract. That is, if $X\searrow Y$, then $X$ deformation retracts to $Y$. An example of an elementary collapse is given in Figure 4. Figure 4. An elementary collapse. ## 4\. p-categories A p-category is a category enriched in the category of posets. Thus, the Hom sets all admit a poset structure with respect to which composites is compatible, i.e., if $f\Rightarrow f^{\prime}$ and $g\Rightarrow g^{\prime}$ then $f\circ g\Rightarrow f^{\prime}\circ g^{\prime}$. Here we have used the symbol $\Rightarrow$ to denote a partial order relation between two morphisms, which we often will. A _strict p-functor_ $F\colon C\to D$ maps objects in $C$ to objects in $D$, and maps each Hom poset $\operatorname{Hom}_{C}(x,y)$ monotonically to $\operatorname{Hom}_{D}(F(x),F(y))$, such that 1. (1) $F(\operatorname{id}_{x})=\operatorname{id}_{F(x)}$ for all objects $x$ in $C$, and 2. (2) $F(f\circ g)=F(f)\circ F(g)$ for all composable $f$ and $g$. A p-category can be viewed as a special case of a strict 2–category, where the 2–morphisms are partial order relations between morphisms. One can easily verify that this satisfies all the axioms of a strict 2–category. 
In particular, for a p-category viewed as a 2–category, there can only be zero or one 2–morphisms between two 1–morphism (either there is a partial order relation, or there is not). There are several ways to define the classifying space of a p-category (and more generally, a 2–category), but they are all homotopy equivalent ([2], [3]). In the rest of this section, we define the _double nerve_ of a p-category and let its geometric realization define the classifying space of a p-category. ### 4.1. Simplicial categories A simplicial category is a simplicial object in $\mathsf{Cat}$, i.e., a functor $\Delta^{\operatorname{op}}\to\mathsf{Cat}.$ Composing such a functor $F\colon\Delta^{\operatorname{op}}\to\mathsf{Cat}$ with the nerve functor $\mathcal{N}\colon\mathsf{Cat}\to\mathsf{sSet}$ gives rise to a bisimplicial set $(\mathcal{N}\circ F)$. You can further take the diagonal of this bisimplicial set to get the simplicial set $\operatorname{diag}(\mathcal{N}\circ F)$. ###### Example 4.1. Consider the following simplicial category $\mathscr{S}$. * • $\mathscr{S}[0]$ is a category with two objects, $a$ and $b$, and two non- identity morphisms, $f\colon a\to b$ and $g\colon a\to b$. * • $\mathscr{S}[1]$ consists of degeneracies of $\mathscr{S}[0]$ and a single non-degenerate morphism $F\colon s_{0}a\to s_{0}b$, with $d_{1}F=f$ and $d_{0}F=g$. * • For $n\geq 2$, all objects and morphisms in $\mathscr{S}[n]$ are degenerate. Figure 5 illustrates the simplicial set $\operatorname{diag}(\mathcal{N}\circ\mathscr{S})$ (omitting unimportant degenerate simplices). Observe that the geometric realization is a suspension of $|\Delta^{1}|$, i.e., homeomorphic to a disk $D^{2}$. $a$$a$$b$$b$$s_{0}\operatorname{id}_{a}$$s_{0}\operatorname{id}_{b}$$s_{0}f$$s_{0}g$$F$$(s_{1},s_{0})F$$(s_{0},s_{1})F$ Figure 5. The simplicial set $\operatorname{diag}(\mathcal{N}\circ\mathscr{S})$. Note that the top and bottom edges are degenerate, as $s_{0}\operatorname{id}_{a}=(s_{0},s_{0})a$, and similar for $s_{0}\operatorname{id}_{b}$. The left and right edges are, however, not degenerate as 1–simplices in $\operatorname{diag}(\mathcal{N}\circ\mathscr{S})$. ### 4.2. The double nerve of a p-category Given a p-category $C$, one can construct a simplicial category $\operatorname{\overline{N}}C$ as follows: * • The objects in $(\operatorname{\overline{N}}C)[0]$ are the objects in $C$. * • For $n\geq 1$, the objects in $(\operatorname{\overline{N}}C)[n]$ are degeneracies of the objects in $(\operatorname{\overline{N}}C)[0]$. * • The morphisms $a\to b$ in $(\operatorname{\overline{N}}C)[0]$ are the morphisms $a\to b$ in $C$. * • For $n\geq 1$, the morphisms $(s_{0})^{n}a\to(s_{0})^{n}b$ in $(\operatorname{\overline{N}}C)[n]$ are $n$–simplices in $\mathcal{N}\left(\operatorname{Hom}_{C}(a,b)\right)$ (where we consider the poset $\operatorname{Hom}_{C}(a,b)$ as an ordinary category and take its nerve). In other words, morphisms in $\operatorname{Hom}_{(\operatorname{\overline{N}}C)[n]}\left((s_{0})^{n}a,(s_{0})^{n}b\right)$ are sets of morphisms $\\{f_{0},\dots,f_{n}\\}$ in $\operatorname{Hom}_{C}(a,b)$ such that $f_{0}\Rightarrow f_{1}\Rightarrow\dots\Rightarrow f_{n}$ (here $(s_{0})^{n}$ means $s_{0}$ applied $n$ times). The face and degeneracy maps are as in $\mathcal{N}\left(\operatorname{Hom}_{C}(a,b)\right)$. * • The composition of $\\{f_{0},\dots,f_{n}\\}\colon(s_{0})^{n}a\to(s_{0})^{n}b$ with $\\{g_{0},\dots,g_{n}\\}\colon(s_{0})^{n}b\to(s_{0})^{n}c$ is $\\{g_{0}\circ f_{0},\dots,g_{n}\circ f_{n}\\}$. 
It’s easy to verify that $g_{0}\circ f_{0}\Rightarrow g_{1}\circ f_{1}\Rightarrow\dots\Rightarrow g_{n}\circ f_{n}$, so that the composition rule is well-defined. It’s also easily verified that this composition rule is associative, and that $\\{\operatorname{id}_{x},\dots,\operatorname{id}_{x}\\}$ acts as the identity on $(s_{0})^{n}x$. To show that $\operatorname{\overline{N}}C$ is a well- defined simplicial category, it remains to show that the face maps and degeneracy maps are functors between small categories. It’s clear that the face map $d_{i}\colon(\operatorname{\overline{N}}C)[n]\to(\operatorname{\overline{N}}C)[n-1]$ sends a morphisms $\\{f_{0},\dots,f_{n}\\}\in\operatorname{Hom}_{(\operatorname{\overline{N}}C)([n])}\left((s_{0})^{n}a,(s_{0})^{n}b\right)$ to $\operatorname{Hom}_{(\operatorname{\overline{N}}C)([n-1])}\left(d_{i}(s_{0})^{n}a,d_{i}(s_{0})^{n}b\right)$, as $d_{i}(s_{0})^{n}=(s_{0})^{n-1}$. Furthermore, $d_{i}$ commutes with composition: $\displaystyle d_{i}$ $\displaystyle\left(\\{g_{0},\dots,g_{n}\\}\circ\\{f_{0},\dots,f_{n}\\}\right)$ $\displaystyle=d_{i}\\{g_{0}\circ f_{0},\dots,g_{n}\circ f_{n}\\}$ $\displaystyle=\\{g_{0}\circ f_{0},\dots,g_{i-1}\circ f_{i-1},g_{i+1}\circ f_{i+1},\dots,g_{n}\circ f_{n}\\}$ $\displaystyle=\\{g_{0},\dots,g_{i-1},g_{i+1},\dots,g_{n}\\}\circ\\{f_{0},\dots,f_{i-1},f_{i+1},\dots,f_{n}\\}$ $\displaystyle=\left(d_{i}\\{g_{0},\dots,g_{n}\\}\right)\circ\left(d_{i}\\{f_{0},\dots,f_{n}\\}\right)$ Hence, the face maps are functors. Similar computations for $s_{i}$ gives that the degeneracy maps are functors. In conclusion, $\operatorname{\overline{N}}C$ is a well-defined simplicial category. ###### Example 4.2. Let’s consider the p-category $\mathscr{P}$ with two objects, $a$ and $b$, two non-identity morphisms, $f\colon a\to b$ and $g\colon a\to b$, and a single non-identity partial order relation $f\Rightarrow g$ (see also Figure 6 for an illustration). $f$$g$$a$$b$$\Downarrow$ Figure 6. The p-category $\mathscr{P}$. Let $\operatorname{\overline{N}}\mathscr{P}$ be the corresponding simplicial category as described above. Then the objects in $(\operatorname{\overline{N}}\mathscr{P})[0]$ are $a$ and $b$, while the morphisms in $(\operatorname{\overline{N}}\mathscr{P})[0]$ are $f$ and $g$, together with identities. There is a single non-degenerate morphism in $(\operatorname{\overline{N}}\mathscr{P})[1]$, namely $F:=\\{f,g\\}$, which has the faces $d_{1}F=f$ and $d_{0}F=g$. All other objects and morphisms are degenerate. Thus, $\operatorname{\overline{N}}\mathscr{P}$ is precisely the simplicial category $\mathscr{S}$ from 4.1. As mentioned in the previous section, we can concatenate the simplicial category $\operatorname{\overline{N}}C$ with the nerve operation $\mathcal{N}$ to get a bisimplicial set. We will call this composite operation the _double nerve_. ###### Definition 4.3. Let $C$ be a p-category. The _double nerve_ of $C$, denoted $\operatorname{N\overline{N}}C$, is the bisimplicial set $\mathcal{N}\circ(\operatorname{\overline{N}}C)$. The _classifying space_ of $C$ is the geometric realization of the double nerve, i.e., $|\operatorname{N\overline{N}}C|$. Note that in [2] and [3], the intermediate simplicial category $\operatorname{\overline{N}}C$ ($\underline{\text{N}}C$ in their papers) is defined differently; there, the $n$–simplices are horizontal compositions, not vertical compositions. The double nerve, however, is the same in both definitions, except that the indices in the bisimplicial set are swapped. ## 5\. 
The discrete flow category In this section we present the definition of the discrete flow category, as defined by Nanda in [14]. Note that we only define the discrete flow category of a discrete Morse function on a regular CW complex. For the general discrete flow category, see [14]. ### 5.1. The entrance path category We now describe an example of a p-category, the _entrance path category_ of a regular CW complex. It is similar to the face poset, but has more structure: it includes data on _how_ a cell is a face of another cell. ###### Definition 5.1. Let $X$ be a regular CW complex. The _entrance path category_ $\operatorname{Ent}[X]$ of $X$ is a p-category given by the following. 1. (1) The objects are the cells in $X$. 2. (2) The morphisms $x\to y$ are strictly descending sequences of cells $(x=x_{0}>x_{1}>\dots>x_{k}=y)$, with the understanding that $\operatorname{id}_{x}$ is the sequence $(x)$ with a single element. 3. (3) The partial order is defined so that $f\Rightarrow f^{\prime}$ if and only if $f$ is a (not necessarily contiguous) subsequence of $f^{\prime}$. 4. (4) Composition of morphisms are given by concatenating sequences as follows. $\left(z>x_{1}>\dots>x_{k}\right)\circ\left(y_{0}>\dots>y_{l-1}>z\right)=\left(y_{0}>\dots>y_{l-1}>z>x_{1}>\dots x_{k}\right)$ A theorem, proved in [14, p. 11], states the following. ###### Theorem 5.2. Let $X$ be a finite regular CW complex. Then $X$ is homotopy equivalent to $|\mathcal{N}(\operatorname{Ent}[X])|$. ### 5.2. Localization Given a category $C$ and a collection of morphisms $Q$ that contains all identities and is closed under composition, one can construct a new category $C[Q^{-1}]$, called the _localization_ of $C$ at $Q$. This category is the minimal category containing $C$ where all morphisms in $Q$ have inverses. The localization comes with a functor $L\colon C\to C[Q^{-1}]$ that sends objects in $C$ to their copies in $C[Q^{-1}]$ and morphisms in $C$ to their equivalence classes in $C[Q^{-1}]$. We now describe how to localize a p-category at a special class of morphisms. ###### Definition 5.3. A morphism $f\colon x\to y$ in a p-category $C$ is called an _atom_ if 1. (1) $f\Rightarrow f^{\prime}$ holds for any $f^{\prime}\in C(x,y)$, 2. (2) $x=y$ implies $f=\operatorname{id}_{x}$, and 3. (3) Solutions to $h\circ g\Rightarrow f$ for morphisms $g\colon x\to z$ and $h\colon z\to y$ only exist when * • $z=x$, in which case $(g,h)=(\operatorname{id}_{x},f)$, or * • $z=y$, in which case $(g,h)=(f,\operatorname{id}_{y})$. As an example, in the entrance path category of a regular CW complex, the atoms are precisely the sequences of length 2, i.e., those on the form $(x>y)$, together with the identities. ###### Definition 5.4. A collection $\Sigma$ of morphisms in a p-category is called _directed_ if 1. (1) all morphisms in $\Sigma$ are atoms, 2. (2) if $f\colon x\to y$ is in $\Sigma$, then $x\neq y$, and 3. (3) if $f\colon x\to y$ is in $\Sigma$, then there are no morphisms $y\to x$ in $\Sigma$. We again give an example in the entrance path category. Let $f\colon X\to\mathbb{R}$ be a discrete Morse function. The gradient vector field $V_{f}=\\{(x_{i},y_{i})\\}$ gives a collection $\Sigma=\\{(y_{i}>x_{i})\\}$ of morphisms in $\operatorname{Ent}[X]$. It’s easily verified that $\Sigma$ is directed. Now, given a p-category $C$ and a directed collection $\Sigma$ of morphisms in $C$ so that the union $\Sigma^{+}$ with all identities is closed under composition, we can define the _localization of $C$ at $\Sigma$_. 
We write this as $\operatorname{Loc}_{\Sigma}C$, and define it as follows. ###### Definition 5.5. Let $C$, $\Sigma$ and $\Sigma^{+}$ be as above. Then $\operatorname{Loc}_{\Sigma}C$ is a p-category given by the following. * • The objects are the same as the objects in $C$. * • The morphisms $w\to z$ are equivalence classes of zigzags of the form ${w}$${y_{0}}$${x_{0}}$${y_{1}}$${\cdots}$${x_{k}}$${z}$$g_{0}$$f_{0}$$g_{1}$$f_{1}$$f_{k}$$g_{k+1}$ where the $f_{i}$’s are in $\Sigma^{+}$, and the $g_{i}$’s are arbitrary, and the equivalence relation is generated by the following relations. Two zigzags are related _horizontally_ if they differ by intermediate identity maps, and _vertically_ if they form the rows of a commutative diagram of the form ${w}$${y_{0}}$${x_{0}}$${y_{1}}$${\cdots}$${x_{k}}$${z}$${w}$${y_{0}^{\prime}}$${x_{0}^{\prime}}$${y_{1}^{\prime}}$${\cdots}$${x_{k}^{\prime}}$${z}$$g_{0}$$f_{0}$$g_{1}$$f_{1}$$f_{k}$$g_{k+1}$$g_{0}^{\prime}$$f_{0}^{\prime}$$g_{1}^{\prime}$$f_{1}^{\prime}$$f_{k}^{\prime}$$g_{k+1}^{\prime}$$\operatorname{id}_{w}$$u_{0}$$v_{0}$$u_{1}$$v_{k}$$\operatorname{id}_{z}$ where the $u_{i}$’s and $v_{i}$’s are in $\Sigma^{+}$. * • The partial order is defined so that $\gamma^{\prime}\Rightarrow\gamma$ if and only if there exist representatives of $\gamma$ and $\gamma^{\prime}$ that fit into the top and bottom row, respectively, of a (not necessarily commutative) diagram of the form ${w}$${y_{0}}$${x_{0}}$${y_{1}}$${\cdots}$${x_{k}}$${z}$${w}$${y_{0}^{\prime}}$${x_{0}^{\prime}}$${y_{1}^{\prime}}$${\cdots}$${x_{k}^{\prime}}$${z}$$g_{0}$$f_{0}$$g_{1}$$f_{1}$$f_{k}$$g_{k+1}$$g_{0}^{\prime}$$f_{0}^{\prime}$$g_{1}^{\prime}$$f_{1}^{\prime}$$f_{k}^{\prime}$$g_{k+1}^{\prime}$$\operatorname{id}_{w}$$u_{0}$$v_{0}$$u_{1}$$v_{k}$$\operatorname{id}_{z}$ where, again, $u_{i}$ and $v_{i}$ are in $\Sigma^{+}$. * • Composition of morphisms are given by concatenating representatives as follows $\displaystyle\left[w\xrightarrow{g_{0}}y_{0}\xleftarrow{f_{0}}\dots\ \xleftarrow{f_{k}}x_{k}\xrightarrow{g_{k+1}}z\right]\circ\left[v\xrightarrow{g_{0}^{\prime}}y_{0}^{\prime}\xleftarrow{f_{0}^{\prime}}\dots\ \xleftarrow{f_{k}^{\prime}}x_{k}^{\prime}\xrightarrow{g_{k+1}^{\prime}}w\right]$ $\displaystyle\quad=\left[v\xrightarrow{g_{0}^{\prime}}y_{0}^{\prime}\xleftarrow{f_{0}^{\prime}}\dots\ \xleftarrow{f_{k}^{\prime}}x_{k}^{\prime}\xrightarrow{g_{0}\circ g_{k+1}^{\prime}}y_{0}\xleftarrow{f_{0}}\dots\ \xleftarrow{f_{k}}x_{k}\xrightarrow{g_{k+1}}z\right].$ One also gets a functor (or equivalently, a strict p-functor) in this case, which we write $L_{\Sigma}\colon C\to\operatorname{Loc}_{\Sigma}C$. The functor sends objects to themselves and morphisms to their respective equivalence classes. As mentioned above, a gradient vector field $V_{f}=\\{(x_{i},y_{i})\\}$ gives a directed collection $\Sigma=\\{(y_{i}>x_{i})\\}$ on the entrance path category. Thus, given a discrete Morse function $f\colon X\to\mathbb{R}$, we can localize $\operatorname{Ent}[X]$ at $\Sigma$, which will give us a category where the morphisms corresponding to regular pairs have inverses. We will call this $\Sigma$ the _Morse system_ induced by $f$. ### 5.3. The discrete flow category We are now ready to define the discrete flow category of a discrete Morse function. ###### Definition 5.6. Let $f\colon X\to\mathbb{R}$ be a discrete Morse function, and let $\Sigma$ be its induced Morse system. Let $\operatorname{Loc}_{\Sigma}[X]$ be the p-category localization of $\operatorname{Ent}[X]$ at $\Sigma$. 
The _discrete flow category_ of $f$, denoted $\operatorname{Flo}_{\Sigma}[X]$, is the full subcategory of $\operatorname{Loc}_{\Sigma}[X]$ generated by the critical cells of $f$. A special case of the main result of [14] states the following. ###### Theorem 5.7. Let $f\colon X\to\mathbb{R}$ be a discrete Morse function, and let $\Sigma$ be its induced Morse system. Then the classifying space of the discrete flow category of $f$ is homotopy equivalent to $X$. ###### Example 5.8. As an example, consider the discrete Morse function on $X\cong S^{1}$ illustrated in Figure 7. $x$$y$$z$$a$$b$$c$ Figure 7. A gradient vector field of a discrete Morse function $f\colon X\to\mathbb{R}$. The objects of $\operatorname{Flo}_{\Sigma}[X]$ are the critical cells of $f$, which are $x$ and $c$. There are two morphisms from $x$ to $c$, corresponding to the two zigzag paths $x>b<y>c$ and $x>a<z>c$. There are no partial order relation between these morphisms. The other Hom posets are $\operatorname{Hom}(x,x)=\\{\operatorname{id}_{x}\\}$, $\operatorname{Hom}(c,c)=\\{\operatorname{id}_{c}\\}$ and $\operatorname{Hom}(c,x)=\emptyset$. Thus, the classifying space $|\mathcal{N}(\operatorname{Flo}_{\Sigma}[X])|$ becomes $S^{1}$, as illustrated in Figure 8. $[x>a<z>c]$$[x>b<y>c]$$x$$c$$\mathcal{N}$$[x>a<z>c]$$[x>b<y>c]$$x$$c$ Figure 8. The discrete flow category and its nerve (the identity morphisms are omitted). ## 6\. Hom posets of discrete flow categories In this section we study the structure of the Hom posets of the discrete flow category of a discrete Morse function. We first develop some technical results on the Hom posets, and in particular show that the posets are _graded_. We then use these results to construct an algorithm to compute said posets. Finally, we prove A: for a discrete Morse function on a simplicial complex $X$, the opposite poset of each Hom poset in its discrete flow category is a _CW poset_ , i.e., it has the structure of a face poset of a regular CW complex. ### 6.1. Preliminaries on posets In [1], Björner describes sufficient conditions for a poset to be the face poset of a regular CW complex. Note that he defines face posets to have an added least element $\hat{0}$, as opposed to our definition of $\operatorname{Fac}[X]$. Björner’s definition of CW posets is as follows. ###### Definition 6.1. A poset $P$ is said to be a _CW poset_ if 1. (1) $P$ has a least element $\hat{0}$, 2. (2) $P$ is nontrivial, i.e., has more than one element, 3. (3) For all $x\in P\setminus\\{\hat{0}\\}$ the realization of the open interval $(\hat{0},x)$ is homeomorphic to a sphere (i.e., to some $S^{k}$, where $k$ depends on $x$). We now formalize the statement that CW posets are the face posets of regular CW complexes (augmented with a least element). ###### Definition 6.2. Let $X$ be a regular CW complex. Let the _face poset_ of $X$, denoted $\operatorname{Fac}[X]$, be the poset where the elements are the cells in $X$ and $x\geq y$ whenever $y$ is contained in $x$. Denote by $\operatorname{Fac}^{+}[X]$ the face poset $\operatorname{Fac}[X]$ augmented with a least element $\hat{0}$. In other words, $\operatorname{Fac}^{+}[X]=\operatorname{Fac}[X]\cup\\{\hat{0}\\}$, where $\hat{0}$ is defined to be smaller than all other elements. The following statement is proved in [1, p. 11]. ###### Theorem 6.3. A poset $P$ is a CW poset if and only if it is isomorphic to $\operatorname{Fac}^{+}[X]$ for some regular CW complex $X$. A poset is _bounded_ if it has a least and greatest element. 
Furthermore, we say that $x$ _covers_ $y$ if $x>y$ and there exists no element $z$ such that $x>z>y$. ###### Definition 6.4. A graded poset is a poset $P$ equipped with a _rank function_ , which is a function $\rho\colon P\to\mathbb{Z}$ satisfying the following two properties. 1. (1) The function $\rho$ is compatible with the partial order, meaning that $x>y$ implies $\rho(x)>\rho(y)$. 2. (2) The function $\rho$ is compatible with the covering relation, meaning that if $x$ covers $y$ then $\rho(x)=\rho(y)+1$. For an element $x$, we will call $\rho(x)$ the _rank_ of $x$. ### 6.2. Hom posets of a discrete flow category In this section, we let $X$ be a finite regular CW complex, $f\colon X\to\mathbb{R}$ a discrete Morse function, and $\Sigma$ the Morse system on $\operatorname{Ent}[X]$ consisting of the regular pairs of $f$. Furthermore, we let $w$ and $z$ be arbitrary objects in $\operatorname{Flo}_{\Sigma}[X]$ and consider the Hom poset $\operatorname{Hom}(w,z)$ (note that whenever we write $\operatorname{Hom}(w,z)$, we mean $\operatorname{Hom}_{\operatorname{Flo}_{\Sigma}[X]}(w,z)$). First, we construct an algebraic invariant for morphisms in the discrete flow category, which we will use when proving that two different morphisms are not equal. Let $G$ be the free Abelian group on all morphisms in $\operatorname{Ent}[X]$ that are atoms, modulo all identities (i.e., we define identities to be 0). The algebraic invariant will be an element of this group. Given a representative $\gamma=\left(w=x_{0}\xrightarrow{g_{0}}y_{0}\xleftarrow{f_{0}}x_{1}\xrightarrow{g_{1}}\cdots\ \xleftarrow{f_{k-1}}x_{k}\xrightarrow{g_{k}}y_{k}=z\right)$ of some $[\gamma]\in\operatorname{Hom}(w,z)$, we let $\alpha(g_{i})=\left(g_{i}^{0}+g_{i}^{1}+\dots+g_{i}^{N_{i}}\right)\in G,$ where $g_{i}=g_{i}^{N_{i}}\circ\dots\circ g_{i}^{0}$ is the atom decomposition of $g_{i}$. We now define the algebraic invariant $I$ as follows. ###### Definition 6.5. Let $[\gamma]\in\operatorname{Hom}(w,z)$. Define $I([\gamma])$ as $I([\gamma])=\sum_{i=0}^{k}\alpha(g_{i})-\sum_{i=0}^{k-1}f_{i}.$ (1) ###### Lemma 6.6. The function $I$, as defined in 6.5, is well-defined. ###### Proof. To show that $I$ is well-defined, we must show that it is preserved under horizontal and vertical relations. (H) To see that it is preserved under horizontal relations, it is enough to check that $I([\gamma\circ\gamma^{\prime}])=I([\gamma])+I([\gamma^{\prime}])$, and that $I([x_{i}\xrightarrow{g_{i}}y_{i}\xleftarrow{\operatorname{id}}x_{i+1}\xrightarrow{g_{i+1}}y_{i+1}])=I([x_{i}\xrightarrow{g_{i+1}\circ g_{i}}y_{i+1}]),$ which follows from the fact that identities are defined to be 0 in $G$, and that $\alpha(g\circ h)=\alpha(g)+\alpha(h)$. (V) To see that $I$ is preserved under vertical relations, consider a diagram: ${w}$${y_{0}^{\prime}}$${x_{1}^{\prime}}$${y_{1}^{\prime}}$${\cdots}$${x_{k}^{\prime}}$${z}$${w}$${y_{0}}$${x_{1}}$${y_{1}}$${\cdots}$${x_{k}}$${z}$$g_{0}^{\prime}$$f_{0}^{\prime}$$g_{1}^{\prime}$$f_{1}^{\prime}$$f_{k-1}^{\prime}$$g_{k}^{\prime}$$g_{0}$$f_{0}$$g_{1}$$f_{1}$$f_{k-1}$$g_{k}$$u_{0}$$v_{1}$$u_{1}$$v_{k}$ We have $\alpha(g_{i}^{\prime})=\alpha(g_{i})+v_{i}-u_{i}$, and $f_{i}^{\prime}=f_{i}+u_{i}-v_{i+1}$ (as elements of $G$). 
Putting this together gives $\displaystyle I([\gamma])$ $\displaystyle=\sum_{i=0}^{k}\alpha(g_{i}^{\prime})-\sum_{i=0}^{k-1}f_{i}^{\prime}$ $\displaystyle=\sum_{i=0}^{k}\alpha(g_{i})+\sum_{i=0}^{k}v_{i}-\sum_{i=0}^{k}u_{i}-\left(\sum_{i=0}^{k-1}f_{i}+\sum_{i=0}^{k-1}u_{i}-\sum_{i=0}^{k-1}v_{i+1}\right)$ $\displaystyle=\sum_{i=0}^{k}\alpha(g_{i})-\sum_{i=0}^{k-1}f_{i},$ where we use that $v_{0}=\operatorname{id}_{w}=0$ and $u_{k}=\operatorname{id}_{z}=0$. Hence, $I$ is preserved under vertical relations.∎ ###### Theorem 6.7. Let $[\gamma]$ and $[\gamma^{\prime}]$ be morphisms in $\operatorname{Hom}(w,z)$ such that $[\gamma]\Rightarrow[\gamma^{\prime}]$. Let $\tau=\left(w=x_{0}\xrightarrow{g_{0}}y_{0}\xleftarrow{f_{0}}x_{1}\xrightarrow{g_{1}}\cdots\ \xleftarrow{f_{k-1}}x_{k}\xrightarrow{g_{k}}y_{k}=z\right)$ be a representative of $[\gamma]$. Then it is possible to choose a representative $\tau^{\prime}$ of $[\gamma^{\prime}]$, such that $\tau^{\prime}=\left(w=x_{0}\xrightarrow{g_{0}^{\prime}}y_{0}\xleftarrow{f_{0}}x_{1}\xrightarrow{g_{1}^{\prime}}\cdots\ \xleftarrow{f_{k-1}}x_{k}\xrightarrow{g_{k}^{\prime}}y_{k}=z\right),$ for some $g_{i}^{\prime}$, such that $g_{i}\Rightarrow g_{i}^{\prime}$ holds for all $i$. Furthermore, the $g_{i}^{\prime}$ are unique. Note that a partial order $g_{i}\Rightarrow g_{i}^{\prime}$ means that $g_{i}=(z_{0}>z_{1}>\dots>z_{m})$ is a subsequence of $g_{i}^{\prime}=(z_{0}^{\prime}>z_{1}^{\prime}>\dots>z_{n}^{\prime})$. Therefore, this theorem tells us that a partial order $[\gamma]\Rightarrow[\gamma^{\prime}]$ corresponds to picking a representative of $[\gamma]$ and adding elements to the sequences that constitutes the right- pointing arrows (the $g_{i}$). ###### Proof. We prove this in two parts. First we show that there exists a diagram ${w}$${\bar{y}_{0}}$${\bar{x}_{1}}$${\bar{y}_{1}}$${\cdots}$${\bar{x}_{k}}$${z}$${w}$${y_{0}}$${x_{1}}$${y_{1}}$${\cdots}$${x_{k}}$${z}$$\bar{g}_{0}$$\bar{f}_{0}$$\bar{g}_{1}$$\bar{f}_{1}$$\bar{f}_{k-1}$$\bar{g}_{k}$$g_{0}$$f_{0}$$g_{1}$$f_{1}$$f_{k-1}$$g_{k}$$u_{0}$$v_{1}$$u_{1}$$v_{k}$ (2) where the bottom row is $\tau$ and the top row is a representative of $[\gamma^{\prime}]$. Then we show that we can modify this diagram so that the top row is of the form $\tau^{\prime}=\left(w=x_{0}\xrightarrow{g_{0}^{\prime}}y_{0}\xleftarrow{f_{0}}x_{1}\xrightarrow{g_{1}^{\prime}}\cdots\ \xleftarrow{f_{k-1}}x_{k}\xrightarrow{g_{k}^{\prime}}y_{k}=z\right),$ where $\tau^{\prime}$ is still a representative of $[\gamma^{\prime}]$. To show the first part, we show that given a diagram representing a partial order relation (as the one in (2)), we can replace the bottom row by either (V) a vertically related representative, or (H) a horizontally related representative, and modify the top row appropriately to get a new diagram. (V) The case with vertically related representatives is simple; you simply compose the partial order diagram with the vertical relation diagram. (H) For horizontal relations, we must show that you can both add intermediate identity maps, and remove them. The case with adding intermediate identity maps is also simple, you simply add identity maps to both the top and bottom rows and copy the vertical map (clearly, the top rows is also horizontally related). 
For example, for adding an identity map at an $x_{i}$, you replace ${\cdots}$${x_{i}^{\prime}}$${\cdots}$${\cdots}$${x_{i}}$${\cdots}$$v_{i}$ with ${\cdots}$${x_{i}^{\prime}}$${x_{i}^{\prime}}$${x_{i}^{\prime}}$${\cdots}$${\cdots}$${x_{i}}$${x_{i}}$${x_{i}}$${\cdots}$$\operatorname{id}$$\operatorname{id}$$\operatorname{id}$$\operatorname{id}$$v_{i}$$v_{i}$$v_{i}$ Removing intermediate identity maps is more complicated (in fact, it only works on the bottom row, not the top row). Consider a diagram ${x_{i}^{\prime}}$${y_{i}^{\prime}}$${x_{i+1}^{\prime}}$${y_{i+1}^{\prime}}$${x_{i}}$${y_{i}}$${x_{i+1}}$${y_{i+1}}$$g_{i}^{\prime}$$f_{i}^{\prime}$$g_{i+1}^{\prime}$$g_{i}$$g_{i+1}$$v_{i}$$u_{i}$$v_{i+1}$$u_{i+1}$ If $f_{i}^{\prime}=\operatorname{id}$, we can clearly remove the identity maps from both rows. Suppose not. Then $u_{i}$ must be $\operatorname{id}$ and $v_{i+1}=f_{i}^{\prime}$. As $g_{i+1}\circ v_{i+1}\Rightarrow u_{i+1}\circ g_{i+1}^{\prime}$, and $v_{i+1}$ is an atom on the form $(x_{i+1}^{\prime}>x_{i+1})$ with $\dim x_{i+1}^{\prime}=\dim x_{i+1}+1$, we must have that $v_{i+1}\circ g_{i+1}^{\prime}$ is sequence that starts with $(x_{1}^{\prime}>x_{1})$. There are now two cases: either $g_{i+1}^{\prime}=\hat{g}_{i+1}\circ v_{i+1}$ for some $\hat{g}_{i+1}$, or $g_{i+1}^{\prime}=\operatorname{id}$ and $u_{i+1}=v_{i+1}$. In the first case, the top row is equivalent to $x_{i}^{\prime}\xrightarrow{\hat{g}_{i+1}\circ g_{i}^{\prime}}y_{i+1},$ as $g_{i+1}^{\prime}=\hat{g}_{i}\circ v_{i+1}=\hat{g}_{i}\circ f_{i}^{\prime}$, and we can replace the diagram with ${x_{i}^{\prime}}$${y_{i+1}^{\prime}}$${x_{i}}$${y_{i+1}}$$\hat{g}_{i+1}\circ g_{i}^{\prime}$$g_{i+1}\circ g_{i}$$v_{i}$$u_{i+1}$ The reader can verify that in the case $g_{i+1}^{\prime}=\operatorname{id}$, $f_{i+1}^{\prime}$ will be forced to equal $\operatorname{id}$, which allows us to make a similar rewrite to remove the identity map from the bottom row. Now, we have a diagram ${w}$${y_{0}^{\prime}}$${x_{1}^{\prime}}$${y_{1}^{\prime}}$${\cdots}$${x_{k}^{\prime}}$${z}$${w}$${y_{0}}$${x_{1}}$${y_{1}}$${\cdots}$${x_{k}}$${z}$$g_{0}^{\prime}$$f_{0}^{\prime}$$g_{1}^{\prime}$$f_{1}^{\prime}$$f_{k-1}^{\prime}$$g_{k}^{\prime}$$g_{0}$$f_{0}$$g_{1}$$f_{1}$$f_{k-1}$$g_{k}$$u_{0}$$v_{1}$$u_{1}$$v_{k}$ such that the bottom row is a $[\tau]$ and the top row is a representative of $[\gamma^{\prime}]$. Assume either $x_{1}\neq x_{1}^{\prime}$ or $y_{0}\neq y_{0}^{\prime}$. There are three cases to consider: (I) $x_{1}\neq x_{1}^{\prime}$ and $y_{0}\neq y_{0}^{\prime}$ (II) $x_{1}=x_{1}^{\prime}$ and $y_{0}\neq y_{0}^{\prime}$ (III) $x_{1}\neq x_{1}^{\prime}$ and $y_{0}=y_{0}^{\prime}$. Case I In this case, the exhaustion axiom gives that $y_{0}=x_{1}$ and $y_{0}^{\prime}=x_{1}^{\prime}$, which implies that $v_{1}=u_{0}$. The first three squares in the diagram is then: ${w}$${y_{0}^{\prime}}$${x_{1}^{\prime}}$${y_{1}^{\prime}}$${w}$${y_{0}}$${x_{1}}$${y_{1}}$$g_{0}^{\prime}$$g_{1}^{\prime}$$g_{0}$$g_{1}$$u_{0}$$u_{0}$$u_{1}$ As $g_{1}\circ u_{0}\Rightarrow u_{1}\circ g_{1}^{\prime}$, and $u_{0}$ is an atom on the form $(x_{1}^{\prime}>x_{1})$ with $\dim x_{1}^{\prime}=\dim x_{1}+1$, we must have that $u_{1}\circ g_{1}^{\prime}$ is sequence that starts with $(x_{1}^{\prime}>x_{1})$. There are now two cases: either $g_{1}^{\prime}=\hat{g}_{1}\circ u_{0}$ for some $\hat{g}_{1}$, or $g_{1}^{\prime}=\operatorname{id}_{x_{1}^{\prime}}$ and $u_{1}=u_{0}=g_{1}$. 
In the first case, the top row is vertically related to $w\xrightarrow{u_{0}\circ g_{0}^{\prime}}y_{0}\xleftarrow{\operatorname{id}}x_{1}\xrightarrow{\hat{g}_{1}}y_{1}^{\prime},$ so we can replace the diagram with: ${w}$${y_{0}}$${x_{1}}$${y_{1}^{\prime}}$${w}$${y_{0}}$${x_{1}}$${y_{1}}$$u_{0}\circ g_{0}^{\prime}$$\hat{g}_{1}$$g_{0}$$g_{1}$$u_{1}$ In the second case, we must have $g_{1}=\operatorname{id}$ and $u_{1}=\operatorname{id}$. Hence, the first four squares in the diagram: ${w}$${y_{0}^{\prime}}$${x_{1}^{\prime}}$${y_{1}^{\prime}}$${x_{2}^{\prime}}$${w}$${y_{0}}$${x_{1}}$${y_{1}}$${x_{2}}$$g_{0}^{\prime}$$g_{0}$$f_{1}$$u_{0}$$u_{0}$$u_{0}$$v_{2}$ Now, the top row is vertically related to $w\xrightarrow{u_{0}\circ g_{0}^{\prime}}y_{0}\xleftarrow{\operatorname{id}}x_{1}\xrightarrow{\operatorname{id}}y_{1}^{\prime}\xleftarrow{u_{0}}x_{2}^{\prime},$ so we can replace the diagram with: ${w}$${y_{0}^{\prime}}$${x_{1}^{\prime}}$${y_{1}^{\prime}}$${x_{2}^{\prime}}$${w}$${y_{0}}$${x_{1}}$${y_{1}}$${x_{2}}$$u_{0}\circ g_{0}^{\prime}$$u_{0}$$g_{0}$$f_{1}$$v_{2}$ Case II In this case the exhaustion axiom gives $y_{0}^{\prime}=x_{1}^{\prime}$ and $f_{0}=u_{0}$, so the first three squares of the diagram is: ${w}$${y_{0}^{\prime}}$${x_{1}^{\prime}}$${y_{1}^{\prime}}$${w}$${y_{0}}$${x_{1}}$${y_{1}}$$g_{0}^{\prime}$$g_{1}^{\prime}$$g_{0}$$u_{0}$$g_{1}$$u_{0}$$u_{1}$ The top row is vertically related to $x_{0}\xrightarrow{u_{0}\circ g_{0}^{\prime}}y_{0}\xleftarrow{u_{0}}x_{1}\xrightarrow{g_{1}}y_{1}^{\prime}$ (see [14, Remark 2.9]). We can thus replace this part of the diagram with: ${w}$${y_{0}}$${x_{1}^{\prime}}$${y_{1}^{\prime}}$${w}$${y_{0}}$${x_{1}}$${y_{1}}$$u_{0}\circ g_{0}^{\prime}$$u_{0}$$g_{1}^{\prime}$$g_{0}$$u_{0}$$g_{1}$$u_{1}$ Case III In this case the exhaustion axiom gives $y_{0}=x_{1}$ and $f_{0}^{\prime}=v_{1}$, so the first three squares of the diagram is: ${w}$${y_{0}^{\prime}}$${x_{1}^{\prime}}$${y_{1}^{\prime}}$${w}$${y_{0}}$${x_{1}}$${y_{1}}$$g_{0}^{\prime}$$v_{1}$$g_{1}^{\prime}$$g_{0}$$g_{1}$$v_{1}$$u_{1}$ As in the first case, either $g_{1}^{\prime}=\operatorname{id}$ or $g_{1}^{\prime}=\hat{g}_{1}\circ v_{1}$ for some $\hat{g}_{1}$. In the latter case, the top row is vertically related to $w\xrightarrow{g_{0}^{\prime}}y_{0}^{\prime}\xleftarrow{\operatorname{id}}x_{1}\xrightarrow{\hat{g}_{1}}y_{1}^{\prime},$ and we can replace the diagram with ${w}$${y_{0}}$${x_{1}}$${y_{1}^{\prime}}$${w}$${y_{0}}$${x_{1}}$${y_{1}}$$g_{0}^{\prime}$$\hat{g}_{1}$$g_{0}$$g_{1}$$u_{1}$ If, instead, $g_{1}^{\prime}=\operatorname{id}$, we must have $g_{1}=\operatorname{id}$ and $u_{1}=v_{1}$. As in case 1, we look at the first four squares of the diagram, and see that we can replace the top row, in this case with $w\xrightarrow{g_{0}^{\prime}}y_{0}^{\prime}\xleftarrow{\operatorname{id}}x_{1}\xrightarrow{\operatorname{id}}y_{1}\xleftarrow{v_{1}}x_{2}^{\prime}.$ Continuing this process on the right until you reach the end of the diagram, you end up with a new diagram on the form ${w}$${y_{0}}$${x_{1}}$${y_{1}}$${\cdots}$${x_{k}}$${z}$${w}$${y_{0}}$${x_{1}}$${y_{1}}$${\cdots}$${x_{k}}$${z}$$g_{0}^{\prime\prime}$$f_{0}$$g_{1}^{\prime\prime}$$f_{1}$$f_{k-1}$$g_{k}^{\prime\prime}$$g_{0}$$f_{0}$$g_{1}$$f_{1}$$f_{k-1}$$g_{k}$ where the and bottom row is $\tau$ and the top row is a representative of $[\gamma^{\prime}]$. It follows from the diagram that $g_{i}\Rightarrow g_{i}^{\prime\prime}$ for all $i$, and hence the top row is our desired representative $\tau^{\prime}$. 
Finally, we prove that the $g_{i}^{\prime}$ are unique. We do this by using the algebraic invariant $I$, as defined in 6.5, and showing that given $I$, the $g_{i}^{\prime}$ are decided by the $x_{i}$, $y_{i}$ and $f_{i}$. We know that $I$ is an algebraic invariant for morphisms in $\operatorname{Hom}(w,z)$. Thus, given a representation $\tau^{\prime}$ as in the theorem statement, where the $x_{i}$, $y_{i}$ and $f_{i}$ are given, we know what all the atoms in the $g_{i}^{\prime}$ are. It remains to show that these atoms can only appear in one specific order, which will mean that all the $g_{i}^{\prime}$ are uniquely defined. For this, recall that $f\colon X\to\mathbb{R}$ is the Morse function from which the Morse system is induced. We can assume without loss of generality that $f$ is injective on the cells of $X$. Now, for an atom $(x>y)$, let $\beta((x>y))=(\dim x,f(x),f(y))\in\mathbb{R}^{3}$. Then $\beta((x>y))=\beta((x^{\prime}>y^{\prime}))$ if and only if $(x>y)=(x^{\prime}>y^{\prime})$ (as $f$ is injective). Furthermore, using the lexicographical ordering on $\mathbb{R}^{3}$, $\beta$ is decreasing on the atoms in the right-pointing arrows of a representation, in the order they appear: * • If $(x^{\prime}>y^{\prime})$ directly succeeds $(x>y)$ (possibly with an intermediate identity maps), then $x^{\prime}=y$, and $\dim x^{\prime}>\dim y^{\prime}=\dim x>\dim y$. * • If there is a non-identity left-pointing map $f_{i}$ directly between two atoms $(x>y)$ and $(x^{\prime}>y^{\prime})$, then $f_{i}=(x^{\prime}>y)$, and $\dim x=\dim x^{\prime}$. Then, we have $f(y)>f(x^{\prime})$, and either $x=x^{\prime}$ or $f(x)>f(y)$, so $f(x)\geq f(x^{\prime})$. Likewise, either $y=y^{\prime}$ or $f(x^{\prime})>f(y^{\prime})$ so $f(y)\geq f(y^{\prime})$. In conclusion, this gives $\beta((x>y))\geq\beta((x^{\prime}>y^{\prime}))$. Thus, given the $x_{i}$, $y_{i}$ and $f_{i}$ in a representation $\tau^{\prime}$ as in the theorem statement, the atoms in the $g_{i}^{\prime}$, and their order, is uniquely determined, and hence the $g_{i}^{\prime}$ themselves are uniquely determined, which proves the last part of the theorem. ∎ Now, let $\operatorname{Hom}(w,z)^{\operatorname{op}}$ be the _opposite_ of the poset $\operatorname{Hom}(w,z)$, i.e., the poset with elements equal to the elements of $\operatorname{Hom}(w,z)$ and with the partial order reversed. Define further $P_{w,z}$ as $\operatorname{Hom}(w,z)^{\operatorname{op}}$ augmented with a least element $\hat{0}$. In other words, $P_{w,z}=\operatorname{Hom}(w,z)^{\operatorname{op}}\cup\\{\hat{0}\\},$ (3) where $\hat{0}$ is defined to be smaller than all other elements. Next, we show that $P_{w,z}$ is graded by defining a rank function on it. To do this, we first define the following rank function $r$ on $\operatorname{Hom}_{\operatorname{Ent}[X]}(a,b)^{\operatorname{op}}$, for arbitrary objects $a,b\in\operatorname{Ent}[X]$. $r\left(a=x_{0}>x_{1}>\dots>x_{k}=b\right)=k$ (4) Here, we take $r(\operatorname{id}_{x})$ to be $0$. It’s easy to verify that $r$ is, in fact, a rank function. Also, observe that $r$ satisfies the following identity. $r(\gamma\circ\gamma^{\prime})=r(\gamma)+r(\gamma^{\prime})$ (5) We now define a rank function on $P_{w,z}$. ###### Definition 6.8. Define the function $\rho\colon P_{w,z}\to\mathbb{Z}$ as follows. Let $x\in P_{w,z}$. If $x=\hat{0}$, then $\rho(x)=-1$. 
Else, let $x$ be represented by $\tau=\left(w=x_{0}\xrightarrow{g_{0}}y_{0}\xleftarrow{f_{0}}x_{1}\xrightarrow{g_{1}}\cdots\ \xleftarrow{f_{k-1}}x_{k}\xrightarrow{g_{k}}y_{k}=z\right).$ Then, $\rho(x)=N+\sum_{i=0}^{k-1}r(f_{i})-\sum_{i=0}^{k}r(g_{i}),$ where $N=\dim w-\dim z$. The rank function $\rho$ can be interpreted as the number of cells that are “skipped” in the $g_{i}$’s in the representative. ###### Theorem 6.9. The function $\rho$, as defined in 6.8, is well-defined and a rank function. ###### Proof. First, to prove well-definedness of $\rho$, we need to show that equivalent representations give the same value of $\rho$. To do this, we first show that horizontally related representations give the same value of $\rho$, and then we show that vertically related representations give the same value of $\rho$. 1. (1) Let $\tau$ and $\tau^{\prime}$ be horizontally related representations. Then $\tau$ can be transformed to $\tau^{\prime}$ by making substitutions of the form $\left(x_{i}\xrightarrow{g_{i}}y_{i}\xleftarrow{\operatorname{id}_{y_{i}}}y_{i}\xrightarrow{g_{i+1}}y_{i+1}\right)\leftrightarrow\left(x_{i}\xrightarrow{g_{i+1}\circ g_{i}}y_{i+1}\right).$ As $r(g_{i+1}\circ g_{i})=r(g_{i+1})+r(g_{i})$ and $r(\operatorname{id}_{y_{i}})=0$, these substitutions preserve the value of $\rho$, so both $\tau$ and $\tau^{\prime}$ give the same value of $\rho$. 2. (2) Let $\tau$ and $\tau^{\prime}$ be vertically related representations. Then they form the bottom and top rows of a commutative diagram: ${w}$${y_{0}^{\prime}}$${x_{1}^{\prime}}$${y_{1}^{\prime}}$${\cdots}$${x_{k}^{\prime}}$${z}$${w}$${y_{0}}$${x_{1}}$${y_{1}}$${\cdots}$${x_{k}}$${z}$$g_{0}^{\prime}$$f_{0}^{\prime}$$g_{1}^{\prime}$$f_{1}^{\prime}$$f_{k-1}^{\prime}$$g_{k}^{\prime}$$g_{0}$$f_{0}$$g_{1}$$f_{1}$$f_{k-1}$$g_{k}$$u_{0}$$v_{1}$$u_{1}$$v_{k}$ We adopt the notation that $v_{0}=\operatorname{id}_{w}$ and $u_{k}=\operatorname{id}_{z}$. Then $r(v_{0})=r(u_{k})=0$. Now, commutativity of the diagram gives that $r(u_{i})+r(f_{i}^{\prime})=r(u_{i}\circ f_{i}^{\prime})=r(f_{i}\circ v_{i+1})=r(f_{i})+r(v_{i+1})$. Similarly, $r(u_{i})+r(g_{i}^{\prime})=r(g_{i})+r(v_{i})$. Applying this, we get $\displaystyle N$ $\displaystyle+\sum_{i=0}^{k-1}r(f_{i}^{\prime})-\sum_{i=0}^{k}r(g_{i}^{\prime})$ $\displaystyle=N+\sum_{i=0}^{k-1}\left(r(f_{i})+r(u_{i})-r(v_{i+1})\right)-\sum_{i=0}^{k}\left(r(g_{i})+r(u_{i})-r(v_{i})\right)$ $\displaystyle=N+r(u_{k})-r(v_{0})+\sum_{i=0}^{k-1}r(f_{i})-\sum_{i=0}^{k}r(g_{i})$ $\displaystyle=N+\sum_{i=0}^{k-1}r(f_{i})-\sum_{i=0}^{k}r(g_{i}).$ Thus, $\tau$ and $\tau^{\prime}$ give the same value of $\rho$. Now, we prove that $\rho$ is a rank function. To prove this, we must prove that it satisfies the two conditions in 6.4. 1. (1) We prove that $\rho$ is compatible with the partial order. First, let $x$ and $y$ be elements in $P_{w,z}$ different from $\hat{0}$ such that $x>y$. Then $x\Rightarrow y$ when considered as elements of $\operatorname{Hom}(w,z)$. We can thus pick representatives $\tau$ and $\tau^{\prime}$ as in Theorem 6.7, and we get that $\rho(x)-\rho(y)=\sum_{i=0}^{k}r(g_{i}^{\prime})-\sum_{i=0}^{k}r(g_{i})=\sum_{i=0}^{k}\left(r(g_{i}^{\prime})-r(g_{i})\right)>0,$ where the last inequality follows from the fact that $g_{i}\Rightarrow g_{i}^{\prime}$ and that $g_{i}\neq g_{i}^{\prime}$ for at least one $i$ (otherwise, $x$ would equal $y$). Now, we must show that $\rho(x)>\rho(\hat{0})$ for $x\neq 0$. 
For this, let $\tau=\left(w=x_{0}\xrightarrow{g_{0}}y_{0}\xleftarrow{f_{0}}x_{1}\xrightarrow{g_{1}}\cdots\ \xleftarrow{f_{k-1}}x_{k}\xrightarrow{g_{k}}y_{k}=z\right)$ be a representative of $x$. Observe that $r(g_{i})\leq\dim x_{i}-\dim y_{i}$. Furthermore, $r(f_{i})=\dim x_{i+1}-\dim y_{i}$ (as $\dim x_{i+1}-\dim y_{i}$ is either 0 or 1, due to the fact that the Morse system $\Sigma$ consists of regular pairs of a discrete Morse function). Hence,

$\rho(x)=N+\sum_{i=0}^{k-1}r(f_{i})-\sum_{i=0}^{k}r(g_{i})\geq N+\sum_{i=0}^{k-1}\left(\dim x_{i+1}-\dim y_{i}\right)-\sum_{i=0}^{k}\left(\dim x_{i}-\dim y_{i}\right)=N+\dim y_{k}-\dim x_{0}=N+\dim z-\dim w=0.$ (6)

Thus, $\rho(x)\geq 0>-1=\rho(\hat{0})$, as desired.

2. (2) We prove that $\rho$ is compatible with the covering relation. First, let $x$ and $y$ be elements in $P_{w,z}$ different from $\hat{0}$ such that $x$ covers $y$. We again choose representatives $\tau$ and $\tau^{\prime}$ as in Theorem 6.7. It follows that $g_{i}=g_{i}^{\prime}$ for all but one $i$, or else we could choose a representative $\gamma$ so that $x>[\gamma]>y$. Similarly, it follows that for this $i$, $g_{i}^{\prime}$ covers $g_{i}$, or else we could find a $\hat{g}$ such that $g_{i}^{\prime}>\hat{g}>g_{i}$, which again would let us choose a representative $\gamma$ so that $x>[\gamma]>y$. As $g_{i}^{\prime}$ covers $g_{i}$, we have that $r(g_{i}^{\prime})=r(g_{i})+1$, and thus $\rho(x)-\rho(y)=\sum_{i=0}^{k}r(g_{i}^{\prime})-\sum_{i=0}^{k}r(g_{i})=1,$ as desired. Now, we must show that if $x$ covers $\hat{0}$, then $\rho(x)=\rho(\hat{0})+1=0$. An element $x$ covers $\hat{0}$ if and only if there is no morphism $\sigma\in\operatorname{Hom}(w,z)$ with $\sigma\neq x$ such that $x\Rightarrow\sigma$. Let the following be a representative of such an $x$. $\tau=\left(w=x_{0}\xrightarrow{g_{0}}y_{0}\xleftarrow{f_{0}}x_{1}\xrightarrow{g_{1}}\cdots\ \xleftarrow{f_{k-1}}x_{k}\xrightarrow{g_{k}}y_{k}=z\right)$ Then, all $g_{i}$ must be maximal in $\operatorname{Hom}_{\operatorname{Ent}[X]}(x_{i},y_{i})$, or else we could replace this $g_{i}$ to get a representative $\tau^{\prime}$ with $[\tau]\Rightarrow[\tau^{\prime}]$. It is easy to see that the maximal elements of $\operatorname{Hom}_{\operatorname{Ent}[X]}(x_{i},y_{i})$ are the sequences of the form $(x_{i}=a_{0}>a_{1}>\dots>a_{m}=y_{i})$ with $m=\dim x_{i}-\dim y_{i}$, and hence $r(g_{i})=\dim x_{i}-\dim y_{i}$. We conclude that the inequality in (6) is an equality in this case, and $\rho(x)=0$, as desired. ∎
###### Proof. Define $\tilde{f}\colon X\to\mathbb{R}$ as follows. $\tilde{f}(x)=\begin{cases}f(y),&\quad\text{if }(x,y)\in V_{f}\text{ for some }y,\\\ f(x),&\quad\text{ otherwise.}\\\ \end{cases}$ It’s clear that $\tilde{f}(x)=\tilde{f}(y)$ for all regular pairs of $f$, so it remains to show that $\tilde{f}$ is a discrete Morse function, and that $\tilde{f}$ and $f$ are Forman equivalent. We prove both of these things by showing that their induced gradient vector fields, $V_{\tilde{f}}$ and $V_{f}$ are the same. Clearly, $V_{f}\subseteq V_{\tilde{f}}$. We need to show that $V_{\tilde{f}}\subseteq V_{f}$. First, observe that $\tilde{f}(x)\leq f(x)$ for all $x$. Now, suppose $(x,y)\in V_{\tilde{f}}$. Then $x<y$ and $\tilde{f}(x)\geq\tilde{f}(y)$. If $\tilde{f}(y)=f(y)$, then $f(y)=\tilde{f}(y)\leq\tilde{f}(x)\leq f(x)$, so $(x,y)\in V_{f}$. Suppose $\tilde{f}(y)\neq f(y)$. Then $(y,z)\in V_{f}$ for some $z$, and $\tilde{f}(y)=f(z)$. Let $w\neq y$ be such that $x<w<z$. If $(x,w)$ is a regular pair, then $\tilde{f}(x)=f(w)<f(z)=\tilde{f}(y)$, which is a contradiction. If $(x,w)$ is not a regular pair, then $\tilde{f}(x)\leq f(x)<f(w)<f(z)=\tilde{f}(y)$; also a contradiction. Hence, we cannot have that $f(y)\neq\tilde{f}(y)$, which completes the proof. ∎ ###### Theorem 6.11. A morphism $[\gamma]\in\operatorname{Hom}(w,z)$ is represented by a unique sequence $(w=x_{0},x_{1},\dots,x_{k}=z)$ of cells, such that * • no two elements in the sequence are equal, and * • for all $i<k$, either $x_{i+1}<x_{i}$ or $\\{x_{i},x_{i+1}\\}$ is a regular pair. It is clear that such a sequence always defines a morphism in $\operatorname{Hom}(w,z)$, so this tells us that we can count the morphisms in $\operatorname{Hom}(w,z)$ by counting the simple paths from $w$ to $z$ in a directed graph with face relations and regular pairs as edges. ###### Proof. We get a sequence representation from a representation $\gamma=\left(x_{0}\xrightarrow{g_{0}}y_{0}\xleftarrow{f_{0}}x_{1}\xrightarrow{g_{1}}\cdots\ \xleftarrow{f_{k-1}}x_{k}\xrightarrow{g_{k}}y_{k}\right)$ by concatenating all the $g_{i}$’s. For example, $\left(x_{0}\xrightarrow{g_{0}=(x_{0}>x_{0}^{1}>y_{0})}y_{0}\xleftarrow{f_{0}=(x_{1}>y_{0})}x_{1}\xrightarrow{g_{1}=(x_{1}>x_{1}^{1}>x_{1}^{2}>y_{1})}y_{1}\right)$ becomes $(x_{0},x_{0}^{1},y_{0},x_{1},x_{1}^{1},x_{1}^{2},y_{1}).$ To get a sequence without repeating elements, we choose an appropriate representation. To see that this is possible, first observe that we can eliminate successive repeating elements by removing intermediate identity maps (horizontal relation). Furthermore, by 6.10, given a sequence $(x_{0},\dots,x_{k})$, there is a discrete Morse function $\tilde{f}$ such that $\tilde{f}(x_{0})\geq\tilde{f}(x_{1})\geq\dots\geq\tilde{f}(x_{k})$. Thus, if a cell $x^{\prime}$ repeats in the sequence, the only other cell that can be between the two $x^{\prime}$ is $y^{\prime}$ where $\\{x^{\prime},y^{\prime}\\}$ form a regular pair. Now, we can simplify the representation, as per [14, Remark 2.9], to turn $(\dots,x_{i},x^{\prime},y^{\prime},x^{\prime},x_{i+4},\dots)$ into $(\dots,x_{i},x^{\prime},x_{i+4},\dots)$. It remains to show that for a given morphism, this sequence representation is unique. For this, we show that the sequence representation is decided by the algebraic invariant $I$, as defined in 6.5. Let $(w=x_{0},x_{1},\dots,x_{k}=z)$ and $(w=y_{0},y_{1},\dots,y_{l}=z)$ be sequence representations of $[\tau]$ and $[\sigma]$, such that $I([\tau])=I([\sigma])$. 
Now, as $x_{0}$ appears only once in the first sequence, the coefficient before the atom $(x_{0}>x_{1})$ (or $(x_{1}>x_{0})$) in $I([\tau])$ is nonzero. Hence, as $y_{0}=x_{0}$ appears only once in the second sequence, we must have $y_{1}=x_{1}$ for $I([\sigma])$ to equal $I([\tau])$. Similarly, we must have $y_{2}=x_{2}$, and so on, so the sequences must be equal in all places. This completes the proof of uniqueness. ∎ For the following theorem, recall the definitions of $r$ (4) and $\rho$ (6.8) from the previous section. ###### Theorem 6.12. Let $[\gamma]\in\operatorname{Hom}(w,z)$ be a morphism represented by the sequence $(x_{0},\dots,x_{k})$. Let $I$ be the set of indices $i\in\\{0,\dots,k-1\\}$ such that $x_{i}>x_{i+1}$. Then, $\rho([\gamma])=\sum_{i\in I}\left(\dim x_{i}-\dim x_{i+1}-1\right)$ ###### Proof. The morphism $[\gamma]$ is represented by $\gamma=\left(x_{j_{0}}\xrightarrow{g_{0}}x_{j_{1}-1}\xleftarrow{f_{0}}x_{j_{1}}\xrightarrow{g_{1}}\cdots\ \xleftarrow{f_{k-1}}x_{j_{m}}\xrightarrow{g_{k}}x_{j_{m+1}-1}\right)$ for some sequence $(j_{0},\dots,j_{m+1})$ (with $j_{0}=0$ and $j_{m+1}=k+1$). Then, $\displaystyle\sum_{i\in I}$ $\displaystyle\left(\dim x_{i}-\dim x_{i+1}-1\right)=\sum_{i=0}^{m}\sum_{l=j_{i}}^{j_{i+1}-2}\left(\dim x_{l}-\dim x_{l+1}-1\right)$ $\displaystyle=\sum_{i=0}^{m}\left(\dim x_{j_{i}}-\dim x_{j_{i+1}-1}-(j_{i+1}-j_{i}-1)\right)$ $\displaystyle=\dim x_{j_{0}}-\dim x_{j_{m+1}-1}+\sum_{i=1}^{m}\left(\dim x_{j_{i}}-\dim x_{j_{i}-1}\right)-\sum_{i=0}^{m}\left(j_{i+1}-j_{i}-1\right)$ $\displaystyle=\dim x_{0}-\dim x_{k}+\sum_{i=0}^{m-1}r(f_{i})-\sum_{i=0}^{m}r(g_{i})=\rho([\gamma]).\qed$ ###### Theorem 6.13. Let $[\gamma]$ and $[\tau]$ be morphisms in $\operatorname{Hom}(w,z)$, such that $\rho([\tau])=\rho([\gamma])+1$. Then, $[\tau]$ covers $[\gamma]$ if and only if the sequence representation of $[\gamma]$ equals the sequence representation of $[\tau]$ with exactly one element added or exactly one element removed. ###### Proof. First, suppose $[\tau]$ covers $[\gamma]$. By Theorem 6.7, you can get $[\gamma]$ from $[\tau]$ by adding an element $x^{\prime}$ to the sequence representation of $[\tau]$ and possibly reducing to a non-repeating sequence. There are two cases to consider, (I) The element $x^{\prime}$ was not in the sequence representation of $[\tau]$. (II) The element $x^{\prime}$ was in the sequence representation of $[\tau]$. Case I In this case, the sequence representation of $[\gamma]$ is simply the sequence representation of $[\tau]$ with $x^{\prime}$ added. Case II In this case, adding $x^{\prime}$ to the sequence representation of $[\tau]$ gives something of the form $(x_{0},\dots,x_{i},x^{\prime},x_{i+1},x^{\prime},x_{i+2},\dots,x_{k})$, which is equivalent to $(x_{0},\dots,x_{i},x^{\prime},x_{i+2},\dots,x_{k})$. Hence, the sequence representation of $[\gamma]$ is the sequence representation of $[\tau]$ with $x_{i+1}$ removed. Now, for the other direction, there are again two cases to consider. (I) The sequence representation of $[\gamma]$ equals that of $[\tau]$ with one element added. (II) The sequence representation of $[\gamma]$ equals that of $[\tau]$ with one element removed. Case I Let the sequence representations of $[\tau]$ and $[\gamma]$ be $(x_{0},\dots,x_{k})$ and $(x_{0},\dots,x_{i},x^{\prime},x_{i+1},\dots,x_{k})$, respectively. We use Theorem 6.12 and the fact that $\rho([\tau])=\rho([\gamma])+1$ to conclude that $\dim x_{i}>\dim x^{\prime}>\dim x_{i+1}$. 
Hence, $[\gamma]$ equals $[\tau]$ with the element $x^{\prime}$ added to a right-pointing arrow, so $[\tau]\Rightarrow[\gamma]$.

Case II Let the sequence representations of $[\tau]$ and $[\gamma]$ be $(x_{0},\dots,x_{k})$ and $(x_{0},\dots,x_{i},x_{i+2},\dots,x_{k})$, respectively. We use Theorem 6.12 and the fact that $\rho([\tau])=\rho([\gamma])+1$ to conclude that either $\dim x_{i+1}>\dim x_{i}$ or $\dim x_{i+2}>\dim x_{i+1}$. In the first case, $[x_{i}>x_{i+2}]=[x_{i}<x_{i+1}>x_{i}>x_{i+2}]$. In the second case, $[x_{i}>x_{i+2}]=[x_{i}>x_{i+2}>x_{i+1}<x_{i+2}]$. In any case, $[\gamma]$ can be constructed by adding a single element (either $x_{i}$ or $x_{i+2}$) to a right-pointing arrow of $[\tau]$, so $[\tau]\Rightarrow[\gamma]$. Since $[\tau]\Rightarrow[\gamma]$ and their ranks differ by one, $[\tau]$ covers $[\gamma]$, and this concludes the proof. ∎

Algorithm 1 Compute Hom poset

Input: A regular CW complex $X$, a gradient vector field $V_{f}$, a source $w$, and a target $z$.
Output: The Hom poset $\operatorname{Hom}(w,z)$ of the discrete flow category.

1. Construct a directed graph $G$ with cells as vertices and an edge $u\to v$ if $v<u$ or if $\\{u,v\\}$ is a regular pair.
2. In $G$, find all paths with non-repeating vertices from $w$ to $z$.
3. For each such path, compute its rank with Theorem 6.12.
4. For $i=1,\dots,(\dim w-\dim z)$, compute the covering relations between paths of rank $i-1$ and paths of rank $i$, using Theorem 6.13.

Note that the only data needed for the CW complex $X$ is the graph of its covering relations. The algorithm outputs only the covering relations of $\operatorname{Hom}(w,z)$, but the full set of partial order relations is easily computed from these. Regarding the complexity of this algorithm, the main computational cost is computing the covering relations. Comparing two paths to determine if one covers the other takes $\mathcal{O}(l)$ time, where $l$ is the length of the longest path. For each path $\gamma$, the algorithm makes one comparison to each path of rank $\rho(\gamma)+1$. Hence, the algorithm makes $\mathcal{O}(n\cdot W)$ comparisons, where $n$ is the number of morphisms in $\operatorname{Hom}(w,z)$, and $W$ is the maximum number of morphisms of a given rank (the maximal “width” of a rank layer in the Hasse diagram). In total, the complexity of the algorithm is $\mathcal{O}(l\cdot n\cdot W)$ (it is easy to verify that steps 1–3 have smaller complexity than this). As $W$ is clearly at most $n$, we could write this more simply as $\mathcal{O}(l\cdot n^{2})$. For comparison, merely listing the sequence representations of all morphisms in $\operatorname{Hom}(w,z)$ takes $\Theta(l\cdot n)$ time (and this is thus a lower bound on the running time of any algorithm that computes $\operatorname{Hom}(w,z)$).

### 6.4. The Hom posets are CW posets

In this section, we prove Theorem A: if $X$ is a simplicial complex, then for any $w,z\in\operatorname{Flo}_{\Sigma}[X]$ with $\operatorname{Hom}(w,z)$ nonempty, the poset $P_{w,z}$, as defined in (3), is a CW poset. (Note that when $\operatorname{Hom}(w,z)$ is empty, then $P_{w,z}$ is just the poset with one element.)

###### Lemma 6.14. Let $P$ and $Q$ be posets with greatest elements $\hat{1}_{P}$ and $\hat{1}_{Q}$, respectively. Suppose that $|P\setminus\\{\hat{1}_{P}\\}|\cong S^{m}$ and $|Q\setminus\\{\hat{1}_{Q}\\}|\cong S^{n}$. Then the realization of the poset $(P\times Q)\setminus\\{(\hat{1}_{P},\hat{1}_{Q})\\}$ is homeomorphic to $S^{m+n+1}$.

###### Proof.
From [16, Proposition 1.9], the realization of $(P\times Q)\setminus\\{(\hat{1}_{P},\hat{1}_{Q})\\}$ is the join of $S^{m}$ and $S^{n}$, which is $S^{m+n+1}$. ∎ ###### Definition 6.15. Let $X$ be a simplicial complex, and let $\sigma\in X$. We define the _coface complex_ as the set $\operatorname{cof}\sigma=\\{\tau\setminus\sigma:\sigma\subsetneq\tau\\}$ In other words, the coface complex of $\sigma$ contains all the proper cofaces of $\sigma$, with $\sigma$ subtracted as a set from the complexes111Note that the coface complex is the same as the _link_ of a simplex, defined in e.g., [15].. ###### Lemma 6.16. Let $X$ be a simplicial complex, and let $\sigma\in X$. The coface complex $\operatorname{cof}\sigma$ is a simplicial complex. ###### Proof. We need to prove that for each $\gamma\in\operatorname{cof}\sigma$, each nonempty subset $\gamma^{\prime}\subseteq\gamma$ is also an element of $\operatorname{cof}\sigma$. Suppose $\gamma\in\operatorname{cof}\sigma$. Then $\gamma=\tau\setminus\sigma$ for some proper coface $\tau$ of $\sigma$. Then, $(\gamma^{\prime}\cup\sigma)\subseteq\tau$, so $\gamma^{\prime}\cup\sigma$ is a simplex in $X$, and $(\gamma^{\prime}\cup\sigma)$ is a proper coface of $\sigma$, so $\gamma^{\prime}=(\gamma^{\prime}\cup\sigma)\setminus\sigma$ is in $\operatorname{cof}\sigma$. ∎ ###### Lemma 6.17. Let $X$ be a simplicial complex, and let $\sigma\in X^{n}$ and $\tau\in X^{m}$ with $\tau\subseteq\sigma$. Then the realization of $\operatorname{Hom}_{\operatorname{Ent}[X]}(\sigma,\tau)\setminus\\{(\sigma>\tau)\\}$ is homeomorphic to $S^{n-m-2}$. ###### Proof. The poset $\operatorname{Hom}_{\operatorname{Ent}[X]}(\sigma,\tau)\setminus\\{(\sigma>\tau)\\}$ contains all sequences $(\sigma>\upsilon_{1}>\dots>\upsilon_{k}>\tau)$, with $k\geq 1$, and the partial order is given by inclusion. This poset is isomorphic to the poset of descending sequences $(\upsilon_{1}>\dots>\upsilon_{k})$ of simplices in $\partial(\sigma\setminus\tau)\subseteq\operatorname{cof}\tau$, i.e., the boundary of $\sigma\setminus\tau$ in the coface complex of $\tau$ (where the ordering is still given by inclusion). This is again isomorphic to the barycentric subdivision $T(\partial(\sigma\setminus\tau))$. Now, $|T(\partial(\sigma\setminus\tau))|\cong|\partial(\sigma\setminus\tau)|\cong|\partial\Delta^{n-m-1}|\cong|S^{n-m-2}|,$ as desired (here we use that $\sigma\setminus\tau$ is a set of cardinality $(n+1)-(m+1)=n-m$, and hence an $(n-m-1)$–simplex). ∎ In the proof of the following lemma, we use the fact that for a poset $P$, $|P^{\operatorname{op}}|\cong|P|$, which is a special case of the fact that $|\mathcal{N}(C^{\operatorname{op}})|\cong|\mathcal{N}(C)|$ for a general category $C$. ###### Lemma 6.18. Let $X$ be a simplicial complex. Let $P=\operatorname{Hom}_{\operatorname{Ent}[X]}(w,z)^{\operatorname{op}}\cup\\{\hat{0}\\}$, where $\hat{0}$ is a least element. Then, for any $g\in P\setminus\\{\hat{0}\\}$, the realization of the open interval $(\hat{0},g)$ is homeomorphic to a sphere. ###### Proof. Let $g=(x_{0},\dots,x_{k})$. Then, $(\hat{0},g]\quad\cong\quad\operatorname{Hom}(x_{0},x_{1})^{\operatorname{op}}\times\operatorname{Hom}(x_{1},x_{2})^{\operatorname{op}}\times\dots\times\operatorname{Hom}(x_{k-1},x_{k})^{\operatorname{op}},$ where the $\operatorname{Hom}(x_{i},x_{i+1})$ are Hom posets in $\operatorname{Ent}[X]$. Now, observe that the atom $(x_{i}>x_{i+1})$ is the greatest element in $\operatorname{Hom}(x_{i},x_{i+1})^{\operatorname{op}}$. 
Furthermore, by 6.17, the realization of $\operatorname{Hom}(x_{i},x_{i+1})^{\operatorname{op}}\setminus\\{(x_{i}>x_{i+1})\\}$ is homeomorphic to a sphere. Thus, by applying 6.14 to the equation above, we get that the realization $|(\hat{0},g)|=|(\hat{0},g]\setminus\\{g\\}|$ is homeomorphic to a sphere. ∎ ###### Theorem 6.19 (Theorem A). In the case that $X$ is a simplicial complex, the poset $P_{w,z}$, as defined in (3), is a CW poset. ###### Proof. We use 6.1. Conditions (1.) and (2.) are clearly fulfilled, so we need only to show that for all $x\in P_{w,z}\setminus\\{\hat{0}\\}$, the realization of the open interval $(\hat{0},x)$ is homeomorphic to a sphere. Let $x$ be represented by $\tau=\left(w=x_{0}\xrightarrow{g_{0}}y_{0}\xleftarrow{f_{0}}x_{1}\xrightarrow{g_{1}}\cdots\ \xleftarrow{f_{k-1}}x_{k}\xrightarrow{g_{k}}y_{k}=z\right).$ Applying Theorem 6.7, we see that $(\hat{0},x]\quad\cong\quad(\hat{0},g_{0}]\times(\hat{0},g_{1}]\times\dots\times(\hat{0},g_{k}].$ By 6.18, the realization of $(0,g_{i})$ is a sphere. Now, as in the proof for 6.18, we apply 6.14 to the equation above and get that $|(\hat{0},x)|$ is homeomorphic to a sphere. ∎ In conclusion, what this result tells us is that in the discrete flow category, the opposite poset of a Hom set is the face poset of a regular CW complex. The nerve of the face poset of a regular CW complex is its barycentric subdivision, and hence has a geometric realization which is homeomorphic to the CW complex. Furthermore, the nerve of the opposite of a poset is isomorphic to the nerve of the poset. Thus, we get a simpler way to describe the geometric realization of the nerve of a Hom set in the discrete flow category: instead of taking the nerve and realizing, we can construct the CW complex of which it is a face poset, which results in a description with fewer cells. To construct this CW complex, we need only compute the rank values and the covering relations, which we can do with Algorithm 1. ###### Example 6.20. We illustrate the observations in the last paragraph with an example. Consider a regular CW decomposition of the filled sphere $D^{3}$, as illustrated in Figure 9. We define the gradient vector field $V_{f}=\\{(w>y),(b>z)\\}$, and let $\Sigma$ be its corresponding Morse system. We compute $\operatorname{Hom}(f,x)$ in the discrete flow category. $y$$x$$w$$z$$t$$b$$f$ Figure 9. A CW decomposition of $D^{3}$. The 0–cells are $x$ and $y$, the 1–cells are $w$ and $z$, the 2–cells are $t$ and $b$ and the single 3–cell is $f$. As we have shown that $\operatorname{Hom}(f,x)$ is a graded poset, we can represent it by a Hasse diagram. The highest rank is $\dim f-\dim x-1=2$. All rank 2 elements are gradient paths on the form $f=x_{0}\xrightarrow{g_{0}}y_{0}\xleftarrow{f_{0}}x_{1}\xrightarrow{g_{1}}\cdots\ \xleftarrow{f_{k-1}}x_{k}\xrightarrow{g_{k}}y_{k}=x,$ where all $g_{i}$ are atoms. There are four of these: * • $f>x$ * • $f>z<b>x$ * • $f>y<w>x$ * • $f>z<b>y<w>x$ All elements of lower rank can be constructed from these by inserting intermediate cells, and all Hasse diagram edges correspond to inserting an intermediate cell in a sequence. For example, inserting $b$ between $f$ and $x$ in $f>x$ gives $f>b>x$, and inserting $b$ between $f$ and $z$ in $f>z<b>x$ gives $f>b>z<b>x=f>b>x$. We now draw the Hasse diagram, which is illustrated in Figure 10. Figure 10. The Hasse diagram for the Hom poset $\operatorname{Hom}(f,x)$ in the discrete flow category. 
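The computation in this example can also be carried out mechanically. The following Python sketch is a minimal, unoptimized rendering of Algorithm 1 (the encoding of the complex by its covering relations, the cell names, and the helper functions are our own choices and not taken from [14]). Applied to the CW decomposition of $D^{3}$ above, it enumerates the morphisms of $\operatorname{Hom}(f,x)$ as cell sequences, computes their ranks via Theorem 6.12, and computes the covering relations via Theorem 6.13, which should reproduce the Hasse diagram of Figure 10.

```python
def all_faces(cover):
    """Transitive closure of the covering relation: cell -> set of all proper faces."""
    faces = {c: set(fs) for c, fs in cover.items()}
    changed = True
    while changed:
        changed = False
        for c in faces:
            new = set().union(*(faces[d] for d in faces[c])) if faces[c] else set()
            if not new <= faces[c]:
                faces[c] |= new
                changed = True
    return faces

def hom_poset(cover, dim, regular_pairs, w, z):
    """Sketch of Algorithm 1 for the Hom poset Hom(w, z) of the discrete flow category."""
    faces = all_faces(cover)
    succ = {c: set(faces[c]) for c in faces}        # step 1: edge u -> v whenever v < u ...
    for lower, upper in regular_pairs:              # ... or {u, v} is a regular pair
        succ[lower].add(upper)
    paths = []
    def dfs(path):                                  # step 2: simple paths from w to z
        if path[-1] == z:
            paths.append(tuple(path))
            return
        for v in succ[path[-1]]:
            if v not in path:
                dfs(path + [v])
    dfs([w])
    def rank(p):                                    # step 3: the rank of Theorem 6.12
        return sum(dim[a] - dim[b] - 1 for a, b in zip(p, p[1:]) if b in faces[a])
    def one_apart(p, q):                            # q equals p with one element removed
        return any(p[:i] + p[i + 1:] == q for i in range(len(p)))
    covers = [(p, q) for p in paths for q in paths  # step 4: covering relations (Theorem 6.13)
              if rank(p) == rank(q) + 1
              and (one_apart(p, q) or one_apart(q, p))]
    return paths, rank, covers

# The CW decomposition of D^3 from Figure 9 and the gradient vector field of Example 6.20.
cover = {"f": {"t", "b"}, "t": {"w", "z"}, "b": {"w", "z"},
         "w": {"x", "y"}, "z": {"x", "y"}, "x": set(), "y": set()}
dim = {"f": 3, "t": 2, "b": 2, "w": 1, "z": 1, "x": 0, "y": 0}
morphisms, rank, covers = hom_poset(cover, dim, [("y", "w"), ("z", "b")], "f", "x")
# Among the morphisms are the four rank-2 elements listed above, e.g. ("f", "x")
# and ("f", "z", "b", "y", "w", "x"); covers lists the edges of the Hasse diagram.
```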
The corresponding regular CW complex is illustrated in Figure 11, and it’s easy to see that this is a contractible space. Figure 11. The face poset of this regular CW complex is $\operatorname{Hom}(f,x)$. ###### Example 6.21. We illustrate that Theorem 6.19 may fail to hold when $X$ is not a simplicial complex. In fact, 6.18, which may be viewed as a special case of Theorem 6.19 where the Morse system is empty (so that $\operatorname{Flo}_{\Sigma}[X]=\operatorname{Ent}[X]$), fails to hold in our example. First, we describe how the suspension of a simplicial complex is also a simplicial complex. Let $X$ be a simplicial complex. Its cone $CX$ can be viewed as a simplicial complex as follows: $CX=X\cup\\{\\{0\\}\\}\cup\\{\sigma\cup\\{0\\}:\sigma\in X\\}.$ The suspension $\Sigma X$, which is just the union of two cones $CX$ along $X$, can then be viewed as the simplicial complex: $\Sigma X=X\cup\\{\\{0\\}\\}\cup\\{\sigma\cup\\{0\\}:\sigma\in X\\}\cup\\{\\{1\\}\\}\cup\\{\sigma\cup\\{1\\}:\sigma\in X\\}.$ (7) Our counterexample uses the following fact: there exists a simplicial complex $Y$ such that the suspension $\Sigma Y$ is homeomorphic to a sphere, but $Y$ is not homeomorphic to $S^{k}$ for any $k$. For this, we will consider the Poincaré sphere, which is a simplicial complex and a homology 3–sphere, but not homeomorphic to a sphere. Its suspension is not homeomorphic to a sphere, but its double suspension is (for more details, see [19, Example 3.2.11]). Hence, letting $Y$ be the suspension of the Poincaré sphere, we get our desired properties. As $\Sigma Y$ is a simplicial complex and homeomorphic to $S^{5}$, it is also a regular CW complex of dimension 5. Hence we can construct a new regular CW complex $Z$ by attaching a 6–cell $e$ along $\Sigma Y$. The CW complex $Z$ is illustrated in Figure 12. Figure 12. An illustration of the CW complex $Z$. Now, consider the Hom poset $\operatorname{Hom}_{\operatorname{Ent}[Z]}(e,0)\setminus\\{(e>0)\\},$ (8) where 0 is a 0–simplex in $\Sigma Y$ as per the notation in (7). This poset consists of all descending chains $(e>\tau_{1}>\dots>\tau_{k}>0)$, with $k\geq 1$, ordered by inclusion. The set of $\tau_{i}$ satisfying $e>\tau_{i}>0$ is precisely $\\{\\{0\\}\cup\sigma:\sigma\in Y\\}$. Hence, the poset in (8) is isomorphic to the barycentric subdivision of $Y$. Thus, $|\operatorname{Hom}_{\operatorname{Ent}[Z]}(e,0)\setminus\\{(e>0)\\}|\cong|T(Y)|\cong|Y|,$ which is not homeomorphic to a sphere. Thus, the interval $(\hat{0},(e>0))\in P_{e,0}$ (which is just the opposite of the poset in (8)) is not a sphere, and hence $P_{e,0}$ is not a CW poset. ## 7\. Simplicial collapse on simplicial sets In this section we define _regular_ simplicial sets, and generalize the concept of _simplicial collapse_ to these simplicial sets. We further prove that the nerves of certain categories are regular, and apply simplicial collapse to a subclass of these categories. ### 7.1. Collapses on simplicial complexes In Section 3.2, we defined _free faces_ and _collapses_. We now provide a more general definition of free faces that allows for codimension greater than 1. ###### Definition 7.1. Let $X$ be a simplicial complex and let $\tau$ and $\sigma$ be simplices in $X$ such that 1. (1) $\tau\subsetneq\sigma$, and 2. (2) all cofaces of $\tau$ are faces of $\sigma$. Then $\tau$ is called a _free face_ and $\\{\tau,\sigma\\}$ is a _free pair_.
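Definition 7.1 can be checked mechanically. Below is a minimal sketch (our own illustration, assuming the complex is stored as a set of frozensets closed under taking nonempty subsets); the toy input is the free pair $\\{\\{a\\},\\{a,b,c\\}\\}$ that appears in Figure 13 below.

```python
def is_free_pair(X, tau, sigma):
    """Definition 7.1: tau is a proper face of sigma, and every proper coface of tau is a face of sigma."""
    if not (tau in X and sigma in X and tau < sigma):   # condition (1)
        return False
    return all(g <= sigma for g in X if tau < g)        # condition (2)

# Toy usage: the full 2-simplex on {a, b, c}; {a} is a free face of {a, b, c}.
X = {frozenset(s) for s in ({'a'}, {'b'}, {'c'},
                            {'a', 'b'}, {'a', 'c'}, {'b', 'c'},
                            {'a', 'b', 'c'})}
assert is_free_pair(X, frozenset({'a'}), frozenset({'a', 'b', 'c'}))
# The collapse of Theorem 7.2 below removes every simplex containing {a},
# leaving {b}, {c} and {b, c}.
Y = {g for g in X if not frozenset({'a'}) <= g}
```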
Recall that in our old definition of free pairs, removing a free pair from a simplicial complex $X$ constitutes an elementary collapse. Recall that we write $X\searrow Y$ if $Y$ can be reached through a series of elementary collapses on $X$. The following theorem justifies our generalization of the definition of free pairs. ###### Theorem 7.2. [9, Proposition 9.18] Let $X$ be a simplicial complex and let $\\{\tau,\sigma\\}$ be a free pair according to 7.1. Let $Y=X\setminus[\tau,\sigma]$ be the simplicial complex where all simplices $\gamma$ satisfying $\tau\subseteq\gamma\subseteq\sigma$ have been removed from $X$. Then $Y$ is a simplicial complex, and $X\searrow Y$. Note that in the previous theorem, we might as well have written $\tau\subseteq\gamma$ instead of $\tau\subseteq\gamma\subseteq\sigma$, as all cofaces of $\tau$ are faces of $\sigma$. An example of a simplicial collapse of the kind described in Theorem 7.2 is given in Figure 13. Figure 13. A simplicial collapse given by the free pair $\\{\\{a\\},\\{a,b,c\\}\\}$. ### 7.2. Collapses on simplicial sets We generalize free pairs and collapses to simplicial sets. We will restrict ourselves to a certain class of simplicial sets, which we will name _regular_ simplicial sets. ###### Definition 7.3. Let $X$ be a simplicial set. Then $X$ is called _regular_ if for all $n\in\mathbb{N}$ (including 0) and all nondegenerate $x\in X_{n}$, the map $\operatorname{Hom}_{\Delta}([0],[n])\to X_{0},\quad\theta\mapsto X(\theta)(x)$ is injective. This definition says that for a nondegenerate $n$–simplex, all of the $(n+1)$ 0–dimensional faces are different. This also implies that for each nondegenerate $n$–simplex $x$ and all $m<n$, all of the $m$–dimensional faces of $x$ are different222More precisely, the map $\operatorname{Hom}_{\Delta}([m],[n])\to X_{m}$ that maps $\theta$ to $X(\theta)(x)$ is injective., because no two $m$–dimensional faces have the same 0–dimensional faces. In other words, for $x\in X_{n}$ nondegenerate, the simplicial set consisting of $x$, all the faces of $x$, and all their degeneracies is isomorphic to $\Delta^{n}$. This last part is the motivation behind the definition. Just as for simplicial complexes, the sub-simplicial set generated by a nondegenerate $n$–simplex $x$ looks like $\Delta^{n}$, which will allow us to define free pairs and collapses, just as for simplicial complexes. ###### Example 7.4. An example of a regular simplicial set is given in Figure 14. Observe how there are two 1–simplices with $b$ and $c$ as faces, something that is not possible for simplicial complexes. However, just as for simplicial complexes, all 1–simplices lie between two different 0–simplices, and the 2–simplex $A$ looks like a 2–simplex in a simplicial complex. To make the last point more precise, the simplicial set consisting of $A$ and all its faces and degeneracies is isomorphic to $\Delta^{2}$. Figure 14. A regular simplicial set (degeneracies omitted). ###### Lemma 7.5. The realization of a regular simplicial set is a regular CW complex. ###### Proof. Let $X$ be a regular simplicial set. Recall that a CW complex is regular if all attaching maps are homeomorphisms onto their image. From [11, Theorem 14.1], we know that $|X|$ is a CW complex with one $n$–cell for each nondegenerate simplex.
Furthermore, for an $n$–cell corresponding to a nondegenerate $n$–simplex $x$, the attaching map is given by sending $|\partial\Delta^{n}|$ to $|\partial x|$ (in the canonical way). Now, from the observations in the above paragraph, we know that this map is a homeomorphism, so we are finished. ∎ We introduce some new notation. We say that $\tau\in X_{m}$ is a _face_ of $\sigma\in X_{n}$ if $\tau=X(\theta)(\sigma)$ for some $\theta\in\operatorname{Hom}_{\Delta}([m],[n])$. Note that in this definition, degeneracies are also considered faces, but the nondegenerate faces are just as for simplicial complexes. We say that $\sigma$ is a coface of $\tau$ when $\tau$ is a face of $\sigma$. We write $\tau\subseteq\sigma$ when $\tau$ is a face of $\sigma$, and we write $\tau\subsetneq\sigma$ when $\tau$ is a _proper_ face of $\sigma$, i.e., when $\tau\subseteq\sigma$ and $\tau\neq\sigma$. It’s clear that this relation is transitive, as if $\tau\subseteq\tau^{\prime}$ and $\tau^{\prime}\subseteq\tau^{\prime\prime}$, then $\tau=X(\theta)(\tau^{\prime})$ and $\tau^{\prime}=X(\theta^{\prime})(\tau^{\prime\prime})$ for some $\theta,\theta^{\prime}$, so $\tau=X(\theta^{\prime}\circ\theta)(\tau^{\prime\prime})$. It’s not, however, antisymmetric, as all simplices are faces of all their degeneracies. ###### Example 7.6. In the simplicial set in Figure 14, the faces of $A$ are * • $A$ itself, * • the degeneracies of $A$, * • the nondegenerate simplices $a$, $b$, $c$, $\alpha$, $\beta$ and $\delta$, * • the degeneracies of the above-mentioned simplices. The simplex $\gamma$, on the other hand, is _not_ a face of $A$. ###### Definition 7.7. Let $X$ be a regular simplicial set, and let $\tau$ and $\sigma$ be nondegenerate simplices such that 1. (1) $\tau\subsetneq\sigma$, and 2. (2) all cofaces of $\tau$ are faces of $\sigma$. Then $\tau$ is called a _free face_ and $\\{\tau,\sigma\\}$ is a _free pair_. Note that the fact that $\tau$ and $\sigma$ are nondegenerate and that $\tau\subsetneq\sigma$ implies that $\dim\tau<\dim\sigma$. ###### Theorem 7.8. Let $X$ be a regular simplicial set and let $\\{\tau,\sigma\\}$ be a free pair according to 7.7. Let $Y=X\setminus[\tau,\sigma]$ be the simplicial set where all simplices $\gamma$ satisfying $\tau\subseteq\gamma\subseteq\sigma$ have been removed from $X$. Then $Y$ is a simplicial set, and the realization $|Y|$ is a deformation retract of $|X|$. ###### Proof. We first prove that $Y$ is a simplicial set. We must show that for all $x\in Y$, all faces (including degeneracies) of $x$ are contained in $Y$. Let $x$ be an $n$–simplex in $Y$, and suppose $y$ is a face of $x$ not in $Y$. Then $y$ is a coface of $\tau$. But then $x$ is also a coface of $\tau$ (and hence also a face of $\sigma$), so $x\notin Y$, which is a contradiction. This proves that $Y$ is a well-defined simplicial set. We now show that $|Y|$ is a deformation retract of $|X|$. From 7.5, we know that $|X|$ is a regular CW complex, and so is $|Y|$. Let $e_{\tau}$ and $e_{\sigma}$ denote the cells in $|X|$ corresponding to $\tau$ and $\sigma$, respectively. Then $\overline{e_{\sigma}}$ (i.e., the closure of $e_{\sigma}$) is homeomorphic to $|\Delta^{n}|$ (for some $n$), and under this homeomorphism, $e_{\tau}$ maps to some face of $|\Delta^{n}|$. Thus, $\overline{e_{\sigma}}$ deformation retracts to $\left(\overline{e_{\sigma}}\setminus\\{e^{\prime}:e_{\tau}\subseteq e^{\prime}\\}\right)$ (i.e., $\overline{e_{\sigma}}$ with all the cofaces of $e_{\tau}$ removed), just as for simplicial complexes.
This deformation retract then extends to a deformation retract from $|X|$ to $|Y|$ by setting it to be identity everywhere else. ∎ When $X$ deformation retracts to $Y$ in this way, we call it a _collapse_, and say that $X$ _collapses_ to $Y$. ###### Example 7.9. In Figure 15, $\\{a,A\\}$ is a free pair in the regular simplicial set on the left hand side. The right hand side shows the simplicial set after the corresponding collapse. Observe how the collapse gives a deformation retract on the closure of the 2–cell corresponding to $A$, which extends to a deformation retract on the entire realization by defining it to be identity everywhere else. Figure 15. A collapse of a regular simplicial set. ### 7.3. Unique factorization categories We now define a certain class of categories, which we will call _unique factorization categories_. In these categories, all non-identity morphisms can be written as a composition of one or more _indecomposable_ morphisms in a unique way. We shall see that these categories have nerves that are regular simplicial sets with many free faces. ###### Definition 7.10. Let $C$ be a category and $f\colon A\to B$ a morphism in $C$. Then $f$ is called _indecomposable_ if it cannot be written as a composite of non-identity morphisms, i.e., if there exist no morphisms $h\neq\operatorname{id}_{B}$ and $g\neq\operatorname{id}_{A}$ such that $f=h\circ g$. We say that a category is _finite_ if both its set of objects and its set of morphisms are finite. ###### Definition 7.11. A _finite directed category_ is a finite category such that: * • For all objects $a$, $\operatorname{Hom}(a,a)$ contains only the identity $\operatorname{id}_{a}$. * • For all objects $a\neq b$, if $\operatorname{Hom}(a,b)$ is nonempty, then $\operatorname{Hom}(b,a)=\emptyset$. Any finite poset (viewed as a category) is an example of a finite directed category. It’s clear that for a finite directed category, each non-identity morphism $f$ can be written as a composition $f=f_{k}\circ\cdots\circ f_{1}$ of indecomposable morphisms. However, it’s possible that there are several different ways to do this. For example, consider the face poset of the simplicial complex $\Delta^{2}$. Here, the morphism $(\\{0\\}\leq\\{0,1,2\\})$ has two decompositions: $(\\{0,1\\}\leq\\{0,1,2\\})\circ(\\{0\\}\leq\\{0,1\\})$ and $(\\{0,2\\}\leq\\{0,1,2\\})\circ(\\{0\\}\leq\\{0,2\\})$. ###### Proposition 7.12. Let $C$ be a finite directed category. Then the nerve of $C$ is a regular simplicial set. ###### Proof. A nondegenerate $n$–simplex in $\mathcal{N}(C)$ is a set of composable non-identity morphisms $\\{f_{i}\colon x_{i}\to x_{i+1}:0\leq i<n\\}$. Suppose $x_{i}=x_{j}$ for some $i<j$. If $j=i+1$, then $f_{i}$ is a non-identity morphism in $\operatorname{Hom}(x_{i},x_{i})$, which contradicts 7.11. If $j>i+1$, then both $\operatorname{Hom}(x_{i},x_{i+1})$ and $\operatorname{Hom}(x_{i+1},x_{i})$ are nonempty, which also contradicts 7.11. Hence, any nondegenerate $n$–simplex in $\mathcal{N}(C)$ has $n+1$ different 0–dimensional faces, and so $\mathcal{N}(C)$ is regular. ∎ ###### Definition 7.13. A _unique factorization category_ is a finite directed category $C$ such that all non-identity morphisms $f$ in $C$ can be written as a composition of non-identity indecomposable morphisms in a unique way. ###### Example 7.14. Let $C$ be the entrance path category of a finite CW complex $X$ considered as an ordinary category, i.e., with the poset structure of the Hom sets removed.
Then $C$ is a unique factorization category. To see this, observe that each non-identity morphism goes to a cell of strictly lower dimension, so $C$ is finite directed. Furthermore, an entrance path $(x_{0}>\dots>x_{k})$ factors uniquely as $(x_{k}>x_{k-1})\circ\dots\circ(x_{0}>x_{1})$. Similarly, we can show that a discrete flow category of a finite CW complex, with the poset structure of the Hom sets removed, is a unique factorization category. The factorization of a gradient path is given by subsequences between critical cells. As each non-identity morphism goes from a critical cell to a critical cell of lower dimension, this gives a well-defined factorization into indecomposable morphisms. Now, to see that this factorization is unique, suppose we have $f,f^{\prime}\colon a\to b$ and $g,g^{\prime}\colon b\to c$ with $g\circ f=g^{\prime}\circ f^{\prime}$. Then the sequence representations of $g\circ f$ and $g^{\prime}\circ f^{\prime}$ are equal, and as $b$ is critical, both sequence representations contain $b$. Now, the subsequences starting at $a$ and ending at $b$ have to be equal, and these are precisely the sequence representations of $f$ and $f^{\prime}$, so $f=f^{\prime}$. Similarly, $g=g^{\prime}$. This shows that factorizations are unique. To give an example, in the discrete flow category in 6.20, the morphism $[f>t>z<b>x]$ has the unique factorization $[t>z<b>x]\circ[f>t]$. We now prove Theorem C. ###### Theorem 7.15 (Theorem C). Let $C$ be a unique factorization category. Then the nerve of $C$ deformation retracts to a simplicial set whose $n$–simplices are all degenerate for $n>1$. ###### Proof. The main idea of the proof is to collapse $\mathcal{N}(C)$ by removing 1–simplices represented by higher compositions. We collapse $\mathcal{N}(C)$ inductively. Let $k$ be the largest integer such that there is a 1–simplex in $\mathcal{N}(C)$ represented by a morphism $f$ that decomposes into $k$ indecomposable morphisms, i.e., $f=f_{k}\circ\dots\circ f_{1}$. We claim that $f$ is a free face, and $f$ is in a free pair with the $k$–simplex $\\{f_{1},\dots,f_{k}\\}$. To see why this is true, let $\\{g_{1},\dots,g_{m}\\}$ be a coface of $f$. Then $g=g_{m}\circ\dots\circ g_{1}$ is a 1–simplex that factors through $f$, and thus $g=f$, as otherwise, $g$ would factor into more than $k$ indecomposable morphisms. It follows that each $g_{i}$ is a composite $g_{i}=f_{j_{i+1}-1}\circ\dots\circ f_{j_{i}}$, and hence, $\\{g_{1},\dots,g_{m}\\}$ is a face of $\\{f_{1},\dots,f_{k}\\}$. Now, we can remove any $k$–composable 1–simplex $f$ from the simplicial set through a series of collapses. After this, all remaining 1–simplices decompose into $k-1$ or fewer indecomposable factors. We can repeat the process with $k-1$, and so on, until all remaining 1–simplices are indecomposable morphisms. Now, all remaining nondegenerate simplices are of dimension 0 or 1, as any higher simplex $\\{f_{1},\dots,f_{k}\\}$ has the decomposable morphism $f_{k}\circ\dots\circ f_{1}$ as a face. ∎ By combining this theorem with Theorem 2.3, we get the following corollary. ###### Corollary 7.16. Let $C$ be a unique factorization category. Then $H_{n}(\mathcal{N}(C))\cong 0$ for $n\geq 2$. ## 8\. Spectral sequences We define a spectral sequence as a collection of bigraded differential abelian groups $\\{E^{r}_{*,*},\partial^{r}:r=0,1,\dots\\}$, where the bidegree of $\partial^{r}$ is $(-r,r-1)$.
For a double complex $C$ we denote its horizontal differentials with $\partial_{h}\colon C_{*,*}\to C_{*-1,*}$ and its vertical differentials with $\partial_{v}\colon C_{*,*}\to C_{*,*-1}$. Furthermore, we let $\operatorname{Tot}C$ denote the total complex of $C$, with the convention that its differentials are given by $\partial^{\operatorname{Tot}}_{n}=\partial_{v}+(-1)^{\text{vertical degree}}\partial_{h}$. We will use the two spectral sequences in the following theorem, which is the homological version of Theorem 2.15 in [12] (see also [21, pp. 141–143] for the homological formulation). ###### Theorem 8.1. Let $C$ be a double complex such that $C_{p,q}=0$ whenever $p<0$ or $q<0$. Then there are two spectral sequences ${}_{I}E$ and ${}_{II}E$ with ${}_{I}E^{2}_{p,q}\cong H_{I}H_{II}(C)_{p,q},\quad\text{and}\quad_{II}E^{2}_{p,q}\cong H_{II}H_{I}(C)_{p,q},$ that both converge to $H_{*}(\operatorname{Tot}C)$. ### 8.1. The spectral sequence of a bisimplicial set To a bisimplicial set $X$, we can associate a double complex $FX$ with $(FX)_{p,q}=\mathbb{Z}X_{p,q}$. The horizontal and vertical differentials are induced by the horizontal and vertical face maps as for the chain complex of a simplicial set, i.e., $\displaystyle\partial_{h}$ $\displaystyle=\sum_{i}(-1)^{i}(d_{i},\operatorname{id}),$ $\displaystyle\partial_{v}$ $\displaystyle=\sum_{i}(-1)^{i}(\operatorname{id},d_{i}).$ A simple calculation shows that $\partial_{h}^{2}=0$, $\partial_{v}^{2}=0$ and $\partial_{h}\partial_{v}=\partial_{v}\partial_{h}$. The following theorem of Dold and Puppe tells us that the homology of $\operatorname{Tot}(FX)$ equals the homology of $\operatorname{diag}X$ [5, Theorem 2.9] (see also [8, Theorem 2.5]). Hence, we can compute the homology of $\operatorname{diag}X$ with either of the spectral sequences in Theorem 8.1. ###### Theorem 8.2 (Dold–Puppe). Let $X$ be a bisimplicial set and $FX$ its associated double complex. Then $H_{*}\operatorname{Tot}(FX)\cong H_{*}(\operatorname{diag}X)$ As a bisimplicial set has an infinite amount of degenerate simplices, computing the spectral sequence associated to $\operatorname{Tot}(FX)$ can be impractical. However, we will show that, just as for the chain complex of a simplicial set, we can in fact ignore the degenerate simplices (both the vertical and horizontal ones). For the proof, we will need the following lemma. ###### Lemma 8.3. Let $C_{*}$ be a chain complex, and let $D_{*}$ and $D^{\prime}_{*}$ be subcomplexes of $C_{*}$. Suppose $\displaystyle H_{n}(D)$ $\displaystyle\cong 0,$ $\displaystyle H_{n}(D^{\prime})$ $\displaystyle\cong 0,\text{ and }$ $\displaystyle H_{n}(D\cap D^{\prime})$ $\displaystyle\cong 0$ for all $n\in\mathbb{Z}$. Then, $H_{n}(D+D^{\prime})\cong 0$ for all $n\in\mathbb{Z}$. Here, $D_{*}\cap D^{\prime}_{*}$ and $D_{*}+D^{\prime}_{*}$ are the subcomplexes of $C_{*}$ defined by 333Note the difference between $D_{*}+D^{\prime}_{*}$ and the direct sum $D_{*}\oplus D^{\prime}_{*}$. $\displaystyle(D\cap D^{\prime})_{n}$ $\displaystyle=D_{n}\cap D^{\prime}_{n},\text{ and}$ $\displaystyle(D+D^{\prime})_{n}$ $\displaystyle=D_{n}+D^{\prime}_{n}=\\{x+y:x\in D_{n},y\in D^{\prime}_{n}\\}.$ ###### Proof. There is a short exact sequence of chain complexes $0\rightarrow D_{*}\cap D^{\prime}_{*}\rightarrow D_{*}\oplus D^{\prime}_{*}\rightarrow D_{*}+D^{\prime}_{*}\rightarrow 0,$ where the first map is $x\mapsto(x,-x)$, and the second map is $(x,y)\mapsto x+y$. The induced long exact sequence in homology then proves the lemma. 
∎ Now, let $D^{h}_{p,q}$ be the set of horizontal degeneracies in $X_{p,q}$, i.e., the simplices of the form $(s_{i},\operatorname{id})x$ for some $x\in X_{p-1,q}$ and some $i$. Likewise, let $D^{v}_{p,q}$ be the set of vertical degeneracies in $X_{p,q}$. Then $\mathbb{Z}D^{h}_{p,q}$ and $\mathbb{Z}D^{v}_{p,q}$ are subgroups of $(FX)_{p,q}$. Now, as $D^{h}_{*,q}$ are just the degeneracies in the simplicial set $X_{*,q}$, we have that $\partial_{h}(D^{h}_{p,q})\subseteq D^{h}_{p-1,q}$ (as stated in Section 2.2). Furthermore, for $(s_{i},\operatorname{id})x\in D^{h}_{p,q}$, $\partial_{v}(s_{i},\operatorname{id})x=\left(\sum_{j}(-1)^{j}(\operatorname{id},d_{j})\right)(s_{i},\operatorname{id})x=\sum_{j}(-1)^{j}(s_{i},d_{j})x=(s_{i},\operatorname{id})\left(\sum_{j}(-1)^{j}(\operatorname{id},d_{j})\right)x,$ (9) so $\partial_{v}(D^{h}_{p,q})\subseteq D^{h}_{p,q-1}$. Hence, $FD^{h}$ given by $(FD^{h})_{p,q}=\mathbb{Z}D^{h}_{p,q}$ is a well-defined sub-double complex of $FX$, and we can form the quotient double complex $FX/FD^{h}$. Likewise, we have well-defined double complexes $FD^{v}$ and $FX/FD^{v}$. We then also have a sub-double complex $F(D^{h}\cup D^{v})$, again given by $F(D^{h}\cup D^{v})_{p,q}=\mathbb{Z}(D^{h}_{p,q}\cup D^{v}_{p,q})$, and we can form the quotient $FX/F(D^{h}\cup D^{v})$. ###### Proposition 8.4. Let $X$ be a bisimplicial set. Then $H_{*}\operatorname{Tot}(FX)\cong H_{*}\operatorname{Tot}\left(FX/F(D^{h}\cup D^{v})\right).$ ###### Proof. From 2.4, we get that $H_{I}(FD^{h})\cong 0$ everywhere. Theorem 8.1 then gives us that $H_{*}(\operatorname{Tot}FD^{h})\cong 0$. By the same argument for $D^{v}$, we get that $H_{*}(\operatorname{Tot}FD^{v})\cong 0$. Now, the degenerate simplices in the diagonal $\operatorname{diag}X$ are precisely the elements of $D^{h}\cap D^{v}$. Applying 2.4 again, we get that $H_{*}(\operatorname{diag}(D^{h}\cap D^{v}))=0$, and applying Theorem 8.2, we get that $H_{*}(\operatorname{Tot}F(D^{h}\cap D^{v}))\cong 0$. Note now that $\operatorname{Tot}F(D^{h}\cup D^{v})=\operatorname{Tot}(FD^{h}+FD^{v})=(\operatorname{Tot}FD^{h})+(\operatorname{Tot}FD^{v})$ and that $\operatorname{Tot}F(D^{h}\cap D^{v})=(\operatorname{Tot}FD^{h})\cap(\operatorname{Tot}FD^{v})$. It now follows from 8.3 that $H_{*}(\operatorname{Tot}F(D^{h}\cup D^{v}))\cong 0$. Hence, $H_{*}(\operatorname{Tot}FX/F(D^{h}\cup D^{v}))\cong H_{*}\left((\operatorname{Tot}FX)/(\operatorname{Tot}F(D^{h}\cup D^{v}))\right)\cong H_{*}(\operatorname{Tot}FX)$. ∎ ### 8.2. The spectral sequence of the discrete flow category From the discrete flow category $\operatorname{Flo}_{\Sigma}[X]$, we get a bisimplicial set $S:=\operatorname{N\overline{N}}\operatorname{Flo}_{\Sigma}[X]$, as described in Section 4.2. As the realization of the diagonal, $|\operatorname{diag}S|$, is (homotopy equivalent to) the classifying space of $\operatorname{Flo}_{\Sigma}[X]$, we can compute the homology of this classifying space through simplicial homology on $\operatorname{diag}S$. This is again equal to the homology of the total complex, $\operatorname{Tot}FS$, by Theorem 8.2. Finally, we can compute the homology of $\operatorname{Tot}FS$ with one of the two spectral sequences in Theorem 8.1. We will use the spectral sequence ${}_{I}E$, i.e., we will take vertical homology first. As we shall show, this spectral sequence collapses on page 2 (Theorem B in the introduction).
Recall that an element of $S_{p,q}$ consists of $q$ horizontally composable sets of $p$ vertically composable 2–morphisms. For p-categories, this is a set $\\{f_{0,j}\Rightarrow\dots\Rightarrow f_{p,j}:f_{i,j}\colon x_{j}\to x_{j+1},0\leq j<q\\}$. The face and degeneracy maps, both in the vertical and in the horizontal direction, are as for the ordinary nerve. First, we compute some examples. ###### Example 8.5. We consider the discrete flow category of a discrete Morse function on $S^{2}$, as described in [14, Section 6.1]. Let $S$ be the double nerve of the discrete flow category. By 8.4, we need only consider the elements of $S$ that are not in the image of any degeneracy map (either vertical or horizontal) when constructing the spectral sequence. Figure 16. The Hom poset $\operatorname{Hom}(t,w)$. As the discrete flow category has only two objects, $t$ and $w$, there are no nondegenerate horizontal compositions, and thus $E^{0}_{0,q}=0$ for $q\geq 2$. In the bottom row, we have $E^{0}_{0,0}=\mathbb{Z}^{2}$, corresponding to the two objects, and $E^{0}_{p,0}=0$ for $p\geq 1$. Furthermore, $\operatorname{Hom}(t,w)$ (illustrated in Figure 16) has 8 nondegenerate 1-morphisms, 8 nondegenerate 2-morphisms (i.e., partial orders) and no nondegenerate compositions of morphisms, so $E^{0}_{0,1}=E^{0}_{1,1}=\mathbb{Z}^{8}$ and $E^{0}_{p,1}=0$ for $p\geq 2$. [Diagram: Page 0 has $\mathbb{Z}^{2}$ at $(0,0)$ and $\mathbb{Z}^{8}$ at $(0,1)$ and $(1,1)$; Page 1 has $\mathbb{Z}$ at $(0,0)$, $\mathbb{Z}^{7}$ at $(0,1)$ and $\mathbb{Z}^{8}$ at $(1,1)$; Page 2 has $\mathbb{Z}$ at $(0,0)$ and $(1,1)$.] Figure 17. The first three pages of the spectral sequence for the discrete flow category of a discrete Morse function on $S^{2}$. The 0th page, $E^{0}$, is illustrated in Figure 17, together with the two succeeding pages. To compute $E^{1}$, we need only find the differential $\partial^{0}_{0,1}\colon E^{0}_{0,1}\to E^{0}_{0,0}$. This differential takes $f_{i}\colon t\to w$ to $(w-t)$, and hence its image is $\mathbb{Z}(1,-1)\cong\mathbb{Z}$. This gives us $E^{1}$, as illustrated. To compute $E^{2}$, we need the differential $\partial^{1}_{1,1}\colon E^{1}_{1,1}\to E^{1}_{0,1}$. Observe that $E^{1}_{0,1}$ is generated by $[f_{i}-f_{0}],i\in\\{1,\dots,7\\}$. From this, it’s clear that $\partial^{1}_{1,1}$ is surjective (for example, $[f_{3}-f_{0}]$ is $\partial^{1}\left[(f_{2}\Rightarrow f_{3})-(f_{2}\Rightarrow f_{1})+(f_{0}\Rightarrow f_{1})\right]$). Hence, we get $E^{2}$ as illustrated. The resulting homology of the total complex, and hence $S^{2}$, is then as follows. $H_{n}(S^{2})=\begin{cases}\mathbb{Z},&\quad n=0,2,\\\ 0,&\quad\text{ otherwise.}\\\ \end{cases}$ ###### Example 8.6. We compute a slightly more complicated example, the discrete flow category of the discrete Morse function on $D^{3}$ from 6.20. There are three critical cells: $f$, $t$ and $x$. We computed the Hom poset $\operatorname{Hom}(f,x)$ in 6.20. The Hom poset $\operatorname{Hom}(t,x)$ is as for the $S^{2}$ example, i.e., the one illustrated in Figure 16. Finally, $\operatorname{Hom}(f,t)$ consists of just one morphism, $(f>t)$. [Diagram: Page 0 has $\mathbb{Z}^{3}$ at $(0,0)$, $\mathbb{Z}^{30}$ at $(0,1)$, $\mathbb{Z}^{8}$ at $(0,2)$, $\mathbb{Z}^{60}$ at $(1,1)$, $\mathbb{Z}^{8}$ at $(1,2)$ and $\mathbb{Z}^{32}$ at $(2,1)$; Page 1 has $\mathbb{Z}$ at $(0,0)$, $\mathbb{Z}^{20}$ at $(0,1)$, $\mathbb{Z}^{52}$ at $(1,1)$ and $\mathbb{Z}^{32}$ at $(2,1)$; Page 2 has $\mathbb{Z}$ at $(0,0)$.] Figure 18. The first three pages of the spectral sequence for the discrete flow category of a discrete Morse function on $D^{3}$.
We count the nondegenerate simplices to get $E^{0}$. In the bottom row, as in the previous example, we get $E^{0}_{p,0}=0$ for $p\geq 1$, and $E^{0}_{0,0}=\mathbb{Z}^{3}$, one copy of $\mathbb{Z}$ for each of the critical cells. There are 30 non-identity morphisms, so $E^{0}_{0,1}=\mathbb{Z}^{30}$. There are 60 non-identity partial order relations, so $E^{0}_{1,1}=\mathbb{Z}^{60}$. The only Hom poset with composable partial orders is $\operatorname{Hom}(f,x)$, and there are 32 possible nondegenerate pairs of composable partial orders, so $E^{0}_{2,1}=\mathbb{Z}^{32}$. Finally, there are no nondegenerate triples of composable partial orders, so $E^{0}_{p,1}=0$ for $p\geq 3$. The only non-identity (horizontally) composable morphisms are $(f>t)$ composed with one of the eight morphisms in $\operatorname{Hom}(t,x)$, so $E^{0}_{0,2}=\mathbb{Z}^{8}$. Similarly, the only nondegenerate horizontally composable partial orders are $\left((f>t)\Rightarrow(f>t)\right)$ composed with one of the eight non-identity partial orders in $\operatorname{Hom}(t,x)$444Even though the partial order $\left((f>t)\Rightarrow(f>t)\right)$ is degenerate, its horizontal composition with a nondegenerate partial order is not.. Thus, $E^{0}_{1,2}=\mathbb{Z}^{8}$. Now, all elements in $S_{2,2}$ include either a vertical identity (on $(f>t)$) or a horizontal identity (on $f$, $t$ or $x$), and are thus degenerate, so $E^{0}_{2,2}=0$, and the same holds for $E^{0}_{p,2}$ for $p\geq 3$. Finally, any horizontal composition of 3 or more morphisms must include an identity (as we only have 3 critical cells), so $E^{0}_{p,q}=0$ for $q\geq 3$. We can now draw the 0th page of the spectral sequence. The first three pages are shown in Figure 18. To get $E^{1}$ from $E^{0}$, we must inspect the three differentials $\partial^{0}_{0,1}\colon E^{0}_{0,1}\to E^{0}_{0,0}$, $\partial^{0}_{0,2}\colon E^{0}_{0,2}\to E^{0}_{0,1}$, and $\partial^{0}_{1,2}\colon E^{0}_{1,2}\to E^{0}_{1,1}$. The image of $\partial^{0}_{0,1}$ is the span of $\\{f-t,f-x,t-x\\}$, so its cokernel is $\mathbb{Z}$. Now, $\partial^{0}_{0,2}$ takes a composable pair $\\{f,g\\}$ to $(f+g-f\circ g)$. As $f\circ g$ is different for each of the composable pairs $\\{f,g\\}$ in $E^{0}_{0,2}$, the kernel of $\partial^{0}_{0,2}$ is 0. Furthermore, we see that computing the cokernel of $\partial^{0}_{0,2}$ amounts to identifying $f\circ g$ with $f+g$ for each of these 8 compositions, so the cokernel is torsion-free and $E^{1}_{0,1}=\mathbb{Z}^{30-8-2}=\mathbb{Z}^{20}$. By a similar argument, the kernel of $\partial^{0}_{1,2}$ is 0 and its cokernel is $\mathbb{Z}^{52}$. Computing $E^{2}$ from $E^{1}$ can be done through tedious computation. Working out this computation gives that $\partial^{1}_{1,1}$ is surjective and that $\operatorname{im}\partial^{1}_{2,1}=\ker\partial^{1}_{1,1}$, so $E^{2}_{0,1}$, $E^{2}_{1,1}$ and $E^{2}_{2,1}$ all vanish. The spectral sequence now collapses, and we get that the computed homology is $\mathbb{Z}$ in degree 0 and 0 everywhere else, as expected from $D^{3}$. The key takeaway from this example is that all rows above $q=1$ became 0 everywhere on the $E^{1}$–page. As we shall now show, this always happens, which causes the spectral sequence to collapse on the second page. For the rest of the section, when we write a _discrete flow category_, we shall mean a discrete flow category induced from a discrete Morse function on a regular CW complex, as defined in Section 5.
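The "tedious computation" of passing from one page to the next in examples like the one above reduces to linear algebra once the differentials are written as matrices: over $\mathbb{Q}$ (which captures the free ranks), the dimension of an entry on the next page is its current dimension minus the ranks of the outgoing and incoming differentials. A hedged sketch of this bookkeeping is given below (our own illustration; the matrix used in the check is a stand-in of the correct rank, not the actual differential of Example 8.5).

```python
import numpy as np

def next_page_dim(dim_entry, d_out=None, d_in=None):
    """Dimension over Q of ker(d_out)/im(d_in) at one spectral sequence entry,
    given the matrices of the differential leaving and arriving at it."""
    rank_out = np.linalg.matrix_rank(d_out) if d_out is not None else 0
    rank_in = np.linalg.matrix_rank(d_in) if d_in is not None else 0
    return dim_entry - rank_out - rank_in

# Toy check mirroring Example 8.5: E^1_{1,1} has rank 8, surjects onto E^1_{0,1}
# of rank 7, and receives nothing, so the next page has rank 8 - 7 - 0 = 1.
d_out = np.hstack([np.eye(7), np.zeros((7, 1))])  # any 7x8 matrix of rank 7
print(next_page_dim(8, d_out=d_out))              # -> 1
```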
For the following result, recall that for a p-category $C$, $\operatorname{\overline{N}}C$ is the simplicial category described in Section 4.2. ###### Theorem 8.7. Let $X$ be a regular CW complex and let $\Sigma$ be a Morse system induced from a discrete Morse function. Then $(\operatorname{\overline{N}}\operatorname{Flo}_{\Sigma}[X])([n])$ is a unique factorization category for all $n\geq 0$. ###### Proof. As observed in 7.14, $(\operatorname{\overline{N}}\operatorname{Flo}_{\Sigma}[X])([0])$, which is just $\operatorname{Flo}_{\Sigma}[X]$ with the poset structure removed, is a unique factorization category. Now, let $n\geq 1$. The morphisms in $(\operatorname{\overline{N}}\operatorname{Flo}_{\Sigma}[X])([n])$ are represented by $f_{0}\Rightarrow\dots\Rightarrow f_{n}$ with $f_{i}$ morphisms in $\operatorname{Flo}_{\Sigma}[X]$. It is clear that $(\operatorname{\overline{N}}\operatorname{Flo}_{\Sigma}[X])([n])$ is a finite directed category. Furthermore, given a representative $f_{0}\Rightarrow\dots\Rightarrow f_{n}$, $f_{0}\colon x\to y$ has a unique factorization through critical cells $x=c_{0},c_{1},\dots,c_{k}=y$. By Theorem 6.7, all $f_{i}$ then factor through these critical cells. Let $f_{i}=f_{i}^{1}\circ\dots\circ f_{i}^{k}$ be the decomposition of $f_{i}$ through $c_{0},\dots,c_{k}$. We then have a decomposition $\\{f_{0},\dots,f_{n}\\}=\\{f_{0}^{1},\dots,f_{n}^{1}\\}\circ\dots\circ\\{f_{0}^{k},\dots,f_{n}^{k}\\}$. The sets on the right hand side are indecomposable, as $f_{0}^{i}$ is indecomposable for all $i$. Furthermore, the decomposition is unique, as $f_{0}^{1}\circ\dots\circ f_{0}^{k}$ is the unique decomposition of $f_{0}$, and all other $f_{i}^{j}$ are decided uniquely by this decomposition. ∎ ###### Corollary 8.8. Let $\\{E^{r}_{*,*}\\}$ be the spectral sequence associated to a discrete flow category. Then $E^{1}_{p,q}=0$ for $q\geq 2$. ###### Proof. Let $C$ be the discrete flow category. By definition, $E^{1}_{p,q}$ is the $q$th homology of the simplicial set $E^{0}_{p,*}=(\operatorname{N\overline{N}}C)_{p,*}=\mathcal{N}\left((\operatorname{\overline{N}}C)([p])\right)$. By combining Theorem 8.7 and 7.16, we get that this is 0 when $q\geq 2$. ∎ ###### Lemma 8.9. Let $\\{E^{r}_{*,*}\\}$ be the spectral sequence associated to a discrete flow category. Then $E^{2}_{p,0}=0$ for $p\geq 1$. ###### Proof. Let $C$ be the discrete flow category. We have that $E^{1}_{p,0}$ is the free abelian group on the connected components of $\mathcal{N}\left((\operatorname{\overline{N}}C)([p])\right)$. Now, if $a$ and $b$ are connected in $\mathcal{N}\left((\operatorname{\overline{N}}C)([p])\right)$, then they are connected in $\mathcal{N}\left((\operatorname{\overline{N}}C)([i])\right)$ for all $i$. Hence, $E^{1}_{p,0}=\\{(s_{0})^{p}[x]:[x]\in E^{1}_{0,0}\\}$. Now, as $\partial_{h}(s_{0})^{p}[x]$ is $(s_{0})^{p-1}[x]$ when $p$ is even and 0 when $p$ is odd, taking the horizontal homology gives us 0 everywhere except in $E^{1}_{0,0}$. ∎ With the previous two results, we are finally ready to prove Theorem B. ###### Theorem 8.10 (Theorem B). The spectral sequence associated to a discrete flow category collapses on the second page. ###### Proof. As $E^{1}_{p,q}=0$ for $q\geq 2$, the same applies to $E^{2}$. Hence, the only potential nonzero entries on page 2 are $E^{2}_{0,0}$ and $E^{2}_{p,1}$, $p\geq 0$, so there can be no nonzero differentials from page 2 onward.
∎ Note that the above results apply to the ordinary spectral sequence associated to a bisimplicial set, _not_ the one where degenerate simplices have been removed (as in 8.4). However, for computational purposes it is useful to remove degenerate simplices from the spectral sequence, as there are always infinitely many degenerate simplices. Therefore, we now show that the spectral sequence also collapses on page 2 when removing degeneracies. ###### Theorem 8.11. Let $X$ be the double nerve of a discrete flow category. Then the spectral sequence associated to the double complex $FX/F(D^{h}\cup D^{v})$ collapses at the second page. ###### Proof. Let $\\{E^{r}_{*,*}\\}$ be the associated spectral sequence. Firstly, there are no nondegenerate simplices in $X_{p,0}$ for $p\geq 1$, so $E^{0}_{p,0}\cong 0$ for $p\geq 1$ (and the same holds for the succeeding pages). We now show that $E^{1}_{p,q}\cong 0$ for $q\geq 2$. Recall that $E^{1}_{p,q}=H_{II}(FX/F(D^{h}\cup D^{v}))_{p,q}$. First, we show that $H_{II}(FX/FD^{h})_{p,q}\cong 0$ for $q\geq 2$. Suppose that $[x]\in(FX/FD^{h})_{p,q}$, with $q\geq 2$, is such that $\partial_{v}[x]=0$. Then $x\in FX_{p,q}$ is such that $\partial_{v}x\in FD^{h}$, so $\partial_{v}x=s^{h}_{0}y_{0}+\dots+s^{h}_{p-1}y_{p-1}$ for some $y_{0},\dots,y_{p-1}\in FX_{p-1,q}$, where $s_{i}^{h}=(s_{i},\operatorname{id})$ is the $i$th horizontal degeneracy map. Now, let $x^{\prime}=x-s^{h}_{p-1}d^{h}_{p}x$. Then $[x^{\prime}]=[x]$ in $(FX/FD^{h})_{p,q}$, and $\partial_{v}x^{\prime}=\partial_{v}x-\partial_{v}(s^{h}_{p-1}d^{h}_{p})x=\partial_{v}x-(s^{h}_{p-1}d^{h}_{p})\partial_{v}x=\left(s^{h}_{0}y_{0}+\dots+s^{h}_{p-1}y_{p-1}\right)-\left(s^{h}_{p-1}d^{h}_{p}s^{h}_{0}y_{0}+\dots+s^{h}_{p-1}d^{h}_{p}s^{h}_{p-1}y_{p-1}\right).$ We now utilize the fact that $d_{p}s_{p-1}=\operatorname{id}$ and that when $i<p-1$, $s_{p-1}d_{p}s_{i}=s_{p-1}s_{i}d_{p-1}=s_{i}s_{p-2}d_{p-1}$. Thus, the sum above rewrites to $s^{h}_{0}(y_{0}-s^{h}_{p-2}d^{h}_{p-1}y_{0})+\dots+s^{h}_{p-2}(y_{p-2}-s^{h}_{p-2}d^{h}_{p-1}y_{p-2})+s^{h}_{p-1}(y_{p-1}-y_{p-1})=s^{h}_{0}y_{0}^{\prime}+\dots+s^{h}_{p-2}y_{p-2}^{\prime}.$ Now, continuing this process by letting $x^{\prime\prime}=x^{\prime}-s^{h}_{p-2}d^{h}_{p-1}x^{\prime}$, and so on, we end up with a $\hat{x}\in FX_{p,q}$ such that $[\hat{x}]=[x]\in(FX/FD^{h})_{p,q}$ and $\partial_{v}\hat{x}=0$. But as $H_{II}(FX)_{p,q}$ is 0 when $q\geq 2$, then $\hat{x}$ is in the image of $\partial_{v}$, and thus $[x]$ is in the image of $\partial_{v}$. Hence, $H_{II}(FX/FD^{h})_{p,q}\cong 0$ when $q\geq 2$. It remains to show that $H_{II}(FX/F(D^{h}\cup D^{v}))_{p,q}\cong 0$ when $q\geq 2$. For this, we use that $G/(AB)=(G/A)/(B/(A\cap B))$ for groups $A,B<G$, so we have a short exact sequence of chain complexes $0\rightarrow FD^{v}/(FD^{h}\cap FD^{v})\rightarrow FX/FD^{h}\rightarrow FX/F(D^{h}\cup D^{v})\rightarrow 0.$ (10) We know from 2.4 that $H_{II}(FD^{v})\cong 0$. Furthermore, one can show that $D^{h}_{p,*}$ together with the vertical face and degeneracy maps is a simplicial set555This follows from the fact that if $x$ is a horizontal degeneracy, then so are $d^{v}_{i}x$ and $s^{v}_{i}x$, as $d^{v}_{i}s^{h}_{j}=s^{h}_{j}d^{v}_{i}$ and $s^{v}_{i}s^{h}_{j}=s^{h}_{j}s^{v}_{i}$.. The degeneracies in this simplicial set are $D^{h}\cap D^{v}$, so, again by 2.4, $H_{II}(F(D^{h}\cap D^{v}))=H_{II}(FD^{h}\cap FD^{v})\cong 0$.
It then follows (from the long exact sequence in homology of short exact sequences of chain complexes) that $H_{II}(FD^{v}/(FD^{h}\cap FD^{v}))=0$. Finally, we can use the long exact sequence in homology from the short exact sequence in (10) to conclude that $H_{II}(FX/F(D^{h}\cup D^{v}))_{p,q}\cong H_{II}(FX/FD^{h})_{p,q}$ (for all $p$ and $q$). In particular, $H_{II}(FX/F(D^{h}\cup D^{v}))_{p,q}\cong 0$ for $q\geq 2$. Hence, $E^{1}_{p,q}$ is 0 when $q\geq 2$, and this, together with the fact that $E^{1}_{p,0}\cong 0$ when $p\geq 1$, shows that the spectral sequence collapses on the second page. ∎ Observe that due to this theorem, when computing the spectral sequence, we never need to know $E^{0}_{p,q}$ for $q\geq 3$. In fact, the only things we need to compute on page 1 are the row $E^{1}_{*,1}$ and the single square $E^{1}_{0,0}$. The squares in $E^{1}_{*,1}$ also have a relatively simple description; the differential $\partial_{v}\colon E^{0}_{p,2}\to E^{0}_{p,1}$ sends a composable pair $\\{f,g\\}$ to $f+g-g\circ f$. Hence, $E^{1}_{p,1}$ is the same as $E^{0}_{p,1}$ where all morphisms have been identified with the sum of the morphisms in their (unique) decomposition into indecomposable morphisms. We now show another example of a spectral sequence computation, using Theorem 8.11 and the observations in the last example. ###### Example 8.12. We compute the discrete flow category of a discrete Morse function on the 2–torus, illustrated in Figure 19, and the associated spectral sequence (with degeneracies removed), with rational coefficients (to make the computations easier). Figure 19. A discrete Morse function (represented by its gradient vector field) on a regular CW decomposition of the 2–torus. The regular pairs are given by red, dashed arrows. The critical cells are $A$, $\alpha$, $\varepsilon$ and $a$. We compute the Hom posets with Algorithm 1, and get: $\operatorname{Hom}(A,\alpha)=\left\\{f_{1}=[A>\alpha],\ f_{2}=[A>\gamma<C>\alpha]\right\\},$ $\operatorname{Hom}(A,\varepsilon)=\left\\{f_{1}^{\prime}=[A>\varepsilon],\ f_{2}^{\prime}=[A>\eta<B>\varepsilon]\right\\},$ $\operatorname{Hom}(\alpha,a)=\left\\{g_{1}=[\alpha>a],\ g_{2}=[\alpha>b<\beta>a]\right\\},$ $\operatorname{Hom}(\varepsilon,a)=\left\\{g_{1}^{\prime}=[\varepsilon>a],\ g_{2}^{\prime}=[\varepsilon>c<\zeta>a]\right\\}.$ The only Hom poset with nontrivial partial orders is $\operatorname{Hom}(A,a)$, which consists of 36 morphisms in four connected components, as illustrated in Figure 20. $\displaystyle g_{1}\circ f_{1}\Rightarrow\ \bullet\ \Leftarrow g_{1}^{\prime}\circ f_{1}^{\prime}$ $\displaystyle g_{2}\circ f_{1}\Rightarrow\ \bullet\ \Leftarrow\quad\dots\quad\Rightarrow\ \bullet\ \Leftarrow g_{1}^{\prime}\circ f_{2}^{\prime}$ $\displaystyle g_{1}\circ f_{2}\Rightarrow\ \bullet\ \Leftarrow\quad\dots\quad\Rightarrow\ \bullet\ \Leftarrow g_{2}^{\prime}\circ f_{1}^{\prime}$ $\displaystyle g_{2}\circ f_{2}\Rightarrow\ \bullet\ \Leftarrow\quad\dots\quad\Rightarrow\ \bullet\ \Leftarrow g_{2}^{\prime}\circ f_{2}^{\prime}$ Figure 20. The four components in the Hom poset $\operatorname{Hom}(A,a)$. The unnamed morphisms (represented by dots) are all indecomposable.
[Diagram: Page 0 has $\mathbb{Q}^{4}$ at $(0,0)$, $\mathbb{Q}^{44}$ at $(0,1)$, $\mathbb{Q}^{8}$ at $(0,2)$ and $\mathbb{Q}^{32}$ at $(1,1)$; Page 1 has $\mathbb{Q}$ at $(0,0)$, $\mathbb{Q}^{33}$ at $(0,1)$ and $\mathbb{Q}^{32}$ at $(1,1)$; Page 2 has $\mathbb{Q}$ at $(0,0)$, $\mathbb{Q}^{2}$ at $(0,1)$ and $\mathbb{Q}$ at $(1,1)$.] Figure 21. The first three pages of the spectral sequence for a discrete flow category of the 2–torus. The discrete flow category has 4 objects, 44 morphisms, 8 nondegenerate horizontal compositions of morphisms, 32 non-identity partial orders, and no other nondegenerate compositions. Hence, the $E^{0}$ page is as illustrated in Figure 21. On page 1, $E^{1}_{0,0}$ is $\mathbb{Q}$, as the nerve of the category has one connected component. Hence, $E^{1}_{0,1}=\mathbb{Q}^{44-8-3}=\mathbb{Q}^{33}$. Furthermore, $E^{1}_{0,1}$ can be described as the cycles in $E^{0}_{0,1}$, where all compositions $f\circ g$ have been identified with $f+g$. Now, to compute $E^{2}$, we need only determine the map $\partial^{1}_{1,1}\colon E^{1}_{1,1}\to E^{1}_{0,1}$. For this, observe that for each of the four components in Figure 20, we can assign an alternating sum of partial orders $x_{i}$, so that $\displaystyle\partial^{1}x_{1}=g_{1}^{\prime}\circ f_{1}^{\prime}-g_{1}\circ f_{1}=g_{1}^{\prime}+f_{1}^{\prime}-g_{1}-f_{1},$ $\displaystyle\partial^{1}x_{2}=g_{1}^{\prime}\circ f_{2}^{\prime}-g_{2}\circ f_{1}=g_{1}^{\prime}+f_{2}^{\prime}-g_{2}-f_{1},$ $\displaystyle\partial^{1}x_{3}=g_{2}^{\prime}\circ f_{1}^{\prime}-g_{1}\circ f_{2}=g_{2}^{\prime}+f_{1}^{\prime}-g_{1}-f_{2},$ $\displaystyle\partial^{1}x_{4}=g_{2}^{\prime}\circ f_{2}^{\prime}-g_{2}\circ f_{2}=g_{2}^{\prime}+f_{2}^{\prime}-g_{2}-f_{2}.$ Now, $\partial^{1}(x_{1}-x_{2}-x_{3}+x_{4})=0$. One can verify (for example through writing $\partial^{1}$ as a matrix and row reducing) that $(x_{1}-x_{2}-x_{3}+x_{4})$ generates the kernel of $\partial^{1}_{1,1}$, so that $E^{2}_{1,1}=\mathbb{Q}$ and $E^{2}_{0,1}=\mathbb{Q}^{2}$. The spectral sequence now collapses, as predicted by Theorem 8.11, and the computed homology of the 2–torus is: $H_{n}(T^{2};\mathbb{Q})=\begin{cases}\mathbb{Q},&\quad n=0,2,\\\ \mathbb{Q}^{2},&\quad n=1,\\\ 0,&\quad\text{ otherwise.}\\\ \end{cases}$ Observe that in this case we got a nonzero group in $E^{2}_{1,1}$ even though the homology of (the nerve of) the Hom poset $\operatorname{Hom}(A,a)$ is 0 in degree 1. This happens because on page 1, the compositions $f\circ g$ become identified with the sums $f+g$, so that the differential $\partial^{1}\colon E^{1}_{1,1}\to E^{1}_{0,1}$ has a nonzero element in the kernel, even though the differential $\partial_{h}\colon E^{0}_{1,1}\to E^{0}_{0,1}$ does not. ## References * [1] A. Björner. Posets, regular CW complexes and Bruhat order. European J. Combin., 5(1):7–16, 1984. * [2] M. Bullejos and A. M. Cegarra. On the geometry of 2-categories and their classifying spaces. $K$-Theory, 29(3):211–229, 2003. * [3] Pilar Carrasco, Antonio M. Cegarra, and Antonio R. Garzón. Nerves and classifying spaces for bicategories. Algebr. Geom. Topol., 10(1):219–274, 2010. * [4] Ralph Cohen, John Jones, and Graeme Segal. Morse theory and classifying spaces, preprint, 1995. Available at http://www.math.toronto.edu/mgualt/MorseTheory/CohenJonesSegal.pdf. * [5] Albrecht Dold and Dieter Puppe. Homologie nicht-additiver Funktoren. Anwendungen. Ann. Inst. Fourier (Grenoble), 11:201–312, 1961. * [6] Robin Forman. Morse theory for cell complexes. Advances in Mathematics, 134(1):90–145, 1998. * [7] Robin Forman. A user’s guide to discrete Morse theory. Sém. Lothar. Combin., 48:Art. B48c, 35, 2002.
* [8] Paul G. Goerss and John F. Jardine. Simplicial homotopy theory, volume 174 of Progress in Mathematics. Birkhäuser Verlag, Basel, 1999. * [9] Dmitry N. Kozlov. Organized collapse: an introduction to discrete Morse theory, volume 207 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, 2020. * [10] Albert T. Lundell and Stephen Weingram. The topology of CW complexes. The University Series in Higher Mathematics. Van Nostrand Reinhold Co., New York, 1969. * [11] J. Peter May. Simplicial objects in algebraic topology. Van Nostrand Mathematical Studies, No. 11. D. Van Nostrand Co., Inc., Princeton, N.J.-Toronto, Ont.-London, 1967. * [12] John McCleary. A user’s guide to spectral sequences, volume 58 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, second edition, 2001. * [13] J. Milnor. Morse theory. Annals of Mathematics Studies, No. 51. Princeton University Press, Princeton, N.J., 1963. Based on lecture notes by M. Spivak and R. Wells. * [14] Vidit Nanda. Discrete morse theory and localization. Journal of Pure and Applied Algebra, 223(2):459–488, 2019. * [15] Vidit Nanda. Computational algebraic topology lecture notes, 2022. https://people.maths.ox.ac.uk/nanda/cat/TDANotes.pdf. * [16] Daniel Quillen. Homotopy properties of the poset of nontrivial p-subgroups of a group. Advances in Mathematics, 28(2):101–128, 1978. * [17] Nicholas A. Scoville. Discrete Morse theory, volume 90 of Student Mathematical Library. American Mathematical Society, Providence, RI, 2019. * [18] S. Smale. Differentiable dynamical systems. Bull. Amer. Math. Soc., 73:747–817, 1967. * [19] William P. Thurston. Three-dimensional geometry and topology. Vol. 1, volume 35 of Princeton Mathematical Series. Princeton University Press, Princeton, NJ, 1997. Edited by Silvio Levy. * [20] Melvin Vaupel, Erik Hermansen, and Paul Trygsland. Section complexes of simplicial height functions, 2022. arXiv:2201.12617. * [21] Charles A. Weibel. An introduction to homological algebra, volume 38 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 1994.
# Conditional Cross Attention Network for Multi-Space Embedding without Entanglement in Only a SINGLE Network Chull Hwan Song1 Taebaek Hwang1 Jooyoung Yoon1 Shunghyun Choi1 Yeong Hyeon Gu2 1Dealicious Inc. 2Sejong University Corresponding author ###### Abstract Many studies in vision tasks have aimed to create effective embedding spaces for single-label object prediction within an image. However, in reality, most objects possess multiple specific attributes, such as shape, color, and length, with each attribute composed of various classes. To apply models in real-world scenarios, it is essential to be able to distinguish between the granular components of an object. Conventional approaches to embedding multiple specific attributes into a single network often result in entanglement, where fine-grained features of each attribute cannot be identified separately. To address this problem, we propose a Conditional Cross-Attention Network that induces disentangled multi-space embeddings for various specific attributes with only a single backbone. Firstly, we employ a cross-attention mechanism to fuse and switch the information of conditions (specific attributes), and we demonstrate its effectiveness through a diverse visualization example. Secondly, we leverage the vision transformer for the first time to a fine-grained image retrieval task and present a simple yet effective framework compared to existing methods. Unlike previous studies where performance varied depending on the benchmark dataset, our proposed method achieved consistent state-of-the-art performance on the FashionAI, DARN, DeepFashion, and Zappos50K benchmark datasets. 0.15pt Figure 1: Multiple Networks vs Single Network for Multi-space embedding. CCA means our proposed Conditional Cross Attention Network. ## 1 Introduction Figure 2: Previous works (CSN, ASEN, CAMNet) _vs_. Ours (CCA) ImageNet [2] is a representative benchmark dataset to verify the visual feature learning effects of deep learning models in the vision domain. However, each image has only one label, which cannot fully explain the various features of real objects. For example, a car can be identified with various attributes such as category, color, and length, as in Figure 1. As shown in Figure 1 (a), the general method of forming embeddings for objects’ various attributes involves constructing neural networks equal to the number of specific attributes, and creating multiple embeddings for vision tasks such as image classification [6, 22, 8] and retrieval [10, 20]. Unlike conventional methods, this study presents a technique that embeds various attributes into a single network. We refer to this technique as multi-space attribute-specific embedding Figure 1 (b). Embedding space aims to encapsulate feature similarities by mapping similar features to close points and dissimilar ones to farther points. However, when the model attempts to learn multiple visual and semantic concepts simultaneously, the embedding space becomes complex, resulting in entanglement; thus, points corresponding to the same semantic concept can be mapped in different regions. Consequently, embedding multiple concepts in an image into a single network is very challenging. Although previous studies attempted to solve this problem using convolutional neural networks (CNNs) [26, 13, 4, 20], they have required intricate frameworks, such as the incorporation of multiple attention modules or stages, in order to identify specific local regions that contain attribute information. 
Recently, there has been an increase in research related to ViT [11], which outperforms existing CNN-based models in various vision tasks, such as image classification [11], retrieval [21], and detection [1]. In addition, research analyzing how ViT learns representations compared to CNN is underway [17, 15, 14]. Raghu _et al_. [17] demonstrated that the higher layers of ViT are superior in preserving spatial locality information, providing more spatially discriminative representations than CNNs. Some attributes of an object are more easily distinguished when focusing on specific local areas. So, we tailor the last layer of ViT to recognize specific attributes based on their spatial locality, which provides fine-grained information about a particular condition. Figure 2 summarizes the difference between existing CNN-based and proposed ViT-based methods. This study makes the following contributions: 1. Entanglement occurs when embedding an object containing multiple attributes using a single network. The proposed CCA that applies a cross-attention mechanism can solve this problem by adequately fusing and switching between the different condition information (specific attributes) and images. 2. This is the first study to apply ViT to multi-space embedding-based image retrieval tasks. In addition, it is a simple and effective method that can be applied to the ViT architecture with only minor modification. Moreover, it improves memory efficiency by forming multi-space embeddings with only one ViT backbone rather than multiple backbones. 3. Most prior studies showed good performance only on specific datasets. However, the proposed method yields consistently high performance on most datasets and effectively learns interpretable representations. Moreover, the proposed method achieved state-of-the-art (SOTA) performance on all relevant benchmark datasets compared to existing methods. ## 2 Related Works #### Similarity Embedding Triplet Network [24, 18] uses distance calculation to embed images into a space; images in the same category are placed close and those in different categories are far apart. This algorithm has been widely used for diverse subjects such as face recognition and image retrieval. However, as it learns from a single embedding space, it is unsuitable for embedding multiple subjects with multiple categories. Multiple learning models must be created separately according to the number of categories to increase the sophistication level. #### Image Retrieval via CNN-based Embedding Image retrieval is a common task in computer vision: finding relevant images based on a query image. Recent works have explored CNN-based embedding and attention mechanisms to improve image retrieval. Some works leverage channel-wise [8, 28, 29] and spatial-wise [29] attention mechanisms to assign more importance to the attended objects in the image. Understanding the detailed characteristics of objects is crucial in image retrieval. This is particularly significant in the fashion domain, where even the same type of clothing can have various attributes such as color, material, and length. Therefore, to excel in attribute-based retrieval, it is necessary to learn disentangled representations for each attribute. The nature of this task is suitable for demonstrating the effectiveness of multi-space embedding. Thus, we show the efficacy of CCA through a fashion attribute-specific retrieval task.
#### CNN-based Attribute-Specific Embedding Figure 2 outlines the concepts of existing attribute-specific embedding, similar to our current study. CSN [26] converts the condition into a mask-like representation for multi-space embedding. The mask can be easily applied to the fully connected layer (FC). ASEN [13] joins the attention mechanism with a condition for multi-space embedding. A variation, ASEN++ [4], extended ASEN to 2 stages. These multi-stage techniques are excluded from this study for a fair comparison. M2Fashion [27] adds a classifier to the ASEN base. Unlike CSN, CAMNet [19] was extended to 3D feature maps and applied to the spatial attention mechanism, thus enhancing performance. These studies are CNN-based, not self-attention-based like the present study. The recent ViT [11] has been successfully applied to many vision tasks. However, no prior work has applied it to multi-space embedding for specific attributes, as described in this study. ## 3 Methods Figure 3: The Architecture of Conditional Cross Attention Network (CCA) Figure 3 presents the proposed CCA architecture, which is mostly similar to that of ViT [11] because it was designed to embed specific attributes through a detailed analysis of the ViT architecture. Hence, CCA is easily applicable under the ViT architecture. Moreover, as described in subsection 4.5, it yields excellent performance. The proposed architecture comprises self-attention and CCA modules. The following sections explain these networks. ### 3.1 Self Attention Networks The self-attention module learns a common representation containing the information necessary for multi-space embedding. The self-attention modules are nearly identical to ViT [11]. ViT divides the image into fixed-size patches and converts them into a sequence of patch tokens ([PATCH]). A classification token ([CLS]) [3] is added to the input sequence. As self-attention in ViT is position-independent, position embeddings are added to each patch token for vision applications requiring position information. All tokens of ViT are forwarded through a stacked transformer encoder and used for classification using [CLS] of the last layer. The transformer encoder consists of feed-forward (FFN) and multi-headed self-attention (MSA) blocks in a continuous chain. FFN has two multi-layer perceptrons; layer normalization (LN) is applied at the beginning of the block, followed by residual shortcuts. The following equation is for the $l$-th transformer encoder. $\begin{split}\mathbf{x}_{0}&=\left[\mathbf{x}_{{\texttt{[CLS]}}};\mathbf{x}_{{\texttt{[PATCH]}}}\right]+\mathbf{x}_{{\texttt{[POS]}}}\\\ \mathbf{x^{\prime}}_{l}&=\mathbf{x}_{l-1}+\mathtt{MSA}(\mathtt{LN}(\mathbf{x}_{l-1}))\\\ \mathbf{x}_{l}&=\mathbf{x^{\prime}}_{l}+\mathtt{FFN}(\mathtt{LN}(\mathbf{x^{\prime}}_{l}))\end{split}$ (1) where $\mathbf{x}_{0}$ is the initial ViT input. $\mathbf{x}_{{\texttt{[CLS]}}}\in\mathbb{R}^{1\times D}$, $\mathbf{x}_{{\texttt{[PATCH]}}}\in\mathbb{R}^{N\times D}$ and $\mathbf{x}_{{\texttt{[POS]}}}\in\mathbb{R}^{(1+N)\times D}$ are the classification, patch, and positional embeddings, respectively. The output after the first $L-1$ encoder blocks is used as input to the CCA module, as explained in subsection 3.2. Figure 4: Visualization of attention heat maps for each attribute. Red outlines denote actual annotated attributes in FashionAI. ### 3.2 Conditional Cross Attention Network In this study, the transformer must fuse the attribute concept with the mapped condition information in order for the network to learn.
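For reference, the following minimal PyTorch-style sketch (our own module and variable names, not the authors' released code) spells out the pre-norm encoder block of Equation 1; the first $L-1$ such blocks produce the common representation that the conditional cross-attention described next consumes.

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One pre-norm ViT encoder block: x + MSA(LN(x)), then x + FFN(LN(x))."""
    def __init__(self, dim=768, heads=12, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, mlp_ratio * dim), nn.GELU(),
                                 nn.Linear(mlp_ratio * dim, dim))

    def forward(self, x):                                  # x: (B, 1 + N, D) tokens
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # MSA with residual
        x = x + self.ffn(self.norm2(x))                    # FFN with residual
        return x
```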
Drawing inspiration from Vaswani _et al_. [25], we propose CCA to enable learning in line with the transformer’s self-attention mechanism. CCA applies cross-attention between the common representation obtained from the self-attention module and the mask corresponding to the given condition, in order to learn nonlinear embeddings that effectively express semantic similarity under that condition. Though existing techniques, such as CSN [26] and ASEN [13], have applied condition information to the embedding, these methods are CNN-based rather than transformer-based.

#### Conditional Token Embedding

A mechanism that switches the network according to the condition is needed to embed multiple attributes under a single network. In other words, attributes must be learned according to the condition. This study proposes two conditional token embedding methods, as shown in Figure 3. In the first, the condition ${c}$ is converted into a one-hot vector, after which conditional token embedding is performed, similar to multi-modal studies such as DeViSE [5], which embeds text and image information with the same meaning into the same space using heterogeneous data, as follows:

${\mathbf{q}_{\mathbf{c}}}=\mathtt{FC(onehot({c}))}$ (2)

where $\mathbf{q}_{\mathbf{c}}\in\mathbb{R}^{D\times 1}$ and ${c}$ is one of $K$ conditions. The second builds on the CSN [26] technique, presented in Figure 2 (a). To express $K$ conditions, CSN applies a mask $\in\mathbb{R}^{K\times D}$ to one of the features and uses element-wise multiplication to fuse and embed two CNN features $\in\mathbb{R}^{D}$. This study uses this step only for conditional feature embedding, without fusing the features. To this end, we initialize the mask $\in\mathbb{R}^{K\times D}$ for all attributes; this mask can be viewed as a learnable lookup table. The conditional token embedding using the mask is expressed as follows:

${\mathbf{q}_{\mathbf{c}}}=\mathtt{FC}(\phi(\mathbf{M_{\theta}}{[c,:]}))$ (3)

where $\phi$ is the ReLU activation function. To apply attention, the resulting dimensions must match those of the image features: the output of $\mathtt{FC}$ in Equation 2 and Equation 3 is embedded so that it matches the token dimension $D$. Finally, the result of both equations must match the dimensions of the token embedding in subsection 3.1. Therefore, the same vector $\mathbf{q}_{\mathbf{c}}\in\mathbb{R}^{D\times 1}$ is repeated to expand the result of both equations to the length of the token sequence, as follows:

${Q_{c}}=[\mathbf{q}_{\mathbf{c}};\mathbf{q}_{\mathbf{c}};...;\mathbf{q}_{\mathbf{c}}]$ (4)

#### Conditional Cross Attention

Finally, the transformer architecture must effectively fuse the conditional token embedding vector $Q_{c}$, for which we use CCA. The MSA process in Equation 1 uses a self-attention mechanism with query ($Q$), key ($K$), and value ($V$) vectors as input and is expressed as follows:

$\mathtt{Attention}{(Q_{i},K_{i},V_{i})}=\mathtt{softmax}{(\frac{Q_{i}K_{i}^{\top}}{\sqrt{d}})V_{i}}$ (5)

These vectors, generated from image $i$, can be expressed as ${K_{i},Q_{i},V_{i}}\in\mathbb{R}^{N\times D}$, consistent with the tokens mentioned above. The inner product of ${Q}$ and ${K}$ is computed, scaled, and normalized with the softmax function to obtain the attention weights. In contrast, though CCA is nearly identical to self-attention, its query, $Q_{c}$ in Equation 4, is generated so as to carry the condition information.
$K_{i}$ and $V_{i}$, which are the same as above, are used as input, and the cross-attention mechanism is applied to construct the final CCA as follows.

$\mathtt{Attention}{(Q_{c},K_{i},V_{i})}=\mathtt{softmax}{(\frac{Q_{c}K_{i}^{\top}}{\sqrt{d}})V_{i}}$ (6)

The cross-attention mechanism is nearly identical to general self-attention: apart from the replaced query in Equation 6, the block is the same as in Equation 1. The output consists of the embedding values of [CLS] and [PATCH]; in our proposed CCA, only [CLS] is used for the loss calculation. For the final output, FC and l2 normalization are applied to the embedding feature $x_{{\texttt{[CLS]}}}\in\mathbb{R}^{D}$ of [CLS] as follows:

$f^{final}=\mathtt{l{2}}(\mathtt{FC}{(x_{{\texttt{[CLS]}}})})$ (7)

Self-attention, explained in subsection 3.1, executes the transformer encoder for steps $1\sim(L-1)$, while CCA, explained in subsection 3.2, applies only to the final step $L$. In other words, during inference, as shown in Figure 1, if steps $1\sim(L-1)$ are executed only once and only the condition in the final step $L$ is changed, then several specific features can be obtained under various conditions. Figure 4 shows related experimental results: the eight attributes of the FashionAI dataset are attended in the regions matched to each attribute. In addition, steps $1\sim(L-1)$ of the network can reuse an existing ViT-based pre-trained model without modification.

### 3.3 Triplet Loss with Conditions

We use a triplet loss for learning specific attributes; it differs from the general triplet loss in that a conditioned triplet must be constructed. If an image $I$ has a label under condition $c$, the pair can be denoted as $(I,L_{c})$. When expanded to triplets, this is expressed as follows.

$\mathcal{T}=\{((I^{a},L^{a}_{c}),(I^{+},L^{+}_{c}),(I^{-},L^{-}_{c})|c)\}$ (8)

where $a$ indicates the anchor, $+$ means that the sample has the same class under the same condition as the anchor, and $-$ means that it does not have the same class. Using negative samples with the same condition in triplet learning can be interpreted as a hard negative mining strategy. As shown in [30], randomly selected negatives are easily distinguished from anchors, enabling the model to learn only coarse features. However, for negative samples with the same condition, the model must distinguish more fine-grained differences. Hence, informative negative samples more suitable for specific-attribute learning are provided. The triplet loss $\mathcal{L}$ is defined as follows.

$\mathcal{L}(I^{a},I^{+},I^{-},c)=\max\{0,\textsc{Dist}(I^{a},I^{+}|c)-\textsc{Dist}(I^{a},I^{-}|c)+m\}$ (9)

where $m$ is a predefined margin and Dist() refers to the cosine distance. In the Appendix, we present Algorithm 1, which outlines the pseudo-code of our proposed method.

## 4 Experiments

Table 1 shows the statistics of the datasets, including the number of attributes, the number of classes within the attributes, and the total number of images. The difficulty increases with the number of attributes and the number of classes; this trend can also be seen in the evaluation results of Table 2, Table 3, and Table 4.

DataSets | #Attributes | #Classes | #Images
---|---|---|---
FashionAI [32] | 8 | 55 | 180,335
DARN [9] | 9 | 185 | 195,771
DeepFashion [12] | 6 | 1000 | 289,222
Zappos50k [31] | 4 | 34 | 50,025

Table 1: Statistics of the benchmark datasets.
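As a concrete reference for the Type-1 and Type-2 variants compared in the tables below, the two conditional token embeddings of Equation 2 and Equation 3 can be sketched as follows. This is a minimal PyTorch illustration written for this text, not the released implementation; the module and argument names (`ConditionalTokenEmbedding`, `num_conditions`, `embed_dim`) are our own.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalTokenEmbedding(nn.Module):
    """Maps a condition index c in {0,...,K-1} to a query vector q_c of size D.

    Type-1: one-hot(c) -> FC                      (Equation 2)
    Type-2: learnable lookup M[c] -> ReLU -> FC   (Equation 3)
    """
    def __init__(self, num_conditions: int, embed_dim: int, variant: str = "type2"):
        super().__init__()
        self.variant = variant
        if variant == "type1":
            self.fc = nn.Linear(num_conditions, embed_dim)
        else:
            # M_theta in Equation 3: a learnable K x D lookup table
            self.mask = nn.Embedding(num_conditions, embed_dim)
            self.fc = nn.Linear(embed_dim, embed_dim)

    def forward(self, c: torch.Tensor, num_tokens: int) -> torch.Tensor:
        # c: (B,) integer condition indices
        if self.variant == "type1":
            onehot = F.one_hot(c, num_classes=self.fc.in_features).float()
            q_c = self.fc(onehot)                # (B, D), Equation 2
        else:
            q_c = self.fc(F.relu(self.mask(c)))  # (B, D), Equation 3
        # Equation 4: repeat q_c so that Q_c matches the token sequence length
        return q_c.unsqueeze(1).expand(-1, num_tokens, -1)  # (B, N, D)

# usage sketch: 8 FashionAI attributes, ViT-B token dimension 768
embed = ConditionalTokenEmbedding(num_conditions=8, embed_dim=768, variant="type2")
q_c = embed(torch.tensor([0, 3]), num_tokens=197)
print(q_c.shape)  # torch.Size([2, 197, 768])
```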
### 4.1 Metrics For FashionAI, DARN, and DeepFashion, we used the experimental setting information of ASEN [13] and applied the mean average precision (mAP) metric for evaluation. For Zappos50K, we followed the experimental setting of CSN [26] and applied the triplet prediction metric for evaluation. This metric verifies the efficiency of attribute specific embedding learning for predicting triplet relationships. ### 4.2 Implementation Details The experimental environment was implemented using 8 RTX 3090 GPUs. We used Pytorch [16] for all implementations. The backbone network was initialized with pre-trained R50+ViT-B/16 [11]. A batch size of 64 and learning rate of 0.0001 was applied for learning. We trained the models up to 200 epochs and selected the trained model that yielded the best results. Triplet loss, described in subsection 3.3, was used with a margin of 0.2. Figure 5: Comparison of multi-space embedding : Conditional (Ours) _vs_. Non- Conditional (Triplet network) Method | Backbone | mAP | mAP for each attribute ---|---|---|--- skirt length | sleeve length | coat length | pant length | collar design | lapel design | neckline design | neck design Random baseline [13] | R50 | 15.79 | 17.20 | 12.50 | 13.35 | 17.45 | 22.36 | 21.63 | 11.09 | 21.19 Triplet network [13] | R50 | 38.52 | 48.38 | 28.14 | 29.82 | 54.56 | 62.58 | 38.31 | 26.64 | 40.02 CSN [13] | R50 | 53.52 | 61.97 | 45.06 | 47.30 | 62.85 | 69.83 | 54.14 | 46.56 | 54.47 ASEN [13] | R50 | 61.02 | 64.44 | 54.63 | 51.27 | 63.53 | 70.79 | 65.36 | 59.50 | 58.67 CAMNet [19] | R50 | 61.97 | 64.14 | 56.22 | 53.05 | 65.67 | 72.60 | 67.74 | 63.05 | 61.97 ASEN++ [4] | R50 | 64.31 | 66.34 | 57.53 | 55.51 | 68.77 | 72.94 | 66.95 | 66.81 | 67.01 TF-CSN† | ViT | 64.86 | 66.73 | 59.58 | 59.94 | 70.91 | 71.45 | 68.17 | 64.92 | 62.33 TF-ASEN† | ViT | 64.21 | 65.86 | 60.11 | 59.74 | 70.20 | 70.80 | 67.01 | 64.08 | 59.48 Ours | CCA (Type-1) | ViT | 66.06 | 67.20 | 62.34 | 60.47 | 70.29 | 75.93 | 70.32 | 65.76 | 61.04 CCA (Type-2) | ViT | 69.03 | 69.55 | 65.92 | 64.43 | 72.74 | 75.39 | 71.89 | 70.42 | 63.85 Table 2: mAP comparisons of our methods against other studies on FashionAI. Bold: the best results among all methods. Bold black: the best results among the counterparts. TF is Transformer. R50 is ResNet50. $\dagger$ indicates our reproduced results. ### 4.3 Visualization of Multi-Space Embedding and Ranking Results #### Entangled _vs_. Disentangled Multi-space Embedding The proposed method enables multi-space embedding for various specific attributes with only one backbone network. When using the general learning method, entanglement in the embedding space inevitably occurs. To solve the entanglement problem and verify whether multi-space embeddings were formed, t-SNE [23] was used to examine the results. The t-SNE visualization results in Figure 5 show whether each attribute class of the FashionAI dataset is properly embedded. The t-SNE visualization results at the center are for the FashionAI dataset with 8 fashion attributes. For the proposed method, excellent embedding results are found for all 8 attributes in the center, and each attribute on the edges. However, training a single model for multiple attributes with the non-conditional method, which is the triplet network in Table 2, Table 3, and Table 4, do not solve the entanglement problem. These findings offer strong evidence that the proposed method achieved multi-space embedding with only one backbone network. Figure 6: Comparison of multi-space embeddings (Ours vs. 
ASEN and CAMNet) for FashionAI. The top row corresponds to our method, the middle row to ASEN, and the bottom row to CAMNet. The embeddings are shown for three categories (Neck Design, Sleeve Length, Coat Length) out of eight attributes. #### Ours _vs_. Previous Works’ Multi-space Embedding Figure 6 compares the embedding results between the proposed and previous (ASEN [13], CAMNet [19]) methods for the FashionAI dataset. The comparison results for 3 of the 8 detailed categories (Neck Design, Sleeve Length, and Coat Length) in FashionAI are shown. Our method yielded better embedding results than ASEN and CAMNet. For example, in the ASEN and CAMNet results, entanglement occurred in the embedding space for the Wrist Length, Long Sleeves, and Extra-long Sleeves classes of Sleeve Length, whereas entanglement is resolved with our proposed method. Figure A9 presents the embedding results for the 8 attributes in the FashionAI dataset. #### Ranking Results Figure A8 in Appendix C presents the Top 3 ranking results for the 8 attributes in the FashionAI dataset. The order in the figure is lapel design (notched), neckline design (round), skirt length (floor), pant length (midi), sleeve length (short), neck design (low turtle), coat length (midi), and collar design (peter pan). The features of each attribute are reflected accurately in the ranking. This is also demonstrated in the attention heat map. ### 4.4 Memory Efficiency The ViT used in this study has 98M parameters. Individual networks are required to learn attributes with the existing naive method, which necessitates 98M $\times K$ parameters. However, our proposed method can form multi-space embeddings with only one backbone network, thus requiring approximately 98M $\times 1$ parameters. As shown in Figure 2, only the last layer of the ViT model is modified in the proposed CCA, and fewer than 0.1M parameters are added for conditional token embedding. Thus, the proposed method achieves SOTA performance with very few parameters, indicating high efficiency of the algorithm. ### 4.5 Benchmarking Table 2, Table 3, and Table 4 present the evaluations for mAP using the metrics in subsection 4.1. Table 5 shows the triplet prediction metric results. In all tables, our method outperforms the SOTA models CSN [26] and ASEN [13]. #### FashionAI In Table 2, our method achieves SOTA performance for all categories except neck design. Overall, we achieve a +4.72% performance improvement. 
Method | Backbone | mAP | mAP for each attribute ---|---|---|--- clothes category | clothes button | clothes color | clothes length | clothes pattern | clothes shape | collar shape | sleeve length | sleeve shape Random baseline [13] | R50 | 32.26 | 8.49 | 24.45 | 12.54 | 29.90 | 43.26 | 39.76 | 15.22 | 63.03 | 55.54 Triplet network [13] | R50 | 40.14 | 23.59 | 38.07 | 16.83 | 39.77 | 49.56 | 47.00 | 23.43 | 68.49 | 56.48 CSN [13] | R50 | 50.86 | 34.10 | 44.32 | 47.38 | 53.68 | 54.09 | 56.32 | 31.82 | 78.05 | 58.76 ASEN [13] | R50 | 53.31 | 36.69 | 46.96 | 51.35 | 56.47 | 54.49 | 60.02 | 34.18 | 80.11 | 60.04 CAMNet [19]† | R50 | 44.32 | 25.24 | 38.02 | 47.01 | 45.25 | 48.35 | 45.57 | 23.33 | 71.69 | 55.89 M2Fashion [27] | R50 | 54.29 | 36.91 | 48.03 | 51.14 | 57.51 | 56.09 | 60.77 | 35.05 | 81.13 | 62.23 ASEN++ [4] | R50 | 55.94 | 40.15 | 50.42 | 53.78 | 60.38 | 57.39 | 59.88 | 37.65 | 83.91 | 60.70 TF-CSN† | ViT | 62.85 | 48.65 | 60.71 | 53.27 | 66.18 | 63.70 | 72.75 | 45.95 | 88.36 | 66.35 TF-ASEN† | ViT | 33.52 | 6.20 | 23.28 | 31.24 | 31.37 | 41.16 | 39.02 | 15.57 | 60.88 | 54.16 Ours | | CCA (Type-1) | ViT | 66.78 | 51.56 | 65.55 | 55.94 | 72.95 | 66.97 | 75.80 | 51.37 | 90.08 | 71.44 CCA (Type-2) | ViT | 68.09 | 53.04 | 68.21 | 56.65 | 74.71 | 70.12 | 77.03 | 52.51 | 90.23 | 70.99 Table 3: mAP comparisons of our methods against other studies on DARN. $\dagger$ indicates our reproduced results. Method | Backbone | mAP | mAP for each attribute ---|---|---|--- texture-related | fabric-related | shape-related | part-related | style-related Random baseline [13] | R50 | 3.38 | 6.69 | 2.69 | 3.23 | 2.55 | 1.97 Triplet network [13] | R50 | 7.36 | 13.26 | 6.28 | 9.49 | 4.43 | 3.33 CSN [13] | R50 | 8.01 | 14.09 | 6.39 | 11.07 | 5.13 | 3.49 ASEN [13] | R50 | 8.74 | 15.13 | 7.11 | 12.39 | 5.51 | 3.56 ASEN++ [4] | R50 | 9.64 | 15.60 | 7.67 | 14.31 | 6.60 | 4.07 TF-CSN† | ViT | 10.04 | 15.27 | 8.11 | 14.91 | 7.40 | 4.51 TF-ASEN† | ViT | 8.53 | 13.98 | 6.56 | 13.39 | 5.61 | 3.13 Ours CCA (Type-1) | ViT | 10.64 | 16.18 | 8.38 | 15.98 | 7.99 | 4.78 CCA (Type-2) | ViT | 11.04 | 16.76 | 8.42 | 16.83 | 8.47 | 4.92 Table 4: mAP comparisons of our methods against other studies on DeepFashion. #### DARN In Table 3, the proposed model yields SOTA performance for all items. Averaged across the board, it shows a significant performance improvement of +12.15%. #### DeepFashion In Table 4, the proposed model yields SOTA performance for all items. Overall, we achieve a performance improvement of +1.4%. As shown in Table 1, although it consists of only five attributes, these contain many more classes than FashionAI and DARN at 1000, resulting in a relatively low mAP value. Method | Prediction Accuracy(%) ---|--- Random baseline [13] | 50.00 Triplet network [26] | 76.28 CSN [26] | 89.27 ASEN [13] | 90.79 ADDE-C [7] | 91.37 TF-CSN† | 94.78 TF-ASEN† | 94.56 Ours CCA (Type-1) | 94.98 CCA (Type-2) | 94.85 Table 5: Performance of triplet prediction on Zappos50k. #### Zappos50K Table 5 presents the triplet prediction metric results. Our method achieved SOTA performance, with a +3.61% improvement compared to the previous method. Unlike the aforementioned datasets, the Zappos50K dataset is relatively simple, as indicated by the category composition in Table 1 and the example in Figure A7. ### 4.6 Ablation Studies #### SOTA models applied Transformer The results of the existing CSN [26], ASEN [13] models are obtained with the RestNet50 as the backbone. 
For a fair comparison, we apply the ViT backbone rather than a CNN to these methods and present the experimental results; these models are denoted TF-CSN and TF-ASEN, respectively. To apply the ViT backbone, note that CSN expects a feature of size $\mathbb{R}^{D}$, so it is applied to [CLS] $\in\mathbb{R}^{D}$. In contrast, ASEN expects a CNN feature map of dimensions $\mathbb{R}^{W\times H\times D}$; for ViT, it is applied to [PATCH] $\in\mathbb{R}^{N\times D}$, which is possible because $N$ can be reshaped to ${W\times H}$. One peculiarity is that ASEN outperforms CSN with a CNN backbone, but not with a ViT backbone. Overall, our proposed CCA, with the same transformer base as TF-CSN and TF-ASEN, outperforms both models.

#### Consistent Performance

We found that previous studies yielded inconsistent performance across datasets. For example, in Table 2, CAMNet [19] outperformed ASEN and CSN, whereas no results were reported for the datasets of Table 3 and Table 4. Similarly, in Table 3, M2Fashion [27] outperformed ASEN and CSN, whereas no results were reported for the datasets of Table 2 and Table 4. This suggests that the performance varies with the dataset. Accordingly, we applied the CAMNet method to the DARN dataset to reproduce it; in Table 3, the $\dagger$ symbol indicates our reproduced results. The CAMNet model yielded lower performance than ASEN and CSN. Moreover, although ASEN outperformed CSN with a CNN backbone, the opposite holds in the experimental results for TF-CSN and TF-ASEN with the transformer backbone. This is attributed to differences in learning according to the characteristics of each dataset. Thus, learning to form embeddings for objects with multiple attributes using a single network is very difficult. In contrast, our proposed CCA consistently yields high performance for all datasets.

#### Type-1 _vs_. Type-2

These results relate to Equation 2 and Equation 3 in subsection 3.2. Table 2 presents the results for CCA (Type-1) and CCA (Type-2); CCA (Type-2) yielded +2.97% higher performance than CCA (Type-1). In Table 3 and Table 4, CCA (Type-2) showed +1.31% and +0.4% higher performance, respectively. In Table 5, CCA (Type-1) yielded +0.42% higher performance. In other words, CCA (Type-2) was slightly higher on the previous three benchmark sets, whereas CCA (Type-1) was slightly higher, by 0.13%, on this dataset. However, both CCA (Type-1) and CCA (Type-2) outperformed all results of the previous studies as well as TF-CSN and TF-ASEN described above, achieving SOTA performance.

## 5 Conclusion

This study investigates forming embeddings for an object with multiple attributes using a single network, which is generally difficult in practice. The proposed method can extract various specific attribute features using a single backbone network, enabling multi-space embedding for multiple attributes. Finally, our proposed algorithm achieved SOTA performance in all evaluation metrics on the benchmark datasets.

## References

* [1] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In ECCV, 2020.
* [2] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
* [3] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), June 2019. * [4] Jianfeng Dong, Zhe Ma, Xiaofeng Mao, Xun Yang, Yuan He, Richang Hong, and Shouling Ji. Fine-grained fashion similarity prediction by attribute-specific embedding learning. In IEEE Transactions on Image Processing, 2021. * [5] Andrea Frome, Greg Corrado, Jonathon Shlens, Samy Bengio, Jeffrey Dean, Marc’Aurelio Ranzato, and Tomas Mikolov. Devise: A deep visual-semantic embedding model. In NIPS, 2013. * [6] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In CVPR, June 2016. * [7] Yuxin Hou, Eleonora Vig, Michael Donoser, and Loris Bazzani. Learning attribute-driven disentangled representations for interactive fashion retrieval. In ICCV, 2021. * [8] Jie Hu, Li Shen, Gang Sun, and Samuel Albanie. Squeeze-and-Excitation Networks. In TPAMI, 2017. * [9] Junshi Huang, Rogerio Feris, Qiang Chen, and Shuicheng Yan. Cross-Domain Image Retrieval with a Dual Attribute-Aware Ranking Network. In ICCV, 2015. * [10] Yannis Kalantidis, Clayton Mellina, and Simon Osindero. Cross-Dimensional Weighting for Aggregated Deep Convolutional Features. In ECCV, 2016. * [11] Alexander Kolesnikov, Alexey Dosovitskiy, Dirk Weissenborn, Georg Heigold, Jakob Uszkoreit, Lucas Beyer, Matthias Minderer, Mostafa Dehghani, Neil Houlsby, Sylvain Gelly, Thomas Unterthiner, and Xiaohua Zhai. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021. * [12] Ziwei Liu, Ping Luo, Shi Qiu, Xiaogang Wang, and Xiaoou Tang. DeepFashion: Powering Robust Clothes Recognition and Retrieval with Rich Annotations. In CVPR, 2016. * [13] Zhe Ma, Jianfeng Dong, Zhongzi Long, Yao Zhang, Yuan He, Hui Xue, and Shouling Ji. Fine-Grained Fashion Similarity Learning by Attribute-Specific Embedding Network. In Thirty-fourth AAAI Conference on Artificial Intelligence, 2020\. * [14] Muhammad Muzammal Naseer, Kanchana Ranasinghe, Salman H Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Intriguing properties of vision transformers. In NeurIPS, 2021. * [15] Namuk Park and Songkuk Kim. How do vision transformers work? In ICLR, 2022. * [16] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, and Soumith Chintala. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In NeurIPS, 2019. * [17] Maithra Raghu, Thomas Unterthiner, Simon Kornblith, Chiyuan Zhang, and Alexey Dosovitskiy. Do vision transformers see like convolutional neural networks? 2021\. * [18] Florian Schroff, Dmitry Kalenichenko, and James Philbin. FaceNet: A Unified Embedding for Face Recognition and Clustering. In CVPR, 2015. * [19] Chull Hwan Song and Hye Joo Han. Convolutional attribute mask with two-step attention for fashion image retrieval. In 26th International Conference on Pattern Recognition (ICPR), IEEE, 2022. * [20] Chull Hwan Song, Hye Joo Han, and Yannis Avrithis. All the attention you need: Global-local, spatial-channel attention for image retrieval. In WACV, 2022. * [21] Chull Hwan Song, Jooyoung Yoon, Shunghyun Choi, and Yannis Avrithis. Boosting vision transformers for image retrieval. In WACV, 2023. * [22] Mingxing Tan and Quoc Le. 
EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, pages 6105–6114, Long Beach, California, USA, June 2019. PMLR.
* [23] Laurens van der Maaten and Geoffrey Hinton. Visualizing Data using t-SNE. In Journal of Machine Learning Research, 2008.
* [24] Laurens van der Maaten and Kilian Weinberger. Stochastic triplet embedding. In MLSP, 2012.
* [25] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is All you Need. In I Guyon, U V Luxburg, S Bengio, H Wallach, R Fergus, S Vishwanathan, and R Garnett, editors, Advances in Neural Information Processing Systems. Curran Associates, Inc., 2017.
* [26] Andreas Veit, Serge Belongie, and Theofanis Karaletsos. Conditional Similarity Networks. In CVPR, 2017.
* [27] Yongquan Wan, Cairong Yan, Bofeng Zhang, and Guobing Zou. Learning image representation via attribute-aware attention networks for fashion classification. In MultiMedia Modeling: 28th International Conference, MMM 2022, Phu Quoc, Vietnam, June 6–10, 2022, Proceedings, Part I, 2022.
* [28] Qilong Wang, Banggu Wu, Pengfei Zhu, Peihua Li, Wangmeng Zuo, and Qinghua Hu. ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks. In CVPR, 2020.
* [29] Sanghyun Woo, Jongchan Park, Joon-Young Lee, and In So Kweon. CBAM: Convolutional Block Attention Module. In ECCV, 2018.
* [30] Hong Xuan, Abby Stylianou, Xiaotong Liu, and Robert Pless. Hard negative examples are hard, but useful. In ECCV, 2020.
* [31] Aron Yu and Kristen Grauman. Fine-grained visual comparisons with local learning. In CVPR, 2014.
* [32] Xingxing Zou, Xiangheng Kong, W. Wong, Congde Wang, Yuguang Liu, and Yuanpeng Cao. FashionAI: A Hierarchical Dataset for Fashion Understanding. In CVPRW, pages 296–304, 2019.

Supplementary material for “Conditional Cross Attention Network for Multi-space Embedding without Entanglement in Only a SINGLE Network”

## Appendix A Datasets

#### FashionAI [32]

The data published in the FashionAI Global Challenge 2018 comprises 180,335 apparel images, covering 8 fashion attributes with 55 classes in total.

#### DARN [9]

An open dataset for attribute classification and street-to-shop image retrieval, comprising 253,983 images; its 9 attributes contain 185 classes in total. The data is provided as image URLs; excluding broken URLs that cannot be downloaded, we used 195,771 URLs.

#### DeepFashion [12]

This dataset comprises 289,222 images; its 6 attributes contain 1000 classes in total.

#### Zappos50K [31]

This dataset comprises 50,025 shoe images collected from Zappos.com. It consists of 4 attributes with 34 classes in total. Figure A7 presents actual examples from the four training sets, shown in the order FashionAI [32], DARN [9], DeepFashion [12], Zappos50k [31].

## Appendix B More experiments

### B.1 Benchmarking : DeepFashion

Table 4 presents the experimental results for DeepFashion, described in detail in subsection 4.5.

## Appendix C More Visualization

Figure A7: Examples of our training sets. The order of each row is FashionAI, DARN, DeepFashion and Zappos50K.

### C.1 Ranking and Attention Heat map

Figure A8 shows the Top 3 results along with the corresponding attention maps. The relevant part of each attribute is attended, which can be interpreted as the result of disentangled multi-space modeling.
The order in the figure is lapel design (notched), neckline design (round), skirt length (floor), pant length (midi), sleeve length (short), neck design (low turtle), coat length (midi), and collar design (peter pan).

Figure A8: Examples of our top 3 ranking pairs (image, attention heat map) for the 8 attributes of FashionAI. Red rectangles indicate the query images. The order of each line is lapel design (notched), neckline design (round), skirt length (floor), pant length (midi), sleeve length (short), neck design (low turtle), coat length (midi) and collar design (peter pan).

### C.2 Ours _vs_. Previous Works : Multi-Space Embedding

Figure A9: Ours _vs_. Previous Works (ASEN, CAMNet): multi-space embedding visualization using t-SNE on FashionAI.

Figure 6 comparatively analyzed the embedding results of our study and previous studies, presenting the results for Neck Design, Sleeve Length, and Coat Length out of the 8 categories. Figure A9 shows the expanded results for all 8 categories. Our method solves the entanglement problem much better than ASEN [13] and CAMNet [19].

Algorithm 1 Pseudo-Code for CCA Training

1:input: Image $\mathcal{I}$, Condition $c$
2:batch $\mathcal{B}$, training epochs $K$, triplet set $\mathcal{T}$
3:Self Attention Block $SA$, Conditional Cross Attention $CCA$
4:for $epoch=1,...,K$ do
5: for $\mathcal{B}=1,...,M\in\mathcal{T}$ do
6: $Triplet(\mathcal{A}_{c},\mathcal{P}_{c},\mathcal{N}_{c})\leftarrow\mathcal{B}$
7: $\mathcal{I},c\leftarrow\mathcal{A}_{c},\mathcal{P}_{c},\mathcal{N}_{c}$
8: $\mathcal{Q}_{i},\mathcal{K}_{i},\mathcal{V}_{i}\leftarrow\mathrm{Token\_Embedding}(\mathcal{I})$
9: for $l=1,...,(\mathcal{L}-1)$ do
10: $\mathcal{Q}_{i},\mathcal{K}_{i},\mathcal{V}_{i}\leftarrow SA(\mathcal{Q}_{i},\mathcal{K}_{i},\mathcal{V}_{i})$
11: end for
12: at the last layer $l=\mathcal{L}$ do
13: $\mathcal{Q}_{c}\leftarrow\mathrm{Conditional\_Token\_Embedding}(c)$
14: ${\texttt{[CLS]}}\leftarrow CCA(\mathcal{Q}_{c},\mathcal{K}_{i},\mathcal{V}_{i})$
15: $f\leftarrow l2(FC({\texttt{[CLS]}}))$
16: calculate $f_{a},f_{p},f_{n}\leftarrow Triplet(\mathcal{A}_{c},\mathcal{P}_{c},\mathcal{N}_{c})$
17: calculate triplet loss $\mathcal{L}(f_{a},f_{p},f_{n}|c)$
18: calculate gradients $\nabla\mathcal{L}(\theta)$
19: $\theta\leftarrow Adam(\nabla\mathcal{L}(\theta))$
20: end for
21:end for
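For readers who prefer executable code to pseudo-code, the following PyTorch sketch mirrors Algorithm 1. It is an illustrative reconstruction written for this text rather than the released implementation: the backbone interface (`vit.embed`, `vit.blocks`, `vit.head`) and all helper names are assumptions, and the learned query/key/value projections and multiple heads of the real attention blocks are omitted for brevity.

```python
import torch
import torch.nn.functional as F

def cca_forward(vit, cond_embed, images, c):
    """Forward pass of Algorithm 1: plain self-attention for layers 1..L-1,
    conditional cross attention (Eq. 6) at layer L, then FC + l2 (Eq. 7)."""
    x = vit.embed(images)                      # x_0 = [CLS; PATCH] + POS, shape (B, N, D)
    for block in vit.blocks[:-1]:              # steps 1 .. L-1: standard encoder layers
        x = block(x)
    q_c = cond_embed(c, num_tokens=x.size(1))  # conditional query Q_c (Eq. 4), shape (B, N, D)
    # Eq. 6: the query comes from the condition, keys/values from the image tokens
    attn = torch.softmax(q_c @ x.transpose(1, 2) / x.size(-1) ** 0.5, dim=-1)
    out = attn @ x                             # (B, N, D)
    cls = out[:, 0]                            # only [CLS] feeds the loss
    return F.normalize(vit.head(cls), dim=-1)  # Eq. 7: l2(FC([CLS]))

def training_step(vit, cond_embed, batch, margin=0.2):
    """Conditioned triplet loss of Eq. 9 on an (anchor, positive, negative | c) batch."""
    img_a, img_p, img_n, c = batch
    f_a = cca_forward(vit, cond_embed, img_a, c)
    f_p = cca_forward(vit, cond_embed, img_p, c)
    f_n = cca_forward(vit, cond_embed, img_n, c)
    d_ap = 1 - (f_a * f_p).sum(-1)             # cosine distance of l2-normalized features
    d_an = 1 - (f_a * f_n).sum(-1)
    return F.relu(d_ap - d_an + margin).mean()
```

As in Algorithm 1, only the last layer depends on the condition, so at inference time the first $L-1$ layers can be run once per image and the final cross-attention step repeated for each attribute.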
# Generation and decidability for periodic $\ell$-pregroups

Nikolaos Galatos<EMAIL_ADDRESS>Isis A. Gallardo<EMAIL_ADDRESS>

###### Abstract

In [11] it is shown that the variety $\mathsf{DLP}$ of distributive $\ell$-pregroups is generated by a single algebra, the functional algebra $\mathbf{F}(\mathbb{Z})$ over the integers. Here, we show that $\mathsf{DLP}$ is equal to the join of its subvarieties $\mathsf{LP_{n}}$, for $n\in\mathbb{Z}^{+}$, consisting of $n$-periodic $\ell$-pregroups. We also prove that every algebra in $\mathsf{LP_{n}}$ embeds into the subalgebra $\mathbf{F}_{n}(\mathbf{\Omega})$ of $n$-periodic elements of $\mathbf{F}(\mathbf{\Omega})$, for some integral chain $\mathbf{\Omega}$; we use this representation to show that for every $n$, the variety $\mathsf{LP_{n}}$ is generated by the single algebra $\mathbf{F}_{n}(\mathbb{Q}\overrightarrow{\times}\mathbb{Z})$, noting that the chain $\mathbb{Q}\overrightarrow{\times}\mathbb{Z}$ is independent of $n$. We further establish a second representation theorem: every algebra in $\mathsf{LP_{n}}$ embeds into the wreath product of an $\ell$-group and $\mathbf{F}_{n}(\mathbb{Z})$, showcasing the prominent role of the simple $n$-periodic $\ell$-pregroup $\mathbf{F}_{n}(\mathbb{Z})$. Moreover, we prove that the join of the varieties $\mathsf{V}(\mathbf{F}_{n}(\mathbb{Z}))$ is also equal to $\mathsf{DLP}$, hence equal to the join of the varieties $\mathsf{LP_{n}}$, even though $\mathsf{V}(\mathbf{F}_{n}(\mathbb{Z}))\not=\mathsf{LP_{n}}$ for every single $n$. In this sense $\mathsf{DLP}$ has two different well-behaved approximations. We further prove that, for every $n$, the equational theory of $\mathbf{F}_{n}(\mathbb{Z})$ is decidable and, using the wreath product decomposition, we show that the equational theory of $\mathsf{LP_{n}}$ is decidable, as well.

###### keywords: periodic lattice-ordered pregroups, decidability, equational theory, variety generation, residuated lattices, lattice-ordered groups, diagrams

††journal: Journal of Algebra

[inst1] Department of Mathematics, University of Denver, 2390 S. York St., Denver, CO 80208, USA

## 1 Introduction

A _lattice-ordered pregroup_ (_$\ell$-pregroup_) is an algebra $(A,\wedge,\vee,\cdot,^{\ell},^{r},1)$, where $(A,\wedge,\vee)$ is a lattice, $(A,\cdot,1)$ is a monoid, multiplication preserves the lattice order $\leq$, and for all $x$,

$x^{\ell}x\leq 1\leq xx^{\ell}\text{ and }xx^{r}\leq 1\leq x^{r}x.$

We often refer to $x^{\ell}$ and to $x^{r}$ as the _left_ and _right inverse_ of $x$, respectively. The $\ell$-pregroups that satisfy $x^{\ell}=x^{r}$ are exactly the _lattice-ordered groups_ (_$\ell$-groups_): algebras $(A,\wedge,\vee,\cdot,{}^{-1},1)$, where $(A,\wedge,\vee)$ is a lattice, $(A,\cdot,{}^{-1},1)$ is a group, and multiplication preserves the order. Lattice-ordered groups have been studied extensively ([1], [5], [12], [14]) and they admit a Cayley-style representation theorem due to Holland [13]: every $\ell$-group can be embedded into the _symmetric_ $\ell$-group of order-preserving permutations on a totally-ordered set. _Pregroups_ are defined in a similar way as $\ell$-pregroups, but without the stipulation that the underlying order yields a lattice.
Pregroups were introduced and studied in the context of mathematical linguistics, both in the theoretical realm (in connection to context-free grammars and automata) and in applications (for studying the sentence structure of various natural languages), see [15], [4], [3]. Pregroups where the order is discrete (or, equivalently, that satisfy $x^{\ell}=x^{r}$) are exactly groups. It turns out that $\ell$-pregroups are precisely the _residuated lattices_ that satisfy $(xy)^{\ell}=y^{\ell}x^{\ell}$ and $x^{r\ell}=x=x^{\ell r}$ (see [10]), hence they can be axiomatized by equations and they form a variety of algebras which we denote by $\mathsf{LP}$. Since other examples of residuated lattices include the lattice of ideals of a ring with unit, Boolean algebras, and the algebra of binary relations over a set among others, $\ell$-pregroups enjoy common properties with these structures (for example, congruences correspond to ‘normal’ subalgebras). Furthermore, since residuated lattices form algebraic semantics for _substructural logics_ [10] (including intuitionistic, linear, relevance, and many-valued logic), the study of $\ell$-pregroups relates to the study of these non-classical logical systems, as well. An $\ell$-pregroup is called _distributive_ , if its underlying lattice is distributive. It is shown in [6] that distributive $\ell$-pregroups, like $\ell$-groups, also enjoy a Cayley/Holland embedding theorem: every distributive $\ell$-pregroup can be embedded in the _functional/symmetric_ $\ell$-pregroup $\mathbf{F}(\mathbf{\Omega})$ over a chain $\mathbf{\Omega}$. Here $\mathbf{F}(\mathbf{\Omega})$ consists of all functions on $\mathbf{\Omega}$ that have _residuals_ and _dual residuals_ of all orders (as defined in Section 2.1) under composition and pointwise order. In [11] we showed that this representation theorem can be improved in the sense that the chain $\mathbf{\Omega}$ can be assumed to be integral; a chain is called _integral_ if every point is contained in an interval isomorphic to $\mathbb{Z}$. This tighter representation is used in [11] to show that the variety of distributive $\ell$-pregroups is generated by a single algebra: the functional algebra $\mathbf{F}(\mathbb{Z})$. Here, we establish similar generation results for all periodic varieties of $\ell$-pregroups. An $\ell$-pregroup is called _$n$ -periodic_, for $n\in\mathbb{Z}^{+}$, if it satisfies $x^{\ell^{n}}=x^{r^{n}}$; we denote the corresponding variety by $\mathsf{LP_{n}}$. For example, $2$-periodic $\ell$-pregroups satisfy $x^{\ell\ell}=x^{rr}$, while $1$-periodic $\ell$-pregroups are precisely the $\ell$-groups; we say that an $\ell$-pregroup is _periodic_ if it is $n$-periodic for some $n\in\mathbb{Z}^{+}$. As we mention right below, periodicity is related to distributivity, so there is hope of using a similar approach as in [11] to obtain a representation theorem and a generation result. In particular, even though it is still an open problem whether every $\ell$-pregroup is distributive (we only know that the underlying lattice is semi-distributive by [9]), $\ell$-groups are known to be distributive and in [7] it is shown that actually all periodic $\ell$-pregroups are distributive, i.e., $\mathsf{LP_{n}}$ is a subvariety of $\mathsf{DLP}$ for all $n\in\mathbb{Z}^{+}$. In Section 2 , we make use of this fact to show that the representation theorem of [11] for distrubutive $\ell$-pregroups restricts nicely to $n$-periodic $\ell$-pregroups, for every $n$. 
In particular, first we observe that if $\mathbf{\Omega}$ is a chain, then the elements of $\mathbf{F}(\mathbf{\Omega})$ that satisfy $x^{\ell^{n}}=x^{r^{n}}$ form a subalgebra $\mathbf{F}_{n}(\mathbf{\Omega})$ of $\mathbf{F}(\mathbf{\Omega})$ and we provide an alternative characterization for these elements; then we show that every $n$-periodic $\ell$-pregroup embeds in $\mathbf{F}_{n}(\mathbf{\Omega})$, for some integral chain $\mathbf{\Omega}$ (first representation for $n$-periodic $\ell$-pregroups). In other words the operator $\mathbf{F}_{n}$ does for $\mathsf{LP_{n}}$ what the operator $\mathbf{F}$ does for $\mathsf{DLP}$, at least in terms of an embedding theorem. Moreover, utilizing that every integral chain $\mathbf{\Omega}$ is locally isomorphic to $\mathbb{Z}$, we show that every function on $\mathbf{\Omega}$ decomposes into a global component and many local components, where the latter are elements of $\mathbf{F}_{n}(\mathbb{Z})$. This shows the important role that $\mathbf{F}_{n}(\mathbb{Z})$ plays; we give an acessible description of the elements of $\mathbf{F}_{n}(\mathbb{Z})$ and show that the latter is a simple algebra. Finally, we extend the notion of wreath product (for groups, monoids, and $\ell$-groups) to $\ell$-pregroups and prove that every $n$-periodic $\ell$-pregroup embeds into the wreath product $\mathbf{H}\wr\mathbf{F}_{n}(\mathbb{Z})$ of an $\ell$-group $\mathbf{H}$ and $\mathbf{F}_{n}(\mathbb{Z})$ (second representation for $n$-periodic $\ell$-pregroups). As mentioned earlier, $\ell$-pregroups are involutive residuated lattices and it turns out that the notion of $n$-periodicity for $\ell$-pregroups is just a specialization of the one for involutive residuated lattices. In [8] it is shown that the join of all of the $n$-periodic varieties of involutive residuated lattices, for $n\in\mathbb{Z}^{+}$, is equal to the whole variety of involutive residuated lattices; the proof is quite involved using both algebraic and proof-theoretic arguments. It is natural to ask what is the join of all of the $\mathsf{LP_{n}}$’s and in particular whether the join is all of $\mathsf{DLP}$, given that each $\mathsf{LP_{n}}$ is a subvariety of $\mathsf{DLP}$. Since we do not have an analytic proof-theoretic calculus for $\ell$-pregroups we cannot use the methods of [8]. In Section 3 we prove that indeed $\mathsf{DLP}$ is equal to the join of all of the varieties $\mathsf{LP_{n}}$. In more detail, using the method of diagrams of [11] we show that if an equation fails in $\mathsf{DLP}$, then it fails in $\mathbf{F}_{n}(\mathbb{Z})$, for some $n$. As a result we obtain that $\mathsf{DLP}$ is also equal to the join of the varieties $\mathsf{V}(\mathbf{F}_{n}(\mathbb{Z}))$, thus complementing nicely the generation result for $\mathsf{DLP}$ given in [11]. With the goal of converting the representation theorem for $\mathsf{LP_{n}}$, for every $n$, into a generation result, we search for chains $\mathbf{\Omega}_{n}$ such that the variety $\mathsf{LP_{n}}$ is generated by $\mathbf{F}_{n}(\mathbf{\Omega}_{n})$. Since $\bigvee\mathsf{LP_{n}}=\bigvee\mathsf{V}(\mathbf{F}_{n}(\mathbb{Z}))$, we investigate whether we can take $\mathbf{\Omega}_{n}=\mathbb{Z}$ for all $n$. It is easy to see that this cannot work for $n=1$, as $\mathbf{F}_{n}(\mathbb{Z})$ is isomorphic to the $\ell$-group on the integers, hence it generates the variety of abelian $\ell$-groups, not the variety of all $\ell$-groups. 
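Indeed, any $f\in F_{1}(\mathbb{Z})$ satisfies $f^{\ell}=f^{r}$, so the defining inequalities $f^{\ell}f\leq 1\leq f^{r}f$ and $ff^{r}\leq 1\leq ff^{\ell}$ collapse to $f^{\ell}f=ff^{\ell}=1$; thus $f$ is an order-preserving bijection of $\mathbb{Z}$, i.e., a translation $x\mapsto x+k$ for some $k\in\mathbb{Z}$, and $\mathbf{F}_{1}(\mathbb{Z})$ is exactly the abelian $\ell$-group of translations of $\mathbb{Z}$.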
We further prove that, unfortunately, $\mathsf{LP_{n}}\not=\mathsf{V}(\mathbf{F}_{n}(\mathbb{Z}))$ for every single $n$, contrasting with $\bigvee\mathsf{LP_{n}}=\bigvee\mathsf{V}(\mathbf{F}_{n}(\mathbb{Z}))$. Since $\mathbf{F}_{1}(\mathbb{Q})$ ends up being the $\ell$-group of order-preserving permutations on $\mathbb{Q}$, Holland’s generation theorem shows that we can take $\mathbf{\Omega}_{1}$ to be $\mathbb{Q}$, i.e., $\mathsf{LP_{1}}=\mathsf{V}(\mathbf{F}_{1}(\mathbb{Q}))$. We prove, however, that $\mathsf{LP_{n}}\not=\mathsf{V}(\mathbf{F}_{n}(\mathbb{Q}))$, for every $n>1$, leaving unsettled the question of whether there is a choice of $\mathbf{\Omega}_{n}$ for each $n$, or whether we can select a single/uniform chain $\mathbf{\Omega}$ that will serve as $\mathbf{\Omega}_{n}$ for all $n$, i.e., $\mathsf{LP_{n}}=\mathsf{V}(\mathbf{F}_{n}(\mathbf{\Omega}))$. Later, in Section 5, we prove that such a uniform choice of a chain does exist and can be taken to be $\mathbb{Q}\overrightarrow{\times}\mathbb{Z}$, i.e., $\mathsf{LP_{n}}$ is generated by $\mathbf{F}_{n}(\mathbb{Q}\overrightarrow{\times}\mathbb{Z})$, for all $n$ (generation theorem for $\mathsf{LP_{n}}$).

It follows from the wreath product representation $\mathbf{H}\wr\mathbf{F}_{n}(\mathbb{Z})$ that, even though $\mathbf{F}_{n}(\mathbb{Z})$ does not generate the variety $\mathsf{LP_{n}}$, it plays an important role for $n$-periodic $\ell$-pregroups. It turns out that a good understanding of this algebra is needed for the goal of Section 5, so Section 4 is devoted to $\mathbf{F}_{n}(\mathbb{Z})$ and, among other things, to proving that its equational theory is decidable. In particular, we observe that each element $f$ of $\mathbf{F}_{n}(\mathbb{Z})$ admits a useful decomposition $f=f^{\circ}\circ f^{*}$, where $f^{\circ}$ is an automorphism of $\mathbb{Z}$ ($f^{\circ}\in F_{1}(\mathbb{Z})$) and $f^{*}$ is a _short_ element of $\mathbf{F}_{n}(\mathbb{Z})$. Furthermore, we consider diagrams (of [11]) that are $n$-periodic (a new notion) and show that they are suitable for studying $\mathbf{F}_{n}(\mathbb{Z})$: we can go from a failure of an equation in $\mathbf{F}_{n}(\mathbb{Z})$ to an $n$-periodic diagram and back. As the $n$-periodic diagrams obtained in this way can have unbounded _heights_ and a decision algorithm would need a known bound on this height to terminate, we focus on controlling the height of the automorphism part $f^{\circ}$ of a function (the $f^{*}$ part is short by construction). We rephrase the height-control problem in the language of linear algebra and, working with linear systems, we prove that such an upper bound can indeed be computed, thus leading to decidability.

In Section 5 we rely on the wreath product decomposition of the second representation theorem. In particular, each function in $\mathbf{F}_{n}(\mathbf{\Omega})$, where $\mathbf{\Omega}$ is an integral chain, has a global component and many local components, which are elements of $\mathbf{F}_{n}(\mathbb{Z})$ and thus can be controlled by the results of Section 4. Since every integral chain has a lexicographic product structure $\mathbf{J}\overrightarrow{\times}\mathbb{Z}$, where $\mathbf{J}$ is an arbitrary chain, we introduce a notion of diagram suited for $\mathbf{F}_{n}(\mathbf{\Omega})$, called an $n$-periodic partition diagram, that captures the natural partitioning induced by the lexicographic product on the diagram and has local parts that are $n$-periodic diagrams.
We prove that a failure of an equation in $\mathbf{F}_{n}(\mathbf{\Omega})$ yields a failure in an $n$-periodic partition diagram, we show that the latter can be taken to be short per local component, and we further prove that any failure in an $n$-periodic partition diagram can be materialized in $\mathbf{F}_{n}(\mathbb{Q}\overrightarrow{\times}\mathbb{Z})$. This way, we obtain both the generation result $\mathsf{LP_{n}}=\mathsf{V}(\mathbf{F}_{n}(\mathbb{Q}\overrightarrow{\times}\mathbb{Z}))$ and the fact that the latter variety has a decidable equational theory.

## 2 Characterizing the elements of $\mathbf{F}_{n}(\mathbf{\Omega})$.

In this section, for every positive integer $n$, we define the $n$-periodic $\ell$-pregroup of the form $\mathbf{F}_{n}(\mathbf{\Omega})$ over a given chain $\mathbf{\Omega}$. These algebras will play an important role in the representation and generation theorems for $n$-periodic $\ell$-pregroups.

### 2.1 The functional $\ell$-pregroup $\mathbf{F}(\mathbf{\Omega})$.

We first describe the functional/symmetric $\ell$-pregroup $\mathbf{F}(\mathbf{\Omega})$ over a chain $\mathbf{\Omega}$. Given functions $f:\mathbf{P}\rightarrow\mathbf{Q}$ and $g:\mathbf{Q}\rightarrow\mathbf{P}$ between posets, we say that $g$ is a _residual_ for $f$, or that $f$ is a _dual residual_ for $g$, or that $(f,g)$ forms a _residuated pair_ if

$f(a)\leq b\Leftrightarrow a\leq g(b)\text{, for all }a\in P,b\in Q.$

The residual and the dual residual of $f$ are unique when they exist and we denote them by $f^{r}$ and $f^{\ell}$, respectively; if they exist, they are given by

$f^{r}(b)=\max\{a\in P:f(a)\leq b\}\quad\text{ and }\quad f^{\ell}(a)=\min\{b\in Q:a\leq f(b)\}.$

If $f$ has a residual, i.e., if $f^{r}$ exists, then $f$ is called _residuated_, and if it has a dual residual, i.e., if $f^{\ell}$ exists, it is called _dually residuated_. If the residual of $f^{r}$ exists, we denote it by $f^{rr}$ or $f^{(-2)}$ and we call it the _second-order residual_ of $f$. If the dual residual of $f^{\ell}$ exists, we denote it by $f^{\ell\ell}$ or $f^{(2)}$ and we call it the _second-order dual residual_ of $f$. More generally, $f^{r^{n}}$ or $f^{(-n)}$ is the _$n$th-order residual_ of $f$, if it exists, and $f^{\ell^{n}}$ or $f^{(n)}$ is the _$n$th-order dual residual_ of $f$, if it exists.

Given a chain $\mathbf{\Omega}$, we denote by $F(\mathbf{\Omega})$ the set of all maps on $\mathbf{\Omega}$ that have residuals and dual residuals of all orders. This set supports an $\ell$-pregroup $\mathbf{F}(\mathbf{\Omega})$ under composition, the identity map, the pointwise order, and the operations $^{r}$ and $^{\ell}$. Since the elements are functions with a chain as codomain and the order is pointwise, $\mathbf{F}(\mathbf{\Omega})$ is a _distributive_ $\ell$-pregroup, i.e., its underlying lattice is distributive; we denote by $\mathsf{DLP}$ the variety of distributive $\ell$-pregroups. In [6] it is shown that every distributive $\ell$-pregroup can be embedded in $\mathbf{F}(\mathbf{\Omega})$ for some chain $\mathbf{\Omega}$.

We write $a\prec b$ when $a$ is covered by $b$ (i.e., when $b$ is a cover of $a$), and in that case we also write $a+1=b$ and $a=b-1$. A chain $\mathbf{\Omega}$ is said to be _integral_ if it is isomorphic to the lexicographic product $\mathbf{J}\overrightarrow{\times}\mathbb{Z}$ for some chain $\mathbf{J}$. In [11] it is shown that every $\mathbf{F}(\mathbf{\Omega})$ embeds in $\mathbf{F}(\overline{\mathbf{\Omega}})$ for some integral chain $\overline{\mathbf{\Omega}}$ containing $\mathbf{\Omega}$.
This results in an improved representation theorem for $\mathsf{DLP}$: every distrubutive $\ell$-pregroup can be embedded in $\mathbf{F}(\mathbf{\Omega})$ for some integral chain $\mathbf{\Omega}$. ### 2.2 Functions in $\mathbf{F}(\mathbf{\Omega})$, where $\mathbf{\Omega}$ is integral We now take a closer look at the elements in $\mathbf{F}(\mathbf{\Omega})$, where $\mathbf{\Omega}$ is integral—in particular on how they behave under the application of iterated inverses. ###### Lemma 2.1. [11] If $f$ is an order-preserving map on a chain $\mathbf{\Omega}$ with residual $f^{r}$ and dual residual $f^{\ell}$, and $a,b\in\Omega$, then: 1. 1. $f^{\ell}ff^{\ell}=f^{\ell}$, $f^{r}ff^{r}=f^{r}$, $ff^{\ell}f=f$ and $ff^{r}f=f$. 2. 2. We have $b<f(a)$ iff $f^{r}(b)<a$. Also, $f(b)<a$ iff $b<f^{\ell}(a)$. It is well known that a function $f:\mathbf{P}\rightarrow\mathbf{Q}$ is residuated iff it is order-preserving and, for all $b\in Q$, the set $\\{a\in P:f(a)\leq b\\}$ has a maximum in $\mathbf{Q}$; also, it is dually residuated iff it is order-preserving and, for all $b\in Q$, the set $\\{b\in P:a\leq f(b)\\}$ has a minimum. If a function is residuated, then it preserves existing joins, and if it is dually residuated, it preserves existing meets. We start by studying the effect of $f\mapsto f^{\ell\ell}$ on functions of $\mathbf{F}(\mathbf{\Omega})$, where $\mathbf{\Omega}$ is integral; this process is an $\ell$-pregroup automorphism (see [11], for example). ###### Lemma 2.2. Assume that $\mathbf{\Omega}$ is an integral chain, $f:\Omega\rightarrow\Omega$ is an order-preserving map and $x\in\Omega$. 1. 1. If $f^{\ell\ell}$ exists, then $f^{\ell\ell}(x)=f(x-1)+1$. 2. 2. If $f^{rr}$ exists, then $f^{rr}(x)=f(x+1)-1$. ###### Proof. (1) We first show the claim for the case where $f^{\ell}f(x)=x$. Since $x=f^{\ell}f(x)=\min\\{b\in\Omega:f(x)\leq f(b)\\}$, we get $x-1\notin\\{b\in\Omega:f(x)\leq f(b)\\}$, hence $f(x)\nleq f(x-1)$; since $\mathbf{\Omega}$ is a chain we get $f(x-1)<f(x)$. Therefore, $f(x-1)<f(x-1)+1\leq f(x)$, which yields $f^{\ell}(f(x-1)+1)=x$, by the definition of $f^{\ell}$. On the other hand, Lemma 2.1(1) yields $ff^{\ell}f(x-1)=f(x-1)<f(x-1)+1$, so by Lemma 2.1(2) we get $f^{\ell}f(x-1)<f^{\ell}(f(x-1)+1)=x$. Therefore, $f^{\ell}(f(x-1))<x$ and $f^{\ell}(f(x-1)+1)\leq x$; taking into account that $f(x-1)\prec f(x-1)+1$ we obtain $f^{\ell\ell}(x)=\min\\{b\in\Omega:x\leq f^{\ell}(b)\\}=f(x-1)+1$. Now, we consider the case $f^{\ell}f(x)\neq x$. Since $f^{\ell}f(x)\leq x$ by residuation, we get $f^{\ell}f(x)<x$. Also, since $f(x)<f(x)+1$, Lemma 2.1(2) yields $x<f^{\ell}(f(x)+1)$; so $f^{\ell}f(x)<x<f^{\ell}(f(x)+1)$. Hence, by the definition of $f^{\ell\ell}$, we have $f^{\ell\ell}(x)=f(x)+1=f^{\ell\ell}(f^{\ell}(f(x)+1))$. So, for $y:=f^{\ell}(f(x)+1)$ we have $f^{\ell\ell}(x)=f^{\ell\ell}(y)$. Consequently, using Lemma 2.1(1) we obtain $f^{\ell}f(y)=f^{\ell}ff^{\ell}(f(x)+1)=f^{\ell}(f(x)+1)=y$. In particular, the element $y$ satisfies the conditions of the previous case, so $f^{\ell\ell}(x)=f^{\ell\ell}(y)=f(y-1)+1$. Below we prove that $f(y-1)=f(x-1)$, which yields $f^{\ell\ell}(x)=f(y-1)+1=f(x-1)+1$, as desired. Since $y-1<y=f^{\ell}(f(x)+1)$, Lemma 2.1(2) gives $f(y-1)<f(x)+1$, so $f(y-1)\leq f(x)$. As mentioned above, we have $f^{\ell}f(x)<x$, hence $f^{\ell}f(x)\leq x-1$. Therefore, $f(x)=ff^{\ell}f(x)\leq f(x-1)$ by using Lemma 2.1(1). In summary, we have $f(y-1)\leq f(x)\leq f(x-1)$. Since $x<y$, we also get the other inequality: $f(x-1)\leq f(y-1)$. 
Statement (2) is the dual of (1). ∎ Using induction and Lemma 2.2 we get the following formula. ###### Lemma 2.3. If $\mathbf{\Omega}$ is an integral chain, $f\in F(\mathbf{\Omega})$ and $n\in\mathbb{Z}$, then $f^{(2n)}(x)=f(x-n)+n$ for all $x\in\Omega$. As usual, we denote the inverse image of a set $X\subseteq\Omega$ via a function $f$ by $f^{-1}[X]$; we write $f^{-1}[a]$ for $f^{-1}[\\{a\\}]$. ###### Lemma 2.4. [11] Given a chain $\mathbf{\Omega}$, a map $f$ on $\Omega$ is residuated and dually residuated iff $f$ is order-preserving and for all $a\in\Omega$: 1. 1. If $a\in f[\Omega]$, then $f^{-1}[a]=[b,c]$, for some $b\leq c$ in $\Omega$. 2. 2. If $a\notin f[\Omega]$, there exists $b,c\in\Omega$ such that, $b\prec c$ and $a\in(f(b),f(c))$. In the case (1), $f^{\ell}(a)=b$ and $f^{r}(a)=c$; in the case (2), $f^{\ell}(a)=c$ and $f^{r}(a)=b$. Moreover, if $\mathbf{\Omega}$ is integral then this is further equivalent to $f\in F(\mathbf{\Omega})$. A simple characterization of the elements of $\mathbf{F(\mathbb{Z})}$ is given by the next lemma. ###### Lemma 2.5. [11] If $f$ is a function on $\mathbb{Z}$, then $f\in F(\mathbb{Z})$ iff $f$ is order-preserving and $f^{-1}[a]$ is a bounded interval for all $a\in f[\mathbb{Z}]$ iff $f$ is an order-preserving and $f^{-1}[a]$ is a finite set for all $a\in f[\mathbb{Z}]$. ### 2.3 The $n$-periodic subalgebra $\mathbf{F}_{n}(\mathbf{\Omega})$ of $\mathbf{F}(\mathbf{\Omega})$. In the language of $\ell$-pregroups, we define $x^{(n)}$ for all $n\in\mathbb{Z}$, by $x^{(0)}:=x$, $x^{(n+1)}:=(x^{(n)})^{\ell}$ if $n>0$ and $x^{(n-1)}:=(x^{(n)})^{r}$ if $n<0$. Note that this notation agrees with the notation for $f^{(n)}$, when this function exists. Given $n\in\mathbb{Z}^{+}$, we say that an element $x$ is $n$-periodic if $x^{\ell^{n}}=x^{r^{n}}$, i.e., $x^{(n)}=x^{(-n)}$, or equivalently $x^{(2n)}=x$; an $\ell$-pregroup is _$n$ -periodic_ if all of its elements are, and it is called _periodic_ if it is $n$-periodic for some $n$. In [7] it is shown that every periodic $\ell$-pregroup is in fact distributive. For every chain $\mathbf{\Omega}$, $F_{n}(\mathbf{\Omega})$ denotes the set of all $n$-periodic elements of $\mathbf{F}(\mathbf{\Omega})$; the following lemma shows that this set supports a subalgebra $\mathbf{F}_{n}(\mathbf{\Omega})$ of $\mathbf{F}(\mathbf{\Omega})$. ###### Lemma 2.6. Let $n$ be a positive integer. 1. 1. The $n$-periodic elements of any $\ell$-pregroup form a $n$-periodic subalgebra. 2. 2. For every chain $\mathbf{\Omega}$, $\mathbf{F}_{n}(\mathbf{\Omega})$ is an $n$-periodic subalgebra of $\mathbf{F}(\mathbf{\Omega})$. ###### Proof. (1) As mentioned in the introduction, $\ell$-pregroups are involutive residuated lattices, so they satisfy involutivity $x^{\ell r}=x=x^{r\ell}$, the De Morgan laws $(x\vee y)^{\ell}=x^{\ell}\wedge y^{\ell}$, $(x\wedge y)^{\ell}=x^{\ell}\vee y^{\ell}$, $(x\vee y)^{r}=x^{r}\wedge y^{r}$, $(x\wedge y)^{r}=x^{r}\vee y^{r}$, and, since $x+y=xy$, the inverses are monoid antihomomorphisms $(xy)^{\ell}=y^{\ell}x^{\ell}$, $(xy)^{r}=y^{r}x^{r}$, and $1^{\ell}=1^{r}=1$; see [7]. Therefore, the map $x\mapsto x^{\ell\ell}$ is an endomorphism of the $\ell$-pregroup $\mathbf{L}$ (actually an automorphism), hence so are all of its (positive and negative) powers. In particular, $x\mapsto x^{(2n)}$ is an endomorphism, hence the set of its fixed points is a subalgebra (for example, if $x^{(2n)}=x$ and $y^{(2n)}=y$, then $(xy)^{(2n)}=x^{(2n)}y^{(2n)}=xy$). 
Clearly, an element of $\mathbf{L}$ is $n$-periodic iff it is a fixed point of $x\mapsto x^{(2n)}$. As a result, the set of all $n$-periodic elements of $\mathbf{L}$ forms a subalgebra of $\mathbf{L}$ and all of its elements are $n$-periodic. (2) follows by applying (1) to $\mathbf{F}(\mathbf{\Omega})$. ∎ We now establish our first representation theorem for $n$-periodic $\ell$-pregroups, which will be useful in obtaining the generation and decidability results for $n$-periodic $\ell$-pregroups; the second representation theorem will be Theorem 2.16. ###### Theorem 2.7. Given $n\in\mathbb{Z}^{+}$, every $n$-periodic $\ell$-pregroup can be embedded in $\mathbf{F}_{n}(\mathbf{\Omega})$, for some integral chain $\mathbf{\Omega}$, i.e., in $\mathbf{F}_{n}(\mathbf{J}\overrightarrow{\times}\mathbb{Z})$, for some chain $\mathbf{J}$. ###### Proof. Let $\mathbf{L}$ be an $n$-periodic $\ell$-pregroup. By [7] $\mathbf{L}$ is a distributive, so by [11] $\mathbf{L}$ embeds in $\mathbf{F}(\mathbf{\Omega})$, for some integral chain $\mathbf{\Omega}$. Since $\mathbf{L}$ satisfies the equation $x^{(2n)}=x$, the image of the embedding also satisfies the equation; so the image is contained in $\mathbf{F}_{n}(\mathbf{\Omega})$. The last part of the theorem follows from the fact that every integral chain is isomorphic to $\mathbf{J}\overrightarrow{\times}\mathbb{Z}$ for some chain $\mathbf{J}$. ∎ We will dedicate the remainder of this subsection to characterizing the functions in $\mathbf{F}_{n}(\mathbf{\Omega})$, where $\mathbf{\Omega}$ is an integral chain. ###### Lemma 2.8. If $\mathbf{\Omega}$ is an integral chain, $f\in F(\mathbf{\Omega})$ and $n\in\mathbb{Z}^{+}$, each of the following are equivalent to $f\in F_{n}(\mathbf{\Omega})$: 1. 1. For all $x\in\Omega$, $f(x)=f(x-n)+n$. 2. 2. For all $x\in\Omega$ and $k\in\mathbb{Z}$, $f(x)=f(x-kn)+kn$. 3. 3. For all $x,y\in\Omega$: 1. (a) $x\leq y+n\Rightarrow f(x)\leq f(y)+n$ and 2. (b) $x+n\leq y\Rightarrow f(x)+n\leq f(y)$. 4. 4. For all $x,y\in\Omega$ and $k\in\mathbb{Z}$: $x\leq y+kn\Rightarrow f(x)\leq f(y)+kn$. ###### Proof. The equivalence to (1) follows from $n$-periodicity and Lemma 2.3. Also, $(1)\Rightarrow(2)$ follows by induction and the converse is obvious. $(4)\Rightarrow(3)$ is obvious, as well. For $(2)\Rightarrow(4)$, if $x,y\in\Omega$, $k\in\mathbb{Z}$ and $x\leq y+kn$, then by order preservation and (2) we have $f(x)\leq f(y+kn)=f(y+kn- kn)+kn=f(y)+kn$. For $(3)\Rightarrow(1)$, if $x\in\Omega$, then $x\leq(x-n)+n$, so $f(x)\leq f(x-n)+n$ by (3a). Since $x-n+n\leq x$, (3b) gives $f(x-n)+n\leq f(x)$; so $f(x-n)+n=f(x)$. ∎ In particular, for $\mathbf{F}_{n}(\mathbb{Z})$ we make use of the characterization for $\mathbf{F}(\mathbb{Z})$, as well. ###### Lemma 2.9. For all $f:\mathbb{Z}\rightarrow\mathbb{Z}$, and $n\in\mathbb{Z}^{+}$, we have $f\in F_{n}(\mathbb{Z})$ iff 1. 1. for all $a\in f[\mathbb{Z}]$, $f^{-1}[a]$ is finite (equivalently, a bounded interval) and, 2. 2. for all $x,y,k\in\mathbb{Z}$: $x\leq y+kn\Rightarrow f(x)\leq f(y)+kn$. ###### Proof. In the forward direction, observe that by Lemma 2.4, we know that for all $a\in f[\mathbb{Z}]$, there exist $b,c\in\mathbb{Z}$ such that $f^{-1}[a]=[b,c]$, so $f^{-1}[a]$ is bounded. Also, by Lemma 2.8, for all $x,y,k\in\mathbb{Z}$, if $x\leq y+kn$, then $f(x)\leq f(y)+kn$. For the converse direction, if $x,y\in\mathbb{Z}$ and $x\leq y$, by (2) we have $f(x)\leq f(y)$, so $f$ is order preserving. 
Since (1) also holds, Lemma 2.5 implies $f\in F(\mathbb{Z})$; by (2) and Lemma 2.8, we get $f\in F_{n}(\mathbb{Z})$. ∎ ###### Lemma 2.10. For every $n$, the $\ell$-pregroup $\mathbf{F}_{n}(\mathbb{Z})$ is simple. ###### Proof. The invertible elements of $\mathbf{F}_{n}(\mathbb{Z})$ are the order-preserving bijections on $\mathbb{Z}$, so they are precisely the translations $t_{k}:x\mapsto x-k$, for $k\in\mathbb{Z}$. Using the results in [10] about the correspondence between congruences and convex normal submonoids of the negative cone, to prove that $\mathbf{F}_{n}(\mathbb{Z})$ is simple it suffices to prove that if $M$ is a non-trivial convex normal submonoid of the negative cone of $\mathbf{F}_{n}(\mathbb{Z})$, then $M$ is the whole negative cone. Given a strictly negative $f\in M$, the element $g:=f\wedge f^{\ell\ell}\wedge\cdots\wedge f^{\ell^{2n-2}}$ is invertible, since $(f\wedge f^{\ell\ell}\wedge\cdots\wedge f^{\ell^{2n-4}}\wedge f^{\ell^{2n-2}})^{\ell\ell}=f^{\ell\ell}\wedge f^{\ell^{4}}\wedge\cdots\wedge f^{\ell^{2n-2}}\wedge f^{\ell^{2n}}=f^{\ell\ell}\wedge f^{\ell^{4}}\wedge\cdots\wedge f^{\ell^{2n-2}}\wedge f$, and $g\leq f<1$, so $g$ is a strictly decreasing translation and $M$ contains $g$ and all of its powers; therefore $M$ contains translations $t_{k}$ for arbitrarily large $k$. We now argue that every negative element $h$ of $F_{n}(\mathbb{Z})$ is above some power of $g$ and thus belongs to $M$. Since $h$ is negative, for all $m\in\mathbb{Z}$ there exists $k_{m}\geq 0$ such that $h(m)=m-k_{m}$. Since $h$ is $n$-periodic, there are only $n$-many distinct such $k_{m}$’s; let $k^{\prime}$ be the largest. We have $t_{k}\leq h$ for $k\geq k^{\prime}$, so $h\in M$ by convexity. ∎ The following two results provide an even more transparent characterization of the elements of $\mathbf{F}_{n}(\mathbf{J}\overrightarrow{\times}\mathbb{Z})$ where $\mathbf{J}$ is a chain. Note that if $f\in F_{n}(\mathbf{\Omega})$ then not only $f^{(2n)}=f$, but moreover $f^{(2kn)}=f$ for all $k\in\mathbb{Z}$. For $x,y\in J\times\mathbb{Z}$, we write $x\equiv y$ iff they have the same first coordinate, i.e., if there exists $k\in\mathbb{Z}$ such that $x=y+k$ ($x$ and $y$ belong to the same component of the lexicographic product). The following lemma shows that maps in $\mathbf{F}_{n}(\mathbf{J}\overrightarrow{\times}\mathbb{Z})$ preserve this equivalence relation. ###### Lemma 2.11. If $\mathbf{J}$ is a chain, $\mathbf{\Omega}=\mathbf{J}\overrightarrow{\times}\mathbb{Z}$, $n\in\mathbb{Z}^{+}$, $f\in F_{n}(\mathbf{\Omega})$, $j,i\in J$ and $m\in\mathbb{Z}$, then the following hold. 1. 1. For all $x,y\in\Omega$, $x\equiv y\Rightarrow f(x)\equiv f(y)$. In other words, if $f(j,0)=(i,m)$, then for all $r\in\mathbb{Z}$, there exists $k\in\mathbb{Z}$, such that $f(j,r)=(i,k)$. 2. 2. If $x\notin f[\Omega]$, then there exists $b\in\Omega$ such that $f(b)<x<f(b+1)$. In other words, if $(i,m)\notin f[\Omega]$, then there exist $b\in\Omega$, and $r,k\in\mathbb{Z}$ such that $f(b)=(i,r)$, $f(b+1)=(i,k)$ and $m\in(r,k)$. ###### Proof. (1) Suppose $f(j,0)=(i,m)$ and let $r\leq 0$; the proof for $r>0$ is dual. We have $f(j,r)=(i^{\prime},k)$, where $i^{\prime}\in J$ and $k\in\mathbb{Z}$; we will show that $i^{\prime}=i$. Since $(j,r)\leq(j,0)$, we get $(i^{\prime},k)=f(j,r)\leq f(j,0)=(i,m)$ by the order-preservation of $f$, so $i^{\prime}\leq i$.
Using the order-preservation of $f$, Lemma 2.3 and the $n$-periodicity of $f$, we get $(i,m)=f(j,0)\leq f(j,r-rn)=f^{(2rn)}((j,r-rn)+rn)-rn=f^{(2rn)}(j,r)-rn=f(j,r)-rn=(i^{\prime},k)-rn=(i^{\prime},k-rn)$, which implies $i\leq i^{\prime}$; therefore $i=i^{\prime}$. (2) If $(i,m)\notin f[\Omega]$ then, since $f\in F(\mathbf{\Omega})$, by Lemma 2.4 there exist $b,c\in\Omega$ with $b\prec c$ such that $(i,m)\in(f(b),f(c))$. Then, by (1), there exist $j\in J$ and $r,k\in\mathbb{Z}$ such that $f(b)=(j,r)$ and $f(c)=f(b+1)=(j,k)$; so $(i,m)\in((j,r),(j,k))$. Therefore, $i=j$ and $m\in(r,k)$. ∎ The following theorem shows that maps in $\mathbf{F}_{n}(\mathbf{J}\overrightarrow{\times}\mathbb{Z})$ consist of a global component map on $\mathbf{J}$ and many local component maps, one for each $j\in J$. The ability to work on the global level and on the local levels separately will be crucial for the results in the next sections. ###### Theorem 2.12. Given a chain $\mathbf{J}$, then $f\in F_{n}(\mathbf{J}\overrightarrow{\times}\mathbb{Z})$ if and only if there exists an order-preserving bijection $\widetilde{f}:J\rightarrow J$ and maps $\overline{f}_{j}\in F_{n}(\mathbb{Z})$, for all $j\in J$, such that $f(j,r)=(\widetilde{f}(j),\overline{f}_{j}(r))$ for all $(j,r)\in J\times\mathbb{Z}$. ###### Proof. We first show that if $\widetilde{f}:J\rightarrow J$ is an order-preserving bijection and $\overline{f}_{j}\in F_{n}(\mathbb{Z})$, for all $j\in J$, then $f\in F(\mathbf{J}\overrightarrow{\times}\mathbb{Z})$, where $f(j,r)=(\widetilde{f}(j),\overline{f}_{j}(r))$ for all $(j,r)\in J\times\mathbb{Z}$, by verifying the conditions of Lemma 2.4; we set $\mathbf{\Omega}:=\mathbf{J}\overrightarrow{\times}\mathbb{Z}$ for brevity. If $(j_{1},r_{1})\leq(j_{2},r_{2})$ in $\mathbf{\Omega}$, then $j_{1}<j_{2}$ or ($j_{1}=j_{2}$ and $r_{1}\leq r_{2}$). In the first case, since $\widetilde{f}$ is an order-preserving bijection, we have $\widetilde{f}(j_{1})<\widetilde{f}(j_{2})$ and in the second case, $\widetilde{f}(j_{1})=\widetilde{f}(j_{2})$ and $\overline{f}_{j_{1}}(r_{1})\leq\overline{f}_{j_{1}}(r_{2})=\overline{f}_{j_{2}}(r_{2})$, since $\overline{f}_{j_{1}}$ is order-preserving. Hence, in both cases $f(j_{1},r_{1})=(\widetilde{f}(j_{1}),\overline{f}_{j_{1}}(r_{1}))\leq(\widetilde{f}(j_{2}),\overline{f}_{j_{2}}(r_{2}))=f(j_{2},r_{2})$, i.e., $f$ is order-preserving. If $(i,m)\in f[\Omega]$, then $(i,m)=f(j,\ell)=(\widetilde{f}(j),\overline{f}_{j}(\ell))$, for some $(j,\ell)\in J\times\mathbb{Z}$. Actually, since $\widetilde{f}$ is a bijection, $j\in J$ is unique with the property $\widetilde{f}(j)=i$. Also, since $m\in\overline{f}_{j}[\mathbb{Z}]$ and $\overline{f}_{j}\in F(\mathbb{Z})$, by Lemma 2.4 there exist $r,k\in\mathbb{Z}$ such that $(\overline{f}_{j})^{-1}[m]=[r,k]$. Therefore, $f^{-1}[(i,m)]=[(j,r),(j,k)]$. If $(i,m)\not\in f[\Omega]$, since $\widetilde{f}$ is a bijection, there exists $j\in J$ such that $\widetilde{f}(j)=i$, hence $m\not\in\overline{f}_{j}[\mathbb{Z}]$. Since $\overline{f}_{j}\in F(\mathbb{Z})$, by Lemma 2.4 there exist $r,k\in\mathbb{Z}$ such that $r\prec k$ and $m\in(\overline{f}_{j}(r),\overline{f}_{j}(k))$. Hence, $(j,r)\prec(j,k)$ and $(i,m)\in(f(j,r),f(j,k))$. Having shown that $f\in F(\mathbf{\Omega})$, to prove $f\in F_{n}(\mathbf{\Omega})$ we verify that $f$ is $n$-periodic.
For all $(j,r)\in\Omega$, by applying Lemma 2.3 to $f$ and to $\overline{f}_{j}$ and using $\overline{f}_{j}\in F_{n}(\mathbb{Z})$, we get $f^{(2n)}(j,r)=f((j,r)-n)+n=f(j,r-n)+n=(\widetilde{f}(j),\overline{f}_{j}(r-n))+n=(\widetilde{f}(j),\overline{f}_{j}(r-n)+n)=(\widetilde{f}(j),\overline{f}_{j}^{(2n)}(r))=(\widetilde{f}(j),\overline{f}_{j}(r))=f(j,r)$. Conversely, if $f\in F_{n}(\mathbf{\Omega})$, we define the function $\widetilde{f}:J\rightarrow J$ by $\widetilde{f}(j)=i$ iff $f(j,0)=(i,m)$ for some $m\in\mathbb{Z}$; by Lemma 2.11(1) $\widetilde{f}$ is well-defined. To show that $\widetilde{f}$ is one-to-one, we assume that $\widetilde{f}(j_{1})=\widetilde{f}(j_{2})$ for some $j_{1},j_{2}\in J$; since $\mathbf{J}$ is a chain, we may further assume that $j_{1}\leq j_{2}$. Then, there exist $r_{1},r_{2}\in\mathbb{Z}$ such that $f(j_{1},0)=(\widetilde{f}(j_{1}),r_{1})$ and $f(j_{2},0)=(\widetilde{f}(j_{1}),r_{2})$; since $j_{1}\leq j_{2}$, by the order-preservation of $f$ we have $(\widetilde{f}(j_{1}),r_{1})=f(j_{1},0)\leq f(j_{2},0)=(\widetilde{f}(j_{2}),r_{2})=(\widetilde{f}(j_{1}),r_{2})$, so $r_{1}\leq r_{2}$. Let $k\in\mathbb{Z}$ be such that $r_{2}<r_{1}+kn$. Using this fact, Lemma 2.3 and the periodicity of $f$, we have $f(j_{2},0)=(\widetilde{f}(j_{1}),r_{2})<(\widetilde{f}(j_{1}),r_{1}+kn)=(\widetilde{f}(j_{1}),r_{1})+kn=f(j_{1},0)+kn=f(j_{1},kn-kn)+kn=f((j_{1},kn)-kn)+kn=f^{(2kn)}(j_{1},kn)=f(j_{1},kn)$; hence $f(j_{1},kn)\nleq f(j_{2},0)$. By order preservation we get $(j_{1},kn)\not\leq(j_{2},0)$ and since $\Omega$ is a chain we obtain $(j_{2},0)<(j_{1},kn)$; thus $j_{2}\leq j_{1}$. In conclusion $j_{1}=j_{2}$. To show that $\widetilde{f}$ is onto, let $i\in J$ and consider $(i,0)\in\Omega$. If $(i,0)\in f[\Omega]$, there exists $(j,r)\in\Omega$ such that $f(j,r)=(i,0)$; by Lemma 2.11(1) we have $f(j,0)=(i,k)$ for some $k\in\mathbb{Z}$, so $i\in\widetilde{f}[J]$. If $(i,0)\notin f[\Omega]$, by Lemma 2.11(2) there exist $j\in J$ and $r,m_{1},m_{2}\in\mathbb{Z}$, such that $f(j,r)=(i,m_{1})$, $f(j,r+1)=(i,m_{2})$, and $0\in(m_{1},m_{2})$; then, by Lemma 2.11(1), $f(j,0)=(i,m_{3})$ for some $m_{3}\in\mathbb{Z}$, so $\widetilde{f}(j)=i$, i.e., $i\in\widetilde{f}[J]$. Now, for $j\in J$, we define $\overline{f}_{j}:\mathbb{Z}\rightarrow\mathbb{Z}$ where $\overline{f}_{j}(r)=m$ iff $f(j,r)=(\widetilde{f}(j),m)$ for some $m\in\mathbb{Z}$; by Lemma 2.11(1) $\overline{f}_{j}$ is well-defined. We will show that $\overline{f}_{j}\in F(\mathbb{Z})$ by verifying the condition in Lemma 2.5. If $m\in\overline{f}_{j}[\mathbb{Z}]$, then there exists $r\in\mathbb{Z}$ such that $\overline{f}_{j}(r)=m$, hence, by the definition of $\overline{f}_{j}$, there exists $i\in J$ such that $f(j,r)=(i,m)$. Since $f\in F(\mathbf{\Omega})$, by Lemma 2.4(1) there exist $(j_{1},r_{1}),(j_{2},r_{2})\in\Omega$ such that $f^{-1}[(i,m)]=[(j_{1},r_{1}),(j_{2},r_{2})]$. So, by Lemma 2.11(1) $\widetilde{f}(j_{1})=i=\widetilde{f}(j_{2})$ and since $\widetilde{f}$ is a bijection and $\widetilde{f}(j)=i$, we get $j_{1}=j_{2}=j$. Then $f^{-1}[(i,m)]=[(j,r_{1}),(j,r_{2})]$; thus $\overline{f}_{j}^{-1}[m]=[r_{1},r_{2}]$. Finally, we prove that $\overline{f}_{j}\in F_{n}(\mathbb{Z})$, by showing that $\overline{f}_{j}$ is $n$-periodic.
For all $j\in J$ and $m\in\mathbb{Z}$, by the $n$-periodicity of $f$ and Lemma 2.3, we have $(\widetilde{f}(j),\overline{f}_{j}(m))=f(j,m)=f^{(2n)}(j,m)=f((j,m)-n)+n=f(j,m-n)+n=(\widetilde{f}(j),\overline{f}_{j}(m-n))+n=(\widetilde{f}(j),\overline{f}_{j}(m-n)+n)$; hence $\overline{f}_{j}(m)=\overline{f}_{j}(m-n)+n$, which equals $(\overline{f}_{j})^{(2n)}(m)$ by Lemma 2.3, so $\overline{f}_{j}$ is $n$-periodic. ∎ ###### Example 2.13. In Figure 1 we can see a function $f$ of $\mathbf{F}_{2}(\mathbb{Q}\overrightarrow{\times}\mathbb{Z})$ and its $0$-th component $\overline{f}_{0}$, which is an element of $\mathbf{F}_{2}(\mathbb{Z})$. Figure 1: An element of $\mathbf{F}_{n}(\mathbb{Q}\overrightarrow{\times}\mathbb{Z})$ and one of its components in $\mathbf{F}_{n}(\mathbb{Z})$. ### 2.4 Wreath products In this section we show that $\mathbf{F}_{n}(\mathbf{J}\overrightarrow{\times}\mathbb{Z})$ is actually a wreath product and that every $n$-periodic $\ell$-pregroup can be embedded in the wreath product of an $\ell$-group and an $n$-periodic simple $\ell$-pregroup (our second representation theorem for $n$-periodic $\ell$-pregroups). First recall that if $\mathbf{H}=(H,\overline{\circ},1_{H})$ and $\mathbf{N}=(N,\odot,1_{N})$ are monoids and $\mathbf{H}$ acts on the monoid $\mathbf{N}$ on the right via $\otimes:\mathbf{H}\curvearrowright\mathbf{N}$ (i.e., $n\otimes 1_{H}=n$, $(n\otimes h_{1})\otimes h_{2}=n\otimes(h_{1}\overline{\circ}h_{2})$, $(n_{1}\odot n_{2})\otimes h=(n_{1}\otimes h)\odot(n_{2}\otimes h)$ and $1_{N}\otimes h=1_{N}$), then the _semidirect product_ $\mathbf{H}\ltimes_{\otimes}\mathbf{N}$ is defined by $(h_{1},n_{1})(h_{2},n_{2})=(h_{1}\overline{\circ}h_{2},(n_{1}\otimes h_{2})\odot n_{2})$ and it forms a monoid with unit $(1_{H},1_{N})$. If we take $\mathbf{N}=\mathbf{F}^{J}$, where $J$ is a set and $\mathbf{F}=(F,\ocirc,1_{F})$ is a monoid, i.e., the operation of $\mathbf{N}$ is $(n_{1}\odot n_{2})(j):=n_{1}(j)\ocirc n_{2}(j)$, and if $\mathbf{H}$ acts on the set $J$ on the left by $*:\mathbf{H}\curvearrowright J$, then $\mathbf{H}$ acts on $\mathbf{F}^{J}$ on the right via $\otimes:\mathbf{H}\curvearrowright\mathbf{F}^{J}$, where $n\otimes h:=n\circ\lambda_{h}$ and $\lambda_{h}(j):=h*j$, hence $(n\otimes h)(j)=(n\circ\lambda_{h})(j)=n(\lambda_{h}(j))=n(h*j)$. Indeed, $\otimes$ is a right action because $((n_{1}\odot n_{2})\otimes h)(j)=((n_{1}\odot n_{2})\circ\lambda_{h})(j)=(n_{1}\odot n_{2})(\lambda_{h}(j))=n_{1}(\lambda_{h}(j))\ocirc n_{2}(\lambda_{h}(j))=(n_{1}\circ\lambda_{h})(j)\ocirc(n_{2}\circ\lambda_{h})(j)=(n_{1}\otimes h)(j)\ocirc(n_{2}\otimes h)(j)=((n_{1}\otimes h)\odot(n_{2}\otimes h))(j)$ and $(1_{N}\otimes h)(j)=(1_{N}\circ\lambda_{h})(j)=1_{N}(\lambda_{h}(j))=1_{F}=1_{N}(j)$. Also, for every $h_{1},h_{2}\in H$ and $j\in J$, we have $(\lambda_{h_{1}}\circ\lambda_{h_{2}})(j)=h_{1}*(h_{2}*j)=(h_{1}\overline{\circ}h_{2})*j=\lambda_{h_{1}\overline{\circ}h_{2}}(j)$, so $(n\otimes h_{1})\otimes h_{2}=n\circ\lambda_{h_{1}}\circ\lambda_{h_{2}}=n\circ\lambda_{h_{1}\overline{\circ}h_{2}}=n\otimes(h_{1}\overline{\circ}h_{2})$.
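To make the action and multiplication formulas above concrete, here is a minimal computational sketch (purely illustrative and not part of the formal development; the choice of $J$, the sample elements, and all Python names are ours). It instantiates the semidirect product $\mathbf{H}\ltimes_{\otimes}\mathbf{F}^{J}$ with $\mathbf{H}$ the monoid of all self-maps of a three-element set $J$ under composition, acting on $J$ by $h*j:=h(j)$, and with fiber monoid $\mathbf{F}=(\mathbb{Z},+,0)$; in the wreath products of interest below one would instead take $\mathbf{H}=\mathbf{Aut}(\mathbf{J})$ for a chain $\mathbf{J}$ and $\mathbf{F}=\mathbf{F}_{n}(\mathbb{Z})$.

```python
# Illustrative sketch only: a toy instance of the semidirect product H ⋉_⊗ F^J.
# H = all self-maps of J = {0,1,2} under composition, acting on J by h*j := h(j);
# F = (Z, +, 0), so N = F^J carries the pointwise operation ⊙.

J = (0, 1, 2)

def compose(h1, h2):
    """(h1 ∘̄ h2)(j) = h1(h2(j)); self-maps of J are stored as tuples of values."""
    return tuple(h1[h2[j]] for j in J)

def act(n, h):
    """n ⊗ h := n ∘ λ_h, i.e. (n ⊗ h)(j) = n(h*j) = n(h(j))."""
    return tuple(n[h[j]] for j in J)

def odot(n1, n2):
    """The pointwise operation of N = F^J, here addition in F = (Z, +, 0)."""
    return tuple(a + b for a, b in zip(n1, n2))

def mult(x, y):
    """Semidirect-product multiplication (h1, n1)(h2, n2) = (h1 ∘̄ h2, (n1 ⊗ h2) ⊙ n2)."""
    (h1, n1), (h2, n2) = x, y
    return (compose(h1, h2), odot(act(n1, h2), n2))

one = ((0, 1, 2), (0, 0, 0))      # the unit (1_H, 1_N)
x = ((1, 2, 0), (5, -1, 3))       # sample pairs (h, n)
y = ((2, 2, 1), (0, 4, 7))
z = ((0, 0, 2), (1, 1, 1))

assert mult(one, x) == mult(x, one) == x            # unit law
assert mult(mult(x, y), z) == mult(x, mult(y, z))   # associativity on the samples
```

Running the fragment simply confirms, on the sample elements, that the displayed multiplication is associative and has $(1_{H},1_{N})$ as its unit.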
The semidirect product $\mathbf{H}\ltimes_{\otimes}\mathbf{F}^{J}$ is known as the _wreath product_ $\mathbf{H}\wr_{J,*}\mathbf{F}$, or simply $\mathbf{H}\wr\mathbf{F}$ when $\mathbf{H}$ is a submonoid of $\mathbf{End}(J)$ and $h*j:=h(j)$. One such case is when we take $\mathbf{H}:=\mathbf{Aut}(\mathbf{J})$ to be the order-preserving automorphisms of a chain $\mathbf{J}$, resulting in the wreath product $\mathbf{Aut}(\mathbf{J})\wr\mathbf{F}=\mathbf{Aut}(\mathbf{J})\ltimes\mathbf{F}^{J}$. Now, we expand the monoid structure of the wreath product $\mathbf{H}\wr_{J,*}\mathbf{F}$ relative to $J$ and $*$, when $\mathbf{H}$ is an $\ell$-group, $\mathbf{J}$ is a chain, $\mathbf{F}$ is an $\ell$-pregroup, and $*:\mathbf{H}\curvearrowright\mathbf{J}$ is a left action of $\mathbf{H}$ on the chain $\mathbf{J}$, by defining an order and two inversion operations. We say that a monoid $\mathbf{H}$ acts on a chain $\mathbf{J}$ via $*:\mathbf{H}\curvearrowright\mathbf{J}$, if $\mathbf{H}$ acts on the set $J$ and $*$ is monotone on both coordinates. We define $(h_{1},n_{1})\leq(h_{2},n_{2})$ by: $h_{1}\leq h_{2}$ and $h_{1}*j=h_{2}*j\Rightarrow n_{1}(j)\leq n_{2}(j)$, for all $j$. Also, $(h_{1},n_{1})\vee(h_{2},n_{2})=(h_{1}\vee h_{2},n_{\vee})$ and $(h_{1},n_{1})\wedge(h_{2},n_{2})=(h_{1}\wedge h_{2},n_{\wedge})$, where for every $j$, $n_{\vee}(j)=\begin{cases}n_{2}(j)&\text{ if }h_{1}*j<h_{2}*j\\\ n_{1}(j)&\text{ if }h_{2}*j<h_{1}*j\\\ n_{1}(j)\vee n_{2}(j)&\text{ if }h_{1}*j=h_{2}*j\end{cases}\qquad n_{\wedge}(j)=\begin{cases}n_{1}(j)&\text{ if }h_{1}*j<h_{2}*j\\\ n_{2}(j)&\text{ if }h_{2}*j<h_{1}*j\\\ n_{1}(j)\wedge n_{2}(j)&\text{ if }h_{1}*j=h_{2}*j\end{cases}$ Also, we define: $(h,n)^{\ell}:=(h^{-1},n^{\ell}\otimes h^{-1})$ and $(h,n)^{r}:=(h^{-1},n^{r}\otimes h^{-1})$, where $n\mapsto n^{\ell}$ and $n\mapsto n^{r}$ are the operations of the direct product $\mathbf{F}^{J}$. We denote by $\mathbf{H}\wr_{\mathbf{J},*}\mathbf{F}$ the resulting structure, which we call the _($\ell$-pregroup) wreath product_, and we prove that it is an $\ell$-pregroup. ###### Theorem 2.14. If $*:\mathbf{H}\curvearrowright\mathbf{J}$ is a left action of the $\ell$-group $\mathbf{H}$ on the chain $\mathbf{J}$ and $\mathbf{F}$ is an $\ell$-pregroup, then $\mathbf{H}\wr_{\mathbf{J},*}\mathbf{F}$ is an $\ell$-pregroup. Furthermore, 1. 1. for every $k$, $\mathbf{H}\wr_{\mathbf{J},*}\mathbf{F}$ is $k$-periodic iff $\mathbf{F}$ is $k$-periodic. 2. 2. $\mathbf{H}\wr_{\mathbf{J},*}\mathbf{F}$ is distributive iff $\mathbf{F}$ is distributive. ###### Proof. From the above calculations we know that $\mathbf{H}\wr_{J,*}\mathbf{F}$ supports a monoid structure. We show that $\leq$ is a partial order; it is clearly reflexive. For transitivity, we assume $(h_{1},n_{1})\leq(h_{2},n_{2})\leq(h_{3},n_{3})$, so $h_{1}\leq h_{2}\leq h_{3}$; in particular $h_{1}*j\leq h_{2}*j\leq h_{3}*j$ for all $j$, by the monotonicity of $*$ on its left coordinate. So, if $h_{1}*j=h_{3}*j$, then $h_{1}*j=h_{2}*j=h_{3}*j$, so $n_{1}(j)\leq n_{2}(j)\leq n_{3}(j)$. For antisymmetry, we assume $(h_{1},n_{1})\leq(h_{2},n_{2})\leq(h_{1},n_{1})$, so $h_{1}\leq h_{2}\leq h_{1}$; in particular $h_{1}*j=h_{2}*j$, for all $j$. Therefore, by assumption we get $n_{1}(j)\leq n_{2}(j)\leq n_{1}(j)$, for all $j$. We will use the fact that $*$ distributes over joins on its left coordinate, which we establish now. First note that the map $j\mapsto h*j$ is an order-preserving bijection: it is one-to-one since $h*i=h*j\Rightarrow h^{-1}*(h*i)=h^{-1}*(h*j)\Rightarrow i=j$, it is onto since $j=h*(h^{-1}*j)$, and the action is order-preserving on the right coordinate; so, $j\mapsto h*j$ distributes over joins and meets.
Now, to prove $(h_{1}\vee h_{2})*j=h_{1}*j\vee h_{2}*j$, by order-preservation we only need to check $(h_{1}\vee h_{2})*j\leq h_{1}*j\vee h_{2}*j$, which is equivalent to $j\leq(h_{1}\vee h_{2})^{-1}*(h_{1}*j\vee h_{2}*j)$, i.e., to $j\leq(h_{1}\vee h_{2})^{-1}*(h_{1}*j)\vee(h_{1}\vee h_{2})^{-1}*(h_{2}*j)$, which is equivalent to $j\leq(1\wedge h_{2}^{-1}\overline{\circ}h_{1})*j\vee(1\wedge h_{1}^{-1}\overline{\circ}h_{2})*j$, which in turn holds by order preservation on the left coordinate. We show that $\leq$ is a lattice-order and that join and meet are given by the formulas. First note that $(h_{1},n_{1})\leq(h_{1}\vee h_{2},n_{\vee})$, because $h_{1}\leq h_{1}\vee h_{2}$ and, if $h_{1}*j=(h_{1}\vee h_{2})*j$, then $h_{2}*j\leq(h_{1}\vee h_{2})*j=h_{1}*j$; by definition, we get that then $n_{\vee}(j)\in\\{n_{1}(j),n_{1}(j)\vee n_{2}(j)\\}$, thus $n_{1}(j)\leq n_{\vee}(j)$. Also, if $(h_{1},n_{1})\leq(h,n)$ and $(h_{2},n_{2})\leq(h,n)$, then $h_{1},h_{2}\leq h$, so $h_{1}\vee h_{2}\leq h$. If $(h_{1}\vee h_{2})*j=h*j$, for some $j$, then $(h_{1}*j)\vee(h_{2}*j)=h*j$, so because $\mathbf{J}$ is a chain we get $h*j=h_{1}*j$ or $h*j=h_{2}*j$. In either case, by the definition of $n_{\vee}$ and the hypotheses, we get $n_{\vee}(j)\leq n(j)$. Therefore, $(h_{1}\vee h_{2},n_{\vee})\leq(h,n)$. Likewise, we show that the meet is given by the formula. We will show that multiplication is order-preserving on the right. We assume that $(h_{1},n_{1})\leq(h_{2},n_{2})$, i.e., $h_{1}\leq h_{2}$ and $h_{1}*j=h_{2}*j\Rightarrow n_{1}(j)\leq n_{2}(j)$, for all $j$. We will show that $(h_{1},n_{1})(h,n)\leq(h_{2},n_{2})(h,n)$, i.e., that $(h_{1}\overline{\circ}h,(n_{1}\otimes h)\odot n)\leq(h_{2}\overline{\circ}h,(n_{2}\otimes h)\odot n)$. Since $h_{1}\leq h_{2}$, by monotonicity of $\overline{\circ}$, we get $h_{1}\overline{\circ}h\leq h_{2}\overline{\circ}h$. For each $j$, if $(h_{1}\overline{\circ}h)*j=(h_{2}\overline{\circ}h)*j$, then $h_{1}*(h*j)=h_{2}*(h*j)$, so $n_{1}(h*j)\leq n_{2}(h*j)$, by hypothesis. By the order-preservation of $\ocirc$ in $\mathbf{F}$, we get $n_{1}(h*j)\ocirc n(j)\leq n_{2}(h*j)\ocirc n(j)$. Since $n_{i}(h*j)=(n_{i}\circ\lambda_{h})(j)=(n_{i}\otimes h)(j)$, we get $(n_{1}\otimes h)(j)\ocirc n(j)\leq(n_{2}\otimes h)(j)\ocirc n(j)$, i.e., $((n_{1}\otimes h)\odot n)(j)\leq((n_{2}\otimes h)\odot n)(j)$. For the left side, we will show that $(h,n)(h_{1},n_{1})\leq(h,n)(h_{2},n_{2})$, i.e., that $(h\overline{\circ}h_{1},(n\otimes h_{1})\odot n_{1})\leq(h\overline{\circ}h_{2},(n\otimes h_{2})\odot n_{2})$. We get $h\overline{\circ}h_{1}\leq h\overline{\circ}h_{2}$, by the monotonicity of $\overline{\circ}$. For each $j$, if $(h\overline{\circ}h_{1})*j=(h\overline{\circ}h_{2})*j$, then $h*(h_{1}*j)=h*(h_{2}*j)$, so $h^{-1}*(h*(h_{1}*j))=h^{-1}*(h*(h_{2}*j))$. Since $\mathbf{H}$ is a group we get $h^{-1}*(h*(i))=(h^{-1}\overline{\circ}h)*i=1_{H}*i=i$, therefore $h_{1}*j=h_{2}*j$, hence $n_{1}(j)\leq n_{2}(j)$, by hypothesis; also $n(h_{1}*j)=n(h_{2}*j)$. By the monotonicity of $\ocirc$, we get $n(h_{2}*j)\ocirc n_{1}(j)\leq n(h_{2}*j)\ocirc n_{2}(j)$. Since $n(h_{i}*j)=(n\circ\lambda_{h_{i}})(j)=(n\otimes h_{i})(j)$, we get $(n\otimes h_{1})(j)\ocirc n_{1}(j)\leq(n\otimes h_{2})(j)\ocirc n_{2}(j)$, hence $((n\otimes h_{1})\odot n_{1})(j)\leq((n\otimes h_{2})\odot n_{2})(j)$. Now, we show that for all $x\in H\times F^{J}$, we have $x^{\ell}x\leq 1$ and $1\leq x^{r}x$.
Using the fact that $f^{\ell}f\leq 1_{F}$ holds for all $f\in F$, and thus also $n^{\ell}\odot n\leq 1_{N}$ for $n\in F^{J}$, we get $(h^{-1},n^{\ell}\otimes h^{-1})(h,n)=(h^{-1}\overline{\circ}h,((n^{\ell}\otimes h^{-1})\otimes h)\odot n)=(1_{H},(n^{\ell}\otimes(h^{-1}\overline{\circ}h))\odot n)=(1_{H},(n^{\ell}\otimes 1_{H})\odot n)=(1_{H},n^{\ell}\odot n)\leq(1_{H},1_{N})$. Also, $(h^{-1},n^{r}\otimes h^{-1})(h,n)=(h^{-1}\overline{\circ}h,((n^{r}\otimes h^{-1})\otimes h)\odot n)=(1_{H},(n^{r}\otimes(h^{-1}\overline{\circ}h))\odot n)=(1_{H},(n^{r}\otimes 1_{H})\odot n)=(1_{H},n^{r}\odot n)\geq(1_{H},1_{N})$. Note that we used the fact that $\mathbf{H}$ acts via $\otimes$ on the set $F^{J}$. For showing $xx^{r}\leq 1\leq xx^{\ell}$ we will use that $\mathbf{H}$ acts also on the monoid $\mathbf{F}^{J}$ and the order-preservation of $\otimes$ on the first coordinate, which we establish first. If $n_{1}\leq n_{2}$, i.e., $n_{1}(j)\leq n_{2}(j)$ for all $j$, then for all $h$ we get $n_{1}\otimes h\leq n_{2}\otimes h$. Indeed, for all $j$, we have $n_{1}(h*j)\leq n_{2}(h*j)$, so $(n_{1}\otimes h)(j)\leq(n_{2}\otimes h)(j)$. Thus, $\otimes:\mathbf{H}\curvearrowright\mathbf{F}^{J}$ is order-preserving in the left coordinate. Now, for $xx^{r}\leq 1\leq xx^{\ell}$, we have $(h,n)(h^{-1},n^{r}\otimes h^{-1})=(h\overline{\circ}h^{-1},(n\otimes h^{-1})\odot(n^{r}\otimes h^{-1}))=(1_{H},(n\odot n^{r})\otimes h^{-1})\leq(1_{H},1_{N}\otimes h^{-1})=(1_{H},1_{N})$ and $(h,n)(h^{-1},n^{\ell}\otimes h^{-1})=(h\overline{\circ}h^{-1},(n\otimes h^{-1})\odot(n^{\ell}\otimes h^{-1}))=(1_{H},(n\odot n^{\ell})\otimes h^{-1})\geq(1_{H},1_{N}\otimes h^{-1})=(1_{H},1_{N})$. (1) We now prove that $(n\otimes h)^{\ell}=n^{\ell}\otimes h$, i.e., $(n\otimes h)^{\ell}(j)=(n^{\ell}\otimes h)(j)$, for all $j$. By the coordinate-wise definition of ℓ on elements of $F^{J}$, this is equivalent to $((n\otimes h)(j))^{\ell}=(n^{\ell}\otimes h)(j)$, to $(n(h*j))^{\ell}=n^{\ell}(h*j)$ and to $(n(h*j))^{\ell}=(n(h*j))^{\ell}$, which holds. Likewise, we have $(n\otimes h)^{r}=n^{r}\otimes h$. We use these properties to establish the characterization of the periodic case. Note that $(h,n)^{\ell\ell}=(h^{-1},n^{\ell}\otimes h^{-1})^{\ell}=((h^{-1})^{-1},(n^{\ell}\otimes h^{-1})^{\ell}\otimes(h^{-1})^{-1})=(h,(n^{\ell\ell}\otimes h^{-1})\otimes h)=(h,n^{\ell\ell}\otimes(h^{-1}\overline{\circ}h))=(h,n^{\ell\ell}\otimes 1_{H})=(h,n^{\ell\ell})$. More generally, we get $(h,n)^{(2k)}=(h,n^{(2k)})$, for every $k$. Therefore, for each positive integer $k$, we have that $(h,n)^{(2k)}=(h,n)$ for all $(h,n)\in\mathbf{H}\wr_{\mathbf{J},*}\mathbf{F}$, iff $(h,n^{(2k)})=(h,n)$ for all $(h,n)\in\mathbf{H}\wr_{\mathbf{J},*}\mathbf{F}$, iff $n^{(2k)}=n$ for all $n\in F^{J}$. (2) We first show that if $\mathbf{F}$ is distributive, then $\mathbf{H}\wr_{\mathbf{J},*}\mathbf{F}$ is distributive, as well. We will show that for all $(h_{1},n_{1}),(h_{2},n_{2}),(h_{3},n_{3})\in\mathbf{H}\wr_{\mathbf{J},*}\mathbf{F}$ we have: $(h_{1},n_{1})\wedge((h_{2},n_{2})\vee(h_{3},n_{3}))=((h_{1},n_{1})\wedge(h_{2},n_{2}))\vee((h_{1},n_{1})\wedge(h_{3},n_{3}))$. For simplicity, we define $n_{i\wedge l}$ by $(h_{i},n_{i})\wedge(h_{l},n_{l})=(h_{i}\wedge h_{l},n_{i\wedge l})$, instead of the generic term $n_{\wedge}$, and similarly for the join.
By the distributivity of $\mathbf{J}$ we get $h_{1}*j\wedge(h_{2}*j\vee h_{3}*j)=(h_{1}*j\wedge h_{2}*j)\vee(h_{1}*j\wedge h_{3}*j)$ and in particular, there exists $i\in\\{1,2,3\\}$ such that $h_{1}*j\wedge(h_{2}*j\vee h_{3}*j)=h_{i}*j$, since $\mathbf{J}$ is a chain. If $h_{1}*j,h_{2}*j,h_{3}*j$ are all distinct, then $n_{1\wedge(2\vee 3)}(j)=n_{i}(j)=n_{(1\wedge 2)\vee(1\wedge 3)}(j)$. On the other hand, if $h_{1}*j,h_{2}*j,h_{3}*j$ are not all distinct, for $\\{i,k\\}=\\{2,3\\}$, we have $\displaystyle n_{1\wedge(2\vee 3)}(j)$ $\displaystyle=\begin{cases}n_{1}(j)\wedge n_{k}(j)&\text{ if }h_{i}*j<h_{k}*j=h_{1}*j\\\ n_{1}(j)&\text{ if }h_{1}*j=h_{k}*j<h_{i}*j\\\ n_{1}(j)&\text{ if }h_{1}*j<h_{k}*j=h_{i}*j\\\ n_{2}(j)\vee n_{3}(j)&\text{ if }h_{k}*j=h_{i}*j<h_{1}*j\\\ n_{1}(j)\wedge(n_{2}(j)\vee n_{3}(j))&\text{ if }h_{1}*j=h_{k}*j=h_{i}*j\end{cases}$ Also, if $h_{1}*j,h_{2}*j,h_{3}*j$ are not all distinct, $n_{(1\wedge 2)\vee(1\wedge 3)}(j)=$ $\displaystyle\begin{cases}n_{1}(j)\wedge n_{k}(j)&\text{ if }h_{i}*j<h_{k}*j=h_{1}*j\\\ n_{1}(j)&\text{ if }h_{1}*j=h_{k}*j<h_{i}*j\\\ n_{1}(j)&\text{ if }h_{1}*j<h_{k}*j=h_{i}*j\\\ n_{2}(j)\vee n_{3}(j)&\text{ if }h_{k}*j=h_{i}*j<h_{1}*j\\\ (n_{1}(j)\vee n_{2}(j))\wedge(n_{1}(j)\vee n_{3}(j))&\text{ if }h_{1}*j=h_{k}*j=h_{i}*j\end{cases}$ By the distributivity of $\mathbf{F}$, we get $n_{1\wedge(2\vee 3)}(j)=n_{(1\wedge 2)\vee(1\wedge 3)}(j)$ for all $j\in J$, and by the distributivity of $\mathbf{H}$, we have $(h_{1},n_{1})\wedge((h_{2},n_{2})\vee(h_{3},n_{3}))=((h_{1},n_{1})\wedge(h_{2},n_{2}))\vee((h_{1},n_{1})\wedge(h_{3},n_{3}))$. On the other hand, if $\mathbf{H}\wr_{\mathbf{J},*}\mathbf{F}$ is distributive, for all $n_{1},n_{2},n_{3}\in F$ and $h\in H$, $(h,n_{1})\wedge((h,n_{2})\vee(h,n_{3}))=((h,n_{1})\wedge(h,n_{2}))\vee((h,n_{1})\wedge(h,n_{3}))$, hence $(h,n_{1}\wedge(n_{2}\vee n_{3}))=(h,(n_{1}\wedge n_{2})\vee(n_{1}\wedge n_{3}))$, so $n_{1}\wedge(n_{2}\vee n_{3})=(n_{1}\wedge n_{2})\vee(n_{1}\wedge n_{3})$. Thus, $\mathbf{F}$ is distributive. ∎ We want to stress that the wreath product construction applies to arbitrary $\ell$-pregroups, not only to distributive ones. ###### Remark 2.15. We note that the more general definition where $\mathbf{H}$ is allowed to be an $\ell$-pregroup, $(h,n)^{\ell}:=(h^{\ell},n^{\ell}\otimes h^{\ell})$ and $(h,n)^{r}:=(h^{r},n^{r}\otimes h^{r})$ fails: $(h,n)^{\ell r}=(h^{\ell},n^{\ell}\otimes h^{\ell})^{r}=(h^{\ell r},(n^{\ell}\otimes h^{\ell})^{r}\otimes h^{\ell r})=(h,(n^{\ell r}\otimes h^{\ell})\otimes h)=(h,n\otimes(h^{\ell}\overline{\circ}h))\leq(h,n\otimes 1_{H})=(h,n)$, but unless $h\overline{\circ}h^{r}=1_{H}$ for arbitrary $h$ (i.e., unless every $h$ is invertible), we do not get equality. It is well known that if the monoid $\mathbf{F}$ acts on the left on a set $\Omega$, then the monoid wreath product $\mathbf{H}\wr_{J,*}\mathbf{F}$ acts on $J\times\Omega$. It is not hard to show that if an $\ell$-pregroup $\mathbf{F}$ acts on a chain $\mathbf{\Omega}$ then the $\ell$-group wreath product $\mathbf{H}\wr_{J,*}\mathbf{F}$ acts on the lexicographic product $\mathbf{J}\overrightarrow{\times}\mathbf{\Omega}$, by $(h,g)(j,a):=(h(j),g(a))$, for $(h,g)\in\mathbf{H}\wr_{J,*}\mathbf{F}$ and $(j,a)\in{J}\overrightarrow{\times}\Omega$. As promised we now show that $\mathbf{F}_{n}(\mathbf{J}\overrightarrow{\times}\mathbb{Z})$ is indeed an $\ell$-pregroup wreath product and use this fact to get a second representation theorem for $n$-periodic $\ell$-pregroups. ###### Theorem 2.16.
For every chain $\mathbf{J}$ and $n\in\mathbb{Z}^{+}$, $\mathbf{F}_{n}(\mathbf{J}\overrightarrow{\times}\mathbb{Z})\cong\mathbf{Aut}(\mathbf{J})\wr\mathbf{F}_{n}(\mathbb{Z})$. Therefore, every $n$-periodic $\ell$-pregroup can be embedded in the wreath product of an $\ell$-group and the simple $n$-periodic $\ell$-pregroup $\mathbf{F}_{n}(\mathbb{Z})$. ###### Proof. By Theorem 2.12, every $f$ in $\mathbf{F}_{n}(\mathbf{J}\overrightarrow{\times}\mathbb{Z})$ can be identified with the pair $(\widetilde{f},\overline{f})$ in $\mathbf{Aut}(\mathbf{J})\times(\mathbf{F}_{n}(\mathbb{Z}))^{J}$, where $\overline{f}(j)=\overline{f}_{j}$, i.e., $\overline{f}=(\overline{f}_{j})_{j\in J}$. We will verify that composition of $f$’s corresponds to multiplication of pairs $(\widetilde{f},\overline{f})$ in the wreath product $\mathbf{Aut}(\mathbf{J})\wr\mathbf{F}_{n}(\mathbb{Z})=\mathbf{Aut}(\mathbf{J})\ltimes(\mathbf{F}_{n}(\mathbb{Z}))^{J}$ under the identification $f\equiv(\widetilde{f},\overline{f})$. Indeed, for every $(j,m)\in J\times\mathbb{Z}$ and $f,g$ in $\mathbf{F}_{n}(\mathbf{J}\overrightarrow{\times}\mathbb{Z})$, we have $\begin{array}[]{rl}(f\circ g)(j,m)&=f(g(j,m))=f(\widetilde{g}(j),\overline{g}_{j}(m))=(\widetilde{f}(\widetilde{g}(j)),\overline{f}_{\widetilde{g}(j)}(\overline{g}_{j}(m)))\\\ &=((\widetilde{f}\circ\widetilde{g})(j),(\overline{f}_{\widetilde{g}(j)}\circ\overline{g}_{j})(m))=((\widetilde{f}\circ\widetilde{g})(j),((\overline{f}\otimes\widetilde{g})\odot\overline{g})_{j}(m))\\\ &=(\widetilde{f}\circ\widetilde{g},(\overline{f}\otimes\widetilde{g})\odot\overline{g})(j,m)=((\widetilde{f},\overline{f})(\widetilde{g},\overline{g}))(j,m)\end{array}$ where we used that $\overline{f}_{\widetilde{g}(j)}\circ\overline{g}_{j}=\overline{f}(\widetilde{g}(j))\circ\overline{g}(j)=(\overline{f}\otimes\widetilde{g})(j)\circ\overline{g}(j)=((\overline{f}\otimes\widetilde{g})\odot\overline{g})(j)=((\overline{f}\otimes\widetilde{g})\odot\overline{g})_{j}$, which is, in turn, based on $\overline{f}(\widetilde{g}(j))=(\overline{f}\otimes\widetilde{g})(j)$, a general fact we already established above as $n(h(j))=(n\otimes h)(j)$. For the order, note that $f\leq g$ iff $(\widetilde{f}(j),\overline{f}_{j}(m))\leq(\widetilde{g}(j),\overline{g}_{j}(m))$, for all $(j,m)$, iff $\widetilde{f}\leq\widetilde{g}$ and $\widetilde{f}(j)=\widetilde{g}(j)\Rightarrow\overline{f}_{j}(m)\leq\overline{g}_{j}(m)$ for all $(j,m)$, iff $\widetilde{f}\leq\widetilde{g}$ and $\widetilde{f}(j)=\widetilde{g}(j)\Rightarrow\overline{f}_{j}\leq\overline{g}_{j}$, for all $j$. Also, we have that if $f$ corresponds to $(\widetilde{f},\overline{f})$, then $f^{\ell}\equiv(\widetilde{f}^{-1},\overline{f}^{\ell}\otimes\widetilde{f}^{-1})$, where $\overline{f}^{\ell}(j):=\overline{f}(j)^{\ell}$ is the coordinate-wise application of ℓ in the direct product $\mathbf{F}_{n}(\mathbb{Z})^{J}$. Indeed, by Lemma 2.4, for all $(j,r)\in{J}\overrightarrow{\times}\mathbb{Z}$ there exists $(i,m)\in{J}\overrightarrow{\times}\mathbb{Z}$ such that $f(i,m-1)<(j,r)\leq f(i,m)$ and $f^{\ell}(j,r)=(i,m)$, so $(\tilde{f}(i),\overline{f}_{i}(m-1))<(j,r)\leq(\tilde{f}(i),\overline{f}_{i}(m))$. Then $j=\tilde{f}(i)$ and $\overline{f}_{i}(m-1)<r\leq\overline{f}_{i}(m)$, therefore $\overline{f}_{i}^{\ell}(r)=m$. So $f^{\ell}(j,r)=(i,m)=(\tilde{f}^{-1}(j),\overline{f}_{\tilde{f}^{-1}(j)}^{\ell}(r))=(\widetilde{f}^{-1},\overline{f}^{\ell}\otimes\widetilde{f}^{-1})(j,r)$. That $\mathbf{F}_{n}(\mathbb{Z})$ is simple follows from Lemma 2.10. ∎ ###### Remark 2.17.
We mention that in the definition of the monoid $\mathbf{H}$ we denoted the operation by $\overline{\circ}$ because in our application $\mathbf{H}=\mathbf{Aut}(\mathbf{J})$ the operation is the usual composition $\circ$ of functions on $J$. Furthermore, we denoted by $\ocirc$ the operation of $\mathbf{F}$, as in our case of $\mathbf{F}=\mathbf{F}_{n}(\mathbb{Z})$ it is composition $\circ$ of functions on $\mathbb{Z}$. Finally, we denoted by $\odot$ the operation of $\mathbf{N}=\mathbf{F}^{J}$, as it is the extension of the composition operation of $\mathbf{F}$ to the direct power $\mathbf{F}^{J}$. Typically, the same symbol is used for the operation in the direct product, but when this operation is functional composition then $(n_{1}\odot n_{2})(j)=n_{1}(j)\circ n_{2}(j)$ looks better than $(n_{1}\circ n_{2})(j)$, which could be easily misinterpreted as $n_{1}(n_{2}(j))$. ## 3 The join of the periodic varieties In [7] it is shown that each periodic $\ell$-pregroup is distributive, so $\mathsf{LP_{n}}\subseteq\mathsf{DLP}$, for all $n$. In this section we show that the join of all of the varieties $\mathsf{LP_{n}}$, for $n\in\mathbb{Z}^{+}$, is $\mathsf{DLP}$. As mentioned earlier, $\ell$-pregroups are exactly the involutive residuated lattices that satisfy $(xy)^{\ell}=y^{\ell}x^{\ell}$ and $x^{r\ell}=x=x^{\ell r}$; in the language of residuated lattices this implies $x+y=xy$, $0=1$, ${\sim}x=x^{r}$ and ${-}x=x^{\ell}$. Given $n\in\mathbb{Z}^{+}$, an involutive residuated lattice is called _$n$ -periodic_ if it satisfies ${\sim}^{n}x={-}^{n}x$; therefore, the notion of $n$-periodicity for $\ell$-pregroups is a specialization of the one for involutive residuated lattices. Since $n$-periodicity is defined equationally, the class of all $n$-periodic involutive residuated lattices is a variety, for each $n$. In [8] it is proved that the join of all of these $n$-periodic involutive residuated lattice varieties is equal to the whole variety of involutive residuated lattices. This proof relies, among other things, on proof-theoretic arguments on an analytic sequent calculus for the class of involutive residuated lattices; since an analytic calculus is not known for the class of distributive $\ell$-pregroups, we cannot use the same methods here, but we rely on the method of diagrams. ### 3.1 Diagrams and $n$-periodicity We recall some definitions from [11]. The main idea behind these notions is to try to capture the failure of an equation in an algebra $\mathbf{F}(\mathbf{\Omega})$ in a finitistic way: retain only finitely many points of $\mathbf{\Omega}$ and record the behavior of functions of $\mathbf{F}(\mathbf{\Omega})$ on these points to still witness the failure. A _c-chain_, or _chain with covers_, is a triple $(\Delta,\leq,\Yleft)$, consisting of a finite chain $(\Delta,\leq)$ and a subset of the covering relation, ${\Yleft}\subseteq{\prec}$, i.e., if $a\Yleft b$, then $a$ is covered by $b$. Given a chain $(\Delta,\leq)$, a partial function $g$ over $\Delta$ is called _order-preserving_ if, for all $a,b\in Dom(g)$, $a\leq b$ implies $g(a)\leq g(b)$; such a $g$ is intended as the restriction of an $f\in\mathbf{F}(\mathbf{\Omega})$ to $\mathbf{\Delta}$. A _diagram_ $(\mathbf{\Delta},g_{1},\ldots,g_{l})$ consists of a c-chain $\mathbf{\Delta}$ and order-preserving partial functions $g_{1},\ldots,g_{l}$ on $(\Delta,\leq)$, where $l\in\mathbb{N}$.
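As a small illustration of this finitistic idea, the following sketch (again purely illustrative; the one-period representation of an $n$-periodic function and all Python names are our own conventions, not notation from [11]) takes an element of $F_{3}(\mathbb{Z})$, retains only finitely many points of $\mathbb{Z}$, and records the induced order-preserving partial function together with the covering pairs that survive among the retained points; the result is a diagram in the sense just defined.

```python
# Illustrative sketch only: restricting an n-periodic function on Z to finitely
# many points, yielding a c-chain with a partial function on it (a diagram).

n = 3
period_values = [1, 1, 4]        # f(0), f(1), f(2); f is determined by f(x + n) = f(x) + n

def f(x):
    """The n-periodic extension: f(x) = f(x mod n) + n * (x // n); here f is in F_3(Z)."""
    return period_values[x % n] + n * (x // n)

delta = [-2, -1, 0, 1, 4, 5]     # the finitely many retained points (a subchain of Z)
covers = [(a, b) for a, b in zip(delta, delta[1:]) if b == a + 1]   # retained part of ≺
g = {a: f(a) for a in delta}     # the recorded order-preserving partial function

def order_preserving(partial):
    dom = sorted(partial)
    return all(partial[a] <= partial[b] for a, b in zip(dom, dom[1:]))

assert order_preserving(g)
print(covers)   # [(-2, -1), (-1, 0), (0, 1), (4, 5)]
print(g)        # {-2: -2, -1: 1, 0: 1, 1: 1, 4: 4, 5: 7}
```

Note that the spacing between the retained points (for example between $1$ and $4$) is forgotten; only the order and the listed coverings are kept.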
Given a c-chain $(\Delta,\leq,\Yleft)$ and an order-preserving partial function $g$ on $\Delta$, we define the relation $g^{[\ell]}$ by: for all $x,b\in\Delta$, $(x,b)\in g^{[\ell]}$ iff $b\in Dom(g)$ and there exists $a\in Dom(g)$ such that $a\Yleft b$ and $g(a)<x\leq g(b)$. We also define the relation $g^{[r]}$ by: for all $x,a\in\Delta$, $(x,a)\in g^{[r]}$ iff $a\in Dom(g)$ and there exists $b\in Dom(g)$, such that $a\Yleft b$ and $g(a)\leq x<g(b)$. The intention is that $g^{[\ell]}$ will correspond to $f^{\ell}$, for $f\in\mathbf{F}(\mathbf{\Omega})$, and the covering relation $\Yleft$ is crucial for this correspondence and for the correct calculation of the two inverses. ###### Lemma 3.1. [11] If $\mathbf{\Delta}$ is a c-chain and $g$ is an order-preserving partial function on $\mathbf{\Delta}$, then $g^{[\ell]}$ and $g^{[r]}$ are order-preserving partial functions. Therefore, for all $x,b\in\Delta$, $g^{[\ell]}(x)=b$ iff: $b,b-1\in Dom(g)$, $b-1\Yleft b$ and $g(b-1)<x\leq g(b)$. Given a c-chain $\mathbf{\Delta}$ and an order-preserving partial function $g$ on $\mathbf{\Delta}$, we define $g^{[n]}$, for all $n\in\mathbb{N}$, recursively by: $g^{[0]}=g$ and $g^{[k+1]}:=(g^{[k]})^{[\ell]}$. Also, we define $g^{[-n]}$, for all $n\in\mathbb{N}$, recursively by $g^{[-(k+1)]}:=(g^{[-k]})^{[r]}$. Lemma 3.1 shows that if $g$ is a partial function on $\mathbf{\Delta}$, then $g^{[n]}$ is a partial function for all $n\in\mathbb{Z}$. We will now define the notion of $n$-periodicity for a partial function $g$ in a diagram $\mathbf{\Delta}$, which is intended to capture the fact that, when $\Delta$ is identified with a subset of $\mathbb{Z}$, $g$ is the restriction to $\Delta$ of an $n$-periodic function of $\mathbb{Z}$, i.e., a function in $\mathbf{F}_{n}(\mathbb{Z})$. As $\Delta$ might not be convex in $\mathbb{Z}$, the information of the spacing between elements of $\Delta$ is lost and only their relative ordering is retained (together with some coverings that are needed for the correct calculation of iterated inverses $g^{[m]}$). Given $\mathbf{\Delta}$ and $g$, whether $g$ deserves to be called $n$-periodic depends on the existence of such a spacing (which is provided by a way of viewing $\mathbf{\Delta}$ inside $\mathbb{Z}$). This leads to the following definitions. A _spacing embedding_ of c-chains is an injection that preserves the order and the covering relations. Note that such an embedding also reflects the order (due to the fact that we are working with chains) but may not reflect the covering relation. Given a spacing embedding $e:\mathbf{\Delta}_{1}\rightarrow\mathbf{\Delta}_{2}$ and a partial function $g$ on a c-chain $\mathbf{\Delta}_{1}$, the _counterpart_ of $g$ in $\mathbf{\Delta}_{2}$ is defined to be the partial map $g^{e}:=e\circ g\circ e^{-1}$. When $\Delta_{2}=\mathbb{Z}$, spacing embeddings that are shifts of each other via automorphisms of $\mathbb{Z}$ are essentially equivalent for our purposes. So we will consider only spacing embeddings where the least element of their image is $0$. The _height_ of a spacing embedding $e:\mathbf{\Delta}\rightarrow(\mathbb{Z},\leq_{\mathbb{Z}},\Yleft_{\mathbb{Z}})$ is then defined to be $\max(e[\Delta])$, i.e., one less than the size of the convexification of the image $e[\Delta]$ in $\mathbb{Z}$. The following definition of $n$-periodicity is inspired by the equivalences in Lemma 2.8.
A partial function $g$ on a c-chain $\mathbf{\Delta}$ is called _$n$ -periodic_ with respect to a spacing embedding $e:\mathbf{\Delta}\rightarrow(\mathbb{Z},\leq_{\mathbb{Z}},\Yleft_{\mathbb{Z}})$, where $n\in\mathbb{Z}^{+}$, if ($\min(e[\Delta])=0$ and) for all $x,y\in Dom(g^{e})$ and $k\in\mathbb{Z}$, $x\leq_{\mathbb{Z}}y+kn\Rightarrow g^{e}(x)\leq_{\mathbb{Z}}g^{e}(y)+kn,$ or equivalently, because $\mathbb{Z}$ is a chain, and by setting $k:=-k$, $g^{e}(y)<_{\mathbb{Z}}g^{e}(x)+kn\Rightarrow y<_{\mathbb{Z}}x+kn.$ Therefore, $g$ is $n$-periodic with respect to $e$ iff $g^{e}$ is $n$-periodic with respect to the identity map on $\mathbb{Z}$. Observe that given that $e$ preserves and reflects the order, $n$-periodic partial functions are also order-preserving. A main issue with this definition is that it provides no insight on whether it is possible to consider only finitely many spacing embeddings or, given such an embedding, whether it suffices to consider only finitely many $k$’s in the $n$-periodicity condition. Actually, Lemma 3.3 below shows that this definition merely translates, into a more palatable form, the demand that there exists a counterpart of $g$ that extends to an $n$-periodic function of $\mathbb{Z}$. In Lemma 3.2 we show that checking finitely many $k$’s suffices. In order to obtain decidability results, we later also prove that finitely many spacing embeddings suffice. ###### Lemma 3.2. If $\mathbf{\Delta}$ is a c-chain, $g$ is a partial function on $\mathbf{\Delta}$ and $e:\mathbf{\Delta}\rightarrow\mathbb{Z}$ is a spacing embedding of height $d$, then $g$ is $n$-periodic with respect to $e$ iff for all $x,y\in Dom(g^{e})$ and $|k|\leq\lceil d/n\rceil$ we have $x\leq y+kn\Rightarrow g^{e}(x)\leq g^{e}(y)+kn$. ###### Proof. For the non-trivial (backward) direction, we assume that $x\leq y+kn$, $x,y\in Dom(g^{e})$, and $k\in\mathbb{Z}$ with $k>\lceil d/n\rceil$. Since $x\leq y+d\leq y+\lceil d/n\rceil n\leq y+kn$, we get $g^{e}(x)\leq g^{e}(y)+\lceil d/n\rceil n\leq g^{e}(y)+kn$, by the $n$-periodicity of $g$ for the value $\lceil d/n\rceil$. If $k<-\lceil d/n\rceil$ and $x,y\in Dom(g^{e})$, then $-d\leq x-y$, so $y+kn<y-\lceil d/n\rceil n\leq y-d\leq x$; hence the condition holds vacuously. ∎ For convenience, for a fixed $n\in\mathbb{Z}^{+}$, for every $x\in\mathbb{Z}$ we denote by $Rx$ the remainder and by $Qx$ the quotient of dividing $x$ by $n$. So, $x-Rx=Qx\cdot n$ and we denote this value by $Sx$. ###### Lemma 3.3. If $\mathbf{\Delta}$ is a c-chain, $g$ is a partial function on $\Delta$ and $e$ is a spacing embedding on $\mathbf{\Delta}$, then $g$ is $n$-periodic with respect to $e$ iff its counterpart $g^{e}$ can be extended to a function in $F_{n}(\mathbb{Z})$. ###### Proof. First note that the backward direction holds by Lemma 2.8. Note that the empty partial function is vacuously $n$-periodic, so we will assume that the partial functions are non-empty. For the forward direction, we note that $h:=g^{e}$ is an $n$-periodic partial function on $\mathbb{Z}$ with respect to the identity spacing embedding. We define the partial function $\hat{h}$ on $\mathbb{Z}$ by $\hat{h}(Rx)=h(x)-Sx$ for all $x\in Dom(h)$, so that $Dom(\hat{h})\subseteq[0,n)_{\mathbb{Z}}$; we will show that $\hat{h}$ is well-defined (single-valued) and $n$-periodic at the same time.
If $x,y\in Dom(h)$, $k\in\mathbb{Z}$ and $Rx\leq Ry+kn$, then $Rx+Sx\leq Ry+Sy+(Sx-Sy)+kn$, so $x\leq y+kn+(Sx-Sy)$; since $Sx$ and $Sy$ are multiples of $n$, the periodicity of $h$ yields $h(x)\leq h(y)+kn+(Sx-Sy)$, so $(h(x)-Sx)\leq(h(y)-Sy)+kn$. By applying this for $k=0$ (and for two inequalities simultaneously) we get $Rx=Ry\Rightarrow(h(x)-Sx)=(h(y)-Sy)$. By applying this for arbitrary $k$ we get $Rx\leq Ry+kn\Rightarrow\hat{h}(Rx)\leq\hat{h}(Ry)+kn$. We extend $\hat{h}$ to a function $f:[0,n)_{\mathbb{Z}}\rightarrow\mathbb{Z}$ on the whole period, given by $f(x)=\hat{h}(a_{x})$, where $a_{x}$ is the smallest $a\in Dom(\hat{h})$ with $x\leq a$, if such an $a$ exists, and $a_{x}$ is the biggest $a\in Dom(\hat{h})$, otherwise. To verify that $f$ is $n$-periodic with respect to the identity, by Lemma 3.2 it is enough to check the $n$-periodicity condition for $k\in\\{-1,0,1\\}$; since $x\nleq y-n$ for all $x,y\in[0,n)_{\mathbb{Z}}$, we consider only $k\in\\{0,1\\}$. If $x,y\in[0,n)_{\mathbb{Z}}$ and $x\leq y+n$, since $a_{x},a_{y}\in[0,n)$ we have $a_{x}\leq a_{y}+n$, so $\hat{h}(a_{x})\leq\hat{h}(a_{y})+n$ by the $n$-periodicity of $\hat{h}$, hence $f(x)\leq f(y)+n$. Also, for all $x,y\in[0,n)_{\mathbb{Z}}$, if $x\leq y+0$, then $a_{x}\leq a_{y}$, so $\hat{h}(a_{x})\leq\hat{h}(a_{y})$, hence $f(x)\leq f(y)$. Therefore $f$ is a partial function on $\mathbb{Z}$ that is $n$-periodic with respect to the identity. Finally, we extend $f$ to a total function $\hat{f}$ on $\mathbb{Z}$, by $\hat{f}(x)=f(Rx)+Sx$ for all $x\in\mathbb{Z}$; note that $\hat{f}$ extends $f$, since $Rx=x$ and $Sx=0$, for all $x\in[0,n)_{\mathbb{Z}}$. To show that $\hat{f}\in F_{n}(\mathbb{Z})$, we will verify the conditions of Lemma 2.9. For condition (2), if $x,y,k\in\mathbb{Z}$ and $x\leq y+kn$, then $Rx\leq Ry+Sy-Sx+kn$, so by the $n$-periodicity of $f$ we have $f(Rx)\leq f(Ry)+Sy-Sx+kn$, so $f(Rx)+Sx\leq f(Ry)+Sy+kn$, i.e., $\hat{f}(x)\leq\hat{f}(y)+kn$. In particular, $\hat{f}$ is order-preserving. For condition (1), if $b\in\mathbb{Z}$ and $x\in\hat{f}^{-1}[\hat{f}(b)]$, then $\hat{f}(x)=\hat{f}(b)$, so $\hat{f}(b)-n<\hat{f}(x)<\hat{f}(b)+n$. Also, $\hat{f}(b\pm n)=f(R(b\pm n))+S(b\pm n)=f(Rb)+Sb\pm n=\hat{f}(b)\pm n$, so $\hat{f}(b-n)<\hat{f}(x)<\hat{f}(b+n)$, and by order preservation $b-n<x<b+n$. Therefore, $\hat{f}^{-1}[\hat{f}(b)]\subseteq[b-n,b+n]$. Consequently, $\hat{f}\in F_{n}(\mathbb{Z})$. To see that $\hat{f}$ extends $h$, note that if $x\in Dom(h)$ then $Rx\in Dom(\hat{h})$ and $\hat{h}(Rx)=h(x)-Sx$. By definition of $\hat{f}$ and given that $f$ extends $\hat{h}$, we have $\hat{f}(x)=f(Rx)+Sx=\hat{h}(Rx)+Sx=h(x)-Sx+Sx=h(x)$. ∎ The next lemma shows that [ℓ] and [r] keep us within the setting of $n$-periodicity. ###### Lemma 3.4. If $\mathbf{\Delta}$ is a c-chain and $g$ is a partial function on $\mathbf{\Delta}$ that is $n$-periodic with respect to a spacing embedding $e$, then the partial functions $g^{[\ell]}$ and $g^{[r]}$ are $n$-periodic on $\mathbf{\Delta}$ with respect to $e$. Moreover, $(g^{[\ell]})^{e}\subseteq(g^{e})^{[\ell]}$ and $(g^{[r]})^{e}\subseteq(g^{e})^{[r]}$. ###### Proof. By Lemma 3.1 we know that $g^{[\ell]}$ and $g^{[r]}$ are partial functions over $\mathbf{\Delta}$. Let $h:=g^{e}$ be the counterpart of $g$ with respect to a spacing embedding $e$ witnessing the $n$-periodicity of $g$; then $h$ is $n$-periodic with respect to $id_{\mathbb{Z}}$. Let $x,y\in Dom(h^{[\ell]})$, and set $a:=h^{[\ell]}(x)$ and $b:=h^{[\ell]}(y)$.
Then, by definition, $a,a-1,b,b-1\in Dom(h)$, $h(a-1)<x\leq h(a)$ and $h(b-1)<y\leq h(b)$. If $x\leq y+kn$ for some $k\in\mathbb{Z}$, then $h(a-1)<x\leq y+kn\leq h(b)+kn$, so by the $n$-periodicity of $h$, we get $a-1<b+kn$. Consequently, $h^{[\ell]}(x)=a\leq b+kn=h^{[\ell]}(y)+kn$. So $h^{[\ell]}$ is an $n$-periodic partial function with respect to $id_{\mathbb{Z}}$; in particular, any restriction of it is also $n$-periodic. To show that $(g^{[\ell]})^{e}=h^{[\ell]}|_{Dom((g^{[\ell]})^{e})}$, note that if $x\in Dom((g^{[\ell]})^{e})$, then $e^{-1}(x)\in Dom(g^{[\ell]})$, so there exist $a,a-1\in Dom(g)$, such that $g(a-1)<e^{-1}(x)\leq g(a)$ and $g^{[\ell]}(e^{-1}(x))=a$. Hence, $ege^{-1}(e(a)-1)=ege^{-1}(e(a-1))<x\leq ege^{-1}(e(a))$, i.e., $h(e(a)-1)<x\leq h(e(a))$, thus $h^{[\ell]}(x)=e(a)=eg^{[\ell]}(e^{-1}(x))=(g^{[\ell]})^{e}(x)$. Similarly, we can show that $(g^{[r]})^{e}\subseteq(g^{e})^{[r]}$. ∎ ### 3.2 The join By [7], for every $n$, the variety $\mathsf{LP_{n}}$ of $n$-periodic $\ell$-pregroups is a subvariety of $\mathsf{DLP}$. Here we prove that the join of all of the $\mathsf{LP_{n}}$’s is precisely the whole variety $\mathsf{DLP}$; in other words, the class of periodic $\ell$-pregroups generates the variety of all distributive $\ell$-pregroups. It is shown in [11] that every equation in the language of $\ell$-pregroups is equivalent to one of the form $1\leq w_{1}\vee\ldots\vee w_{k}$ where $k\in\mathbb{Z}^{+}$ and the $w_{i}$’s are _intensional terms_, i.e., terms of the form $x_{1}^{(m_{1})}x_{2}^{(m_{2})}\ldots x_{l}^{(m_{l})},$ where $x_{1},\ldots,x_{l}$ are not necessarily distinct variables, $l\in\mathbb{N}$, and $m_{1},\ldots,m_{l}\in\mathbb{Z}$. Therefore, we may consider equations in such _intensional_ form. Recall that a failure of an equation $1\leq w_{1}\vee\ldots\vee w_{k}$ in $\mathbf{F}(\mathbf{\Omega})$ consists of a homomorphism $\varphi:\mathbf{Tm}\rightarrow\mathbf{F}(\mathbf{\Omega})$, from the algebra $\mathbf{Tm}$ of all terms in the language of $\ell$-pregroups, such that $\varphi(1)(p)>\varphi(w_{1})(p),\ldots,\varphi(w_{k})(p)$, for some $p\in\Omega$. The failure of an equation in a diagram is formulated relative to an _intensional algebra_, i.e., an algebra over the language $(\cdot,1,^{\ell},^{r})$, playing the role of $\mathbf{F}(\mathbf{\Omega})$. Given a c-chain $\mathbf{\Delta}$, we define the algebra $\mathbf{Pf}(\mathbf{\Delta})=(Pf(\mathbf{\Delta}),{\circ},^{[\ell]},^{[r]},i_{\Delta})$, where $Pf(\mathbf{\Delta})$ is the set of all the order-preserving partial functions over $\mathbf{\Delta}$, $\circ$ is the composition of partial functions, $i_{\Delta}$ is the identity function on $\Delta$, and $g\mapsto g^{[\ell]}$ and $g\mapsto g^{[r]}$ are the two inversion operations as defined on $\mathbf{\Delta}$. The following lemma follows from the definition of counterparts and by iterations of Lemma 3.4. ###### Lemma 3.5. If $\mathbf{\Delta}$ is a c-chain and $g,f$ are partial functions on $\mathbf{\Delta}$ that are $n$-periodic with respect to a spacing embedding $e$, then 1. 1. $(f\circ g)^{e}=f^{e}\circ g^{e}$ and 2. 2. for all $m\in\mathbb{Z}$, $(g^{[m]})^{e}\subseteq(g^{e})^{[m]}$. Moreover, for every $u\in\mathbf{Ti}$ with $l$-many variables and all partial functions $g_{1},\ldots,g_{l}$ on $\mathbf{\Delta}$ that are $n$-periodic with respect to $e$, we have $(u^{\mathbf{Pf}(\mathbf{\Delta})}(g_{1},\ldots,g_{l}))^{e}\subseteq u^{\mathbf{Pf}(\mathbb{Z})}(g_{1}^{e},\ldots,g_{l}^{e})$. ###### Proof. (1) is easy to see and (2) follows from Lemma 3.4.
We will prove the remaining inclusion by induction on the structure of terms. For the induction step, let $u$ be a term of the form $u=x^{(m)}v$ where $m\in\mathbb{Z}$, $v$ is a term in $\mathbf{Ti}$ over variables $x_{1},\ldots,x_{l}$, and $x\in\\{x_{1},\ldots,x_{l}\\}$. Also, let $g_{1},\ldots,g_{l}$ be partial functions on $\mathbf{\Delta}$ that are $n$-periodic with respect to the same spacing embedding $e$ such that $(v^{\mathbf{Pf}(\mathbf{\Delta})}(g_{1},\ldots,g_{l}))^{e}\subseteq v^{\mathbf{Pf}(\mathbb{Z})}(g_{1}^{e},\ldots,g_{l}^{e})$; we will write $g$ for the partial function corresponding to $x$. If $a\in Dom((u^{\mathbf{Pf}(\mathbf{\Delta})}(g_{1},\ldots,g_{l}))^{e})$, then $a\in Dom((g^{[m]})^{e}(v^{\mathbf{Pf}(\mathbf{\Delta})}(g_{1},\ldots,g_{l}))^{e})$ by (1), so $a\in Dom((v^{\mathbf{Pf}(\mathbf{\Delta})}(g_{1},\ldots,g_{l}))^{e})$, in particular. So, $e^{-1}a\in Dom(v^{\mathbf{Pf}(\mathbf{\Delta})}(g_{1},\ldots,g_{l}))$ and $v^{\mathbf{Pf}(\mathbf{\Delta})}(g_{1},\ldots,g_{l})(e^{-1}a)\in Dom(g^{[m]})$. So, by (1), $(u^{\mathbf{Pf}(\mathbf{\Delta})}(g_{1},\ldots,g_{l}))^{e}(a)=(g^{[m]})^{e}(v^{\mathbf{Pf}(\mathbf{\Delta})}(g_{1},\ldots,g_{l}))^{e}(a)$ and by hypothesis $(g^{[m]})^{e}(v^{\mathbf{Pf}(\mathbf{\Delta})}(g_{1},\ldots,g_{l}))^{e}(a)=(g^{[m]})^{e}(v^{\mathbf{Pf}(\mathbb{Z})}(g_{1}^{e},\ldots,g_{l}^{e}))(a)$. By (2), $(g^{[m]})^{e}(v^{\mathbf{Pf}(\mathbb{Z})}(g_{1}^{e},\ldots,g_{l}^{e}))(a)=(g^{e})^{[m]}(v^{\mathbf{Pf}(\mathbb{Z})}(g_{1}^{e},\ldots,g_{l}^{e}))(a)$, therefore, $(u^{\mathbf{Pf}(\mathbf{\Delta})}(g_{1},\ldots,g_{l}))^{e}(a)=u^{\mathbf{Pf}(\mathbb{Z})}(g_{1}^{e},\ldots,g_{l}^{e})(a)$. ∎ For $n\in\mathbb{Z}^{+}$, a diagram is called _$n$ -periodic_ with respect to some spacing embedding if all of its partial functions are $n$-periodic with respect to that embedding. A diagram is called _$n$ -periodic_ if it is $n$-periodic with respect to some spacing embedding. We say that the equation $1\leq w_{1}\vee\ldots\vee w_{k}$ in intensional form over variables $x_{1},\ldots,x_{l}$ _fails_ in a diagram $(\mathbf{\Delta},f_{1},\ldots,f_{l})$ if there is an intensional homomorphism $\varphi:\mathbf{Ti}\rightarrow\mathbf{Pf}(\mathbf{\Delta})$, from the algebra $\mathbf{Ti}$ of all intensional terms, and a point $p\in\Delta$ such that $\varphi(1)(p)>\varphi(w_{1})(p),\ldots,\varphi(w_{k})(p)$ and $\varphi(x_{i})=f_{i}$ for all $1\leq i\leq l$. In the following, we denote the length of an equation $\varepsilon$ by $|\varepsilon|$. ###### Lemma 3.6. If an equation $\varepsilon$ fails in a diagram, then it fails in an $n$-periodic diagram, where $n=2^{|\varepsilon|}|\varepsilon|^{4}$. ###### Proof. In the proof of Corollary 4.4 of [11] it is shown that if an equation $\varepsilon$ fails in a diagram, then it also fails in a diagram where the c-chain has size $n=2^{|\varepsilon|}|\varepsilon|^{4}$. Note that every diagram based on a c-chain $\mathbf{\Delta}$ is $n$-periodic for $n\geq|\Delta|$, as follows. For every partial function $g$ of the diagram, we consider the (unique) spacing embedding $e:\Delta\rightarrow\mathbb{Z}$ where $e[\Delta]=\mathbb{N}_{d}$ and $d=|\Delta|$. Then the partial function $g^{e}$ is defined on the convex subset $\mathbb{N}_{d}$ of $\mathbb{Z}$, so it can be extended to a partial function with domain $\mathbb{N}_{n}$ (in any order-preserving way) since $d\leq n$. This order-preserving partial function is vacuously $n$-periodic, so it can be further extended periodically to an $n$-periodic function on $\mathbb{Z}$ by Lemma 3.3.
Thus, every partial function of the diagram is $n$-periodic, hence the equation fails in an $n$-periodic diagram. ∎ The following lemma will help us show that if an equation $\varepsilon$ fails in a distributive $\ell$-pregroup, then it fails in a periodic one. ###### Lemma 3.7. If an equation fails in an $n$-periodic diagram, then it fails in $\mathbf{F}_{n}(\mathbb{Z})$. ###### Proof. Suppose that an equation $1\leq w_{1}\vee\ldots\vee w_{t}$, in intensional form, fails in a diagram $(\mathbf{\Delta},g_{1},\ldots,g_{l})$ that is $n$-periodic with respect to a spacing embedding $e:\mathbf{\Delta}\rightarrow(\mathbb{Z},\leq_{\mathbb{Z}},\Yleft_{\mathbb{Z}})$, where $\min(e[\Delta])=0$. This means that there exist $p\in\Delta$ and an intensional homomorphism $\varphi:\mathbf{Ti}\rightarrow\mathbf{Pf}(\mathbf{\Delta})$ where $\varphi(x_{i})=g_{i}$ for all $1\leq i\leq l$, satisfying $\varphi(1)(p)>\varphi(w_{1})(p),\ldots,\varphi(w_{t})(p)$. For each $i\in\\{1,\ldots,l\\}$, by Lemma 3.3 the counterpart of $g_{i}$ by $e$ can be extended to a function $f_{i}\in F_{n}(\mathbb{Z})$. We extend the assignment $\hat{\varphi}(x_{i})=g^{e}_{i}$ to an intensional homomorphism $\hat{\varphi}:\mathbf{Ti}\rightarrow\mathbf{Pf}(e[\mathbf{\Delta}])$; by Lemma 3.5 we have $\varphi(u)^{e}(e(p))=\hat{\varphi}(u)(e(p))$, for all $u\in FS$. It follows from the last paragraphs of Theorem 4.1 in [11] that there is a homomorphism ${\psi}:\mathbf{Ti}\rightarrow\mathbf{F}(\mathbb{Z})$ extending the assignment $\psi(x_{i})=f_{i}$ and satisfying $\psi(u)(e(p))=\hat{\varphi}(u)(e(p))$ for all $u\in FS$. Since the extensions $f_{1},\ldots,f_{l}$ are in $F_{n}(\mathbb{Z})$, the range of $\psi$ is in $\mathbf{F}_{n}(\mathbb{Z})$. Also $\varepsilon$ fails in $\mathbf{F}_{n}(\mathbb{Z})$ since for all $i\in\\{1,\ldots,t\\}$, we have $\psi(w_{i})(e(p))=e\varphi(w_{i})(p)<\psi(1)(e(p))=e\varphi(1)(p)$. ∎ ###### Theorem 3.8. An equation $\varepsilon$ fails in $\mathsf{DLP}$ iff it fails in $\mathbf{F}_{n}(\mathbb{Z})$, where $n=2^{|\varepsilon|}|\varepsilon|^{4}$. ###### Proof. If an equation fails in $\mathsf{DLP}$, then by [11] it fails in a diagram. By Lemma 3.6, the equation fails in an $n$-periodic diagram, where $n=2^{|\varepsilon|}|\varepsilon|^{4}$. By Lemma 3.7, the equation fails in $\mathbf{F}_{n}(\mathbb{Z})$. On the other hand, if $\varepsilon$ fails in $\mathbf{F}_{n}(\mathbb{Z})$, then $\varepsilon$ fails in $\mathsf{DLP}$, since $\mathbf{F}_{n}(\mathbb{Z})\in\mathsf{DLP}$. ∎ This reduces the decidability of the equational theory of $\mathsf{DLP}$ to the decidability of all of the $\mathbf{F}_{n}(\mathbb{Z})$'s: given an equation to check, we know which $\mathbf{F}_{n}(\mathbb{Z})$ to test it in. In Section 4 we will prove that the equational theory of $\mathbf{F}_{n}(\mathbb{Z})$ is decidable, for each $n$. The following theorem shows that $\mathsf{DLP}$ can be expressed as a join in two different ways. ###### Theorem 3.9. $\mathsf{DLP}=\bigvee\mathsf{LP_{n}}=\bigvee\mathsf{V}(\\{\mathbf{F}_{n}(\mathbb{Z})\\})=\mathsf{V}(\\{\mathbf{F}_{n}(\mathbb{Z}):n\in\mathbb{Z}^{+}\\})$. ###### Proof. By Theorem 3.8, we have $\mathsf{DLP}\subseteq\mathsf{V}(\\{\mathbf{F}_{n}(\mathbb{Z}):n\in\mathbb{Z}^{+}\\})$. Furthermore, by general facts we have $\mathsf{V}(\\{\mathbf{F}_{n}(\mathbb{Z}):n\in\mathbb{Z}^{+}\\})=\bigvee\mathsf{V}(\\{\mathbf{F}_{n}(\mathbb{Z})\\})$.
Since each algebra $\mathbf{F}_{n}(\mathbb{Z})$ is $n$-periodic, we get $\mathsf{V}(\\{\mathbf{F}_{n}(\mathbb{Z})\\})\subseteq\mathsf{LP_{n}}$, so $\bigvee\mathsf{V}(\\{\mathbf{F}_{n}(\mathbb{Z})\\})\subseteq\bigvee\mathsf{LP_{n}}$. Finally, by [7], for every $n$, $\mathsf{LP_{n}}\subseteq\mathsf{DLP}$, so $\bigvee\mathsf{LP_{n}}\subseteq\mathsf{DLP}$. ∎ This complements nicely the result of [11] that $\mathsf{DLP}$ is generated by $\mathbf{F}(\mathbb{Z})$. ### 3.3 From the join down to the varieties? It follows from Theorem 3.9 that $\bigvee\mathsf{LP_{n}}=\bigvee\mathsf{V}(\mathbf{F}_{n}(\mathbb{Z}))$. For each $n$, we want to find a generating algebra for $\mathsf{LP_{n}}$, so it is tempting to conjecture that $\mathsf{LP_{n}}=\mathsf{V}(\mathbf{F}_{n}(\mathbb{Z}))$, for all (or at least some) $n$. This fails for $n=1$, since $\mathsf{LP}_{1}$ is the variety of $\ell$-groups while $\mathbf{F}_{1}(\mathbb{Z})$ is isomorphic to the $\ell$-pregroup on the integers (it consists of only the translations on the integers) and $\mathsf{V}(\mathbf{F}_{1}(\mathbb{Z}))$ is the variety of abelian $\ell$-groups. Corollary 3.11 below shows that this actually fails for every single $n$, i.e., for all $n$, the algebra $\mathbf{F}_{n}(\mathbb{Z})$ generates a proper subvariety of $\mathsf{LP_{n}}$. In other words, it will follow that if an equation fails in a distributive $\ell$-pregroup, then it fails in $\mathsf{LP_{n}}$ and it fails in $\mathsf{V}(\mathbf{F}_{m}(\mathbb{Z}))$ for some $m,n$ (which can then be taken to be minimal such), and also that $n\leq m$ (since $\mathsf{V}(\mathbf{F}_{k}(\mathbb{Z}))\subseteq\mathsf{LP}_{k}$, for all $k$), but we necessarily have $n<m$. The combination of $\mathsf{V}(\mathbf{F}_{n}(\mathbb{Z}))\not=\mathsf{LP_{n}}$, for all $n$, and $\bigvee\mathsf{V}(\mathbf{F}_{n}(\mathbb{Z}))=\bigvee\mathsf{LP_{n}}$ is quite interesting and surprising. This means that for each $n$, the algebra $\mathbf{F}_{n}(\mathbb{Z})$ satisfies a special equation, but there is no special equation that is satisfied by all of the $\mathbf{F}_{n}(\mathbb{Z})$’s. We prove that $\mathsf{V}(\mathbf{F}_{n}(\mathbb{Z}))\not=\mathsf{LP_{n}}$, for each $n$, by providing such an equation tailored to each $n$. The equation states that all invertible elements commute. Clearly, the invertible elements in an $\ell$-pregroup are the $1$-periodic elements and, by Lemma 2.6, they form a subalgebra, which is therefore an $\ell$-group; this is the maximal subalgebra that is an $\ell$-group and we call it the _maximal $\ell$-subgroup_ of the $\ell$-pregroup. Therefore, in an $\ell$-pregroup the invertible elements commute iff its maximal $\ell$-subgroup is abelian. It is easy to see that an element $g$ is invertible iff $g^{\ell}=g^{r}$. Therefore, the property that all invertibles commute can be captured by the quasiequation $(x^{\ell}=x^{r}\;\&\;y^{\ell}=y^{r})\Rightarrow xy=yx.$ The lemma below entails that this quasiequation is not equivalent to any equation. However, when restricted to $n$-periodic $\ell$-pregroups, for a fixed $n$, this can be captured by the equation $(x\vee x^{\ell\ell}\vee\cdots\vee x^{\ell^{2n-2}})(y\vee y^{\ell\ell}\vee\cdots\vee y^{\ell^{2n-2}})=(y\vee\cdots\vee y^{\ell^{2n-2}})(x\vee\cdots\vee x^{\ell^{2n-2}})$. For convenience, for all $n$, we define the term $i_{n}(x):=x\vee x^{\ell\ell}\vee\cdots\vee x^{\ell^{2n-2}}$. ###### Lemma 3.10. For every $n$ and every $n$-periodic $\ell$-pregroup $\mathbf{A}$, the following hold. 1. 1.
The invertible elements of the algebra $\mathbf{A}$ are exactly the ones of the form $i_{n}(a)=a\vee a^{\ell\ell}\vee\cdots\vee a^{\ell^{2n-2}}$, where $a\in A$. 2. The invertible elements of the algebra $\mathbf{A}$ commute iff $\mathbf{A}$ satisfies the equation $i_{n}(x)i_{n}(y)=i_{n}(y)i_{n}(x)$. 3. The equation $i_{n}(x)i_{n}(y)=i_{n}(y)i_{n}(x)$ holds in $\mathbf{F}_{n}(\mathbb{Z})$ but fails in $\mathsf{LP_{n}}$. ###### Proof. (1) Since $\mathbf{A}$ is $n$-periodic, it satisfies $x^{\ell^{2n}}=x$, so for all $a\in A$, we have $(a\vee a^{\ell\ell}\vee\cdots\vee a^{\ell^{2n-4}}\vee a^{\ell^{2n-2}})^{\ell\ell}=a^{\ell\ell}\vee a^{\ell^{4}}\vee\cdots\vee a^{\ell^{2n-2}}\vee a^{\ell^{2n}}=a^{\ell\ell}\vee a^{\ell^{4}}\vee\cdots\vee a^{\ell^{2n-2}}\vee a$. Therefore, $a\vee a^{\ell\ell}\vee\cdots\vee a^{\ell^{2n-2}}$ is invertible. Conversely, if $a$ is invertible, then $a\vee a^{\ell\ell}\vee\cdots\vee a^{\ell^{2n-2}}=a$, so it is of this form. (2) follows from (1). (3) The only invertible elements of $\mathbf{F}_{n}(\mathbb{Z})$ are the translations, so they commute with each other; thus $\mathbf{F}_{n}(\mathbb{Z})$ satisfies the equation. However, there exist non-commutative $\ell$-groups, such as the group $\mathbf{F}_{1}(\mathbb{Q})$ of all order-preserving permutations on $\mathbb{Q}$. All $\ell$-groups are $1$-periodic, hence also $n$-periodic. Thus, $\mathbf{F}_{1}(\mathbb{Q})$ is in $\mathsf{LP_{n}}$, but not all of its invertible elements commute, since all of its elements are invertible. ∎ Actually, even though by the following corollary $\mathbf{F}_{n}(\mathbb{Z})$ fails to generate $\mathsf{LP_{n}}$, our second representation theorem (Theorem 2.16), stating that every $n$-periodic $\ell$-pregroup can be embedded into a wreath product $\mathbf{H}\wr\mathbf{F}_{n}(\mathbb{Z})$, where $\mathbf{H}$ is an $\ell$-group, showcases the importance of the algebra $\mathbf{F}_{n}(\mathbb{Z})$ in studying $\mathsf{LP_{n}}$. In that respect, Lemma 3.7 (which shows that failures in $n$-periodic diagrams can be embedded in $\mathbf{F}_{n}(\mathbb{Z})$) will turn out to be very useful in the next sections. ###### Corollary 3.11. $\mathsf{LP_{n}}\not=\mathsf{V}(\mathbf{F}_{n}(\mathbb{Z}))$, for all $n$. For each $n$, we want to find a chain $\mathbf{\Omega}_{n}$ such that $\mathsf{LP_{n}}=\mathsf{V}(\mathbf{F}_{n}(\mathbf{\Omega}_{n}))$. Corollary 3.11 shows that we cannot take $\mathbf{\Omega}_{n}=\mathbb{Z}$ for any $n$. At this moment it is not clear that such an $\mathbf{\Omega}_{n}$ exists for any $n$ other than $n=1$: we can take $\mathbf{\Omega}_{1}=\mathbb{Q}$ because the variety of $\ell$-groups is generated by $\mathbf{F}_{1}(\mathbb{Q})$. Actually, the next result shows that we cannot take $\mathbf{\Omega}_{n}=\mathbb{Q}$ for any $n>1$. Recall that an element of a chain is called a _limit point_ if it is the join of all of the elements strictly below it or the meet of all of the elements strictly above it. ###### Lemma 3.12. If $\mathbf{\Omega}$ is a chain where every point is a limit point, then we get $\mathbf{F}_{n}(\mathbf{\Omega})=\mathbf{F}_{1}(\mathbf{\Omega})$, for all $n$. In particular, $\mathbf{F}_{n}(\mathbb{Q})=\mathbf{F}_{1}(\mathbb{Q})$, for all $n$. Therefore, $\mathsf{LP_{n}}\not=\mathsf{V}(\mathbf{F}_{n}(\mathbb{Q}))$ for each $n>1$. ###### Proof.
By Lemma 2.7 of [11], for $f\in F_{n}(\mathbf{\Omega})$, if the preimage of an element under $f$ has more than one element, then that element is not a limit point, a contradiction; so every preimage must have at most one element, i.e., $f$ is one-to-one. To show that it is onto, assume an element $a$ is not in the image. By Lemma 2.4 there exists a covering pair $b\prec c$ such that $a\in(f(b),f(c))$, contradicting the fact that there are no covering pairs in $\mathbf{\Omega}$. ∎ In Section 5 we prove that for each $n$ there indeed exists an $\mathbf{\Omega}_{n}$ such that $\mathsf{LP_{n}}=\mathsf{V}(\mathbf{F}_{n}(\mathbf{\Omega}_{n}))$. Actually, we do better than that: we identify a single chain $\mathbf{\Omega}$ such that $\mathsf{LP_{n}}=\mathsf{V}(\mathbf{F}_{n}(\mathbf{\Omega}))$ for all $n$. This single/uniform chain is $\mathbf{\Omega}=\mathbb{Q}\overrightarrow{\times}\mathbb{Z}$. Due to the wreath-product decomposition $\mathbf{H}\wr\mathbf{F}_{n}(\mathbb{Z})$ of $\mathbf{F}_{n}(\mathbf{\Omega})$, where $\mathbf{\Omega}$ is integral, the analysis in Section 5 will benefit from the study of the algebra $\mathbf{F}_{n}(\mathbb{Z})$, so the next section is devoted to that. ## 4 Decidability for $F_{n}(\mathbb{Z})$ The wreath product decomposition for $\mathbf{F}_{n}(\mathbf{\Omega})$ of Theorem 2.16 brings to the forefront the role of $\mathbf{F}_{n}(\mathbb{Z})$ in understanding $n$-periodic $\ell$-pregroups. In particular, it turns out that a lot of the notions (such as $n$-periodicity in a diagram) that will be needed for the generation result for $\mathsf{LP_{n}}$ already show up when studying the structure of $\mathbf{F}_{n}(\mathbb{Z})$. In this section we study this structure and actually prove that the equational theory of $\mathbf{F}_{n}(\mathbb{Z})$ is decidable. The results of this section will also be the main driving force for proving that the equational theory of $n$-periodic $\ell$-pregroups is decidable in Section 5. ### 4.1 Failures in compatible surjections Given an equation $\varepsilon$ in intensional form $1\leq w_{1}\vee\ldots\vee w_{k}$ we will define a set $\Delta_{\varepsilon}$ of terms in an expansion of the intensional language. First we define the set of _final subwords_ of $\varepsilon$, $FS_{\varepsilon}:=\\{u:w_{1}=vu\text{ or }...\text{ or }w_{k}=vu,\text{ for some }v\\}$. We actually take $\mathbf{Ti}$ to satisfy the equalities $v\cdot 1=v=1\cdot v$, so strictly speaking we take it to be a quotient of the absolutely free algebra, so $FS_{\varepsilon}$ contains $1$. The particular syntactic expressions below, serving as names for the points in $\Delta_{\varepsilon}$, will not be important for our results, since the work done in [11] (which we will be citing when appropriate) ensures that $\Delta_{\varepsilon}$ has enough points for the iterated inverses $g^{[m]}$ to be calculated correctly; we include the definition for completeness. In the following, when evaluating the terms, the notations $-a$ and $+a$ will be interpreted as the lower and upper cover of $a$, respectively, and $0a=a$.
For a variable $x$ among the variables $x_{1},\ldots,x_{l}$ of $\varepsilon$, $m\in\mathbb{N}$ and $v\in FS_{\varepsilon}$ we define $\Delta_{x,m}^{v}:=\\{v\\}\cup\bigcup_{j=0}^{m}\\{\sigma_{j}x^{(j)}\ldots\sigma_{m}x^{(m)}v:\,\sigma_{j},\ldots,\sigma_{m}\in\\{-,0\\},\sigma_{0}=0\\}$, $\Delta_{x,-m}^{v}:=\\{v\\}\cup\bigcup_{j=0}^{m}\\{\sigma_{j}x^{(-j)}\ldots\sigma_{m}x^{(-m)}v:\,\sigma_{j},\ldots,\sigma_{m}\in\\{+,0\\},\sigma_{0}=0\\}$, $S_{\varepsilon}:=\\{(i,m,v):i\in\\{1,\ldots,l\\},m\in\mathbb{Z},v\in FS_{\varepsilon}\text{ and }x_{i}^{(m)}v\in FS_{\varepsilon}\\}$, and $\Delta_{\varepsilon}:=\\{1\\}\cup\underset{(i,m,v)\in S_{\varepsilon}}{\bigcup}\Delta_{x_{i},m}^{v}$. It is easy to see that $\Delta_{\varepsilon}$ is finite and that it contains $FS_{\varepsilon}$. The notion of a compatible surjection that we now define is a sister notion to that of a diagram that allows for tighter control on failures of equations. The underlying set of a diagram can be arbitrary and the partial functions need to be explicitly given, but for compatible surjections (which are tied to a given equation $\varepsilon$) the underlying set has labels inherited from $\Delta_{\varepsilon}$ and this is enough information to construct the partial functions. Given an equation $\varepsilon(x_{1},\ldots,x_{l})$ in intensional form, a _compatible surjection_ for $\varepsilon$ is an onto map $\varphi:\Delta_{\varepsilon}\rightarrow\mathbb{N}_{q}$, where $\mathbb{N}_{q}=\\{1,\ldots,q\\}$ has its natural order (and $q\leq|\Delta_{\varepsilon}|$), such that: * (i) The relation $g_{i}:=\\{(\varphi(u),\varphi(x_{i}u))\mid u,x_{i}u\in\Delta_{\varepsilon}\\}$ on $\mathbb{N}_{q}$ is an order-preserving partial function for all $i\in\\{1,\ldots,l\\}$. * (ii) The relation ${\Yleft}:=\\{(\varphi(v),\varphi(+v))\mid v,+v\in\Delta_{\varepsilon}\\}\cup\\{(\varphi(-v),\varphi(v))\mid v,-v\in\Delta_{\varepsilon}\\}$ on $\mathbb{N}_{q}$ is contained in the covering relation $\prec$ of $\mathbb{N}_{q}$. * (iii) $\varphi(x_{i}^{(m)}u)=g_{i}^{[m]}(\varphi(u))$, when $i\in\\{1,\ldots,l\\}$, $m\in\mathbb{Z}$ and $u,x_{i}^{(m)}u\in\Delta_{\varepsilon}$. Note that the first two conditions ensure that $\mathbf{D}_{\varepsilon,\varphi}:=(\mathbb{N}_{q},{\leq},{\Yleft},g_{1},\ldots,g_{l})$ is a diagram; in (iii), $g_{i}^{[m]}$ is calculated in this diagram. Also, it follows that the relation $\Yleft$ is an order-preserving partial function. We say that $\varphi$ is _$n$-periodic_ with respect to a spacing embedding, if all $g_{i}$’s are $n$-periodic with respect to that embedding; this is equivalent to $\mathbf{D}_{\varepsilon,\varphi}$ being $n$-periodic with respect to that embedding. We say that $\varphi$ is _$n$-periodic_ if it is $n$-periodic with respect to some spacing embedding. We say that the equation $1\leq w_{1}\vee\ldots\vee w_{k}$ in intensional form _fails_ in a compatible surjection $\varphi$, if $\varphi(w_{1}),\ldots,\varphi(w_{k})<\varphi(1)$. We denote by $\mathbf{Pf}(\mathbf{\Delta})^{\pm}$ the expansion of the algebra $\mathbf{Pf}(\mathbf{\Delta})$ with two distinguished elements $+$ and $-$, where $+(a)=b$ iff $a\Yleft b$ and $-(a)=b$ iff $b\Yleft a$.
Also, every intensional homomorphism $\varphi:\mathbf{Ti}\rightarrow\mathbf{Pf}(\mathbf{\Delta})$ extends to a homomorphism $\varphi^{\pm}:\mathbf{Ti}^{\pm}\rightarrow\mathbf{Pf}(\mathbf{\Delta})^{\pm}$, where $\mathbf{Ti}^{\pm}$ is the expansion with the two constants. Likewise, when $\mathbf{\Omega}$ is an integral chain, we can consider homomorphisms $\varphi^{\pm}:\mathbf{Ti}^{\pm}\rightarrow\mathbf{F}(\mathbf{\Omega})^{\pm}$, where $\mathbf{F}(\mathbf{\Omega})^{\pm}$ denotes the expansion of $\mathbf{F}(\mathbf{\Omega})$ with the functions $+$ and $-$ that give the upper cover and the lower cover of an element. ###### Theorem 4.1. If an equation $\varepsilon$ fails in $\mathbf{F}_{n}(\mathbb{Z})$, then it also fails in some $n$-periodic compatible surjection for $\varepsilon$. ###### Proof. If $\varepsilon=\varepsilon(x_{1},\ldots,x_{l})$ is an equation in intensional form $1\leq w_{1}\vee\ldots\vee w_{k}$ that fails in $\mathbf{F}_{n}(\mathbb{Z})$, then there exists a list $f=(f_{1},\ldots,f_{l})$ of elements of $\mathbf{F}_{n}(\mathbb{Z})$ and $p\in\mathbb{Z}$ such that $w_{1}^{\mathbf{F}_{n}(\mathbb{Z})}(f)(p),\ldots,w_{k}^{\mathbf{F}_{n}(\mathbb{Z})}(f)(p)<p$; here $f_{i}=\psi(x_{i})$ where $\psi:\mathbf{Ti}\mathbin{\rightarrow}\mathbf{F}_{n}(\mathbb{Z})$ is the homomorphism witnessing the failure of $\varepsilon$. We denote by $\psi^{\pm}:\mathbf{Ti}^{\pm}\mathbin{\rightarrow}\mathbf{F}_{n}(\mathbb{Z})^{\pm}$ the extension of $\psi$. In Theorem 3.6 of [11] it is shown that $\psi_{p}:\Delta_{\varepsilon}\rightarrow\psi_{p}[\Delta_{\varepsilon}]$ is a compatible surjection, where the order on $\psi_{p}[\Delta_{\varepsilon}]$ is inherited from $\mathbb{Z}$ and $\psi_{p}(u):=\psi^{\pm}(u)(p)=u^{\mathbf{F}_{n}(\mathbb{Z})^{\pm}}(f)(p)$ for $u\in\Delta_{\varepsilon}$. (In [11] the compatible surjection is actually taken to be the composition of $\psi_{p}$ with the (unique) isomorphism of the chain $\psi_{p}[\Delta_{\varepsilon}]$ with the initial segment $\mathbb{N}_{q}$ of $\mathbb{Z}^{+}$, where $q=|\psi_{p}[\Delta_{\varepsilon}]|$, but we do not need to do that.) To simplify the notation, we write $u_{f,p}$ for $u^{\mathbf{F}_{n}(\mathbb{Z})^{\pm}}(f)(p)$. Also, in the same theorem of [11] it is shown that $\varepsilon$ fails in $\psi_{p}$. We will show that $\psi_{p}$ is $n$-periodic. Suppose $\psi_{p}(u)\leq\psi_{p}(v)+kn$ for some $u,v\in\Delta_{\varepsilon}$ and $k\in\mathbb{Z}$, i.e., $u_{f,p}\leq v_{f,p}+kn$. Given that $f_{i}\in F_{n}(\mathbb{Z})$, we have $\psi_{p}(x_{i}u)=(x_{i}u)_{f,p}=f_{i}(u_{f,p})\leq f_{i}(v_{f,p})+kn=(x_{i}v)_{f,p}+kn=\psi_{p}(x_{i}v)+kn$. This shows that all the partial functions are $n$-periodic with respect to the identity spacing embedding on $\mathbb{Z}$. ∎ ### 4.2 Controlling the automorphisms of $\mathbb{Z}$. As mentioned before, a key issue with the definition of $n$-periodicity in a diagram is that it does not provide any control of the spacing embeddings in $\mathbb{Z}$. The spacing has to do with the fact that some functions $f$ in $\mathbf{F}_{n}(\mathbb{Z})$ have a big numerical difference $f(m)-m$ between their input and output values and thus may span multiple periods (this can be thought of as the _height_ of $f$ as measured at $m$). The lack of control of these differences translates into lack of control of the spacing embedding $e$ (when moving from a partial function on a diagram to a function of $\mathbf{F}_{n}(\mathbb{Z})$).
Put differently, the problem is not the numerical size of the set $\Delta$ on which a diagram is based, nor that of its bijective image $e[\Delta]$ in $\mathbb{Z}$, but rather the size of the convexification of $e[\Delta]$ in $\mathbb{Z}$. We will show that every function $f$ in $\mathbf{F}_{n}(\mathbb{Z})$ naturally decomposes into an automorphism $f^{\circ}\in F_{1}(\mathbb{Z})$ and a _short_ function $f^{*}\in F_{n}(\mathbb{Z})$, i.e., a function whose height at every point is bounded by $2n$. In that sense, the height problem is now focused only on the $f^{\circ}$ component of $f$; in this section, we prove results that control the height of $f^{\circ}$. The following definition characterizes all the spacing embeddings that interact well with a c-chain. Given a finite sub c-chain $\mathbf{\Delta}$ of $\mathbb{Z}$, a spacing embedding $e:\mathbf{\Delta}\rightarrow(\mathbb{Z},\leq_{\mathbb{Z}},\Yleft_{\mathbb{Z}})$ is said to _transfer $n$-periodicity_ if for every $f\in F_{n}(\mathbb{Z})$, $(f|_{\Delta\times\Delta})^{e}$, when non-empty, can be extended to an element of $F_{n}(\mathbb{Z})$; here $f|_{\Delta\times\Delta}$ is the partial function obtained by restricting both domain and range to $\Delta$ and $(f|_{\Delta\times\Delta})^{e}$ is its counterpart in $e[\Delta]$. Below we show that these spacing embeddings, for $n=1$, can be found by solving a linear system. A finite subset $\Delta=\\{p_{0},\ldots,p_{l}\\}$ of $\mathbb{Z}$, where $p_{0}<p_{1}<\ldots<p_{l}$, is called a _solution_ to a system of equations, if the vector $y_{\Delta}:=(p_{1}-p_{0}-1,\ldots,p_{l}-p_{l-1}-1)$ of spaces between the points of $\Delta$ is a solution. Also, we say that a spacing embedding is a _solution_ to a system, if its image is a solution. ###### Lemma 4.2. If $\Delta=\\{p_{0},\ldots,p_{l}\\}$ is a finite subset of $\mathbb{Z}$, where $p_{0}<p_{1}<\ldots<p_{l}$, and $y_{\Delta}:=(p_{1}-p_{0}-1,\ldots,p_{l}-p_{l-1}-1)$, then for every $j,z,j^{\prime},z^{\prime}\in\\{0,\ldots,l\\}$ with $z\leq j$ and $z^{\prime}\leq j^{\prime}$, we have $p_{j}-p_{z}=p_{j^{\prime}}-p_{z^{\prime}}$ iff $\sum_{k=z+1}^{j}y_{k}-\sum_{k=z^{\prime}+1}^{j^{\prime}}y_{k}=(j^{\prime}-z^{\prime})-(j-z).$ ###### Proof. Note that for $z\leq j$, we have $p_{j}-p_{z}=\sum_{k=z+1}^{j}(p_{k}-p_{k-1})=\sum_{k=z+1}^{j}(y_{k}+1)=\sum_{k=z+1}^{j}y_{k}+(j-z).$ Therefore, for $z\leq j$ and $z^{\prime}\leq j^{\prime}$ we have: $p_{j}-p_{z}=p_{j^{\prime}}-p_{z^{\prime}}$ iff $\sum_{k=z+1}^{j}y_{k}+j-z=\sum_{k=z^{\prime}+1}^{j^{\prime}}y_{k}+j^{\prime}-z^{\prime}$ iff $\sum_{k=z+1}^{j}y_{k}-\sum_{k=z^{\prime}+1}^{j^{\prime}}y_{k}=(j^{\prime}-z^{\prime})-(j-z)$. ∎ A system of equations $AY=b$, where $A$ is an $L\times l$ matrix, $Y$ a column vector of size $l$ and $b$ a column vector of size $L$, is said to be _$\Delta$-bounded_ if $\Delta$ is a solution to the system, $l\leq|\Delta|$, the entries in $b$ are integers in $[-2|{\Delta}|,2|{\Delta}|]$ and the entries of $A$ are integers in $\\{-1,0,1\\}$. The next lemma states that finding all the spacing embeddings that transfer 1-periodicity amounts to solving a linear system. We stress that sub c-chains of $\mathbb{Z}$ are not assumed to be convex. ###### Lemma 4.3. If $\mathbf{\Delta}$ is a finite sub c-chain of $\mathbb{Z}$, then there exists a $\Delta$-bounded system of equations such that a spacing embedding with domain $\mathbf{\Delta}$ is a solution iff it transfers 1-periodicity. ###### Proof.
First consider all the possible non-empty intersections of automorphisms of $\mathbb{Z}$ with $\Delta\times\Delta$, and note that there are finitely many such intersections, say $s$-many; let $f_{1},\dots,f_{s}\in F_{1}(\mathbb{Z})$ be functions realizing these intersections. Since the intersections are subsets of $\Delta\times\Delta$, we get $s\leq 2^{|\Delta\times\Delta|}=2^{|\Delta|^{2}}$. If $\Delta=\\{p_{0},\ldots,p_{l}\\}$, where $p_{0}<p_{1}<\ldots<p_{l}$, we set $y_{j}:=p_{j}-p_{j-1}-1$, for $j\in\\{1,\ldots,l\\}$ and $y_{\Delta}:=(y_{1},\ldots,y_{l})$; in particular, if $p_{j-1}\Yleft p_{j}$ we have $y_{j}=0$. For $z<j$ and for all $i$, if $p_{z^{\prime}}:=f_{i}(p_{z})$ and $p_{j^{\prime}}:=f_{i}(p_{j})$ are in $\Delta$, then $p_{j}-p_{z}=p_{j^{\prime}}-p_{z^{\prime}}$ since $f_{i}$ is a translation, so by Lemma 4.2 we get that $y_{\Delta}$ satisfies the equation: $\sum_{k=z+1}^{j}Y_{k}-\sum_{k=z^{\prime}+1}^{j^{\prime}}Y_{k}=(j^{\prime}-z^{\prime})-(j-z)$ ($i,z,j$) We also consider the equations $Y_{k}=0$ for all $k\in\\{1,\ldots,l\\}$ where $p_{k-1}\Yleft p_{k}$ (capturing the fact that $y_{k}=0$ for these $k$’s). The resulting finite system, of all $(i,z,j)$-equations and all the $Y_{k}=0$ equations, can be written as $AY=b$, for an $L\times l$ matrix $A$, a column vector $Y$ of size $l$ and a column vector $b$ of size $L$, where the entries of $A$ are in $\\{-1,0,1\\}$, the entries of $b$ have absolute value at most $2|\Delta|$, since they all have the form $(j^{\prime}-z^{\prime})-(j-z)$, and $\Delta$ is a solution of the system. Therefore, the system is $\Delta$-bounded. Let $\overline{\cdot}:\Delta\rightarrow{\mathbb{Z}}$ be a spacing embedding. If $\overline{\Delta}$ is a solution of the system $AY=b$, for each $i\in\\{1,\ldots,s\\}$, we define the function $g_{i}:\mathbb{Z}\rightarrow\mathbb{Z}$ by $g_{i}(x)=x+(\overline{f_{i}(c_{i})}-\overline{c_{i}})$ for all $x\in\mathbb{Z}$, where $c_{i},f_{i}(c_{i})\in\Delta$; thus $g_{i}\in F_{1}(\mathbb{Z})$. Note that $g_{i}$ does not depend on the choice of $c_{i}$: If $k,f_{i}(k)\in\Delta$, then there exist $j,z,j^{\prime},z^{\prime}\in\\{0,\ldots,l\\}$ such that $\overline{c_{i}}=\overline{p_{j}}$, $\overline{k}=\overline{p_{z}}$, $\overline{f_{i}(c_{i})}=\overline{p_{j^{\prime}}}$, and $\overline{f_{i}(k)}=\overline{p_{z^{\prime}}}$; since $\overline{\Delta}$ is a solution, $y_{\overline{\Delta}}$ satisfies the equation $(i,\min\\{j,z\\},\max\\{j,z\\})$, which by Lemma 4.2 is equivalent to $\overline{f_{i}(k)}-\overline{f_{i}(c_{i})}=\overline{k}-\overline{c_{i}}$. Hence, $g_{i}(\overline{k})=\overline{k}+(\overline{f_{i}(c_{i})}-\overline{c_{i}})=\overline{f_{i}(k)}$, for all $k$ such that $k,f_{i}(k)\in\Delta$. As a result, the restriction of $g_{i}$ to $\overline{\Delta}$ is equal to the counterpart of $f_{i}|_{\Delta\times\Delta}$ in $\overline{\Delta}$. Conversely, if $\overline{\cdot}:\Delta\rightarrow{\mathbb{Z}}$ transfers 1-periodicity, for every $i\in\\{1,\dots,s\\}$ the counterpart of ${f_{i}}|_{\Delta\times\Delta}$ by $\overline{\cdot}$ can be extended to a function in $F_{1}(\mathbb{Z})$, i.e., for all $p_{j},p_{z}\in\Delta$ such that $p_{j^{\prime}}:=f_{i}(p_{j}),p_{z^{\prime}}:=f_{i}(p_{z})\in\Delta$, we have that $\overline{p_{j}}-\overline{p_{z}}=\overline{f_{i}(p_{j})}-\overline{f_{i}(p_{z})}$. Hence, $\sum_{k=z+1}^{j}\overline{y_{k}}-\sum_{k=z^{\prime}+1}^{j^{\prime}}\overline{y_{k}}=(j^{\prime}-z^{\prime})-(j-z)$ by Lemma 4.2, so if $z<j$, then $\overline{\Delta}$ satisfies the equation $(i,z,j)$.
Since this is true for all $i\in\\{1,\ldots,s\\}$ and all $j,z$ with $p_{j},p_{z}\in Dom({f_{i}}|_{\Delta\times\Delta})$, and since $\overline{\cdot}$ preserves coverings, we have that $\overline{\Delta}$ satisfies $AY=b$. ∎ The following theorem and its corollary will help us guarantee the existence of small solutions for $\Delta$-bounded systems of equations. ###### Theorem 4.4. [2] Let $Ay=b$ be a system of $M\times l$ linear equations with integer coefficients. Assume the rows of $A$ are linearly independent (hence $M\leq l$) and denote by $\gamma^{\prime}$ (respectively $\gamma$) the maximum of the absolute values of the $M\times M$ minors of the matrix $A$ (the augmented matrix $(A|b)$). If the system has a solution in non-negative integers, then the system has a solution $(y_{k})$ in the non-negative integers with $y_{k}\leq\gamma^{\prime}$ for $l-M$ variables and $y_{k}\leq(l-M+1)\gamma$ for $M$ variables. Given that $l-M+1\geq 1$ and $\gamma^{\prime}\leq\gamma$ we obtain the following corollary. ###### Corollary 4.5. Let $Ay=b$ be a system of $M\times l$ linear equations with integer coefficients. Assume the rows of $A$ are linearly independent, and denote by $\gamma$ the maximum of the absolute values of the $M\times M$ minors of the augmented matrix $(A|b)$. If the system has a solution in non-negative integers, then the system has a solution $(y_{k})$ in non-negative integers with $y_{k}\leq(l-M+1)\gamma$ for all $k$. Using this corollary and Lemma 4.3, we will prove that for every sub c-chain of $\mathbb{Z}$, there exists a spacing embedding of bounded height that transfers 1-periodicity. For convenience, we define $\rho(a):=2a^{3}a!+a+1$, for $a\in\mathbb{Z}^{+}$. ###### Lemma 4.6. Given a finite sub c-chain $\mathbf{\Delta}$ of $\mathbb{Z}$, there exists a spacing embedding with domain $\Delta$ that transfers 1-periodicity and that has height at most $\rho(|\Delta|)$. ###### Proof. By Lemma 4.3, there exists a $\Delta$-bounded system of equations $AY=b$ satisfying the conditions of the lemma. Since the system $AY=b$ is $\Delta$-bounded, $\Delta$ is a solution for it, hence the system is consistent. It follows, say by Theorem 2.38 of [16], that $rank(A)=rank(A|b)$. So we can select a maximal collection of linearly independent rows of $(A|b)$ to obtain an equivalent (sub)system $A^{\prime}Y=b^{\prime}$ where $rank(A|b)=rank(A^{\prime}|b^{\prime})=rank(A^{\prime})$, hence all the rows are linearly independent. Therefore, $A^{\prime}$ has dimensions $M\times l$ where $M\leq l$. Also, since $y_{\Delta}$ is a solution (in the non-negative integers) of $AY=b$, it is also a solution of the equivalent system $A^{\prime}Y=b^{\prime}$. Observe that if $C=(c_{i,j})$ is an $M\times M$ submatrix of $(A^{\prime}|b^{\prime})$, then $|det(C)|=|\sum_{\sigma\in S_{M}}sgn(\sigma)c_{\sigma(1),1}\cdot\ldots\cdot c_{\sigma(M),M}|\leq\sum_{\sigma\in S_{M}}|c_{\sigma(1),1}|\cdot\ldots\cdot|c_{\sigma(M),M}|.$ Since $AY=b$ is $\Delta$-bounded, all columns of $(A^{\prime}|b^{\prime})$, aside from the last column which equals $b^{\prime}$, have entries among $-1,0,1$. Therefore, for all $\sigma\in S_{M}$, we have $|c_{\sigma(i),i}|\leq 1$ for $i\neq M$ and $|c_{\sigma(M),M}|\leq 2|{\Delta}|$, since the entries of $b$ are integers in $[-2|{\Delta}|,2|{\Delta}|]$ (again because of $\Delta$-boundedness).
So $|det(C)|\leq\sum_{\sigma\in S_{M}}|c_{\sigma(1),1}|\cdot\ldots\cdot|c_{\sigma(M),M}|\leq\sum_{\sigma\in S_{M}}|c_{\sigma(M),M}|\leq M!\cdot 2|\Delta|.$ Therefore, if $\gamma$ is the maximum of the absolute values of the $M\times M$ minors of the augmented matrix $(A^{\prime}|b^{\prime})$, then $\gamma\leq M!\cdot 2|\Delta|\leq l!\cdot 2|\Delta|\leq(|\Delta|)!\cdot 2|\Delta|$, where we used that $l\leq|\Delta|$ due to $\Delta$-boundedness. Also $l-M+1\leq l\leq|\Delta|$. So, by Corollary 4.5, there exists a solution $(y_{k})$ in the non-negative integers with $y_{k}\leq(|\Delta|)!\cdot 2|\Delta|^{2}$ for all $k$. Using the solution $(y_{k})$, we define the map $\overline{\cdot}:\Delta\rightarrow\mathbb{Z}$ by setting $\overline{p_{0}}=0$ and $\overline{p_{i}}:=i+\sum^{i}_{j=1}y_{j}$, for all $i\in\\{1,\ldots,l\\}$; so $\overline{\Delta}=\\{\overline{p_{0}},\ldots,\overline{p_{l}}\\}$. The map $\overline{\cdot}$ is order preserving, because the solution $(y_{k})$ has non-negative entries, and it preserves $\Yleft$, since every zero entry in $y_{\Delta}$ implies that the same entry is zero in $y_{\bar{\Delta}}$. Also, observe that $y_{\bar{\Delta}}=(y_{k})$, hence $\overline{\cdot}$ is a solution of $A^{\prime}Y=b^{\prime}$, thus also of $AY=b$. Therefore, $\overline{\cdot}$ is a spacing embedding that is a solution of $AY=b$. Since we took $AY=b$ to be the system given by Lemma 4.3, by that lemma it follows that $\overline{\cdot}$ transfers $1$-periodicity. Since $y_{k}\leq(|\Delta|)!\cdot 2|\Delta|^{2}$ for all $k$, the height of $\overline{\cdot}$ is $1+\overline{p_{l}}=1+l+\sum^{l}_{j=1}y_{j}\leq 1+l+l((|\Delta|)!\cdot 2|\Delta|^{2})\leq 1+|\Delta|+|\Delta|((|\Delta|)!\cdot 2|\Delta|^{2})=1+|\Delta|+(|\Delta|)!\cdot 2|\Delta|^{3}$, hence the height is bounded by $\rho(|\Delta|)$. ∎ ### 4.3 From $\mathbf{F}_{n}(\mathbb{Z})$ to a short $n$-periodic compatible surjection. We will describe the decomposition of functions $f\in\mathbf{F}_{n}(\mathbb{Z})$ into functions of $\mathbf{F}_{1}(\mathbb{Z})$ and short functions of $\mathbf{F}_{n}(\mathbb{Z})$, and combine it with the results of the previous section to ensure that for any $n$-periodic diagram there is a short spacing embedding witnessing the $n$-periodicity. A spacing embedding $e:\mathbf{\Delta}\rightarrow\mathbb{Z}$ over a c-chain $\mathbf{\Delta}$ is called _$n$-short_ if its height is at most $\nu(|\Delta|)$, where $\nu(a)=(\rho(3a)+1)n$ for all $a\in\mathbb{Z}^{+}$. (As always we assume that images of spacing embeddings are non-negative and include $0$). Recall that for a fixed $n\in\mathbb{Z}^{+}$, for every $x\in\mathbb{Z}$, $Rx$ denotes the remainder and $Qx$ the quotient of dividing $x$ by $n$, while $Sx:=x-Rx=nQx$. Given $g\in F_{1}(\mathbb{Z})$ and $h\in F_{n}(\mathbb{Z})$, we have $g,h\in F_{n}(\mathbb{Z})$ so $g\circ h\in F_{n}(\mathbb{Z})$. Conversely, for $f\in F_{n}(\mathbb{Z})$, we define $f^{\circ}(x):=x+Sf(0)$ and $f^{*}(x):=f(x)-Sf(0)$, for all $x\in\mathbb{Z}$. In the following lemmas we will make use of the fact that for all $g\in F_{1}(\mathbb{Z})$ and $x\in\mathbb{Z}$, we have $g(x)=g(x-k)+k$ for all $k\in\mathbb{Z}$, and in particular $g(x)=x+g(0)$. ###### Lemma 4.7. The maps $f\mapsto(f^{\circ},f^{*})$ and $(f^{\circ},f^{*})\mapsto f^{\circ}\circ f^{*}$ define a bijection between $F_{n}(\mathbb{Z})$ and the set of pairs $f^{\circ}\in F_{1}(\mathbb{Z})$ and $f^{*}\in F_{n}(\mathbb{Z})$ with $Sf^{\circ}(0)=f^{\circ}(0)$ and $0\leq f^{*}(0)<n$. ###### Proof.
For $f\in F_{n}(\mathbb{Z})$, we have $f^{\circ}(x):=x+Sf(0)$ and $f^{*}(x):=f(x)-Sf(0)$ for all $x\in\mathbb{Z}$, so $f^{\circ}\in F_{1}(\mathbb{Z})$, $f^{*}\in F_{n}(\mathbb{Z})$, $Sf^{\circ}(0)=S(Sf(0))=Sf(0)=f^{\circ}(0)$ and $0\leq f^{*}(0)=f(0)-Sf(0)<n$. So the map $f\mapsto(f^{\circ},f^{*})$ indeed maps to suitable pairs. Given $f\in F_{n}(\mathbb{Z})$, we have $(f^{\circ}\circ f^{*})(x)=f^{\circ}(f(x)-Sf(0))=f(x)-Sf(0)+Sf(0)=f(x)$. So, the two maps compose to the identity, one way. Conversely, if $g\in F_{1}(\mathbb{Z})$, $h\in F_{n}(\mathbb{Z})$, $Sg(0)=g(0)$, and $0\leq h(0)<n$, then $S(gh(0))=S(h(0)+g(0))=S(h(0)+Sg(0))=S(g(0))=g(0)$. Therefore, $(g\circ h)^{\circ}(x)=x+S(gh(0))=x+g(0)=g(x)$ and $(g\circ h)^{*}(x)=gh(x)-S(gh(0))=h(x)+g(0)-g(0)=h(x)$. ∎ Lemma 4.6 provides spacing embeddings (of controlled height) that transfer $1$-periodicity. Using that result and the decomposition of Lemma 4.7, we will construct spacing embeddings (of controlled height) that transfer $n$-periodicity, for any given $n$. ###### Example 4.8. We will demonstrate the idea of how, given a sub c-chain of $\mathbb{Z}$ based on $\Delta=\\{0,4\\}$, we can obtain a spacing embedding $e$ with domain $\Delta$ that transfers $n$-periodicity. In other words, we will construct $e$ such that given any function $f$ of $F_{n}(\mathbb{Z})$, the partial function $(f|_{\Delta\times\Delta})^{e}$ is extendable to an $n$-periodic function on $\mathbb{Z}$. We first decompose $f$ as $f=f^{\circ}\circ f^{*}$, where $f^{\circ}\in F_{1}(\mathbb{Z})$ and $f^{*}\in F_{n}(\mathbb{Z})$ with $Sf^{\circ}(0)=f^{\circ}(0)$ and $0\leq f^{*}(0)<n$, according to Lemma 4.7; Figure 2 shows an example of that. We will keep $f^{*}$ as it is, but we will obtain an improved version of $f^{\circ}$ and compose it back with $f^{*}$. We will first view $f^{\circ}$ as providing us information about only the multiples of $n$—i.e., only about the subset $n\mathbb{Z}$ of $\mathbb{Z}$—and we will scale $n\mathbb{Z}$ into a copy of $\mathbb{Z}$. Since $f^{\circ}(0)=Sf^{\circ}(0)$, we have $f^{\circ}(0)=kn$ for some $k\in\mathbb{Z}$, hence $f^{\circ}(x)=x+kn$, for all $x\in\mathbb{Z}$; i.e., $f^{\circ}$ is the translation by $kn$. By dividing $kn$ by $n$, we consider the function $f^{\circ}\div n$ that translates by $k$, i.e., $f^{\circ}\div n:\mathbb{Z}\rightarrow\mathbb{Z}$ is given by $(f^{\circ}\div n)(x)=x+k$ for all $x\in\mathbb{Z}$; we set $g:=f^{\circ}\div n$ for brevity. The function $f^{\circ}\div n$ captures the behavior of $f^{\circ}$ on the first elements of the periods (the multiples of $n$), and it scales $n\mathbb{Z}$ into a copy of $\mathbb{Z}$. We will apply Lemma 4.6 to this scaled function. We define the set $\widetilde{\Delta}=\\{Qx:x\in\Delta\\}=\\{0,2\\}$ and recall that Lemma 4.6 produces a spacing embedding $d:\tilde{\Delta}\rightarrow\mathbb{Z}$ that transfers 1-periodicity and has bounded height. So, $(g|_{\tilde{\Delta}\times\tilde{\Delta}})^{d}$ can be extended to a function $g_{\tilde{\Delta},d}:\mathbb{Z}\rightarrow\mathbb{Z}$ that is $1$-periodic; say $g_{\tilde{\Delta},d}(x)=x+\ell$ for all $x\in\mathbb{Z}$, for some $\ell\in\mathbb{Z}$. By Lemma 4.6, the height is controlled in terms of the convexification of $d[\tilde{\Delta}]$. We now scale this improved function $g_{\tilde{\Delta},d}$ back: we view the domain $\mathbb{Z}$ of $g_{\tilde{\Delta},d}$ as a copy of the multiples of $n$, and we move from this subset $n\mathbb{Z}$ to all of $\mathbb{Z}$. 
We define the function $g_{\tilde{\Delta},d}\times n:\mathbb{Z}\rightarrow\mathbb{Z}$ by $(g_{\widetilde{\Delta},d}\times n)(x)=x+n\ell$ for all $x\in\mathbb{Z}$. By composing $g_{\tilde{\Delta},d}\times n$ with $f^{*}$, we obtain a shorter version of $f$. We claim that $e(x)=d(Qx)n+Rx$, for all $x\in\Delta$, defines the desired spacing embedding and that $f_{\Delta,e}:=(g_{\tilde{\Delta},d}\times n)\circ f^{*}$ is an $n$-periodic function extending $(f|_{\Delta\times\Delta})^{e}$; see Figure 2. This is the general process we follow in the proof of the next lemma; for technical reasons the c-chain $\widetilde{\Delta}$ will need to be suitably larger, as we will see in the argument. [Figure 2 depicts, side by side, the maps $f$, $f^{*}$, $f^{\circ}$, $f^{\circ}\div n$, $g_{\tilde{\Delta},d}$, $g_{\tilde{\Delta},d}\times n$, and $f_{\Delta,e}$.] Figure 2: Shortening an element of $\mathbf{F}_{n}(\mathbb{Z})$ ###### Lemma 4.9. Given a finite sub c-chain $\mathbf{\Delta}$ of $\mathbb{Z}$, there exists an $n$-short spacing embedding with domain $\mathbf{\Delta}$ that transfers $n$-periodicity. ###### Proof. For $\widetilde{\Delta}:=\\{Qx:x\in\Delta\\}\cup\\{Qx+1:x\in\Delta\\}\cup\\{Qx-1:x\in\Delta\\}$ we have $|\widetilde{\Delta}|\leq 3|\Delta|$; let $\widetilde{\mathbf{\Delta}}$ be the sub-chain of $\mathbb{Z}$ with base set $\widetilde{\Delta}$. By Lemma 4.6, there exists a spacing embedding $\overline{\cdot}:\widetilde{\Delta}\rightarrow\mathbb{Z}$ that transfers $1$-periodicity and has height at most $\rho(|\widetilde{\Delta}|)$. Let $e:\Delta\rightarrow\mathbb{Z}$ be given by $e(x)=\overline{Qx}n+Rx$, for all $x\in\Delta$. Observe that $\max(e[\Delta])\leq(\rho(|\widetilde{\Delta}|)+1)n\leq(\rho(3|{\Delta}|)+1)n=\nu(|{\Delta}|)$; hence $e$ is $n$-short. We will show that $e$ transfers $n$-periodicity. If $f\in F_{n}(\mathbb{Z})$ and $f|_{\Delta\times\Delta}$ is non-empty, by Lemma 4.7 there exist $f^{\circ}\in F_{1}(\mathbb{Z})$ and $f^{*}\in F_{n}(\mathbb{Z})$ with $0\leq f^{*}(0)<n$, $Sf^{\circ}(0)=f^{\circ}(0)$, and $f=f^{\circ}\circ f^{*}$. Using the $n$-periodicity of $f^{*}$ and that $f^{\circ}(a)=f^{\circ}(0)+a$, for all $a$, we obtain $f(x)=(f^{\circ}\circ f^{*})(Sx+Rx)=f^{\circ}(0)+f^{*}(Sx+Rx)=f^{\circ}(0)+Sx+f^{*}(Rx)=f^{\circ}(Sx)+f^{*}(Rx)$. Since $f^{\circ}(Sx)=Sx+f^{\circ}(0)=Sx+Sf^{\circ}(0)=S(x+f^{\circ}(0))=Sf^{\circ}(Sx)$, we get that $f^{\circ}(Sx)$ is a multiple of $n$, so $Sf(x)=f^{\circ}(Sx)$ or $Sf(x)=f^{\circ}(Sx)+n$, depending on whether $f^{*}(Rx)$ is smaller than $n$ or not; note that indeed $f^{*}(Rx)\in[0,2n)_{\mathbb{Z}}$, since $0\leq Rx<0+n$ implies $0\leq f^{*}(0)\leq f^{*}(Rx)<f^{*}(0)+n<n+n=2n$, by $n$-periodicity. Therefore, $f^{\circ}(Sx)=Sf(x)=n\cdot Qf(x)$ or $f^{\circ}(Sx)=Sf(x)-n=n\cdot Qf(x)-n$; hence $Qf^{\circ}(Sx)=Qf(x)$ or $Qf^{\circ}(Sx)=Qf(x)-1$. In either case, if $x,f(x)\in\Delta$, then $Qx,Qf^{\circ}(Sx)\in\widetilde{\Delta}$.
We define the function $g$ on $\mathbb{Z}$, by $g(a)=Qf^{\circ}(an)$, for all $a\in\mathbb{Z}$; note that $g(Qx)=Qf^{\circ}(Sx)$, for all $x\in\mathbb{Z}$ (denoted by $f^{\circ}\div n$ in Example 4.8). Observe that if $x,f(x)\in\Delta$, then $Qx,g(Qx)\in\widetilde{\Delta}$; hence, since $f|_{\Delta\times\Delta}$ is non-empty so is $g|_{\widetilde{\Delta}\times\widetilde{\Delta}}$. Since $\overline{\cdot}:\widetilde{\Delta}\rightarrow\mathbb{Z}$ transfers $1$-periodicity, the counterpart of $g|_{\widetilde{\Delta}\times\widetilde{\Delta}}$ by $\overline{\cdot}$ can be extended to some $h\in F_{1}(\mathbb{Z})$. We further define the function $k\in F_{1}(\mathbb{Z})$ given by $k(x)=h(0)n+x$ (playing the role of $g_{\tilde{\Delta},d}\times n$ in Example 4.8). Note that $k\circ f^{*}\in F_{n}(\mathbb{Z})$; we will show now that $k\circ f^{*}$ extends the counterpart of $f|_{\Delta\times\Delta}$ by $e$. Assume that we have $x,f(x)\in\Delta$. We first show that $ef(x)=\overline{Qf^{\circ}(Sx)}n+f^{*}(Rx)$, i.e., $e(f^{\circ}(Sx)+f^{*}(Rx))=\overline{Qf^{\circ}(Sx)}n+f^{*}(Rx)$. When $0\leq f^{*}(Rx)<n$, then $R(f^{\circ}(Sx)+f^{*}(Rx))=Rf^{*}(Rx)=f^{*}(Rx)$ and $Q(f^{\circ}(Sx)+f^{*}(Rx))=Qf^{\circ}(Sx)$, so the equation holds. If $n\leq f^{*}(Rx)<2n$, then $0\leq f^{*}(Rx)-n<n$, so $R(f^{\circ}(Sx)+f^{*}(Rx))=f^{*}(Rx)-n$ and $\overline{Q(f^{\circ}(Sx)+f^{*}(Rx))}=\overline{Qf^{\circ}(Sx)+1}=\overline{Qf^{\circ}(Sx)}+1$, since $\overline{\cdot}$ preserves coverings, so the equation holds in this case, as well. Therefore, $e{f(x)}=e(f^{\circ}(Sx)+f^{*}(Rx))=\overline{Q{f^{\circ}(Sx)}}n+f^{*}(Rx)=\overline{g(Qx)}n+f^{*}(Rx)=h(\overline{Qx})n+f^{*}(Rx)=h(0)n+\overline{Qx}n+f^{*}(Rx)=k(0)+\overline{Qx}n+f^{*}(Rx)=k(\overline{Qx}n+f^{*}(Rx))=kf^{*}(\overline{Qx}n+Rx)=kf^{*}(ex)$. ∎ ###### Lemma 4.10. Every $n$-periodic compatible surjection (with respect to some spacing embedding) is $n$-periodic with respect to an $n$-short spacing embedding. ###### Proof. If $\varphi:\Delta_{\varepsilon}\rightarrow\mathbb{N}_{d}$ is an $n$-periodic compatible surjection for some equation $\varepsilon(x_{1},\ldots,x_{l})$, there exists a spacing embedding $e:\mathbb{N}_{d}\rightarrow\mathbb{Z}$ such that the partial functions $g_{1},\ldots,g_{l}$ given by $\varphi$ are $n$-periodic with respect to $e$. Hence, each counterpart $g_{i}^{e}$ is a partial function on $e[\mathbb{N}_{d}]\subseteq\mathbb{Z}$ that is $n$-periodic with respect to $id_{\mathbb{Z}}$, and by Lemma 3.3 can be extended to a function in $F_{n}(\mathbb{Z})$. By Lemma 4.9, there exists an $n$-short spacing embedding $e^{\prime}:e[\mathbb{N}_{d}]\rightarrow\mathbb{Z}$ that transfers $n$-periodicity, so $e^{\prime}\circ e:\mathbb{N}_{d}\rightarrow\mathbb{Z}$ is an $n$-short spacing embedding (since it has the same image as $e^{\prime}$, which is $n$-short) and each $g_{i}$ has an $n$-periodic counterpart by $e^{\prime}\circ e$ (since $e^{\prime}$ transfers $n$-periodicity). Hence, $\varphi$ is $n$-periodic with respect to the $n$-short spacing embedding $e^{\prime}\circ e$. ∎ ### 4.4 Decidability. Finally, we are ready to obtain decidability results for $\mathsf{V}(\mathbf{F}_{n}(\mathbb{Z}))$ and $\mathsf{DLP}$. ###### Lemma 4.11. An equation fails in $\mathbf{F}_{n}(\mathbb{Z})$ iff it fails in an $n$-short, $n$-periodic compatible surjection. ###### Proof. Suppose that an equation $\varepsilon$ fails in $\mathbf{F}_{n}(\mathbb{Z})$. Then, by Theorem 4.1, there exists an $n$-periodic compatible surjection $\varphi$ in which $\varepsilon$ fails.
By Lemma 4.10, there exists an $n$-short spacing embedding with respect to which $\varphi$ is also $n$-periodic. Hence $\varepsilon$ fails in an $n$-short, $n$-periodic compatible surjection. The converse follows from Lemma 3.7. ∎ ###### Theorem 4.12. The equational theory of the variety $\mathsf{V}(\mathbf{F}_{n}(\mathbb{Z}))$ is decidable. ###### Proof. An equation $\varepsilon$ fails in $\mathsf{V}(\mathbf{F}_{n}(\mathbb{Z}))$ iff it fails in $\mathbf{F}_{n}(\mathbb{Z})$ iff (by Lemma 4.11) $\varepsilon$ fails in an $n$-short, $n$-periodic compatible surjection. There exist finitely many compatible surjections with domain $\Delta_{\varepsilon}$ (since all of them have a range of the form $\mathbb{N}_{q}$, where $q\leq|\Delta_{\varepsilon}|$). Also, for each such compatible surjection, there are finitely many $n$-short spacing embeddings under which the compatible surjection is $n$-periodic, as the image of each $n$-short spacing embedding is contained in $\mathbb{N}_{d}$, where $d\leq\nu(q)$. Therefore, by checking all of these finitely many situations we obtain an algorithm that decides every equation. ∎ Theorem 4.12 and Theorem 3.8 yield the following result, which provides a proof of the decidability of $\mathsf{DLP}$ that is different from the one given in [11]. ###### Corollary 4.13. The equational theory of the variety $\mathsf{DLP}$ is decidable. ## 5 Generation and decidability for $\mathsf{LP_{n}}$. In this section we will prove that, for all $n$, $\mathsf{LP_{n}}=\mathsf{V}(\mathbf{F}_{n}(\mathbb{Q}\overrightarrow{\times}\mathbb{Z}))$ and that the equational theory of $\mathsf{LP_{n}}$ is decidable. A lot of the detailed work we did in the previous sections will come in handy. As the failure of an equation in an ($n$-periodic) diagram corresponds to the failure in $\mathbf{F}_{n}(\mathbb{Z})$ and the latter does not generate $\mathsf{LP_{n}}$, a new notion of diagram is required for $\mathbf{F}_{n}(\mathbf{J}\overrightarrow{\times}\mathbb{Z})$, capturing the natural partition induced by the lexicographic product ordering on the chain. For $\Delta\subseteq J\times\mathbb{Z}$, where $J$ is a set, we define the equivalence relation $\equiv$ on $\Delta$ by $(i_{1},x)\equiv(i_{2},y)$ iff $i_{1}=i_{2}$; note that this generalizes to subsets the equivalence relation we defined on $J\times\mathbb{Z}$ in Section 2.3. A _partition diagram_ consists of a diagram of the form $(\mathbf{J\overrightarrow{\times}\Delta},\Yleft,g_{1},\ldots,g_{s})$, where $\mathbf{J}$ is a chain and $\mathbf{\Delta}$ is a finite subchain of $\mathbb{Z}$ such that $x\Yleft y\Rightarrow x\equiv y$ and $x\equiv y$ iff $g_{i}(x)\equiv g_{i}(y)$, for all $i\in\\{1,\ldots,s\\}$ and $x,y\in Dom(g_{i})$, together with the exact partition induced by $\mathbf{J}$ and $\mathbf{\Delta}$; so if $\mathbf{J\overrightarrow{\times}\Delta}$ is partitioned differently then we get a different partition diagram. For each partial function $g$ of the diagram we define the partial function $\widetilde{g}:J\rightarrow J$ where $\widetilde{g}(j)=k$ iff there exist $x,y\in\mathbb{Z}$ such that $(j,x)\in Dom(g)$ and $g(j,x)=(k,y)$. Note that $\widetilde{g}$ is well-defined because if $g(j,x_{1})=(k_{1},y_{1})$ and $g(j,x_{2})=(k_{2},y_{2})$, then $(j,x_{1})\equiv(j,x_{2})$ implies $(k_{1},y_{1})=g(j,x_{1})\equiv g(j,x_{2})=(k_{2},y_{2})$, hence $k_{1}=k_{2}$; moreover, $\widetilde{g}$ is one-to-one.
We also define $\overline{g_{j}}:\Delta\rightarrow\\{j\\}\times\Delta\stackrel{{\scriptstyle g}}{{\rightarrow}}\\{\tilde{g}(j)\\}\times\Delta\rightarrow\Delta$, for every $j\in J$, where the first and last maps are the obvious bijections. We say that an equation _fails_ in a partition diagram, if it fails in the underlying diagram. A partition diagram $(\mathbf{J}\overrightarrow{\times}\mathbf{\Delta},\Yleft,g_{1},\ldots,g_{s})$ is said to be _$n$-periodic_ if there exists a spacing embedding $e$ on $\mathbf{\Delta}$ such that $\overline{g}_{j}$ is $n$-periodic with respect to $e$, for all $j\in J$ and $g\in\\{g_{1},\ldots,g_{s}\\}$. An $n$-periodic partition diagram is said to be _$n$-short_ if the spacing embedding witnessing its $n$-periodicity is $n$-short.
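For readers who want a concrete handle on the algebra $\mathbf{F}_{n}(\mathbb{Z})$ that drives Section 4 and the wreath-product analysis here, the following minimal Python sketch is purely illustrative and not part of the development above: the representation of an $n$-periodic function by its values on one period, together with all helper names, are our own choices. It encodes $f\in F_{n}(\mathbb{Z})$ via $f(x+n)=f(x)+n$ and checks numerically the decomposition $f=f^{\circ}\circ f^{*}$ of Lemma 4.7, with $0\leq f^{*}(0)<n$, on a small example.

```python
# Illustrative sketch (not from the text): n-periodic functions on Z,
# represented by their values on one period, and the decomposition
# f = f_circ o f_star of Lemma 4.7.

def make_periodic(n, vals):
    """Return f with f(i) = vals[i] for 0 <= i < n and f(x + n) = f(x) + n.
    Order preservation needs vals[0] <= ... <= vals[n-1] <= vals[0] + n."""
    assert len(vals) == n
    assert all(vals[i] <= vals[i + 1] for i in range(n - 1))
    assert vals[-1] <= vals[0] + n
    return lambda x: vals[x % n] + n * (x // n)

def decompose(n, f):
    """Split f into f_circ (translation by a multiple of n, so in F_1(Z))
    and f_star (n-periodic with 0 <= f_star(0) < n), as in Lemma 4.7."""
    shift = n * (f(0) // n)                  # this is S f(0)
    f_circ = lambda x: x + shift
    f_star = lambda x: f(x) - shift
    return f_circ, f_star

if __name__ == "__main__":
    n = 2
    f = make_periodic(n, [5, 6])             # f(0)=5, f(1)=6, f(2)=7, ...
    f_circ, f_star = decompose(n, f)
    for x in range(-4, 8):
        assert f(x) == f_circ(f_star(x))     # f = f_circ o f_star
    assert 0 <= f_star(0) < n                # the 'short' component
    print([f(x) for x in range(-2, 6)], [f_star(x) for x in range(-2, 6)])
```

Any order-preserving choice of values within one period gives such a function, so the same numerical check can be repeated for other examples.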
# Framework for Learning and Control in the Classical and Quantum Domains Seyed Shakib Vedaie These authors contributed equally. <EMAIL_ADDRESS>Institute for Quantum Science and Technology, University of Calgary, Calgary, Alberta, Canada T2N 1N4 Archismita Dalal These authors contributed equally. <EMAIL_ADDRESS>Institute for Quantum Science and Technology, University of Calgary, Calgary, Alberta, Canada T2N 1N4 Eduardo J. Páez<EMAIL_ADDRESS>Institute for Quantum Science and Technology, University of Calgary, Calgary, Alberta, Canada T2N 1N4 Barry C. Sanders<EMAIL_ADDRESS>Institute for Quantum Science and Technology, University of Calgary, Calgary, Alberta, Canada T2N 1N4 ###### Abstract Control and learning are key to technological advancement, both in the classical and quantum domains, yet their interrelationship is insufficiently clear in the literature, especially between classical and quantum definitions of control and learning. We construct a framework that formally relates learning and control, both classical and quantum, to each other, with this formalism showing how learning can aid control. Furthermore, our framework helps to identify interesting unsolved problems in the nexus of classical and quantum control and learning and helps in choosing tools to solve problems. As a use case, we cast the well-studied problem of adaptive quantum-enhanced interferometric-phase estimation as a supervised learning problem for devising feasible control policies. Our unification of these fields relies on diagrammatically representing the state of knowledge, which elegantly summarizes existing knowledge and exposes knowledge gaps. ## I Introduction Closed-loop control aims to regulate a system of devices to produce a desired output autonomously [1, 2], and machine learning (ML) is about improving a procedure autonomously through experience [3]. ML can be subdivided into reinforcement learning (RL) and semi-supervised learning (SSL), with supervised learning (SL) and unsupervised learning (UL) the extreme cases [4]. Both control and learning are vital to advancing technology and are interrelated: learning can assist with enhancing control (such as the field of ‘machine-learning control’) [5], and control tools could be used for improving learning [6]. Adding to the richness of this field, both control and learning can be cast in a classical (i.e., non-quantum) context [1, 7] and in a quantum context [8, 9, 10, 11]. These fields are rapidly developing with concomitant problems of inconsistent terminology, disparities between classical and quantum versions, and currently less-than-clear relations between these four topics: classical and quantum control and learning, i.e., four areas of classical control ($\mathscr{C}$C), quantum control ($\mathscr{Q}$C), classical learning ($\mathscr{C}$L) and quantum learning ($\mathscr{Q}$L). To be succinct, we refer to these topics as ‘learning for control’, and our aim is to construct a unified framework connecting these topics, which we call our learning-for-control framework (LfC). To this end, we summarise the state of the art in connecting these topics and show the value of our LfC framework by how it reveals knowledge gaps and by showing how applying SL to quantum-enhanced metrology [12, 13] reveals a procedure for performing this task autonomously.
We begin by summarizing the state of the art, and we represent this state of the art by constructing a knowledge graph [14] whose vertices are current topic areas, and both directed and undirected edges show connections between topic areas. Furthermore, knowledge graph edges are labelled by the actual references. This knowledge graph is particularly useful to convey not just the state of the art but also knowledge gaps. Next, we unify $\mathscr{C}$C and $\mathscr{Q}$C. Although both of these topics are focused on control, specifically closed-loop control, the literature for $\mathscr{Q}$C differs, even in its basic construction, from the literature for $\mathscr{C}$C. $\mathscr{C}$C is founded on concepts such as a controller, the controller’s policy for controlling a plant, a reference used to achieve a particular output state, and feedback [1]. $\mathscr{Q}$C typically focuses on techniques to optimise quantum systems by controlling coefficients in a Hamiltonian or open-system evolution [8, 9]. We formulate a new version of $\mathscr{Q}$C that essentially quantises $\mathscr{C}$C, which is distinct from our earlier approach of defining $\mathscr{Q}$C independent of $\mathscr{C}$C [15]. Building on this $\mathscr{C}$C-$\mathscr{Q}$C unification, we explore unifying $\mathscr{C}$L and $\mathscr{Q}$L in the same way. Our work sets the stage for this unification by revisiting foundational aspects of $\mathscr{C}$L and by considering whether the basic objects therein can be treated mathematically. Specifically, we articulate desiderata for $\mathscr{Q}$L based on extending $\mathscr{C}$L concepts and based on unification commensurate with our $\mathscr{C}$C-$\mathscr{Q}$C analysis. Our next step is to extend the idea of learning for control [16] to the quantum domain. This extension involves allowing for quantum channels connecting the controller, plant and learner. Furthermore, the no-cloning principle of quantum mechanics [17] forbids the same feedback from being sent, accurately and deterministically, to both learner and controller, which we account for in our framework. We base our approach on Fu’s framework for learning control systems [16]. We explicitly separate teacher, user, learner and controller in our framework, as distinct from Fu’s approach, which equates controller and learner and does not include ‘user’ in the picture (and is hence incomplete). In our LfC framework, the teacher implements the process of learning for control for RL and SSL, which differs from Fu’s approach, where the teacher is only present for the SL case. We demonstrate the utility of our LfC framework in casting adaptive quantum-enhanced interferometric-phase estimation (A$\mathscr{Q}$P) [12, 13] as an SL problem. The benefit of introducing SL into this framework is that new circumstances—such as modified working conditions—can be accommodated by using the SL model to predict new possibilities rather than having to optimise for each case, which is intrinsically harder. A$\mathscr{Q}$P is a form of quantum-enhanced metrology, useful for enhancing quantum clocks [18] and interferometric position-shift measurements [19, 20, 21] inter alia, but with feedback incorporated, yielding the advantage that only single-particle measurements are required, rather than the joint measurements needed if adaptive feedback methods are not used. Beyond the practical application of using our framework for methodically applying learning to control, our work is interesting on a philosophical level.
Does control theory make sense for a quantum controller, for example, a controller with a quantum computer and with quantum information coming to or leaving the controller? Could the controller prepare the plant in a superposition of states or be entangled with the plant? Could the controller have a superposition of policies? We do not have the answers to such questions, but one value of our work is that such questions arise very clearly from our framework. Our paper continues in §II with a review of the key concepts in ML, control theory, unification of classical and quantum mechanics, and learning for A$\mathscr{Q}$P. We then elaborate on how we construct our LfC framework in §III. Based on our literature survey and the key concepts established in §III, we present our knowledge graph in §IV. Next, in §V, we describe the application of our framework to A$\mathscr{Q}$P. Finally, we discuss our results in §VI and conclude in §VII with a summary of our work and an outlook. As we employ many abbreviations, we summarise these abbreviations and their full expressions in Table 1 for the convenience of the reader.

Abbreviation | Description
---|---
ML | Machine learning
$\mathscr{C}$C | Classical control
$\mathscr{Q}$C | Quantum control
$\mathscr{C}$L | Classical learning
$\mathscr{Q}$L | Quantum learning
SL | Supervised learning
UL | Unsupervised learning
RL | Reinforcement learning
SSL | Semi-supervised learning
QML | Quantum machine learning
PAC | Probably approximately correct
POVM | Positive operator-valued measure
LfC | Learning-for-control
A$\mathscr{Q}$P | Adaptive quantum-enhanced interferometric-phase estimation

Table 1: Glossary ## II Background In this section, we give a review of the concepts of $\mathscr{C}$C and ML, along with their quantum counterparts. We further provide a summary of the concepts of learning for control, both in classical and quantum domains. The essential background on the $\mathscr{Q}$C technique of A$\mathscr{Q}$P is also provided. ### II.1 Machine learning ML is proving to be highly valuable due to its capacity for making predictions based on experience. We begin by discussing the framework for ML. Then we briefly explain classical ML, which includes the widely used modes of SL and UL as well as RL. Finally, we discuss relevant definitions and notions of quantum ML (QML). #### II.1.1 Framework for machine learning Now we discuss the essentials of ML. First, we define ML and then differentiate learning from optimisation. Finally, we explain the classifications of an ML task. We begin by establishing the concept of ML and discussing its extension to the quantum case. We adopt and formalize Mitchell’s definition of ML. ###### Quotation 1 (Mitchell [3]). A computer program is said to learn from experience $\mathscr{E}$ with respect to some class of tasks $\mathscr{T}$ and performance measure $\mathscr{P}$, if its performance at tasks in $\mathscr{T}$, as measured by $\mathscr{P}$, improves with experience $\mathscr{E}$. Russell and Norvig formalize the concept of a computer program in the context of ML by introducing the notion of an agent. ###### Quotation 2 (Russell & Norvig [4]). An agent is just something that acts …. Of course, all computer programs do something, but computer agents are expected to do more: operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals.
Quotations 1 and 2 clearly highlight the essential components of an ML implementation, namely learning agent $\mathscr{A}$, $\mathscr{E}$, $\mathscr{T}$ and $\mathscr{P}$, which we regard as essential constructs for QML as well. A model is a mathematical representation of the data, specifically, a formula or algorithm that labels the data in a probably approximately correct (PAC) way. The goal of ML is to search for a model that optimises the performance of a learning agent [3, 22]. Various notions of representation exist, so here we give our definition, pertinent throughout this paper. ###### Definition 1. A representation of a function $f(x)$ is $\tilde{f}_{d}(x)$, with $d$ specifying the size (e.g., number of bits) and $\|f-\tilde{f}_{d}\|$ the distance between the function and its representation with respect to the norm on function space. ###### Example 1. A truncated Taylor expansion [23] is one example of a representation. ###### Example 2. Another example of a representation of a function is a cumulant expansion [24] with the sequence of cumulants denoted $\bm{\varpi}:=\left(\kappa_{1},\kappa_{2},\kappa_{3},\ldots\right),$ (1) for $\kappa_{1}$ the mean, $\kappa_{2}$ the variance and $\kappa_{3}$ the skewness. ML is classified based on the nature of $\mathscr{E}$. For example, we can consider what we call the structure of $\mathscr{E}$—labelled or unlabelled features as one pair of cases or state-and-reward as another case—as a way of classifying; alternatively, we can consider whether all $\\{\mathscr{E}\\}$ is provided up-front or arrives simultaneously with the ML process. ML can be categorised into three major paradigms based on the structure of $\mathscr{E}$, namely SL for $\mathscr{E}$ being labelled features, UL for unlabelled features and RL for $\mathscr{E}$ being state-and-reward [25]. If all $\\{\mathscr{E}\\}$ is provided up-front, ML is offline; ML is online if $\mathscr{E}$ involves input data from the system in real time while learning. Figure 1: Machine learning classification. a) An example of the supervised learning paradigm showing a decision boundary that classifies input data. b) An example of the unsupervised learning paradigm that represents the clustering of input data. c) A typical workflow in the reinforcement learning paradigm that demonstrates how learning proceeds via interactions between an agent and its environment. Every ML algorithm comprises three components—representation, evaluation and optimisation [26]. The representation component concerns the hypothesis space of the learner [3, 22]. The evaluation component concerns the objective function for learning and the learning performance. Finally, the optimisation component concerns the methods to search for a model in the hypothesis space that maximizes the learning performance. Thus, optimisation adds value to all ML problems by enhancing ML, but optimisation is not learning in itself. #### II.1.2 Machine learning paradigms Now we elaborate on three major paradigms of ML and discuss a typical ML workflow. We begin by summarizing the ML pipeline, which entails four steps. These four steps are pre-processing raw data into a suitable data set, followed by the calibration step, then the training step, and finally the testing step. We then discuss SL and UL briefly yet sufficiently to establish their mutual differences and the comparative difference with RL, which we elaborate on at the end.
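As a rough illustration of these four steps (and of the data split and loss function formalised in what follows), here is a toy Python sketch; the synthetic data, the ridge-regression model, the single validation split and the hyperparameter grid are our own illustrative choices rather than anything prescribed in this paper.

```python
# Toy sketch of the pre-processing -> calibrating -> training -> testing
# pipeline described in this subsection.  The data and model choices are
# illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

# "Raw data": one informative feature, one irrelevant constant column, noisy labels.
X_raw = np.column_stack([rng.normal(size=200), np.ones(200)])
y = 3.0 * X_raw[:, 0] + rng.normal(scale=0.1, size=200)

# 1. Pre-processing: feature selection and standardisation.
X = (X_raw[:, :1] - X_raw[:, :1].mean()) / X_raw[:, :1].std()

# Split D into D_model and D_test (cf. the disjoint union in the text).
idx = rng.permutation(len(y))
model_idx, test_idx = idx[:150], idx[150:]

def fit_ridge(X, y, lam):
    # Closed-form ridge regression: w = (X^T X + lam I)^{-1} X^T y.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def squared_loss(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# 2. Calibrating: tune the hyperparameter lam on a held-out part of D_model
#    (a single validation split here, rather than the repeated resampling
#    described in the text).
best_lam, best_loss = None, np.inf
for lam in [0.01, 0.1, 1.0, 10.0]:
    tr, va = model_idx[:100], model_idx[100:]
    loss = squared_loss(fit_ridge(X[tr], y[tr], lam), X[va], y[va])
    if loss < best_loss:
        best_lam, best_loss = lam, loss

# 3. Training: refit on all of D_model with the calibrated hyperparameter.
w = fit_ridge(X[model_idx], y[model_idx], best_lam)

# 4. Testing: assess the trained model once on D_test.
print("lambda =", best_lam, "test loss =", squared_loss(w, X[test_idx], y[test_idx]))
```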
We describe an ML pipeline as an iterative workflow with, typically, four steps, namely $\text{pre-processing}\rightarrow\text{calibrating}\rightarrow\text{training}\rightarrow\text{testing}.$ (2) The workflow begins with the data pre-processing step, where raw data is manipulated to construct a data set $\mathscr{D}$ that is suitable for ML. The pre-processing step involves several sub-tasks such as data cleansing, feature extraction and feature selection [27]. The pre-processed data set is then divided into the disjoint union of a calibrating and training set $\mathscr{D}_{\text{model}}$ of size $\mathscr{M}$ and a testing set $\mathscr{D}_{\text{test}}$ of size $\mathscr{M}^{\prime}$ as $\mathscr{D}=\mathscr{D}_{\text{model}}\sqcup\mathscr{D}_{\text{test}},$ (3) such that $\mathscr{M}+\mathscr{M}^{\prime}=\mathscr{N}=\left|\mathscr{D}\right|.$ (4) In the calibration step, the hyperparameters of the model are tuned in an iterative way [28]. For each tuple of hyperparameters, a model is trained on a randomly-sampled subset $\mathscr{D}_{\text{train}}$ of $\mathscr{D}_{\text{model}}$, and the model’s performance is evaluated on the remaining data set. A mean performance, corresponding to each tuple of hyperparameters, is then calculated by repeating these two sub-steps for different $\mathscr{D}_{\text{train}}$. After repeating this process of calculating mean performance for all possible tuples of hyperparameters, the calibration step returns the tuple corresponding to the best model performance. In the training step, the model data set $\mathscr{D}_{\text{model}}$, along with the hyperparameters returned from the calibration step, is used to construct an ML model. Finally, the model, with selected parameters and hyperparameters, is assessed on $\mathscr{D}_{\text{test}}$. The model passes or fails at the test step; if the model passes, this model is then used on real data. In the SL paradigm [4], using Mitchell’s terminology [3], $\mathscr{E}$ is provided to $\mathscr{A}$ as a labelled data set $\mathscr{D}$ of size $\mathscr{N}$. With each labelled datum being a tuple of an $F$-dimensional feature vector $\bm{x}_{i}$ and its corresponding label $y_{i}$, we can formally state the data set, according to a convenient formalism [29], as $\mathscr{D}=\\{(\bm{x}_{i},y_{i})\mid i\in[\mathscr{N}]:=\\{0,\ldots,\mathscr{N}-1\\}\\}\subset\mathds{R}^{F}\times\mathds{R}.$ (5) Theoretically, we allow real-valued entries, but computationally, we approximate by floating-point numbers up to machine precision. $\mathscr{A}$ then devises a labelling map $f:\mathds{R}^{F}\to\mathds{R}:\bm{x}\mapsto\tilde{y},$ (6) where $\bm{x}$ is an unseen feature vector and $\tilde{y}$ is its corresponding predicted label. Labelling is not guaranteed to be correct every time but rather is probably approximately correct [30, 31]. Denoting the set of all labels as $Y$, the fitness of $f$ is quantified by a ‘loss function’ $L:Y\times Y\to\mathds{R}^{+},$ (7) that measures the difference between $y$ and $\tilde{y}$. Common examples of loss functions include the absolute and the squared losses [32]. SL tasks are further sub-classified into two types: classification and regression [22]. A classification task is defined for discrete labels, whereas regression is pertinent for the case of continuous labels. In the case of UL, the agent has access to unlabelled input data $\\{\bm{x}_{i}\\}$ but not to labels $\\{y_{i}\\}$.
The task of a UL agent is to recognise hidden patterns in the data set and to use these patterns to cluster these data together into subsets [22]. In practical applications, SL and UL can be combined to form SSL [22]. SSL is especially useful for tasks involving training data with few input-label pairs and thus a large number of input-only data. RL is quite different from SL and UL: instead of having labelled or unlabelled input data, respectively, RL involves an agent, states of the environment, actions and rewards. These notions are subtle and inconsistently defined so we continue our practice of relying on verbatim quotes from respected sources to define these entities. The environment plays a key role for RL, with the environment described in the following quote. ###### Quotation 3 (Russell & Norvig [4]). The environment could be everything—the entire universe! In practice it is just that part of the universe whose state we care about when designing this agent—the part that affects what the agent perceives and that is affected by the agent’s actions. In RL, the agent performs actions on the environment that are intended to yield the best outcome, and the agent is rewarded accordingly for good actions. The agent acquires information from the environment and acts on this information to change the environment. The nature of the agent is explained in the following quotation. ###### Quotation 4 (Russell & Norvig [4]). An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. To be clear on how actuators and sensors are defined, we provide the following quotations. ###### Quotation 5 (Dorf & Bishop [1]). An actuator is a device employed by the control system to alter or adjust the environment. ###### Quotation 6 (Dorf & Bishop [1]). A sensor is a device that provides a measurement of a desired external signal. The agent is not just reacting to the environment to modify its state but rather learns how to improve the environment as Sutton and Barto explain. ###### Quotation 7 (Sutton & Barto [25]). Reinforcement learning is learning what to do—how to map situations to actions—so as to maximise a numerical reward signal. Russell and Norvig elaborate on the agent’s nature. ###### Quotation 8 (Russell & Norvig [4]). [A]gents are expected to … operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals. A rational agent is one that acts so as to achieve the best …expected outcome. Sutton and Barto articulate the “prolonged time period” in Quotation 8 as the agent “maximizing not the immediate reward, but cumulative reward in the long run” [25]. The “reward signal” in Quotation 7 and the “best outcome” in Quotation 8 rely on an agent whom we regard as hidden. This reward, or “reward signal” [25], which formalises “the purpose or goal of the agent” [25] and is passed “from the environment to the agent” [25], has an origin that transcends the agent and the environment as two entities. The reward is vitally important to “formaliz[ing] the idea of a goal” [25] and seems to demand that a third party come into play. Sutton and Barto say: ###### Quotation 9 (Sutton & Barto [25]). The reward signal is your way of communicating to the agent what you want achieved. What does “your way of communicating to the agent” mean? This statement implies subjectivity by another agent, namely you. 
The reward thus represents the “creator” of the problem who imbues subjective values on how actions should be rewarded. ###### Quotation 10 (Sutton & Barto [25]). Our focus is on reinforcement learning methods that learn while interacting with the environment, which evolutionary methods do not do. …Evolutionary methods ignore much of the useful structure of the reinforcement learning problem: they do not use the fact that the policy they are searching for is a function from states to actions; they do not notice which states an individual passes through during its lifetime, or which actions it selects. …Although evolution and learning share many features and naturally collaborate, we do not consider evolutionary methods by themselves to be especially well suited to reinforcement learning problems … #### II.1.3 Online versus offline learning ML paradigms can be employed in both online and offline settings. First, we define the concepts of online and offline learning. Then, we treat the concept of RL in both online and offline settings separately as RL inherently assumes online interaction with an environment. We begin by explaining offline ML, which is applied for the case that all $\mathscr{E}$ is available before ML starts. Offline ML then searches for a feasible model by learning the parameters of the model by using the entire training data set for $\mathscr{E}$, which is fully available at the outset. In contrast to offline ML, online ML is employed while the training set of $\mathscr{E}$ becomes available; ideally, online ML acts immediately as each element of $\mathscr{E}$ is received rather than storing elements [33]. For example, in online SL, $\mathscr{A}$ has to predict the label of the next instance based on the labels of the previous instances that $\mathscr{A}$ has already seen. Online learning is helpful if training over the entire data set is computationally infeasible or the algorithm must adapt to new patterns in the data dynamically. The following quote conveys the distinction between offline and online learning. ###### Quotation 11 (Ben-David, Kushilevitz & Mansour [33]). The difference between the models is that, while in the on-line model only the set of possible elements is known, in the off-line model the sequence of elements (i.e., the identity of the elements as well as the order in which they are to be presented) is known to the learner in advance. Now we consider online vs. offline RL. Per the construction of RL, which involves $\mathscr{A}$ interacting with the environment discussed in 3, RL is implicitly online. However, real-time collection of experience via interaction with an environment could be expensive or dangerous [34, 35]. To avoid such problems, offline RL is an important topic. Offline RL accommodates agents who utilize data from previously collected experience, which could have been obtained from simulation or for a safe setting. In contrast to offline SL and UL, which are well posed and widely used, reconciling RL, which is implicitly online, with offline methods is a work in progress, as emphasised in the following quote. ###### Quotation 12 (Levine, Kumar, Tucker & Fu [34]). [O]ffline RL is, at its core, a counter-factual inference problem: given data that resulted from a given set of decisions, infer the consequence of a different set of decisions. Such problems are known to be exceptionally challenging in machine learning, because they require us to step outside of the commonly used i.i.d. 
framework, which assumes that test-time queries involve the same distribution as the one that produced the training data. #### II.1.4 Quantum machine learning We review definitions of QML as stated explicitly in the literature. Then we explain the concept of QML as gleaned from these somewhat disparate definitions. Finally, we provide some examples of QML. A few comprehensive review articles [11, 36, 37] and textbooks [38, 39] provide various subjective definitions of QML. Unlike classical ML, a compact and unanimous definition for QML in terms of the learning components $\mathscr{A}$, $\mathscr{E}$, $\mathscr{P}$ and $\mathscr{T}$ (in Quotation 1) is lacking. As essential background, we provide key quotations from some of the most notable literature in this field. In an early review paper, a functional definition of QML, without hinting at the prospect of “quantum advantage” [40, 41, 42, 37], follows [43]. ###### Quotation 13 (Schuld, Sinayskiy & Petruccione [43]). In quantum machine learning, quantum algorithms are developed to solve typical problems of machine learning using the efficiency of quantum computing. This is usually done by adapting classical algorithms or their expensive subroutines to run on a potential quantum computer. More recently, QML definitions embed the concept of quantum advantage. QML is defined in a well-cited survey paper as follows. ###### Quotation 14 (Biamonte et al. [11]). The field of quantum machine learning explores how to devise and implement quantum software that could enable machine learning that is faster than that of classical computers. In a recent perspective article on this field, Wiebe defines QML from a data-science and computer-science viewpoint as follows. ###### Quotation 15 (Wiebe [44]). Quantum ML involves using a quantum device to solve a machine learning task with greater speed or accuracy than its classical analogue would allow. Thus, we see that QML is defined in three ways: by its utility as in Quotation 13, by the aspiration of QML as in Quotation 14 and by the QML device as in Quotation 15. Notable examples of QML are varied. One example is a quantum support-vector machine, where a quantum matrix-inversion algorithm solves the system of equations that appears in a least-squares formulation of the support-vector machine [45]. Another example of QML is the analogue-quantum kitchen sink, which quantises the classical random kitchen sinks algorithm. The classical random kitchen sinks algorithm estimates a kernel by using randomised features and can be enhanced by employing adiabatic quantum evolution to yield the requisite randomness [46]. A third example is quantum-enhanced RL, for which quantum speedups in the agent’s decision-making process and learning times have been claimed [47, 48]. ### II.2 Control System In this subsection, we summarise studies of control. We begin with a summary of $\mathscr{C}$C. Then we discuss the topic of $\mathscr{Q}$C. Finally, we summarise the status of applying ML to control. #### II.2.1 Classical control system Now we present the essentials of $\mathscr{C}$C. First, we discuss the task of a control system. Then we present a schematic for $\mathscr{C}$C that represents the essential elements and their connections. Finally, we discuss the nature of a control policy and how this policy is devised. A control task involves steering specific controllable degrees of freedom of a physical system such that its dynamics yields the desired observations within a required tolerance.
This task is achieved by a control system, which can be defined, for example, in one of the following two ways. ###### Quotation 16 (Dorf & Bishop [1]). A control system is an interconnection of components forming a system configuration that will provide a desired system response. An alternative definition is the following. ###### Quotation 17 (Rosolia et al. [2]). A control system is a device in which sensed quantities are used to generate an autonomous behaviour through computation and actuation. Control systems are classified into closed-loop and open-loop control according to whether control depends on feedback or not, respectively. ###### Quotation 18 (Dorf & Bishop [1]). A closed-loop system uses a measurement of the output signal and a comparison with the desired output to generate an error signal that is used by the controller to adjust the actuator. We represent a generic closed-loop control system as a block diagram in Fig. 2. Control systems consist of two main parts: the controlled object and the controller. Our block diagram explains the essential components of closed-loop control. Figure 2: A block diagram of a closed-loop $\mathscr{C}$C system, where the arrows between two blocks represent the flow of classical information. A plant P, comprising the physical system, actuators and sensors, is controlled by a controller C who sends a control signal u to P. The filled circle beside P represents a “switching” operation that either feeds P’s output back into the control system as a signal y or passes it on as the ultimate output signal z, which typically contains the termination signal and an observable representing the final state of P. The switching operation in $\mathscr{C}$C is implemented as a fan-out. An evaluator E estimates an error signal d between y and the reference signal r using an evaluation function $\varepsilon_{\text{r}}$. C computes u based on d and a control policy $\varrho_{\text{r}}$, which is designed based on r. In a setting where y=$\emptyset$, one obtains an open-loop control system. ###### Remark 1. The sets of all actuator data, all possible plant output and all error signals are $U:=\{u\},\,Y:=\{y\},\,D:=\{d\},$ (8) with each element $u$ being distinct from all other elements in $U$ and, similarly, each $y$ being distinct from all other elements in $Y$. Moreover, the sets $U$, $Y$ and $D$ have constant cardinalities $|U|$, $|Y|$ and $|D|$, respectively, over successive feedback loops. ###### Remark 2. In the classical case, each channel is labelled by a letter such as u (for actuator data), written in upright (e.g., Roman) font, with its value in some instance given by (slanted, or italic, font) $u\in U$, with (capital letter) $U$ being the set of all possible (control) values. The probability vector for the state of u is $\bm{p}^{\text{u}}:=\left(p^{\text{u}}_{u}\right)$ (9) with the right-hand side being the sequence of probability-vector components, which sums to $1$. ###### Remark 3. The quantum state of a channel, such as u, is given by a density operator $\rho^{\text{u}}$ [17], which is a positive semi-definite trace-class operator on Hilbert space, and a classical probability distribution is recovered by diagonalising $\rho^{\text{u}}$ [49]. A policy is a set of instructions that determine the control parameters, and hence the effectiveness of the control scheme. In closed-loop feedback, the controller executes a policy $\varrho:\mathbb{Z}\times D\to U:(r,d)\mapsto u,$ (10) with $r$ representing the value of the reference channel r.
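The following minimal Python sketch (ours, not from the source; the first-order plant, the proportional policy and the gain are illustrative assumptions) wires together the blocks of Fig. 2: the evaluator E produces the error signal $d$, the controller C executes a policy in the spirit of Eq. (10), and the plant P responds to the actuator signal $u$.

```python
# A minimal sketch (not from the source) of one closed feedback loop per Fig. 2.
def plant_step(state, u):
    """P: one time step of a first-order discrete-time system driven by u."""
    return 0.9 * state + 0.5 * u

def evaluator(y, r):
    """E: error signal d between the sensed output y and the reference r."""
    return r - y

def policy(r, d, gain=1.2):
    """C: a simple proportional control policy rho_r(d) -> u."""
    return gain * d

r, y = 1.0, 0.0                       # reference and initial plant output
for _ in range(30):                   # closed loop: E -> C -> P -> E -> ...
    d = evaluator(y, r)
    u = policy(r, d)
    y = plant_step(y, u)
print("final output y =", round(y, 3), "for reference r =", r)
# A purely proportional policy leaves a steady-state offset; designing or
# learning better policies is exactly the concern of the following discussion.
```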
Contrariwise, in an open-loop control scheme, where y is either not available or is unnecessary, C steers P to obtain z without ever receiving any feedback from P. Now we explain two standard methods to construct $\varrho$, namely conventional model-based control (MBC) methods and alternative data-driven control (DDC) methods. ###### Quotation 19 (Hou & Wang [50]). Data-driven control includes all control theories and methods in which the controller is designed by directly using on-line or off-line I/O data of the controlled system or knowledge from the data processing but not any explicit information from mathematical model of the controlled process, and whose stability, convergence, and robustness can be guaranteed by rigorous mathematical analysis under certain reasonable assumptions. MBC proceeds by modelling the plant either using first principles or identification from data [50] and assumes a trusted mathematical model governing plant dynamics with bounded modelling uncertainty. The plant’s model is represented by the map $\varsigma:U\to Y:u\mapsto y$ (11) with resources (e.g., fresh water, battery energy, oxygen) implicitly consumed through this mapping as elements of $R$ are not typically stated in standard references. Then $\varrho$ is devised based on $\varsigma$ with the belief that the model is a trustworthy approximation of the true system. If $\varsigma$ (11) is either unknown or not readily solved analytically, then DDC methods can help. For DDC, $\varrho$ is exclusively devised based on input-output data taken from P and thus lacks systematic design-and-analysis tools. #### II.2.2 Quantum control system Now we discuss $\mathscr{Q}$C. First, we review disparate definitions of $\mathscr{Q}$C as stated explicitly in the literature. Then, we explain the concept of $\mathscr{Q}$C extracted from these somewhat disparate definitions. In contrast to $\mathscr{C}$C, a concise definition of $\mathscr{Q}$C is lacking. Therefore, we provide two quotations from notable literature on $\mathscr{Q}$C. ###### Quotation 20 (Walmsley & Rabitz [51]). Quantum control refers to active intervention in a system’s dynamics to maximise the probability, based on a given metric, that the system evolves toward a desired target state. ###### Quotation 21 (Lloyd [52]). In the conventional picture of quantum feedback control, sensors perform measurements on the system, a classical controller processes the results of the measurements, and actuators supply semiclassical potentials to alter the behaviour of the quantum system. Although the two quotations appear to be different, these quotations are compatible in that the probability in Quotation 20 is inferred from sampling by sensors in Quotation 21 and the active intervention in Quotation 20 is elaborated as classical control and altering semi-classical potentials in Quotation 21. Neither of these quotations is sufficiently clear on what components of a $\mathscr{Q}$C system are classical or quantum. Now we explain the techniques for constructing $\varrho$ in $\mathscr{Q}$C, which have traditionally taken an MBC approach. The standard practice in $\mathscr{Q}$C is to construct a first-principle model $\varsigma$ that describes the evolution of P over time, with $u$ manipulating one or more coefficients of the Hamiltonian describing P [53, 54, 10]. Then $\varrho$ is devised based on $\varsigma$ via gradient-based greedy algorithms. 
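As a concrete instance of the gradient-based greedy approach just mentioned, the following Python sketch (ours, not from the source; the single-qubit model, the pulse discretisation, the step sizes and the iteration count are illustrative assumptions) performs finite-difference gradient ascent on the fidelity of a piecewise-constant control pulse computed from a first-principles model of the plant.

```python
# A minimal sketch (not from the source) of model-based, greedy gradient-ascent
# pulse design: the modelled plant is a single qubit whose Hamiltonian coefficient
# is the piecewise-constant control u_k multiplying sigma_x, and the pulse is
# improved by finite-difference gradient ascent on the fidelity of steering
# |0> to the target |1>.
import numpy as np
from scipy.linalg import expm

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
psi0 = np.array([1.0, 0.0], dtype=complex)
target = np.array([0.0, 1.0], dtype=complex)
n_slices, dt = 10, 0.1

def fidelity(u):
    """Model varsigma: propagate |0> under the pulse u and score against the target."""
    psi = psi0
    for u_k in u:
        psi = expm(-1j * u_k * sigma_x * dt) @ psi
    return abs(target.conj() @ psi) ** 2

u = np.full(n_slices, 0.1)                       # initial pulse guess
eps, lr = 1e-4, 2.0
for _ in range(200):                             # greedy gradient ascent
    grad = np.array([(fidelity(u + eps * np.eye(n_slices)[k]) - fidelity(u)) / eps
                     for k in range(n_slices)])
    u = u + lr * grad
print("final fidelity:", round(fidelity(u), 4))
```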
A popular greedy optimization algorithm is GRadient-Ascent Pulse Engineering (GRAPE), which was first proposed to obtain $\varrho$ for control tasks in NMR spectroscopy [55]. Although the fitness landscape for $\varrho$ in $\mathscr{Q}$C is often compatible with greedy algorithms, sometimes greedy algorithms yield poor results, especially for constrained large-dimensional quantum systems [56]. Therefore, MBC methods employing global-optimization algorithms, such as evolutionary algorithms [57], have been developed to obtain $\varrho$ for $\mathscr{Q}$C problems such as quantum-enhanced adaptive phase estimation [12, 13] and quantum gate design [58, 59, 56]. For the cases where $\varsigma$ is either unknown or not readily solved analytically [54], ideas from DDC $\mathscr{C}$C have been used in $\mathscr{Q}$C with success [60, 61, 62, 63, 64, 65]. However, the lack of a formalized structure prevents the rapid progress of DDC $\mathscr{Q}$C techniques. #### II.2.3 Learning for control We now proceed to discuss literature applying ML techniques for both types of control, i.e., classical and quantum. We begin by summarizing the concept of machine-learning control (MLC). Then we summarize the state of the art for classical ML applied to both $\mathscr{C}$C and $\mathscr{Q}$C. Finally, we summarize the literature on QML for $\mathscr{C}$C. MLC is the concept of using ML algorithms to learn an effective $\varrho$ for a control system. MLC is motivated by control problems, where identifying and modelling a plant is challenging or infeasible due to unobservability and highly non-linear effects. MLC typically involves a “learning controller” who employs learning techniques to execute a control task [16, 5]. Learning enables a controller, who is neither omniscient nor possesses a feasible alternative, to execute the task successfully by assisting in designing feasible control policies. ML methods are highly flexible and adaptable but lack systematic design-and-analysis tools for studying their stability and robustness in the context of control [50]. ###### Remark 4. To cast a control problem into a learning problem, we identify the components of a control task as the components of learning, namely, $\mathscr{A}$, $\mathscr{T}$, $\mathscr{P}$ and $\mathscr{E}$. $\mathscr{A}$ devises $\varrho$ for C such that the control task is achieved. In this scenario $\mathscr{E}$ comprises plant feedback and control signal. The user chooses $\mathscr{P}$, which is calculated based on r. Classical ML has been widely applied for controlling classical operations. We highlight three approaches to incorporating ML into $\mathscr{C}$C. (i) Using ML to construct $\varsigma^{-1}$ without prior knowledge of $\varsigma$ (11) [5, 50]. (ii) Using ML for system identification, i.e., learning $\varsigma$ based on training data. The obtained $\varsigma$ is then used to design $\varrho$ (10) following MBC methods [66, 67]. (iii) Using ML for safe control, where a learning agent uses data to safely improve performance by learning the uncertain dynamics [68]. Fu’s seminal work [16] introduces a framework for utilizing $\mathscr{C}$L for $\mathscr{C}$C, where an agent called teacher (T) assess and directs learning of a classical C towards better performance as shown in Fig. 3. Figure 3: A block diagram representing the existing learning-for-control framework [16]. Here C is a learning controller that devises $\varrho$ using the information y from P such that the control goal r is satisfied. 
The teacher T evaluates the performance of C and directs the learning performed by C such that the system’s overall performance is gradually improved. T may or may not be involved in the learning loop depending upon whether the learning is done following an SL or UL scenario [16]. Classical ML has been applied to $\mathscr{Q}$C as well. Classical ML and heuristic optimization techniques are actively used for designing high-performance quantum gates for fault-tolerant quantum computing [69]. Additionally, neural networks are used in some other $\mathscr{Q}$C tasks including adaptive quantum tomography and dynamic decoupling [62]. Meta-heuristic algorithms have also been shown to be advantageous in designing $\mathscr{Q}$C policies that are robust to random noise and decoherence [70]. We now consider the application of QML for control. QML for control has been studied in the context of quantum RL [71]. RL is a valuable tool for solving direct adaptive optimal control problems [72]. Therefore, developing quantum variations of RL algorithms is an important topic. Recent works on the concept of quantum RL, however, are limited to solving simple $\mathscr{C}$C problems such as maze traversal solved on a quantum annealer [73, 74, 75, 76]. ### II.3 Unifying classical and quantum mechanics In this subsection, we explain our approach for combining classical and quantum descriptions of any physical system; this framework is important to our overall aim of unifying classical and quantum control and learning. We begin with a brief review of C∗-algebra, which provides a consistent framework for observables independent of whether the system is classical or quantum. Then we discuss the correspondence principle and, finally, describe the representation of system states for classical and quantum mechanics. We adopt an operational viewpoint, in which states describe how a system is prepared and measurements describe detection [77, 78]. Formally, this description can be made rigorous by treating states as linear functionals on the appropriate vector space for the given physics (classical or quantum with particular symmetries) and measurement as a positive operator-valued measure (POVM) [79]. A duality exists between states and measurements, and the basic mathematical description is achieved by employing observables within a C∗ algebra $\mathcal{A}$ [80]. Each observable in $\mathcal{A}$ is self-adjoint and is a vector-space homomorphism (“linear operator”). The vector space for classical physics is probability space and, for quantum mechanics, is Hilbert space or its extension to generalised functions for infinite-dimensional space [81, 82]; here we maintain the simpler language of referring to Hilbert space whether the space is finite- or infinite-dimensional, with the technicalities being covered by literature on the Gel’fand triple [83, 84, 85]. Evolution is described by mappings known as channels. We now explain classical and quantum channels based on Wilde’s two descriptions [17], which we present here as definitions. The following definition slightly rephrases Wilde’s description of a classical channel. ###### Definition 2 (Wilde [17]). A classical channel is a conditional probability distribution from input random variable $X$ to output random variable $Y$. The next definition is verbatim from Wilde’s book [17]. ###### Definition 3 (Wilde [17]). A quantum channel is a linear, completely positive, trace-preserving map.
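A minimal numerical illustration (ours, not from the source; the bit-flip example and flip probability are illustrative assumptions) of Defs. 2 and 3: a quantum channel written in Kraus form and the corresponding classical channel written as a stochastic matrix; for a diagonal (dephased) input state the two descriptions agree, anticipating Quotation 22 below.

```python
# A minimal sketch (not from the source): a quantum bit-flip channel given by
# Kraus operators (a completely positive, trace-preserving map) and its classical
# special case, a conditional probability distribution in the form of a
# stochastic matrix. For a diagonal input state both give the same statistics.
import numpy as np

p = 0.2                                            # flip probability
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
kraus = [np.sqrt(1 - p) * I2, np.sqrt(p) * X]      # Kraus operators of the channel

def quantum_channel(rho):
    return sum(K @ rho @ K.conj().T for K in kraus)

classical_channel = np.array([[1 - p, p],          # column j holds P(output | input = j)
                              [p, 1 - p]])

rho_in = np.diag([0.7, 0.3])                       # diagonal density operator
p_in = np.array([0.7, 0.3])                        # the corresponding probability vector

print(np.diag(quantum_channel(rho_in)))            # [0.62 0.38]
print(classical_channel @ p_in)                    # [0.62 0.38]
```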
As our objective is to describe learning and control agnostically, i.e., without regard to whether the object is quantum or classical, we need to combine Defs. 2 and 3 into a single agnostic definition of a channel: ###### Definition 4. A channel is a linear, completely positive, trace-preserving map acting on an inner-product space. Classical-quantum correspondence is a method either for quantising a classical system or for dequantising a quantum system and can be achieved in various ways [86, 87]. One way to achieve classical-quantum correspondence is via geometric correspondence. Geometry in the classical case arises from the Poisson bracket with states being represented on phase space, which is a Poisson manifold [88]. Quantum states, on the other hand, can be represented on a Kähler manifold with commutators imbuing a geometry that is analogous to the classical case [89, 90, 91]. This geometric correspondence can be challenging to establish in a mathematically rigorous way, but we do not require such sophisticated methods here [87]. In practice, as observed by Dirac [92], Poisson brackets of observables are simply replaced by commutators with $\text{i}:=\sqrt{-1}$ introduced as a scalar coefficient. With these notions in mind, we can understand that combining classical and quantum definitions of channel into Def. 4 is consistent with Wilde’s claim below. ###### Quotation 22 (Wilde [17] §4.6.4). [C]lassical channels are special cases of quantum channels. Thus, if this inner-product space is Hilbert space, then the channel is quantum in concordance with Def. 3. If, on the other hand, the inner-product space is restricted to being probability space (as a subspace of Hilbert space [93] (ch 20)), then the channel is classical, matching Def. 2. Subsequent to the channel, the system is subject to measurement, which we now explain mathematically. Rigorously, measurement corresponds to a POVM, which we now describe classically and quantumly. ###### Definition 5 (van Fraassen [94]). [M]easurement is an operation that locates an item (already classified as in the domain of a given theory) in a logical space (provided by the theory to represent a range of possible states or characteristics of such items). In the quantum case, every ideal measurement corresponds to an observable, which is a self-adjoint operator in the C∗-algebra and, for finite-dimensional Hilbert space, can be represented by a Hermitian matrix. More generally, for blurry measurement, the POVM assigns some probability to each observable. Classically, the observables act on a probability space, and one way to obtain the classical POVM is to diagonalize the matrices representing observables [49]. We represent states of classical systems as elements of $L^{1}(\mathbb{R})$, which are integrable real-valued functions over the real numbers and their limits, such as the Dirac $\delta$. Uncertainties are associated with the spread of these distributions, such as standard deviation [95], entropic uncertainty [96] or the Cramér-Rao lower bound [97]. In quantum mechanics, uncertainty arises from Born’s rule, for which the distribution of measurement outcomes is given by the squared modulus of the wavefunction, itself being a representation of the state in $L^{2}(\mathbb{R})$ (complex-valued normalizable functions over real numbers) [98]. ### II.4 Learning for adaptive quantum-enhanced interferometric-phase estimation In this subsection, we summarize work to date on ML methods applied to A$\mathscr{Q}$P.
First, we explain the topic of A$\mathscr{Q}$P. Then we discuss A$\mathscr{Q}$P in the context of $\mathscr{Q}$C with particular attention to policies, i.e., control procedures, and, finally, we elaborate on learning methods for such policies. #### II.4.1 Adaptive quantum-enhanced interferometric-phase estimation We now introduce the concept of A$\mathscr{Q}$P and its mathematical description. First, we focus on the interferometric transformation and the nature of the input state. We then use this mathematical description to quantify the imprecision in terms of the Holevo variance. Figure 4: A block diagram representing the A$\mathscr{Q}$P scheme as a closed-loop control system (Fig. 2). The scheme utilizes a passive linear lossless two-channel interferometer, such as Mach-Zehnder, which has a pair of input and output ports, one element of each pair labelled 0 and the other 1. The interferometer arms are also labelled 0 and 1. Within the interferometer, arm 0 undergoes a known controllable phase shift of $\Phi$, and arm 1 undergoes an unknown phase shift of $\varphi$. The quantum P incorporates the interferometer and its accompanying sensors and actuators and is represented by a purple block. The classical C is represented by an orange block, and the double-lined orange arrows represent the flow of classical information. C is in charge of sending control signal $u$ to update $\Phi$. The update in $\Phi$ is performed based on a pre-established $\varrho$ and P’s feedback $y$, which consists of the photon measurement information bit $b$ and the current value of $\Phi$. y goes to z as a fan-out, and after all $N$ photons have been injected into the interferometer and measurements are completed, the final bit string for all measurements maps to $\tilde{\varphi}$. The aim of A$\mathscr{Q}$P is an empirical unbiased estimate $\tilde{\varphi}$ of the relative unknown phase $\varphi-\Phi$ of a channel $\mathcal{I}(\varphi;\Phi)$ describing an interferometer, with one tunable parameter $\Phi$ and one unknown parameter $\varphi$, typically each being the phase shift of one arm of a two-arm interferometer. The imprecision of the estimate $\tilde{\varphi}$, namely $\Delta\tilde{\varphi}_{N}$, is a function of the quantum resource, in this case the number $N$ of particles injected sequentially into the interferometer. Without quantum resources, the standard quantum limit (SQL) is [99, 100, 101, 102] $\Delta\tilde{\varphi}_{N}\sim\nicefrac{{1}}{{\sqrt{N}}}.$ (12) The ideal A$\mathscr{Q}$P scheme involves a passive linear lossless two-channel interferometer, such as Michelson, Sagnac or, without loss of generality, Mach-Zehnder [103, 104]. The interferometer has a pair of input ports and a pair of output ports, with one element of each pair labelled 0 and the other 1, as depicted in Fig. 4. Within the interferometer, the arms are labelled 0 and 1 as well. The labelling of output ports conforms to our requirement that a particle injected into the 0 input port emerges from the 0 output port for $\varphi=0$. Within the interferometer, arm 0 undergoes a known controllable phase shift of $\Phi$ and arm 1 undergoes an unknown phase shift of $\varphi$. Maximal quantum enhancement depends on choosing an appropriate input state [105, 106, 107], which we assume to be pure (ideal case) and which is a sequence of photons with the choice of input port a degree of freedom; extracting the maximum quantum advantage thus depends on choosing an appropriate multi-photon input state, denoted by $\ket{\Psi}_{N}$.
This state is a superposition of basis states like $\ket{01\cdots 11}$, denoting, as an example, the first photon in port 0, the second in port 1, the second last in port 1 and the last in port 1. Although we do not treat loss here, we restrict our consideration to permutationally-symmetric entangled states (changing the ordering of arrival times does not change the multi-photon state) as such states are resilient to photon loss [108]. We now elaborate on the procedure for obtaining $\tilde{\varphi}$. The probability distribution for the unknown phase $\varphi$ is denoted $p_{N}(\varphi;\bm{\varpi})$, with $\bm{\varpi}$ labelling the family of distributions determined by the input state (so $\bm{\varpi}$ also serves as a label for families of input states). Although the measurement could be general [109], we restrict measurements to be in the computational basis, i.e., a single-photon detector at each output port. The information about single-photon detector clicks is in $x_{m}\in\{0,1\}$ (0 for detecting at output port 0 and 1 for detecting at output port 1), with this bit then sent to C. Then C employs policy $\varrho$, which we restrict to a binary decision tree [110], to decide from bit $x_{m}$ what the next value of $\Phi_{m}$ should be. After all $N$ photons have been injected into the interferometer and measurements are completed, the final bit string $\bm{x}_{N}$ for all measurements maps to $\tilde{\varphi}$ according to a formula that is sensitive to the choice of $\varrho$. We now discuss the scaling of $\Delta\tilde{\varphi}_{N}$ (12) for A$\mathscr{Q}$P. $\Delta\tilde{\varphi}$ would be the standard deviation of $\tilde{\varphi}$ if $\tilde{\varphi}\in\mathbb{R}$, but some other natural quantity serving as the variance is needed given that $\tilde{\varphi}$ is a periodic variable; see Fig. 5. First, we introduce the Fourier transform $\check{p}_{N}(\nu;\bm{\varpi}):=\oint\text{d}\tilde{\varphi}\,p_{N}\left(\tilde{\varphi};\bm{\varpi}\right)\text{e}^{\text{i}\nu\tilde{\varphi}},$ (13) with $\check{}$ our notation for the Fourier transform and $\nu$ the discrete conjugate variable to the phase $\varphi$. By setting $\nu\leftarrow 1$, this Fourier transform (13) yields sharpness $S_{N}(\bm{\varpi})=\left|\check{p}_{N}(1;\bm{\varpi})\right|^{2},$ (14) from which the Holevo variance [111, 112], which is an appropriate variance for a periodic variable, emerges: $\Delta\tilde{\varphi}_{N}^{2}=:V_{N}:=S_{N}^{-2}-1.$ (15) Estimating $S_{N}$ is achieved by injecting $\ket{\Psi}_{N}$, photon-by-photon, many times for sampling purposes. The quantum limit to imprecision scaling with respect to $N$ is the so-called Heisenberg limit (HL) [113] $\Delta\tilde{\varphi}_{N}\sim\nicefrac{{1}}{{N}},$ (16) but this limit is typically not reached in specific operational schemes [114, 115]. Figure 5: An example of a distribution of the estimate $\tilde{\varphi}$, with $\bm{\varpi}$ labelling the family of distributions, and $\varphi$ indicating the value to be estimated. When dealing with distributions over a compact domain, such as a circle, there can be ambiguity in defining covariance. The natural way to address this issue is to embed the circle in a complex plane or represent it by using a set of real numbers. The concept of ‘sharpness’ is introduced to quantify the spread of the distribution along the boundary of the circle. The centre of mass, despite the mass being restricted to the boundary, is located inside the circle.
The more spread out a distribution is, the further towards the origin the centre of mass goes. For example, for a uniform distribution around the circle, the centre of mass would be at the centre, which would be the most uncertain distribution. The complex number for sharpness represents the coordinate of the centre of mass, with the Fourier transform (13) yielding the moment-generating function for the distribution. Therefore, (14) tells us how far away the centre of mass is from the centre without considering the direction. #### II.4.2 Devising feasible control policies for A$\mathscr{Q}$P First, we describe relevant elements of $\mathscr{Q}$C. Then we explain how $\Phi$ is updated according to $\varrho$. Next, we introduce the term ‘policy orbit’, which comprises policies for each $\ket{\Psi}_{N}$ with $N\in\{4,\dots,N_{\text{max}}\}$. Finally, we explain the feasibility of policy orbits. We recast A$\mathscr{Q}$P as $\mathscr{Q}$C, which is expressed in Quotation 20. The $\mathscr{Q}$C problem in A$\mathscr{Q}$P falls under the category of measurement-based feedback control, which leads us to describe the components in A$\mathscr{Q}$P as elements of $\mathscr{Q}$C. P incorporates the interferometer and its accompanying sensors and actuators. C is in charge of sending control signal $u$ to update $\Phi$. The update in $\Phi$ is performed based on a pre-established $\varrho$ and P’s feedback $y$, which consists of the photon measurement information bit $b$ and the current value of $\Phi$. ###### Task 1 (A$\mathscr{Q}$P control task). Given an interferometer with an unknown phase shift $\varphi$ in one arm and a tunable phase shift $\Phi$ in the other arm, described by $\mathcal{I}(\varphi;\Phi)$, a quantum state of a sequence of photons $\ket{\Psi}_{N}$ entering the interferometer, fast measurement of which port each photon leaves and a feedback loop to control $\Phi$, devise a $\varrho$ for C that beats the SQL (12) for estimating $\tilde{\varphi}$. We now discuss how C adaptively adjusts $\Phi$ based on $y$ and $\varrho$. Neglecting loss, the $m$th photon comes out from either of the output ports with a probability that depends on $\Phi_{m}-\Phi_{m-1}$. We label this outcome by $b_{m}\in\{0,1\}$, where ‘0’ refers to the photon exiting the first port and ‘1’ to the photon exiting the second port. Given the policy vector $\varrho_{N,\bm{\varpi}}:=\bm{\Delta}_{N,\bm{\varpi}}=(\Delta_{1,\bm{\varpi}},\Delta_{2,\bm{\varpi}},\dots,\Delta_{N,\bm{\varpi}})\in\mathbb{T}^{N},$ (17) for $\mathbb{T}^{N}:=\mathrel{\scalebox{1.2}{{\hbox{\varprodb}}}}^{N}\mathbb{S},\,\mathbb{S}:=[0,2\pi)$ (18) being the $N$-torus and the circle (with coordinate being the angle from $0$ to $2\pi$ radians), respectively, the value of $\Phi_{m}$ is updated sequentially as $\Phi_{m}=\Phi_{m-1}-(-1)^{b_{m}}\Delta_{m},$ (19) for every round of measurement $m$, starting with $\Phi_{0}=0$. Once all photons in the input $\ket{\Psi}_{N}$ are exhausted by the $M$th measurement, allowing for the loss of photons such that $1\leq M\leq N$, the estimate of $\varphi$ is given by $\tilde{\varphi}\equiv\Phi_{M}$. As each measurement outcome $b_{m}$ is a binary value, the value of $\tilde{\varphi}$ obtained by this scheme is also discrete. We now define a policy orbit and its use in assessing scaling, after first illustrating the update rule with a short sketch.
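The following Python sketch (ours, not from the source) illustrates the mechanics just described: part (a) runs the feedback rule (19) once for a fixed, illustrative policy vector $\bm{\Delta}$, and part (b) computes the sharpness and Holevo variance of Eqs. (13)–(15) from a sample of estimates. The policy vector, the $\cos^{2}$ single-photon detection law and the synthetic sample in part (b) are all illustrative assumptions; whether the estimates actually concentrate around $\varphi$ depends on the input state and the learned policy.

```python
# A minimal sketch (not from the source) of (a) one pass of the update rule (19)
# and (b) the sharpness/Holevo-variance computation from a sample of estimates.
import numpy as np

rng = np.random.default_rng(4)

# (a) One run of the adaptive loop, Eq. (19), starting from Phi_0 = 0.
N = 12
Delta = np.pi / 2 ** np.arange(1, N + 1)        # illustrative policy vector
varphi = rng.uniform(0.0, 2.0 * np.pi)          # unknown phase
Phi = 0.0
for m in range(N):
    p0 = np.cos((varphi - Phi) / 2.0) ** 2      # single-photon probability of exiting port 0
    b_m = int(rng.random() > p0)                # measurement outcome b_m
    Phi = Phi - (-1) ** b_m * Delta[m]          # Eq. (19)
print("estimate from this single run:", np.mod(Phi, 2 * np.pi))

# (b) Sharpness and Holevo variance from a sample of estimates (synthetic here,
# standing in for the results of many runs under a good policy).
estimates = np.mod(varphi + 0.1 * rng.normal(size=50_000), 2 * np.pi)
S = np.abs(np.mean(np.exp(1j * (estimates - varphi))))   # first circular moment, cf. Eq. (13)
V = S ** (-2) - 1.0                                       # Holevo variance, cf. Eq. (15)
print(f"sharpness S = {S:.4f}, Holevo variance V = {V:.4f}")
```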
For a maximum number of photons $N_{\text{max}}$, the ‘policy orbit’ is $\mathbb{\Delta}_{{N_{\text{max}}},\bm{\varpi}}:=\left\\{\bm{\Delta}_{N,\bm{\varpi}}\right\\}^{N_{\text{max}}}_{N=4}$ (20) with the subscript $(N_{\text{max}},\bm{\varpi})$ (21) describing both the maximum allowed $N$ and the input state as a function of $N$; see Fig. 6. A feasible policy orbit $\mathbb{\Delta}^{\text{feas}}_{N_{\text{max}},\bm{\varpi}}$ satisfies $\Delta\tilde{\varphi}_{N}$ having approximate power-law scaling with respect to $N$, with the scaling surpassing $\nicefrac{{1}}{{\sqrt{N}}}$. Numerically, the feasibility condition can be tested from a log-log plot of $V_{N}$ vs $N$ [70]. Specifically, this plot should fit a straight line $\log V_{N}=-2\wp\log N+\text{const},\;\wp>\nicefrac{{1}}{{2}},$ (22) with a goodness-of-fit $\overline{R^{2}}$ exceeding some acceptable value, e.g. 0.999 [116]. Figure 6: A pictorial representation of a policy orbit (20), with $N_{\text{max}}=7$, for the A$\mathscr{Q}$P control task. Each policy vector $\bm{\Delta}_{N,\bm{\varpi}}$ is represented by the coordinates on the corresponding stack of $N$ circles, where the red dots mark the values for each component $\Delta_{i,\bm{\varpi}}$ (17). Physically, each red dot in one column represents incremental changes in the estimation of $\varphi$ with a fixed number of photons (19). #### II.4.3 Learning for A$\mathscr{Q}$P Now we critically review existing literature on the topic of artificial intelligence for A$\mathscr{Q}$P. We begin by summarizing how meta-heuristic optimization and ML have been employed to solve the A$\mathscr{Q}$P task. Then we present our criticism of literature casting A$\mathscr{Q}$P as ML based on Mitchell’s definition of learning in Quotation 1 and Sutton’s comment on evolutionary methods for solving RL problems in Quotation 10. A common tool for solving the A$\mathscr{Q}$P control task, namely, Task 1, is to employ meta-heuristic optimization [117]. As examples, particle swarm optimization (PSO) [118, 12], differential evolution (DE) [13, 70, 119], genetic algorithms [120] and Bayesian optimisation [107, 121] have proven to be useful for Task 1. PSO was leveraged to obtain $\mathbb{\Delta}^{\text{feas}}_{N_{\text{max}},\bm{\varpi}}$ numerically for Task 1 [118, 12]. This estimation using PSO was employed successfully in an optics experiment [122]. Using DE for AQP was shown to have a run-time advantage over PSO [13]. In principle, ML can be employed to solve Task 1 as well. However, claims to use ML for Task 1 [118, 12, 120, 123, 13, 122, 124, 125, 126, 127, 128, 129, 130, 131] do not comply with Mitchell’s definition of ML stated in Quotation 1. Our criticism is restricted to applying ML to Task 1, not to more general instances of using ML for phase estimation [132, 133, 134, 135, 136, 137, 138, 139]. The reason for our criticism is that each article claiming to employ ML for Task 1 actually cast the problem in the context of optimisation. Now we consider specifically RL for Task 1 [118, 12, 140, 119, 128, 129]. The criteria for claiming RL is more stringent because both Quotation 1 and Quotation 10 have to be considered. Specifically, evolutionary algorithms, such as genetic algorithms, genetic programming and simulated annealing, do not qualify as RL because these algorithms do not incorporate learning while interacting with the environment during the lifetime of the RL agent. 
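As a closing aside to this background material, the following Python sketch (ours, not from the source; the $V_{N}$ values are synthetic placeholders standing in for Holevo variances simulated from a candidate policy orbit) makes the feasibility test of Eq. (22) concrete by fitting the straight line on a log-log scale and reporting the fitted power and the goodness of fit.

```python
# A minimal sketch (not from the source) of the numerical feasibility check:
# fit log V_N against log N to a straight line, Eq. (22), then verify that the
# fitted power exceeds 1/2 and that the goodness of fit is acceptably high.
import numpy as np

rng = np.random.default_rng(5)
N = np.arange(4, 101)
V = N ** (-2 * 0.8) * np.exp(0.02 * rng.normal(size=N.size))   # synthetic Holevo variances

slope, const = np.polyfit(np.log(N), np.log(V), 1)
power = -slope / 2.0                                           # the scaling power in Eq. (22)
residuals = np.log(V) - (slope * np.log(N) + const)
R2 = 1.0 - np.sum(residuals ** 2) / np.sum((np.log(V) - np.log(V).mean()) ** 2)
print(f"fitted power = {power:.3f} (feasible if > 0.5), R^2 = {R2:.5f}")
```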
## III Framework for learning and control Now that we have completed our survey of key background notions and state of the art, we proceed to set out how we approach building a framework for learning and control, whether the system is appropriately described classically or quantumly. To begin with, we explain our approach for combining notions of learning and control without assuming underlying classical or quantum constructs. In other words, we describe how we construct a framework for control whose language does not presuppose either classical or quantum rules. We then explain how we bring learning into this framework for control. ### III.1 Unifying $\mathscr{C}$C and $\mathscr{Q}$C Now we explain our approach to unifying classical and quantum control by formulating control in a way that is independent of whether the underlying physics is classical or quantum. First, we introduce a definition of control that is agnostic (literally, ‘not known’) with respect to a classical vs. a quantum framework: we can incorporate solely classical or solely quantum components (elements of the control system) or a hybrid version with both classical and quantum components. Second, we explain how we construct a mathematical underpinning for our agnostic definition of control. Finally, we discuss the elements of our control system, including controller C, plant P and communication channels. In §II.2, we gave two authoritative definitions of a control system, as depicted in Fig. 2. However, each of these definitions is vexing. * • Quotation 16 defines a control system by how it is made and whether the task is completed but lacks a precise explanation of what must be done well. * • Quotation 17 is unclear regarding what “sensed quantities” are and why autonomous behaviour is needed (why can’t a sentient being, like a traffic constable, be incorporated into a control system?). Instead, we define the control system in two ways. One way is the top-down definition. ###### Definition 6 (Top-down definition of control system). A control system steers system variables so that pertinent observables reach specific targets. Here, the control system is a black-box (the inside is unimportant, hence not seen), but input, output and performance measure are clear. The alternative bottom-up definition establishes the control system in terms of its key constituents. Before providing the bottom-up definition, we define essential components of the control system agnostically. The controller is described by the policy $\varrho$ (10), and the plant is described by a physical process (11), but these two descriptions assume classical input and classical output so are not yet agnostic. To be fully agnostic, we consider generalizations of policy and plant dynamics that accept both classical and quantum input and yield both classical and quantum output. Therefore, in concordance with Remark 1, we redefine error signals $D$, actuator actions $U$ and plant-based sensor outputs $Y$ as being spaces and do not make suppositions about them being Hilbert spaces or probability spaces. ###### Definition 7 (Evaluator). An evaluator E is a channel described by $r$-dependent policy $\varepsilon_{r}:Y\to D$. ###### Definition 8 (Controller). A controller C is a channel described by $r$-dependent policy $\varrho_{r}:D\to U$. This definition captures the essence of Eq. (10) but does not presuppose that $D$, $U$ and $Y$ are classical, although we treat the reference $r\in\mathbb{Z}$ and $\varrho$ as classical. ###### Remark 5.
Our definition of the controller as an agent does not include sensors or actuators explicitly, in contrast to typical definitions; for example, Quotation 4 demands that C “can be viewed as [using] sensors and … actuators”, but our definition of the full control system in Def. 6 makes equivalent having sensors and actuators at P or at C. ###### Remark 6. We assume that C is able to read and act on all input data and that all outputs from C are faithfully sent without loss to P. We separate the agent executing the policy from sensors and actuators, which we regard as more conveniently defined as part of the plant. The reason for defining C this way becomes evident as we describe our whole system-of-system approach to control and learning, but the short version is that explicit communication lines between C, P, and eventually the teacher are described better using Definition 8, which accepts sensor input and yields outputs sent to actuators. The controller controls a plant, which we now define. ###### Definition 9 (Controlled plant). A controlled plant P performs a given task $\varsigma$, which maps resources $R$ and controlled actuator input $U$ to output $Y$. ###### Remark 7. This definition for P does not require P to be successful at the given task; the C’s job is to steer the plant to the successful execution of the task. We now discuss the control switch S. S is a channel that guides y to z if $y=r$ and to y otherwise; i.e., $\textbf{if}~{}y=r~{}\textbf{then}~{}\text{y}\to\text{z}~{}\textbf{else}~{}\text{y}\to\text{y},$ (23) with $\to$ referring to ‘guided’. In the classical case, the statement $y=r$ is randomly assigned true or false with the proportion of true vs. false determined by a threshold value. In the quantum case, $y=r$ is assigned true or false following the quantum fingerprinting technique with a one-sided error [141]. In the following, we first describe a classical S. Then we agnostisise (Our term for modifying the description to be equally applicable to quantum and classical cases) the definition of S. Finally, we show how the quantum S yields classical S in the classical limit; see Fig. 7. Figure 7: A diagram showing the relationships between classical ($\mathscr{C}$), quantum ($\mathscr{Q}$) and agnostic ($\mathscr{A}$) descriptions of a physical system. To define any component (e.g. switch) of our framework, we begin from its $\mathscr{C}$ description and “agnosticise” to construct an equivalent $\mathscr{A}$ description. By “quantising” this $\mathscr{A}$ description, we then derive the $\mathscr{Q}$ definition, which can be related back to the original $\mathscr{C}$ description via “dequantising”. ###### Remark 8. We follow quantum-pseudocode conventions, where quantum data and quantum logical operations are distinguished by an underline [142]. For example, $\underline{\textbf{if}}~{}\underline{c}~{}\underline{\textbf{then}}~{}\text{cnot}(\underline{b},\underline{a}),$ (24) implements a Toffoli gate [142], where $\underline{a}$, $\underline{b}$ and $\underline{c}$ are quantum registers and $\textsc{cnot}(\underline{b},\underline{a})$ is the controlled-not operation, controlled by the first argument. We note that the distinction between the classical and quantum syntactic annotation (meaning the distinction between whether the logic or data are classical or quantum) is primarily semantic and the quantum pseudocode without annotation makes sense operationally as emphasised in the following two quotes. However, in Def. 
11 of the quantum S, we annotate data and logic by underlining for clarity. ###### Quotation 23 (Knill [142]). In principle, one can write quantum pseudocode without using annotation. Note that only registers declared as bit sequences can be used for quantum operations. From an operational point of view it suffices to describe what happens to a register which is currently in superposition when subjected to a classical (non-reversible) operation. ###### Quotation 24 (Knill [142]). The conventions used here require that a register symbol is always considered either classical or quantum. Semantically, which is in effect depends on the most recent operation applied to it. If it has been declared as quantum, or a proper quantum operation has been applied, then no further classical operations can be used until it is measured. The syntactic annotation helps keep track of the semantics of a register in any given section of code. We cast S in the language of information theory, specifically as a logically reversible gate. Logical reversibility connects the classical and quantum descriptions well, with reversible logic recasting irreversible Boolean functions as logically reversible permutations whose smallest logical element has three inputs and three outputs [143]. We regard S as having three inputs, various conditional information-processing steps, and three outputs; see Fig. 8(a). The three inputs are the control bit c whose value is $c\in\{0,1\}$, the plant-based sensor output dit y and the reference signal dit r. The three conditional information-processing steps are (i) swap y and r iff $c=1$ (i.e., if $c$ then swap(y,r)), (ii) discard r (which might have changed value), and (iii) if $c$ then y $\to$ z else y $\to$ y. The three outputs are c (unchanged from input), y and z (the control system output). Classically, the first step of S can be achieved by a Fredkin gate [144] with input c replaced by the random Bernoulli variable $G$, which is a random process labelled by threshold value $q\in(0,1)$ and expressed as $\begin{cases}0&\text{if unif}[0,1]\geq q,\\ 1&\text{otherwise},\end{cases}$ (25) with unif referring to a uniform distribution. The threshold value $q$ is the upper bound for the distance between $y$ and $r$ such that those values are deemed to be approximately equal; i.e., if $\operatorname{dis}(y,r)<q$ then swap(y,r). Conceptually, $q$ should be an overlap such as the Bhattacharyya coefficient between two probability distributions [145]. The Fredkin gate follows and then r is discarded. The final step of S is realized by the railway switch RS: if $G$ then y $\to$ z else y $\to$ y. Figure 8: A block diagram showing the internal components of a control switch S, which we have previously represented as a filled circle in Fig. 2. We regard S as having three inputs, various conditional information-processing steps, and three outputs. (a) A classical S has three classical inputs, namely, a control bit c whose value is $c\in\{0,1\}$, plant-based sensor output dit y and reference dit r, and three outputs. The operation G converts c into a random Bernoulli variable. If this random variable is 1, r and y are swapped, followed by the trashing of r. Finally, the railway switch (RS) directs y to the output or the feedback based on the value of the random variable. The three outputs are c (unchanged from input), y and z (the control system output).
(b) Quantum S is similar to classical S except that we convert data and logic to quantum data and quantum logic, and the controlled-SWAP operation is realized by the quantised Fredkin gate with conjugation with Hadamard gates on the control qubit. Similar to the classical S, a quantum S has three inputs c, r, and y, which are qubit, qudit and qudit, respectively. The final step of quantum S is achieved by measuring the control qubit and feeding the one-bit measurement outcome to RS, which directs the qudit y accordingly. Before agnosticising S, we generalise c to accept either classical or quantum input and to yield classical or quantum output, if the input is quantum or classical or the input is quantum, respectively. Therefore, following Remark 1, we regard the set of all control signals $C:=\\{c\\}$ as forming a Hilbert space without explicitly considering whether or not the Hilbert space is restricted to a probability space. ###### Definition 10 (Agnostic S). A control switch S is a channel that executes Procedure 1 ($|$ means ‘either’). 1: 2:bit|qubit control $\triangleright$ control $c$ 3:dit|qudit sensor $\triangleright$ plant-based sensor output $y$ 4:dit|qudit external $\triangleright$ reference $r$ 5: 6:bit control 7:dit|qudit sensor 8:dit|qudit external $\triangleright$ control system output $z$ 9:procedure agnosticS(control,sensor,external) 10: if control then swap(sensor,external) 11: discard external $\triangleright$ partial trace quantumly 12: control $\leftarrow$ control 13: if control then external $\leftarrow$ sensor $\triangleright$ This is control-dependent guiding 14: return control,sensor,external 15:end procedure Procedure 1 Classical$|$quantum control switch We now cast S as a quantum channel by quantising the agnostic version in Def. 10 and Procedure 1; see Fig. 8(b). Quantum S is similar to classical S discussed above except that we convert data and logic to quantum data and quantum logic, and the controlled-SWAP operation is realized by the quantised Fredkin gate [146, 147] with conjugation with Hadamard gates on the control qubit [141]. The final step of quantum S is achieved by measuring the control qubit and feeding the one-bit measurement outcome to RS. RS then executes if $c$ then y $\to$ z else y $\to$ y. Formally, we quantise S in Def. 10 by modifying Procedure 1 through the use of quantum syntactic annotation for quantum data and logical operations. ###### Definition 11 (Quantum S). A quantum S is a quantum channel that executes Procedure 2. 1: 2:qubit control $\triangleright$ control $c$ 3:qudit sensor $\triangleright$ plant-based sensor output $y$ 4:qudit external $\triangleright$ reference $r$ 5: 6:bit control 7:qudit sensor 8:qudit external $\triangleright$ control system output $z$ 9:procedure quantumS(control,sensor,external) 10: if control then swap(sensor,external) 11: discard external $\triangleright$ partial trace 12: control $\leftarrow$ control $\triangleright$ Measure control qubit 13: if control then external $\leftarrow$ sensor$\triangleright$ This is control-dependent guiding 14: return control,sensor,external 15:end procedure Procedure 2 Quantum control switch ###### Remark 9. A consequence of quantum S is that, after each control-loop cycle, y could become entangled with r, which causes decoherence in y. ###### Remark 10. We now explain how we dequantise quantum S. In quantum S, the Hadamard conjugation of c transforms it from the z-basis to the x-basis and back and then it is measured. 
The conversion to the x-basis yields a two-by-two pure density matrix. Classically this conversion is not allowed because there is only one basis in the classical setting. Therefore, to recover classical S we diagonalise the pure density matrix as it would make it an equal mixture of zeros and ones. This procedure is simulated by $G$ (25) in Fig. 8(a). Whereas there are two Hadamard gates for conjugation in the quantum case, this effect is simulated by $G$ in the classical S and the second Hadamard maps to line. The measurement is not necessary either as c is classical. Next, we define bottom-up control, as a contrast to top-down control in Def. 6. Our bottom-up definition is related to Quotation 16, wherein a control system is described as interconnected components required to build a standard control system. ###### Definition 12 (Bottom-up definition of control system). A control system comprises a controller, a controlled plant, communication lines between the controller and the plant, input of reference information for establishing targets and output being a task-relevant description of the plant’s final state. ###### Remark 11. Two examples of task-relevant descriptions include a simple beep that conveys the plant’s task is complete or, alternatively, sensor readings that convey temperature and consumed energy plus a beep that says the task is completed. ###### Remark 12. Both Defs. 6 and 12 are agnostic with respect to whether the controller, communication channels and plant are each quantum or classical. ###### Definition 13 (Quantum control). $\mathscr{Q}$C is $\mathscr{C}$C with at least any one of the following being quantum: E, C, P or S. ###### Remark 13. Our Def. 13 reduces to Quotations 20 and 21 subject to the restriction that P’s dynamics are strictly quantum. Now that we have agnostic definitions of the control system in Defs. 6 and 12, we describe mathematically the control system and its components without being explicit regarding whether particular components behave classically or quantumly; see Fig. 9. To this end, we treat E, C, P and S each as a classical or quantum channel, and these channels are themselves connected to each other by trivial channels, which also can be classical or quantum. Together, the controller and the plant characterizes the control system channel. Figure 9: A block diagram representing our agnostic closed-loop control system, where the single-line arrows represent the flow of both classical and quantum information, and the double-line arrows represent the flow of only classical information. We modify the $\mathscr{C}$C loop of Fig. 2 to add an agnostic S and make E, C and P act as either quantum or classical. The input to the control system is zeros (i.e., $y\equiv 0$, with $0$ denoting a string of zeros), and the output is labelled z. The termination condition for the control system is baked into the policy (i.e., is written into the code for the policy). We augment the channels between C, P and S with an extra bit that carries a termination signal from C to S. Upon receiving the termination signal, S switches its output from y to z. C uses an internal clock, whose time can be indicated by integers, to determine whether the termination condition is met. ### III.2 Unifying $\mathscr{C}$L and $\mathscr{Q}$L We now explain our approach to unifying classical and quantum ML. First, we extend the classical definition of ML in Quotation 1 to the quantum case. 
### III.2 Unifying $\mathscr{C}$L and $\mathscr{Q}$L

We now explain our approach to unifying classical and quantum ML. First, we extend the classical definition of ML in Quotation 1 to the quantum case. Next, following the definitions of classical and quantum ML, we formulate learning in a way that is independent of whether $\mathscr{A}$ or $\mathscr{E}$ are classical or quantum. We now extend Mitchell’s description of ML in Quotation 1 to the quantum case. We define QML by whether $\mathscr{A}$ and $\mathscr{E}$ are quantum. Building on the effective definition of a learning agent $\mathscr{A}$ in Quotation 4, a quantum learning agent is defined by whether any components of the agent are quantum. Quantising $\mathscr{E}$ involves a superposition of inputs, e.g., a superposition of features $\ket{\text{feature}}$ for UL, a superposition of labelled features $\ket{\text{feature}}\ket{\text{label}}$ for SL and a superposition of action-observation [71] product states $\ket{\text{action}}\otimes\ket{\text{observation}}$ for RL. As a classical agent does not benefit from the richness of quantum $\mathscr{E}$, we only consider a quantum agent in the context of quantum $\mathscr{E}$. Now, following the definition of classical ML in Quotation 1, we define QML as follows.

###### Definition 14 (Quantum ML). A quantum agent $\mathscr{A}$ is said to learn from quantum experience $\mathscr{E}$, with respect to some class of tasks $\mathscr{T}$ and classical performance measure $\mathscr{P}$, if its performance at tasks in $\mathscr{T}$, as measured by $\mathscr{P}$, improves with quantum experience $\mathscr{E}$.

We now introduce a definition of ML that is agnostic with respect to a classical vs. a quantum framework. Following Defs. 1 (classical learning) and 14 (quantum learning), plus Def. 4 (agent), we now define agnostic ML.

###### Definition 15 (Agnostic ML). An agent $\mathscr{A}$ is said to learn from experience $\mathscr{E}$, with respect to some class of tasks $\mathscr{T}$ and classical performance measure $\mathscr{P}$, if its performance at tasks in $\mathscr{T}$, as measured by $\mathscr{P}$, improves with experience $\mathscr{E}$.

### III.3 Introducing learner and teacher/user

Having established the unified control and learning schemes, we now introduce the two remaining components of our LfC framework, namely the learner and the teacher/user. To this end, we discuss why we treat the learner as an agnostic agent, whereas we treat the teacher/user as a classical agent. Additionally, we explain the connection between learner and teacher/user from the perspective of an ML pipeline. In our framework, we introduce a learner or learning agent (L) whose purpose is to devise policies for C to execute. We define the combination of L and C as a “learning controller”. As compared to Fu’s definition of a learning controller, our learning controller is not one agent but comprises two separate agents. This two-agent model simplifies the ideas of quantum and classical learning controllers. Depending on whether L and C are classical or quantum, four possibilities arise, namely $\mathscr{C}$L–$\mathscr{C}$C, $\mathscr{C}$L–$\mathscr{Q}$C, $\mathscr{Q}$L–$\mathscr{C}$C and $\mathscr{Q}$L–$\mathscr{Q}$C. Except for $\mathscr{C}$L–$\mathscr{C}$C, we denote the rest as quantum-learning controllers, as they include at least one quantum component. Nevertheless, our LfC framework needs to be independent of the underlying physics of C and L, which leads us to describe L agnostically. We represent the learner L, which executes the learning algorithm, in a purple box to account for the fact that it can be either classical or quantum. Figure 10: A block diagram representing our LfC framework.
Double-lined arrows represent the flow of classical information and single-lined arrows represent the flow of both classical and quantum information. The teacher M and user U are classical agents and hence all connections between them and the rest of the blocks are represented by double lines. We define the combination of L and C as a “learning controller”. In the training phase, M directs the learning agent L to devise the control policy $\varrho_{r}$ based on z from P. M evaluates the performance of C and directs the learning process such that the control system’s overall performance is gradually improved. Additionally, M also sets parameters $\zeta$ of P and provides r to S and policy $\varepsilon_{r}$ to E. In the test and validation phases, U apply the learned policy $\varrho_{r}$ to execute the control task. Thus far, our framework requires the policy to appear out of thin air, whereas a complete picture would involve a teacher, whom we denote by M for the Latin term magister. M trains L to develop a policy that becomes useful for the user U, who replaces M when the training and validation are complete. M is an agent who implements the process of learning for control. Our description of M is different from Fu’s notion that M is only there to supervise or train. Our notion of M/U is motivated by considering the physical realization of the ML pipeline (2). ### III.4 Learning for control pipeline In this subsection, we present our agnostic learning for control scheme in Fig. 10, where we draw all the components, namely C, P, S, E, L, M/U, and their interconnections, and explain how the scheme works. We begin by describing the data preprocessing, followed by model calibration for our ML for control scheme. Then we explain the training and testing of these models. Finally, we explain the subtleties related to online vs. offline learning for control. We now describe the pre-processing step of the ML pipeline for control. In the pre-processing step, the data set $\mathscr{D}=\mathscr{D}_{\text{model}}\sqcup\mathscr{D}_{\text{test}}$ (26) is formed by applying pre-processing operations, such as data cleansing, feature extraction and feature selection, on the raw data set [27]. The raw data set is either generated using a simulation of the control system or created based on data collected after executing the control system without the learning loop for some limited settings. The pre-processing step usually involves domain expertise and subjectivity [148, 149], which we are not analyzing here. The next step of the ML pipeline is calibrating, which we now describe in detail. In this step, a tuple of feasible hyperparameters of the model is obtained by searching over the hyperparameter space. During this step, we treat M and not U. The calibrating step involves two sub-steps, namely, training and testing. In the training step, M obtains a tuple of hyperparameters, a randomly-sampled subset $\mathscr{D}_{\text{train}}\subset\mathscr{D}_{\text{model}}$ (27) and the subset $\mathscr{D}^{\prime}_{\text{test}}=\mathscr{D}_{\text{model}}-\mathscr{D}_{\text{train}}$ (28) for training L. Next, M provides L with $\mathscr{D}_{\text{train}}$ and the hyperparameters and instructs L to obtain a $\varrho$ that maximizes the control system’s performance on $\mathscr{D}_{\text{train}}$. Then, in the testing step, M provides L with $\mathscr{D}^{\prime}_{\text{test}}$ and instructs L to evaluate the performance of obtained $\varrho$ on $\mathscr{D}^{\prime}_{\text{test}}$. 
M and L collaborate to evaluate this performance for each tuple of hyperparameters by repeating these two sub-steps for different $\mathcal{D}_{\text{train}}$. After repeating this process of evaluating performance for all possible tuples of hyperparameters, the calibration step returns the tuple corresponding to the maximum $\varrho$ performance. In the training step, M provides L with $\mathscr{D}_{\text{model}}$, along with the hyperparameters returned from the calibration step. M, then instructs L to obtain $\varrho$ on this subset. Finally, in the testing step, M instructs L to provide C with the $\varrho$ obtained from the training step. M proceeds with providing C with $\mathscr{D}_{\text{test}}$, which is unseen in the calibrating and training steps, and instructs C to commence the control loop. L remains inactive for the remainder of the time. The control system executing $\varrho$ either passes or fails at the test step; if the control system passes, it is then used in the real world, wherein M is replaced by U [149]. Here we discuss the subtleties of training and testing a learning controller in both online and offline settings. Online ML methods are employed when $\mathscr{E}$ become available in sequential order and at each step of the control process. In online learning, there is no distinction between the training and testing stages. Therefore, the roles of M and U become identical. In this setting, M/U instructs both L and C to interact with P to generate $\mathscr{E}$ online. L then obtains a $\varrho$ that improves the control system’s performance with increasing $\mathscr{E}$. On the other hand, offline ML methods obtain a $\varrho$ for a given data set $\mathscr{D}$. Whereas, in the online learning setting, $\varrho$ updates as more $\mathscr{E}$ comes in; in contrast, offline learning $\varrho$ is fixed once the training and testing steps are finished. In offline learning, C and P are inactive in the training and testing steps. M and L collaborate through the training and testing steps until L obtains a $\varrho$ that satisfies the control system’s performance requirement. If no $\varrho$ satisfies the performance requirement, then the control system fails. ## IV Graph representation of state-of-the-art In this section, we provide a graphical interpretation of the research done in the field connecting learning and control. We explain how we identify peer- reviewed literature on any pair of the four topics $\mathscr{Q}$L and $\mathscr{C}$L or $\mathscr{Q}$C and $\mathscr{C}$C. We employ a square $\square$ to represent the state of the art with vertices representing each of the four topics and edges representing overlaps between pairs of topics. Then we discuss how edges are labelled; these edge labels represent state of the art. ### IV.1 Aggregating candidates We begin with explaining how we aggregate the candidates for deciding their memberships as vertices or edges of our knowledge graph. Our collection of candidates is achieved by first searching the literature for relevant peer- reviewed articles. Then we sort these candidates into classes corresponding to different vertex and edge types. We aggregate relevant literature by employing Google Scholar using $\mathscr{C}$C, $\mathscr{Q}$C, $\mathscr{C}$L and $\mathscr{Q}$L keywords and their pairwise combinations as search prompts. The search prompts we obtain either corresponds to the four vertices of the $\square$ or the edges connecting these vertices. 
Furthermore, we include references from review articles [150, 41, 151, 152, 153, 137, 154, 139, 155, 71, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 16, 51, 170, 131, 171, 172, 36, 173, 174, 50, 43, 10, 175, 37, 11, 66, 176, 177] to our list of literature. We then filter out non-relevant articles based on the abstracts and criteria explained in the following subsections Section IV.2 and Section IV.3. We then sort the literature into four bins corresponding to the vertices of the $\square$. One work can be sorted into more than one bin. Next, we assign each article to an edge or hyperedge based on its membership to vertices. If the article is a member of two vertices, we assign it to either a directed or an undirected edge, which represents using one topic to address another or the unification of topics. If the article is a member of three or more vertices, we assign it to a hyperedge. However, in our search, no candidate was a member of more than two vertices. Therefore a graph is sufficient to represent our knowledge graph. ### IV.2 Vertices We now explain the vertices of the $\square$ and their membership criteria. Each vertex corresponds to one of the four topics: $\mathscr{C}$C, $\mathscr{Q}$C, $\mathscr{C}$L and $\mathscr{Q}$L. We orient the $\square$ such that the two vertices at the top represent learning, and the two at the bottom represent control. The left side represents the classical regime, and the right side represents the quantum regime. We decide membership based on the definitions we have provided for each of these four topics, which in some cases are quotations from the literature and in other cases, they form definitions that we have constructed. We now proceed to explain our criteria for deciding whether a given article is accepted as being a member of either or both of the two control vertices. We use the two authoritative definitions of $\mathscr{C}$C in Quotations 16 and 17 to decide memberships for the $\mathscr{C}$C vertex. A candidate article is accepted as a member of $\mathscr{C}$C if its implicit or explicit definition of $\mathscr{C}$C matches any of the above two definitions. In the quantum setting, we use our own definition of $\mathscr{Q}$C in Def. 13 to decide membership for the $\mathscr{Q}$C vertex. ###### Remark 14. Of course, this procedure for deciding membership is somewhat subjective, so we give an example of one case of rejecting membership to clarify how this procedure works. An example of rejecting membership is given by Lloyd [52] for the reason that neither implicit nor explicit separation of E, S and M/U are given. The control loop evolves into a coherent superposition of C and P without a clear distinction between them. We follow Quotation 1 and our extension to that definition, Def. 14, as the basis for defining learning in both classical and quantum domains. Specifically, we use Quotation 1 to decide the membership of the $\mathscr{C}$L vertex and Def. 14 to decide the membership of the $\mathscr{Q}$L vertex. Following our definition of $\mathscr{Q}$L, we deem an article as being a member of $\mathscr{Q}$L by whether $\mathscr{A}$ and $\mathscr{E}$ are quantum. ### IV.3 Edges We now discuss the types of edges used to connect vertices in the $\square$. An edge represents articles that incorporate both topics represented by the two vertices connected by the edge. A directed edge represents articles that use one topic to address another, and an undirected edge represents literature uniting the two topics. 
The absence of an edge represents a lack of literature relating these two topics. We also describe a hyperedge for the cases where an article covers three or more topics. Each edge is labelled by the list of its members, i.e., references pertaining to that overlap. Instances of articles in the literature are decided to be members of a vertex or an edge. A directed edge membership is decided based on whether the literature only shows how to use knowledge in the vertex at the tail end of the edge for the topic represented by the vertex at the head of the edge. We refer to the knowledge represented by a directed edge as the knowledge going one way, i.e., from the ‘tail’ topic to the ‘head’ topic. An edge is directed only if all the literature is only one-way. We now proceed to explain our criteria for deciding undirected-edge memberships. An undirected edge membership is decided based on whether the literature is both a member of two vertices and the two directed edges that connect them. The undirected edge thus shows that the aggregate knowledge in the literature goes both ways, _not_ that all the literature goes both ways. Undirected edges signify some advance towards unifying the two topics. We use our top-down and bottom-up agnostic definitions of control, Defs. 6 and 12, respectively, to decide membership for articles that signify some advance towards unifying $\mathscr{C}$C and $\mathscr{Q}$C, and our agnostic definition of ML, Def. 15, to decide membership for articles that signify some advance towards unifying learning in classical and quantum domains. Edge memberships are useful if the literature strictly connects only two topics at a time. If an article were to be a member of three or more vertices, we would use a hypergraph with hyperedges connecting two or more vertices together [178]. As we have not found such an article, a graph is sufficient to represent our knowledge graph. Our article unites all four topics, so would require a hyperedge if it were to be included in the knowledge representation, but we do not include our article in this set. ### IV.4 Knowledge graph We now represent key literature on unifying classical and quantum control and learning as a knowledge graph [14]. We employ a square $\square$ to represent our knowledge graph with vertices representing each of the $\mathscr{C}$C, $\mathscr{Q}$C, $\mathscr{C}$L and $\mathscr{Q}$L topics and both directed and undirected edges representing connections between topic areas. Our knowledge graph is particularly useful to convey not just state of the art but also knowledge gaps. We begin with describing the $\square$. Then we explain each of the connections in our knowledge graph and identify the knowledge gaps in the literature. Figure 11: Knowledge graph representing key literature concerning the connection between each of the $\mathscr{C}$C, $\mathscr{Q}$C, $\mathscr{C}$L and $\mathscr{Q}$L topics. Vertices represent each of the four topics, and both directed and undirected edges represent connections between topic areas. The edges are labelled by the list of corresponding member articles. If an article is a member of two vertices, we assign it to either a directed or an undirected edge, which represents using one topic to address another or the unification of topics. Edge memberships are useful if the literature strictly connects only two topics at a time. If an article were to be a member of three or more vertices, we would use a hypergraph with hyperedges connecting two or more vertices together. 
As we have not found such an article, a graph is sufficient to represent our knowledge graph. An example of a hyperedge is represented by the shaded grey area that connects $\mathscr{C}$C, $\mathscr{Q}$C and $\mathscr{Q}$L. Our knowledge graph is particularly useful to convey not just state of the art but also knowledge gaps, which are represented by the missing edges. Our knowledge graph, shown in Fig. 11, has four vertices labelled $\mathscr{C}$L for classical learning, $\mathscr{Q}$L for quantum learning, $\mathscr{C}$C for classical control, and $\mathscr{Q}$C for quantum control. We do not discuss the status of knowledge for each of these four topics; rather we are interested only in works establishing connections between these four topics. Our graph differs from usual graphs by having three kinds of edges allowed between vertices simultaneously. Edges represent connections between pairs of the four topics, with each topic represented by a vertex. We employ composite edges corresponding to directed and undirected edges. Each edge in the knowledge graph is either unidirectional if one topic builds on the other or undirected if each topic builds on the other. The essence of the unidirectional $\mathscr{C}$L-$\mathscr{Q}$L edges concerns quantum algorithms that enhance classical ML [11, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 47, 45, 46, 48, 195, 196, 197, 198, 199, 200, 29] for the rightward direction and quantum-inspired algorithms for classical ML [201, 202] in the case of the leftward-pointing edge. The undirected $\mathscr{C}$L-$\mathscr{Q}$L edge represents literature reporting an advance towards unifying the two topics [203, 204, 205, 65]. The unidirectional $\mathscr{C}$C-$\mathscr{Q}$C edge represents literature concerning generalisation of $\mathscr{C}$C schemes to $\mathscr{Q}$C [10, 8, 9] and the undirected $\mathscr{C}$C-$\mathscr{Q}$C edge represents literature working toward a unifying framework for $\mathscr{C}$C and $\mathscr{Q}$C [15, 206]. The downward unidirectional edge from $\mathscr{C}$L to $\mathscr{C}$C represents literature extending $\mathscr{C}C$ to allow for the controller to learn [66, 50, 5, 16], and the upward unidirectional edge concerns literature employing mathematical tools developed for $\mathscr{C}$C to solve problems in $\mathscr{C}$L, such as optimal parameter tuning for training neural networks [6, 207] and optimal control techniques for RL [208, 209, 210, 150, 25]. The undirected $\mathscr{C}$L-$\mathscr{C}$C edge represents literature reporting an advance towards unifying the two topics [72, 211, 67, 68]. The sole $\mathscr{C}$L-$\mathscr{Q}$C edge is unidirectional, and represents articles employing ML for solving problems in $\mathscr{Q}$C such as quantum-gate design [54, 69, 212, 213, 214, 215, 129, 134, 136, 130, 122, 128, 63, 62, 216, 64, 61]. The sole $\mathscr{Q}$L-$\mathscr{C}$C edge is unidirectional and represents articles employing QML for solving problems in $\mathscr{Q}$C. QML for control has been studied in the context of quantum RL and is limited to solving simple $\mathscr{C}$C problems [217, 74, 73, 75, 76]. 
The downward unidirectional edge from $\mathscr{Q}$L to $\mathscr{Q}$C represents literature employing mathematical tools developed for $\mathscr{Q}$L to solve problems in $\mathscr{Q}$C [218], and the upward unidirectional edge concerns literature employing mathematical tools developed for $\mathscr{Q}$C to solve problems in $\mathscr{Q}$L, such as diagnosing barren plateaus [219] and tuning variational quantum algorithms [220, 221, 222]. Now we discuss the missing edges, evident in Fig. 11, which represent gaps in the literature. The first notable gap is represented by the lack of a directed $\mathscr{Q}$C-$\mathscr{C}$C edge. An example of filling this gap would be an article employing mathematical tools developed for $\mathscr{Q}$C to enhance $\mathscr{C}$C. The gaps represented by the lack of directed $\mathscr{C}$C-$\mathscr{Q}$L and $\mathscr{Q}$C-$\mathscr{C}$L edges are surprising because literature exists that employs mathematical tools developed for $\mathscr{C}$C and $\mathscr{Q}$C to solve problems in $\mathscr{C}$L and $\mathscr{Q}$L, respectively. Examples of filling these gaps would be articles employing $\mathscr{C}$C to enhance $\mathscr{Q}$L or presenting quantum-enhanced control leading to superior RL. The undirected $\mathscr{Q}$C-$\mathscr{Q}$L gap is interesting because literature exists that reports an advance towards unifying $\mathscr{C}$C with deep learning and RL [72, 211, 67, 68], and the extension to the quantum case follows naturally.

## V A$\mathscr{Q}$P as an SL problem

In this section, we apply our unified LfC framework to describe A$\mathscr{Q}$P as an SL problem. First, we motivate and present our idea of elevating the original control task in A$\mathscr{Q}$P to a generalized control problem, which is to obtain $\mathbb{\Delta}^{\text{feas}}_{N_{\text{max}},\bm{\varpi}}$ for an unknown $p(\varphi;\bm{\varpi})$. Then we discuss the mapping of this control problem to the corresponding learning task, i.e., learning $\mathbb{\Delta}^{\text{feas}}_{N_{\text{max}},\bm{\varpi}}$. Finally, we elaborate on our choice for the appropriate learning algorithm.

### V.1 Mapping A$\mathscr{Q}$P to learning

Here we elaborate on an example of an application of our LfC framework. Specifically, we analyse the A$\mathscr{Q}$P learning problem. First, we discuss how we elevate the original A$\mathscr{Q}$P control task to a generalised control problem. Then we elaborate on our approach for mapping the generalized A$\mathscr{Q}$P control problem to an ML problem. Finally, we explain our approach for constructing a computationally feasible $\mathscr{D}$ for our formulation of the A$\mathscr{Q}$P ML problem. We generalise the A$\mathscr{Q}$P control problem, as described in Section II.4, and construct a new control task. For given $\bm{\varpi}$, the re-defined task is to devise $\mathbb{\Delta}^{\text{feas}}_{N_{\text{max}},\bm{\varpi}}$, defined just below Eq. (20), for any unknown $p(\varphi;\bm{\varpi})$. Previously, devising $\mathbb{\Delta}^{\text{feas}}_{N_{\text{max}},\bm{\varpi}}$ was achieved by obtaining $\bm{\Delta}_{N,\bm{\varpi}}$ (17) for each $N$ through constrained optimisation. The constrained optimisation problem, whose fitness landscape is highly non-convex, is solved using heuristic global-optimization algorithms such as PSO [118, 12] and DE [13, 70, 119] in congruence with the scaling condition (22). Each of these heuristic global-optimisation algorithms is computationally expensive, thus severely limiting the achievable $N_{\text{max}}$.
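To indicate the cost structure of this baseline, the sketch below runs a differential-evolution search over one policy vector per $N$. The objective `toy_sharpness` is a cheap stand-in for the actual performance measure, which requires simulating the adaptive measurement under $p(\varphi;\bm{\varpi})$; all settings shown are illustrative rather than taken from the cited works.

```python
import numpy as np
from scipy.optimize import differential_evolution

def toy_sharpness(policy: np.ndarray) -> float:
    # placeholder objective only: the real measure is obtained by simulating
    # the N-photon adaptive interferometric measurement
    return -np.sum(np.diff(policy) ** 2) - 0.1 * np.sum(np.cos(policy))

def devise_policy(N: int, seed: int = 0) -> np.ndarray:
    bounds = [(0.0, 2 * np.pi)] * N            # one feedback phase per photon
    result = differential_evolution(
        lambda x: -toy_sharpness(x),           # DE minimises, so negate
        bounds, maxiter=200, popsize=20, seed=seed, tol=1e-6,
    )
    return result.x

# Every candidate policy must be re-evaluated at every generation, and the
# whole search is repeated for each N up to N_max, which is what limits the
# achievable N_max in practice.
feasible_policies = {N: devise_policy(N) for N in range(4, 12)}
```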
To address this drawback of the existing optimization schemes, we propose an ML algorithm to solve the generalized control task in A$\mathscr{Q}$P. We now elaborate on our procedure to map the generalised control task of A$\mathscr{Q}$P as an ML problem. To cast a control problem into a learning problem, we identify the components of a control task as the components of learning, namely, $\mathscr{A}$, $\mathscr{T}$, $\mathscr{P}$ and $\mathscr{E}$, with the agents being L, C and M described in Fig. 10. For our A$\mathscr{Q}$P learning problem, $\mathscr{T}$ is the control task described in Section II.4.1. $\mathscr{A}$ comprises L and C, where L devises $\varrho$ for C such that $\mathscr{T}$ is achieved. M chooses $\mathscr{P}$ in congruence with Eq. 22, which is described based on r [1]. Each datum in $\mathscr{E}$ comprises $p(\varphi;\bm{\varpi})$, $N_{\text{max}}$ and $\mathbb{\Delta}^{\text{feas}}_{N_{\text{max}},\bm{\varpi}}$, for which we seek an efficient representation expressed in the following definition based on our notion of representation in Def. 1. ###### Definition 16. A representation $\tilde{f}_{d}$ efficiently represents a function $f$ if the amount of information (e.g., bits) increases no more than polylog$(\nicefrac{{1}}{{\epsilon}})$ for $\epsilon$, the distance between the function and the representation. ###### Remark 15. We represent $p(\varphi;\bm{\varpi})$ by a truncated cumulant expansion (1) $\bm{\varpi}_{\zeta}:=\left(\kappa_{1},\kappa_{2},\kappa_{3},\ldots,\kappa_{\zeta}\right)\in\mathds{R}^{\zeta},$ (29) which is convenient if the representation is efficient. Each value of $p(\varphi;\bm{\varpi}_{\zeta})$ is then computed on-the-fly as the value is required, turning a large space requirement into a slightly longer computation. Now we elaborate on how to construct $\mathscr{D}$ (3) for the A$\mathscr{Q}$P learning problem. $\mathscr{D}$ is constructed in the pre-processing step of the ML pipeline (2). In the pre-processing step, M first constructs a set of parameters $\bm{\alpha}_{\zeta}:=\left\\{\bm{\varpi}_{1,\zeta},\ldots,\bm{\varpi}_{\mathscr{N},\zeta}\right\\}$, with $\mathscr{N}$ explained in Eq. (4), and provides P with one element of $\bm{\alpha}_{\zeta}$. Then M collaborates with C to devise $\mathbb{\Delta}^{\text{feas}}_{N_{\text{max}},\bm{\varpi}_{i,\zeta}}$ by obtaining $\bm{\Delta}_{N,\bm{\varpi}_{i,\zeta}}$ for each $N$ through optimisation. After repeating the process of devising $\mathbb{\Delta}^{\text{feas}}_{N_{\text{max}},\bm{\varpi}_{i,\zeta}}$ for all elements of $\bm{\alpha}_{\zeta}$, $\mathscr{D}$ comprises the data $\left(\left(N_{\text{max}},\bm{\varpi}_{i,\zeta}\right),\mathbb{\Delta}^{\text{feas}}_{N_{\text{max}},\bm{\varpi}_{i,\zeta}}\right)\,\forall i\in[\mathscr{N}]$ (30) with $\left(N_{\text{max}},\bm{\varpi}_{i,\zeta}\right)$ described in Eq. (21). Finally, the pre-processing step returns $\mathscr{D}$. ### V.2 Formalizing the SL problem We now formally cast the A$\mathscr{Q}$P control problem as an SL problem. First, we describe how we classify the A$\mathscr{Q}$P learning problem as one of the SL, UL or RL paradigms of ML, depicted in Fig. 1, based on the nature of $\mathscr{E}$. Then we introduce the formal learning problem corresponding to the A$\mathscr{Q}$P control problem. Finally, we describe the ML workflow for A$\mathscr{Q}$P, which comprises the calibrating, training and testing steps. We now classify the A$\mathscr{Q}$P learning problem based on the nature of $\mathscr{E}$. 
For our A$\mathscr{Q}$P learning problem, $\mathscr{E}$ can be viewed as comprising the pair $\left(N_{\text{max}},\bm{\varpi}_{\zeta}\right)$ in Eq. (21) and the corresponding feasible choice $\mathbb{\Delta}^{\text{feas}}_{N_{\text{max}},\bm{\varpi}_{\zeta}}$ for the set $\mathbb{\Delta}_{N_{\text{max}},\bm{\varpi}_{\zeta}}$ in Eq. (20). Therefore, the data structure in $\mathscr{E}$ naturally fits the SL paradigm of ML, where $\mathscr{A}$ devises a labelling map (6) $f:\mathbb{Z}\times\mathds{R}^{\zeta}\to\mathrel{\scalebox{1.2}{{\hbox{\varprodb}}}}_{j=4}^{N_{\text{max}}}\mathbb{T}^{j}:\left(N_{\text{max}},\bm{\varpi}_{\zeta}\right)\mapsto\mathbb{\Delta}^{\text{feas}}_{N_{\text{max}},\bm{\varpi}_{\zeta}}.$ (31) The pair $\left(N_{\text{max}},\bm{\varpi}_{\zeta}\right)$ is typically known as a feature vector in SL, and each $\mathbb{\Delta}^{\text{feas}}_{N_{\text{max}},\bm{\varpi}_{\zeta}}$ corresponds to a label. We now formalise the A$\mathscr{Q}$P SL problem. For a SL problem, we are given $\mathscr{D}$ (5), where each datum is a tuple $(\bm{x}_{i},\bm{y}_{i})$, with $\bm{x}_{i}:=\left(N_{\text{max}},\bm{\varpi}_{i,\zeta}\right),\,\bm{y}_{i}:=\mathbb{\Delta}^{\text{feas}}_{N_{\text{max}},\bm{\varpi}_{i,\zeta}}.$ (32) The A$\mathscr{Q}$P SL problem then involves devising a labelling map $f$ (31) such that, for an unseen feature vector $\bm{x}_{i}$ (32), the estimated label is $\tilde{\bm{y}}_{i}=\tilde{\mathbb{\Delta}}_{N_{\text{max}},\bm{\varpi}_{i,\zeta}}\approx\mathbb{\Delta}^{\text{feas}}_{N_{\text{max}},\bm{\varpi}_{i,\zeta}}\in\mathrel{\scalebox{1.2}{{\hbox{\varprodb}}}}_{j=4}^{N_{\text{max}}}\mathbb{T}^{j}.$ (33) We can use a ‘distance’ $L_{N_{\text{max}},\bm{\varpi}_{\zeta}}:=\text{dist}(\tilde{\mathbb{\Delta}}_{N_{\text{max}},\bm{\varpi}_{\zeta}},\mathbb{\Delta}^{\text{feas}}_{N_{\text{max}},\bm{\varpi}_{\zeta}}),$ (34) as the loss function to be used during training. One example of the distance would be the square root of the sum of squares of the distances between the label coordinates on each hypertorus (17). We now describe the ML workflow for our A$\mathscr{Q}$P SL problem, which comprises the calibrating, training and testing steps. In the calibrating step, M and L collaborate to obtain a tuple of hyperparameters that correspond to an $f$ (31) which minimises $L_{N_{\text{max}},\bm{\varpi}_{\zeta}}$ (34) on a set of randomly-sampled $\mathscr{D}^{\prime}_{\text{test}}$ (28) according to Section III.4. In the training step, M provides L with $\mathscr{D}_{\text{model}}$ (26), along with the hyperparameters returned from the calibration step. M, then instructs L to obtain $f$ on this subset. Finally, in the testing step, M instructs L to provide C with $\varrho=f$ obtained from the training step. M proceeds with providing C with $\mathscr{D}_{\text{test}}$ (26), which is unseen in the calibrating and training steps, and instructs C to commence the control loop. L remains inactive for the remainder of the time. The control system executing $f$ either passes or fails at the test step; if the control system passes, it is then used in the real world, wherein M is replaced by U. ## VI Discussion In this section, we discuss our results. We begin by discussing our agnostic definitions of control and learning in classical and quantum settings and then interpret our LfC framework. Next, we analyse the knowledge graph, which we have constructed based on existing literature in the fields of control and learning in both classical and quantum domains. 
Of particular interest is the identification of gaps, which we found surprising as our approach identifies these gaps and makes it clear that they need further study. Finally, we explain the relevance of casting A$\mathscr{Q}$P as an SL task. We have carefully crafted definitions of control and learning that are agnostic in the sense that they hold regardless of whether the underlying physics is classical or quantum. One key challenge that arises in unifying classical and quantum definitions of control is that the usual notion of a switch in classical control theory no longer applies. We define a switch which is appropriate for an agnostic approach, Def. 1, in contrast to specifically classical or quantum switches. Our solution is to cast the classical switch as a logically reversible gate, thus naturally extending the switching operation to the quantum domain. Interestingly, by following a quantum-pseudocode convention for the quantum switch, we can easily “switch” between quantum and classical descriptions. On the other hand, our agnostic definition of learning builds on an established classical ML framework, thus avoiding the discrepancies in the existing definitions of QML. Our main result is our LfC framework which unifies learning and control in both classical and quantum domains. Our framework includes extending the classical control loop to the quantum case, dealing properly with the switches, controller, what channels are classical, and what are quantum, teacher and the user. Our framework differs from existing literature in two aspects. Firstly, in contrast to literature not including a ‘teacher’ and a ‘user’, our framework includes a classical teacher, which trains a learner (classical or quantum) for the control task, and a classical user who replaces the teacher when the training and validation are complete. Secondly, our work differs from Fu’s seminal work [16] in regards to the separation between controller and learner and the role of teacher. By treating the learner and controller separately, our framework becomes valid in the quantum domain as we can just make the controller quantum and keep the learner compatible with the classical setting of the real world. These new features make our framework self-sufficient to be applied to any classical or quantum, or hybrid system. We present the existing literature on control, learning and their connections, in both classical and quantum domains, in the form of a square graph. An intriguing fact about this knowledge graph is that it represents only a subset of existing literature in the relevant fields. This is because of our strict filtering criteria, which excludes misinterpreted works. For example, if a paper claims to use ML for a control task but actually uses only optimization under the hood, we exclude that paper. Thus, in addition to gaps, our knowledge graph also exposes limitations and misinterpretations in the existing literature. Our unique way of representing literature is particularly interesting because it helps us identify the knowledge gaps, which provides fodder for future researchers. It is quite surprising to observe that although classical learning is used for both classical and quantum control, the application of quantum learning to quantum control has yet to be fully explored. 
Another interesting observation is that classical and quantum learning has benefitted from developments in classical and quantum control, respectively, but the interrelations between classical control and quantum learning and quantum control and classical learning are not explored yet. This knowledge graph also conveys that the unification of classical and quantum control (and learning) is not thoroughly explored, which we have addressed in this paper. Guided by the knowledge of our LfC framework, we address the challenging task of casting A$\mathscr{Q}$P as an SL problem. The A$\mathscr{Q}$P control task is computationally expensive. Nevertheless, analysing this task in the light of our framework allows us to employ ML to solve it, i.e. to potentially achieve a scaling better than SQL for an unseen unknown-phase probability distribution. In particular, we first elevate the control in A$\mathscr{Q}$P (Task 1) to a generalised control task amenable to recasting as an SL problem. By bringing learning into the picture, we could potentially reduce the computational cost for calculating control policies that beat SQL. ## VII Conclusions The fields of quantum control and machine learning (ML) are rapidly progressing, albeit mostly independent of each other. Although classical control is a widely-popular and established field, quantum control is still in its developing phase. On the other hand, quantum ML, concerning both quantum- for-learning and learning-for-quantum ideas, is also a very popular research topic. Despite separate research in these fields, not much attention was paid to unifying the terminologies in quantum control (learning) and classical control (learning); this lack of research might hinder quantum control and learning from fully exploiting established techniques from their classical counterparts. In this paper, we present unified, i.e. agnostic of whether classical or quantum, definitions for control and learning and critically review existing literature in the light of our agnostic definitions. Moreover, we formulate a learning-for-control framework and explain how supervised learning (SL) can be used to estimate the unknown phase in a quantum-enhanced interferometric setup. We review the relevant literature on ML, control systems, unification of classical and quantum mechanics and adaptive quantum- enhanced interferometric-phase estimation (A$\mathscr{Q}$P). In particular, we explain key concepts by quoting from authoritative references and, in some cases, formalizing them with mathematical relations. Moreover, we indicate popular discrepancies in topics, including the definition of quantum ML, evolutionary methods for reinforcement learning (RL) and the relation between optimization and ML. We then discuss the equivalence between classical and quantum descriptions of a physical system based on operational mechanics and geometric correspondence. Lastly, we recap the quantum control task in A$\mathscr{Q}$P and the optimization techniques, which were sometimes misrepresented as ML techniques in literature, used to solve this control task. The main result of our work is a learning-for-control framework, which holds irrespective of whether learning and control are described by classical or quantum mechanics. To do this, we first unify classical and quantum control by quantizing the components of a typical closed-loop classical control system, namely the evaluator, controller, plant, switch and their communication channels. 
Using a similar approach, we then unify classical and quantum learning. Finally, we constructed our “agnostic” framework by elevating an established learning-for-control proposal to account for a classical/quantum description of all components, including the learner, but separating the teacher as a classical agent. Based on our agnostic learning-for-control framework, we have two more results. Firstly, we use our unified (quantum and classical) definitions of control and learning to present the existing literature in these fields as a knowledge graph. This square graph represents a subset of the existing literature, which has been carefully filtered according to our clarified definitions. Secondly, we cast the quantum control problem in A$\mathscr{Q}$P as an SL problem and explain the calibration, training and testing steps using our constructed framework. Although this control problem was framed as an RL problem in the earlier works from our research group, the direct policy search approach doesn’t comply with the conventional RL paradigm as put forward by Sutton. Nevertheless, our new way of casting the A$\mathscr{Q}$P problem as an SL problem has potential applications in enhancing quantum clocks [18] and interferometric position-shift measurements [19, 20, 21]. Our work leads to many interesting questions and research directions. Although the field of quantum ML is currently very popular, its applications to classical and quantum control have not been investigated properly. Another interesting yet unexplored topic is quantum control for classical and quantum learning. We are particularly excited to study how control strategies on current quantum hardwares can enhance the performance of quantum generative ML to the point of achieving quantum advantage on these noisy devices. Based on our proposal, one can implement an SL agent for solving the quantum control problem of A$\mathscr{Q}$P to potentially achieve a scaling better than the standard quantum limit. Beyond the above-mentioned practical applicability, our learning-for-control framework makes the reader ponder more philosophical questions about control, learning and their interconnection. Can M/U be quantum? How does the measurement paradox affect the control and learning feedback loops? Does control theory make sense for a quantum controller? Although we do not have the answers to such questions, one value of our work is that such previously unexplored questions arise very clearly from our framework. ## VIII Acknowledgements We acknowledge the traditional owners of the land on which this work was undertaken at the University of Calgary: the Treaty 7 First Nations (www.treaty7.org). SSV would like to thank MITACS. AD would like to thank MITACS and the Canadian Queen Elizabeth II Diamond Jubilee Scholarships program. BCS appreciates financial support from the Natural Sciences and Engineering Research Council of Canada. AD, EJP and BCS acknowledge the support of the Major Innovation Fund, Government of Alberta, Canada. We thank Carlo Maria Scandolo and Howard M. Wiseman for useful discussions on quantum feedback. SSV and AD contributed equally to this work. ## References * Dorf and Bishop [2008] R. Dorf and R. Bishop, _Modern Control Systems_ , 11th ed. (Pearson, Boston, 2008). * Rosolia _et al._ [2018] U. Rosolia, X. Zhang, and F. Borrelli, Data-driven predictive control for autonomous systems, Annu. Rev. Control Robot. Auton. Syst. 1, 259 (2018). * Mitchell [1997] T. M. Mitchell, _Machine Learning_ , 1st ed. 
(McGraw-Hill, New York, 1997). * Russell and Norvig [2020] S. Russell and P. Norvig, _Artificial Intelligence: A Modern Approach_ , 4th ed., Pearson Series in Artifical Intelligence (Pearson, Hoboken, 2020). * Duriez _et al._ [2017] T. Duriez, S. L. Brunton, and B. R. Noack, _Machine Learning Control-Taming Nonlinear Dynamics and Turbulence_, 1st ed., Fluid Mechanics and Its Applications (Springer, Cham, 2017). * E _et al._ [2018] W. E, J. Han, and Q. Li, A mean-field optimal control formulation of deep learning, Res. Math. Sci. 6, 10 (2018). * Mehta _et al._ [2019] P. Mehta, M. Bukov, C.-H. Wang, A. G. Day, C. Richardson, C. K. Fisher, and D. J. Schwab, A high-bias, low-variance introduction to machine learning for physicists, Phys. Rep. 810, 1 (2019). * Wiseman and Milburn [2009] H. M. Wiseman and G. J. Milburn, _Quantum Measurement and Control_ (Cambridge University Press, Cambridge, 2009). * Jacobs [2014] K. Jacobs, _Quantum Measurement Theory and its Applications_ (Cambridge University Press, Cambridge, 2014). * Zhang _et al._ [2017] J. Zhang, Y.-x. Liu, R.-B. Wu, K. Jacobs, and F. Nori, Quantum feedback: theory, experiments, and applications, Phys. Rep. 679, 1 (2017). * Biamonte _et al._ [2017] J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe, and S. Lloyd, Quantum machine learning, Nature 549, 195 (2017). * Hentschel and Sanders [2011a] A. Hentschel and B. C. Sanders, Efficient algorithm for optimizing adaptive quantum metrology processes, Phys. Rev. Lett. 107, 233601 (2011a). * Lovett _et al._ [2013] N. B. Lovett, C. Crosnier, M. Perarnau-Llobet, and B. C. Sanders, Differential evolution for many-particle adaptive quantum metrology, Phys. Rev. Lett. 110, 220501 (2013). * Hogan _et al._ [2021] A. Hogan, E. Blomqvist, M. Cochez, C. D’amato, G. D. Melo, C. Gutierrez, S. Kirrane, J. E. L. Gayo, R. Navigli, S. Neumaier, A.-C. N. Ngomo, A. Polleres, S. M. Rashid, A. Rula, L. Schmelzeisen, J. Sequeda, S. Staab, and A. Zimmermann, Knowledge graphs, ACM Comput. Surv. 54, 1 (2021). * Vedaie _et al._ [2018] S. S. Vedaie, P. Palittapongarnpim, and B. C. Sanders, Reinforcement learning for quantum metrology via quantum control, in _2018 IEEE Photonics Society Summer Topical Meeting Series (SUM)_ (IEEE, 2018) pp. 163–164. * Fu [1970] K.-S. Fu, Learning control systems–review and outlook, IEEE Trans. Automat. Contr. 15, 210 (1970). * Wilde [2017] M. M. Wilde, _Quantum Information Theory_, 2nd ed. (Cambridge University Press, Cambridge, 2017). * Borregaard and Sørensen [2013] J. Borregaard and A. S. Sørensen, Near-Heisenberg-limited atomic clocks in the presence of decoherence, Phys. Rev. Lett. 111, 090801 (2013). * Hollenhorst [1979] J. N. Hollenhorst, Quantum limits on resonant-mass gravitational-radiation detectors, Phys. Rev. D 19, 1669 (1979). * Caves _et al._ [1980] C. M. Caves, K. S. Thorne, R. W. P. Drever, V. D. Sandberg, and M. Zimmermann, On the measurement of a weak classical force coupled to a quantum-mechanical oscillator. I. Issues of principle, Rev. Mod. Phys. 52, 341 (1980). * Caves [1981] C. M. Caves, Quantum-mechanical noise in an interferometer, Phys. Rev. D 23, 1693 (1981). * Mohri _et al._ [2018] M. Mohri, A. Rostamizadeh, and A. Talwalkar, _Foundations of Machine Learning_ , 2nd ed., Adaptive Computation And Machine Learning series (MIT press, Cambridge, 2018). * Rudin [1976] W. Rudin, _Principles of Mathematical Analysis_ , 3rd ed., International Series in Pure and Applied Mathematics (McGraw-Hill, New York, 1976). * van Kampen [2007] N. 
van Kampen, _Stochastic Processes in Physics and Chemistry_, 3rd ed., North-Holland Personal Library (Elsevier, Amsterdam, 2007). * Sutton and Barto [2018] R. S. Sutton and A. G. Barto, _Reinforcement Learning: An Introduction_ , 2nd ed., Adaptive Computation and Machine Learning series (MIT Press, Cambridge, 2018). * Domingos [2012] P. Domingos, A few useful things to know about machine learning, Commun. ACM 55, 78–87 (2012). * Al-jabery _et al._ [2020] K. K. Al-jabery, T. Obafemi-Ajayi, G. R. Olbricht, and D. C. Wunsch II, 2 - Data Preprocessing, in _Computational Learning Approaches to Data Analytics in Biomedical Applications_, edited by K. K. Al-jabery, T. Obafemi-Ajayi, G. R. Olbricht, and D. C. Wunsch II (Academic Press, London, 2020) 1st ed., pp. 7–27. * Burman [1989] P. Burman, A comparative study of ordinary cross-validation, v-fold cross-validation and the repeated learning-testing methods, Biometrika 76, 503 (1989). * Dalal _et al._ [2021] A. Dalal, M. Bagherimehrab, and B. C. Sanders, Quantum-assisted support vector regression for detecting facial landmarks (2021), arXiv:2111.09304 . * Kearns and Vazirani [1994] M. J. Kearns and U. Vazirani, _An Introduction to Computational Learning Theory_ (MIT Press, Cambridge, 1994). * Valiant [1984] L. G. Valiant, A theory of the learnable, Commun. ACM 27, 1134–1142 (1984). * Klebanov _et al._ [2009] L. B. Klebanov, S. T. Rachev, and F. J. Fabozzi, _Robust and Non-Robust Models in Statistics_ (Nova Science Publishers, Hauppauge, 2009). * Ben-David _et al._ [1997] S. Ben-David, E. Kushilevitz, and Y. Mansour, Online learning versus offline learning, Mach. Learn. 29, 45 (1997). * Levine _et al._ [2020] S. Levine, A. Kumar, G. Tucker, and J. Fu, Offline reinforcement learning: tutorial, review, and perspectives on open problems (2020), arXiv:2005.01643 . * Prudencio _et al._ [2023] R. F. Prudencio, M. R. O. A. Maximo, and E. L. Colombini, A survey on offline reinforcement learning: taxonomy, review, and open problems, IEEE Trans. Neural Netw. Learn. Syst. , 1 (2023). * Dunjko and Briegel [2018] V. Dunjko and H. J. Briegel, Machine learning & artificial intelligence in the quantum domain: a review of recent progress, Rep. Prog. Phys. 81, 074001 (2018). * Ciliberto _et al._ [2018] C. Ciliberto, M. Herbster, A. D. Ialongo, M. Pontil, A. Rocchetto, S. Severini, and L. Wossnig, Quantum machine learning: a classical perspective, Proc. R. Soc. A: Math. Phys. Eng. Sci. 474, 20170551 (2018). * Schuld and Petruccione [2021] M. Schuld and F. Petruccione, _Machine Learning with Quantum Computers_, 2nd ed., Quantum Science and Technology (Springer, Cham, 2021). * Wittek [2014] P. Wittek, _Quantum Machine Learning: What Quantum Computing Means to Data Mining_ (Academic Press, New York, 2014). * Preskill [2012] J. Preskill, Quantum computing and the entanglement frontier (2012), arXiv:1203.5813 . * Preskill [2018] J. Preskill, Quantum computing in the NISQ era and beyond, Quantum 2, 79 (2018). * Harrow and Montanaro [2017] A. W. Harrow and A. Montanaro, Quantum computational supremacy, Nature 549, 203 (2017). * Schuld _et al._ [2015] M. Schuld, I. Sinayskiy, and F. Petruccione, An introduction to quantum machine learning, Contemp. Phys. 56, 172 (2015). * Wiebe [2020] N. Wiebe, Key questions for the quantum machine learner to ask themselves, New J. Phys. 22, 091001 (2020). * Rebentrost _et al._ [2014] P. Rebentrost, M. Mohseni, and S. Lloyd, Quantum support vector machine for big data classification, Phys. Rev. Lett. 113, 130503 (2014). 
* Noori _et al._ [2020] M. Noori, S. S. Vedaie, I. Singh, D. Crawford, J. S. Oberoi, B. C. Sanders, and E. Zahedinejad, Analog-quantum feature mapping for machine-learning applications, Phys. Rev. Appl. 14, 034034 (2020). * Paparo _et al._ [2014] G. D. Paparo, V. Dunjko, A. Makmal, M. A. Martin-Delgado, and H. J. Briegel, Quantum speedup for active learning agents, Phys. Rev. X 4, 031002 (2014). * Saggio _et al._ [2021] V. Saggio, B. E. Asenbeck, A. Hamann, T. Strömberg, P. Schiansky, V. Dunjko, N. Friis, N. C. Harris, M. Hochberg, D. Englund, S. Wölk, H. J. Briegel, and P. Walther, Experimental quantum speed-up in reinforcement learning agents, Nature 591, 229 (2021). * Zurek [1981] W. H. Zurek, Pointer basis of quantum apparatus: Into what mixture does the wave packet collapse?, Phys. Rev. D 24, 1516 (1981). * Hou and Wang [2013] Z.-S. Hou and Z. Wang, From model-based control to data-driven control: Survey, classification and perspective, Inf. Sci. 235, 3 (2013). * Walmsley and Rabitz [2003] I. Walmsley and H. Rabitz, Quantum physics under control, Phys. Today 56, 43 (2003). * Lloyd [2000] S. Lloyd, Coherent quantum feedback, Phys. Rev. A 62, 022108 (2000). * Borzì _et al._ [2017] A. Borzì, G. Ciaramella, and M. Sprengel, Chapter 2: Quantum Mechanics and the Schrödinger Equation, in _Formulation and Numerical Solution of Quantum Control Problems_, Computational Science & Engineering (Society for Industrial and Applied Mathematics, Philadelphia, 2017) pp. 7–71. * Brif _et al._ [2010] C. Brif, R. Chakrabarti, and H. Rabitz, Control of quantum phenomena: past, present and future, New J. Phys. 12, 075008 (2010). * Khaneja _et al._ [2005] N. Khaneja, T. Reiss, C. Kehlet, T. Schulte-Herbrüggen, and S. J. Glaser, Optimal control of coupled spin dynamics: design of NMR pulse sequences by gradient ascent algorithms, J. Magn. Reson. 172, 296 (2005). * Zahedinejad _et al._ [2014] E. Zahedinejad, S. Schirmer, and B. C. Sanders, Evolutionary algorithms for hard quantum control, Phys. Rev. A 90, 032310 (2014).
# Keep It Unbiased: A Comparison Between Estimation of Distribution Algorithms and Deep Learning for Human Interaction-Free Side-Channel Analysis

Unai Rioja^1,2, Lejla Batina^1, Igor Armendariz^2, Jose Luis Flores^2

1 Digital Security Group, Radboud University, Nijmegen, The Netherlands (email: <EMAIL_ADDRESS>)
2 Ikerlan Technological Research Centre, Arrasate-Mondragón, Gipuzkoa, Spain (email: <EMAIL_ADDRESS>)

###### Abstract Evaluating side-channel analysis (SCA) security is a complex process, involving the application of several techniques whose success depends on human engineering. Therefore, it is crucial to avoid a false sense of confidence provided by non-optimal (failing) attacks. Different alternatives have emerged lately trying to mitigate this human dependency, among which deep learning (DL) attacks are the most studied today. DL promises to simplify the procedure by, e.g., removing the need for point-of-interest selection and by bypassing noise and desynchronization, among other shortcuts. However, including DL in the equation comes at a price, since working with neural networks is not straightforward in this context. Recently, an alternative has appeared with the potential to mitigate this dependence without adding extra complexity: Estimation of Distribution Algorithm-based SCA. In this paper, we compare these two relevant methods, supporting our findings by experiments on various datasets.

###### Keywords: Hardware security · Side-channel analysis · Machine learning · Estimation of distribution algorithms · Artificial intelligence · Evaluation

## 1 Introduction

These days we are surrounded by IoT devices that handle sensitive information, not only in industrial applications but also in our daily lives. This requires designers and product developers to use cryptography to protect embedded devices, but cryptography is only one of the components ensuring the security of systems. As Kocher et al. demonstrated in 1999 [21], the security of a device depends not only on the mathematical characteristics of its cryptographic operations but also on its physical implementation. In other words, the physical nature of these devices can be exploited in order to break their security in several ways. Whereas some approaches are passive and rely on simply observing certain physical properties to retrieve information (Side-channel analysis, SCA), other procedures try to stress the system to alter its natural behaviour (Fault Injection, FI). Physical attacks are truly powerful, as they can bypass hardware and software security measures that the manufacturer has included in the design. The problem is that the inclusion and validation of countermeasures against this kind of attack is not simple, especially in the SCA case. Current certification schemes require attacking the device under test (DUT) with a battery of known SCA attacks to prove its security [2]. Unfortunately, this approach is prohibitive in terms of time and resources. The ever-growing number of attack techniques, which in turn involve knowledge of very diverse subjects (e.g., signal processing, electronic design, cryptography, statistics, machine learning, etc.), makes it difficult to master and correctly apply all of them. Moreover, the outcome of such attacks depends to a large extent on the experience of the person performing them. Therefore, a process so dependent on human interaction can lead to a biased result if the tests are not properly executed.
One of the most prominent techniques in SCA research today is Deep Learning (DL)-based SCA [20, 26, 7, 4, 38, 48], part of the so-called profiling attacks (the strongest SCA technique nowadays). This method aims to overcome some drawbacks of classical SCA, such as the need for pre-processing or point of interest (POI) selection, promising to relax the required evaluator interaction. Conversely, working with DL is complex, especially considering the large number of possible architectures and hyper-parameters to adjust. Besides, although some attempts have been made [20, 52, 48], the SCA community has not yet agreed on how models should be constructed and evaluated. In addition, there is no generalized solution: when attacking new datasets, devices or cryptographic implementations, it is sometimes necessary to completely readjust the neural network. Consequently, while some of the more common difficulties are alleviated, new DL-related complications emerge. Recently, an option that promises to lessen this human dependency without increasing the overall complexity has emerged: Estimation of Distribution Algorithm-based Profiling Attacks (EDA-based PAs) [40, 39]. This concept comprises applying randomized optimization heuristics to the POI selection issue, allowing an automated boosting of the whole attack (POI selection, leakage profiling and key recovery). This method can produce state-of-the-art results straightforwardly, even in noisy environments [39]. Although EDA-based PAs provide a simple alternative to DL, to the best of our knowledge, both approaches have never been directly compared. Therefore, in this paper, we conduct a comparison between the two methods. Thus, the contributions of this work are the following:

* A comparative analysis of both approaches has been carried out, in terms of complexity and performance, to highlight the strengths and weaknesses of each method.
* Our findings are based on repeatable experimental evidence. Some experiments have been carried out especially for this paper, while for others we have relied on related works to provide a comprehensive and impartial comparison.
* We also study the performance of several SCA-oriented DL architectures in unfavourable conditions (no mask leakage during the targeted window) to determine whether the same conclusions as in [39] using EDAs hold.

The paper is organized as follows: In Sect. 2 we summarize relevant background on profiling attacks. We describe the related work that is closest to ours in Sect. 3. We present the comparison of the two techniques (EDA- and DL-based) in Sect. 4. In Sect. 5 we discuss the comparison and elaborate on our findings. Sect. 6 concludes the paper.

## 2 Background on profiling attacks

Profiling Attacks (PAs) have become the prevailing type of SCA attack in recent years [10, 4, 54, 52, 39]. The original idea comprises generating a model of the power consumption of a device, to be deployed for the secret key recovery. These attacks involve two phases (see Fig. 1): a phase in which the model is generated using a typically large number of traces (profiling phase), and a phase in which this model is compared with the actual power consumption of the device to recover sensitive information (attack phase).
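As an illustration of this two-phase structure, the following sketch (not taken from any of the cited works) builds a simple per-value Gaussian model in the profiling phase — anticipating the template attacks of Sect. 2.1 — accumulates log-likelihood scores over the attack traces for every key hypothesis, and reports the guessing entropy. The trace shapes, the `intermediate` target function and the use of full covariance matrices are all assumptions made for the example.

```python
import numpy as np

def profile(traces, values, n_values=256):
    """Profiling phase: one (mean, covariance) pair per intermediate value.
    Assumes every value occurs several times in the profiling set."""
    templates = []
    for v in range(n_values):
        sel = traces[values == v]
        templates.append((sel.mean(axis=0), np.cov(sel, rowvar=False)))
    return templates

def attack(traces, plaintexts, templates, intermediate, n_keys=256):
    """Attack phase: accumulate Gaussian log-likelihoods per key hypothesis
    and return the key guessing vector (most likely key first)."""
    scores = np.zeros(n_keys)
    for k in range(n_keys):
        for t, p in zip(traces, plaintexts):
            m, C = templates[intermediate(p, k)]
            d = t - m
            scores[k] += -0.5 * (d @ np.linalg.solve(C, d)
                                 + np.linalg.slogdet(C)[1])
    return np.argsort(scores)[::-1]

def guessing_entropy(ranks_of_correct_key):
    """Average rank of the correct key over repeated attacks (lower is better)."""
    return float(np.mean(ranks_of_correct_key))
```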
Whereas the first publications employed Gaussian classification (Template Attacks, TAs) [8], other approaches use linear regression (Stochastic Models approach [42]) or Machine Learning (ML) techniques such as Support Vector Machines (SVM) [18, 22, 16], Random Forests (RF) [23] or the more recently introduced Deep Learning (DL) [26, 7, 36]. In this paper we focus on TAs and DL as they are the most widely used options in practice [54, 24].

Figure 1: Scheme of a generic Profiling Attack

### 2.1 Template Attacks

Template attacks were the first type of profiling attack introduced; they involve building a multivariate model of the probability distribution of the leakage. The practice is to use an extensive set of power traces taken from the DUT while it is manipulating some intermediate value $v=f(p,k)$. As long as $v$ is related to a known variable (usually plaintext $p$ or ciphertext $c$) and the secret key $k$, guessing $v$ allows the attacker to disclose the secret key. First, in the profiling phase, the attacker employs a set of $n_{p}$ profiling traces ($\mathbf{T}_{p}$) to build a Gaussian multivariate model for each possible $v$, creating the so-called templates (pairs of mean vectors and covariance matrices $(\mathbf{m},\mathbf{C})$). Note that, as each $v$ depends on $p$ and $k$, each key hypothesis corresponds to a template. Finally, a discriminant score $D\left(k\mid\mathbf{t}_{i}\right)$ is computed for each key hypothesis $k_{j}$, and all key hypotheses are ranked in decreasing order of probability, creating a key guessing vector. In SCA, it is common to work with a metric called Guessing Entropy ($ge$) [45], which is the average rank of the correct candidate $k^{*}$ over several attacks.

In practice, TAs pose several limitations, such as computational complexity and the need for dimensionality reduction [10]. The latter is often addressed by choosing just a few time samples of the power traces (POI selection [37]), or by employing a more complex technique like Principal Component Analysis (PCA) [1, 44] or Fisher’s Linear Discriminant Analysis (LDA) [19, 12]. Note that, with EDA-based PAs, the POI selection is performed automatically by the algorithm [40].

### 2.2 DL-based SCA

In recent times, multiple papers related to DL-based SCA have been published. The approach is the same as for template attacks, but the model building and classification are performed using neural networks. Most of these works rely on two typical deep learning architectures: Multilayer Perceptrons (MLP) and Convolutional Neural Networks (CNN). The first architecture used in DL-SCA, owing to its simplicity, was the MLP. The earliest proposal applied regression to characterize leakage [51], but the approach rapidly evolved to using MLPs as classifiers of the intermediate values of the traces [26, 29, 30]. After this, CNNs also began to be used because their spatial-invariance property provides robustness against data distortions like environmental noise, desynchronization and countermeasures [30, 26, 7, 15, 4, 36]. Other studies have examined the performance of various PAs [54, 36, 4, 52]. As stated above, one of the major drawbacks of classical SCA is the need for pre-processing and POI selection, as this part is strongly dependent on human engineering. DL-SCA claims to overcome these difficulties, since the features are selected automatically from the traces by the neural network.
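To make the template-attack procedure of Sect. 2.1 concrete, the following is a minimal sketch in Python/NumPy. The function and variable names are hypothetical, `SBOX` is only a stand-in for the real 256-entry AES S-box table, and a small ridge term is added so the covariance matrices stay invertible; it illustrates the idea rather than the exact implementations used in the cited works.

```python
import numpy as np

SBOX = np.arange(256, dtype=np.uint8)   # placeholder for the real AES S-box lookup table

def build_templates(traces, plaintexts, key_byte, pois):
    """Profiling phase: one (mean, inverse covariance, log|C|) template per intermediate value v."""
    v = SBOX[plaintexts ^ key_byte]      # v = Sbox(p XOR k); the key is known on the profiling device
    reduced = traces[:, pois]            # keep only the selected points of interest
    templates = {}
    for val in np.unique(v):
        sel = reduced[v == val]
        if len(sel) < 2:
            continue                     # too few traces to estimate a covariance matrix
        C = np.cov(sel, rowvar=False) + 1e-9 * np.eye(len(pois))   # small ridge for invertibility
        templates[int(val)] = (sel.mean(axis=0), np.linalg.inv(C), np.linalg.slogdet(C)[1])
    return templates

def rank_of_correct_key(templates, traces, plaintexts, true_key, pois):
    """Attack phase: accumulate Gaussian log-likelihoods per key hypothesis and rank them."""
    reduced = traces[:, pois]
    scores = np.zeros(256)
    for k in range(256):
        v = SBOX[plaintexts ^ k]
        for t, val in zip(reduced, v):
            if int(val) not in templates:
                continue
            m, C_inv, logdet = templates[int(val)]
            d = t - m
            scores[k] += -0.5 * (d @ C_inv @ d + logdet)   # log-likelihood up to a constant
    guessing_vector = np.argsort(scores)[::-1]             # most likely key hypothesis first
    return int(np.where(guessing_vector == true_key)[0][0])  # rank 0 means the key is recovered
```

Repeating `rank_of_correct_key` over several independent attacks and averaging the returned ranks yields the guessing entropy $ge$ defined above.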
In any case, note that although the pre-processing and POI-selection burden may be mitigated, working with neural networks is a complex process that still requires human interaction: the architecture must be selected, tuned and trained correctly for the attack to work.

### 2.3 Masking protected implementations

SCA countermeasures try to obfuscate the dependency between the power consumption of the DUT and the intermediate values of the implemented cryptographic algorithm. One of the most popular countermeasures is masking [27]. Masking (also known as secret sharing) consists of concealing each intermediate value $v$ with a random value $m$ (the mask), which is different for each execution and unknown to the attacker, such that $v_{m}=v*m$. If correctly implemented, this ensures pairwise independence between masked values, unmasked values and the masks. Consequently, a classical (first-order) Differential Power Analysis (DPA [21]) attack would fail.

Although theoretically sound, an incorrect masking implementation can be fatal for a device’s security. Close manipulation of the shares can provoke unintended interactions between values in the microcontroller, principally caused by transitional effects [3] and glitches [9, 28]. These phenomena can halve the security order, making first-order masking (with two shares, i.e., the mask and the masked intermediate value) vulnerable even to first-order attacks [3]. This is important because first-order masking is the most widely used scheme in practice, given the complexity of higher-order masking [43].

## 3 Related Work

### 3.1 EDA-based SCA

The use of Estimation of Distribution Algorithms (EDAs) in the SCA context was introduced in [40] as a way to automate POI selection and perform template attacks on embedded devices. EDAs are a class of population-based optimization heuristics that explore the solution space by building explicit probabilistic models of promising candidates. The approach is to search the space formed by all possible POI selections for the best one. Instead of exhaustively enumerating all combinations, the method performs a guided search driven by a quality measure combined with EDAs. In a nutshell, an initial population of $R$ individuals (POI selection candidates) is first generated. This population can be generated at random or according to some criterion (e.g., assigning a higher probability to samples that show a stronger correlation with the processed intermediate values [40]). After this, $R$ attacks are performed with the $R$ candidates and the candidates are rated according to their performance. Then, the best $N$ candidates are chosen ($N<R$) and the probability distribution $p(\mathbf{x})$ of potential candidates is estimated from them. The process is repeated until a stop condition is reached (see [40] for more details; a minimal sketch of this loop is given below). Although EDA-based attacks present several potential advantages over DL, the authors of [40, 39] contrasted them only with classical (manual) POI selection techniques and never compared them directly with DL-based attacks. Thus, in this paper, we compare both techniques on various datasets, in terms of performance and complexity.

### 3.2 Profiling Attacks on Masking

Although PAs are a derivative of DPA attacks, many papers claim that (first-order) DL-based SCA attacks can deal with masking countermeasures [14, 26, 4, 47]. The claim is that, in these attacks, the network is trained without mask information (the traces are labelled with the unmasked intermediate value $v$).
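To make the EDA search loop of Sect. 3.1 concrete, the following is a minimal, hypothetical sketch of a UMDA-style (univariate marginal) EDA for POI selection. The helper `evaluate_selection` is a placeholder for "run a profiling attack with these POIs and return a quality score" (e.g., based on guessing entropy) and is not defined here; the parameter values are arbitrary.

```python
import numpy as np

def umda_poi_search(n_samples, evaluate_selection, pop_size=50, n_best=15,
                    n_iter=30, init_prob=None, seed=0):
    """Search for a good POI selection (a boolean mask over the time samples)."""
    rng = np.random.default_rng(seed)
    # Per-sample probability of being selected as a POI; can be seeded with,
    # e.g., the correlation of each sample against the intermediate values.
    p = np.full(n_samples, 0.5) if init_prob is None else np.asarray(init_prob, float)
    best, best_score = None, -np.inf
    for _ in range(n_iter):
        # 1. Sample R candidate POI selections from the current distribution.
        population = rng.random((pop_size, n_samples)) < p
        # 2. Perform one attack per candidate and rate its performance.
        scores = np.array([evaluate_selection(sel) for sel in population])
        # 3. Keep the N best candidates (N < R) ...
        elite = population[np.argsort(scores)[::-1][:n_best]]
        # 4. ... and re-estimate the univariate marginal distribution from them.
        p = elite.mean(axis=0).clip(0.01, 0.99)   # clip to keep some exploration
        if scores.max() > best_score:
            best_score = float(scores.max())
            best = population[int(scores.argmax())].copy()
    return best, best_score
```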
Even though they are trained without mask information, deep neural networks can implement highly complex functions and might therefore be able to recover the correct key, consequently bypassing masking [35]. This is remarkable since, with classical TAs, when the mask value is unknown to the attacker during the profiling step, the leakages associated with a key follow a multimodal distribution. This leads to assumption errors when the adversary applies Gaussian template attacks, as confirmed in [24]. For this reason, some previous works combine TAs with second-order techniques, or use template-based DPA attacks with extra computations that take the masks into account, in order to succeed [32]. When the attacker has such strong capabilities (i.e., knows the key and the masks during profiling), it is considered a worst-case scenario [2, 6, 5].

Nonetheless, as mentioned above, if masking is not properly implemented, the implementation can be vulnerable to first-order attacks: it is important to ensure that the two parts of the same secret (mask and masked intermediate value) are not handled too closely [43]. For instance, the authors of [54] claim that when mask leakage is included in the observation time window, (first-order) TAs can exploit the dependence between the mask and the masked-variable leakage. Many other related works perform TAs on masked implementations without mask information [20, 4, 40, 39]. This allows for a more “realistic” (real-world) attack.

To the best of our knowledge, most of the results obtained with DL perform the attack in a weak setting (mask leakage in the attacked window and/or unintended interactions). This is partly because most of the results on DL-based SCA are based on the ASCAD [4] dataset, in which mask leakage exists in the targeted window, indicating possible unintended interactions. Thus, it is not clear that state-of-the-art CNNs have any advantage over TAs in these conditions, since both can circumvent masking. It is also unclear whether the attack works because of the presence of mask leakage or because of unintended interactions.

In any case, the authors of [39] published a dataset containing traces from masked implementations with and without mask leakage, but they only use EDA-based attacks. In this paper, we perform DL-based attacks on that dataset, trying to determine whether DL-based attacks can bypass masking under both conditions or whether, on the contrary, current CNNs have no advantage in this scenario (note that a second-order attack, combining the leakage of two bytes of the key at a time to remove the mask, is feasible in both situations [39]).

## 4 Experimental Comparison

In this section, we compare the performance of EDA-based and DL-based attacks on two open datasets: ASCAD [4] and AES_RA [39].

### 4.1 ASCAD Random Keys

ASCAD [4] was the first open database for DL-SCA and has become a standard for experimentation with DL in SCA, with many papers using it appearing every year [52, 48, 39, 38, 31, 50, 17, 53]. The DUT in this dataset is an 8-bit AVR microcontroller (ATmega8515), and the dataset includes EM emanation traces of the device running a masked AES-128 implementation [11, 27]. The dataset is divided into two parts, fixed key and variable key. Although many works employ the fixed-key version because it poses an easier problem, for this comparison we focus on the variable-key subset because it is a more challenging and realistic experiment.
Each trace contains a window of 1400 relevant samples, corresponding to the third byte of the first-round masked S-box operation. As the sensitive intermediate value, it is common to use the S-box output: $Y^{(i)}(k^{*})=Sbox[P_{3}^{(i)}\oplus k^{*}]$. For a deeper explanation of the ASCAD dataset, we refer to [4].

For this dataset, published results exist using both DL [34, 49, 38] and EDAs [39]. We therefore rely on these works without performing additional experiments, which allows for an objective and unbiased comparison. Table 1 summarizes the best published attacks against ASCAD, in terms of trainable parameters and $ge$. As usual in the field, for $ge$ we report $\bar{Q}_{t_{ge}}$, the average number of traces needed to obtain a $ge$ of 0.

| ASCAD Random | [34] Best CNN | [49] Best CNN | [38] Best CNN | [38] Best CNN (RS) | [39] Best EDA |
|---|---|---|---|---|---|
| Trainable Param. | N/A | 2 076 744 | 70 492 | 3 298 | 1 400 |
| $\mathbf{\bar{Q}_{t_{ge}}}$ | 105 | 1 568 | 490 | 1 018 | 150 |

Table 1: Top results with PAs on ASCAD random

Overall, we can see how, despite being easier to execute, EDA-based PAs provide better results (smaller $\bar{Q}_{t_{ge}}$) than most CNNs on ASCAD (random key), making them a more practical and simpler option for evaluators. In terms of trainable parameters, the EDA-based TA is also less demanding (one per time sample). Note that, apart from the number of trainable parameters, with DL there is a huge number of possible architectures and hence hyper-parameters to select and adjust. Conversely, the EDA-based approach has far fewer parameters to tune (number of iterations, population size and evaluation function). Finally, the complexity of the “training” itself is again lighter in an EDA-based attack: the time complexity of the EDA (UMDA) is $O(n)$ [40], whereas the time complexity of a backpropagation algorithm is much larger ($O(n^{5})$ according to [13]).

### 4.2 AES_RA

AES_RA was introduced in [39] and contains traces from two distinct embedded systems that use microcontrollers from the same family (the Piñata training board from Riscure [41] and the STM32F411E-DISCO development board [46]). With each device, the authors acquired traces from three AES implementations: an unprotected software implementation and two masking schemes, resulting in six different setups. The dataset is thus divided into two parts: power consumption traces from the Piñata board and capacitor EM traces from the STM32F411E-Discovery board. Whereas the power traces correspond to clean (noise-free) measurements, the EM traces represent a more challenging problem (due to noise). Regarding the implementations, the authors show how masked scheme 1 (MS1 from now on) is completely vulnerable to (first-order) PAs, as the mask leaks in the same time window as the masked intermediate value (S-box output). Conversely, masked scheme 2 (MS2) contains only intermediate-value leakage in the targeted time window, and hence this implementation is robust against (first-order) PAs. For more details, we refer to the original paper [39]. In this work, we perform DL-based experiments on this dataset, confirming that the same conclusions as in [39] with EDAs are obtained for the different masking schemes and configurations.

#### 4.2.1 Experimental configuration

We have performed several DL-based attacks on each protected implementation using some (pre-defined) CNN architectures (a generic sketch of this type of model is shown below). These architectures were not created for this specific dataset, and hence good outcomes are not guaranteed.
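For readers less familiar with these models, the block below sketches the general shape of a 1-D CNN classifier as used in DL-SCA. It is purely illustrative: it is not CNN1–CNN7 nor any architecture from the cited papers, and the layer sizes and hyper-parameters are arbitrary.

```python
import tensorflow as tf

def build_generic_sca_cnn(trace_len, n_classes=256):
    """A small, generic 1-D CNN mapping a power/EM trace to intermediate-value probabilities."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(trace_len, 1)),
        tf.keras.layers.Conv1D(8, 11, activation="relu", padding="same"),
        tf.keras.layers.AveragePooling1D(2),
        tf.keras.layers.Conv1D(16, 11, activation="relu", padding="same"),
        tf.keras.layers.AveragePooling1D(2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Profiling phase: traces are labelled with the (unmasked) intermediate value,
# e.g. the S-box output, exactly as in the template-attack case.
# model = build_generic_sca_cnn(trace_len=1400)
# model.fit(profiling_traces[..., None], sbox_labels, epochs=50, batch_size=128)
```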
Nevertheless, all of these architectures were designed for ASCAD, which represents a problem analogous to MS1 (mask leakage in the targeted window). Thus, some CNNs should succeed, especially considering that MS1 is particularly weak on Piñata (the secret key can be recovered with about 5 traces using TAs). This also gives us an indication of how difficult it is to attack a similar dataset with a pre-defined architecture. Regarding trainable parameters, we have selected both complex and simple CNNs in order to have a wide range of results (see Table 2). More precisely, we have used the architecture for ASCAD random key suggested in the original paper [4], four CNNs from [38], the architecture from [52] for ASCAD fixed key, and an improved version introduced in [33]:

* • CNN1: From the original paper [4].

* • CNN2: From [52], for ASCAD Fixed.

* • CNN3: From [33], for ASCAD Fixed.

* • CNN4: From [38], ASCAD Random (HW)

* • CNN5: From [38], ASCAD Random RS (HW)

* • CNN6: From [38], ASCAD Fixed (HW)

* • CNN7: From [38], ASCAD Fixed RS (HW)

For all of them, we have used the hyper-parameters reported in the original articles, except for the number of epochs. We have repeated the experiments with 50 and 75 epochs to ensure that poor results are not caused by underfitting or overfitting.

#### 4.2.2 Experimental results

Figures 2, 3, 4 and 5 show the averaged $ge$ of the correct key byte for all CNNs and setups (for 50 and 75 epochs, respectively), together with the results using EDA-based PAs. This averaged $ge$ represents the average of 10 attacks using the same model, cross-validated to avoid bias. Table 2 shows the number of trainable parameters of each architecture and the $\bar{Q}_{t_{ge}}$ of the successful attacks. Note that, although MS1 is especially weak on Piñata, some neural networks could not disclose the key.

| AES_RA | CNN1 | CNN2 | CNN3 | CNN4 | CNN5 | CNN6 | CNN7 | EDA |
|---|---|---|---|---|---|---|---|---|
| Trainable Param. | 70 846 848 | 32 960 | 33 070 | 78 172 | 3 418 | 3 471 | 1 394 | 1 500 |
| $\mathbf{\bar{Q}_{t_{ge}}}$ (Piñata-MS1) | 27 | 8 | 6 | 113 | 34 | - | - | 5 |
| $\mathbf{\bar{Q}_{t_{ge}}}$ (STM32-MS1) | - | 1 867 | 1 222 | - | - | - | - | 850 |

Table 2: Top results with PAs on AES_RA

In addition, the results with DL-based SCA are in general more variable. To show this, Figures 2, 3, 4 and 5 also include box plots. These plots represent the variation of the final $ge$ values of each of the 10 attacks (for each model). In them, the dispersion and the median of the final values can be easily identified, and outliers can be clearly distinguished. This helps us determine how precise each model is.

Figure 2: Experimental results (Averaged $ge$ and Box Plot of final $ge$ values) on AES_RA - Piñata [50 Epochs]

Figure 3: Experimental results (Averaged $ge$ and Box Plot of final $ge$ values) on AES_RA - STM32F4 [50 Epochs]

Figure 4: Experimental results (Averaged $ge$ and Box Plot of final $ge$ values) on AES_RA - Piñata [75 Epochs]

Figure 5: Experimental results (Averaged $ge$ and Box Plot of final $ge$ values) on AES_RA - STM32F4 [75 Epochs]

## 5 Discussion

In terms of $ge$, similar outcomes can be achieved with both methods. Nevertheless, it should be noted that only some of the CNNs were able to match the results reached with EDAs. In addition, as the box plots show, the outcomes are in general more variable when using CNNs in our experiments.
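For reference, the averaged $ge$ curves and the $\bar{Q}_{t_{ge}}$ values reported above can be obtained from the individual attacks as sketched below. The array names are hypothetical: `ranks[i, t]` is assumed to hold the rank of the correct key after $t+1$ attack traces in the $i$-th repeated attack.

```python
import numpy as np

def averaged_ge(ranks):
    """Average the rank of the correct key over the repeated attacks (guessing entropy curve)."""
    return ranks.mean(axis=0)

def q_t_ge(ge_curve, target=0.0):
    """Number of attack traces needed for the averaged ge to reach the target (None if never)."""
    hits = np.nonzero(ge_curve <= target)[0]
    return int(hits[0]) + 1 if hits.size else None
```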
Also note that, although these CNNs were designed for a similar dataset, they usually require human engineering to succeed. Our comparison shows that EDA-based attacks have some advantages over the DL approach:

* • Simplicity: fewer parameters to tune (i.e., hyper-parameters), fewer trainable parameters, and lighter algorithmic complexity.

* • Generalized solution: no need to re-tune the hyper-parameters when targeting new devices, implementations, datasets, etc.

* • Greater control: we can provide prior information about which time samples are expected to be relevant (i.e., which ones have a stronger correlation with the intermediate values).

* • Search (i.e., “training”) based on the outcome of realistic key recovery attacks, and not on minimizing a loss function.

* • The expressiveness and transparency of the probabilistic model that guides the search [40].

On the other hand, we have also seen how a well-trained DL model can usually approach, or even beat, EDA-based TAs. In addition, training a neural network today is a highly optimized process that can be performed relatively quickly (thanks to GPU parallelization), while EDA-based attacks still have a long way to go to reach these levels of optimization. However, EDA-based PAs are far less complex than DL in terms of algorithmic complexity. This, together with the fact that several ways of optimizing them have already been identified [40] (e.g., attack parallelization, attack computation optimization, etc.), means that they could become as efficient as, or even more efficient than, DL. Conversely, TAs work with a Gaussian assumption, whereas ML models do not assume a probability density function for the data [25]. Nevertheless, note that ML models could also be employed within the EDA approach, although the authors of [40] chose TAs to demonstrate their approach.

## 6 Conclusions and Future work

From this analysis, we draw the same conclusions about MS1 and MS2 as in [39]: we recovered the secret key on MS1 using several predefined CNN architectures, but not on MS2. This shows that both TAs and DL can circumvent masking in schemes like ASCAD or MS1, as some previous works have also shown [20, 4, 54]. However, determining whether more complex CNNs or (EDA-based) PAs can actually bypass masking under unfavourable conditions (no mask leakage in the attacked window and/or no unintended interactions) is beyond the scope of this paper; we believe it is an interesting research question for future work. In conclusion, as shown in this paper, both alternatives can provide similar results, with EDA-based attacks being a simpler and more straightforward option that can represent a very efficient and interpretable shortcut for evaluators.

## References

* [1] Archambeau, C., Peeters, E., Standaert, F.X., Quisquater, J.J.: Template attacks in principal subspaces. In: Cryptographic Hardware and Embedded Systems - CHES 2006. vol. 4249, pp. 1–14. Springer (October 2006)

* [2] Azouaoui, M., Bellizia, D., Buhan, I., Debande, N., Duval, S., Giraud, C., Jaulmes, E., Koeune, F., Oswald, E., Standaert, F.X., Whitnall, C.: A systematic appraisal of side channel evaluation strategies. IACR Cryptol. ePrint Arch. (2020)

* [3] Balasch, J., Gierlichs, B., Grosso, V., Reparaz, O., Standaert, F.X.: On the cost of lazy engineering for masked software implementations. vol. 8968, pp. 64–81. Joye, Marc, Springer-Verlag (2014)

* [4] Benadjila, R., Prouff, E., Strullu, R., Cagli, E., Dumas, C.: Deep learning for side-channel analysis and introduction to ASCAD database.
Journal of Cryptographic Engineering 10 (06 2020) * [5] Bronchain, O., Cassiers, G., Standaert, F.X.: Give me 5 minutes: Attacking ascad with a single side-channel trace. IACR Cryptol. ePrint Arch. (2021) * [6] Bronchain, O., Standaert, F.X.: Breaking masked implementations with many shares on 32-bit software platforms: or when the security order does not matter. IACR Transactions on Cryptographic Hardware and Embedded Systems 2021(3), 202–234 (Jul 2021) * [7] Cagli, E., Dumas, C., Prouff, E.: Convolutional neural networks with data augmentation against jitter-based countermeasures. In: Fischer, W., Homma, N. (eds.) CHES 2017. pp. 45–68. Springer (2017) * [8] Chari, S., Rao, J.R., Rohatgi, P.: Template Attacks. In: Cryptographic Hardware and Embedded Systems - CHES 2002. pp. 13–28. Springer (2002) * [9] Chen, Z., Haider, S., Schaumont, P.: Side-channel leakage in masked circuits caused by higher-order circuit effects. In: Park, J.H., Chen, H.H., Atiquzzaman, M., Lee, C., Kim, T.h., Yeo, S.S. (eds.) Advances in Information Security and Assurance. pp. 327–336. Springer Berlin Heidelberg, Berlin, Heidelberg (2009) * [10] Choudary, M.O., Kuhn, M.G.: Efficient, Portable Template Attacks. IEEE Transactions on Information Forensics and Security 13(2), 490–501 (Feb 2018) * [11] Federal Information Processing Standard: FIPS 197: Announcing the Advanced Encryption Standard (AES) (November 2001). https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.197.pdf (2001) * [12] Fisher, R.: The statistical utilization of multiple measurements. Annals of Eugenics (Cambridge) 8, 376–386 (November 1935) * [13] Fred, K.: Computational complexity of neural networks. https://kasperfred.com/series/introduction-to-neural-networks/computational-complexity-of-neural-networks (2020) * [14] Gilmore, R., Hanley, N., O’Neill, M.: Neural network based attack on a masked implementation of aes. Proceedings of the 2015 IEEE International Symposium on Hardware-Oriented Security and Trust, HOST 2015 pp. 106–111 (06 2015) * [15] Hettwer, B., Gehrer, S., Güneysu, T.: Profiled power analysis attacks using convolutional neural networks with domain knowledge. In: Selected Areas in Cryptography - SAC 2018 - 25th International Conference, Calgary, AB, Canada, August 15-17, 2018. pp. 479–498 (2018) * [16] Heuser, A., Zohner, M.: Intelligent machine homicide. In: Constructive Side-Channel Analysis and Secure Design. pp. 249–264. Springer, Heidelberg (2012) * [17] Hoang, A.T., Hanley, N., O’Neill, M.: Plaintext: A missing feature for enhancing the power of deep learning in side-channel analysis? breaking multiple layers of side-channel countermeasures. IACR Transactions on Cryptographic Hardware and Embedded Systems 2020(4), 49–85 (Aug 2020) * [18] Hospodar, G., Gierlichs, B., De Mulder, E., Verbauwhede, I., Vandewalle, J.: Machine learning in side-channel analysis: a first study. J. Cryptographic Engineering 1, 293–302 (Oct 2011) * [19] Johnson, R., Wichern, D.: Applied multivariate statistical analysis. Springer, Berlin, Heidelberg, Upper Saddle River, NJ, 5 edn. (2015) * [20] Kim, J., Picek, S., Heuser, A., Bhasin, S., Hanjalic, A.: Make some noise. unleashing the power of convolutional neural networks for profiled side-channel analysis. IACR Transactions on Cryptographic Hardware and Embedded Systems 2019(3), 148–179 (May 2019) * [21] Kocher, P., Jaffe, J., Jun, B.: Differential power analysis. In: Wiener, M. (ed.) Advances in Cryptology — CRYPTO’ 99. pp. 388–397. 
Springer Berlin Heidelberg, Berlin, Heidelberg (1999) * [22] Lerman, L., Bontempi, G., Markowitch, O.: Side channel attack : an approach based on machine learning. In: Constructive Side-Channel Analysis and Secure Design, COSADE. pp. 29–41 (2011) * [23] Lerman, L., Bontempi, G., Markowitch, O.: A machine learning approach against a masked aes. Journal of Cryptographic Engineering 5(2), 123–139 (Jun 2015) * [24] Lerman, L., Poussier, R., Markowitch, O., Standaert, F.X.: Template attacks versus machine learning revisited and the curse of dimensionality in side-channel analysis: extended version. Journal of Cryptographic Engineering 8 (11 2018) * [25] Maghrebi, H.: Deep learning based side-channel attack: a new profiling methodology based on multi-label classification. IACR Cryptol. ePrint Arch. (2020) * [26] Maghrebi, H., Portigliatti, T., Prouff, E.: Breaking cryptographic implementations using deep learning techniques. In: SPACE 2016 (December 2016) * [27] Mangard, S., Oswald, E., Popp, T.: Power Analysis Attacks: Revealing the Secrets of Smart Cards (Advances in Information Security). Springer-Verlag, Berlin, Heidelberg (2007) * [28] Mangard, S., Popp, T., Gammel, B.M.: Side-channel leakage of masked cmos gates. In: Menezes, A. (ed.) Topics in Cryptology – CT-RSA 2005. pp. 351–365. Springer Berlin Heidelberg, Berlin, Heidelberg (2005) * [29] Martinasek, Z., Malina, L.: Comparison of profiling power analysis attacks using templates and multi-layer perceptron network (01 2014) * [30] Martinasek, Z., Malina, L., Trasy, K.: Profiling power analysis attack based on multi-layer perceptron network. In: Computational Problems in Science and Engineering, pp. 317–339. Springer (2015) * [31] Masure, L., Dumas, C., Prouff, E.: A comprehensive study of deep learning for side-channel analysis. Transactions on Cryptographic Hardware and Embedded Systems 2020 (11 2019) * [32] Oswald, E., Mangard, S.: Template attacks on masking—resistance is futile. In: CT-RSA 2007: Topics in Cryptology. vol. 4377, pp. 243–256 (02 2007) * [33] Paguada, S., Rioja, U., Armendariz, I.: Controlling the deep learning-based side-channel analysis: A way to leverage from heuristics. In: ACNS Workshops (2020) * [34] Perin, G., Chmielewski, L., Picek, S.: Strength in numbers: Improving generalization with ensembles in machine learning-based profiled side-channel analysis. IACR Transactions on Cryptographic Hardware and Embedded Systems 2020(4), 337–364 (Aug 2020) * [35] Perin, G., Ege, B.: Lowering the bar: Deep learning for side-channel analysis ( whitepaper ). In: Proc. BlackHat. pp. 1–15 (2018) * [36] Picek, S., Samiotis, I.P., Kim, J., Heuser, A., Bhasin, S., Legay, A.: On the performance of convolutional neural networks for side-channel analysis. In: Chattopadhyay, A., Rebeiro, C., Yarom, Y. (eds.) Security, Privacy, and Applied Cryptography Engineering. pp. 157–176. Springer International Publishing, Cham (2018) * [37] Rechberger, C., Oswald, E.: Practical template attacks. In: Lim, C.H., Yung, M. (eds.) Information Security Applications. pp. 440–456. Springer Berlin Heidelberg, Berlin, Heidelberg (2005) * [38] Rijsdijk, J., Wu, L., Perin, G., Picek, S.: Reinforcement learning for hyperparameter tuning in deep learning-based side-channel analysis. IACR Transactions on Cryptographic Hardware and Embedded Systems 2021(3), 677–707 (Jul 2021) * [39] Rioja, U., Batina, L., Armendariz, I., Flores, J.L.: Towards human dependency elimination: Ai approach to sca robustness assessment. IACR Cryptol. 
ePrint Arch., Report 2021/1316 (2021), https://ia.cr/2021/1316

* [40] Rioja, U., Batina, L., Flores, J.L., Armendariz, I.: Auto-tune pois: Estimation of distribution algorithms for efficient side-channel analysis. Computer Networks 198, 108405 (2021)

* [41] Riscure: Piñata board brochure. https://www.riscure.com/uploads/2017/07/pi%C3%B1ata_board_brochure.pdf

* [42] Schindler, W., Lemke, K., Paar, C.: A stochastic model for differential side channel cryptanalysis. In: CHES 2005. pp. 30–46. Springer (2005)

* [43] Shelton, M., Samwel, N., Batina, L., Regazzoni, F., Wagner, M., Yarom, Y.: Rosita: Towards automatic elimination of power-analysis leakage in ciphers. In: NDSS Symposium (01 2021)

* [44] Standaert, F.X., Archambeau, C.: Using subspace-based template attacks to compare and combine power and electromagnetic information leakages. In: Oswald, E., Rohatgi, P. (eds.) Cryptographic Hardware and Embedded Systems - CHES 2008. pp. 411–425. Springer, Heidelberg (2008)

* [45] Standaert, F.X., Malkin, T.G., Yung, M.: A Unified Framework for the Analysis of Side-Channel Key Recovery Attacks. In: Advances in Cryptology - EUROCRYPT 2009. pp. 443–461. Springer Berlin Heidelberg (2009)

* [46] STMicroelectronics: STM32F411VET6 datasheet. https://www.alldatasheet.com/datasheet-pdf/pdf/929991/STMICROELECTRONICS/STM32F411VET6.html (Dec 2016)

* [47] Timon, B.: Non-profiled deep learning-based side-channel attacks with sensitivity analysis. IACR Trans. Cryptogr. Hardw. Embed. Syst. 2019(2), 107–131 (2019)

* [48] Wouters, L., Arribas, V., Gierlichs, B., Preneel, B.: Revisiting a methodology for efficient cnn architectures in profiling attacks. IACR Transactions on Cryptographic Hardware and Embedded Systems 2020(3), 147–168 (Jun 2020)

* [49] Wu, L., Perin, G., Picek, S.: I choose you: Automated hyperparameter tuning for deep learning-based side-channel analysis. IACR Cryptol. ePrint Arch. p. 1293 (2020)

* [50] Wu, L., Picek, S.: Remove some noise: On pre-processing of side-channel measurements with autoencoders. IACR Transactions on Cryptographic Hardware and Embedded Systems 2020(4), 389–415 (Aug 2020)

* [51] Yang, S., Zhou, Y., Liu, J., Chen, D.: Back propagation neural network based leakage characterization for practical security analysis of cryptographic implementations. pp. 169–185 (11 2011)

* [52] Zaid, G., Bossuet, L., Habrard, A., Venelli, A.: Methodology for Efficient CNN Architectures in Profiling Attacks. IACR Transactions on Cryptographic Hardware and Embedded Systems Volume 2020 (2019)

* [53] Zhang, J., Zheng, M., Nan, J., Hu, H., Yu, N.: A novel evaluation metric for deep learning-based side channel analysis and its extended application to imbalanced data. IACR Transactions on Cryptographic Hardware and Embedded Systems 2020(3), 73–96 (Jun 2020)

* [54] Zotkin, Y., Olivier, F., Bourbao, E.: Deep learning vs template attacks in front of fundamental targets: experimental study. IACR Cryptol. ePrint Arch. p. 1213 (2018)
# HST FUV Spectroscopy of Super Star Cluster A in the Green Pea Analog Mrk 71: Revealing the Presence of Very Massive Stars

Linda J. Smith Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA M.S. Oey Astronomy Department, University of Michigan, Ann Arbor, MI 48103, USA Svea Hernandez AURA for ESA, Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA Jenna Ryon Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA Claus Leitherer Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA Stephane Charlot Sorbonne Université, CNRS, UMR 7095, Institut d’Astrophysique de Paris, 98 bis bd Arago, 75014 Paris, France Gustavo Bruzual Instituto de Radioastronomía y Astrofísica, UNAM Campus Morelia, Apartado postal 3-72, 58090 Morelia, Michoacán, México Daniela Calzetti Department of Astronomy, University of Massachusetts Amherst, 710 North Pleasant Street, Amherst, MA 01003, USA You-Hua Chu Institute of Astronomy and Astrophysics, Academia Sinica No. 1, Sec. 4, Roosevelt Road, Taipei 10617, Taiwan Matthew J. Hayes Department of Astronomy and Oskar Klein Centre for Cosmoparticle Physics, AlbaNova University Centre, Stockholm University, SE-10691, Stockholm, Sweden Bethan L. James AURA for ESA, Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA Anne E. Jaskot Department of Astronomy, Williams College, Williamstown, MA 01267, USA Göran Östlin Department of Astronomy and Oskar Klein Centre for Cosmoparticle Physics, AlbaNova University Centre, Stockholm University, SE-10691, Stockholm, Sweden (Received XXX; Revised XXX; Accepted )

###### Abstract

Mrk 71 is a low metallicity ($Z=0.16$ Z⊙) starburst region in the local dwarf galaxy NGC 2366, hosting two super star clusters (SSCs A and B), and is recognized as a Green Pea (GP) analog with SSC A responsible for the GP properties. We present STIS and FOS far-ultraviolet (FUV) spectra of the embedded SSC Mrk 71-A obtained with the Hubble Space Telescope (HST). The STIS FUV spectrum shows the characteristic features of very massive stars (VMS, masses $>100$ M⊙) and we derive an age of $1\pm 1$ Myr by comparison with the Charlot & Bruzual suite of spectral population synthesis models with upper mass limits of 300 and 600 M⊙. We compare the STIS spectrum with all known SSC spectra exhibiting VMS signatures: NGC 5253-5, R136a, NGC 3125-A1 and the $z=2.37$ Sunburst cluster. We find that the cluster mass-loss rates and wind velocities, as characterized by the C iv P Cygni profiles and the He ii emission line strengths, are very similar over $Z=0.16$ to 0.4 Z⊙. This agrees with predictions that the optically thick winds of VMS will be enhanced near the Eddington limit and show little metallicity dependence. We find very strong damped Lyman-$\alpha$ absorption with $N$(H i) $=10^{22.2}$ cm-2 associated with Mrk 71-A. We discuss the natal environment of this young SSC in terms of radiatively-driven winds, catastrophic cooling and recent models where the cluster is surrounded by highly pressurized clouds with large neutral columns.
Dwarf irregular galaxies (417); Starburst galaxies (1570); Young massive clusters (2049); Massive stars (732); H II regions (694); Spectroscopy (1558)

Journal: ApJ. Software: stistools, VoigtFit, PyNeb.

## 1 Introduction

The distinction between young massive star clusters (YMCs) found in nearby galaxies and globular clusters (GCs) formed at high redshift has become blurred in recent years, with the realization that both populations can be formed in intense star formation episodes when gas pressures are high (Elmegreen & Efremov, 1997; Kruijssen, 2015; Elmegreen, 2018). Super star clusters (SSCs, mass $>10^{5}$ M⊙, radius $\lesssim 1$ pc) are the most massive subset of YMCs and are only found locally in galaxy mergers, starburst dwarf galaxies and the centers of large galaxies. Studies of these GC-like systems are critical because they form under high pressure conditions that are similar to those found in star-forming galaxies at the peak of cosmic star formation. The strongly lensed SSC in the Sunburst Arc galaxy at $z=2.37$ (Rivera-Thorsen et al., 2017b; Chisholm et al., 2019; Vanzella et al., 2020, 2022; Meštrić et al., 2023), and the cluster population of the lensed Sparkler galaxy at $z=1.38$ (Mowla et al., 2022; Claeyssens et al., 2023; Adamo et al., 2023) offer a rare insight into cluster formation at this peak epoch. It is thus essential to study local examples of young SSCs to investigate their massive star populations via FUV spectroscopy, and compare them to stellar population synthesis models, as a means of testing low metallicity models at the youngest ages representative of high-z systems, accessible with JWST spectroscopy.

Mrk 71 is a well-studied, low metallicity, local starburst region in the dwarf galaxy NGC 2366 at a distance of 3.44 Mpc (Tolstoy et al., 1995) with $12+{\rm log(O/H)}=7.89$ (Izotov et al., 1997; Chen et al., 2023) or 0.16 Z⊙ using the solar oxygen abundance of Asplund et al. (2009). Micheva et al. (2017) suggest that Mrk 71 is a local Green Pea (GP) analog. GPs are low metallicity, intensely star-forming galaxies with extremely strong [O iii] $\lambda 5007$ emission and a typical redshift of $\sim 0.2$ due to the selection technique, which puts the strong [O iii] emission into the green band at this redshift (Cardamone et al., 2009). Mrk 71 shares many properties with GPs including a high ionization parameter, and may be a good candidate for the escape of Lyman continuum photons (Komarova et al., 2021). The only substantial difference is that Mrk 71 is 1–2 orders of magnitude less luminous than GPs. The starburst region of NGC 2366 contains two clusters, Mrk 71-A and Mrk 71-B, whose properties were first investigated in the FUV by Drissen et al. (2000) with HST using the Faint Object Spectrograph (FOS). For Mrk 71-A, they detected C iv $\lambda 1550$ nebular emission and no underlying stellar population. Drissen et al. (2000) concluded that Mrk 71-A is $\leq 1$ Myr old, in the massive ultracompact H ii region phase with the cluster embedded in natal molecular material, hidden from view in the FUV by dust, and is responsible for the bulk of the ionizing photons in the starburst. They found that Mrk 71-B is older at 3–5 Myr and has evacuated its surrounding region. The two clusters have a projected separation of 5 arcsec or 83 pc. Micheva et al. (2017) show that Mrk 71-A is responsible for the GP-like properties of Mrk 71. They derive a mass of $1.4\times 10^{5}$ M⊙ by modeling the H$\alpha$ luminosity.
Mrk 71 is also well known for having very broad emission wings with a full width zero intensity of $\gtrsim 6000$ km s-1 (Roy et al., 1992; Gonzalez-Delgado et al., 1994; Binette et al., 2009) associated with the strong nebular lines of e.g., [O iii] and H$\alpha$. Komarova et al. (2021) suggest that these features centered on Mrk 71-A originate in a dense, clumpy, Lyman continuum or Lyman-$\alpha$-driven superwind. Oey et al. (2017) present CO($J=2-1$) observations of Mrk 71-A and detect a compact ($\sim 7$ pc) molecular cloud with a mass of $10^{5}$ M⊙, which is similar to the mass of the SSC itself and implies a high star formation efficiency of 50%. These observations suggest that stellar winds are not effective in fully clearing the natal gas at an age of 1 Myr and may be suppressed by catastrophic cooling due to the high densities in young compact SSCs (e.g. Silich et al., 2004; Silich & Tenorio-Tagle, 2017, 2018). Radiation feedback should therefore dominate at the earliest ages as shown by Komarova et al. (2021) and predicted by Silich & Tenorio-Tagle (2013). To further understand the role of suppressed winds, it is necessary to identify the massive star population of Mrk 71-A in order to quantify the stellar winds and ionizing spectrum, and to verify its young age.

The well-studied young SSC NGC 5253-5 at the starburst center of the blue compact galaxy NGC 5253 has a similar age to Mrk 71-A of 1 Myr (Calzetti et al., 2015; Smith et al., 2016). NGC 5253-5 was thought to have an age of 3–5 Myr from the presence of broad Wolf-Rayet emission lines in optical spectra (Monreal-Ibero et al., 2010) but Smith et al. (2016) showed that the stellar emission features arise from very massive stars (VMS, mass $>100$ M⊙) at an age of 1–2 Myr by analyzing HST FUV spectroscopy obtained with the Space Telescope Imaging Spectrograph (STIS). NGC 5253-5 coincides with the peak of the H$\alpha$ emission in the galaxy and Smith et al. (2016) showed that the high ionizing flux can only be explained by the presence of VMS. The presence of VMS in Mrk 71-A has been suggested by James et al. (2016) and Micheva et al. (2017) from the detection of He ii $\lambda 4686$ emission in narrow-band imaging with the Wide Field Camera 3 (WFC3) on HST.

In Section 2, we present STIS FUV spectroscopic observations of Mrk 71-A, and in Section 3 we show that VMS are present in the cluster. We examine the interstellar extinction and compare the spectrum with stellar population synthesis models in Section 4. We compare the FUV spectrum of Mrk 71-A with all known examples of SSCs containing VMS, and discuss what the STIS observations reveal about the local environment of Mrk 71-A in Section 5. The summary and conclusions are presented in Section 6.

Figure 1: WFC3/UVIS F438W (blue), F502N (green), F814W (red) image of Mrk 71-A showing the positions of the long STIS slit and the FOS apertures (square represents G190H and circle represents G130H observations).

## 2 Observations and Data Reduction

We have obtained FUV long-slit spectroscopy with STIS of Mrk 71 as part of a Cycle 28 program (ID 16261; PI Oey) aimed at investigating the low metallicity feedback and stellar populations of SSCs Mrk 71-A and -B via FUV imaging and spectroscopy. The FUV imaging results are reported in Oey et al. (2023). Here we report on the FUV spectrum of Mrk 71-A. We supplement this dataset with archival Faint Object Spectrograph (FOS) observations of Mrk 71-A (ID 6096; PI Drissen and ID 5246; PI Skillman). In Fig.
1 we show the positions of the STIS and FOS apertures superimposed on WFC3/UVIS (ID 13041; PI James) images of Mrk 71-A. ### 2.1 STIS Spectra Long-slit FUV spectra of Mrk 71-A were obtained with HST/STIS in 2021 March and 2022 February. The dataset comprises 11 exposures over 4 visits (totaling 8.7 hr) taken with the G140L grating (1162–1712 Å) and the $52^{\prime\prime}\times 0^{\prime\prime}.2$ slit. The position angle was set at 81°.5 to obtain spectra of both Mrk 71-A and -B. The image scale is $0.0246$ arcsec/pixel ($=0.41$ pc/pixel) and 0.58 Å/pixel. Each observation was dithered along the slit using a 3-point dither pattern and a spacing of 12 pixels. Figure 2: Top panel: STIS G140L spectrum of Mrk 71-A with the error array plotted along the bottom. The main stellar features and interstellar lines are identified. The damped Lyman-$\alpha$ fit is shown in red and the Ly$\alpha$-corrected N V emission line profile in blue. Bottom panel: FOS G130H and G190H spectra of Mrk 71-A. The main stellar features, interstellar and nebular lines are identified. The flux is in units of $10^{-15}$ erg cm-2 s-1 Å-1. The individual 2D spectra were combined as follows. The stistools (Hack et al., 2018) package sshift was used to align the flat-fielded (FLT) images for each visit after the dither offsets were verified by cross-correlating the brightest regions of the spectra. The aligned FLTs were then combined by averaging and weighting by the visit exposure times. The error arrays were combined in quadrature as variance arrays with the square of the exposure times as weights. A flux-calibrated and distortion-corrected 2D spectrum was then created from the combined FLT with the stistools package x2d. We selected a background region 340 pixels (=8$\farcs$4) below the SSC-A spectrum and 135 pixels (=3$\farcs$3) wide. In Fig. 1, this region is to the west of Mrk 71-A, outside of the field of view shown, and thus well away from the nebula. The background region was averaged and fit with a fifth-order polynomial and then subtracted row by row from the combined X2D image. Finally, a 1D spectrum was extracted by summing over 11 pixels (=0$\farcs$27) in the spatial direction to optimize the signal-to-noise. The FWHM of Mrk 71-A in the spatial direction along the STIS slit is measured to be $4.80\pm 0.15$ pixels by fitting a Gaussian to the profile. From the STIS Instrument Handbook (Prichard et al., 2022), the line spread function (LSF) of a point source at 1500 Å has a FWHM of 1.5 pixels for the same observational setup and thus Mrk 71-A is resolved. Subtracting the LSF from the measured FWHM in quadrature, we derive a size of 109 mas or $0.93\pm 0.03$ pc in radius. We also measured the FWHM in the spatial direction of an individual exposure to verify that the co-addition of the 11 spectra did not degrade the resolution; we find a FWHM of $4.68\pm 0.17$ pixels, giving a deconvolved radius of 0.91 pc, which is within the errors. The deconvolved FWHM of Mrk 71-A then gives a spectral resolution of 2.65 Å or 550 km s-1 at 1450 Å. We confirm this value by fitting Gaussians to unresolved features (nebular O iii] $\lambda 1661$, interstellar Si ii $\lambda\lambda 1526,1260$) and find a mean FWHM of $2.60\pm 0.16$ Å. We correct for a radial velocity of $+95$ km s-1 (Micheva et al., 2019) for Mrk 71 and leave the data in the original bin size of 0.58 Å. The signal-to-noise ratio of the final spectrum reaches a maximum of 46 at 1300 Å and decreases to 20 beyond 1600 Å where the G140L sensitivity declines. 
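As a cross-check of the numbers above, the quadrature deconvolution and the corresponding physical scales can be reproduced in a few lines. The plate scales (0.0246 arcsec pixel-1, 0.58 Å pixel-1), the 3.44 Mpc distance, and the measured FWHM values are taken from the text; small differences with respect to the quoted 109 mas and 0.93 pc simply reflect rounding.

```python
import numpy as np

fwhm_meas, fwhm_lsf = 4.80, 1.5                          # pixels: measured spatial profile, point-source LSF
fwhm_dec = np.sqrt(fwhm_meas**2 - fwhm_lsf**2)           # ~4.56 pixels after quadrature subtraction

pix_arcsec, pix_ang = 0.0246, 0.58                       # STIS G140L plate scales used in the text
pc_per_arcsec = 3.44e6 / 206265.0                        # ~16.7 pc per arcsec at 3.44 Mpc

size_mas = fwhm_dec * pix_arcsec * 1e3                   # ~112 mas deconvolved FWHM
radius_pc = 0.5 * fwhm_dec * pix_arcsec * pc_per_arcsec  # ~0.94 pc radius

dlam = fwhm_dec * pix_ang                                # ~2.6 A effective spectral resolution
dv = 2.998e5 * dlam / 1450.0                             # ~550 km/s at 1450 A
print(size_mas, radius_pc, dlam, dv)
```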
We present and discuss the spectrum in the next section. As can be seen in Fig. 1, there is another object $0^{\prime\prime}.4$ to the E of Mrk 71-A. The spectrum of this region is nebular and shows strong C iv and O iii] emission. The analysis of the nebular spectra and FUV imaging for Mrk 71-A and -B will be presented in a separate paper.

### 2.2 FOS Spectra

Mrk 71-A was observed with FOS in 1996 using the G130H grating (1140–1603 Å) and the 0$\farcs$86 circular aperture with a total exposure time of 6150 s. The spectrum is presented and discussed in Drissen et al. (2000). A longer wavelength spectrum was obtained in 1993 using the G190H grating (1603–2310 Å) and the 1$\farcs$0 square aperture with a total exposure time of 3300 s. The spectrum covering C iii] $\lambda\lambda 1906,1909$ is discussed in Garnett et al. (1995) and Rigby et al. (2015). We assume that the FOS spectral resolution is 3.2 Å, as appropriate for an extended source, and use the re-calibrated spectra from the atlas of Leitherer et al. (2011). We merged the two wavelength regions by using a scaling factor of 0.90 for the G130H spectrum. This value was derived by comparing the two FOS spectra with the STIS spectrum in the overlap region.

Table 1: Emission and Absorption Line Measurements for Mrk 71-A.

| Ion | Wavelength (Å) | Line Type | Velocity (km s-1) | FWHM (km s-1) | EW (Å) | $10^{15}$ Flux (erg s-1 cm-2) | Instrument |
|---|---|---|---|---|---|---|---|
| N v | 1238.82, 1242.80 | P Cyg em. | $+560$ | $\cdots$ | $-3.9\pm 0.1$ | $\cdots$ | STIS |
| O v | 1371.30 | Stellar abs. | $-165$ | $\cdots$ | $+0.7\pm 0.1$ | $\cdots$ | STIS |
| | | P Cygni em. | $+1920:$ | $\cdots$ | $-0.3\pm 0.1$ | $\cdots$ | STIS |
| | | P Cygni abs. | $-2765$ | $\cdots$ | $+1.1\pm 0.1$ | $\cdots$ | STIS |
| C iv | 1548.20, 1550.78 | P Cyg em. | $+930$ | $\cdots$ | $-0.5\pm 0.1$ | $\cdots$ | STIS |
| | | P Cygni abs. | $-3300$ | $\cdots$ | $+1.5\pm 0.2$ | $\cdots$ | STIS |
| He ii | 1640.42 | Stellar em. | $+450$ | $1770\pm 150$ | $-3.0\pm 0.2$ | $6.5\pm 0.4$ | STIS |
| O iii] | 1660.81 | Neb. em. | $+65$ | $550\pm 210$ | $-0.5:$ | 1.1: | STIS |
| O iii] | 1666.15 | Neb. em. | $+40$ | $550\pm 210$ | $-1.3:$ | 2.7: | STIS |
| Si ii | 1260.42 | IS abs. | $+16$ | $\cdots$ | $1.5\pm 0.1$ | $\cdots$ | STIS |
| O i, Si ii | 1302.17, 1304.37 | IS abs. | $+16$ | $\cdots$ | $1.7\pm 0.1$ | $\cdots$ | STIS |
| C ii | 1334.53, 1335.71 | IS abs. | $+35$ | $\cdots$ | $1.6\pm 0.1$ | $\cdots$ | STIS |
| Si iv | 1393.75 | IS abs. | $+95$ | $\cdots$ | $0.6\pm 0.1$ | $\cdots$ | STIS |
| Si iv | 1402.77 | IS abs. | $+60$ | $\cdots$ | $0.6\pm 0.1$ | $\cdots$ | STIS |
| Si ii | 1526.71 | IS abs. | $+18$ | $\cdots$ | $0.9\pm 0.1$ | $\cdots$ | STIS |
| Fe ii | 1608.45 | IS abs. | $-24$ | $\cdots$ | $0.9\pm 0.1$ | $\cdots$ | STIS |
| Al ii | 1670.79 | IS abs. | $+45$ | $\cdots$ | $1.2\pm 0.1$ | $\cdots$ | STIS |
| C iv | 1549.49 | Neb. em. | $+75$ | $\cdots$ | $-3.4\pm 0.3$ | $14.0\pm 1.2$ | FOS |
| O iii] | 1660.81, 1666.15 | Neb. em. | $+50$ | $\cdots$ | $-3.3\pm 0.2$ | $11.4\pm 1.0$ | FOS |
| [Si iii] | 1883.00 | Neb. em. | $+10$ | $\cdots$ | $-3.2\pm 0.2$ | $10.0\pm 0.8$ | FOS |
| Si iii] | 1892.03 | Neb. em. | $+25$ | $\cdots$ | $-2.7\pm 0.2$ | $8.0\pm 0.8$ | FOS |
| C iii] | 1907.71 | Neb. em. | $+10$ | $\cdots$ | $-20.0\pm 0.3$ | $53.6\pm 2.1$ | FOS |

## 3 Description of the Spectra

We show the STIS and FOS spectra of Mrk 71-A in Fig. 2 and label the main stellar features, nebular emission lines, and the strong interstellar (IS) absorption lines.
Line measurements are provided in Table 1 where negative equivalent widths (EWs) indicate emission features. One striking feature of the FUV spectrum is the strength of the damped Ly$\alpha$ absorption at 1215 Å. The damping wings extend to $\sim 1400$ Å and subsume the N v P Cygni absorption profile. To derive the H i column density and correct the N v profile, the STIS spectrum was first normalized using a spline fit and extrapolated to the bluest wavelength available at 1162 Å. Voigt profiles were then fitted to the spectrum using two components to represent the Milky Way (MW) and Mrk 71-A Ly$\alpha$ absorption, following the approach of Hernandez et al. (2021). For the MW component, a value of $\log N({\rm H\,I})=20.79~{}\hbox{cm${}^{-2}$}$ was adopted from the H i map of HI4PI Collaboration et al. (2016). Radial velocities of $0$ (MW) and $+95$ km s-1 for Mrk 71-A (Micheva et al., 2019) were assumed and the routine voigtfit (Krogager, 2018) was used to fit the observed profile. We derive $\log N({\rm H\,I})=22.222\pm 0.005$ cm-2 for the Mrk 71-A line of sight. We discuss this high value in more detail in Section 4.1 but note here that the high column of neutral hydrogen is indicative of a large amount of neutral gas associated with Mrk 71-A. The resulting fit to the damped Ly$\alpha$ profile and the corrected N v profile are shown in Fig. 2. We note that we cannot recover the absorption component of the N v P Cygni profile because it has disappeared into the Ly$\alpha$ trough. We also note that the STIS spectral resolution at 1215 Å of 650 km s-1 is insufficient to resolve any Ly$\alpha$ emission that may be associated with Mrk 71-A from the geo- coronal component given the low radial velocity of $+95$ km s-1. The most conspicuous stellar features in the STIS spectrum are the strong N v emission (once corrected for the damped Ly$\alpha$ absorption) and the strong and broad (FWHM $=1770$ km s-1) He ii $\lambda 1640$ emission. O v $\lambda 1371$ is clearly detected and consists of blue-shifted stellar wind absorption and a possible weak, ill-defined P Cygni emission component. This feature is rarely observed in cluster spectra because it is usually produced by the hottest and most massive O stars with short lifetimes. Si iv P Cygni emission is absent, which is consistent with the other spectral features, indicative of the hottest and most massive stars where Si3+ is photoionized to Si4+ (Drew, 1989). The stellar wind column density of Si iv will also be low given the metallicity of Mrk 71 of $0.16Z_{\sun}$. The resonance doublet of C iv at $\lambda\lambda 1548,1551$ is seen as a P Cygni profile and its weakness again demonstrates the low metallicity of the region. These stellar diagnostic lines (O v, N v, He ii present and Si iv absent) together with the fact that Mrk 71-A is clearly still embedded in its natal gas (Fig. 1, Drissen et al., 2000) indicate that Mrk 71-A is very young and contains very massive stars. Crowther et al. (2016) used spatially resolved STIS FUV spectra of the R136a star cluster in 30 Doradus (age of 1–2 Myr) to show that the broad He ii $\lambda 1640$ emission is totally dominated by VMS in the integrated cluster spectrum at this young age. Thus the presence of broad He ii emission in Mrk 71-A at a cluster age of $\sim 1$ Myr (Drissen et al., 2000) is a clear indicator of main sequence stars more massive than 100 M⊙ (VMS). Likewise, such a strong N v emission feature is only seen at the youngest ages. 
An alternative possibility is that the cluster is older ($>2.5$ Myr) and the spectral features are due to the presence of classical WR stars. WN stars would then produce the N v and He ii emission but the standard WN N iv] $\lambda 1486$ emission line is absent in the STIS spectrum (Fig. 2). In addition, WC stars are needed to account for the O v feature but the weakness of C iv argues against their presence because the abundance of C is expected to be enhanced through core He-burning. We thus discount that classical WR stars are responsible for the UV spectral features of Mrk 71-A and conclude that the cluster contains VMS. In Sect. 4.2, we compare the spectrum to stellar population synthesis (SPS) models incorporating VMS. The FOS spectra were taken with 0$\farcs$86 and 1$\farcs$0 arcsec apertures (as shown in Fig. 1) and are dominated by strong nebular emission lines arising from the ultracompact H ii region. Overall, the spectral features are reminiscent of low metallicity, high redshift star-forming galaxies (e.g. Senchyna et al., 2022; Schaerer et al., 2022). C iii] $\lambda 1909$ is very strong with an equivalent width of 20 Å (see also Rigby et al., 2015) and nebular C iv emission is present in the FOS spectrum. The stellar features of N v and O v, although weak, are seen to be present when compared to the STIS spectrum. The G190H spectrum is very noisy near the start at 1603 Å and it is not clear if He ii $\lambda 1640$ stellar emission is present because the potential noisy emission feature (marked “He ii ?” in Fig. 2) has a peak wavelength at 1628 Å, although the wavelength calibration near the starting wavelength may be uncertain. The ratio of [Si iii] $\lambda 1883$ to Si iii] $\lambda 1892$ can be used as an electron density diagnostic. We use PyNeb (Luridiana et al., 2015) to derive a value of $n_{\rm e}=7400\pm 3900$ cm-3 adopting an electron temperature $T_{\rm e}$ of $15\,000$ K (Gonzalez-Delgado et al., 1994; Chen et al., 2023). This corresponds to a high thermal pressure $P/k$ of $\sim 2\times 10^{8}$ cm-3 K. Mingozzi et al. (2022) show that electron densities derived from the Si iii] lines are insensitive to electron temperature. The derived density is much higher than the value of 800 cm-3 quoted by Pérez et al. (2001) from the [Ar iv] line ratio measured in ground-based spectra, while this refers to the larger scale H ii region rather than the inner 1 arcsec or 17 pc, as observed with FOS. Micheva et al. (2019) derive values from ground-based IFU data of $T_{\rm e}=13400$ K and $n_{\rm e}=273$ cm-3 for Mrk 71-A from the [S ii] doublet. Likewise, Chen et al. (2023) derive a lower $n_{\rm e}$ value of $160\pm 10$ cm-3 from the [O ii] $\lambda\lambda 3726/3729$ ratio for Mrk 71. The much larger $n_{\rm e}$ obtained from [Si iii] compared to values derived from [Ar iv] and other optical diagnostics is similar to trends in $n_{\rm e}$ measurements for local blue compact starbursts in the CLASSY sample obtained by Mingozzi et al. (2022). These authors find that UV density diagnostic line ratios are overall $\sim 2$ dex higher than their optical counterparts. We note that the thermal pressure $P/k$ would be much lower if the optical values of $n_{\rm e}$ are adopted. 
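The [Si iii] density quoted above can be reproduced with a short PyNeb call using the line fluxes in Table 1. This is only a sketch: it assumes PyNeb's standard Atom.getTemDen interface and that Si III atomic data are installed, and the exact value depends on the adopted atomic data.

```python
import pyneb as pn

si3 = pn.Atom('Si', 3)                      # Si^2+, i.e. the [Si iii]/Si iii] doublet
ratio = 10.0 / 8.0                          # F(1883) / F(1892) from Table 1
ne = si3.getTemDen(ratio, tem=1.5e4, wave1=1883, wave2=1892)
print(f"n_e ~ {ne:.0f} cm^-3")              # of order 7e3 cm^-3, as quoted in the text
```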
## 4 Comparison with Models ### 4.1 Dust Attenuation We derive the interstellar extinction $A_{V}$ towards Mrk 71-A by comparing the observed continuum energy distribution of the combined STIS and FOS spectrum to the continuum energy distributions predicted by stellar population synthesis (SPS) models for various values of $A_{V}$. We use the single star Binary Population and Spectral Synthesis (BPASS) version 2.2.1 models (Eldridge et al., 2017; Stanway & Eldridge, 2018) with a Kroupa (2001) initial mass function, an upper mass cutoff of 300 M⊙ and a metallicity of $Z=0.003$. Although we use Charlot & Bruzual (C&B) models to fit the stellar spectral features in Section 4.2, we note that the continuum shape is identical in both BPASS and C&B models. We merged the G190H FOS (scaled by 0.514) and the STIS spectra to provide a long wavelength baseline for determining the extinction. The spectrum was de- reddened for the Milky Way extinction using a value of E(B$-$V) of 0.033 mag (Schlafly & Finkbeiner, 2011) with the Cardelli et al. (1989) law, corrected for a radial velocity of 95 km s-1, and binned at 1 Å intervals to match the BPASS model spectra. The nebular continuum arising from ionized gas makes a significant contribution to the total continuum flux of clusters with ages $<5$ Myr (Reines et al., 2010) and therefore needs to be added to the model spectra. We calculated the nebular continuum by following the analytic approach of Leitherer et al. (1999), which assumes all stellar photons below 912 Å are converted into free-free, free-bound and two-photon emission at longer wavelengths. The resulting nebular continuum shape is simply scaled by the hydrogen ionizing photon rate $Q$(H i) before adding to the model stellar continua. We derived the best fitting extinction value $A_{V}$ by reddening the model spectra for ages of 1, 2 and 3 Myr using the average SMC Bar extinction law of Gordon et al. (2003) and comparing to the STIS$+$FOS spectrum using a $\chi^{2}$ approach (Smith et al., 2006; Westmoquette et al., 2014). We normalized the observed and model spectra between 2100–2300 Å (no 2200 Å extinction bump is detected) to provide maximum leverage for determining the reddening from the FUV spectral slope, and determined $\chi^{2}$ over the relatively featureless wavelength range 1410–1515 Å. We find $A_{V}=0.23\pm 0.02$ or $E(B-V)=0.084\pm 0.007$ mag for ages of 1–3 Myr. For comparison, James et al. (2016) find a range of $E(B-V)$ from 0.13–0.23 mag for Mrk 71 as a whole using an LMC extinction law. Micheva et al. (2019) find $E(B-V)=0.21$ mag for Mrk 71-A from the nebular Balmer emission lines. Chen et al. (2023) use the Balmer lines to derive $E(B-V)=0.06\pm 0.03$ mag for Mrk 71 overall. We note that these other determinations are based on nebular line ratios and can be higher than the reddening determined from stellar continua for low metallicity galaxies (Shivaei et al., 2020). Figure 3: Normalized STIS spectra of SSCs A and B in Mrk 71 in the region of the Ly$\alpha$ absorption feature and interstellar absorption lines of Si ii $\lambda 1260$, O i $\lambda 1302$, Si ii $\lambda 1304$ and C ii $\lambda\lambda$1334,1335. For SSC A, the observed spectrum (black) and the Lyman-$\alpha$ corrected spectrum (blue) are shown. The observed spectrum of SSC A (black) should be compared with the spectrum of SSC B (red) to gauge the different strengths of the Ly$\alpha$ absorption. 
The spectrum for SSC A (blue) should be compared with the spectrum of SSC B (red) to gauge the similar strengths of the interstellar absorption lines. Overall, the derived extinction value is low considering the high column density of neutral hydrogen $\log N({\rm H\,I})=22.22$ cm-2 along the line of sight (Sect. 3). The Gordon et al. (2003) SMC Bar relationship of $N$(H i)/$A_{V}=1.32\times 10^{22}$ cm-2 mag-1 gives a much higher $A_{V}=1.26$ or $E(B-V)=0.46$ mag. If we scale this relationship to take account of the lower metallicity of Mrk 71-A, we obtain $A_{V}=0.81$ mag, which is still $\sim 3.5$ times higher than the derived value of 0.23 mag. This implies there is little dust in the line of sight to Mrk 71-A. We also have a STIS spectrum of the older, nearby cluster Mrk 71-B (projected separation of 83 pc) and derive $\log N({\rm H\,I})=20.942\pm 0.043$ cm-2 along its line of sight by fitting the Ly$\alpha$ absorption, taking into account a MW component, as was done for Mrk 71-A in Sect. 3. The Ly$\alpha$ absorption profiles for the two clusters are shown in Fig. 3. The H i column density of Mrk 71-A is a factor of 20 times higher than that observed along the line of sight of Mrk 71-B. HST images of the Mrk 71 region show that Mrk 71-B has evacuated its surrounding gas and sits in a cavity (James et al., 2016). The H i column associated with Mrk 71-B therefore arises along the line of sight in the interstellar medium of the parent galaxy NGC 2366. The high H i column density measured for Mrk 71-A is clearly local to the embedded SSC and has a value of $\log N({\rm H\,I})=22.20$ cm-2 when corrected for the likely ISM component of NGC 2366. Turning to the strengths of the interstellar metal absorption lines in the STIS spectra of Mrk 71-A and Mrk 71-B, we also show the wavelength region covering the Si ii $\lambda 1260$, O i $\lambda 1302$, Si ii $\lambda 1304$ and C ii $\lambda\lambda$1334,1335 transitions in Fig. 3 for both clusters. We note that the IS absorptions are very similar in strength despite the factor of 20 difference in the $N$(H i) column. Micheva et al. (2017) discuss the strengths of the Si ii IS absorption lines towards Mrk 71-A using the measurements of Leitherer et al. (2011) taken from the FOS spectrum. They find a Si ii $\lambda 1260/\lambda 1526$ ratio of 6.0, indicating that the lines are optically thin. They indirectly derive an upper limit of $\log N({\rm H\,I})<20.0$ cm-2 and discuss this in the context of Lyman continuum escape. With the benefit of having a higher resolution and higher S/N STIS spectrum for Mrk 71-A, we find that the Si ii $\lambda\lambda 1260,1526$ absorption lines have equivalent widths of 1.58 and 0.93 Å respectively, giving a Si ii $\lambda 1260/\lambda 1526$ ratio of 1.7. This intermediate ratio between optically thick and thin gas, together with the fact that the residual intensities of the absorption lines are close to 0.6 (Fig. 3), suggests that the Si ii IS lines are optically thick but with a low gas covering fraction (Rivera-Thorsen et al., 2017a; Östlin et al., 2021). The overall similarity in the metal interstellar features along the lines of sight to Mrk 71-A and Mrk 71-B, as shown in Fig. 3, indicates that they are probably formed along the line of sight within the galaxy and are not local to the clusters, as expected for a low-ionization species. It is puzzling that no distinct metal IS absorption features are associated with the high column of H i gas in front of Mrk 71-A. 
Higher resolution FUV spectra that separate out the MW and NGC 2366/Mrk 71 components are necessary to investigate this in more detail. Figure 4: Top panel: Comparison of the Mrk 71-A STIS spectrum with the best-fitting C&B models (upper mass limits $M_{U}=300,600$ M⊙, $Z=0.002$, age $=1$ Myr). The main stellar features (above the spectrum) and interstellar lines (below the spectrum) are identified. Bottom panel: Comparison for a C&B model with $M_{U}=300$ M⊙, $Z=0.004$, age $=1$ Myr. The flux is in units of $10^{-15}$ erg cm-2 s-1 Å-1.

### 4.2 Age and modeling of VMS features

A common technique to determine ages of young massive clusters is to model the UV spectral features using stellar population synthesis models (e.g. Sirressi et al., 2022). It is well known, however, that SPS models with upper mass cutoffs of 100 M⊙ cannot reproduce stellar He ii $\lambda 1640$ emission at ages of $<3$ Myr because of the absence of VMS in the models (e.g. Wofford et al., 2014; Smith et al., 2016; Senchyna et al., 2017; Leitherer et al., 2018). To overcome this deficiency, SPS models are required to incorporate VMS model atmospheres and evolutionary tracks that adopt realistic mass-loss rates for the optically thick winds of VMS. Recently, Martins & Palacios (2022) have produced the first SPS models including VMS that are tailored to LMC metallicity or 0.4 Z⊙. They adopt empirical mass-loss rates for VMS in R136a (Bestenlehner et al., 2014) for optically thick stellar winds, and the standard mass-loss rates for optically thin O star winds derived by Vink et al. (2001). Martins & Palacios (2022) compare their FUV synthetic spectra at ages of 1 and 2 Myr with the integrated R136a spectrum (Crowther et al., 2016) and NGC 3125-A1 (Wofford et al., 2014) and find reasonable agreement. Overall, the work of Martins & Palacios (2022) shows that it is possible to match the strengths of the UV spectral features, particularly the He ii emission at LMC metallicity when the correct mass-loss rates are used to account for the optically thick winds of VMS. We compare the FUV spectrum of Mrk 71-A with the updated SPS models from Bruzual & Charlot (2003), which we refer to as Charlot and Bruzual or C&B models for single stars. These models have the advantage that they cover metallicities down to $Z=0.0001$ and thus are more suitable than the Martins & Palacios (2022) models, which are tailored for LMC metallicity. The revisions to the Bruzual & Charlot (2003) models are described in Plat et al. (2019) and Sánchez et al. (2022). They include updated stellar evolutionary tracks from Chen et al. (2015) for masses up to 600 M⊙, which adopt the mass-loss formalism of Vink et al. (2011), where mass-loss rates are enhanced as the stellar luminosity approaches the Eddington limit, and the metallicity dependence decreases. This scheme allows for relatively higher mass-loss rates for VMS at low metallicity. The C&B models utilize theoretical spectral libraries from Leitherer et al. (2010) and Chen et al. (2015) for O stars computed with wm-basic (Pauldrach et al., 2001) and the PoWR library for WR stars (Hamann & Gräfener, 2004). Recently, Wofford et al. (2023) have used the C&B models to fit the HST/COS FUV spectrum of the SSC NGC 3125-A1, which has strong He ii $\lambda 1640$ emission and O v $\lambda 1371$ absorption present, suggestive of VMS (Wofford et al., 2014). They find an excellent fit to the spectrum for $Z=0.008$ and an age of 2.2 Myr using an upper mass limit of 300 M⊙.
Previous attempts at modeling the FUV spectrum with Starburst99 (Leitherer et al., 1999) could not reproduce the strength of He ii $\lambda 1640$ without invoking a flat IMF exponent at an age of 3 Myr with an upper mass limit of 100 M⊙ (Wofford et al., 2014). Senchyna et al. (2021) compare the C&B models with HST/COS FUV spectroscopy of 7 nearby star-forming regions at 0.2–0.4 Z⊙ that exhibit broad (1500–2000 km s-1) He ii $\lambda 1640$ emission. They find that the models for continuous SF cannot simultaneously match the UV stellar wind lines and the optical nebular diagnostic lines. The model fits underestimate the strength of the He ii emission and C iv P Cygni absorption and emission, indicating a higher stellar metallicity is required, which is not supported by the nebular lines. To explain this mismatch, Senchyna et al. (2021) suggest that very massive stars formed through binary mass transfer and mergers unaccounted for in the models could explain the under-fitting of the stellar wind lines. To compare the spectral features present in the STIS FUV spectrum of Mrk 71-A we use the C&B suite of SPS models for $Z=0.002$ and 0.004 with upper mass limits of 300 and 600 M⊙ and a Chabrier (2003) initial mass function. The nebular continuum, scaled by $Q$(H i), is added to the model spectrum, and this is reddened by $A_{V}=0.23$ mag using the Gordon et al. (2003) extinction law. The model spectra are then binned to 0.584 Å to replicate the STIS spectrum, which is corrected for foreground MW reddening, Ly$\alpha$ absorption, and radial velocity. The models are normalized to the STIS spectrum over the wavelength range 1420–1500 Å. Figure 5: Comparison of normalized FUV spectra of SSCs containing VMS. The spectra are ordered in increasing age and $Z$ from top to bottom. All the spectra, except for the Sunburst Cluster, were obtained with the STIS G140L grating. The vertical dashed line represents the maximum wind velocity of $-3300$ km s-1 measured for the C iv P Cygni absorption in Mrk 71-A. Comparison of the C&B models with the observed Mrk 71-A spectrum shows that O v wind absorption is only predicted to be present for ages $<2.5$ Myr (Wofford et al., 2023) and thus we do not consider models with older ages. The He ii $\lambda 1640$ emission strength is under-predicted in all the models we consider. The strongest He ii emission occurs for ages between 0.9–1.1 Myr and we show the model fit for 1 Myr in Fig. 4 for upper limits of 300 and 600 M⊙. There is very little difference between the predicted spectra for the two mass upper limits or the age range 0.9–1.1 Myr. The C iv P Cygni emission component is well-fitted but the predicted absorption component is too deep and the wind velocity is too low. We have considered whether the weakness of the C iv P Cygni absorption in the data could be due to dilution by a larger nebular continuum contribution, given the high surface brightness of the high density nebular gas (Section 3). We increased the nebular continuum flux in the model by a factor of 2 at 1500 Å (representing 75% of the stellar flux) and find that the depth of the residual absorption decreases from 0.79 to 0.84 of the continuum level, compared to the observed value of 0.92 for the C iv absorption in the Mrk 71-A spectrum. Increasing the nebular contribution by this amount does dilute the C iv P Cygni absorption but not by the required amount. 
A larger nebular continuum contribution will also affect the UV continuum shape by increasing the continuum flux beyond 1500 Å relative to shorter wavelengths. This effect is not seen in the observed spectra. The O v wind absorption is well-matched in strength but there is a velocity offset and the emission component is too strong. Finally, N v is too weak and narrow although the large damped Ly$\alpha$ correction renders this feature uncertain in the observations. The narrow widths of the C iv absorption and He ii emission indicate that the wind outflow velocities at $Z=0.002$ in the models are underestimated. We also compare the Mrk 71-A STIS spectrum with C&B models for $Z=0.004$ in the lower panel of Fig. 4. The He ii emission is under-predicted again, as expected, and the C iv P Cygni emission and absorption are too strong, presumably because of the higher metallicity. In summary, the best-fitting SPS model is for $Z=0.002$ and $1\pm 1$ Myr. Models at these ages produce the strongest predicted He ii although the strength and width are not well-matched. The derived age is in excellent agreement with the value of $\leq 1$ Myr estimated by Drissen et al. (2000). Overall, the C&B models are not able to reproduce most of the stellar wind features in the Mrk 71-A FUV spectrum. In particular, the He ii flux and line width are both under-predicted. This is in contrast to the excellent fit to the spectrum of NGC 3125-A1 at LMC metallicity by Wofford et al. (2023) using the C&B models. We note that the C&B models are incomplete because they do not include tailored atmosphere models for VMS with optically thick winds. Instead they rely on the wm-basic O star atmosphere models to represent VMS at young ages and these have optically-thin winds. As shown by Crowther et al. (2016), He ii $\lambda 1640$ emission is exclusively produced by VMS at ages of 1–2 Myr, and this feature will be too weak and too narrow in SPS models that do not include optically thick winds for VMS. For NGC 3125-A1, Wofford et al. (2023) determine an age of 2.2 Myr and find that WR stars with masses $>100$ M⊙ start to appear at this early age in the models with an upper mass limit of 300 M⊙ and $Z=0.008$. Thus, the C&B models switch to using optically thick wind models to account for these late WN (WNL) stars, and this will enhance the He ii emission line strength sufficiently to agree with the observations. WR stars are lacking at all ages in the $Z=0.002$ C&B models.

## 5 Discussion

### 5.1 Comparison with other SSC VMS Spectra

In Fig. 5 we compare the FUV spectrum of Mrk 71-A with all known examples of VMS in SSCs. The spectra are ordered by increasing age and $Z$ from top to bottom. The STIS spectra of NGC 5253-5 (Smith et al., 2016), R136a (Crowther et al., 2016) and NGC 3125-A1 (Wofford et al., 2014) are shown together with the VLT MUSE and X-Shooter spectrum of the $z=2.37$ Sunburst cluster (source 5.1; Meštrić et al., 2023). The STIS spectra have been corrected for damped Ly$\alpha$ absorption, continuum-normalized to aid comparison, and aligned in wavelength using the He ii emission feature. The blue compact dwarf galaxy NGC 5253 hosts a central young starburst containing 3 SSCs (Smith et al., 2020). The cluster NGC 5253-5 coincides with the peak of the H$\alpha$ emission in the galaxy and is visible at FUV wavelengths. R136a is the resolved, central ionizing cluster of the 30 Doradus H ii region in the LMC and the STIS spectrum from Crowther et al. (2016) represents the integrated light of R136a.
The cluster A1 in the blue compact dwarf galaxy NGC 3125 is well known for having very strong He ii $\lambda 1640$ emission (Chandar et al., 2004) that could not be modeled with SPS models assuming a Wolf-Rayet origin (Wofford et al., 2014). As described in the previous section, Wofford et al. (2023) have successfully fitted the FUV spectrum with C&B models including VMS. The final object shown in Fig. 5 is the cluster (source 5.1) in the lensed Sunburst Arc galaxy at $z=2.37$ from Meštrić et al. (2023) who identify VMS spectral features and fit the spectrum using the Martins & Palacios (2022) SPS models including VMS. A metallicity of $Z=0.5\,Z_{\sun}$ was derived by Chisholm et al. (2019) and $0.3\,Z_{\sun}$ by Pascale et al. (2023). We now compare the VMS spectral features shown in Fig. 5 with the aim of providing insights on the cluster winds at young ages and as a function of metallicity. It is clear that Mrk 71-A has the lowest metallicity because the C iv P Cygni absorption feature is very weak compared to the other clusters, which all have LMC-like metallicities. Chisholm et al. (2019) show that the C iv absorption strength is a good metallicity indicator using Starburst99 SPS models (Leitherer et al., 1999). This dependence is also shown in Fig. 4 for the C&B models. The blue edge of the C iv absorption profile is one of the main observational diagnostics of stellar wind velocities in O stars. The scaling of the wind terminal velocity $v_{\infty}$ with $Z$ is not well constrained from observations because of the difficulty of measuring stellar wind velocities at low $Z$ where the C iv absorption is weak and unsaturated. For SPS modeling as a function of $Z$, the relationship of Leitherer et al. (1992) derived from radiatively-driven wind theory is usually adopted with $v_{\infty}\propto Z^{0.13}$. We can examine the $Z$ dependence by comparing wind velocities in Fig. 5. The dashed vertical line at $-3300$ km s-1 represents the measured maximum wind velocity $v_{\rm edge}$ for Mrk 71-A. It can be seen that the local clusters at LMC-like metallicity have very similar wind velocities and thus there appears to be little if any scaling of wind velocity with $Z$. The Sunburst Cluster at $z=2.37$ does, however, appear to have a lower maximum wind velocity of $-2300$ km s-1. Empirical measurements of wind velocities in the literature for individual O stars as a function of $Z$ show contrasting results. The large HST program Ultraviolet Legacy Library of Young Stars as Essential Standards (ULLYSES; https://ullyses.stsci.edu/index.html; Roman-Duval et al., 2020) is set to improve our understanding of OB stars as a function of $Z$ by the analysis of the UV spectra for a significant number of OB stars in Local Group galaxies. Hawcroft et al. (2023) used 149 OB stars in the LMC and SMC from the ULLYSES dataset to measure the dependence of the wind terminal velocity on $Z$. They find that $v_{\infty}\propto Z^{0.22}$, which is steeper than the theoretical prediction of $v_{\infty}\propto Z^{0.13}$ (Leitherer et al., 1992). The earlier study of Garcia et al. (2014) determined wind velocities for 8 OB stars in IC 1613 ($Z=0.13\,Z_{\sun}$) and found no clear differences between IC 1613, SMC or LMC OB stars. ULLYSES should improve on this small sample by increasing the dataset to $\sim 30$ OB stars in Local Group galaxies below SMC metallicity.
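As a rough numerical illustration (our own arithmetic, not a published result), these two scalings predict only a modest change in terminal velocity between LMC-like metallicity ($0.4\,Z_{\sun}$) and that of Mrk 71-A ($\simeq 0.16\,Z_{\sun}$):

$\frac{v_{\infty}(0.16\,Z_{\sun})}{v_{\infty}(0.4\,Z_{\sun})}=\Bigl(\frac{0.16}{0.4}\Bigr)^{0.13}\approx 0.89 \quad\mbox{or}\quad \Bigl(\frac{0.16}{0.4}\Bigr)^{0.22}\approx 0.82\,,$

so if either scaling applied, a fiducial $-3300$ km s-1 at LMC metallicity would correspond to only about $-2900$ to $-2700$ km s-1 for Mrk 71-A.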
From the above, and given that C iv absorption originates in O stars, we would expect to see a lower maximum wind velocity for Mrk 71-A because of its lower $Z$, but this is not seen: the velocity is instead similar to the LMC cluster values. Garcia et al. (2014) suggest that the similar wind velocities they find for IC 1613 OB stars compared to similar stars in the LMC and SMC could be due to an enhanced Fe abundance in this galaxy. Drissen et al. (2001) discuss the Fe abundance in the Luminous Blue Variable star V1 in Mrk 71 and find that it is SMC-like from modeling the strengths of the Fe ii absorption lines. We note, however, that Fe is the main driver of mass loss in the inner wind while C, N and O are responsible for the wind acceleration to terminal velocity in the supersonic regime (Vink, 2022). We now discuss the strength and width of the He ii $\lambda 1640$ emission line, which is found exclusively in VMS (Crowther et al., 2016) at very young ages ($<2$ Myr) and is a good indicator of the wind density and velocity in the He ii formation region. The He ii emission line profiles for Mrk 71-A, R136a and the Sunburst Cluster are remarkably similar in strength and width with EWs of $-3.0$ (Mrk 71-A; Table 1), $-4.4$ (R136a; Crowther et al., 2016) and $-3.0$ Å (Sunburst Cluster; Meštrić et al., 2023). Likewise, the FWHMs are 1770, 1970 and 1610 km s-1. These similarities suggest that despite the difference in metallicity and redshift, the SSC winds in the VMS phase have comparable mass loss rates and velocities or feedback efficiencies. This is in contrast to the weak C iv P Cygni absorption feature in Mrk 71-A, which is dominated by O stars, suggesting a mass loss rate much lower than at LMC metallicity although the C iv wind velocities are similar. The other two SSCs NGC 5253-5 and NGC 3125-A1 have stronger and narrower He ii emission features. In Section 4.2, we noted that the C&B models at 2.2 Myr for NGC 3125-A1 contain VMS and WNL stars with the presence of WNL stars probably enhancing the He ii emission line strength. We thus speculate that the stronger and narrower He ii emission feature shared by these two SSCs may be due to both VMS and WNL stars whereas the He ii emission line profiles for Mrk 71-A, R136a and the Sunburst Cluster are produced by VMS only. We note that Wofford et al. (2023) rule out a nebular contribution to He ii in NGC 3125-A1. The auroral O iii] $\lambda\lambda 1661,1666$ emission lines are present in the spectra of Mrk 71-A, NGC 5253-5 and the Sunburst Cluster (Fig. 5). Both Mrk 71-A and NGC 5253-5 are immersed in ultracompact H ii regions whereas R136a and NGC 3125-A1, which do not show O iii], have been cleared of natal gas. This suggests that the Sunburst Cluster may be in the ultracompact H ii region phase. The strongest emission feature in the FUV spectra of the local SSCs in Fig. 5 is N v $\lambda 1240$; the strength of this feature is only apparent when the damped Lyman-$\alpha$ absorption feature has been removed. We note that the two oldest clusters NGC 3125-A1 and the Sunburst show N iv] $\lambda 1486$ emission in agreement with the VMS SPS model predictions of Martins & Palacios (2022). These models predict a strong nitrogen enrichment after 1.5 Myr that boosts the strength of N iv] $\lambda 1486$. O v $\lambda 1371$ is present in the local SSCs and appears as a blue-shifted absorption component with little to no emission, in contrast to SPS models that predict strong emission (Fig. 4). O v $\lambda 1371$ is absent in the Sunburst cluster and Meštrić et al.
(2023) argue that this is due to the older age of the cluster. Overall, the similar emission line strengths and wind velocities for the SSC spectra shown in Fig. 5 argue for a weak dependency on metallicity. We find that the wind velocity scaling with $Z$ is close to constant, weaker even than the $Z^{0.13}$ dependence predicted by Leitherer et al. (1992). The wind mass-loss rates also show little if any scaling with $Z$ as evidenced by the similar He ii emission line profiles at $Z=0.16$ and $0.4\,Z_{\sun}$. This can be explained by the fact that VMS are expected to be close to their Eddington limits, so that their mass-loss rates are strongly enhanced and the metallicity dependence of the mass loss decreases (Vink et al., 2011; Chen et al., 2015). For comparison, the mass-loss rates of optically thin O star winds are predicted to scale as $Z^{0.7-0.8}$ (Leitherer et al., 1992; Vink et al., 2001). The lack of scaling of the cluster wind parameters suggests that the VMS feedback efficiency is largely independent of metallicity over the range investigated for clusters at ages $<3$ Myr.

### 5.2 The local environment of Mrk 71-A

We now consider the local environment of Mrk 71-A to provide an overall view of a young SSC embedded in its natal gas at low $Z$. At the center is the 1 Myr old SSC A (Sect. 4.2) with a radius of 0.9 pc (Sect. 2.1) and a mass of $1.4\times 10^{5}$ M⊙ (Micheva et al., 2017). Mrk 71-A is ionizing a giant H ii region with a density up to $\sim 7400$ cm-3 in the immediate vicinity of the SSC and has a thermal pressure $P/k$ of $\sim 2\times 10^{8}$ cm-3 K, derived from the UV [Si iii] lines (Sect. 3). Neutral hydrogen is detected along the line of sight and shown to be associated with Mrk 71-A with a high column density of $N$(H i) $=10^{22.2}$ cm-2 but little dust (Sect. 4.1). Oey et al. (2017) detect a compact CO cloud with a size of 7 pc and mass of $10^{5}$ M⊙ coincident with Mrk 71-A. The presence of high density neutral and molecular gas co-located with the SSC is consistent with the findings of Oey et al. (2023) who use FUV nebular C iv imaging of Mrk 71-A to study the mechanical feedback. They show that the observed diffuse C iv surface brightness and its spatial distribution for the SSC A environment are both consistent with model predictions that it is undergoing strong radiative cooling, and driving a momentum-conserving shell due to catastrophic cooling. Feedback is thus dominated by radiation from the SSC, including from our newly identified VMS. This supports the scenario obtained by Komarova et al. (2021): The giant molecular cloud represents the remnant gas out of which the young SSC formed. It is being fragmented by radiative feedback from the cluster, forming the radiation-driven superwind. The nature of the Lyman-continuum driven wind implies that there must be optically thin channels through which the Lyman continuum photons can escape, and the covering factor of the high column density clouds will then be less than unity. Pascale et al. (2023) model the Sunburst cluster, its escaping Lyman continuum photons (Rivera-Thorsen et al., 2019) and ionized nebula. They find that the cluster is surrounded by highly pressurized, dense clouds ($n_{\rm e}\sim 10^{5}$ cm-3), which should have large neutral columns ($N$(H i) $>10^{22.5}$ cm-2) to survive rapid ejection by radiation pressure. The parameters we find for Mrk 71-A bear similarities to this model, particularly our measured high H i column density of $N$(H i) $=10^{22.2}$ cm-2 and $n_{\rm e}=7400\pm 3900$ cm-3.
## 6 Summary and Conclusions

We have presented STIS and FOS FUV spectra of the local, low metallicity GP analog Mrk 71-A with the aims of identifying the massive star population, verifying the young age, investigating the properties of stellar winds at low $Z$ and studying the embedded natal gas associated with this SSC. The FOS spectrum (Drissen et al., 2000) shows that Mrk 71-A is a rare example of a high excitation, local starburst region with nebular C iv and strong C iii] emission (EW$=20$ Å; Table 1). We are able to uncover the stellar spectral features with our deep and higher resolution STIS spectrum and show that the presence of O v $\lambda 1371$ and broad He ii $\lambda 1640$ emission, together with the absence of Si iv $\lambda 1400$ P Cygni emission, indicates that VMS are present in this very young cluster. We compare the STIS spectrum of Mrk 71-A with the Charlot & Bruzual suite of SPS models for upper limits of 300 and 600 M⊙ and $Z=0.002$ and 0.004. For $Z=0.002$, we find that the He ii emission line strength is under-predicted in all the models and is strongest for ages of 0.9–1.1 Myr. There is very little difference in the fits for upper mass limits of 300 or 600 M⊙ (Fig. 4). Overall, the He ii emission in the models is too weak and narrow because wm-basic O star atmosphere models are adopted to represent VMS and these have optically-thin winds. The C iv P Cygni absorption is too deep and the wind velocity is too low for $Z=0.002$ whereas the $Z=0.004$ models provide a poorer fit to the C iv P Cygni feature because the metallicity is too high. We derive an age based on the C&B models of $1\pm 1$ Myr, which is in excellent agreement with that estimated by Drissen et al. (2000). We compare the low metallicity STIS spectrum of Mrk 71-A with all known examples of SSCs containing VMS: NGC 5253-5 (Smith et al., 2016), R136a (Crowther et al., 2016), NGC 3125-A1 (Wofford et al., 2014) and the $z=2.37$ Sunburst cluster (Meštrić et al., 2023). The comparison spectra have LMC-like metallicity and it is clear that Mrk 71-A has the lowest $Z$ because the C iv P Cygni absorption is weak in comparison. We examine the $Z$ dependence of the cluster wind velocity and find that there appears to be little, if any, scaling with $Z$, despite theoretical predictions (Leitherer et al., 1992) and recent measurements (Hawcroft et al., 2023). The stellar He ii $\lambda 1640$ emission line profiles in Mrk 71-A, R136a and the Sunburst cluster are very similar in terms of strength and width and indicate similar wind densities and velocities irrespective of $Z$. We conclude that the VMS winds over $Z=0.16$–$0.4\,Z_{\sun}$ have comparable mass-loss rates and velocities or feedback efficiencies. This agrees with the predictions of Vink et al. (2011) that the mass-loss rates of the optically thick VMS winds will be enhanced close to the Eddington limit and the metallicity dependence will decrease. Although some SPS models now extend to upper mass limits of 300 M⊙ or higher, they lack tailored model atmospheres for VMS with their high mass loss rates and decreased metallicity dependence. The only example to date is the LMC metallicity models of Martins & Palacios (2022). More VMS models are clearly needed to realistically model JWST spectra of low metallicity star-forming galaxies at high redshift when VMS, if present, will dominate the stellar wind and ionizing feedback in young globular clusters.
The STIS spectrum of Mrk 71-A shows an unusually strong damped Lyman-$\alpha$ absorption feature with $N$(H i) $=10^{22.2}$ cm-2 that is associated with the SSC natal gas. We suggest that the covering factor of the H i must be less than one to allow the Lyman continuum photons to escape. The adiabatic cluster wind is expected to be suppressed by catastrophic cooling because of the high densities; instead, a Lyman continuum-driven wind is observed (Oey et al., 2017; Komarova et al., 2021). The model of the ionized nebula associated with the Sunburst cluster put forward by Pascale et al. (2023) in which the cluster is surrounded by highly pressurized clouds with large neutral columns has many similarities to the properties we can measure for Mrk 71-A. We thank the referee for their astute and constructive comments on the manuscript. We thank Uros Meštrić and Eros Vanzella for kindly providing us with their Sunburst cluster spectrum. We thank Fabrice Martins for sharing his VMS models and Calum Hawcroft for useful discussions on O star winds. BLJ is thankful for support from the European Space Agency (ESA). M.H. is a Fellow of the Knut & Alice Wallenberg Foundation. This work made use of v2.2.1 of the Binary Population and Spectral Synthesis (BPASS) models as described in Eldridge et al. (2017) and Stanway & Eldridge (2018). Based on observations made with the NASA/ESA Hubble Space Telescope, at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. These observations are associated with program #16261. Support for program #16261 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. The data presented in this paper can be obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute. The specific observations analyzed can be accessed via https://doi.org/10.17909/ye2e-af62. HST (STIS, FOS)

## References

* Adamo et al. (2023) Adamo, A., Usher, C., Pfeffer, J., & Claeyssens, A. 2023, MNRAS, 525, L6, doi: 10.1093/mnrasl/slad084 * Asplund et al. (2009) Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, ARA&A, 47, 481, doi: 10.1146/annurev.astro.46.060407.145222 * Bestenlehner et al. (2014) Bestenlehner, J. M., Gräfener, G., Vink, J. S., et al. 2014, A&A, 570, A38, doi: 10.1051/0004-6361/201423643 * Binette et al. (2009) Binette, L., Drissen, L., Ubeda, L., et al. 2009, A&A, 500, 817, doi: 10.1051/0004-6361/200811132 * Bruzual & Charlot (2003) Bruzual, G., & Charlot, S. 2003, MNRAS, 344, 1000, doi: 10.1046/j.1365-8711.2003.06897.x * Calzetti et al. (2015) Calzetti, D., Johnson, K. E., Adamo, A., et al. 2015, ApJ, 811, 75, doi: 10.1088/0004-637X/811/2/75 * Cardelli et al. (1989) Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, ApJ, 345, 245, doi: 10.1086/167900 * Chabrier (2003) Chabrier, G. 2003, PASP, 115, 763, doi: 10.1086/376392 * Chandar et al. (2004) Chandar, R., Leitherer, C., & Tremonti, C. A. 2004, ApJ, 604, 153 * Chen et al. (2015) Chen, Y., Bressan, A., Girardi, L., et al. 2015, MNRAS, 452, 1068, doi: 10.1093/mnras/stv1281 * Chen et al. (2023) Chen, Y., Jones, T., Sanders, R., et al. 2023, Nature Astronomy, doi: 10.1038/s41550-023-01953-7 * Chisholm et al. (2019) Chisholm, J., Rigby, J. R., Bayliss, M., et al.
2019, ApJ, 882, 182, doi: 10.3847/1538-4357/ab3104 * Claeyssens et al. (2023) Claeyssens, A., Adamo, A., Richard, J., et al. 2023, MNRAS, 520, 2180, doi: 10.1093/mnras/stac3791 * Crowther et al. (2016) Crowther, P. A., Caballero-Nieves, S. M., Bostroem, K. A., et al. 2016, MNRAS, 458, 624, doi: 10.1093/mnras/stw273 * Drew (1989) Drew, J. E. 1989, ApJS, 71, 267, doi: 10.1086/191374 * Drissen et al. (2001) Drissen, L., Crowther, P. A., Smith, L. J., et al. 2001, ApJ, 546, 484, doi: 10.1086/318264 * Drissen et al. (2000) Drissen, L., Roy, J.-R., Robert, C., Devost, D., & Doyon, R. 2000, AJ, 119, 688, doi: 10.1086/301204 * Eldridge et al. (2017) Eldridge, J. J., Stanway, E. R., Xiao, L., et al. 2017, PASA, 34, e058, doi: 10.1017/pasa.2017.51 * Elmegreen (2018) Elmegreen, B. G. 2018, ApJ, 869, 119, doi: 10.3847/1538-4357/aaed45 * Elmegreen & Efremov (1997) Elmegreen, B. G., & Efremov, Y. N. 1997, ApJ, 480, 235, doi: 10.1086/303966 * Garcia et al. (2014) Garcia, M., Herrero, A., Najarro, F., Lennon, D. J., & Alejandro Urbaneja, M. 2014, ApJ, 788, 64, doi: 10.1088/0004-637X/788/1/64 * Garnett et al. (1995) Garnett, D. R., Skillman, E. D., Dufour, R. J., et al. 1995, ApJ, 443, 64, doi: 10.1086/175503 * Gonzalez-Delgado et al. (1994) Gonzalez-Delgado, R. M., Perez, E., Tenorio-Tagle, G., et al. 1994, ApJ, 437, 239, doi: 10.1086/174992 * Gordon et al. (2003) Gordon, K. D., Clayton, G. C., Misselt, K. A., Landolt, A. U., & Wolff, M. J. 2003, ApJ, 594, 279, doi: 10.1086/376774 * Hack et al. (2018) Hack, W., Dencheva, N., Sontag, C., Sosey, M., & Droettboom, M. 2018, in STIS Python User Tools * Hamann & Gräfener (2004) Hamann, W. R., & Gräfener, G. 2004, A&A, 427, 697, doi: 10.1051/0004-6361:20040506 * Hawcroft et al. (2023) Hawcroft, C., Sana, H., Mahy, L., et al. 2023, arXiv e-prints, arXiv:2303.12165, doi: 10.48550/arXiv.2303.12165 * Hernandez et al. (2021) Hernandez, S., Aloisi, A., James, B. L., et al. 2021, ApJ, 908, 226, doi: 10.3847/1538-4357/abd6c4 * HI4PI Collaboration et al. (2016) HI4PI Collaboration, Ben Bekhti, N., Flöer, L., et al. 2016, A&A, 594, A116, doi: 10.1051/0004-6361/201629178 * Izotov et al. (1997) Izotov, Y. I., Thuan, T. X., & Lipovetsky, V. A. 1997, ApJS, 108, 1, doi: 10.1086/312956 * James et al. (2016) James, B. L., Auger, M., Aloisi, A., Calzetti, D., & Kewley, L. 2016, ApJ, 816, 40, doi: 10.3847/0004-637X/816/1/40 * Komarova et al. (2021) Komarova, L., Oey, M. S., Krumholz, M. R., et al. 2021, ApJ, 920, L46, doi: 10.3847/2041-8213/ac2c09 * Krogager (2018) Krogager, J.-K. 2018, VoigtFit: Absorption line fitting for Voigt profiles, Astrophysics Source Code Library, record ascl:1811.016. http://ascl.net/1811.016 * Kroupa (2001) Kroupa, P. 2001, MNRAS, 322, 231 * Kruijssen (2015) Kruijssen, J. M. D. 2015, MNRAS, 454, 1658, doi: 10.1093/mnras/stv2026 * Leitherer et al. (2018) Leitherer, C., Byler, N., Lee, J. C., & Levesque, E. M. 2018, ApJ, 865, 55, doi: 10.3847/1538-4357/aada84 * Leitherer et al. (2010) Leitherer, C., Ortiz Otálvaro, P. A., Bresolin, F., et al. 2010, ApJS, 189, 309, doi: 10.1088/0067-0049/189/2/309 * Leitherer et al. (1992) Leitherer, C., Robert, C., & Drissen, L. 1992, ApJ, 401, 596, doi: 10.1086/172089 * Leitherer et al. (2011) Leitherer, C., Tremonti, C. A., Heckman, T. M., & Calzetti, D. 2011, AJ, 141, 37, doi: 10.1088/0004-6256/141/2/37 * Leitherer et al. (1999) Leitherer, C., Schaerer, D., Goldader, J. D., et al. 1999, ApJS, 123, 3 * Luridiana et al. (2015) Luridiana, V., Morisset, C., & Shaw, R. A. 
2015, A&A, 573, A42, doi: 10.1051/0004-6361/201323152 * Martins & Palacios (2022) Martins, F., & Palacios, A. 2022, A&A, 659, A163, doi: 10.1051/0004-6361/202243048 * Meštrić et al. (2023) Meštrić, U., Vanzella, E., Upadhyaya, A., et al. 2023, A&A, 673, A50, doi: 10.1051/0004-6361/202345895 * Micheva et al. (2019) Micheva, G., Christian Herenz, E., Roth, M. M., Östlin, G., & Girichidis, P. 2019, A&A, 623, A145, doi: 10.1051/0004-6361/201834838 * Micheva et al. (2017) Micheva, G., Oey, M. S., Jaskot, A. E., & James, B. L. 2017, ApJ, 845, 165, doi: 10.3847/1538-4357/aa830b * Mingozzi et al. (2022) Mingozzi, M., James, B. L., Arellano-Córdova, K. Z., et al. 2022, ApJ, 939, 110, doi: 10.3847/1538-4357/ac952c * Monreal-Ibero et al. (2010) Monreal-Ibero, A., Vílchez, J. M., Walsh, J. R., & Muñoz-Tuñón, C. 2010, A&A, 517, A27, doi: 10.1051/0004-6361/201014154 * Mowla et al. (2022) Mowla, L., Iyer, K. G., Desprez, G., et al. 2022, ApJ, 937, L35, doi: 10.3847/2041-8213/ac90ca * Oey et al. (2023) Oey, M., Sawant, A., Danehkar, A., et al. 2023, ApJ, in preparation * Oey et al. (2017) Oey, M. S., Herrera, C. N., Silich, S., et al. 2017, ApJ, 849, L1, doi: 10.3847/2041-8213/aa9215 * Östlin et al. (2021) Östlin, G., Rivera-Thorsen, T. E., Menacho, V., et al. 2021, ApJ, 912, 155, doi: 10.3847/1538-4357/abf1e8 * Pascale et al. (2023) Pascale, M., Dai, L., McKee, C. F., & Tsang, B. T. H. 2023, arXiv e-prints, arXiv:2301.10790, doi: 10.48550/arXiv.2301.10790 * Pauldrach et al. (2001) Pauldrach, A. W. A., Hoffmann, T. L., & Lennon, M. 2001, A&A, 375, 161, doi: 10.1051/0004-6361:20010805 * Pérez et al. (2001) Pérez, E., González Delgado, R., & Vílchez, J. M. 2001, Astrophysics and Space Science Supplement, 277, 83, doi: 10.1023/A:1012735812409 * Plat et al. (2019) Plat, A., Charlot, S., Bruzual, G., et al. 2019, MNRAS, 490, 978, doi: 10.1093/mnras/stz2616 * Prichard et al. (2022) Prichard, L., Welty, D., & Jones, A. 2022, in STIS Instrument Handbook for Cycle 30 v. 21 * Reines et al. (2010) Reines, A. E., Nidever, D. L., Whelan, D. G., & Johnson, K. E. 2010, ApJ, 708, 26, doi: 10.1088/0004-637X/708/1/26 * Rigby et al. (2015) Rigby, J. R., Bayliss, M. B., Gladders, M. D., et al. 2015, ApJ, 814, L6, doi: 10.1088/2041-8205/814/1/L6 * Rivera-Thorsen et al. (2017a) Rivera-Thorsen, T. E., Östlin, G., Hayes, M., & Puschnig, J. 2017a, ApJ, 837, 29, doi: 10.3847/1538-4357/aa5d0a * Rivera-Thorsen et al. (2017b) Rivera-Thorsen, T. E., Dahle, H., Gronke, M., et al. 2017b, A&A, 608, L4, doi: 10.1051/0004-6361/201732173 * Rivera-Thorsen et al. (2019) Rivera-Thorsen, T. E., Dahle, H., Chisholm, J., et al. 2019, Science, 366, 738, doi: 10.1126/science.aaw0978 * Roman-Duval et al. (2020) Roman-Duval, J., Proffitt, C. R., Taylor, J. M., et al. 2020, Research Notes of the AAS, 4, 205, doi: 10.3847/2515-5172/abca2f * Roy et al. (1992) Roy, J.-R., Aube, M., McCall, M. L., & Dufour, R. J. 1992, ApJ, 386, 498, doi: 10.1086/171035 * Sánchez et al. (2022) Sánchez, S. F., Barrera-Ballesteros, J. K., Lacerda, E., et al. 2022, ApJS, 262, 36, doi: 10.3847/1538-4365/ac7b8f * Schaerer et al. (2022) Schaerer, D., Izotov, Y. I., Worseck, G., et al. 2022, A&A, 658, L11, doi: 10.1051/0004-6361/202243149 * Schlafly & Finkbeiner (2011) Schlafly, E. F., & Finkbeiner, D. P. 2011, ApJ, 737, 103, doi: 10.1088/0004-637X/737/2/103 * Senchyna et al. (2021) Senchyna, P., Stark, D. P., Charlot, S., et al. 2021, MNRAS, 503, 6112, doi: 10.1093/mnras/stab884 * Senchyna et al. (2017) Senchyna, P., Stark, D. P., Vidal-García, A., et al. 
2017, MNRAS, 472, 2608, doi: 10.1093/mnras/stx2059 * Senchyna et al. (2022) Senchyna, P., Stark, D. P., Charlot, S., et al. 2022, ApJ, 930, 105, doi: 10.3847/1538-4357/ac5d38 * Shivaei et al. (2020) Shivaei, I., Reddy, N., Rieke, G., et al. 2020, ApJ, 899, 117, doi: 10.3847/1538-4357/aba35e * Silich & Tenorio-Tagle (2013) Silich, S., & Tenorio-Tagle, G. 2013, ApJ, 765, 43, doi: 10.1088/0004-637X/765/1/43 * Silich & Tenorio-Tagle (2017) —. 2017, MNRAS, 465, 1375, doi: 10.1093/mnras/stw2879 * Silich & Tenorio-Tagle (2018) —. 2018, MNRAS, 478, 5112, doi: 10.1093/mnras/sty1383 * Silich et al. (2004) Silich, S., Tenorio-Tagle, G., & Rodríguez-González, A. 2004, ApJ, 610, 226, doi: 10.1086/421702 * Sirressi et al. (2022) Sirressi, M., Adamo, A., Hayes, M., et al. 2022, AJ, 164, 208, doi: 10.3847/1538-3881/ac9311 * Smith et al. (2020) Smith, L. J., Bajaj, V., Ryon, J., & Sabbi, E. 2020, ApJ, 896, 84, doi: 10.3847/1538-4357/ab8f94 * Smith et al. (2016) Smith, L. J., Crowther, P. A., Calzetti, D., & Sidoli, F. 2016, ApJ, 823, 38, doi: 10.3847/0004-637X/823/1/38 * Smith et al. (2006) Smith, L. J., Westmoquette, M. S., Gallagher, J. S., et al. 2006, MNRAS, 370, 513, doi: 10.1111/j.1365-2966.2006.10507.x * Stanway & Eldridge (2018) Stanway, E. R., & Eldridge, J. J. 2018, MNRAS, 479, 75, doi: 10.1093/mnras/sty1353 * Tolstoy et al. (1995) Tolstoy, E., Saha, A., Hoessel, J. G., & McQuade, K. 1995, AJ, 110, 1640, doi: 10.1086/117637 * Vanzella et al. (2020) Vanzella, E., Caminha, G. B., Calura, F., et al. 2020, MNRAS, 491, 1093, doi: 10.1093/mnras/stz2286 * Vanzella et al. (2022) Vanzella, E., Castellano, M., Bergamini, P., et al. 2022, A&A, 659, A2, doi: 10.1051/0004-6361/202141590 * Vink (2022) Vink, J. S. 2022, ARA&A, 60, 203, doi: 10.1146/annurev-astro-052920-094949 * Vink et al. (2001) Vink, J. S., de Koter, A., & Lamers, H. J. G. L. M. 2001, A&A, 369, 574, doi: 10.1051/0004-6361:20010127 * Vink et al. (2011) Vink, J. S., Muijres, L. E., Anthonisse, B., et al. 2011, A&A, 531, A132, doi: 10.1051/0004-6361/201116614 * Westmoquette et al. (2014) Westmoquette, M. S., Bastian, N., Smith, L. J., et al. 2014, ApJ, 789, 94, doi: 10.1088/0004-637X/789/2/94 * Wofford et al. (2014) Wofford, A., Leitherer, C., Chandar, R., & Bouret, J.-C. 2014, ApJ, 781, 122, doi: 10.1088/0004-637X/781/2/122 * Wofford et al. (2023) Wofford, A., Sixtos, A., Charlot, S., et al. 2023, MNRAS, 523, 3949, doi: 10.1093/mnras/stad1622
different orders. In short, our theoretical developments indicate that the final expression for the spectral shape of the signal should reduce to a regular function which may be expanded as a series in powers of $1/(\nu/\nu_{c})^{\kappa}$. In addition, the proposal defined in equation (119) enforces the condition that the maximum of the function occurs at the characteristic frequency, $\nu/\nu_{c}=1$, and it also makes it possible to fit simulated data through the choice of the single formulation parameter, $a$ (see for example [5], in which the choice $a=2.8$ fits simulated data describing the spectrum of gravitational radiation determined by numerical simulations using the 'envelope approximation' [78]).

## VII Discussion of Results

There are numerous research works in the literature that address the topic of quantum gravity in a non-commutative environment. However, with regard to the description of the wave function of the Universe based on the formalism of Hořava-Lifshitz quantum gravity and the Wheeler-DeWitt equation, or based on other alternative approaches, the number of articles dealing with this topic is small. Most of the published works deal with the temporal evolution of the scale factor of the Universe, a topic commonly designated as “dynamical equations”. In some articles the authors deal with both topics. We indicate a few references on these topics [79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90]. We emphasize once again that, concerning the wave function of the Universe in a non-commutative environment, most authors, due to the inherent computational difficulties imposed by the formalism, use approximations that significantly limit their conclusions and direct their studies towards formal aspects without numerical or algebraic results. Regarding the dynamical equations, some studies involving non-commutative formulations are more developed but are still limited by significant computational approximations. It is important to mention, although this is not a central topic of the present work, studies involving strings and non-commutative gauge theories, which have contributed significantly to a better understanding of the influence of non-commutative algebra on the deformation of geometric structures, and to understanding the accelerated evolution of our Universe in string theory. Edward Witten pioneered this line of investigation [91] and was followed by many others (see for instance [92, 93] and references therein). In our case, in describing the evolution of the Universe’s wave function, the dynamical equations involving the Universe’s scale factor, and the relic gravitational waves, we focus on solving the resulting equations numerically, without adopting any computational approximation, through an approach based on the Runge-Kutta-Fehlberg method. This method proves to be quite powerful for solving differential equations. Despite the extreme formal complexity of the theory and the resulting equations, this computational approach made possible a broad numerical study of the evolutionary process addressed in the present work, through the exploration of the theory’s parameter space.
This thus enabled a broad exploration of the evolution of the functions $\Psi(\eta)$, $\Psi(\xi)$, $\eta(t)$, and $\xi(t)$ and of a wide class of solutions whose presentation, for reasons of brevity, we limited to just a few figures. The most significant results, involving both the contraction phase and the expansion phase of the branched Universe, indicate an accelerating cosmic expansion.

## VIII Conclusions

The paradigm supporting the theory of renormalization groups (organizing physical phenomena based on energy or distance scales) holds firm in commutative quantum field theories. In their non-commutative counterparts, however, one encounters uncharted territory. One of the features of non-commutative field theories is the mixing of short and long scales. A striking illustration of this phenomenon can be found in UV/IR mixing [94]. In cases where non-commutativity exists at a small scale, the UV/IR mixing effect is anticipated to manifest at an earlier epoch in the Universe’s history, thereby raising new questions about the hierarchy problem. Another striking example pertains to the inhomogeneities within the distribution of large-scale structures and the anisotropies observed in the CMB radiation. These anomalies bear traces of the non-commutative nature of the early Universe. Specifically, the power spectrum of these structures becomes direction-dependent in non-commutative spacetime [95]. In 1947, Hartland Sweet Snyder introduced the concept of quantizing spacetime in a seminal paper [96]. While this paper has received relatively few citations, it sparked one of the most remarkable inquiries in the realm of physics, namely the possibility of discretizing spacetime. In alignment with Snyder’s proposal, the uncertainty principle of Werner Heisenberg suggests that spacetime possesses a non-commutative structure, which can be represented as $[x^{\mu},x^{\nu}]=i\theta^{\mu\nu}\,.$ (120) This non-commutative property implies a minimum scale of order $\sqrt{\theta}$. From a cosmological perspective, to assess the implications of this concept for the dynamics of the Universe, it is most appealing to investigate how the non-commutativity of coordinates affects the deformation of the spacetime algebra. We assume that this equation holds within a comoving frame, a coordinate system in which galaxies are freely falling. In this work we adopt an alternative approach by considering a non-commutative quantum cosmological scenario based on the deformation of a mini-superspace of variables obeying a Poisson algebra. We aim to examine whether such a perspective can help identify the mechanism that drives the acceleration of the Universe [97]. While the results presented here are preliminary, they are promising in suggesting that non-commutative branch-cut gravity offers an algebraic framework that impacts both the statistical distribution and the gravitational dynamics of the matter constituting the primordial Universe. As previously emphasized, the implications of non-commutative algebra are evident in the solutions presented in this study. These implications manifest in various aspects, including the wave function of the early Universe, the dynamical equations involving the cosmic scale factor and its dual counterpart, as well as relic gravitational waves. The results point to a dynamic acceleration driven by a force that, in our view, arises from the reconfiguration of matter in the early Universe due to the algebraic structure of non-commutative geometry.
This structure captures the intrinsic properties of spacetime at short distances, with significant implications for the dynamical symmetries and conventional duality symmetries of quantum spacetime geometry. A seminal aspect of our study involves the propagation of relic gravitational waves in the stages before and after the transition involving our Universe and its mirror counterpart. In our study, we have identified a similar behavior involving relic gravitational waves and the wave function of the Universe, with respect to a topological quantum leap in the branch-cut transition region, addressed in other studies [16, 8, 9]. Based on Bekenstein’s universal upper bound for a localized quantum system, which establishes that $S\leq\frac{2\pi k_{B}RE}{\hbar c}\,,$ (121) with $E$ representing the total energy of a system enclosed within a region of radius $R$ and surface area $A$, we developed a Generalized Heisenberg Uncertainty Principle (GHUP) version of Bekenstein’s criterion [9], establishing a point of contact with Snyder’s predictions, $\Delta S\Delta E\lesssim\frac{\hbar c}{2}\frac{\pi k_{B}\bigl{(}1+\theta\bigr{)}}{\sqrt{\theta}\ell_{P}}\,,$ (122) in which $\sqrt{\theta}$ represents the minimum scale in the non-commutative space-time algebra, so the minimum observable GHUP length is assumed to be $\Delta x\sim\sqrt{\theta}\ell_{P}$. The relic cosmological stochastic background, as expected, is characterized by stability, isotropy and the absence of polarization; its origin lies in fundamental processes that supposedly occurred in the early Universe [5]. Among these processes, quantum vacuum fluctuations, cosmic phase transitions and cosmic strings stand out [73]. And since, as pointed out earlier, the early Universe is opaque to electromagnetic radiation beyond redshift $z\sim 1100$, these relic gravitational waves are the only messengers capable of conveying the prevailing physical conditions of the early Universe. In this contribution, the results obtained for the description of gravitational waves bring a new perspective, in line with the implications of equation (122), which indicates an uncertainty in the primordial boundary conditions as a result of a minimum scale. They are also in line with the implications of the Bekenstein criterion, indicating a quantum leap at the boundary of the branch-cut transition region. In view of the mixture of intensities of the original sources of relic gravitational waves, identified by the distribution of colors associated with the quantities that describe the corresponding power spectra and density parameters, these results also indicate two fundamental aspects. The first concerns the primordial vision of a smooth and isotropic Universe, symmetrical to ours. In this description, the stress-energy must take the form of a perfect fluid fully described by an energy density $\rho_{\epsilon}$ and pressure $p$, in which the variations of the internal equation of state are so small that they make no qualitative difference in the cosmic evolutionary process. This premise seems to collapse when we examine the results obtained in this contribution, which indicate asymmetric Universes, whether in the distribution of intensities of gravitational wave sources, or in the implications of the non-symmetric algebra for the short-distance structure of spacetime.
The second aspect concerns the transition region between the different phases of the Universe, bringing to light the discussion about the possibility of observing signatures corresponding not only to the era marked by the dissociation of matter and radiation in our Universe, but also to the phases associated with the cosmic seeds, sources of gravitational waves, existing in the mirror Universe. In conclusion, the results suggesting a primordial dynamic acceleration of spacetime demonstrate that non-commutative quantum branch-cut gravity provides a viable theoretical alternative to models such as inflation [6, 7] and bouncing [35, 36, 37]. This exploration aligns with a fundamental characteristic of non-commutative algebra, namely the interplay between small and large scales. As a result, if the effects of non-commutative algebra were indeed present in the primordial Universe, it is reasonable to anticipate their persistence in the present day. A fundamental question then emerges: Can inhomogeneities in the distribution of large-scale structures and anisotropies in the stochastic gravitational wave background (SGWB), if they indeed exist, carry traces of the non-commutative nature of the early Universe? Our results indicate a scenario during the early stages of the Universe characterized by an SGWB distribution that deviates significantly from the homogeneity expected to be observed today. As a consequence, inhomogeneities in the SGWB distribution, if influenced by traces of non-commutativity, could serve as crucial windows into the initial phases of the early Universe, preceding the recombination era. However, answering this question necessitates further observations, and this theme will remain the primary focus of our ongoing investigations. A final word regarding the results obtained to describe relic gravitational waves in a non-commutative formulation: the regular function that describes the spectral shape of the signal is the result, among many other functional possibilities, of the Lagrangian structure of the branched quantum gravity proposal. There are still aspects to be worked out in the future, in order to introduce a functional formulation, albeit regular, that makes the spectral shape of the signal explicitly and parametrically dependent on the non-commutative structure. Although this dependence is implicitly contained in the functional structure of expression (119), it deserves a dedicated and consistent treatment.

## IX Acknowledgements

P.O.H. acknowledges financial support from PAPIIT-DGAPA (IN116824). F.W. is supported by the U.S. National Science Foundation under Grant PHY-2012152.

## Appendix A

In what follows, we will elaborate on how to find a transformation from non-commuting coordinates to commuting ones. We assume that $u$ and $v$ still commute, i.e., $\sigma=0$. Let us define the commuting and non-commuting coordinates respectively as $(\tilde{x}_{i})=(\tilde{u},\tilde{p}_{u},\tilde{v},\tilde{p}_{v})\quad\mbox{and}\quad(x_{i})=(u,p_{u},v,p_{v})\,.$ (123) The commuting and non-commuting coordinates satisfy the respective Poisson brackets $\left\{\tilde{x}_{i},\tilde{x}_{j}\right\}=\tilde{g}_{ij}\quad\mbox{and}\quad\left\{x_{i},x_{j}\right\}=g_{ij}\,,$ (124) where on the right side of these equations we have the symplectic metrics, satisfying respectively $\tilde{g}_{ji}=-\tilde{g}_{ij}\quad\mbox{and}\quad g_{ji}=-g_{ij}\,.$ (125) This antisymmetry property is the reason why we can call it a symplectic space.
The matrix structure of these metrics is

$\tilde{g}=\begin{pmatrix}0&1&0&0\\ -1&0&0&0\\ 0&0&0&1\\ 0&0&-1&0\end{pmatrix}\,,$ (130)

$g=\begin{pmatrix}0&1&0&\gamma\\ -1&0&-\chi&\alpha\\ 0&\chi&0&1\\ -\gamma&-\alpha&-1&0\end{pmatrix}\,.$ (135)

Using (124), this leads to the non-zero Poisson brackets (only those Poisson brackets are listed which will be non-zero in the non-commuting case)

$\left\{\tilde{u},\tilde{p}_{u}\right\}=1\,,\quad\left\{\tilde{v},\tilde{p}_{v}\right\}=1\,,\quad\left\{\tilde{u},\tilde{p}_{v}\right\}=0\,,\quad\left\{\tilde{v},\tilde{p}_{u}\right\}=0\,,\quad\left\{\tilde{p}_{u},\tilde{p}_{v}\right\}=0\,,$ (136)

and for the non-commuting case

$\left\{u,p_{u}\right\}=1\,,\quad\left\{v,p_{v}\right\}=1\,,\quad\left\{u,p_{v}\right\}=\gamma\,,\quad\left\{v,p_{u}\right\}=\chi\,,\quad\left\{p_{u},p_{v}\right\}=\alpha\,.$ (137)

Now, we look for a transformation

$\tilde{x}_{i}=\sum_{j}M_{ij}x_{j}\,,$ (138)

such that the above Poisson brackets are satisfied. This gives us conditions for the matrix elements $M_{ij}$. Equation (138) can be cast into the form

$\tilde{u}=M_{11}u+M_{12}p_{u}+M_{13}v+M_{14}p_{v}\,,\quad\tilde{p}_{u}=M_{21}u+M_{22}p_{u}+M_{23}v+M_{24}p_{v}\,,$
$\tilde{v}=M_{31}u+M_{32}p_{u}+M_{33}v+M_{34}p_{v}\,,\quad\tilde{p}_{v}=M_{41}u+M_{42}p_{u}+M_{43}v+M_{44}p_{v}\,.$ (139)

Using that $\tilde{u}=u$ and $\tilde{v}=v$, we can reduce the matrix $M$ to the expression

$M=\begin{pmatrix}1&0&0&0\\ M_{21}&M_{22}&M_{23}&M_{24}\\ 0&0&1&0\\ M_{41}&M_{42}&M_{43}&M_{44}\end{pmatrix}\,.$ (144)

Using (137), and the transformation (138) with (144), we obtain

$\left\{\tilde{u},\tilde{p}_{u}\right\}=1=M_{22}+M_{24}\gamma\,,\quad\left\{\tilde{v},\tilde{p}_{v}\right\}=1=M_{42}\chi+M_{44}\,,$
$\left\{\tilde{u},\tilde{p}_{v}\right\}=0=M_{42}+M_{44}\gamma\,,\quad\left\{\tilde{v},\tilde{p}_{u}\right\}=0=M_{22}\chi+M_{24}\,,$ (145)

where the Poisson bracket $\left\{\tilde{p}_{u},\tilde{p}_{v}\right\}=0$ will be calculated later.
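As a quick symbolic cross-check of these conditions (our own sketch, not part of the original derivation), one can use the relation $\{\tilde{x}_{i},\tilde{x}_{j}\}=\sum_{k,l}M_{ik}M_{jl}\{x_{k},x_{l}\}=(MgM^{T})_{ij}$ and evaluate the relevant entries with sympy for the reduced matrix (144):

```python
import sympy as sp

gamma, chi, alpha = sp.symbols('gamma chi alpha')
M21, M22, M23, M24, M41, M42, M43, M44 = sp.symbols('M21 M22 M23 M24 M41 M42 M43 M44')

# Non-commuting symplectic metric g_ij = {x_i, x_j}, cf. Eq. (135)
g = sp.Matrix([[0, 1, 0, gamma],
               [-1, 0, -chi, alpha],
               [0, chi, 0, 1],
               [-gamma, -alpha, -1, 0]])

# Reduced transformation matrix of Eq. (144)
M = sp.Matrix([[1, 0, 0, 0],
               [M21, M22, M23, M24],
               [0, 0, 1, 0],
               [M41, M42, M43, M44]])

# Transformed brackets {x~_i, x~_j} = (M g M^T)_ij
B = (M * g * M.T).expand()

print(B[0, 1])   # {u~, p~_u}  -> M22 + gamma*M24, required to equal 1
print(B[2, 3])   # {v~, p~_v}  -> chi*M42 + M44,   required to equal 1
print(B[0, 3])   # {u~, p~_v}  -> M42 + gamma*M44, required to equal 0
print(B[2, 1])   # {v~, p~_u}  -> chi*M22 + M24,   required to equal 0
print(B[1, 3])   # {p~_u, p~_v}, required to equal 0 (fixes M41 below)
```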
First of all, we solve the set of equations (145), which leads to $\displaystyle M_{22}$ $\displaystyle=$ $\displaystyle\frac{1}{1-\gamma\chi}\,;\quad\quad M_{24}=-\frac{\chi}{1-\gamma\chi}\,;$ $\displaystyle M_{42}$ $\displaystyle=$ $\displaystyle-\frac{\gamma}{1-\gamma\chi}\,;\quad M_{44}=\frac{1}{1-\gamma\chi}~{},$ (146) which consequently leads to the matrix $\displaystyle\left(M\right)$ $\displaystyle=$ $\displaystyle\left(\begin{array}[]{cccc}1&0&0&0\\\ M_{21}&\frac{1}{1-\gamma\chi}&M_{23}&-\frac{\chi}{1-\gamma\chi}\\\ 0&0&1&0\\\ M_{41}&-\frac{\gamma}{1-\gamma\chi}&M_{43}&\frac{1}{1-\gamma\chi}\end{array}\right)~{}.$ (151) Additional information is obtained by calculating the missing commutator: $\left\\{{\tilde{p}}_{u},{\tilde{p}}_{v}\right\\}=0=-M_{41}+M_{23}+\frac{\alpha}{1-\gamma\chi}~{},$ (152) which leads to $\displaystyle M_{41}$ $\displaystyle=$ $\displaystyle\frac{\alpha}{1-\gamma\chi}+M_{23}~{}~{}~{}.$ (153) Thus, we finally obtain the structure of the transformation matrix, still with some freedom, since $M_{21}$, $M_{23}$ and $M_{43}$ remain free to choose. That the momenta now commute is important, because this allows us to write these momenta as being proportional to derivatives of the conjugate variables. With the last form of the transformation matrix and an appropriate choice of the remaining matrix elements, we obtain for the transformation of the momenta: $\displaystyle{\tilde{p}}_{u}$ $\displaystyle=$ $\displaystyle M_{21}u+\frac{1}{1-\gamma\chi}p_{u}+M_{23}v-\frac{\chi}{1-\gamma\chi}p_{v}$ $\displaystyle{\tilde{p}}_{v}$ $\displaystyle=$ $\displaystyle\left(\frac{\alpha}{1-\gamma\chi}+M_{23}\right)u-\frac{\gamma}{1-\gamma\chi}p_{u}$ (154) $\displaystyle+$ $\displaystyle M_{43}v+\frac{1}{1-\gamma\chi}p_{v}~{}~{}~{}.$ We also define $\Gamma=(\gamma\chi-1)$. There is still an ambiguity, which we call a gauge transformation, in the choice of the remaining matrix elements. There are two possible paths: the first is to select a particular choice for ${\tilde{p}}_{u}$ and ${\tilde{p}}_{v}$ in terms of the non-commuting variables. The second one is to invert the matrix and select a particular choice for the non-commuting momenta $p_{u}$ and $p_{v}$ in terms of the commuting ones. We choose the second path, and the particular choice is listed in the main text. This choice then imposes conditions on the remaining matrix elements, which can be resolved. The present deduction also implies that there are several options in choosing ${\tilde{p}}_{u}$ and ${\tilde{p}}_{v}$, depending on the particular choice of $M_{21}$, $M_{23}$ and $M_{43}$. Still, for each alternative one has to verify the relations of the Poisson brackets. ## Appendix B In our calculations for the generation of gravitational waves as a result of bubble collisions, we use $\alpha_{\infty}\simeq 2.71\times 10^{-2}$, and assume $u_{\omega}\to 1$ (runaway regime) [5]. The parameter $\kappa_{\infty}$ plays a crucial role in our analysis, as it quantifies the efficiency of converting latent heat into bulk motion, a pivotal factor in defining the amplitude of GW signals. Following [98], $\kappa_{\infty}$ is approximately given as $\kappa_{\infty}\approx 3.516\times 10^{-2}$. 
When considering the parameter $\alpha_{T}$, from [5] we use $\alpha_{T}\approx 3.68\times 10^{-2}(100/g_{*}).$ Using the above values for $\kappa_{\infty}$ and $\alpha_{T}$, we obtain $\Biggl{(}\frac{\alpha_{T}\,\kappa_{\infty}}{1+\alpha_{T}}\Biggr{)}^{2}\approx 1.557411\times 10^{-6}\,,$ (155) where $g_{*}$ is given as [99] $g_{*}=g_{*}^{\rm{[SM]}}=100,\quad\mbox{and thus}\quad\Biggl{(}\frac{100}{g_{*}}\Biggr{)}^{1/3}=1\,.$ (156) Assuming the runaway regime ($u_{\omega}\to 1$), we get $\Bigl{(}\frac{0.11u^{3}_{\omega}}{0.42+u^{2}_{\omega}}\Bigr{)}\approx 7.746479\times 10^{-2}\,,$ (157) and $\Bigl{(}\frac{0.62}{1.8+u^{2}_{\omega}-0.1u_{\omega}}\Bigr{)}\approx 2.29629\times 10^{-1}\,.$ (158) In a first order phase transition, the bubble nucleation process is fixed by the tunnelling probability between the two vacuum states of the effective potential (for the details see [5]) $V(T,\phi)=\frac{\gamma}{2}\Bigl{(}T^{2}-T_{0}^{2}\Bigr{)}\phi^{2}-\frac{\sigma}{3}T\phi^{3}+\frac{\lambda}{4}\phi^{4}\,.$ (159) The usual Standard Model potential does not generate the strong transition required to produce a significant background. In general, the potential must be modified, and a minimal change implies additional gauge bosons (at least two new ones), which in practice means modifying the $T\phi^{3}$ term. The solution for the vacuum states permits the evaluation of the nucleation temperature, which is of the order of $T_{*}=166$ GeV [5]. During the EW phase transition, a fraction of the latent heat is used to excite sound waves, turbulence, and the bulk motion of bubbles, which are able to generate gravitational waves. Thus, the same physical conditions of the transition must be used for all of these mechanisms, which are not independent. Once the nucleation temperature $T_{*}$ is computed (or fixed), the duration of the transition can be estimated from [5] $\Biggl{(}\frac{\beta}{H_{*}}\Biggr{)}\simeq 4\,\ln\Biggl{(}\frac{M_{P}}{T_{*}}\Biggr{)}\,.$ (160) Fixing the nucleation temperature to be $T_{*}=166$ GeV, the relation above implies $(\beta/H_{*})\approx 155.$ ## Appendix C In this appendix we briefly describe the steps to obtain the gravitational wave spectrum described by the spectral shape of the signal, considering the sub-horizon condition with a matter-energy source given by equation (115). To solve this equation, given the technical difficulties of its resolution, as stressed before, we used computational algorithms based on the Runge-Kutta-Fehlberg method, which made it possible to obtain algebraic solutions without the need for simplifications or further computational approximations. 
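As a quick cross-check of the Appendix B estimates above, the quoted numbers can be reproduced in a few lines. The sketch below is our own illustration rather than part of the original computation, and it assumes the non-reduced Planck mass $M_{P}\approx 1.22\times 10^{19}$ GeV in (160), a value not stated explicitly in the text.

```python
# Cross-check of the Appendix B values (our own sketch; M_P is an assumed value).
import numpy as np

alpha_T   = 3.68e-2      # latent-heat parameter for g_* = 100, cf. the text above
kappa_inf = 3.516e-2     # efficiency factor quoted from [98]
u_w       = 1.0          # runaway regime, u_omega -> 1

print((alpha_T * kappa_inf / (1.0 + alpha_T))**2)   # ~1.557e-6, cf. (155)
print(0.11 * u_w**3 / (0.42 + u_w**2))              # ~7.746e-2, cf. (157)
print(0.62 / (1.8 + u_w**2 - 0.1 * u_w))            # ~2.296e-1, cf. (158)

M_P, T_star = 1.22e19, 166.0                        # GeV (M_P is our assumption)
print(4.0 * np.log(M_P / T_star))                   # ~155,     cf. (160)
```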
The corresponding solution of the field equation that describes the evolution over time of the metric disturbance having as its source the potential defined in equation (93), that configures the dynamic composition of matter in the primordial cosmic period, is $\displaystyle{h}(k,\tau)$ $\displaystyle=$ $\displaystyle c_{2}(\eta)\sin(k\tau)+c_{1}(\eta)\cos(k\tau)+\frac{1}{k^{4}\eta^{3}}\Bigl{[}k^{2}\Bigl{(}6.28319\tau^{2}\eta^{2}+\tau\bigl{(}-4.18879\eta^{5/2}+16.7384\eta^{6}-23.4572\eta^{5}$ (161) $\displaystyle-$ $\displaystyle 16.7552\eta^{4}-6.70206\eta^{2}-0.150796\bigr{)}+50.2655\eta^{7/2}\Bigr{)}-12.5664\eta^{2}\Bigr{]}\,.$ $S_{b}(\nu/\nu_{c})$, the spectral shape of the signal, is determined in terms of the solution (161) of equation (115), after integrating out the time dependence of $S_{b}(\nu/\nu_{c},\tau)$, taking the limits $0$ to $\beta/H_{*}$, with $k\to\nu/\nu_{c}$: $\displaystyle\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!S_{b}(\nu/\nu_{c})$ $\displaystyle=$ $\displaystyle{\cal A}(\nu/\nu_{c})\int_{0}^{\beta/H_{*}}\\!\\!\\!\\!\\!S_{b}(\nu/\nu_{c},\tau)d\tau$ (162) $\displaystyle=$ $\displaystyle{\cal A}(\nu/\nu_{c})\int_{0}^{\beta/H_{*}}\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\frac{1}{4\pi(\nu/\nu_{c})}\frac{h^{2}(\nu/\nu_{c},\tau)}{\eta^{2}(\tau)}d\tau\,,$ where the regulatory function ${\cal A}(\nu/\nu_{c})$ guarantees the normalization of the spectral shape, the boundary condition $S_{n}(\nu/\nu_{c})\to 0$, as well as the position of the maximum of the function at the point $\nu/\nu_{c}=1$. The final expression for the spectral shape of the signal is quite intricate and difficult to manipulate. As a preliminary study, in the following we insert a generic scale power law behavior for the branch-cut scale factor, $\eta(\tau)=a_{n}\tau^{n}$, which covers the cases of radiation ($n=1$) and matter ($n=2$) domination, as well as the de Sitter inflation ($n=-1$) to determine the main ingredients of the time-independent spectral shape of the signal, expressed as $S_{b}(\nu/\nu_{c})$. Taking for simplicity the case $n=1$, we show below a series expansion of the resulting integral, limiting for convergence reasons the final expression to the $(\nu/\nu_{c})^{6}$\- and $(\beta/H_{*})^{6}$-dependent terms. 
The mean ingredients of this calculation, except for determining the regulatory function ${\cal A}(\nu/\nu_{c})$, are synthesized in the following expression: $\displaystyle S_{n}(\nu/\nu_{c})$ $\displaystyle\approx$ $\displaystyle\Bigl{(}\sin\Bigl{(}k\frac{\beta}{H_{*}}\Bigr{)}+\cos\Bigl{(}k\frac{\beta}{H_{*}}\Bigr{)}\Bigr{)}\Biggl{(}0.0795775\Bigl{(}\frac{\nu_{c}}{\nu}\frac{H_{*}}{\beta}\Bigr{)}-0.159155\Bigl{(}\frac{\nu_{c}}{\nu}\frac{H_{*}}{\beta}\Bigr{)}^{\\!\\!3}+1.90986\Bigl{(}\frac{\nu_{c}}{\nu}\frac{H_{*}}{\beta}\Bigr{)}^{\\!\\!5}\\!\Biggr{)}+{\cal O}\Bigl{(}\Bigl{(}\frac{H_{*}}{\beta}\Bigr{)}^{6}\Bigr{)}$ (163) $\displaystyle+$ $\displaystyle\Bigl{(}\frac{\nu_{c}}{\nu}\Bigr{)}^{5}\Biggl{(}3.15843\Bigl{(}\frac{\beta}{H_{*}}\Bigr{)}+92.4467\Bigl{(}\frac{\beta}{H_{*}}\Bigr{)}^{1/2}+169.119log\Bigl{(}\frac{\beta}{H_{*}}\Bigr{)}+0.150401\Bigl{(}\frac{H_{*}}{\beta}\Bigr{)}+0.733704\Bigl{(}\frac{H_{*}}{\beta}\Bigr{)}^{3/2}$ $\displaystyle-$ $\displaystyle 0.000596831\Bigl{(}\frac{H_{*}}{\beta}\Bigr{)}^{3}\Biggr{)}+0.0795775log\Bigl{(}\frac{\nu_{c}}{\nu}\Bigr{)}-0.0397887log\Bigl{(}\Bigl{(}\frac{\nu_{c}}{\nu}\Bigr{)}^{2}\Bigr{)}+0.125+{\cal O}\Bigl{(}\Bigl{(}\frac{H_{*}}{\beta}\Bigr{)}^{11/2}\Bigr{)}$ $\displaystyle+$ $\displaystyle\Bigl{(}\sin\Bigl{(}k\frac{\beta}{H_{*}}\Bigr{)}-\cos\Bigl{(}k\frac{\beta}{H_{*}}\Bigr{)}\Bigr{)}\Biggl{(}0.0795775\Bigl{(}\frac{\nu_{c}}{\nu}\frac{H_{*}}{\beta}\Bigr{)}^{2}-0.477465\Bigl{(}\frac{\nu_{c}}{\nu}\frac{H_{*}}{\beta}\Bigr{)}^{4}+{\cal O}\Bigl{(}\Bigl{(}\frac{H_{*}}{\beta}\Bigr{)}^{6}\Biggr{)}$ $\displaystyle+$ $\displaystyle\Bigl{(}\sin\Bigl{(}k\frac{\beta}{H_{*}}\Bigr{)}+\cos\Bigl{(}k\frac{\beta}{H_{*}}\Bigr{)}\Bigr{)}\Biggl{(}-0.0795775\Bigl{(}\frac{\nu_{c}}{\nu}\frac{H_{*}}{\beta}\Bigr{)}+{\cal O}\Bigl{(}\Bigl{(}\frac{H_{*}}{\beta}\Bigr{)}^{6}\Biggr{)}\Biggr{)}\,.$ ## Author Contributions Conceptualization, C.A.Z.V.; methodology, C.A.Z.V. and B.A.L.B. and P.O.H and J.A.deF.P. and D.H. and F.W. and M.M.; software, C.A.Z.V. and B.A.L.B. and M.R. and M.M.; validation, C.A.Z.V. and B.A.L.B. and D.H. and P.O.H. and J.A.deF.P. and F.W.; formal analysis, C.A.Z.V. and B.A.L.B. and P.O.H. and J.A.deF.P. and D.H. and F.W.; investigation, C.A.Z.V. and B.A.L.B. and P.O.H. and J.A.deF.P. and M.R. and M.M. and F.W.; resources, C.A.Z.V.; data curation, C.A.Z.V. and B.A.L.B.; writing—original draft preparation, C.A.Z.V.; writing—review and editing, C.A.Z.V. and B.A.L.B. and P.O.H. and J.A.deF.P. and D.H. and M.R. and M.M. and F.W.; visualization, C.A.Z.V. and B.A.L.B.; supervision, C.A.Z.V.; project administration, C.A.Z.V.; funding acquisition (no funding acquisition). All authors have read and agreed to the published version of the manuscript. ## References * Einstein [1916] A. Einstein, Die grundlage der allgemeinen relativitätstheorie, Annalen der Physik 49, 769 (1916). * Einstein [1917] A. Einstein, Kosmologische betrachtungen zur allgemeinen relativitätstheorie, König.-Preuss. Akad. Wiss. , 142 (1917). * Abbott _et al._ [2016a] B. P. Abbott, R. Abbott, T. D. Abbott, M. R. Abernathy, F. Acernese, K. Ackley, C. Adams, T. Adams, P. Addesso, R. X. Adhikari, V. B. Adya, C. Affeldt, M. Agathos, K. Agatsuma, N. Aggarwal, O. D. Aguiar, and _et-al._ , Observation of gravitational waves from a binary black hole merger, Phys. Rev. Lett. 116, 061102 (2016a). * Abbott _et al._ [2016b] B. P. Abbott, R. Abbott, T. D. Abbott, M. R. Abernathy, F. Acernese, K. Ackley, C. Adams, T. Adams, P. Addesso, R. X. Adhikari, V. B. Adya, C. Affeldt, M. Agathos, K. Agatsuma, N. 
Aggarwal, O. D. Aguiar, and _et-al._ , Binary black hole mergers in the first advanced ligo observing run, Phys. Rev. X 6, 041015 (2016b). * de Freitas Pacheco [2023] J. de Freitas Pacheco, _Cosmological Stochastic Gravitational Waves Background_ , edited by C. Zen Vasconcellos, P. Hess, and T. Boller, In New Phenomena and New States of Matter in the Universe: From Quarks to Cosmos (World Scientific Pub. Co., Singapore, 2023). * Guth [1981] A. Guth, Inflationary universe: A possible solution to the horizon and flatness problems, Phys. Rev. D23, 347–356 (1981). * Guth [2004] A. Guth, _Carnegie Observatories Astrophysics Series, Vol. 2: Measuring and Modeling the Universe_ , edited by W. L. Freedman (Cambridge Univ. Press, London, UK, 2004). * Bodmann _et al._ [2023a] B. Bodmann, C. Zen Vasconcellos, P. Hess, J. de Freitas Pacheco, D. Hadjimichef, M. Razeira, and G. Degrazia, A wheeler–dewitt quantum approach to the branch-cut gravitation with ordering parameters, Universe 9 (6), 278 (2023a). * Bodmann _et al._ [2023b] B. Bodmann, D. Hadjimichef, P. Hess, J. de Freitas Pacheco, F. Weber, M. Razeira, G. Degrazia, M. Marzola, and C. Zen Vasconcellos, A wheeler-dewitt non-commutative quantum approach to the branch-cut gravity, Universe 9 (10), 428 (2023b). * Witt [1967] B. D. Witt, Quantum theory of gravity. i. the canonical theory, Phys. Rev. 160, 1113 (1967). * Hořava [2009] P. Hořava, Quantum gravity at a lifshitz point, Phys. Rev. D 79, 084008 (2009). * Zen Vasconcellos _et al._ [2019] C. Zen Vasconcellos, D. Hadjimichef, M. Razeira, G. Volkmer, and B. Bodmann, Pushing the limits of general relativity beyond the big bang singularity, Astron. Nachr. 340, 857 (2019). * Zen Vasconcellos _et al._ [2021a] C. Zen Vasconcellos, P. Hess, D. Hadjimichef, B. Bodmann, M. Razeira, and G. Volkmer, Pushing the limits of time beyond the big bang singularity: The branch cut universe, Astron. Nachr. 342 (5), 765 (2021a). * Zen Vasconcellos _et al._ [2021b] C. Zen Vasconcellos, P. Hess, D. Hadjimichef, B. Bodmann, M. Razeira, and G. Volkmer, Pushing the limits of time beyond the big bang singularity: Scenarios for the branch-cut universe, Astron. Nachr. 342 (5), 776 (2021b). * Bodmann _et al._ [2022] B. Bodmann, C. Zen Vasconcellos, J. de Freitas Pacheco, P. Hess, and D. Hadjimichef, Causality and the arrow of time in the branch-cut cosmology, Astron. Nachr. 344 (1-2), e220086 (2022). * de Freitas Pacheco _et al._ [2022] J. de Freitas Pacheco, C. Zen Vasconcellos, P. Hess, D. Hadjimichef, and B. Bodmann, Branch-cut cosmology and the bekenstein criterion, Astron. Nachr. 344 (1-2), e220070 (2022). * Zen Vasconcellos _et al._ [2022] C. Zen Vasconcellos, P. Hess, J. de Freitas Pacheco, D. Hadjimichef, and B. Bodmann, The branch-cut cosmology: Evidences and open questions, Astron. Nachr. , e20220079 (2022). * Hess _et al._ [2022] P. Hess, Z. Vasconcellos, J. de Freitas Pacheco, D. Hadjimichef, and B. Bodmann, The branch-cut cosmology: A topological canonical quantum-mechanics approach, Astron. Nachr. , e20220101 (2022). * Manders [1989] K. Manders, Domain extension and the philosophy of mathematics, J. Philos. 86 (10), 553 (1989). * Dirac [1937] P. Dirac, Complex variables in quantum mechanics, Proceedings of the Royal Society A 160 (900), 48 (1937). * Aharonov and Bohm [1959] Y. Aharonov and D. Bohm, Significance of electromagnetic potentials in the quantum theory, Phys. Rev. 115 (3), 485 (1959). * Wu _et al._ [2021] K.-D. Wu, T. Kondra, S. Rana, C. Scandolo, G.-Y. Xi-ang, C.-F. Li, G.-C. Guo, and A. 
Streltsov, Operational resource theory of imaginarity, Phys. Rev. Lett. 126 (9), 090401 (2021). * Hess _et al._ [2015] P. O. Hess, M. Schäfer, and W. Greiner, _Pseudo-Complex General Relativity_ (Springer, Berlin, Germany, 2015). * Hess and Greiner [2017] P. Hess and W. Greiner, _Pseudo-Complex General Relativity: Theory and Observational Consequences_ , edited by C. Zen Vasconcellos, In Centennial of General Relativity: A Celebration (World Scientific Pub. Co., Singapore, 2017). * Renou _et al._ [2021] M.-O. Renou, D. Trillo, M. Weilenmann, T. P. Le, A. Tavakoli, N. Gisin, A. Acín, and M. Navascués, Quantum theory based on real numbers can be experimentally falsified, Nature 600, 625–629 (2021). * Hess and Greiner [2009] P. Hess and W. Greiner, Pseudo-complex general relativity, Int. J. Mod. Phys. E 18 (01), 51 (2009). * Hess and Boller [2020] P. Hess and T. Boller, _The Pseudo-Complex General Relativity: Theory and Observational Predictions_ , edited by C. Zen Vasconcellos, In Topics on Strong Gravity: A Modern View on Theories and Experiments (World Scientific Pub. Co., Singapore, 2020). * Hess [2023] P. Hess, _Review on the Pseudo-complex General Relativity and Dark Energy_ , edited by C. Zen Vasconcellos, P. Hess, and T. Boller, In New Phenomena and New States of Matter in the Universe: From Quarks to Cosmos (World Scientific Pub. Co., Singapore, 2023). * Hawking and Hertog [2018] S. Hawking and T. Hertog, A smooth exit from eternal inflation?, High Energ. Phys. 04, 147 (2018). * Feinberg and Peleg [1995] J. Feinberg and Y. Peleg, Self-adjoint wheeler-dewitt operators, the problem of time, and the wave function of the universe, Phys.Rev. D 52, 1988 (1995). * Friedman [1922] A. Friedman, über die krümmung des raumes, Zeitschrift für Physik 10, 377 (1922). * Lemaître [1927] G. Lemaître, Un univers homogène de masse constante et de rayon croissant rendant compte de la vitesse radiale des nébuleuses extra-galactiques, Annales de la Société Scientifique de Bruxelles A47, 49 (1927). * Robertson [1935] H. Robertson, Kinematics and world-structure, Astrophysical Journal 82, 248 (1935). * Walker [1937] A. Walker, On milne’s theory of world‐structure, Proceedings of the London Mathematical Society 42, 90 (1937). * Ijjas _et al._ [2014] A. Ijjas, P. Steinhardt, and A. Loeb, Scale-free primordial cosmology, Phys. Rev. D89, 023525 (2014). * Ijjas and Steinhardt [2018] A. Ijjas and P. Steinhardt, Bouncing cosmology made simple, Class. Quant.Grav. 35 (13), 135004 (2018). * Ijjas and Steinhardt [2019] A. Ijjas and P. Steinhardt, A new kind of cyclic universe, Physics Letters B 795, 666 (2019). * Bertolami and Zarro [2011] O. Bertolami and C. A. Zarro, Hořava-lifshitz quantum cosmology, Phys. Rev. D 84, 044042 (2011). * Kiefer [2012] C. Kiefer, _Quantum Gravity_ (Oxford University Press, Oxford, UK, 2012). * García-Compeán and Mata-Pacheco [2022] H. García-Compeán and D. Mata-Pacheco, Lorentzian vacuum transitions in hořava–lifshitz gravity, Universe 8 (4), 237 (2022). * Hartle and Hawking [1983] J. Hartle and S. Hawking, Wave function of the universe, Phys. Rev. D 28, 2960 (1983). * Hawking [1982] S. Hawking, The boundary conditions of the universe, Pontif. Acad. Sci. Scr. Varia 48, 563 (1982). * Lukasz [2014] A. Lukasz, Novel solution of wheeler-dewitt theory, Applied Mathematics and Physics 2 (3), 73 (2014). * Rovelli [2004] C. Rovelli, _Quantum Gravity_ (Cambridge University Press, Cambridge, UK, 2004). * Rovelli and Smerlak [2011] C. Rovelli and M. 
Smerlak, Thermal time and tolman–ehrenfest effect: ’temperature as the speed of time’, Classical and Quantum Gravity 28, 075007 (2011). * Rovelli [2015] C. Rovelli, The strange equation of quantum gravity, Class. Quantum Grav. 32, 124005 (2015). * Rovelli [2019] C. Rovelli, _The Order of Time_ (Riverhead Books, New York, USA, 2019). * Cordero _et al._ [2019] R. Cordero, H. Garcia-Compean, and F. J. Turrubiates, A phase space description of the flrw quantum cosmology in hořava–lifshitz type gravity, General Relativity and Gravitation 51, Article 138 (2019). * Vieira _et al._ [2020] H. Vieira, V. Bezerra, C. Muniz, M. Cunha, and H. Christiansen, Class of solutions of the wheeler-dewitt equation with ordering parameters, Phys. Lett. B 809, 135712 (2020). * Abreu _et al._ [2019] E. Abreu, A. Mendes, G. Oliveira-Neto, J. Ananias Neto, J. Rezende Rodrigues, and M. Silva de Oliveira, Hořava–lifshitz cosmological models with non-commutative phase space variables, General Relativity and Gravitation 51, Article 95 (2019). * Maeda _et al._ [2010] K.-i. Maeda, Y. Misonoh, and T. Kobayashi, Oscillating bianchi ix universe in hořava-lifshitz gravity, Phys. Rev. D 82, 064024 (2010). * Caldwell _et al._ [1998] R. Caldwell, R. Dave, and P. Steinhardt, Cosmological imprint of an energy component with general equation of state, Phys. Rev. Let. 80, 1582 (1998). * Zlatev _et al._ [1998] I. Zlatev, L. Wang, and P. Steinhardt, Quintessence, cosmic coincidence, and the cosmological constant, Phys. Rev. Let. 82, 896 (1998). * Hinshaw _et al._ [2013] G. Hinshaw, D. Larson, E. Komatsu, D. Spergel, and C. Bennett, Nine-year wilkinson microwave anisotropy probe (wmap) observations: Cosmological parameter results, The Astrophysical Journal Supplement Series 208 (2), 19 (2013). * França and Rosenfeld [2002] U. França and R. Rosenfeld, interacting constituents in cosmology, Found. Phys. 210, 15 (2002). * Steigl and Hinterleitner [2006] R. Steigl and F. Hinterleitner, Factor ordering in standard quantum cosmology, Class. Quant. Grav. 23, 3879 (2006). * Hawking and Page [1986] S. Hawking and D. Page, Operator ordering and the flatness of the universe, Nucl. Phys. B 264, 185 (1986). * Polyanin and Manzhirov [2006] A. Polyanin and A. Manzhirov, _Handbook of Mathematics for Engineers and Scientists_ (Chapman and Hall/CRC, London,UK, 2006). * DeWitt [1967] B. S. DeWitt, Phys. Rev. 160, 1113 (1967). * Shestakova [2018] T. P. Shestakova, The problem of time and gauge invariance in the quantization of cosmological models. i. canonical quantization methods, Int. J. Mod. Phys. D 27, 1841004 (2018). * Note [1] More precisely, the scale factor $a(t)$, the density $\rho(t)$, the pressure $p(t)$, and the gravitation constant $\Lambda$. * Hartle [2021] J. B. Hartle, Quantum mechanics at the planck scale, The Quantum Universe , 527 (2021). * Weinberg [1972] S. Weinberg, Approximate symmetries and pseudo-goldstone bosons, Phys. Rev. Lett. 29, 1698 (1972). * Dijkstra [2019] C. D. Dijkstra, Naturalness as a reasonable scientific principle in fundamental physics, arXiv:1906.03036 (2019). * Cao and Schweber [1993] T. Cao and S. Schweber, The conceptual foundations and the philosophical aspects of renormalization theory, Synthese 97, 33 ((1993). * Vilenkin [1982] A. Vilenkin, Creation of universes from nothing, Phys. Lett. B 117, 25 (1982). * Ellis [2013] G. F. Ellis, Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 44 (3), 242 (2013). * Sato [2021] N. 
Sato, The effect of spacetime curvature on statistical distributions, Classical and Quantum Gravity 38, 165003 (2021). * Weyl [1918] H. Weyl, Reine infinitesimalgeometrie, Math. Zeit. 2, 384 (1918). * Schutz Jr. [1970] B. Schutz Jr., Perfect fluids in general relativity: Velocity potentials and a variational principle, Phys. Rev. D 2, 2762 (1970). * Guimarães _et al._ [2021] T. M. Guimarães, R. de C. Lima, and S. H. Pereira, Cosmological inflation driven by a scalar torsion function, The European Physical Journal C 81 (2021). * Aschieri [2006] P. Aschieri, Noncommutative symmetries and gravity, Journal of Physics: Conference Series 53, 799–819 (2006). * Caprini and Figueroa [2018] C. Caprini and D. Figueroa, Cosmological backgrounds of gravitational waves, Classical and Quantum Gravity 35, 163001 (2018). * Kamionkowski _et al._ [1994] M. Kamionkowski, A. Kosowsky, and M. S. Turner, Gravitational radiation from first-order phase transitions’, Phys. Rev. D 49, 2837 (1994). * Odintsov and Oikonomou [2022] S. Odintsov and V. Oikonomou, Chirality of gravitational waves in chern-simons f(r) gravity cosmology, Phys. Rev. D 105, 104054 (2022). * Profumo and Yang [2023] S. Profumo and F. Yang, On the anisotropy of the stochastic gravitational wave background from sub-horizon-collapsed primordial black hole mergers, arXiv:2306.07454 (2023). * Caprini _et al._ [2016] C. Caprini, M. Hindmarsh, S. Huber, T. Konstandin, J. Kozaczuk, G. Nardini, J. M. No, A. Petiteau, P. Schwaller, and G. Servant, Science with the space-based interferometer elisa. ii: gravitational waves from cosmological phase transitions, JCAP04 2016, 001 (2016). * Huber and Konstandin [2008] S. Huber and T. Konstandin, Gravitational wave production by collisions: More bubbles, Journal Cosm. Astrop. Phys. 0809, 22 (2008). * Bastos _et al._ [2009] C. Bastos, O. Bertolami, N. Dias, and J. Prata, Noncommutative quantum cosmology, J. Phys. Conf. Ser. 174, 012053 (2009). * Garattini and Nicolini [2011] R. Garattini and P. Nicolini, Phys. Rev. D 83, 064021 (2011). * Mena _et al._ [2007] E. Mena, O. Obregón, and M. Sabido, On the wkb approximation of noncommutative quantum cosmology, Rev. Mex. Fis. S53, 118 (2007). * Compeán _et al._ [2005] H. Compeán, O. Obregón, C. Ramírez, and M. Sabido, Non-commutativity in gravity, topological gravity and cosmology, J. Phys. Conf. Ser. 24, 203 (2005). * Oliveira-Neto _et al._ [2017] G. Oliveira-Neto, M. de Oliveira, G. Monerat, and E. Corrêa Silva, Noncommutativity in the early universe, Int. J. Mod. Phys. D 26, 1750011 (2017). * Vakili _et al._ [2010] B. Vakili, P. Pedram, and S. Jalalzadeh, Late time acceleration in a deformed phase space model of dilaton cosmology, Phys. Lett. B 687, 119 (2010). * Obregon and Quiros [2011] O. Obregon and I. Quiros, Can noncommutative effects account for the present speed up of the cosmic expansion?, Phys. Rev. D 84, 044005 (2011). * Monerat _et al._ [2017] G. Monerat, E. Corrêa Silva, C. Neves, G. Oliveira-Neto, L. Rezende Rodrigues, and M. de Oliveira, Can noncommutativity affect the whole history of the universe?, Int. J. Mod. Phys. D 26, 1750022 (2017). * Oliveira-Neto and Vaz [2017] G. Oliveira-Neto and A. Vaz, Noncommutative cosmological model in the presence of a phantom fluid, Eur. Phys. J. Plus 132, 131 (2017). * Lizzi and Pinzul [2017] F. Lizzi and A. Pinzul, Dimensional deception from noncommutative tori: An alternative to the hořava-lifshitz model, Phys. Rev. D 96, 126013 (2017). * Sheikhahmadi _et al._ [2015] H. Sheikhahmadi, A. Aghamohammadi, and K. 
Saaidi, Non-commutative and commutative vacua effects in a scalar torsion scenario, Phys. Lett. B 749, 231 (2015). * Piscicchia _et al._ [2023] K. Piscicchia, A. Addazi, A. Marcianò, M. Bazzi, M. Cargnelli, A. Clozza, L. De Paolis, R. Del Grande, C. Guaraldo, and M. Iliescu, Experimental test of noncommutative quantum gravity by vip-2 lead, Phys. Rev. D 107, 026002 (2023). * Witten [1986] E. Witten, Non-commutative geometry and string field theory, Nucl. Phys. B 268, 253 (1986). * Dolan and Nappi [2003] L. Dolan and C. Nappi, Strings and noncommutativity, arXiv:hep-th/0302122v2 (2003). * Andriot _et al._ [2023] D. Andriot, D. Tsimpis, and T. Wrase, Accelerated expansion of an open universe, arXiv:2309.03938 (2023). * Aref’eva _et al._ [2001] I. Aref’eva, D. Belov, A. Koshelev, and O. Rytchkov, Uv/ir mixing for noncommutative complex scalar field theory interacting with gauge fields, Nucl.Phys.B Proc.Suppl. 102, 11 (2001). * Bartlett [1999] J. G. Bartlett, The standard cosmological model and cmb anisotropies, New Astronomy Reviews 43, 83 (1999). * Snyder [1947] H. Snyder, Quantized space-time, Phys.Rev. 71, 38 (1947). * Riess _et al._ [2022] A. G. Riess, W. Yuan, L. M. Macri, D. Scolnic, D. Brout, S. Casertano, D. O. Jones, Y. Murakami, G. S. Anand, L. Breuval, T. G. Brink, A. V. Filippenko, and _et-al._ , A comprehensive measurement of the local value of the hubble constant with 1 km s-1 mpc-1 uncertainty from the hubble space telescope and the sh0es team, ApJ 934, L7 (2022). * Espinosa _et al._ [2010] J. Espinosa, T. Konstandin, J. No, and G. Servant, Energy budget of cosmological first-order phase transitions, JCAP 1006, 028. * Addazi _et al._ [2019] A. Addazi, A. Marcianò, and R. Pasechnik, Probing trans-electroweak first order phase transitions from gravitational waves, Physics 1, 92 (2019).
# Wasserstein-Fisher-Rao Splines Julien Clancy <EMAIL_ADDRESS>Felipe Suárez <EMAIL_ADDRESS>Massachusetts Institute of Technology J.C. was supported by NSF GRFP, and F.S. was supported by the MathWorks Fellowship and the NSF grant IIS-1838071. ###### Abstract We study interpolating splines on the Wasserstein-Fisher-Rao (WFR) space of measures with differing total masses. To achieve this, we derive the covariant derivative and the curvature of an absolutely continuous curve in the WFR space. We prove that this geometric notion of curvature is equivalent to a Lagrangian notion of curvature in terms of particles on the cone. Finally, we propose a practical algorithm for computing splines extending the work of Chewi et al., 2020a . ## 1 Introduction Let $\mu_{1},\mu_{2},\dots,\mu_{n}$ be $n$ positive measures of differing total masses. How should we interpolate them? This question is motivated by cellular trajectory reconstruction, where $\mu_{i}$ is a population of cells at time $i$ (Schiebinger et al.,, 2019). Cells move in gene space as they evolve, but also divide and proliferate. While ad-hoc fixes for this issue have been proposed, e.g. via renormalization and using optimal transport (OT) (Chewi et al., 2020a, ), the conservation of mass property inherent to OT makes it a less suitable tool for this task. Curve evolution in Wasserstein space is governed by the continuity equation $\partial_{t}\mu_{t}+\operatorname{div}(v_{t}\mu_{t})=0,$ (1) which can be viewed as a simple restatement of conservation of mass, following the divergence theorem.111Formally, this equation is to be interpreted weakly in duality with functions $\varphi\in\mathcal{C}^{\infty}_{c}(\mathbb{R}^{d}\times\mathbb{R})$ via the divergence theorem, i.e. $\tfrac{d}{dt}\int\varphi\,d\mu_{t}=\int\langle v_{t},\nabla\varphi\rangle\,d\mu_{t}$ . This equation is underdetermined, and the geometry of $W_{2}$ is induced by selecting for each time $t$ the field $v_{t}$ with minimal kinetic energy: $v_{t}=\operatorname*{arg\,min}_{u_{t}}\int\lVert u_{t}\rVert^{2}\,d\mu_{t}=\operatorname*{arg\,min}_{u_{t}}\,\lVert u_{t}\rVert_{\mu_{t}}^{2}.$ It can be seen (Gigli,, 2012) that the optimal $v_{t}$ lies in the closure of $\\{\nabla\varphi\mid\varphi\in C^{\infty}_{c}\\}$. The celebrated Benamou-Brenier theorem states that the $W_{2}$ distance, defined by optimal transport, is equal to the least total kinetic energy among all possible paths. ###### Theorem 1. (Benamou and Brenier,, 2000) Let $\mu_{0},\mu_{1}$ be probability measures. Then $W_{2}^{2}(\mu_{0},\mu_{1})=\inf_{(\mu_{t},v_{t})}\int_{0}^{1}\lVert v_{t}\rVert_{\mu_{t}}^{2}\,dt,$ where the infimum is taken over solutions of the continuity equation with prescribed boundary data $\mu_{0}$ and $\mu_{1}$. Thus, in order to define a new metric between measures of arbitrary mass we have to alter the continuity equation, running the procedure in reverse. Turning back to intuition, where the term $\operatorname{div}(v_{t}\mu_{t})$ represents mass translation, we add another term representing growth or decay: $\partial_{t}\mu_{t}+\operatorname{div}(v_{t}\mu_{t})=4\alpha_{t}\mu_{t}.$ (2) We call this the nonconservative continuity equation.222The factor of $4$ is for notational convenience, in accordance with the literature. The term on the right-hand side allows for a relative growth or decay in mass. While measures $(\mu_{t})$ evolving according to (2) may have varying mass, they are guaranteed to stay positive. 
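A minimal worked check of the growth term, which is our own illustration and not part of the original argument: taking $v_{t}\equiv 0$ and a constant, spatially uniform rate $\alpha_{t}\equiv\alpha$ in (2) gives

```latex
\mu_t = e^{4\alpha t}\,\mu_0
\quad\Longrightarrow\quad
\partial_t\mu_t = 4\alpha\,e^{4\alpha t}\,\mu_0 = 4\alpha\,\mu_t,
\qquad
m(t) := \mu_t(\mathbb{R}^d) = e^{4\alpha t}\,m(0) > 0,
```

so the total mass grows or decays exponentially at rate $4\alpha$ while remaining positive, in line with the remark above.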
From here, we measure the magnitude of a pair $(v_{t},\alpha_{t})$ by $\lVert(v_{t},\alpha_{t})\rVert_{\mu_{t}}^{2}=\int\left(|v_{t}|^{2}+4\alpha_{t}^{2}\right)\,d\mu_{t},$ where $\|\cdot\|$ stands for the norm in WFR space and $|\cdot|$ for the norm of vectors in Euclidean space. The Wasserstein-Fisher-Rao distance is then defined by $\operatorname{WFR}(\mu_{0},\mu_{1})^{2}=\inf_{(\mu_{t},v_{t},\alpha_{t})}\int_{0}^{1}\lVert(v_{t},\alpha_{t})\rVert_{\mu_{t}}^{2}\,dt$ (3) where, again, the minimization is over solutions to (2) with $\mu_{0}$ and $\mu_{1}$ prescribed. For more detailed discussions of the genesis of equation (2) and the development of the connection with optimal transport see Kondratyev et al.,, 2016; Liero et al.,, 2018; Chizat,, 2017. It was shown simultaneously in Liero et al., (2018); Chizat, (2017) that (3) defines a metric on $\mathcal{M}_{+}(\mathbb{R}^{d})$, the space of non-negative measures, and that it turns this space into a geodesic space. Furthermore, Kondratyev et al., (2016) show that it has the structure of a pseudo-Riemannian manifold analogous to the Riemannian structure on $W_{2}$ (Otto,, 2001), with inner product given by $\left\langle(v,\alpha),(w,\beta)\right\rangle_{\mu}=\int\left(\langle v,w\rangle+4\alpha\beta\right)\,d\mu,$ where the tangent space is $T_{\mu}(\mathcal{M}_{+})=\operatorname{clos}_{L^{2}(\mu)}\left\\{(\nabla\alpha,\alpha)\mid\alpha\in\mathcal{C}^{\infty}_{c}(\mathbb{R}^{d})\right\\}.$ ###### Remark. Notice that this is the same tangent space as for $W_{2}$, with the difference being that the $\operatorname{WFR}$ Riemannian metric is the full $H^{1}$ Sobolev norm, while the $W_{2}$ Riemannian metric includes only the gradient. There has been recent interest in understanding the curvature of these spaces (Chewi et al., 2020b, ), and specifically in understanding curvature-minimizing interpolating curves that generalize Euclidean splines (Benamou et al.,, 2019; Chen et al.,, 2018; Chewi et al., 2020a, ). We aim to replicate some of these advances in the Wasserstein-Fisher-Rao case, which, as mentioned, plays an important role in applications. Specifically, we will 1) characterize the covariant derivative in WFR space and use this to define a notion of intrinsic splines; 2) define splines from a pseudo-Eulerian perspective, as measures on paths, and establish a relationship to intrinsic splines; and 3) define a more practically workable notion of WFR splines, analogous to the transport splines of Chewi et al., 2020a , and examine its relationship to intrinsic splines. ## 2 The Covariant Derivative In this section, we derive an expression for the covariant derivative. We first present a new derivation of the covariant derivative in $W_{2}$ that is simpler than the definition in Gigli, (2012). In turn, this new derivation is extended to WFR space. The Riemannian metric on $W_{2}$ is given by $\langle v,w\rangle_{\mu}=\int\langle v,w\rangle\,d\mu$, and the covariant derivative $\frac{\operatorname{D}}{dt}$ and its associated Levi-Civita connection $\nabla$ must satisfy two properties: 1. Leibniz rule. If $\mu_{t}$ is a curve and $(v_{t}^{1})$ and $(v_{t}^{2})$ are two (tangent) vector fields along it, then $\frac{d}{dt}\langle v_{t}^{1},v_{t}^{2}\rangle_{\mu_{t}}=\left\langle v_{t}^{1},\frac{\mathbf{D}}{dt}v_{t}^{2}\right\rangle_{\mu_{t}}+\left\langle\frac{\mathbf{D}}{dt}v_{t}^{1},v_{t}^{2}\right\rangle_{\mu_{t}}.$ (P1) 2. Torsion-freeness. 
If $X$ and $Y$ are vector fields, then $\nabla_{X}Y-\nabla_{Y}X=[X,Y].$ (P2) Let $(v_{t})$ be the tangent field of the curve $(\mu_{t})$ (as noted above, the minimal field is unique and is a gradient333In general it is merely in the closure of the set of gradients, but if all measures involved are absolutely continuous then it is truly a gradient.). The product rule yields $\displaystyle\frac{d}{dt}\langle v_{t}^{1},v_{t}^{2}\rangle_{\mu_{t}}$ $\displaystyle=\frac{d}{dt}\int\langle v_{t}^{1},v_{t}^{2}\rangle\,d\mu_{t}$ $\displaystyle=\int\left(\langle\partial_{t}v_{t}^{1},v_{t}^{2}\rangle+\langle v_{t}^{1},\partial_{t}v_{t}^{2}\rangle\right)\,d\mu_{t}+\int\langle v_{t}^{1},v_{t}^{2}\rangle\,d(\partial_{t}\mu_{t}).$ Because $(\mu_{t},v_{t})$ solves the continuity equation, the dual definition of $\operatorname{div}(v_{t}\mu_{t})$ and the divergence theorem give $\int\langle v_{t}^{1},v_{t}^{2}\rangle\,d(\partial_{t}\mu_{t})=\int\langle\nabla v_{t}^{1}\cdot v_{t},v_{t}^{2}\rangle+\langle\nabla v_{t}^{2}\cdot v_{t},v_{t}^{1}\rangle\,d\mu_{t}.$ Together with (P1), it yields $\left\langle\frac{\mathbf{D}}{dt}v_{t}^{1},v_{t}^{2}\right\rangle_{\mu_{t}}+\left\langle v_{t}^{1},\frac{\mathbf{D}}{dt}v_{t}^{2}\right\rangle_{\mu_{t}}=\langle\partial_{t}v_{t}^{1}+\nabla v_{t}^{1}\cdot v_{t},v_{t}^{2}\rangle_{\mu_{t}}+\langle v_{t}^{1},\partial_{t}v_{t}^{2}+\nabla v_{t}^{2}\cdot v_{t}\rangle_{\mu_{t}}.$ From here, it is natural to postulate that $\frac{\mathbf{D}}{dt}v_{t}^{1}=\mathcal{P}_{\mu_{t}}\left(\frac{D}{dt}v_{t}^{1}\right)=\mathcal{P}_{\mu_{t}}\left(\partial_{t}v_{t}^{1}+\nabla v_{t}^{1}\cdot v_{t}\right),$ (4) where $\mathcal{P}_{\mu_{t}}$ is the orthogonal projection onto $T_{\mu_{t}}(\mathcal{P}_{2})$ in $L^{2}(\mu_{t})$. We call $\frac{D}{dt}$ the total derivative. Note that if $v_{t}^{1}=v_{t}=\nabla\varphi_{t}$ then $\partial_{t}v_{t}+\nabla v_{t}\cdot v_{t}=\nabla\left(\partial_{t}\varphi_{t}+\frac{1}{2}|\nabla\varphi_{t}|^{2}\right)\in T_{\mu_{t}}(\mathcal{P}_{2})$, so no projection is necessary, and the total and covariant derivatives coincide; in other words, $\frac{\mathbf{D}^{2}}{dt^{2}}\mu_{t}=\partial_{t}v_{t}+\nabla v_{t}\cdot v_{t}.$ (5) We now examine the torsion-free property (P2), and we follow Gigli’s argument (Gigli,, 2012, Section §5.1), which we partially repeat for convenience of reference. It is quite technical to give meaning directly to a smooth vector field on all of $\mathcal{P}_{2}$, and so to the Levi-Civita connection, but it can be indirectly defined via the covariant derivative, which is all that will be necessary in this work. Let $(\mu^{1}_{t})$ and $(\mu_{t}^{2})$ be two absolutely continuous curves with measures that are absolutely continuous with respect to the Lebesgue measure, such that $\mu^{1}_{0}=\mu^{2}_{0}=\mu$, and let their velocity fields be $(v^{1}_{t})$ and $(v^{2}_{t})$. Since $v_{0}^{1}$, $v_{0}^{2}$ are gradients, they are in $T_{\mu}(\mathcal{P}_{2})$ for every $\mu$, so we may define two new tangent fields along these curves by $\displaystyle u_{t}^{1}$ $\displaystyle=v^{2}_{0},$ $\displaystyle u_{t}^{2}$ $\displaystyle=v^{1}_{0}.$ With this definition, it is reasonable to interpret $\nabla_{u^{1}_{0}}u^{2}_{t}\Big{|}_{t=0}=\frac{\mathbf{D}}{dt}u_{t}^{2}\Big{|}_{t=0},$ with the derivative being taken along $\mu_{t}^{2}$, and similarly for $\nabla_{u^{2}_{0}}u^{1}_{t}$. Now, fix $\varphi$ and consider the functional $F\colon\mu\mapsto\int\varphi\,d\mu$. 
By the continuity equation, the derivative of $F$ along $u_{0}^{2}$ at $\mu$ is $\frac{d}{dt}F[\mu_{t}^{1}]\Big{|}_{t=0}=\int\varphi\,d\left(\partial_{t}\mu_{t}^{1}\right)\Big{|}_{t=0}=\int\langle\nabla\varphi,u_{0}^{2}\rangle\,d\mu.$ Then since the covariant derivative above respects the metric, $\displaystyle u_{0}^{1}(u_{0}^{2}(F))[\mu]$ $\displaystyle=\frac{d}{dt}\langle\nabla\varphi,u_{0}^{2}\rangle_{\mu_{t}^{2}}\Big{|}_{t=0}$ $\displaystyle=\left\langle\frac{\mathbf{D}}{dt}\nabla\varphi,u_{0}^{2}\right\rangle_{\mu_{t}^{2}}+\left\langle\nabla\varphi,\frac{\mathbf{D}}{dt}u_{0}^{2}\right\rangle_{\mu_{t}^{2}}\bigg{|}_{t=0}$ $\displaystyle=\left\langle\nabla^{2}\varphi\cdot u^{1}_{0},u^{2}_{0}\right\rangle_{\mu}+\left\langle\nabla\varphi,\nabla_{u^{1}_{0}}u^{2}_{t}\right\rangle_{\mu}$ where we have used the definition of $\frac{\mathbf{D}}{dt}$ to calculate $\frac{\mathbf{D}}{dt}\nabla\varphi$ on the third line. Performing the same calculation for $u_{0}^{2}(u_{0}^{1}(F))[\mu]$ and subtracting, the first terms cancel since $\nabla^{2}\varphi$ is symmetric, and we get $u_{0}^{1}(u^{2}(F))[\mu]-u_{0}^{2}(u^{1}(F))[\mu]=\left\langle\nabla\varphi,\nabla_{u^{1}_{0}}u^{2}_{t}-\nabla_{u^{2}_{0}}u^{1}_{t}\right\rangle_{\mu}$ Since gradients $\nabla\varphi$ are dense in $T_{\mu}(\mathcal{P}_{2})$ this means that $\frac{\mathbf{D}}{dt}$ is indeed torsion-free. Now we repeat the argument in WFR space. ###### Theorem 2. Let $\mu_{t}$ be an absolutely continuous curve in Wasserstein-Fisher-Rao space satisfying the nonconservative continuity equation with tangent fields $(v_{t},\alpha_{t})$. The covariant derivative is given by $\frac{\mathbf{D}^{2}}{dt^{2}}\mu_{t}=\begin{pmatrix}\partial_{t}v_{t}+\nabla v_{t}\cdot v_{t}+4\alpha_{t}v_{t}\\\ \partial_{t}\alpha_{t}+\frac{1}{2}|\nabla\alpha_{t}|^{2}+2\alpha_{t}^{2}\end{pmatrix}.$ (6) ###### Proof. See appendix A. ∎ Theorem 2 gives a geometrical proof of the dynamical characterization of geodesics in duality with (3) (Chizat,, 2017, Theorem 1.1.15; Liero et al.,, 2018, Theorem 8.12). Indeed, an absolutely continuous geodesic $(\mu_{t})$ in WFR space with tangent $\dot{\mu}_{t}=(\nabla\alpha_{t},\alpha_{t})$ satisfies the Hamilton-Jacobi equation almost everywhere, $\partial_{t}\alpha_{t}+\frac{1}{2}|\nabla\alpha_{t}|^{2}+2\alpha_{t}^{2}=0,$ or, equivalently, the curve $(\mu_{t})$ is autoparallel: $\nabla_{\dot{\mu}_{t}}\dot{\mu}_{t}=\frac{\mathbf{D}^{2}}{dt^{2}}\mu_{t}=0.$ (7) ## 3 E-Splines and P-splines In this section, we define notions of $E$- and $P$-splines. As in the $W_{2}$ case, we can define an intrinsic notion of curvature-minimizing interpolators, which we term $E$-splines, by $\inf_{(\mu_{t},\mathbf{v}_{t})}\int_{0}^{1}\left\lVert\frac{\mathbf{D}}{dt}\mathbf{v}_{t}\right\rVert_{\mu_{t}}^{2}\,dt\text{ s.t. }\mu_{t_{i}}=\mu_{i}$ (8) Though the characterization (6) yields an explicit objective function, there is no practical way to optimize (8). Thus, we first define an analogue of the path splines (P-splines) introduced in Chen et al., (2018); Benamou et al., (2019). We give a brief introduction to them here (see Chewi et al., 2020a, for more details). ### 3.1 Geodesics and $P$-splines in $W_{2}$ In the Wasserstein space $W_{2}$ over a metric space $(\mathcal{X},d)$ it is known that geodesics can be represented as measures over paths (see Lisini,, 2007). Specifically, let $\Omega$ be the set of all absolutely continuous paths in $\mathcal{X}$, let $\ell$ be the length functional on $\Omega$, i.e. 
for $\omega\in\Omega$, $\ell(\omega):=\int_{0}^{1}|\dot{\omega}_{t}|\,dt$, and let $e_{t}$ be the time evaluation functional at time $t$. If $P^{*}$ is a measure over paths that solves $\inf_{P\in\mathcal{P}(\Omega)}\int\ell\,dP\text{ s.t. }(e_{t})_{\\#}P=\mu_{t}\text{ for }t\in\\{0,1\\},$ then the $W_{2}$ geodesic between $\mu_{0}$ and $\mu_{1}$ is given by $\mu_{t}=(e_{t})_{\\#}P^{*}$. From this starting point and in the case of $\mathcal{X}=\mathbb{R}^{d}$, Chen et al., (2018) define $P$-splines as the minimizers of $\inf_{P\in\mathcal{P}(\Omega)}\int c\,dP\text{ s.t. }(e_{t_{i}})_{\\#}P=\mu_{i},$ (9) where this time $\Omega$ is the set of paths with absolutely continuous derivative and $c$ is the curvature cost $c(\omega)=\frac{1}{2}\int_{0}^{1}|\ddot{\omega}_{t}|^{2}\,dt$. Letting $W_{2}$ $E$-splines be defined by $\inf_{(\mu_{t},v_{t})}\int_{0}^{1}\left\lVert\frac{\mathbf{D}}{dt}v_{t}\right\rVert_{\mu_{t}}^{2}\,dt\text{ s.t. }\mu_{t_{i}}=\mu_{i},$ (10) Chen et al., (2018) relate the two problems as follows: ###### Theorem 3. (Chen et al.,, 2018, Section 5.3) Let $(\mu_{t})$ be some curve in $W_{2}$ with derivative $(v_{t})$. Then there is a measure $P\in\mathcal{P}(\Omega)$ such that $(e_{t})_{\\#}P=\mu_{t}$, and the objective in (9) is equal to that of (10). Furthermore, $P$ can be defined from the flow maps from the continuity equation. This shows that problem (9) is a relaxation of (10). In Chewi et al., 2020a (Proposition 2), it is shown that this is not tight, in the sense that there exist (Gaussian) measures $\mu_{1},\dots,\mu_{n}$ that are a solution to problem (10), yet are suboptimal for (9). This formulation cannot be directly extended to $\operatorname{WFR}$ since, in (9), the marginals of $P$ must have the same mass at all times. The solution is to consider curves in a different base space, introduced in Liero et al., (2016); Chizat, (2017), which we briefly describe. ### 3.2 The Cone Space The cone space $\mathfrak{C}$ is the manifold $\mathbb{R}^{d}\times\mathbb{R}_{+}$, with $\mathbb{R}^{d}\times\\{0\\}$ identified as a single point.444Fraktur font characters will be reserved for objects relating to the cone space, consistent with Liero et al., (2018). Points $(x,r)\in\mathfrak{C}$ are thought of as tuples of position and mass. The distance is given by $d_{\mathfrak{C}}\left((x_{0},r_{0}),(x_{1},r_{1})\right)^{2}=r_{0}^{2}+r_{1}^{2}-2r_{0}r_{1}\cos(|x_{0}-x_{1}|\wedge\pi).$ We denote the vertex of the cone (the point $\mathbb{R}^{d}\times\\{0\\}$) by $\mathfrak{0}$. Notice that if $|x_{0}-x_{1}|\geq\pi$ then $d_{\mathfrak{C}}((x_{0},r_{0}),(x_{1},r_{1}))=r_{0}+r_{1}$. Since for any $(x,r)$ we have $d_{\mathfrak{C}}((x,r),\mathfrak{0})=r$, this reflects the fact that if $x_{0}$ and $x_{1}$ are far away then the shortest path is to the vertex and back out. Writing $(v,p)=\frac{d}{dt}(x,r)$, the Riemannian metric on the cone space is given by $\langle(v_{1},p_{1}),(v_{2},p_{2})\rangle_{(x,r)}=\langle v_{1},v_{2}\rangle r^{2}+p_{1}p_{2}.$ (11) Given a measure $\lambda\in\mathcal{M}(\mathfrak{C})$, we can project it to a measure $\mathfrak{P}\lambda\in\mathcal{M}(\mathbb{R}^{d})$ via $\int f(x)\,d\mathfrak{P}\lambda(x)=\int r^{2}f(x)\,d\lambda(x,r).$ The measure $\lambda$ is then called a lift of $\mathfrak{P}\lambda$. The presence of the term $r^{2}$, as opposed to $r$, is to simplify the parametrization of $\mathfrak{C}$. 
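To make the cone-space conventions concrete, here is a small sketch (our own illustration, not the authors' code) of the distance $d_{\mathfrak{C}}$ and the projection $\mathfrak{P}$ for a discrete measure on the cone, stored as arrays of base points, radii, and weights.

```python
# Cone distance and projection P for discrete cone measures (illustrative sketch).
import numpy as np

def cone_distance(x0, r0, x1, r1):
    """d_C((x0, r0), (x1, r1)) with the base angle truncated at pi."""
    ang = min(np.linalg.norm(np.atleast_1d(x0) - np.atleast_1d(x1)), np.pi)
    return np.sqrt(r0**2 + r1**2 - 2.0 * r0 * r1 * np.cos(ang))

def project(points, radii, weights):
    """P(lambda): an atom (x, r) with weight w contributes mass w * r^2 at x."""
    return points, weights * radii**2

x = np.array([0.3])
print(cone_distance(x, 2.0, x, 0.0))   # distance to the vertex equals r = 2.0
print(project(x[None, :], np.array([2.0]), np.array([1.0])))  # mass r^2 = 4.0 at x
```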
Observe that there are many possible liftings of a measure $\mu\in\mathcal{M}(\mathbb{R}^{d})$ to $\lambda\in\mathcal{M}(\mathfrak{C})$, the most obvious of which is $d\lambda(x,r)=\delta_{1}(r)\cdot d\mu(x)$, where $\delta_{p}(\cdot)$ stands for the Dirac point measure at $p$. As $\mathfrak{C}$ is (save for the point $\mathfrak{0}$) a Riemannian manifold we can define its $W_{2}$ metric as usual $W_{\mathfrak{C},2}^{2}(\lambda,\eta)=\inf_{\gamma\in\Pi(\lambda,\eta)}\int d_{\mathfrak{C}}^{2}\,d\gamma,$ where $\Pi(\lambda,\eta)$ is the set of all couplings of measures $\lambda$ and $\eta$ on the cone. The following useful characterization is proved in Liero et al., (2016). ###### Theorem 4. (Liero et al.,, 2016, Theorem 7) For any measures $\mu_{0},\mu_{1}\in\mathcal{M}_{+}(\mathbb{R}^{d})$, we have $\operatorname{WFR}(\mu_{0},\mu_{1})=\inf_{\lambda_{0},\lambda_{1}}W_{\mathfrak{C},2}(\lambda_{0},\lambda_{1}),$ where the $\lambda_{i}\in\mathcal{P}(\mathfrak{C})$ project to $\mu_{i}$, $\mathfrak{P}\lambda_{i}=\mu_{i}$ for $i=0,1$. Furthermore, there are optimal lifts $\lambda_{0},\lambda_{1}$, and an optimal coupling $\gamma$ between them. This allows us to characterize WFR geodesics in the same way as in Euclidean space. ###### Proposition 1. (Liero et al.,, 2016, Section 3.4) Let $\mu_{0},\mu_{1}\in\mathcal{M}_{+}(\mathbb{R}^{d})$, and let $\gamma\in\mathcal{M}(\mathfrak{C}^{2})$ be the optimal coupling of the optimal lifts in Theorem 4. For each pair of points $z_{0},z_{1}\in\mathfrak{C}$, let $g^{z_{0},z_{1}}_{t}$ be the geodesic between them. Then the curve of measures $(\mu_{t})_{t}$ defined by $\mu_{t}=\mathfrak{P}\left[(g^{z_{0},z_{1}}_{t})_{\\#}\gamma\right]$ is a geodesic in $\operatorname{WFR}$ between $\mu_{0}$ and $\mu_{1}$. We may view the above proposition as saying that $\operatorname{WFR}$ geodesics are identified with certain measures supported on geodesics in $\mathfrak{C}$. In fact, Liero et al., (2016) prove that all curves have such a representation, and it is faithful to first order. ###### Theorem 5. (Liero et al.,, 2016, Theorem 15) Let $(\mu_{t})$ be an absolutely continuous curve in $\operatorname{WFR}$. Then there is a measure $P\in\mathcal{P}(\Omega_{\mathfrak{C}})$ such that $\mu_{t}=\mathfrak{P}\left[(e_{t})_{\\#}P\right]$ and for a.e. $t\in[0,1]$ $|\dot{\mu}_{t}|^{2}=\int|\dot{z}_{t}|^{2}\,dP(z)$ where $|\dot{\mu}_{t}|$ is the metric derivative, which is equal to the intrinsic Riemannian quantity $\lVert\dot{\mu}_{t}\rVert_{\mu_{t}}$. As in the $W_{2}$ case, this analogy can be extended profitably to second order, that is, to $|\ddot{\mu}_{t}|$ in the WFR sense. ### 3.3 $P$-Splines on $\operatorname{WFR}$ In light of the dual characterization of curvature in Wasserstein space of Theorem 3 and the careful reformulation of WFR as $W_{2}$ on the cone of Proposition 1 and Theorem 5, we define $P$-splines in $\operatorname{WFR}$ by $\inf_{P\in\mathcal{P}(\Omega_{\mathfrak{C}})}\iint_{0}^{1}|\ddot{z}_{t}|^{2}\,dt\,dP(z)\text{ s.t. }\mathfrak{P}\left[(e_{t_{i}})_{\\#}P\right]=\mu_{i}.$ (12) We prove that this is indeed a relaxation of the $E$-spline problem (8). ###### Theorem 6. Let $\mu_{t}$ be a sufficiently smooth curve in $\operatorname{WFR}$. Then there is a measure $P\in\mathcal{P}(\Omega_{\mathfrak{C}})$ such that $\mathfrak{P}\left[(e_{t})_{\\#}P\right]=\mu_{t}$ for all $t$, and the $E$-cost of $\mu$ is equal to the $P$-cost of $P$. The measure $P$ is induced by the flow maps associated to the curve $\mu_{t}$. ###### Proof. See appendix B. 
∎ To understand the proof (and complete the statement of Theorem 6) we must define the flow maps. In $W_{2}$, curves of measures are interpretable as particle flows via the flow maps defined by $\dot{X}_{t}=v_{t}(X_{t}),\,X_{0}=\operatorname{Id}.$ (13) It then holds that $\mu_{t}=(X_{t})_{\\#}\mu_{0}.$ There is a similar characterization for sufficiently smooth paths in $\operatorname{WFR}$ due to Maniglia, (2007). ###### Proposition 2. (Maniglia,, 2007, Proposition 3.6) Let $v\in L^{1}(W^{1,\infty}(\mathbb{R}^{d},\mathbb{R}^{d}),[0,1])$ be a vector field and $\alpha\in\mathcal{C}(\mathbb{R}^{d}\times[0,1])$ a bounded locally Lipschitz scalar function. For $\mu_{0}\in\mathcal{M}_{+}(\mathbb{R}^{d})$, there is a unique weak solution to the nonconservative continuity equation (2) with initial measure $\mu_{0}$. Furthermore, this satisfies $\mu_{t}=(X_{t})_{\\#}(R_{t}^{2}\cdot\mu_{0}),$ (14) for the flow map $(X_{t})$ and scalar field $(R_{t})$ which solve the ODE system $\begin{cases}\dot{X}_{t}=v_{t}(X_{t}),&X_{0}=\operatorname{Id}.\\\ \dot{R}_{t}=2\alpha_{t}(X_{t})\,R_{t},&R_{0}=1.\end{cases}$ Analogous to what happens in the Wasserstein case, Theorem 6 shows that the spline problem in Lagrangian terms can be relaxed in geometric terms via the flow map characterization of Proposition 2. Thus, under absolute continuity of the measures and the curve, the Lagrangian and geometric formulations agree up to second order. At first order there are three equivalent expressions, namely the metric derivative $|\mu_{t}^{\prime}|^{2}$, the WFR derivative $\|\dot{\mu}_{t}\|_{\mu_{t}}^{2}$ and the Lagrangian form $\int|\dot{z}_{t}|^{2}\,dP^{*}(z)$. A similar equivalence holds for the second order derivatives, i.e. the WFR covariant derivative $\|\nabla_{\dot{\mu}_{t}}\dot{\mu}_{t}\|_{\mu_{t}}^{2}$ and the Lagrangian form $\int|\ddot{z}_{t}|^{2}\,dP^{*}(z)$. We conjecture that this equivalence holds for higher order derivatives. ## 4 Transport Splines In this section, we define a tractable and smooth interpolant of measures. Piecewise linear interpolation in $W_{2}$ has the virtue of being simple to construct, which allows for additional regularization, as is often required in applications (Schiebinger et al.,, 2019; Lavenant et al.,, 2021). However, it yields a curve that is not smooth. On the other hand, due to the inherent difficulty in solving the $P$-spline problem over $W_{2}$, Chewi et al., 2020a define a different, and substantially more tractable, interpolant that they call transport splines. Very briefly, they proceed as follows: 1. Start with measures $\mu_{i}$ at times $t_{i}$. 2. Compute the $W_{2}$-optimal couplings $\gamma_{i\to i+1}$ from $\mu_{i}$ to $\mu_{i+1}$, which are induced by maps $T_{i\to i+1}$ provided the $\mu_{i}$ are absolutely continuous. Let $T_{i}=T_{i-1\to i}\circ\cdots\circ T_{0\to 1}$. 3. Define for each $x$ the flow map $X_{t}=S_{t}[x,T_{1}x,\ldots,T_{N}x]$, where $S_{t}$ is the natural cubic spline interpolant in Euclidean space (with the base times $t_{i}$ implicit). 4. Output $\mu_{t}=(X_{t})_{\\#}\mu_{0}$. Not only is computing transport splines very fast, but if the measures $\mu_{i}$ are sampled from an underlying smooth curve in $W_{2}$, then the transport spline converges in supremum norm to the true curve at rate $O(\delta^{2})$, where $\delta$ is the maximum distance between successive times $t_{i}$. Importantly for applications, the curve of measures is smooth, which is not true for a piecewise-geodesic interpolant. 
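To make the four steps above concrete, here is a toy one-dimensional sketch (our own simplification, not the authors' implementation): with equal-size empirical samples, each $W_{2}$-optimal map is the monotone rearrangement, so composing the consecutive maps amounts to tracking sorted particles through the knots, and step 3 reduces to fitting one natural cubic spline per particle.

```python
# Toy 1-D transport spline following steps 1-4 above (our own simplified sketch).
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
t_knots = np.array([0.0, 1.0, 2.0, 3.0])
# Empirical measures mu_i with equal weights; sorting gives the 1-D optimal maps.
samples = [np.sort(rng.normal(loc=m, scale=s, size=200))
           for m, s in [(0.0, 1.0), (1.0, 1.5), (3.0, 1.0), (4.0, 0.5)]]

# Step 2: the k-th sorted particle of mu_0 is matched to the k-th of every mu_i.
trajectories = np.stack(samples, axis=1)            # shape (n_particles, n_knots)

# Step 3: natural cubic spline through each particle trajectory.
spline = CubicSpline(t_knots, trajectories, axis=1, bc_type='natural')

# Step 4: the interpolant at time t is the empirical measure of the positions X_t.
X_half = spline(0.5)
print(X_half.mean(), X_half.std())
```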
Other virtues, and their connection with $P$- and $E$-splines, are explored in Chewi et al., 2020a . In this section we aim to define an analogous interpolant in $\operatorname{WFR}$. The characterization in Theorem 4 is not directly useful here: even if we knew the optimal lifts $\lambda_{0}$, $\lambda_{1}$, it is not clear that the optimal coupling is induced by a map. Even if it were, that would not be enough for our purposes: we would want to associate a unique mass $r$ to each initial point $x$, and map the position-mass pair $(x,r(x))$ to another position-mass pair $(x^{\prime},r^{\prime}(x^{\prime}))$, with $r^{\prime}$ the unique mass at $x^{\prime}$. Instead we turn to a third characterization of $\operatorname{WFR}$ given by Liero et al., (2018). ###### Theorem 7. (Liero et al.,, 2016, Theorem 8; Liero et al.,, 2018, Theorem 6.6) Let $\mu_{0}$ and $\mu_{1}$ be measures, and define $c(x,y)=-2\log\cos\left(|x-y|\wedge\tfrac{\pi}{2}\right).$ Then $\operatorname{WFR}(\mu_{0},\mu_{1})^{2}=\inf_{\eta\in\mathcal{M}_{+}}\operatorname{KL}(\eta_{0}\;\|\;\mu_{0})+\operatorname{KL}(\eta_{1}\;\|\;\mu_{1})+\int c(x,y)\,d\eta(x,y).$ (15) Furthermore, the infimum is achieved, and if $\mu_{0}$ and $\mu_{1}$ are absolutely continuous with respect to the Lebesgue measure, the optimal $\eta^{*}$ is unique and is induced by a map. The cost $c$ differs from the term in the cone metric $d_{\mathfrak{C}}$ by taking the minimum against $\tfrac{\pi}{2}$, not $\pi$. This is the difference between transport of points in $\mathfrak{C}$ and transport of measures on $\mathfrak{C}$. To transport $z_{0}=(x_{0},r_{0})$ to $z_{1}=(x_{1},r_{1})$ in $\mathfrak{C}$ we may reduce the mass at $x_{0}$ to $0$, then increase it again at $x_{1}$ to $r_{1}$, whereas to transport $\delta_{z_{0}}$ to $\delta_{z_{1}}$ these can be done simultaneously (giving a superposition of two deltas), so that the WFR geodesic is at each time a combination of two deltas. The superposition results in a lower overall WFR cost when $|x-y|$ is less than $\frac{\pi}{2}$. The optimal coupling $\eta$ in Theorem 7 and the optimal coupling $\gamma$ in Theorem 4 are intimately related, as another theorem of Liero et al., (2018) shows. ###### Theorem 8. (Liero et al.,, 2018, Theorem 6.2) Suppose $\eta$ minimizes the objective in Theorem 7, and let $\eta_{i}=\pi_{i}\eta$ be its marginals. Write $\mu_{i}=\sigma_{i}\eta_{i}+\mu_{i}^{\bot}$ where $\sigma_{i}=d\mu_{i}/d\eta_{i}$ and $\mu_{i}^{\bot}$ is mutually singular with $\eta_{i}$. Define the plan $\gamma_{\eta}$ by $\displaystyle d\gamma_{\eta}(z_{0},z_{1})$ $\displaystyle=\delta_{\sqrt{\sigma_{0}(x_{0})}}(r_{0})\cdot\delta_{\sqrt{\sigma_{1}(x_{1})}}(r_{1})\cdot d\eta(x_{0},x_{1})$ $\displaystyle\quad+\delta_{1}(r_{0})\cdot d\mu_{0}^{\bot}(x_{0})\cdot\delta_{\mathfrak{0}}(z_{1})+\delta_{1}(r_{1})\cdot d\mu_{1}^{\bot}(x_{1})\cdot\delta_{\mathfrak{0}}(z_{0})$ Then $\gamma_{\eta}$ is optimal for the objective in Theorem 4. Now, suppose $\mu_{0},\mu_{1}$ are such that $\mu_{i}\ll\eta_{i}$, so that $\mu_{i}^{\bot}=0$ in the theorem above. This happens, for instance, when the conditions of Proposition 2 are satisfied along the geodesic between them. 
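As a small sanity check of the static formulation (15) (our own illustration): for two weighted Diracs $a\delta_{x}$ and $b\delta_{y}$ the optimal $\eta$ is a single atom of mass $m$ at $(x,y)$, and minimizing over $m$ by hand gives $\operatorname{WFR}^{2}=a+b-2\sqrt{ab}\cos(|x-y|\wedge\tfrac{\pi}{2})$; the snippet below reproduces this numerically.

```python
# Numerical check of (15) for two weighted Diracs (our own illustration).
import numpy as np
from scipy.optimize import minimize_scalar

def objective(m, a, b, d):
    c = -2.0 * np.log(np.cos(min(d, np.pi / 2)))     # transport cost c(x, y)
    kl = lambda p, q: p * np.log(p / q) - p + q      # generalized KL for masses
    return kl(m, a) + kl(m, b) + m * c

a, b, d = 2.0, 0.5, 0.7
res = minimize_scalar(lambda m: objective(m, a, b, d),
                      bounds=(1e-9, 10.0), method='bounded')
closed_form = a + b - 2.0 * np.sqrt(a * b) * np.cos(min(d, np.pi / 2))
print(res.fun, closed_form)      # the two values should agree
```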
The optimal $\eta$ for Theorem 7 is supported on a map $T$, and thus an optimal $\gamma_{\eta}$ for Theorem 4 is supported on the assignment $\left(x_{0},r_{0}(x_{0})\right)\to\left(T(x_{0}),r_{1}(T(x_{0}))\right)$ which associates to each $x_{0}$ a unique mass $r_{0}(x_{0})=\sqrt{\sigma_{0}(x_{0})}$ and maps it to another unique location and mass $r_{1}(T(x_{0}))=\sqrt{\sigma_{1}(T(x_{0}))}$. This is much stronger than $\gamma_{\eta}$ being induced by a map on $\mathfrak{C}$. Now, define the operator $S^{\mathfrak{C}}_{t}[z_{0},\ldots,z_{N}]$ to be the Riemannian cubic interpolant in $\mathfrak{C}$ of the points $z_{0},\ldots,z_{N}$. We define transport splines over $\operatorname{WFR}$ by the following procedure: 1. For each $i$ solve (15) between $\mu_{i}$ and $\mu_{i+1}$ to obtain the optimal coupling $\eta_{i\to i+1}$, and thus the map $T_{i\to i+1}$. Let $T_{i}=T_{i-1\to i}\circ\cdots\circ T_{0\to 1}$. 2. For each $x$, form a path $\widetilde{X}_{t}(x)$ interpolating the $x_{i}=T_{i}(x)$ and a mass path $\widetilde{R}_{t}(x)$ interpolating the masses $r_{i}(x_{i})$. 3. Define the interpolating curve on $[t_{i-1},t_{i}]$ by computing the cone spline $(X_{t},R_{t})$ with endpoint velocity constraints $\dot{X}_{t_{j}}(x)=\frac{d}{dt}\widetilde{X}_{t}(x)|_{t=t_{j}}$ and $\dot{R}_{t_{j}}(x)=\frac{d}{dt}\widetilde{R}_{t}(x)|_{t=t_{j}}$ for $j=i-1,i$ and $i=1,\dots,n$. In the first step, we obtain a family of transport maps and corresponding masses at each point $x$ from the coupling $\eta_{i\to i+1}$. Notice that two adjacent couplings $\eta_{i-1\to i}$ and $\eta_{i\to i+1}$ may not have the same $i$-th marginal, so the mass $\sqrt{\sigma_{i}}$ may not be uniquely defined. We remedy this by rescaling the coupling to have unit mass, so that the coupling is supported on $\sqrt{\mu_{i}(x_{i})}$. Lifted plans on the cone are scale-invariant; thus the optimality of (15) is preserved under this scaling, as explained in Liero et al., (2016, Section §3.3). In the next step, we would ideally interpolate the sequence $(x_{i},r_{i})_{i}$ using Riemannian cubics on $\mathfrak{C}$, but these are difficult to compute; unlike the Euclidean case, there appears to be no closed formula. We propose instead approximating the velocities at each knot from the (Euclidean) spline curves of $X$ and $R$ independently (Chewi et al., 2020a, , Appendix D). In step 3, we then solve the cone spline problem by specifying the velocities computed in the previous step and using De Casteljau’s algorithm. ### 4.1 De Casteljau’s algorithm on the cone In Euclidean space, an interpolating curve with minimal curvature is completely determined by the endpoint velocities. De Casteljau’s algorithm computes points on this curve by iteratively finding points along the paths between the endpoints and two other control points. We describe the procedure for reference. The De Casteljau curve that interpolates $x_{0}$ and $x_{3}$ with speeds $v_{0}$ and $v_{3}$ at times $t=0$ and $t=1$ respectively is constructed as follows: 1. Compute control points $x_{1}=x_{0}+\frac{v_{0}}{3}$ and $x_{2}=x_{3}-\frac{v_{3}}{3}$. 2. Compute the first intermediate points $w_{i}(t)=(1-t)x_{i}+tx_{i+1}$, for $i=0,1,2$. 3. Compute the second intermediate points $u_{j}(t)=(1-t)w_{j}+tw_{j+1}$, for $j=0,1$. 4. Compute the spline as $p(t)=(1-t)u_{0}+tu_{1}$. We extend this definition to the cone by replacing the linear interpolation by a geodesic interpolation (Absil et al.,, 2016; Gousenbourger et al.,, 2019). 
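For concreteness, a minimal sketch of the Euclidean construction just described (our own illustration); on the cone, each linear interpolation below is replaced by the geodesic interpolation of Theorem 9.

```python
# Euclidean De Casteljau cubic following steps 1-4 above (illustrative sketch).
import numpy as np

def de_casteljau(x0, x3, v0, v3, t):
    """Cubic through x0 (t=0) and x3 (t=1) with endpoint velocities v0, v3."""
    pts = [x0, x0 + v0 / 3.0, x3 - v3 / 3.0, x3]     # step 1: control points
    for _ in range(3):                               # steps 2-4: nested lerps
        pts = [(1 - t) * p + t * q for p, q in zip(pts[:-1], pts[1:])]
    return pts[0]

x0, x3 = np.array([0.0, 0.0]), np.array([1.0, 2.0])
v0, v3 = np.array([3.0, 0.0]), np.array([0.0, 3.0])
eps = 1e-6
print(de_casteljau(x0, x3, v0, v3, 1.0))                    # = x3
print((de_casteljau(x0, x3, v0, v3, eps) - x0) / eps)       # ~ v0
```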
It is known that the geodesic has a closed-form expression, given by:

###### Theorem 9.

(Liero et al., 2018, Section 8.1) The geodesic interpolator of the points $z_{j}=(x_{j},r_{j})$ is given by $z\left(t\right)=\left(x\left(t\right),r\left(t\right)\right)$, where $\displaystyle x\left(t\right)$ $\displaystyle=\left(1-\rho\left(t\right)\right)x_{0}+\rho\left(t\right)x_{1},$ $\displaystyle r\left(t\right)^{2}$ $\displaystyle=(1-t)^{2}r_{0}^{2}+t^{2}r_{1}^{2}+2t(1-t)r_{0}r_{1}\cos\left|x_{0}-x_{1}\right|,$ $\displaystyle\rho(t)$ $\displaystyle=\frac{1}{|x_{1}-x_{0}|}\arccos\left(\frac{(1-t)r_{0}+tr_{1}\cos|x_{1}-x_{0}|}{r(t)}\right).$

Theorem 9 is only valid for distances $|x_{1}-x_{0}|<\pi$; otherwise the geodesic simply goes from $x_{0}$ through the tip of the cone $\mathfrak{o}$ and back to $x_{1}$. In terms of measures, this behavior corresponds to pure growth-decay and no transport. From the discussion above about geodesics on the cone and geodesics on WFR, and because we want to model scenarios where both transport and growth are present at every moment, we restrict the computation of cone splines to points that are contained within a ball of radius $\pi/2$. We will denote the geodesic interpolation of points $z_{0}$ and $z_{1}$ at time $t$ as $z_{0}\\#_{t}z_{1}$. In order to extend step 1 above to the cone, we must first understand how the velocities of a De Casteljau spline at $t=0,1$ relate to the control points $x_{1},x_{2}$.

###### Proposition 3.

Let $(x_{0},r_{x_{0}})$ and $(x_{3},r_{x_{3}})$ be given points on the cone, and $(x_{1},r_{x_{1}})$ and $(x_{2},r_{x_{2}})$ be control points. Let $\displaystyle(w_{i}(t),r_{w_{i}}(t))$ $\displaystyle:=(x_{i},r_{x_{i}})\\#_{t}(x_{i+1},r_{x_{i+1}}),\quad i=0,1,2$ $\displaystyle(u_{j}(t),r_{u_{j}}(t))$ $\displaystyle:=(w_{j},r_{w_{j}})\\#_{t}(w_{j+1},r_{w_{j+1}}),\quad j=0,1$ $\displaystyle(p(t),r_{p}(t))$ $\displaystyle:=(u_{0},r_{u_{0}})\\#_{t}(u_{1},r_{u_{1}}),$ be the De Casteljau spline on the cone. Then $\displaystyle\dot{p}(0)$ $\displaystyle=3\frac{r_{x_{1}}}{r_{x_{0}}}\frac{\sin|x_{0}-x_{1}|}{|x_{0}-x_{1}|}(x_{1}-x_{0})$ (16) $\displaystyle\dot{r}_{p}(0)$ $\displaystyle=3(r_{x_{1}}\cos|x_{0}-x_{1}|-r_{x_{0}})$ (17) $\displaystyle\dot{p}(1)$ $\displaystyle=3\frac{r_{x_{2}}}{r_{x_{3}}}\frac{\sin|x_{3}-x_{2}|}{|x_{3}-x_{2}|}(x_{3}-x_{2}),$ (18) $\displaystyle\dot{r}_{p}(1)$ $\displaystyle=3\left(r_{x_{3}}-r_{x_{2}}\cos|x_{3}-x_{2}|\right).$ (19)

###### Proof.

See Appendix C. ∎

In particular, on an interval $[t_{i},t_{i+1}]$, in order to achieve the position and mass velocities given by $v_{i}=\frac{d}{dt}\widetilde{X}_{t}(x)|_{t=t_{i}}$ and $s_{i}=\frac{d}{dt}\widetilde{R}_{t}(x)|_{t=t_{i}}$, we have to choose the control point $(x_{1},r_{1})$ as $x_{1}=x_{t_{i}}+c_{1}\frac{v_{i}}{\|v_{i}\|}$ and $r_{1}=c_{2}r_{t_{i}}$, where $c_{1},c_{2}$ are solved from (16-17): $\displaystyle c_{1}$ $\displaystyle=\arctan\left(\frac{\|v_{i}\|}{\tfrac{s_{i}}{r_{t_{i}}}+3\delta^{-1}}\right),$ $\displaystyle c_{2}$ $\displaystyle=\tfrac{\delta}{3}\sqrt{\|v_{i}\|^{2}+\left(\tfrac{s_{i}}{r_{t_{i}}}+3\delta^{-1}\right)^{2}},$ where $\delta=t_{i+1}-t_{i}$.
Similarly for $t_{i+1}$, we choose $(x_{2},r_{2})$ with $x_{2}=x_{t_{i+1}}-c_{3}\tfrac{v_{i+1}}{\|v_{i+1}\|}$, $r_{2}=c_{4}r_{t_{i+1}}$, and solve (18-19): $\displaystyle c_{3}$ $\displaystyle=\arctan\left(\frac{\|v_{i+1}\|}{3\delta^{-1}-\tfrac{s_{i+1}}{r_{t_{i+1}}}}\right),$ $\displaystyle c_{4}$ $\displaystyle=\tfrac{\delta}{3}\sqrt{\|v_{i+1}\|^{2}+\left(3\delta^{-1}-\tfrac{s_{i+1}}{r_{t_{i+1}}}\right)^{2}}.$

The derivation of the above equations requires that $c_{1},c_{3}\geq 0$; this reflects the fact that $x_{1}-x_{0}$ and $x_{3}-x_{2}$ must point in the right direction. It translates to $s_{i}\geq-3\delta^{-1}r_{t_{i}}$ and $s_{i+1}\leq 3\delta^{-1}r_{t_{i+1}}$, which constrains the velocities that are achievable by De Casteljau's algorithm. To see why this happens, notice that, unlike in the Euclidean case, the second variable $r$ is constrained to be non-negative. This limits the possible masses of the control points $x_{1},x_{2}$, which in turn limits the endpoint derivatives $\dot{r}_{p}(0)$ and $\dot{r}_{p}(1)$ (equations (17), (19)). There are two limitations to the cone transport spline problem: the maximum $\frac{\pi}{2}$ diameter in the position space and the bounds on the knot velocities in the mass space, as described above. Furthermore, the velocities are estimated from two independent spline problems. A simple way to alleviate these limitations is to 1) scale down all distances to a space with a smaller diameter and 2) scale up the knot times to allow for smaller endpoint derivatives.

## 5 Numerical Experiments

In this section, we illustrate the behavior of the cone transport spline computed via De Casteljau's algorithm. We showcase different aspects of the paths qualitatively by considering spline interpolation of measures in one and two dimensions. In Section 5.1 we consider interpolation with varying knot times. In Section 5.2 we show two alternative ways of solving a spline problem given by discrete measures, using a two-dimensional example. (The MATLAB code used for generating the figures can be found at https://github.com/felipesua/WFR_splines.)

### 5.1 Time effects and Linear interpolation

Figure 1: De Casteljau cone interpolation of measures $\mu_{1},\dots,\mu_{4}$. Measure values are shown as a heat map (top) and in a three-dimensional plot (bottom) as a function of time. Knot times $(t_{1},\dots,t_{4})$ are $(0,1,2,10)$ (left), $(0,\tfrac{10}{3},\tfrac{20}{3},10)$ (center), and $(0,8,9,10)$ (right). $\sigma=0.06$.

In this subsection, we look at the effect of the distribution of the knot times on a one-dimensional cone spline problem. The plan obtained from solving (7) is induced by a map only when the measures are absolutely continuous with respect to the Lebesgue measure. Indeed, the WFR plan from a single particle to two particles cannot be induced by a map. The same effect is present when solving problem (7) computationally. Here we choose to solve problem (7) using entropic regularization (Chizat et al., 2018; Peyré et al., 2019) and derive an approximation of the map from its solution by taking the conditional expectation $T(x)=\mathbb{E}_{\eta}[y\,|\,x]$ (we may assume $\eta$ is a probability measure by the scale-invariance property of the projection from the cone), inspired by Pooladian and Niles-Weed (2021).
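The two computational ingredients behind the figures can be sketched compactly in code; the following Python snippets are ours and not part of the original text, and all names are illustrative. The first computes the barycentric approximation $T(x_{i})=\mathbb{E}_{\eta}[y\,|\,x_{i}]$ from a discrete coupling matrix on a gridded domain (however the coupling was obtained, e.g. by an entropic scaling algorithm); the knot masses are then read off from the densities $\sigma_{i}$ as in Theorem 8.

```python
import numpy as np

def barycentric_map(coupling, Y):
    """Approximate T(x_i) = E_eta[y | x_i] from a discrete coupling matrix.

    coupling[i, j] ~ eta(x_i, y_j); Y holds one target support point per row.
    Rows carrying no mass (points that are only destroyed, never transported) stay NaN.
    """
    eta0 = coupling.sum(axis=1)                   # first marginal of eta
    T = np.full((coupling.shape[0], Y.shape[1]), np.nan)
    ok = eta0 > 0
    T[ok] = coupling[ok] @ Y / eta0[ok, None]     # row-wise conditional expectation
    return T
```

The second evaluates the cone geodesic $z_{0}\\#_{t}z_{1}$ of Theorem 9 and the nested De Casteljau construction of Section 4.1; each point $z$ is a (position, mass) pair, and the control points are assumed to have been built from the knot velocities via $c_{1},\dots,c_{4}$ above.

```python
import numpy as np

def cone_geodesic(z0, z1, t):
    """Geodesic point z0 #_t z1 on the cone (Theorem 9); assumes |x1 - x0| < pi."""
    (x0, r0), (x1, r1) = z0, z1
    theta = np.linalg.norm(x1 - x0)
    r_sq = (1 - t) ** 2 * r0 ** 2 + t ** 2 * r1 ** 2 \
           + 2 * t * (1 - t) * r0 * r1 * np.cos(theta)
    r = np.sqrt(max(r_sq, 0.0))
    if theta < 1e-12 or r < 1e-12:     # coinciding positions, or passage through the tip
        return (1 - t) * x0 + t * x1, r
    arg = ((1 - t) * r0 + t * r1 * np.cos(theta)) / r
    rho = np.arccos(np.clip(arg, -1.0, 1.0)) / theta
    return (1 - rho) * x0 + rho * x1, r

def cone_de_casteljau(z0, z1, z2, z3, t):
    """De Casteljau point on the cone: nested geodesic interpolation of four cone points."""
    w = [cone_geodesic(z0, z1, t), cone_geodesic(z1, z2, t), cone_geodesic(z2, z3, t)]
    u = [cone_geodesic(w[0], w[1], t), cone_geodesic(w[1], w[2], t)]
    return cone_geodesic(u[0], u[1], t)
```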
Let $\gamma_{\sigma}(x)=\exp(-\tfrac{1}{2\sigma^{2}}x^{2})\mathds{1}_{[-2,2]}(x/\sigma).$ For our first experiment, we interpolate the measures $d\mu_{1}(x)=\gamma_{\sigma}\left(x-\tfrac{1}{2}\right)\,dx,$ $d\mu_{2}(x)=\tfrac{1}{2}\left(\gamma_{\sigma}(x-0.3)+\gamma_{\sigma}(x-0.7)\right)\,dx,$ $d\mu_{3}(x)=\left(\gamma_{\sigma}(x-0.3)+\gamma_{\sigma}(x-0.7)\right)\,dx,$ $d\mu_{4}(x)=\tfrac{1}{2}\mathds{1}_{[0,1]}(x)\,dx,$ at three sets of knot times $(t_{1},\dots,t_{4})$: $(0,1,2,10)$, $(0,\tfrac{10}{3},\tfrac{20}{3},10)$, and $(0,8,9,10)$; the results are shown in Figure 1.

### 5.2 Space discretization

Figure 2: Subsampled (left) and gridded (right) interpolating measures. Measures $\mu_{1},\dots,\mu_{4}$ are shown in blue, orange, yellow, and purple respectively (top). Interpolated paths are computed for the points shown in red (middle) as a function of time varying by color. The width of select curves is changed according to the value of the measure (bottom). $\sigma=0.01$

We give details for the interpolation of measures in two dimensions in Figure 2. For this example, suppose we want to interpolate a set of particles. Since we need to represent them as absolutely continuous measures to be able to obtain a map, we convolve with a kernel and discretize the domain as in the previous section. As noted by Lavenant et al. (2021), the cost of gridding grows exponentially with the dimension, which makes it infeasible for high-dimensional applications. We show here the result of fine gridding and compare it to sampling uniformly from the support. Suppose the resulting interpolating measures are as follows. Let $\gamma_{\sigma}(x)=\exp(-\tfrac{1}{2\sigma^{2}}\|x\|^{2})\mathds{1}_{B_{2}(0)}(x).$ We interpolate $d\mu_{1}(x)=\tfrac{3}{4}\gamma_{2\sigma}(x)\,dx,$ $d\mu_{2}(x)=0.65\left(\gamma_{\sigma}(x-(\tfrac{\sqrt{2}}{20},\tfrac{\sqrt{2}}{20}))+\gamma_{\sigma}(x-(0,-\tfrac{\sqrt{2}}{20}))+\gamma_{\sigma}(x-(\tfrac{\sqrt{2}}{20},-\tfrac{\sqrt{2}}{20}))\right)\,dx,$ $d\mu_{3}(x)=\tfrac{3}{4}\left(\gamma_{\sigma}(x-(\tfrac{3}{20},\tfrac{3}{20}))+\gamma_{\sigma}(x-(\tfrac{3}{20},-\tfrac{3}{20}))\right)\,dx,$ $d\mu_{4}(x)=\gamma_{2\sigma}(x-(\tfrac{1}{5},0))\,dx.$

## Acknowledgements

We thank Philippe Rigollet and Sinho Chewi for helpful comments and suggestions on the manuscript.

## Appendix A Derivation of the covariant derivative

See 2

###### Proof.

Let $\mathbf{u}_{t}^{i}=(u_{t}^{i},\beta_{t}^{i})$, $i=1,2$, be two tangent fields along a curve $\mu_{t}$ with derivative $(v_{t},\alpha_{t})$.
Metric compatibility then reads $\displaystyle\frac{d}{dt}\langle\mathbf{u}_{t}^{1},\mathbf{u}_{t}^{2}\rangle_{\mu_{t}}$ $\displaystyle=\int\langle\partial_{t}u_{t}^{1},u_{t}^{2}\rangle+\langle u_{t}^{1},\partial_{t}u_{t}^{2}\rangle+4\partial_{t}\beta_{t}^{1}\,\beta_{t}^{2}+4\beta_{t}^{1}\,\partial_{t}\beta_{t}^{2}\,d\mu_{t}$ $\displaystyle\quad+\int\langle u_{t}^{1},u_{t}^{2}\rangle+4\beta_{t}^{1}\beta_{t}^{2}\,d(\partial_{t}\mu_{t})$ $\displaystyle=T_{1}+T_{2}$ By the continuity equation and the dual definition of $\operatorname{div}v_{t}\mu_{t}$, the second integral becomes $\displaystyle T_{2}$ $\displaystyle=\int\langle\nabla u_{t}^{1}\cdot v_{t},u_{t}^{2}\rangle+\langle u_{t}^{1},\nabla u_{t}^{2}\cdot v_{t}\rangle+4\alpha_{t}\langle u_{t}^{1},u_{t}^{2}\rangle\,d\mu_{t}$ $\displaystyle\quad+\int 4\beta_{t}^{2}\langle\nabla\beta_{t}^{1},v_{t}\rangle+4\beta_{t}^{1}\langle\nabla\beta_{t}^{2},v_{t}\rangle+16\alpha_{t}\beta_{t}^{1}\beta_{t}^{2}\,d\mu_{t}$ Now, we need this to be equal to $\langle\mathbf{u}_{t}^{1},\frac{\mathbf{D}}{dt}\mathbf{u}_{t}^{2}\rangle+\langle\frac{\mathbf{D}}{dt}\mathbf{u}_{t}^{1},\mathbf{u}_{t}^{2}\rangle$. Some terms are uniquely attributable to one or the other, such as $\int\langle\partial_{t}u_{t}^{1},u_{t}^{2}\rangle$, but some are not, such as $4\beta_{t}^{2}\langle\nabla\beta_{t}^{1},v_{t}\rangle$. This ambiguity can arise from either of the two terms: since $\nabla\beta_{t}^{i}=u_{t}^{i}$, we have $\left\langle(0,\langle\nabla\beta_{t}^{1},v_{t}\rangle),\mathbf{u}_{t}^{2}\right\rangle_{\mu_{t}}=\left\langle\mathbf{u}_{t}^{1},(4\beta_{t}^{2}v_{t},0)\right\rangle_{\mu_{t}}.$ Indeed, by splitting these terms and gathering the others, the possible covariant derivatives that satisfy metric compatibility are of the form $\frac{\mathbf{D}}{dt}\mathbf{u}_{t}=\mathcal{P}_{\mu_{t}}\begin{pmatrix}\partial_{t}u_{t}+\nabla u_{t}\cdot v_{t}+2\alpha_{t}u_{t}+4p\beta_{t}v_{t}\\\ \partial_{t}\beta_{t}+(1-p)\langle\nabla\beta_{t},v_{t}\rangle+2\alpha_{t}\beta_{t}\end{pmatrix}$ for real $p$. Let us check the torsion-free identity with $p=1/2$. In this case, if $\mathbf{u}_{t}=(\nabla\varphi,\varphi)$ is constant in time, then $\frac{\mathbf{D}}{dt}\begin{pmatrix}\nabla\varphi\\\ \varphi\end{pmatrix}=\mathcal{P}_{\mu_{t}}\begin{pmatrix}\nabla^{2}\varphi\cdot v_{t}+2\alpha_{t}\nabla\varphi+2\varphi v_{t}\\\ \frac{1}{2}\langle\nabla\varphi,v_{t}\rangle+2\alpha_{t}\varphi\end{pmatrix}$ Now, with the setup as in the $W_{2}$ case, defining $F\colon\mu\mapsto\int\varphi\,d\mu$ and writing $\bm{\varphi}=(\nabla\varphi,\varphi)$, we have from the continuity equation $\partial_{t}F[\mu_{t}^{i}]=\int\langle\bm{\varphi},\mathbf{v}_{t}^{i}\rangle\,d\mu_{t}^{i}$ where $\mathbf{v}_{t}^{i}$ is the derivative of $\mu_{t}^{i}$.
As above, we have $\displaystyle\mathbf{u}_{0}^{1}(\mathbf{u}^{2}(F))[\mu]$ $\displaystyle=\frac{d}{dt}\langle\bm{\varphi},\mathbf{u}_{t}^{2}\rangle_{\mu_{t}^{2}}\big{|}_{t=0}$ $\displaystyle=\left\langle\frac{\mathbf{D}}{dt}\bm{\varphi},\mathbf{u}_{t}^{2}\right\rangle_{\mu_{t}^{2}}+\left\langle\bm{\varphi},\nabla_{\mathbf{u}_{0}^{1}}\mathbf{u}_{t}^{2}\right\rangle_{\mu_{t}^{2}}\bigg{|}_{t=0}$ Recalling that at $t=0$ we have $\mathbf{u}_{0}^{2}=\mathbf{v}_{0}^{1}=(v_{0}^{1},\alpha_{0}^{1})$, the first term becomes (we may ignore the projection, since $\mathbf{u}_{t}^{2}$ is already tangent) $Q_{1}=\left\langle\begin{pmatrix}\nabla^{2}\varphi\cdot v_{0}^{2}+2\alpha_{0}^{2}\nabla\varphi+2\varphi v_{0}^{2}\\\ \frac{1}{2}\langle\nabla\varphi,v_{0}^{2}\rangle+2\alpha_{0}^{2}\varphi\end{pmatrix},\begin{pmatrix}v_{0}^{1}\\\ \alpha_{0}^{1}\end{pmatrix}\right\rangle_{\mu}$ while the corresponding term from $\mathbf{u}_{0}^{2}(\mathbf{u}^{1}(F))[\mu]$ is $Q_{2}=\left\langle\begin{pmatrix}\nabla^{2}\varphi\cdot v_{0}^{1}+2\alpha_{0}^{1}\nabla\varphi+2\varphi v_{0}^{1}\\\ \frac{1}{2}\langle\nabla\varphi,v_{0}^{1}\rangle+2\alpha_{0}^{1}\varphi\end{pmatrix},\begin{pmatrix}v_{0}^{2}\\\ \alpha_{0}^{2}\end{pmatrix}\right\rangle_{\mu}$ and we must check that these agree. The “top-left”, “top-right”, and “bottom-right” terms are identical for both. The top-middle term of the first is equal to the bottom-left term of the second, and vice versa. Thus we have shown that the covariant derivative is given by $\frac{\mathbf{D}}{dt}\mathbf{u}_{t}=\mathcal{P}_{\mu_{t}}\begin{pmatrix}\partial_{t}u_{t}+\nabla u_{t}\cdot v_{t}+2\alpha_{t}u_{t}+2\beta_{t}v_{t}\\\ \partial_{t}\beta_{t}+\frac{1}{2}\langle\nabla\beta_{t},v_{t}\rangle+2\alpha_{t}\beta_{t}\end{pmatrix}$ and in particular, $\frac{\mathbf{D}^{2}}{dt^{2}}\mu_{t}=\begin{pmatrix}\partial_{t}v_{t}+\nabla v_{t}\cdot v_{t}+4\alpha_{t}v_{t}\\\ \partial_{t}\alpha_{t}+\frac{1}{2}|\nabla\alpha_{t}|^{2}+2\alpha_{t}^{2}\end{pmatrix}$ (20) This quantity is tangent, so no projection is necessary. ∎

## Appendix B Proof of equivalence in curvature

See 6

###### Proof.

Making the definition explicit, we require of our measure $P$ that the cost of (12) be equal to $\int_{0}^{1}\int\left|\partial_{t}v_{t}+\nabla v_{t}\cdot v_{t}+4\alpha_{t}v_{t}\right|^{2}+4\left(\partial_{t}\alpha_{t}+\tfrac{1}{2}|\nabla\alpha_{t}|^{2}+2\alpha_{t}^{2}\right)^{2}\,d\mu_{t}\,dt$ (21) Let $\lambda_{0}\in\mathcal{P}(\mathfrak{C})$ be a lift of $\mu_{0}$. By Proposition 2, the measure $\lambda_{t}=(X_{t},R_{t})_{\\#}\lambda_{0}$ is a lift of $\mu_{t}$. In order to compute the covariant derivative of a curve on the cone, we compute the Christoffel symbols.
These are given by the formulas $\nabla_{\partial x_{i}}\partial x_{j}=\Gamma_{ij}^{k}\partial_{k},\quad\Gamma_{ij}^{k}=\frac{1}{2}g^{kl}\left(\frac{\partial}{\partial_{j}}g_{il}+\frac{\partial}{\partial_{i}}g_{jl}-\frac{\partial}{\partial_{l}}g_{ij}\right).$ Let $\partial x_{1},\dots,\partial x_{n},\partial_{r}$ be the coordinate basis of the tangent space on the cone, hence $(g_{ij})_{ij}=\begin{pmatrix}r^{2}I_{n}&0\\\ 0&1\end{pmatrix},\quad\frac{\partial}{\partial_{k}}g_{ij}=\begin{cases}2r&i=j\neq r,~{}k=r,\\\ 0,&\text{else}.\end{cases}$ Since $(g^{ij})$ and $\partial_{r}g_{ij}$ are only defined along the diagonal and $\partial_{l}g_{ij}=0$ for $l\neq r$, the only terms that do not vanish are $\Gamma_{rj}^{k}=r^{-1}\delta_{jk},\quad\Gamma_{ij}^{r}=-r\delta_{ij}$ thus the Levi-Civita connection on the cone is given by $\nabla_{\partial_{r}}\partial_{r}=0,\quad\quad\nabla_{\partial_{r}}X=r^{-1}X,\quad\quad\nabla_{X_{1}}X_{2}=-r\langle X_{1},X_{2}\rangle\partial_{r}.$ From this, the covariant derivative of a curve $z_{t}=(x_{t},r_{t})$ on the cone is given by $\ddot{z}_{t}=\nabla_{\dot{z}_{t}}\dot{z}_{t}=\left(\ddot{x}_{t}+2\frac{\dot{r}_{t}}{r_{t}}\dot{x}_{t},~{}\ddot{r}_{t}-r_{t}|\dot{x}_{t}|^{2}\right)$ thus from the Riemannian metric (11) $|\ddot{z}_{t}|^{2}=\left|r_{t}\ddot{x}_{t}+2\dot{r}_{t}\dot{x}_{t}\right|^{2}+\left|\ddot{r}_{t}-r_{t}|\dot{x}_{t}|^{2}\right|^{2}$ Let $\lambda_{0}$ be any lift of $\mu_{0}$ and define $P$ to place mass $\lambda_{0}(z_{0})$ on the path $z_{t}=(X_{t}(x_{0}),R_{t}(r_{0}))$, so that $P$ is supported on the flow map curves in $\Omega_{\mathfrak{C}}$. By applying the total derivative to the defining equations of the flow maps, these curves satisfy $\displaystyle\dot{x}_{t}$ $\displaystyle=v_{t}(x_{t})$ $\displaystyle\ddot{x}_{t}$ $\displaystyle=\partial_{t}v_{t}+\nabla v_{t}\cdot v_{t}$ $\displaystyle\dot{r}_{t}$ $\displaystyle=2r_{t}\alpha_{t}(x_{t})$ $\displaystyle\ddot{r}_{t}$ $\displaystyle=2r_{t}\left(\partial_{t}\alpha_{t}+\nabla\alpha_{t}\cdot v_{t}+2\alpha_{t}^{2}\right)$ Now, we have $\int_{\Omega_{\mathfrak{C}}}\int_{0}^{1}|\ddot{z}_{t}|^{2}\,dt\,dP(z)=\int_{\mathfrak{C}}\int_{0}^{1}|\ddot{z}_{t}|^{2}\,dt\,d\lambda_{0}(z_{0})$ Expanding out, this is $\int_{\mathfrak{C}}\int_{0}^{1}\left|r_{t}\ddot{x}_{t}+2\dot{r}_{t}\dot{x}_{t}\right|^{2}+\left|\ddot{r}_{t}-r_{t}|\dot{x}_{t}|^{2}\right|^{2}\,dt\,d\lambda_{0}(z_{0})$ Let us deal with each term separately so the expressions do not become unwieldly. For the first $\displaystyle\int_{\mathfrak{C}}\int_{0}^{1}\left|r_{t}\ddot{x}_{t}+2\dot{r}_{t}\dot{x}_{t}\right|^{2}\,dt\,d\lambda_{0}(z_{0})$ $\displaystyle=$ $\displaystyle\int_{\mathfrak{C}}\int_{0}^{1}r_{t}^{2}\left|\partial_{t}v_{t}+\nabla v_{t}\cdot v_{t}+4\alpha_{t}v_{t}\right|^{2}(x_{t})\,dt\,d\lambda_{0}(z_{0})$ $\displaystyle=$ $\displaystyle\int_{0}^{1}\int_{\mathfrak{C}}\left|\partial_{t}v_{t}+\nabla v_{t}\cdot v_{t}+4\alpha_{t}v_{t}\right|^{2}(x_{t})\,d(r_{t}^{2}\cdot\lambda_{0})(z_{0})\,dt$ $\displaystyle=$ $\displaystyle\int_{0}^{1}\int\left|\partial_{t}v_{t}+\nabla v_{t}\cdot v_{t}+4\alpha_{t}v_{t}\right|^{2}(x)\,d\mu_{t}(x)\,dt$ We have used that $(X_{t},R_{t})_{\\#}\lambda_{0}=\lambda_{t}$ and $\mathfrak{P}\lambda_{t}=\mu_{t}$. 
The second term is dealt with in exactly the same way, $\displaystyle\int_{\mathfrak{C}}\int_{0}^{1}\left|\ddot{r}_{t}-r_{t}|\dot{x}_{t}|^{2}\right|^{2}\,dt\,d\lambda_{0}(z_{0})$ $\displaystyle=$ $\displaystyle\int_{\mathfrak{C}}\int_{0}^{1}4r_{t}^{2}\left|\partial_{t}\alpha_{t}+\tfrac{1}{2}\nabla\alpha_{t}\cdot v_{t}+2\alpha_{t}^{2}\right|^{2}(x_{t})\,dt\,d\lambda_{0}(z_{0})$ $\displaystyle=$ $\displaystyle\int_{0}^{1}\int_{\mathfrak{C}}4\left|\partial_{t}\alpha_{t}+\tfrac{1}{2}\nabla\alpha_{t}\cdot v_{t}+2\alpha_{t}^{2}\right|^{2}(x_{t})\,d(r_{t}^{2}\cdot\lambda_{0})(z_{0})\,dt$ $\displaystyle=$ $\displaystyle\int_{0}^{1}\int 4\left|\partial_{t}\alpha_{t}+\tfrac{1}{2}\nabla\alpha_{t}\cdot v_{t}+2\alpha_{t}^{2}\right|^{2}(x)\,d\mu_{t}(x)\,dt.$ ∎

## Appendix C Cone De Casteljau's algorithm

See 3

###### Proof.

Let $\theta_{i}:=|x_{i+1}-x_{i}|$. Since the expressions for $w$, $u$ and $p$ are all the same up to the choice of interpolating points, we compute the derivatives of $w$, $\rho$, $r$ and $\theta$ with the subscripts removed. To compute each derivative, we just replace with the corresponding subscript. For $\theta=|x_{1}-x_{0}|$, $\displaystyle\theta\dot{\theta}$ $\displaystyle=\langle x_{1}-x_{0},\dot{x}_{1}-\dot{x}_{0}\rangle.$ For a point $w_{0}=(1-\rho)x_{0}+\rho x_{1}$, $\displaystyle\dot{w}_{0}$ $\displaystyle=(x_{1}-x_{0})\dot{\rho}+(1-\rho)\dot{x}_{0}+\rho\dot{x}_{1}.$ (22) For the mass $r^{2}=r_{0}^{2}(1-t)^{2}+r_{1}^{2}t^{2}+2r_{0}r_{1}t(1-t)\cos(\theta)$, $\displaystyle r\dot{r}=~{}$ $\displaystyle r_{0}\dot{r}_{0}(1-t)^{2}-r_{0}^{2}(1-t)+r_{1}\dot{r}_{1}t^{2}+r_{1}^{2}t$ (23) $\displaystyle+(\dot{r}_{1}r_{0}+\dot{r}_{0}r_{1})t(1-t)\cos(\theta)$ $\displaystyle+r_{0}r_{1}(1-2t)\cos(\theta)-r_{0}r_{1}t(1-t)\sin(\theta)\dot{\theta}.$ For the local time $\rho=\frac{1}{\theta}\arccos\left(\frac{r_{0}(1-t)+r_{1}t\cos(\theta)}{r}\right)$, $\displaystyle\dot{\rho}\theta+\rho\dot{\theta}=~{}$ $\displaystyle\frac{1}{r_{w_{0}}^{2}}\big{(}(1-t)t\sin(\theta)(\dot{r}_{1}r_{0}-\dot{r}_{0}r_{1})$ (24) $\displaystyle+(1-t)t\cos(\theta)r_{0}r_{1}\dot{\theta}$ $\displaystyle+r_{0}r_{1}\sin(\theta)+r_{1}^{2}\dot{\theta}t^{2}\big{)}.$ For the first interpolation points $w_{0},w_{1},w_{2}$ we have in particular that $\dot{x}_{i}=\dot{r}_{x_{i}}=0$, hence $\displaystyle\dot{\theta}_{w_{i}}$ $\displaystyle=0,$ $\displaystyle\dot{w}_{i}$ $\displaystyle=(x_{i+1}-x_{i})\dot{\rho}_{w_{i}},$ $\displaystyle\dot{r}_{w_{i}}$ $\displaystyle=\frac{1}{r_{w_{i}}}\left(-r_{x_{i}}^{2}(1-t)+r_{x_{i+1}}^{2}t+r_{x_{i}}r_{x_{i+1}}(1-2t)\cos(\theta_{w_{i}})\right),$ $\displaystyle\dot{\rho}_{w_{i}}$ $\displaystyle=\frac{r_{x_{i}}r_{x_{i+1}}}{r_{w_{i}}^{2}\theta_{w_{i}}}\sin(\theta_{w_{i}})-\rho_{w_{i}}\frac{\dot{\theta}_{w_{i}}}{\theta_{w_{i}}}.$ Thus at $t=0$, $\displaystyle\dot{\theta}_{w_{i}}(0)$ $\displaystyle=0$ (25) $\displaystyle\dot{\rho}_{w_{i}}(0)$ $\displaystyle=\frac{r_{x_{i+1}}}{r_{x_{i}}}\frac{\sin(\theta_{i})}{\theta_{i}}$ (26) $\displaystyle\dot{r}_{w_{i}}(0)$ $\displaystyle=r_{x_{i+1}}\cos(\theta_{i})-r_{x_{i}}$ (27) $\displaystyle\dot{w}_{i}(0)$ $\displaystyle=(x_{i+1}-x_{i})\frac{r_{x_{i+1}}}{r_{x_{i}}}\frac{\sin(\theta_{i})}{\theta_{i}}$ (28) and at $t=1$, $\displaystyle\dot{\theta}_{w_{i}}(1)$ $\displaystyle=0$ (29) $\displaystyle\dot{\rho}_{w_{i}}(1)$ $\displaystyle=\frac{r_{x_{i}}}{r_{x_{i+1}}}\frac{\sin(\theta_{i})}{\theta_{i}}$ (30) $\displaystyle\dot{r}_{w_{i}}(1)$ $\displaystyle=r_{x_{i+1}}-r_{x_{i}}\cos(\theta_{i})$ (31) $\displaystyle\dot{w}_{i}(1)$
$\displaystyle=(x_{i+1}-x_{i})\frac{r_{x_{i}}}{r_{x_{i+1}}}\frac{\sin(\theta_{i})}{\theta_{i}}.$ (32) We substitute these expression back into the formulas for the derivatives of position (22), mass (23) and local time (24) for $u_{j}$ and $p$, for $j=0,1$. Notice that for both $t=0$ and $t=1$ the depdence on $\dot{\theta}$ vanishes. $\displaystyle\dot{\rho}_{u_{j}}(0)$ $\displaystyle=\frac{r_{x_{j+1}}}{r_{x_{j}}}\frac{\sin(\theta_{j})}{\theta_{j}}$ (33) $\displaystyle\dot{r}_{u_{j}}(0)$ $\displaystyle=2\left(r_{x_{j+1}}\cos(\theta_{j})-r_{j}\right)$ (34) $\displaystyle\dot{u}_{j}(0)$ $\displaystyle=2(x_{j+1}-x_{j})\frac{r_{x_{j+1}}}{r_{x_{j}}}\frac{\sin(\theta_{j})}{\theta_{j}}$ (35) $\displaystyle\dot{\rho}_{u_{j}}(1)$ $\displaystyle=\frac{r_{x_{j}}}{r_{x_{j+1}}}\frac{\sin(\theta_{{j+1}})}{\theta_{j+1}}$ (36) $\displaystyle\dot{r}_{u_{j}}(1)$ $\displaystyle=2\left(r_{x_{j+1}}-r_{x_{j}}\cos(\theta_{j+1})\right)$ (37) $\displaystyle\dot{u}_{j}(1)$ $\displaystyle=2(x_{j+1}-x_{j})\frac{r_{x_{j+1}}}{r_{x_{j}}}\frac{\sin(\theta_{j+1})}{\theta_{j+1}}$ (38) Finally for $p$, $\displaystyle\dot{\rho}_{p}(0)$ $\displaystyle=\frac{r_{x_{1}}}{r_{x_{0}}}\frac{\sin(\theta_{0})}{\theta_{0}},$ (39) $\displaystyle\dot{r}_{p}(0)$ $\displaystyle=3\left(r_{x_{1}}\cos(\theta_{0})-r_{x_{0}}\right),$ (40) $\displaystyle\dot{p}(0)$ $\displaystyle=3(x_{1}-x_{0})\frac{r_{x_{1}}}{r_{x_{0}}}\frac{\sin(\theta_{0})}{\theta_{0}},$ (41) $\displaystyle\dot{\rho}_{p}(1)$ $\displaystyle=\frac{r_{x_{2}}}{r_{x_{3}}}\frac{\sin(\theta_{2})}{\theta_{2}},$ (42) $\displaystyle\dot{r}_{p}(1)$ $\displaystyle=3\left(r_{x_{3}}-r_{x_{2}}\cos(\theta_{2})\right),$ (43) $\displaystyle\dot{p}(1)$ $\displaystyle=3(x_{3}-x_{2})\frac{r_{x_{2}}}{r_{x_{3}}}\frac{\sin(\theta_{2})}{\theta_{2}}.$ (44) ∎ ## References * Absil et al., (2016) Absil, P.-A., Gousenbourger, P.-Y., Striewski, P., and Wirth, B. (2016). Differentiable piecewise-bézier surfaces on riemannian manifolds. SIAM Journal on Imaging Sciences, 9(4):1788–1828. * Benamou and Brenier, (2000) Benamou, J.-D. and Brenier, Y. (2000). A computational fluid mechanics solution to the monge-kantorovich mass transfer problem. Numerische Mathematik, 84(3):375–393. * Benamou et al., (2019) Benamou, J.-D., Gallouët, T. O., and Vialard, F.-X. (2019). Second-order models for optimal transport and cubic splines on the wasserstein space. Foundations of Computational Mathematics, 19(5):1113–1143. * Chen et al., (2018) Chen, Y., Conforti, G., and Georgiou, T. T. (2018). Measure-valued spline curves: An optimal transport viewpoint. SIAM Journal on Mathematical Analysis, 50(6):5947–5968. * (5) Chewi, S., Clancy, J., Gouic, T. L., Rigollet, P., Stepaniants, G., and Stromme, A. J. (2020a). Fast and smooth interpolation on wasserstein space. arXiv preprint arXiv:2010.12101. * (6) Chewi, S., Maunu, T., Rigollet, P., and Stromme, A. J. (2020b). Gradient descent algorithms for bures-wasserstein barycenters. In Conference on Learning Theory, pages 1276–1304. PMLR. * Chizat, (2017) Chizat, L. (2017). Unbalanced optimal transport: Models, numerical methods, applications. PhD thesis, PSL Research University. * Chizat et al., (2018) Chizat, L., Peyré, G., Schmitzer, B., and Vialard, F.-X. (2018). Scaling algorithms for unbalanced optimal transport problems. Mathematics of Computation, 87(314):2563–2609. * Gigli, (2012) Gigli, N. (2012). Second Order Analysis on $(\mathscr{P}_{2}(M),W_{2})$. American Mathematical Soc. * Gousenbourger et al., (2019) Gousenbourger, P.-Y., Massart, E., and Absil, P.-A. (2019). 
Data fitting on manifolds with composite bézier-like curves and blended cubic splines. Journal of Mathematical Imaging and Vision, 61(5):645–671. * Kondratyev et al., (2016) Kondratyev, S., Monsaingeon, L., Vorotnikov, D., et al. (2016). A new optimal transport distance on the space of finite radon measures. Advances in Differential Equations, 21(11/12):1117–1164. * Lavenant et al., (2021) Lavenant, H., Zhang, S., Kim, Y.-H., and Schiebinger, G. (2021). Towards a mathematical theory of trajectory inference. arXiv preprint arXiv:2102.09204. * Liero et al., (2016) Liero, M., Mielke, A., and Savaré, G. (2016). Optimal transport in competition with reaction: The hellinger–kantorovich distance and geodesic curves. SIAM Journal on Mathematical Analysis, 48(4):2869–2911. * Liero et al., (2018) Liero, M., Mielke, A., and Savaré, G. (2018). Optimal entropy-transport problems and a new hellinger–kantorovich distance between positive measures. Inventiones mathematicae, 211(3):969–1117. * Lisini, (2007) Lisini, S. (2007). Characterization of absolutely continuous curves in wasserstein spaces. Calculus of variations and partial differential equations, 28(1):85–120. * Maniglia, (2007) Maniglia, S. (2007). Probabilistic representation and uniqueness results for measure-valued solutions of transport equations. Journal de mathématiques pures et appliquées, 87(6):601–626. * Otto, (2001) Otto, F. (2001). The geometry of dissipative evolution equations: the porous medium equation. * Peyré et al., (2019) Peyré, G., Cuturi, M., et al. (2019). Computational optimal transport: With applications to data science. Foundations and Trends® in Machine Learning, 11(5-6):355–607. * Pooladian and Niles-Weed, (2021) Pooladian, A.-A. and Niles-Weed, J. (2021). Entropic estimation of optimal transport maps. arXiv preprint arXiv:2109.12004. * Schiebinger et al., (2019) Schiebinger, G., Shu, J., Tabaka, M., Cleary, B., Subramanian, V., Solomon, A., Gould, J., Liu, S., Lin, S., Berube, P., et al. (2019). Optimal-transport analysis of single-cell gene expression identifies developmental trajectories in reprogramming. Cell, 176(4):928–943.
# Stochastic linear-quadratic control with a jump and regime switching on a random horizon

Ying Hu Univ Rennes, CNRS, IRMAR-UMR 6625, F-35000 Rennes, France. Partially supported by Lebesgue Center of Mathematics “Investissements d’avenir” program ANR-11-LABX-0020-01, ANR CAESARS (No. 15-CE05-0024) and ANR MFG (No. 16-CE40-0015-01). Email: <EMAIL_ADDRESS>. Xiaomin Shi School of Mathematics and Quantitative Economics, Shandong University of Finance and Economics, Jinan 250100, China. Partially supported by NSFC (No. 11801315), NSF of Shandong Province (No. ZR2018QA001). Email: <EMAIL_ADDRESS>. Zuo Quan Xu Department of Applied Mathematics, The Hong Kong Polytechnic University, Kowloon, Hong Kong. Partially supported by NSFC (No. 11971409), Hong Kong RGC (GRF No. 15202421), The PolyU-SDU Joint Research Center on Financial Mathematics and the CAS AMSS-PolyU Joint Laboratory of Applied Mathematics, The Hong Kong Polytechnic University. Email: <EMAIL_ADDRESS>.

Abstract. In this paper, we study a stochastic linear-quadratic control problem with random coefficients and regime switching on a horizon $[0,T\wedge\tau]$, where $\tau$ is a given random jump time for the underlying state process and $T$ is a constant. We obtain an explicit optimal state feedback control and an explicit optimal cost value by solving a system of stochastic Riccati equations (SREs) with jumps on $[0,T\wedge\tau]$. By the decomposition approach stemming from filtration enlargement theory, we express the solution of the system of SREs with jumps in terms of another system of SREs involving only the Brownian filtration on the deterministic horizon $[0,T]$. Solving the latter system is the key theoretical contribution of this paper, and we establish its solvability in three different cases, one of which seems to be new in the literature. These results are then applied to study a mean-variance hedging problem with random parameters that depend on both the Brownian motion and the Markov chain. The optimal portfolio and optimal value are presented in closed form with the aid of a system of linear backward stochastic differential equations with jumps and unbounded coefficients, in addition to the SREs with jumps.

Key words. Stochastic linear-quadratic control, regime switching, random horizon, stochastic Riccati equations with jumps, mean-variance hedging

Mathematics Subject Classification (2020): 93E20, 60H30, 91G10

## 1 Introduction

Stochastic linear-quadratic (LQ, for short) optimal control has seen important developments since the pioneering works of Wonham [22] and Bismut [3], who studied stochastic LQ problems with deterministic and random coefficients, respectively. Kohlmann and Zhou [15] established the relationship between stochastic LQ problems and backward stochastic differential equations (BSDEs). Chen, Li and Zhou [4] studied the indefinite stochastic LQ problem, which differs significantly from its deterministic counterpart. Meanwhile, stochastic LQ control has wide applications in many fields, such as risk management, optimal investment and mean-variance portfolio selection; see, e.g., [9, 14, 16, 17, 18, 25, 26].
All the aforementioned literature focused on a deterministic time horizon. However, in reality the controller may only have access to the model before the realization of some random event, such as the default time in credit risk theory or the death time in actuarial science. Therefore, it is of great theoretical importance to study control problems with a random time of eventual exit.
Yu [23] and Lv, Wu and Yu [19] studied continuous-time mean-variance portfolio selection problems with a random horizon in complete and incomplete markets, respectively, where the state process is continuous. They assumed that the conditional distribution of the eventual exit follows an Itô process driven by the same Brownian motion as the risky asset dynamics. In practice, there is another situation in which the time of eventual exit $\tau$ arrives by surprise, i.e., $\tau$ is a totally inaccessible random time for the reference filtration; meanwhile, the state process may have a jump at the time $\tau$. The theory of enlargement of filtration happens to be a powerful tool for modeling such random times. Please refer to Bielecki and Rutkowski [2] for a systematic account of this subject. Along this line, Pham [20] investigated a general stochastic control problem under a progressive enlargement of filtration and proved a decomposition of the original stochastic control problem under the global filtration into classical stochastic control problems under the reference filtration. Jeanblanc, Mastrolia, Possamai and Reveillac [10] studied an exponential utility maximization problem with a bounded random horizon. Neither [20] nor [10] considered jumps in the state (or wealth) processes. Kharroubi, Lim and Ngoupeyou [12] studied a mean-variance hedging problem with jumps on a random horizon. The main approach used in [10] and [12] is to reduce their problems to the analysis of the solvability of some BSDEs with jumps.
We generalize Kharroubi, Lim and Ngoupeyou's [12] model to a general stochastic linear-quadratic control problem with regime switching. All the coefficients and the control variable (portfolio) in [12] are one-dimensional. By contrast, both the control variable and the Brownian motion in our model are multi-dimensional. Moreover, we will solve the stochastic LQ problems and the associated stochastic Riccati equations (SREs) with jumps in three different cases: one standard case and two singular cases, one of which seems to be new in the literature, whereas only one singular case was considered in [12]. On the other hand, a Markov chain is usually adopted to reflect the model status or the random market environment. For instance, Zhou and Yin [26] studied a mean-variance portfolio selection problem, where the coefficients are assumed to be deterministic functions of time for each given regime. To better reflect the randomness of the market environment, all the coefficients in our model are assumed to be stochastic processes for each given regime.
As is well known, SREs play a crucial role in representing the optimal controls for stochastic LQ control problems. It is Tang [21] who established the existence and uniqueness results for SREs with uniformly definite coefficients. So far there are only partial results on the solvability of indefinite SREs; see, e.g., [8]. Zhang, Dong and Meng [24] made great progress in solving stochastic LQ control problems and the related SREs with jumps, with a uniformly definite control weight, by an inverse flow technique. The main theoretical contribution of this paper is the solvability of the associated systems of SREs with jumps. Due to the presence of regime switching, the SREs with jumps in our model form a system of BSDEs coupled through the generator of the Markov chain. Different from the inverse flow method used in [24], we will establish the solvability of our SREs with jumps from the point of view of BSDE theory directly.
Inspired by [12], we construct the solution to the SREs on random horizon with jumps from another SREs on deterministic finite horizon without jumps. Using the model of our previous work [6], we obtain the solvability of the latter SREs. Different with [6], new terms emerge in the SREs in this paper so that we have to establish nonnegative or uniformly positive lower bounds for their solution. The rest part of this paper is organized as follows. Section 2 presents the general framework and assumptions of stochastic LQ problem on random horizon. In Section 3, we establish the solvability of the system of SREs with jumps. The optimal feedback control and optimal value are presented in Section 4 with the aid of SREs with jumps. In Section 5, the general results are applied to solve a mean-variance hedging problem. ## 2 Problem formulation Let $(\Omega,\mathcal{H},\mathbb{P})$ be a fixed complete probability space on which is defined a standard $n$-dimensional Brownian motion $\\{W_{t}\\}_{t\geq 0}$. Define $\mathbb{F}=(\mathcal{F}_{t})_{t\geq 0}$, $\mathcal{F}_{t}=\sigma\\{W_{s}:0\leq s\leq t\\}\bigvee\mathcal{N}$, where $\mathcal{N}$ is the totality of all the $\mathbb{P}$-null sets of $\mathcal{H}$. In this probability space, a random time $\tau$ is given, which represents, for example, the default time in credit or counterparty risk models, or a death time in actuarial models. The random time $\tau$ is not assumed to be an $\mathbb{F}$-stopping time. We therefore use in the sequel the standard approach of progressive enlargement of filtrations. Let $\mathbb{G}$ be the smallest right continuous extension of $\mathbb{F}$ that turns $\tau$ into a $\mathbb{G}$-stopping time. More precisely, $\mathbb{G}:=(\mathcal{G}_{t})_{t\geq 0}$ is defined by $\mathcal{G}_{t}:=\bigcap_{\varepsilon>0}\widetilde{\mathcal{G}}_{t+\varepsilon},\ \mbox{ for all $t\geq 0$},$ where $\widetilde{\mathcal{G}}_{s}:=\mathcal{F}_{s}\vee\sigma(\mathbf{1}_{\tau\leq u};0\leq u\leq s)$, for all $s\geq 0$. On $(\Omega,\mathcal{H},\mathbb{P})$, there is a continuous-time stationary Markov chain $\\{\alpha_{t}\\}_{t\geq 0}$ valued in a finite state space $\mathcal{M}=\\{1,2,...,\ell\\}$ such that $\\{\alpha_{t}\\}_{t\geq 0}$ is independent of $\\{W_{t}\\}_{t\geq 0}$ and $\tau$. The Markov chain $\\{\alpha_{t}\\}_{t\geq 0}$ has a generator $Q=(q_{ij})_{\ell\times\ell}$ with $q_{ij}\geq 0$ for $i\neq j$ and $\sum_{j=1}^{\ell}q_{ij}=0$ for every $i,j\in\mathcal{M}$. Denote by $\mathbb{F}^{\alpha}=(\mathcal{F}_{t}^{\alpha})_{t\geq 0}$ the natural filtration generated by $\alpha$, and $\mathbb{H}:=(\mathcal{H}_{t})_{t\geq 0}$, where $\mathcal{H}_{t}:=\mathcal{G}_{t}\vee\mathcal{F}_{t}^{\alpha}$. We denote by $\mathcal{P}(\mathbb{F})$ the $\sigma$-algebra of $\mathbb{F}$-predictable subsets of $\Omega\times\mathbb{R}^{+}$, i.e. the $\sigma$-algebra generated by the left-continuous $\mathbb{F}$-adapted processes. Define $\mathcal{P}(\mathbb{G})$ and $\mathcal{P}(\mathbb{H})$ in a similar way. 
We introduce the following notation: $\begin{array}[c]{l}L^{\infty}_{\mathcal{F}_{T}}(\Omega)=\Big{\\{}\xi:\Omega\rightarrow\mathbb{R}\;\Big{|}\;\xi\mbox{ is }\mathcal{F}_{T}\mbox{-measurable, and essentially bounded}\Big{\\}},\\\ L^{2}_{\mathbb{F}}(0,T;\mathbb{R})=\Big{\\{}\phi:[0,T]\times\Omega\rightarrow\mathbb{R}\;\Big{|}\;(\phi_{t})_{0\leq t\leq T}\mbox{ is }\mathcal{P}(\mathbb{F})\mbox{-measurable such that }{\mathbb{E}}\int_{0}^{T}|\phi_{t}|^{2}dt<\infty\Big{\\}},\\\ L^{\infty}_{\mathbb{F}}(0,T;\mathbb{R})=\Big{\\{}\phi:[0,T]\times\Omega\rightarrow\mathbb{R}\;\Big{|}\;(\phi_{t})_{0\leq t\leq T}\mbox{ is }\mathcal{P}(\mathbb{F})\mbox{-measurable essentially bounded}\Big{\\}},\\\ S^{\infty}_{\mathbb{F}}(0,T;\mathbb{R})=\Big{\\{}\phi:[0,T]\times\Omega\rightarrow\mathbb{R}\;\Big{|}\;(\phi_{t})_{0\leq t\leq T}\mbox{ is }\ \mbox{c}\grave{\mathrm{a}}\mbox{d-l}\grave{\mathrm{a}}\mbox{g}\ \mathbb{F}\mbox{-adapted essentially bounded}\Big{\\}}.\end{array}$ These definitions are generalized in the obvious way to the cases that $\mathbb{F}$ is replaced by $\mathbb{G}$, $\mathbb{H}$, $[0,T]$ by any random time $[\tau,\iota]$ and $\mathbb{R}$ by $\mathbb{R}^{n}$, $\mathbb{R}^{n\times m}$ or $\mathbb{S}^{n}$, where $\mathbb{S}^{n}$ is the set of symmetric $n\times n$ real matrices. If $M\in\mathbb{S}^{n}$ is positive definite (resp. positive semidefinite), we write $M>0$ (resp. $M\geq 0$). In our argument, $t$, $\omega$, “almost surely” and “almost everywhere”, may be suppressed for simplicity in many circumstances if no confusion occurs. In this paper the integral $\int_{s}^{t}$ stands for $\int_{(s,t]}$. The following assumption is classical in the filtration enlargement theory. ###### Assumption 1 Any $\mathbb{F}$-martingale is a $\mathbb{G}$-martingale. ###### Remark 2.1 Assumption 1 is known as immersion between $\mathbb{F}$ and $\mathbb{G}$ in the filtration enlargement theory. Under Assumption 1, the $\sigma$-field $\mathcal{F}_{\infty}$ and $\mathcal{G}_{t}$ are conditionally independent given $\mathcal{F}_{t}$, and the process $\\{W_{t}\\}_{t\geq 0}$ remains a $\mathbb{G}$-Brownian motion. Please see Theorem 3.2 in Aksamit and Jeanblanc [1] and Remark 4.1 in Kharroubi and Lim [11]. Since $\mathbb{F}^{\alpha}$ is independent of $\mathbb{G}$, we further conclude that $\\{W_{t}\\}_{t\geq 0}$ remains an $\mathbb{H}$-Brownian motion. Since $\mathcal{F}_{t}^{\alpha}$ is independent of $\mathcal{G}_{t}$, under Assumption 1, the stochastic integral $\int_{0}^{t}X_{s}dW_{s}$ is well defined for all $\mathcal{P}(\mathbb{H})$-measurable process $X$ such that $\int_{0}^{t}|X_{s}|^{2}ds<\infty$. ###### Assumption 2 The process $N_{\cdot}:=\mathbf{1}_{\tau\leq\cdot}$ admits an $\mathbb{F}$-compensator of the form $\int_{0}^{\cdot\wedge\tau}\lambda_{s}ds$, i.e. $M_{\cdot}:=N_{\cdot}-\int_{0}^{\cdot\wedge\tau}\lambda_{s}ds$ is a $\mathbb{G}$-martingale, where $\lambda$ is a bounded nonnegative $\mathcal{P}(\mathbb{F})$-measurable process. We remark that $\displaystyle M_{t}$ $\displaystyle=N_{t}-\int_{0}^{t}\lambda_{s}\mathbf{1}_{s\leq\tau}ds=N_{t}-\int_{0}^{t}\lambda_{s}^{\mathbb{G}}ds,\quad t\geq 0,$ where $\lambda_{s}^{\mathbb{G}}:=\lambda_{s}\mathbf{1}_{s\leq\tau}=\lambda_{s}(1-N_{s-})\geq 0,\quad s\geq 0,$ is a $\mathcal{P}(\mathbb{G})$-measurable process. 
We now introduce the following scalar-valued linear stochastic differential equation (SDE): $\displaystyle\begin{cases}dX_{t}=\left[A^{\alpha_{t}}_{t}X_{t-}+u_{t}^{\prime}B^{\alpha_{t}}_{t}\right]dt+\left[C^{\alpha_{t}}_{t}X_{t-}+D^{\alpha_{t}}_{t}u_{t}\right]^{\prime}dW_{t}+[E^{\alpha_{t}}_{t}X_{t-}+u_{t}^{\prime}F^{\alpha_{t}}_{t}]dM_{t},\ t\in[0,T\wedge\tau],\\\ X_{0}=x,\ \alpha_{0}=i_{0},\end{cases}$ (2.1) where $T$ is a positive constant, $A^{i},\ B^{i},\ C^{i},\ D^{i},\ E^{i},\ F^{i}$ are all $\mathcal{P}(\mathbb{F})$-measurable processes of suitable sizes for all $i\in\mathcal{M}$, $x\in\mathbb{R}$ and $i_{0}\in\mathcal{M}$ are known. The class of admissible controls is defined as the set $\displaystyle\mathcal{U}:=L^{2}_{\mathbb{H}}(0,T\wedge\tau;\mathbb{R}^{m}).$ If $u(\cdot)\in\mathcal{U}$ and $X(\cdot)$ is the associated solution of (2.1), then we refer to $(X(\cdot),u(\cdot))$ as an admissible pair. Let us now state our stochastic LQ problem as follows: $\displaystyle\begin{cases}\mathrm{Minimize}&\ J(x,i_{0},u(\cdot))\\\ \mbox{subject to}&\ (X,u)\mbox{ is an admissible pair for}\ \eqref{state},\end{cases}$ (2.2) where the cost functional is given as the following quadratic form $\displaystyle J(x,i_{0},u(\cdot)):=\mathbb{E}\left\\{\int_{0}^{T\wedge\tau}\Big{(}Q_{t}^{\alpha_{t}}X_{t}^{2}+u_{t}^{\prime}R_{t}^{\alpha_{t}}u_{t}\Big{)}dt+G^{\alpha_{T\wedge\tau}}X_{T\wedge\tau}^{2}\right\\}.$ (2.3) For $(x,i_{0})\in{\mathbb{R}}\times\mathcal{M}$, the problem (2.2) is said to be well-posed, if $\displaystyle V(x,i_{0}):=\inf_{u\in\mathcal{U}}J(x,i_{0},u)>-\infty,$ where $V(x,i_{0})$ is called its optimal value; and the problem is said to be solvable, if there exists a control $u^{*}\in\mathcal{U}$ such that $\displaystyle J(x,i_{0},u^{*})=V(x,i_{0})>-\infty,$ in which case, $u^{*}$ is called an optimal control for the problem (2.2). We introduce a decomposition result for $\mathcal{P}(\mathbb{G})$-measurable processes. Please refer to Proposition 2.1 in [12]. ###### Lemma 2.2 Any $\mathcal{P}(\mathbb{G})$-measurable process $Y=(Y_{t})_{t\geq 0}$ can be represented as $Y_{t}=Y_{t}^{b}\mathbf{1}_{t\leq\tau}+Y_{t}^{a}(\tau)\mathbf{1}_{t>\tau},\quad t\geq 0,$ where $Y^{b}$ is $\mathcal{P}(\mathbb{F})$-measurable and $Y^{a}$ is $\mathcal{P}(\mathbb{F})\otimes\mathcal{B}(\mathbb{R}^{+})$-measurable. ###### Remark 2.3 By Lemma 2.2, it is reasonable to assume that the coefficients $A^{i},\ B^{i},\ C^{i},\ D^{i},\ E^{i},\ F^{i},\ Q^{i},\ R^{i}$ are $\mathcal{P}(\mathbb{F})$-measurable, because the LQ problem (2.2) depends only on their values in the interval $[0,T\wedge\tau]$. ###### Assumption 3 For all $i\in\mathcal{M}$, $\displaystyle\begin{cases}A^{i}\in L_{\mathbb{F}}^{\infty}(0,T;\mathbb{R}),\ B^{i}\in L_{\mathbb{F}}^{\infty}(0,T;\mathbb{R}^{m}),\ C^{i}\in L_{\mathbb{F}}^{\infty}(0,T;\mathbb{R}^{n}),\\\ D^{i}\in L_{\mathbb{F}}^{\infty}(0,T;\mathbb{R}^{n\times m}),\ E^{i}\in L_{\mathbb{F}}^{\infty}(0,T;\mathbb{R}),\ F^{i}\in L_{\mathbb{F}}^{\infty}(0,T;\mathbb{R}^{m}),\\\ Q^{i}\in L_{\mathbb{F}}^{\infty}(0,T;\mathbb{R}),\ R^{i}\in L_{\mathbb{F}}^{\infty}(0,T;\mathbb{S}^{m}),\end{cases}$ and $G^{i}$ is a bounded $\mathcal{G}_{T\wedge\tau}$-measurable random variable of the form $\displaystyle G^{i}=G^{b,i}\mathbf{1}_{T<\tau}+G^{a,i}_{\tau}\mathbf{1}_{T\geq\tau},$ where $G^{b,i}\in L^{\infty}_{\mathcal{F}_{T}}(\Omega)$ and $G^{a,i}\in L^{\infty}_{\mathbb{F}}(0,T;\mathbb{R})$. Here the superscript “$b$” (resp. “$a$”) stands for “before the jump” (resp. “after the jump”). 
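Although the analysis below is purely theoretical, the structure of the dynamics (2.1) may be easier to see from a simulation. The following Python sketch is ours and not part of the paper: it simulates one path of the state by an Euler scheme up to $T\wedge\tau$, with scalar data ($m=n=1$), two regimes, a constant intensity $\lambda$ and purely illustrative coefficient values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Purely illustrative scalar data (m = n = 1, two regimes); none of these values come from the paper.
T, dt, lam = 1.0, 1e-3, 0.3          # horizon, step size, constant default intensity
A = {1: 0.05, 2: -0.02}; B = {1: 0.10, 2: 0.05}
C = {1: 0.20, 2: 0.30};  D = {1: 0.15, 2: 0.25}
E = {1: -0.50, 2: -0.30}; F = {1: 0.40, 2: 0.20}
q12, q21 = 1.0, 2.0                  # off-diagonal entries of the chain generator Q

def simulate_state(x0=1.0, i0=1, u=lambda t, x, i: 0.0):
    """Euler sketch of one path of the state equation (2.1) on [0, T /\\ tau]."""
    t, X, alpha = 0.0, x0, i0
    tau = rng.exponential(1.0 / lam)             # jump time with constant intensity lam
    while t < min(T, tau):
        h = min(dt, T - t, tau - t)
        if h <= 1e-12:
            break
        ut = u(t, X, alpha)
        dW = rng.normal(0.0, np.sqrt(h))
        dN = 1.0 if t + h >= tau else 0.0        # the indicator process N jumps at tau
        dM = dN - lam * h                        # compensated jump martingale increment
        X += (A[alpha] * X + ut * B[alpha]) * h \
             + (C[alpha] * X + D[alpha] * ut) * dW \
             + (E[alpha] * X + ut * F[alpha]) * dM
        # regime switch with probability ~ q_{ij} h
        if alpha == 1 and rng.random() < q12 * h:
            alpha = 2
        elif alpha == 2 and rng.random() < q21 * h:
            alpha = 1
        t += h
    return min(T, tau), X

print(simulate_state())
```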
Throughout the paper, the above Assumptions 1, 2 and 3 are always, implicitly or explicitly, put in force. ###### Remark 2.4 There are three kinds of uncertainties in the problem (2.2). One stems from the randomness of the Brownian motion, the second one comes from the random time, and the last one comes from the Markov chain. This paper assumes the coefficients in the cost functional (2.3) fulfill at least one of the following conditions, so that the problem (2.2) is well-posed with a nonnegative optimal value. ###### Assumption 4 (Standard Case) $Q^{i}\geq 0$, $G^{i}\geq 0$ and there exists a constant $\delta>0$ such that $R^{i}\geq\delta I_{m}$, for all $i\in\mathcal{M}$, where $I_{m}$ denotes the $m$-dimensional identity matrix. ###### Assumption 5 (Singular Case I) $Q^{i}\geq 0$, $R^{i}\geq 0$, and there exists a constant $\delta>0$ such that $G^{i}\geq\delta$ and $(D^{i})^{\prime}D^{i}\geq\delta I_{m}$, for all $i\in\mathcal{M}$. ###### Assumption 6 (Singular Case II) $m=1$, $Q^{i}\geq 0$, $R^{i}\geq 0$, $G^{b,i}\geq 0$ and there exists a constant $\delta>0$ such that $G^{a,i}\geq\delta$ and $\lambda(F^{i})^{2}\geq\delta$, for all $i\in\mathcal{M}$. Here “singular” means that the control weight matrix $R^{i}$ in the cost functional (2.3) could be possibly a singular matrix. ## 3 Solvability of stochastic Riccati equations In order to solve the problem (2.2), we introduce the following system of ($\ell$-dimensional) BSDEs with jumps: $\displaystyle\begin{cases}P^{i}_{t}=G^{i}+\int_{t\wedge\tau}^{T\wedge\tau}\Big{[}(2A^{i}_{s}+(C^{i}_{s})^{\prime}C^{i}_{s})P^{i}_{s-}+\lambda^{\mathbb{G}}_{s}(E^{i}_{s})^{2}(P^{i}_{s-}+U^{i}_{s})+2(C^{i}_{s})^{\prime}\Lambda^{i}_{s}+2\lambda^{\mathbb{G}}_{s}E^{i}_{s}U^{i}_{s}+Q^{i}_{s}+\sum\limits_{j=1}^{\ell}q_{ij}P^{j}_{s-}\\\ \qquad\quad-\mathcal{N}(s,P^{i}_{s-},\Lambda^{i}_{s},U^{i}_{s},i)^{\prime}\mathcal{R}(s,P^{i}_{s-},U^{i}_{s},i)^{-1}\mathcal{N}(s,P^{i}_{s-},\Lambda^{i}_{s},U^{i}_{s},i)\Big{]}ds-\int_{t\wedge\tau}^{T\wedge\tau}(\Lambda^{i}_{s})^{\prime}dW_{s}-\int_{t\wedge\tau}^{T\wedge\tau}U^{i}_{s}dM_{s},\\\ \mathcal{R}(s,P^{i}_{s-},U^{i},i)>0,\ \mbox{for $s\leq T\wedge\tau$ and all $i\in\mathcal{M}$},\end{cases}$ (3.1) where for any $(s,P,\Lambda,U)\in[0,T\wedge\tau]\times\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}$, $\displaystyle\mathcal{N}(s,P,\Lambda,U,i)$ $\displaystyle:=P((D^{i}_{s})^{\prime}C^{i}_{s}+B^{i}_{s})+(D^{i}_{s})^{\prime}\Lambda+\lambda^{\mathbb{G}}_{s}E^{i}_{s}F^{i}_{s}(P+U)+\lambda^{\mathbb{G}}_{s}F^{i}_{s}U,$ $\displaystyle\mathcal{R}(s,P,U,i)$ $\displaystyle:=R^{i}_{s}+P(D^{i}_{s})^{\prime}D^{i}_{s}+\lambda^{\mathbb{G}}_{s}(P+U)F^{i}_{s}(F^{i}_{s})^{\prime}.$ (3.2) The system of BSDEs (3.1) is referred as system of stochastic Riccati equations with jumps. Please note that the $\ell$ equations in (3.1) are coupled through $\sum_{j=1}^{\ell}q_{ij}P^{j}_{s-}$. ###### Definition 3.1 A vector process $(P^{i},\Lambda^{i},U^{i})_{i\in\mathcal{M}}$ is called a solution of the system of BSDEs (3.1), if it satisfies (3.1), and $(P^{i},\Lambda^{i},U^{i})\in S^{\infty}_{\mathbb{G}}(0,T;\mathbb{R})\times L^{2}_{\mathbb{G}}(0,T;\mathbb{R}^{n})\times L^{\infty}_{\mathbb{G}}(0,T;\mathbb{R})$ for all $i\in\mathcal{M}$. Furthermore, it is called nonnegative if $P^{i}\geq 0,\ t\in[0,T]$, a.s.; called uniformly positive if $P^{i}\geq c,\ t\in[0,T]$, a.s.; and called positive if $P^{i}\geq 0,\ P^{i}+U^{i}\geq c,\ t\in[0,T]$, a.s. where $c>0$ is some deterministic constant. 
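For orientation (this remark is ours, not part of the original text): if there is no jump risk ($\lambda\equiv 0$, so that $\tau>T$ a.s. and the terms involving $E^{i}$, $F^{i}$ and $U^{i}$ disappear) and there is a single regime ($\ell=1$, so the coupling term vanishes), then (3.1) reduces on $[0,T]$ to the familiar scalar stochastic Riccati equation (cf. [21]):
$P_{t}=G^{b}+\int_{t}^{T}\Big[(2A_{s}+C_{s}^{\prime}C_{s})P_{s}+2C_{s}^{\prime}\Lambda_{s}+Q_{s}-\big(P_{s}(D_{s}^{\prime}C_{s}+B_{s})+D_{s}^{\prime}\Lambda_{s}\big)^{\prime}\big(R_{s}+P_{s}D_{s}^{\prime}D_{s}\big)^{-1}\big(P_{s}(D_{s}^{\prime}C_{s}+B_{s})+D_{s}^{\prime}\Lambda_{s}\big)\Big]ds-\int_{t}^{T}\Lambda_{s}^{\prime}dW_{s},\qquad R_{s}+P_{s}D_{s}^{\prime}D_{s}>0.$
The jump and regime-switching terms in (3.1) are thus the additional structure that the random horizon and the Markov chain introduce on top of this classical equation.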
We will construct a solution of (3.1) through another BSDE driven only by $W$, borrowing the idea from [12]. We briefly recall the idea for the reader's convenience. For any $\mathcal{G}_{T\wedge\tau}$-measurable random variables $\xi^{i}$ of the form $\displaystyle\xi^{i}=\xi^{b,i}\mathbf{1}_{T<\tau}+\xi^{a,i}_{\tau}\mathbf{1}_{T\geq\tau},$ where $\xi^{b,i}\in L^{\infty}_{\mathcal{F}_{T}}(\Omega)$ and $\xi^{a,i}\in L^{\infty}_{\mathbb{F}}(0,T;\mathbb{R})$, and any $\mathcal{P}(\mathbb{G})\otimes\mathcal{B}(\mathbb{R}^{\ell})\otimes\mathcal{B}(\mathbb{R})\otimes\mathcal{B}(\mathbb{R})$-measurable function $F:\Omega\times[0,T]\times\mathbb{R}^{\ell}\times\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}$, consider the following BSDE driven by $W$ and $N$ (given in Assumption 2) on the horizon $[0,T\wedge\tau]$: $P^{i}_{t}=\xi^{i}+\int_{t\wedge\tau}^{T\wedge\tau}F_{s}(P_{s-},\Lambda^{i}_{s},U^{i}_{s})ds-\int_{t\wedge\tau}^{T\wedge\tau}(\Lambda^{i}_{s})^{\prime}dW_{s}-\int_{t\wedge\tau}^{T\wedge\tau}U_{s}^{i}dN_{s},\ \mbox{for all}\ i\in\mathcal{M},$ (3.3) where $P=(P^{1},...,P^{\ell})^{\prime}$. Similar to [12], we will construct a solution to (3.3) from BSDEs without jumps in the Brownian filtration. By Lemma 2.2, there exists a $\mathcal{P}(\mathbb{F})\otimes\mathcal{B}(\mathbb{R}^{\ell})\otimes\mathcal{B}(\mathbb{R})\otimes\mathcal{B}(\mathbb{R})$-measurable function $F^{b}$ such that $\displaystyle F_{t}(\cdot,\cdot,\cdot)\mathbf{1}_{t\leq\tau}=F^{b}_{t}(\cdot,\cdot,\cdot)\mathbf{1}_{t\leq\tau},\ t\geq 0.$ The following lemma is a generalization of Theorem 4.1 in [12] to systems of BSDEs, but the proof follows the same lines as in [12], so we omit it. ###### Lemma 3.2 Assume that $(P^{b,i},\Lambda^{b,i})_{i\in\mathcal{M}}$ is a solution to the following BSDE: $\displaystyle P^{b,i}_{t}=\xi^{b,i}+\int_{t}^{T}F^{b}_{s}(P^{b}_{s},\Lambda^{b,i}_{s},\xi^{a,i}_{s}-P^{b,i}_{s})ds-\int_{t}^{T}(\Lambda^{b,i}_{s})^{\prime}dW_{s},\ i\in\mathcal{M},$ where $P^{b}=(P^{b,1},...,P^{b,\ell})^{\prime}$.
Then BSDE (3.3) admits a solution $(P^{i},\Lambda^{i},U^{i})_{i\in\mathcal{M}}$ given by $\displaystyle P^{i}_{t}=P^{b,i}_{t}\mathbf{1}_{t<\tau}+\xi^{a,i}_{\tau}\mathbf{1}_{t\geq\tau},$ $\displaystyle\Lambda^{i}_{t}=\Lambda^{b,i}_{t}\mathbf{1}_{t\leq\tau},$ $\displaystyle U^{i}_{t}=(\xi^{a,i}_{t}-P^{b,i}_{t})\mathbf{1}_{t\leq\tau},\ t\in[0,T],\ i\in\mathcal{M}.$ Inspired by Lemma 3.2, we introduce the following system of BSDEs _without jumps_ on the deterministic horizon $[0,T]$: $\displaystyle\begin{cases}P^{b,i}_{t}=G^{b,i}+\int_{t}^{T}\Big{[}(2A^{i}_{s}+(C^{i}_{s})^{\prime}C^{i}_{s})P^{b,i}_{s}+\lambda_{s}(E^{i}_{s})^{2}G^{a,i}_{s}+2(C^{i}_{s})^{\prime}\Lambda^{b,i}_{s}\\\ \qquad\qquad\qquad+2\lambda_{s}E^{i}_{s}(G^{a,i}_{s}-P^{b,i}_{s})+Q^{i}_{s}+\lambda_{s}(G^{a,i}_{s}-P^{b,i}_{s})+\sum\limits_{j=1}^{\ell}q_{ij}P^{b,j}_{s}\\\ \qquad\qquad\qquad-\widehat{\mathcal{N}}(s,P^{b,i}_{s},\Lambda^{b,i}_{s},i)^{\prime}\widehat{\mathcal{R}}(s,P^{b,i}_{s},i)^{-1}\widehat{\mathcal{N}}(s,P^{b,i}_{s},\Lambda^{b,i}_{s},i)\Big{]}ds-\int_{t}^{T}(\Lambda^{b,i}_{s})^{\prime}dW_{s},\\\ \qquad=G^{b,i}+\int_{t}^{T}\Big{[}(2A^{i}_{s}+(C^{i}_{s})^{\prime}C^{i}_{s}-2\lambda_{s}E^{i}_{s}-\lambda_{s})P^{b,i}_{s}+2(C^{i}_{s})^{\prime}\Lambda^{b,i}_{s}+\lambda_{s}G^{a,i}_{s}(E^{i}_{s}+1)^{2}+Q^{i}_{s}\\\ \qquad\qquad+\sum\limits_{j=1}^{\ell}q_{ij}P^{b,j}_{s}-\widehat{\mathcal{N}}(s,P^{b,i}_{s},\Lambda^{b,i}_{s},i)^{\prime}\widehat{\mathcal{R}}(s,P^{b,i}_{s},i)^{-1}\widehat{\mathcal{N}}(s,P^{b,i},\Lambda^{b,i},i)\Big{]}ds-\int_{t}^{T}(\Lambda^{b,i}_{s})^{\prime}dW_{s},\\\ \widehat{\mathcal{R}}(s,P^{b,i}_{s},i)>0,\ \mbox{for $s\leq T$ and all $i\in\mathcal{M}$},\end{cases}$ (3.4) where $\displaystyle\widehat{\mathcal{N}}(s,P,\Lambda,i):$ $\displaystyle=P((D^{i}_{s})^{\prime}C^{i}_{s}+B^{i}_{s})+(D^{i}_{s})^{\prime}\Lambda+\lambda_{s}G^{a,i}_{s}E^{i}_{s}F^{i}_{s}+\lambda_{s}(G^{a,i}_{s}-P)F^{i}_{s}$ $\displaystyle=P((D^{i}_{s})^{\prime}C^{i}_{s}+B^{i}_{s}-\lambda_{s}F^{i}_{s})+(D^{i}_{s})^{\prime}\Lambda+\lambda_{s}G^{a,i}_{s}(E^{i}_{s}+1)F^{i}_{s},$ $\displaystyle\widehat{\mathcal{R}}(s,P,i):$ $\displaystyle=R^{i}_{s}+P(D^{i}_{s})^{\prime}D^{i}_{s}+\lambda_{s}G^{a,i}_{s}F^{i}_{s}(F^{i}_{s})^{\prime}.$ ###### Definition 3.3 A vector process $(P^{i},\Lambda^{i})_{i\in\mathcal{M}}$ is called a solution of the system of BSDEs (3.4), if it satisfies (3.4), and $(P^{i},\Lambda^{i})\in L^{\infty}_{\mathbb{F}}(0,T;\mathbb{R})\times L^{2}_{\mathbb{F}}(0,T;\mathbb{R}^{n})$ for all $i\in\mathcal{M}$. Furthermore, it is called nonnegative if $P^{i}\geq 0,\ t\in[0,T]$, a.s.; and called uniformly positive if $P^{i}\geq c,\ t\in[0,T]$, a.s. with some deterministic constant $c>0$. 
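To see where (3.4) comes from (this explanatory remark is ours, not part of the original text): writing the jump integral of (3.1) in terms of $dN_{s}=dM_{s}+\lambda^{\mathbb{G}}_{s}ds$ and applying Lemma 3.2 with $\xi^{i}=G^{i}$, one substitutes $P^{i}_{s-}=P^{b,i}_{s}$, $\Lambda^{i}_{s}=\Lambda^{b,i}_{s}$, $U^{i}_{s}=G^{a,i}_{s}-P^{b,i}_{s}$ and $\lambda^{\mathbb{G}}_{s}=\lambda_{s}$ on $\\{s\leq\tau\\}$ into the generator of (3.1). The extra term $\lambda_{s}(G^{a,i}_{s}-P^{b,i}_{s})$ appearing in (3.4) is exactly the compensator contribution $\lambda_{s}U^{i}_{s}$ produced by this change of integrator, and the remaining terms collect into $\widehat{\mathcal{N}}$ and $\widehat{\mathcal{R}}$.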
Let $\displaystyle\widetilde{A}^{i}=A^{i}-\lambda E^{i}-\frac{1}{2}\lambda,\ \widetilde{B}^{i}=B^{i}-\lambda F^{i},\ \widetilde{Q}^{i}=Q^{i}+\lambda G^{a,i}(E^{i}+1)^{2},$ $\displaystyle\widetilde{R}^{i}=R^{i}+\lambda G^{a,i}F^{i}(F^{i})^{\prime},\ S^{i}=\lambda G^{a,i}(E^{i}+1)F^{i}.$ Then (3.4) can be written as $\displaystyle\begin{cases}P^{b,i}_{t}=G^{b,i}+\int_{t}^{T}\Big{[}(2\widetilde{A}^{i}_{s}+(C^{i}_{s})^{\prime}C^{i}_{s})P^{b,i}_{s}+2(C^{i}_{s})^{\prime}\Lambda^{b,i}_{s}+\widetilde{Q}^{i}_{s}+\sum\limits_{j=1}^{\ell}q_{ij}P^{b,j}_{s}\\\ \qquad\qquad\qquad\qquad-\widehat{\mathcal{N}}(s,P^{b,i}_{s},\Lambda^{b,i}_{s},i)^{\prime}\widehat{\mathcal{R}}(s,P^{b,i}_{s},i)^{-1}\widehat{\mathcal{N}}(s,P^{b,i}_{s},\Lambda^{b,i}_{s},i)\Big{]}ds-\int_{t}^{T}(\Lambda^{b,i}_{s})^{\prime}dW_{s},\\\ \widehat{\mathcal{R}}(s,P^{b,i}_{s},i)>0,\ \mbox{for $s\leq T$ and all $i\in\mathcal{M}$},\end{cases}$ where $\displaystyle\widehat{\mathcal{N}}(s,P,\Lambda,i)$ $\displaystyle=P((D^{i}_{s})^{\prime}C^{i}_{s}+\widetilde{B}^{i}_{s})+(D^{i}_{s})^{\prime}\Lambda+S^{i}_{s},$ $\displaystyle\widehat{\mathcal{R}}(s,P,i)$ $\displaystyle=\widetilde{R}^{i}_{s}+P(D^{i}_{s})^{\prime}D^{i}_{s}.$ And it is associated with the following LQ stochastic control problem on $[0,T]$: $\displaystyle\begin{cases}\mathrm{Minimize}&\ \widetilde{J}(x,i_{0},u)\\\ \mbox{subject to}&\ (X,u)\mbox{ is an admissible pair for}\ \eqref{statebm},\end{cases}$ where the state process is $\displaystyle\begin{cases}dX_{t}=\left[\widetilde{A}^{\alpha_{t}}_{t}X_{t}+u_{t}^{\prime}\widetilde{B}^{\alpha_{t}}_{t}\right]dt+\left[C^{\alpha_{t}}_{t}X_{t}+D^{\alpha_{t}}_{t}u_{t}\right]^{\prime}dW_{t},\ t\in[0,T],\\\ X_{0}=x,\ \alpha_{0}=i_{0},\end{cases}$ (3.5) and the cost functional is given as the following quadratic form $\displaystyle\widetilde{J}(x,i_{0},u(\cdot)):=\mathbb{E}\left\\{\int_{0}^{T}\Big{(}\widetilde{Q}_{t}^{\alpha_{t}}X_{t}^{2}+u_{t}^{\prime}\widetilde{R}_{t}^{\alpha_{t}}u_{t}+2u_{t}^{\prime}S^{\alpha_{t}}_{t}X_{t}\Big{)}dt+G^{b,\alpha_{T}}X_{T}^{2}\right\\}.$ (3.6) Compared with the stochastic LQ problem studied in our previous work [6], the cross term $2u_{t}^{\prime}S^{\alpha_{t}}_{t}X_{t}$ involves in (3.6). The existence and uniqueness of solution to (3.4) are proved in Theorems 3.5 and 3.6 of [6] when $S^{i}\equiv 0$. And the method could be applied to the case of $S^{i}=\lambda G^{a,i}(E^{i}+1)F^{i}$, however we need to carefully show that the solution is nonnegative or uniformly positive in different cases. ###### Theorem 3.4 (Standard Case) Under Assumption 4, the system of BSDEs (3.4) admits a unique nonnegative solution $(P^{b,i},\Lambda^{b,i})_{i\in\mathcal{M}}$. Proof: For $i\in\mathcal{M}$, $t\in[0,T]$, $P\in\mathbb{R}^{\ell}$, and $\Lambda\in\mathbb{R}^{n\times\ell}$, set $\displaystyle\overline{f}(t,P,\Lambda,i)=(2\widetilde{A}^{i}_{t}+(C^{i}_{t})^{\prime}C^{i}_{t}+q_{ii})P^{i}+2(C^{i}_{t})^{\prime}\Lambda^{i}+\widetilde{Q}^{i}_{t}+\sum_{j\neq i}q_{ij}P^{j}.$ From Theorem 3.5 of [6], there exists a unique solution $(\overline{P}^{i},\overline{\Lambda}^{i})_{i\in\mathcal{M}}$ to the corresponding $\ell$-dimensional linear BSDE with the generator $\overline{f}$ and terminal value $G^{b}$, and there exists a constant $c>0$ such that $\overline{P}^{i}\leq c$. Hereafter, we shall use $c$ to represent a generic positive constant independent of $i$, $n$ and $t$, which can be different from line to line. 
Denote $H(t,P,\Lambda,i)=-\widehat{\mathcal{N}}(t,P,\Lambda,i)^{\prime}\widehat{\mathcal{R}}(t,P,i)\widehat{\mathcal{N}}(t,P,\Lambda,i).$ For $k\geq 1$, $(t,P,\Lambda,i)\in[0,T]\times\mathbb{R}\times\mathbb{R}^{n}\times\mathcal{M}$, define $H^{k}(t,P,\Lambda,i)=\sup_{\widetilde{P}\in\mathbb{R},\;\widetilde{\Lambda}\in\mathbb{R}^{n}}\Big{\\{}H(t,\widetilde{P},\widetilde{\Lambda},i)-k|P-\widetilde{P}|-k|\Lambda-\widetilde{\Lambda}|\Big{\\}}.$ Then it is non-positive and uniformly Lipschitz in $(P,\Lambda)$, and decreases to $H(t,P,\Lambda,i)$ as $k$ goes to infinity. The following BSDE $\displaystyle\begin{cases}dP^{k,i}_{t}=-\Big{[}\overline{f}(t,P^{k}_{t},\Lambda^{k}_{t},i)+H^{k}(t,P^{k,i}_{t},\Lambda^{k,i}_{t},i)\Big{]}dt+(\Lambda^{k,i}_{t})^{\prime}dW_{t},\\\ P^{k,i}_{T}=G^{b,i},\ \mbox{ for all $i\in\mathcal{M}$, }\end{cases}$ is an $\ell$-dimensional BSDE with a Lipschitz generator, so it admits a unique solution, denoted by $\big{(}P^{k,i},\Lambda^{k,i}\big{)}_{i\in\mathcal{M}}$. Notice that $\displaystyle H^{k}(t,0,0,i)\geq H(t,0,0,i),$ and thanks to Lemma A.1 and Assumption 4, $\displaystyle 1-\lambda_{t}G^{a,i}_{t}(F^{i}_{t})^{\prime}[R^{i}_{t}+\lambda_{t}G^{a,i}_{t}F^{i}_{t}(F^{i}_{t})^{\prime}]^{-1}F^{i}_{t}=\frac{\mathrm{det}(R^{i}_{t})}{\mathrm{det}(R^{i}_{t}+\lambda_{t}G^{a,i}_{t}F^{i}_{t}(F^{i}_{t})^{\prime})}>0.$ Hence, we have the following estimate $\displaystyle\quad\;\overline{f}(t,0,0,i)+H^{k}(t,0,0,i)$ $\displaystyle\geq\widetilde{Q}^{i}_{t}-\widehat{\mathcal{N}}(t,0,0,i)^{\prime}\widehat{\mathcal{R}}(t,0,i)^{-1}\widehat{\mathcal{N}}(t,0,0,i)$ $\displaystyle=Q^{i}_{t}+\lambda_{t}G^{a,i}_{t}(E^{i}_{t}+1)^{2}-[\lambda_{t}G^{a,i}_{t}(E^{i}_{t}+1)F^{i}_{t}]^{\prime}[R^{i}_{t}+\lambda_{t}G^{a,i}_{t}F^{i}_{t}(F^{i}_{t})^{\prime}]^{-1}[\lambda_{t}G^{a,i}_{t}(E^{i}_{t}+1)F^{i}_{t}]$ $\displaystyle=Q^{i}_{t}+\lambda_{t}G^{a,i}_{t}(E^{i}_{t}+1)^{2}-(\lambda_{t}G^{a,i}_{t})^{2}(E^{i}_{t}+1)^{2}(F^{i}_{t})^{\prime}[R^{i}_{t}+\lambda_{t}G^{a,i}_{t}F^{i}_{t}(F^{i}_{t})^{\prime}]^{-1}F^{i}_{t}$ $\displaystyle=Q^{i}_{t}+\lambda_{t}G^{a,i}_{t}(E^{i}_{t}+1)^{2}\Big{[}1-\lambda_{t}G^{a,i}_{t}(F^{i}_{t})^{\prime}[R^{i}_{t}+\lambda_{t}G^{a,i}_{t}F^{i}_{t}(F^{i}_{t})^{\prime}]^{-1}F^{i}_{t}\Big{]}$ $\displaystyle=Q^{i}_{t}+\lambda_{t}G^{a,i}_{t}(E^{i}_{t}+1)^{2}\frac{\mathrm{det}(R^{i}_{t})}{\mathrm{det}(R^{i}_{t}+\lambda_{t}G^{a,i}_{t}F^{i}_{t}(F^{i}_{t})^{\prime})}$ $\displaystyle\geq 0.$ Now we have $\overline{f}(t,0,0,i)+H^{k}(t,0,0,i)\geq 0,\ G\geq 0$, and $\overline{f}(t,P,\Lambda,i)+H^{k}(t,P^{i},\Lambda^{i},i)\leq\overline{f}(t,P,\Lambda,i),$ so by the comparison theorem for multidimensional BSDEs (see Lemma 3.4 of [6] or Lemma 2.2 of [5]), we have $\displaystyle 0\leq P^{k,i}_{t}\leq\overline{P}^{i}_{t}\leq c,$ and also $P^{k,i}_{t}$ is decreasing in $k$, for each $i\in\mathcal{M}$. Let $P^{i}_{t}=\lim\limits_{k\rightarrow\infty}P^{k,i}_{t}$, $i\in\mathcal{M}$. It is important to note that we can regard $\big{(}P^{k,i},\Lambda^{k,i}\big{)}$ as the solution of a scalar-valued quadratic BSDE for each fixed $i\in\mathcal{M}$. Thus by Proposition 2.4 in [13], there exists a process $\Lambda\in L^{2}_{\mathbb{F}}(0,T;\mathbb{R}^{n\times\ell})$ such that $(P,\Lambda)$ is a solution to BSDE (3.4). We have now established the existence of the solution. The uniqueness part is similar to Theorem 3.5 of [6], so we omit it. 
$\Box$ ###### Theorem 3.5 (Singular Case I) Under Assumption 5, the system of BSDEs (3.4) admits a unique uniformly positive solution $(P^{b,i},\Lambda^{b,i})_{i\in\mathcal{M}}$. Proof: Consider the following $\ell$-dimensional decoupled BSDEs: $\begin{cases}dP^{b,i}=-\Big{[}(2\widetilde{A}^{i}+(C^{i})^{\prime}C^{i}+q_{ii})P^{b,i}+2(C^{i})^{\prime}\Lambda^{b,i}+\widetilde{Q}^{i}+H(P^{b,i},\Lambda^{b,i},i)\Big{]}dt+(\Lambda^{b,i})^{\prime}dW,\\\ P^{b,i}_{T}=G^{b,i},\\\ P^{b,i}>0,\ \mbox{ for all $i\in\mathcal{M}$}.\end{cases}$ (3.7) If we can show that (3.7) admits a uniformly positive solution $(\underline{P}^{i},\underline{\Lambda}^{i})_{i\in\mathcal{M}}$, then a solution to (3.4) can be constructed following the procedure of Theorem 3.6 in [6]. So it remains to study the solvability of (3.7). As the system of BSDEs (3.7) is decoupled, we would only consider the solvability of each fixed one of the $\ell$ equations in (3.7) and may drop the superscript $i$ in the remaining proof for notation simplicity. Let us first consider the following BSDE: $\displaystyle\begin{cases}dP_{t}=-\Big{[}-[P(D^{\prime}C+\widetilde{B})+D^{\prime}\Lambda+S]^{\prime}[\lambda G^{a}FF^{\prime}+PD^{\prime}D]^{-1}[P(D^{\prime}C+\widetilde{B})+D^{\prime}\Lambda+S]\\\ \qquad\qquad\qquad+(2\widetilde{A}+C^{\prime}C+q_{ii})P+2C^{\prime}\Lambda+\widetilde{Q}\Big{]}dt+\Lambda^{\prime}dW,\\\ P_{T}=G^{b},\\\ P>0.\end{cases}$ (3.8) Under Assumption 5, $D^{\prime}D$ is invertible, so the generator $f$ of (3.8) can be rewritten as $\displaystyle f(P,\Lambda)$ $\displaystyle=(2\widetilde{A}+C^{\prime}C+q_{ii})P+2C^{\prime}\Lambda+\widetilde{Q}$ $\displaystyle\qquad-[P(D^{\prime}C+\widetilde{B})+D^{\prime}\Lambda+S]^{\prime}[\lambda G^{a}FF^{\prime}+PD^{\prime}D]^{-1}[P(D^{\prime}C+\widetilde{B})+D^{\prime}\Lambda+S]$ $\displaystyle=(2\widetilde{A}+C^{\prime}C+q_{ii})P+2C^{\prime}\Lambda+\widetilde{Q}-P(D^{\prime}C+\widetilde{B})^{\prime}(D^{\prime}D)^{-1}(D^{\prime}C+\widetilde{B})$ $\displaystyle\qquad-2(D^{\prime}C+\widetilde{B})^{\prime}(D^{\prime}D)^{-1}(S+D^{\prime}\Lambda)+\lambda G^{a}|(D^{\prime}C+\widetilde{B})^{\prime}(D^{\prime}D)^{-1}F|^{2}$ $\displaystyle\qquad-[S-\lambda G^{a}FF^{\prime}(D^{\prime}D)^{-1}(D^{\prime}C+\widetilde{B})+D^{\prime}\Lambda]^{\prime}(PD^{\prime}D+\lambda G^{a}FF^{\prime})^{-1}$ $\displaystyle\qquad\qquad\times[S-\lambda G^{a}FF^{\prime}(D^{\prime}D)^{-1}(D^{\prime}C+\widetilde{B})+D^{\prime}\Lambda],$ by some basis operations of positive matrices. Under Assumption 5, there is a deterministic constant $c_{2}>0$ such that $\displaystyle|2A-2\lambda E-\lambda+C^{\prime}C+q_{ii}-(D^{\prime}C+B-\lambda F)^{\prime}(D^{\prime}D)^{-1}(D^{\prime}C+B-\lambda F)|\leq c_{2}.$ Let $\varepsilon=\delta e^{-c_{2}T}$, where $\delta>0$ is the constant in Assumption 5. 
Then from Theorem 2.3 of [13], there is a bounded maximal solution (see page 565 of [13] for its definition) $(P^{\varepsilon},\Lambda^{\varepsilon})$ to the BSDE with terminal value $G^{b}$ and generator $f^{\varepsilon}(P,\Lambda)$, where $\displaystyle f^{\varepsilon}(P,\Lambda)$ $\displaystyle=(2\widetilde{A}+C^{\prime}C+q_{ii})P+2C^{\prime}\Lambda+\widetilde{Q}-P(D^{\prime}C+\widetilde{B})^{\prime}(D^{\prime}D)^{-1}(D^{\prime}C+\widetilde{B})$ $\displaystyle\qquad-2(D^{\prime}C+\widetilde{B})^{\prime}(D^{\prime}D)^{-1}(S+D^{\prime}\Lambda)+\lambda G^{a}|(D^{\prime}C+\widetilde{B})^{\prime}(D^{\prime}D)^{-1}F|^{2}$ $\displaystyle\qquad-[S-\lambda G^{a}FF^{\prime}(D^{\prime}D)^{-1}(D^{\prime}C+\widetilde{B})+D^{\prime}\Lambda]^{\prime}((P\vee\varepsilon)D^{\prime}D+\lambda G^{a}FF^{\prime})^{-1}$ $\displaystyle\qquad\qquad\times[S-\lambda G^{a}FF^{\prime}(D^{\prime}D)^{-1}(D^{\prime}C+\widetilde{B})+D^{\prime}\Lambda].$ Notice that $Q\geq 0$, and $\displaystyle\quad\;\lambda G^{a}(E+1)^{2}-2\lambda G^{a}(E+1)(D^{\prime}C+B-\lambda F)^{\prime}(D^{\prime}D)^{-1}F+\lambda G^{a}|(D^{\prime}C+B-\lambda F)^{\prime}(D^{\prime}D)^{-1}F|^{2}$ $\displaystyle\qquad-\lambda^{2}(G^{a})^{2}[(E+1)F-FF^{\prime}(D^{\prime}D)^{-1}(D^{\prime}C+B-\lambda F)]^{\prime}((P\vee\varepsilon)D^{\prime}D+\lambda G^{a}FF^{\prime})^{-1}$ $\displaystyle\qquad\qquad\times[(E+1)F-FF^{\prime}(D^{\prime}D)^{-1}(D^{\prime}C+B-\lambda F)]$ $\displaystyle=\lambda G^{a}|E+1-(D^{\prime}C+B-\lambda F)^{\prime}(D^{\prime}D)^{-1}F|^{2}$ $\displaystyle\qquad-\lambda^{2}(G^{a})^{2}[E+1-F^{\prime}(D^{\prime}D)^{-1}(D^{\prime}C+B-\lambda F)]^{\prime}F^{\prime}((P\vee\varepsilon)D^{\prime}D+\lambda G^{a}FF^{\prime})^{-1}F$ $\displaystyle\qquad\qquad\times[E+1-F^{\prime}(D^{\prime}D)^{-1}(D^{\prime}C+B-\lambda F)]$ $\displaystyle=\lambda G^{a}|E+1-(D^{\prime}C+B-\lambda F)^{\prime}(D^{\prime}D)^{-1}F|^{2}(1-\lambda G^{a}F^{\prime}((P\vee\varepsilon)D^{\prime}D+\lambda G^{a}FF^{\prime})^{-1}F)$ $\displaystyle=\lambda G^{a}|E+1-(D^{\prime}C+B-\lambda F)^{\prime}(D^{\prime}D)^{-1}F|^{2}\frac{\mathrm{det}((P\vee\varepsilon)D^{\prime}D)}{\mathrm{det}((P\vee\varepsilon)D^{\prime}D+\lambda G^{a}FF^{\prime})}$ $\displaystyle\geq 0,$ where the last equality is due to Lemma A.1, so we have $\displaystyle f^{\varepsilon}(P,\Lambda)$ $\displaystyle=(2A-2\lambda E-\lambda+C^{\prime}C+q_{ii})P+2C^{\prime}\Lambda+Q+\lambda G^{a}(E+1)^{2}$ $\displaystyle\qquad-P(D^{\prime}C+B-\lambda F)^{\prime}(D^{\prime}D)^{-1}(D^{\prime}C+B-\lambda F)-2\lambda G^{a}(E+1)(D^{\prime}C+B-\lambda F)^{\prime}(D^{\prime}D)^{-1}F$ $\displaystyle\qquad+\lambda G^{a}|(D^{\prime}C+B-\lambda F)^{\prime}(D^{\prime}D)^{-1}F|^{2}$ $\displaystyle\qquad-\lambda^{2}(G^{a})^{2}[E+1-F^{\prime}(D^{\prime}D)^{-1}(D^{\prime}C+B-\lambda F)]^{\prime}F^{\prime}((P\vee\varepsilon)D^{\prime}D+\lambda G^{a}FF^{\prime})^{-1}F$ $\displaystyle\qquad\qquad\times[E+1-F^{\prime}(D^{\prime}D)^{-1}(D^{\prime}C+B-\lambda F)]$ $\displaystyle\qquad-2(D^{\prime}C+B-\lambda F)^{\prime}(D^{\prime}D)^{-1}D^{\prime}\Lambda-\Lambda^{\prime}D((P\vee\varepsilon)D^{\prime}D+\lambda G^{a}FF^{\prime})^{-1}D^{\prime}\Lambda$ $\displaystyle\qquad-2\lambda G^{a}[E+1-F^{\prime}(D^{\prime}D)^{-1}(D^{\prime}C+B-\lambda F)]F^{\prime}((P\vee\varepsilon)D^{\prime}D+\lambda G^{a}FF^{\prime})^{-1}D^{\prime}\Lambda$ $\displaystyle\geq(2A-2\lambda E-\lambda+C^{\prime}C+q_{ii})P-(D^{\prime}C+B-\lambda F)^{\prime}(D^{\prime}D)^{-1}(D^{\prime}C+B-\lambda F)P+2C^{\prime}\Lambda$ $\displaystyle\qquad-2(D^{\prime}C+B-\lambda 
F)^{\prime}(D^{\prime}D)^{-1}D^{\prime}\Lambda-\Lambda^{\prime}D((P\vee\varepsilon)D^{\prime}D+\lambda G^{a}FF^{\prime})^{-1}D^{\prime}\Lambda$ $\displaystyle\qquad-2\lambda G^{a}[E+1-F^{\prime}(D^{\prime}D)^{-1}(D^{\prime}C+B-\lambda F)]F^{\prime}((P\vee\varepsilon)D^{\prime}D+\lambda G^{a}FF^{\prime})^{-1}D^{\prime}\Lambda.$ Obviously, the following BSDE $\displaystyle\begin{cases}dP=-\Big{[}-c_{2}P+2C^{\prime}\Lambda-2(D^{\prime}C+B-\lambda F)^{\prime}(D^{\prime}D)^{-1}D^{\prime}\Lambda-\Lambda^{\prime}D((P\vee\varepsilon)D^{\prime}D+\lambda G^{a}FF^{\prime})^{-1}D^{\prime}\Lambda\\\ \qquad\qquad\quad-2\lambda G^{a}[E+1-F^{\prime}(D^{\prime}D)^{-1}(D^{\prime}C+B-\lambda F)]F^{\prime}((P\vee\varepsilon)D^{\prime}D+\lambda G^{a}FF^{\prime})^{-1}D^{\prime}\Lambda\Big{]}dt+\Lambda^{\prime}dW,\\\ P_{T}=\delta\end{cases}$ admits a solution $(\delta e^{-c_{2}(T-t)},0)$. Hence the maximal solution argument in Theorem 2.3 of [13] gives $\displaystyle P^{\varepsilon}_{t}\geq\delta e^{-c_{2}(T-t)}\geq\delta e^{-c_{2}T}=\varepsilon.$ (3.9) This implies that $(P^{\varepsilon},\Lambda^{\varepsilon})$ is actually a solution to (3.8). On the other hand, $\displaystyle-[P(D^{\prime}C+\widetilde{B})+D^{\prime}\Lambda+S]^{\prime}[\lambda G^{a}FF^{\prime}+PD^{\prime}D]^{-1}[P(D^{\prime}C+\widetilde{B})+D^{\prime}\Lambda+S]\leq 0,$ and the following linear BSDE $\displaystyle\begin{cases}dP_{t}=-\Big{[}(2\widetilde{A}+C^{\prime}C+q_{ii})P+2C^{\prime}\Lambda+\widetilde{Q}\Big{]}dt+\Lambda^{\prime}dW,\\\ P_{T}=G^{b},\end{cases}$ (3.10) admits a unique solution solution $(\overline{P},\overline{\Lambda})$ such that $\overline{P}\leq c_{3}$ for some constant $c_{3}>0$. Then applying the comparison theorem to BSDEs (3.8) and (3.10), we get $\displaystyle P^{\varepsilon}_{t}\leq\overline{P}_{t}\leq c_{3}.$ This implies that $(P^{\varepsilon},\Lambda^{\varepsilon})$ is actually a solution to the following BSDE: $\displaystyle\begin{cases}dP_{t}=-\Big{[}-[P(D^{\prime}C+\widetilde{B})+D^{\prime}\Lambda+S]^{\prime}[\lambda G^{a}FF^{\prime}+PD^{\prime}D]^{-1}[P(D^{\prime}C+\widetilde{B})+D^{\prime}\Lambda+S]g^{\varepsilon,c_{3}}(P)\\\ \qquad\qquad\quad+(2\widetilde{A}+C^{\prime}C+q_{ii})P+2C^{\prime}\Lambda+\widetilde{Q}\Big{]}dt+\Lambda^{\prime}dW,\\\ P_{T}=G^{b},\end{cases}$ (3.11) where $g^{\varepsilon,c_{3}}:\mathbb{R}^{+}\rightarrow[0,1]$ is a smooth truncation function satisfying $g^{\varepsilon,c_{3}}(x)=1$ for $x\in[\varepsilon,c_{3}]$, and $g^{\varepsilon,c_{3}}(x)=0$ for $x\in[0,\varepsilon/2]\cup[2c_{3},\infty)$. Finally, under Assumption 5, we have $\displaystyle\quad[P(D^{\prime}C+\widetilde{B})+D^{\prime}\Lambda+S]^{\prime}[R+\lambda G^{a}FF^{\prime}+PD^{\prime}D]^{-1}[P(D^{\prime}C+\widetilde{B})+D^{\prime}\Lambda+S]g^{\varepsilon,c_{3}}(P)$ $\displaystyle\leq\frac{2\delta}{\varepsilon}|P(D^{\prime}C+\widetilde{B})+D^{\prime}\Lambda+S|^{2}g^{\varepsilon,c_{3}}(P)$ $\displaystyle\leq c(1+|\Lambda|^{2}).$ By Theorem 2.3 in [13], there exists a bounded, maximal solution $(P_{2},\Lambda_{2})$ to the following quadratic BSDE: $\displaystyle\begin{cases}dP_{t}=-\Big{[}-[P(D^{\prime}C+\widetilde{B})+D^{\prime}\Lambda+S]^{\prime}[R+\lambda G^{a}FF^{\prime}+PD^{\prime}D]^{-1}[P(D^{\prime}C+\widetilde{B})+D^{\prime}\Lambda+S]g^{\varepsilon,c_{3}}(P)\\\ \qquad\qquad\quad+(2\widetilde{A}+C^{\prime}C+q_{ii})P+2C^{\prime}\Lambda+\widetilde{Q}\Big{]}dt+\Lambda^{\prime}dW,\\\ P_{T}=G^{b}.\end{cases}$ (3.12) Notice that under Assumption 5, $R\geq 0$. 
Applying the maximal solution argument in Theorem 2.3 of [13] to BSDEs (3.11) and (3.12), we get $\displaystyle P^{\varepsilon}\leq P_{2}.$ (3.13) On the other hand, $\displaystyle-[P(D^{\prime}C+\widetilde{B})+D^{\prime}\Lambda+S]^{\prime}[R+\lambda G^{a}FF^{\prime}+PD^{\prime}D]^{-1}[P(D^{\prime}C+\widetilde{B})+D^{\prime}\Lambda+S]g^{\varepsilon,c_{3}}(P)\leq 0,$ thus applying the comparison theorem to BSDEs (3.10) and (3.12), we have $\displaystyle P_{2}\leq\overline{P}_{t}\leq c_{3}.$ (3.14) Combining (3.9), (3.13), (3.14), we obtain $g^{\varepsilon,c_{3}}(P_{2})=1$. Hence $(P_{2},\Lambda_{2})$ is actually a solution to (3.7). The uniqueness part is similar to Theorem 3.6 of [6], so we omit it. $\Box$ ###### Theorem 3.6 (Singular Case II) Under Assumption 6, the system of BSDEs (3.4) admits a unique nonnegative solution $(P^{b,i},\Lambda^{b,i})_{i\in\mathcal{M}}$. Proof: Notice that under Assumption 6, $\displaystyle 1-\frac{\lambda G^{a,i}(F^{i})^{2}}{R^{i}+\lambda G^{a,i}(F^{i})^{2}}=\frac{R^{i}}{R^{i}+\lambda G^{a,i}(F^{i})^{2}}\geq 0,$ then by Theorem 3.4, BSDE (3.4) admits a unique nonnegative solution. $\Box$ The following remark will be used in Section 5. ###### Remark 3.7 If Assumption 6 holds with $G^{i,b}\geq\delta>0$, then the solution $(P^{b,i},\Lambda^{b,i})_{i\in\mathcal{M}}$ of the system of BSDEs (3.4) is uniformly positive. The argument is similar to the proof of Theorem 3.5, so we will only give the sketch showing that solution to (3.4) has a positive lower bound given by the following BSDE (the superscript $i$ is suppressed): $\displaystyle\begin{cases}dP_{t}=-\Big{[}-\frac{1}{\lambda G^{a}F^{2}}[P(D^{\prime}C+B-\lambda F)+D^{\prime}\Lambda+\lambda G^{a}(E+1)F]^{2}\\\ \qquad\qquad+(2A-2\lambda E-\lambda+|C|^{2}+q_{ii})P+2C^{\prime}\Lambda+Q+\lambda G^{a}(E+1)^{2}\Big{]}dt+\Lambda^{\prime}dW,\\\ P_{T}=G^{b},\\\ P>0.\end{cases}$ (3.15) By the proof of Theorem 3.4, the truncation method and Theorem 2.3 in [13], there exists a bounded, maximal solution $(P,\Lambda)$ to (3.15), such that $P\leq c$ for some deterministic constant $c>0$. 
Under Assumption 6, there exists a constant $c_{4}>0$ such that $|2A-2\lambda E-\lambda+C^{2}+q_{ii}-\frac{2}{F}(D^{\prime}C+B-\lambda F)(E+1)|\leq c_{4},$ and $\frac{1}{\lambda G^{a}F^{2}}(D^{\prime}C+B-\lambda F)^{2}\leq c_{4}.$ Hence, for $P>0$ and $\Lambda\in\mathbb{R}^{n}$, we have the following estimate for the generator of (3.15): $\displaystyle g(P,\Lambda):$ $\displaystyle=-\frac{1}{\lambda G^{a}F^{2}}[P(D^{\prime}C+B-\lambda F)+D^{\prime}\Lambda+\lambda G^{a}(E+1)F]^{2}$ $\displaystyle\qquad\qquad+(2A-2\lambda E-\lambda+|C|^{2}+q_{ii})P+2C^{\prime}\Lambda+Q+\lambda G^{a}(E+1)^{2}$ $\displaystyle\geq(2A-2\lambda E-\lambda+|C|^{2}+q_{ii}-\frac{2}{F}(D^{\prime}C+B-\lambda F)(E+1))P-\frac{1}{\lambda G^{a}F^{2}}(D^{\prime}C+B-\lambda F)^{2}P^{2}$ $\displaystyle\qquad\qquad+2C^{\prime}\Lambda-\frac{2}{F}(E+1)D^{\prime}\Lambda-\frac{2}{\lambda G^{a}F^{2}}P(D^{\prime}C+B-\lambda F)D^{\prime}\Lambda-\frac{1}{\lambda G^{a}F^{2}}|D^{\prime}\Lambda|^{2}$ $\displaystyle\geq- c_{4}P-c_{4}P^{2}+2C^{\prime}\Lambda-\frac{2}{F}(E+1)D^{\prime}\Lambda-\frac{2}{\lambda G^{a}F^{2}}P(D^{\prime}C+B-\lambda F)D^{\prime}\Lambda-\frac{1}{\lambda G^{a}F^{2}}|D^{\prime}\Lambda|^{2}.$ Obviously, the following BSDE $\displaystyle\begin{cases}dP=-\Big{[}-c_{4}P-c_{4}P^{2}+2C^{\prime}\Lambda-\frac{2}{F}(E+1)D^{\prime}\Lambda-\frac{2}{\lambda G^{a}F^{2}}(DC+B-\lambda F)PD^{\prime}\Lambda-\frac{1}{\lambda G^{a}F^{2}}|D^{\prime}\Lambda|^{2}\Big{]}dt+\Lambda^{\prime}dW,\\\ P_{T}=\delta\end{cases}$ admits a solution $(\frac{1}{(1+\frac{1}{\delta})e^{c_{4}(T-t)}-1},0)$. Hence the maximal solution argument in Theorem 2.3 of [13] gives $P\geq\frac{1}{(1+\frac{1}{\delta})e^{c_{4}(T-t)}-1}\geq\frac{1}{(1+\frac{1}{\delta})e^{c_{4}T}-1}>0.$ Combining Lemma 3.2, Theorems 3.4, 3.5 and 3.6, we have the following solvability of the system of BSDEs (3.1). ###### Theorem 3.8 Under Assumption 4 (resp. 5, 6), let $(P^{b,i},\Lambda^{b,i})_{i\in\mathcal{M}}$ be the unique nonnegative (resp. uniformly positive, nonnegative) solution of (3.4) and define $\displaystyle P^{i}_{t}:=P^{b,i}_{t}\mathbf{1}_{t<\tau}+G^{a,i}_{\tau}\mathbf{1}_{t\geq\tau},$ $\displaystyle\Lambda^{i}_{t}:=\Lambda^{b,i}_{t}\mathbf{1}_{t\leq\tau},$ $\displaystyle U^{i}_{t}:=(G^{a,i}_{t}-P^{b,i}_{t})\mathbf{1}_{t\leq\tau},\ t\in[0,T],\ i\in\mathcal{M}.$ Then $(P^{i},\Lambda^{i},U^{i})_{i\in\mathcal{M}}$ is a nonnegative (resp. uniformly positive, positive) solution of (3.1). Proof: It is obvious $(P^{i},\Lambda^{i},U^{i})\in S^{\infty}_{\mathbb{G}}(0,T;\mathbb{R})\times L^{2}_{\mathbb{G}}(0,T;\mathbb{R}^{n})\times L^{\infty}_{\mathbb{G}}(0,T;\mathbb{R}))$, for all $i\in\mathcal{M}$. By Lemma 3.2, $(P^{i},\Lambda^{i},U^{i})_{i\in\mathcal{M}}$ satisfies the first equality of (3.1). It remains to show $\mathcal{R}(t,P^{i}_{t-},U^{i}_{t},i)>0$, for $t\leq T\wedge\tau$, $i\in\mathcal{M}$. By the definition of $\mathcal{R}(t,P^{i}_{t-},U^{i}_{t},i)$, and $P^{i}_{t-}+U^{i}_{t}=G^{a,i}_{t}\mathbf{1}_{t\leq\tau}+G^{a,i}_{\tau}\mathbf{1}_{t>\tau}=G^{a,i}_{\tau\wedge t}$, we have $\displaystyle\mathcal{R}(t,P^{i}_{t-},U^{i}_{t},i)$ $\displaystyle=R^{i}_{t}+P^{i}_{t-}(D^{i}_{t})^{\prime}D^{i}_{t}+\lambda^{\mathbb{G}}_{t}(P^{i}_{t-}+U^{i}_{t})(F^{i}_{t})(F^{i}_{t})^{\prime}$ $\displaystyle=R^{i}_{t}+P^{i}_{t-}(D^{i}_{t})^{\prime}D^{i}_{t}+\lambda^{\mathbb{G}}_{t}G^{a,i}_{\tau\wedge t}(F^{i}_{t})(F^{i}_{t})^{\prime}.$ In Standard Case, we have $R^{i}_{t}\geq\delta I_{m}$, $G^{i}_{t}\geq 0$ and $P^{i}_{t}\geq 0$, thus $\mathcal{R}(t,P^{i}_{t-},U^{i}_{t},i)>0,\ i\in\mathcal{M}$. 
In Singular Case I, we have $R^{i}_{t}\geq 0$, $G^{i}\geq\delta$ , $(D^{i}_{t})^{\prime}D^{i}_{t}\geq\delta I_{m}$ and $P^{i}_{t}\geq c>0$, thus $\mathcal{R}(t,P^{i}_{t-},U^{i}_{t},i)>0,\ i\in\mathcal{M}$. In Singular Case II, we have $R^{i}_{t}\geq 0$, $G^{i,a}_{t}\geq\delta$, $\lambda^{\mathbb{G}}_{t}(F^{i}_{t})^{2}=\lambda_{t}(F^{i}_{t})^{2}\geq\delta$ and $P^{i}_{t}\geq 0$ for $t\leq T\wedge\tau$, thus $\mathcal{R}(t,P^{i}_{t-},U^{i}_{t},i)>0$, for $t\leq T\wedge\tau$, $i\in\mathcal{M}$. $\Box$ ###### Remark 3.9 A matrix-valued SRE with Poisson jumps was solved in Zhang, Dong and Meng [24] with uniformly positive control weight (corresponding to our Standard Case), while a scalar-valued SRE with a jump and $m=n=1$, $C=0,\ E=Q=0,\ R=0$ was studied in Kharroubi, Lim and Ngoupeyou [12] (corresponding to our Singular Case I). Up to our knowledge, no existing literature has concerned the solvability of SRE or stochastic LQ problem with jumps corresponding to our Singular Case II. Recall the definition of $\mathcal{R}(t,P,U,i)$ in (3), $\mathcal{R}(t,P,U,i)=R^{i}+P(D^{i})^{\prime}D^{i}+\lambda^{\mathbb{G}}(P+U)F^{i}(F^{i})^{\prime}.$ There are three components $R^{i}$, $P(D^{i})^{\prime}D^{i}$ and $\lambda^{\mathbb{G}}(P+U)F^{i}(F^{i})^{\prime}$ on the right hand. By Theorem 3.8, Assumptions 4, 5 and 6 will result in the uniformly positivity of one of these three components, while nonnegativity of the other two terms, hence the second constraint $\mathcal{R}(t,P,U,i)>0$ in (3.1) is fulfilled. ## 4 Solution to the stochastic LQ problem (2.2) We slightly strengthen Assumption 6 to the following: ###### Assumption 7 (Singular Case II’) $m=1$, $Q^{i}\geq 0$, $R^{i}\geq 0$, and there exists a constant $\delta>0$ such that $G^{i}\geq\delta$ and $\lambda(F^{i})^{2}\geq\delta$, for all $i\in\mathcal{M}$. ###### Theorem 4.1 Under Assumption 4 (resp. 5 and 7), let $(P^{i},\Lambda^{i},U^{i})_{i\in\mathcal{M}}$ be nonnegative (resp. uniformly positive, positive) solution of (3.1). Then the problem (2.2) has an optimal control, as a feedback function of the time $t$, the state $X$ and the market regime $i$, $\displaystyle u^{*}(t,X_{t-},i)$ $\displaystyle=-\mathcal{R}(t,P^{i}_{t-},U^{i}_{t},i)^{-1}\mathcal{N}(t,P^{i}_{t-},\Lambda^{i}_{t},U^{i}_{t},i)X_{t-}.$ (4.1) Moreover, the corresponding optimal value is $\displaystyle V(x,i_{0})=\min_{u\in\mathcal{U}}J(x,i_{0},u)=P_{0}^{i_{0}}x^{2}.$ (4.2) Proof: In light of the length of many equations, $``t,X_{t-},\alpha_{t},P^{\alpha_{t}}_{t-},\Lambda^{\alpha_{t}},U^{\alpha_{t}}"$ will be suppressed where no confusion occurs in the sequel. Substituting (4.1) into the state process (2.1) (with $``i"$ replaced by $``\alpha_{t}"$), we have $\displaystyle\begin{cases}dX_{s}^{*}=\left[A^{\alpha_{s}}_{s}-(B^{\alpha_{s}}_{s})^{\prime}\mathcal{R}^{-1}\mathcal{N}\right]X^{*}_{s-}ds+\left[C^{\alpha_{s}}_{s}-D^{\alpha_{s}}_{s}\mathcal{R}^{-1}\mathcal{N}\right]^{\prime}X^{*}_{s-}dW_{s}\\\ \qquad\qquad\qquad+\left[E^{\alpha_{s}}_{s}-(F^{\alpha_{s}}_{s})^{\prime}\mathcal{R}^{-1}\mathcal{N}\right]X^{*}_{s-}dM_{s},\ s\in[0,T\wedge\tau],\\\ X^{*}_{0}=x,\ \alpha_{0}=i_{0}.\end{cases}$ (4.3) Since the coefficients of SDE (4.3) are square integrable w.r.t. $t$, it admits a unique strong solution $X^{*}$. 
Actually $\displaystyle\begin{cases}X_{t}^{*}=x\Phi_{t},\ t\in[0,T\wedge\tau),\\\ X_{T\wedge\tau}^{*}=x\Phi_{T}\mathbf{1}_{\tau>T}+x\Phi_{\tau}(1+E_{\tau}-F_{\tau}^{\prime}\mathcal{R}^{-1}\mathcal{N})\mathbf{1}_{\tau\leq T},\end{cases}$ is the solution to (4.3), where $\displaystyle\Phi_{t}=\exp\Big{(}\int_{0}^{t}\Big{(}A-\lambda^{\mathbb{G}}E+(\lambda^{\mathbb{G}}F-B)^{\prime}\mathcal{R}^{-1}\mathcal{N}-\frac{1}{2}|C-D\mathcal{R}^{-1}\mathcal{N}|^{2}\Big{)}ds+\int_{0}^{t}(C-D\mathcal{R}^{-1}\mathcal{N})^{\prime}dW_{s}\Big{)}.$ Let $u^{*}_{t}=u^{*}(t,X^{*}_{t-},\alpha_{t})$. Define a sequence of stopping times $\\{\iota_{j}\\}_{j\geq 1}$ as follows: $\displaystyle\iota_{j}:=\inf\Big{\\{}t>0:|X^{*}_{t-}|+|u_{t}^{*}|>j\Big{\\}}\wedge j,$ with the convention that $\inf\emptyset=\infty$. It is obvious that $\iota_{j}\uparrow\infty$, a.s. as $j\uparrow\infty$. Applying Itô’s formula to $P^{\alpha_{t}}_{t}(X^{*}_{t})^{2}$, we have for any stopping time $\iota\leq T\wedge\tau$, $\displaystyle P_{0}^{i_{0}}x^{2}={\mathbb{E}}\Big{[}P^{\alpha_{\iota\wedge\iota_{j}}}_{\iota\wedge\iota_{j}}(X_{\iota\wedge\iota_{j}}^{*})^{2}+\int_{0}^{\iota\wedge\iota_{j}}\Big{(}Q_{s}(X_{s-}^{*})^{2}+(u_{s}^{*})^{\prime}R_{s}u_{s}^{*}\Big{)}ds\Big{]}.$ (4.4) In Standard Case, $Q^{i}\geq 0,\ R^{i}\geq\delta I_{m}$, it follows $\displaystyle\delta{\mathbb{E}}\int_{0}^{T\wedge\tau\wedge\iota_{j}}|u^{*}_{t}|^{2}dt\leq{\mathbb{E}}\int_{0}^{T\wedge\tau\wedge\iota_{j}}(u_{s}^{*})^{\prime}R_{s}u_{s}^{*}dt\leq P_{0}^{i_{0}}x^{2}.$ Sending $j\uparrow\infty$, by the monotone convergence theorem, we have $u^{*}_{t}\in L^{2}_{\mathbb{H}}(0,T\wedge\tau;\mathbb{R}^{m})$. In Singular Case I (resp. Singular Case II’), there exists some constant $c>0$ such that $P_{t}^{i}\geq c$, for all $i\in\mathcal{M}$ by Theorem 3.5 (resp. Theorem 3.6 and Remark 3.7). Then from (4.4), we have $\displaystyle c{\mathbb{E}}\Big{[}|X_{\iota\wedge\iota_{j}}^{*}|^{2}\Big{]}\leq P_{0}^{i_{0}}x^{2}.$ By Fatou’s lemma, we have $\displaystyle{\mathbb{E}}[|X_{\iota}^{*}|^{2}]\leq P_{0}^{i_{0}}x^{2},$ for any stopping time $\iota\leq T\wedge\tau$. This further implies $\displaystyle{\mathbb{E}}\int_{0}^{T\wedge\tau}|X^{*}_{t-}|^{2}dt\leq\int_{0}^{T\wedge\tau}{\mathbb{E}}[|X^{*}_{t-}|^{2}]dt\leq cT.$ (4.5) Applying Itô’s formula to $|X^{*}_{t}|^{2}$ and using the above two inequalities, we have $\displaystyle\qquad x^{2}+{\mathbb{E}}\int_{0}^{T\wedge\tau\wedge\iota_{j}}\Big{[}(u^{*})^{\prime}(D^{\prime}D+\lambda^{\mathbb{G}}FF^{\prime})u^{*}\Big{]}ds$ $\displaystyle={\mathbb{E}}|X_{T\wedge\tau\wedge\iota_{j}}^{*}|^{2}-{\mathbb{E}}\int_{0}^{T\wedge\tau\wedge\iota_{j}}\Big{[}(2A+C^{\prime}C+\lambda^{\mathbb{G}}E^{2})(X_{s-}^{*})^{2}+2(B^{\prime}+C^{\prime}D+\lambda^{\mathbb{G}}EF^{\prime})u^{*}X^{*}_{s-}\Big{]}ds$ $\displaystyle\leq c+{\mathbb{E}}\int_{0}^{T\wedge\tau\wedge\iota_{j}}2c|u^{*}_{s}X^{*}_{s-}|ds$ By Assumption 5 or 7, the elementary inequality $2cab\leq\frac{2c^{2}}{\delta}a^{2}+\frac{\delta}{2}b^{2}$ and (4.5), we have $\displaystyle\delta{\mathbb{E}}\int_{0}^{T\wedge\tau\wedge\iota_{j}}|u^{*}|^{2}ds$ $\displaystyle\leq c+\frac{2c^{2}}{\delta}{\mathbb{E}}\int_{0}^{T\wedge\tau\wedge\iota_{j}}|X_{s-}^{*}|^{2}ds+\frac{\delta}{2}{\mathbb{E}}\int_{0}^{T\wedge\tau\wedge\iota_{j}}|u^{*}|^{2}ds$ $\displaystyle\leq c+\frac{\delta}{2}{\mathbb{E}}\int_{0}^{T\wedge\tau\wedge\iota_{j}}|u^{*}|^{2}ds.$ After rearrangement and sending $j\uparrow\infty$, it follows from the monotone convergent theorem that $u^{*}\in L^{2}_{\mathbb{H}}(0,T\wedge\tau;\mathbb{R}^{m})$. 
The remaining proof is to verify the optimality of $u^{*}$ through completion of square technique, it is very similar to Theorem 5.1 in [24] or Theorem 4.2 in [6], so we omit it. $\Box$ ## 5 Mean-variance hedging problem We consider a financial market consisting of $m+1$ primitive assets: one risk- free asset with zero interest rate and $m$ risky assets (the stocks) whose price processes follow $\displaystyle dS_{t}=\mathrm{diag}(S_{t})\Big{(}\mu_{t}^{\alpha_{t}}dt+\sigma_{t}^{\alpha_{t}}dW_{t}+F_{t}^{\alpha_{t}}dM_{t}\Big{)},\ t\in[0,T\wedge\tau].$ Assume that the appreciation process $\mu^{i}\in L^{\infty}_{\mathbb{F}}(0,T;\mathbb{R}^{m})$, the volatility $\sigma^{i}\in L^{\infty}_{\mathbb{F}}(0,T;\mathbb{R}^{m\times n})$ and $F^{i}\in L_{\mathbb{F}}^{\infty}(0,T;\mathbb{R}^{m})$, $F^{i}_{j}\geq-1$ for all $i\in\mathcal{M}$, $j=1,...,m$. Also there exists a constant $\delta>0$ such that * (i) $\sigma^{i}(\sigma^{i})^{\prime}\geq\delta I_{m}$; or * (ii) $m=1,\ \mbox{and}\ \lambda(F^{i})^{2}\geq\delta$, for all $i\in\mathcal{M}$. For any $x\in{\mathbb{R}}$ and $\pi\in L^{2}_{\mathbb{H}}(0,T\wedge\tau;\mathbb{R}^{m})$, the wealth process $X$ for a self-financing investor with initial capital $x$ and with quantity $\pi$ invested in the risky assets is described by $\displaystyle\begin{cases}dX_{t}=\pi_{t}^{\prime}\mu^{\alpha_{t}}_{t}dt+\pi_{t}^{\prime}\sigma^{\alpha_{t}}_{t}dW_{t}+\pi_{t}^{\prime}F_{t}^{\alpha_{t}}dM_{t},\ t\in[0,T\wedge\tau],\\\ X_{0}=x,\ \alpha_{0}=i_{0}.\end{cases}$ (5.1) Assume that for each $i\in\mathcal{M}$, $H^{i}$ is a bounded $\mathcal{G}_{T\wedge\tau}$-measurable random variable of the form $\displaystyle H^{i}=H^{b,i}\mathbf{1}_{T<\tau}+H^{a,i}_{\tau}\mathbf{1}_{T\geq\tau},$ (5.2) where $\displaystyle H^{b,i}\in L^{\infty}_{\mathcal{F}_{T}}(\Omega),\ H^{a,i}\in L^{\infty}_{\mathbb{F}}(0,T;\mathbb{R}).$ (5.3) Consider the following mean-variance hedging problem $\displaystyle\min_{\pi\in L^{2}_{\mathbb{H}}(0,T\wedge\tau;\mathbb{R}^{m})}{\mathbb{E}}(X^{\pi}_{T\wedge\tau}-H^{\alpha_{T\wedge\tau}})^{2},$ (5.4) where $X^{\pi}$ is the solution to the wealth equation (5.1). Now we apply the general results obtained in the previous section to the mean- variance hedging problem (5.4), where $\displaystyle A=0,\ B=\mu,\ C=0,\ D=\sigma^{\prime},\ u=\pi,\ E=0,\ Q=0,\ R=0,\ G=1.$ ###### Remark 5.1 If $m=n=1$, and there is no regime switching, the problem (5.4) was solved in [12] under the condition (i), namely $\sigma^{2}\geq\delta>0$. 
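To make the setting concrete, the following Euler and Monte Carlo sketch simulates the wealth dynamics (5.1) for a fixed (suboptimal) strategy in a toy market with one stock, two regimes and a default time of constant intensity. All numerical values, and the constant claim $H$, are illustrative assumptions; the sketch is not the construction of the optimal strategy $\pi^{*}$, which requires the BSDE systems below.

```python
# Minimal Euler/Monte Carlo sketch (illustrative only) of the wealth dynamics
# (5.1): one stock (m = n = 1), two market regimes, and a single default time
# tau with constant intensity lam.  All numbers are assumptions made for
# illustration; this is not the paper's optimal strategy pi^*.
import numpy as np

rng = np.random.default_rng(0)
T, N, n_paths = 1.0, 250, 20000
dt = T / N
mu    = np.array([0.08, 0.03])     # appreciation rate per regime
sigma = np.array([0.20, 0.35])     # volatility per regime
F     = np.array([-0.3, -0.5])     # relative stock jump at default (>= -1)
lam   = 0.2                        # default intensity
q01, q10 = 0.5, 0.8                # regime-switching rates

x0, pi = 1.0, 0.5                  # initial wealth, fixed amount in the stock

X     = np.full(n_paths, x0)
alpha = np.zeros(n_paths, dtype=int)
alive = np.ones(n_paths, dtype=bool)           # tau has not occurred yet

for _ in range(N):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    default_now = alive & (rng.random(n_paths) < lam * dt)
    # dM = dN - lam * 1_{t <= tau} dt, with dN = 1 at the default time
    dM = default_now.astype(float) - lam * dt * alive
    X = X + alive * pi * (mu[alpha] * dt + sigma[alpha] * dW + F[alpha] * dM)
    alive &= ~default_now                      # wealth is frozen after default
    switch = rng.random(n_paths) < np.where(alpha == 0, q01, q10) * dt
    alpha = np.where(switch, 1 - alpha, alpha)

H = 1.0                                        # a constant claim, for illustration
print("E[(X_{T^tau} - H)^2] for this fixed pi:", np.mean((X - H) ** 2))
```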
The associated system of BSDEs (3.1) for the problem (5.4) reduces to the following: $\displaystyle\begin{cases}P^{i}_{t}=1+\int_{t\wedge\tau}^{T\wedge\tau}\Big{[}-\mathcal{N}(s,P^{i}_{s-},\Lambda^{i}_{s},U^{i}_{s},i)^{\prime}\mathcal{R}(s,P^{i}_{s-},U^{i}_{s},i)^{-1}\mathcal{N}(s,P^{i}_{s-},\Lambda^{i}_{s},U^{i}_{s},i)+\sum_{j=1}^{\ell}q_{ij}P^{j}_{s-}\Big{]}ds\\\ \qquad\qquad\qquad\qquad-\int_{t\wedge\tau}^{T\wedge\tau}(\Lambda^{i}_{s})^{\prime}dW_{s}-\int_{t\wedge\tau}^{T\wedge\tau}U^{i}_{s}dM_{s},\\\ \mathcal{R}(s,P^{i}_{s-},U^{i}_{s},i)>0,\ \mbox{for $s\leq T\wedge\tau$ and all $i\in\mathcal{M}$},\end{cases}$ (5.5) where $\displaystyle\mathcal{N}(s,P,\Lambda,U,i)$ $\displaystyle=P\mu^{i}_{s}+\sigma^{i}_{s}\Lambda_{s}+\lambda^{\mathbb{G}}_{s}F^{i}_{s}U,$ $\displaystyle\mathcal{R}(s,P,U,i)$ $\displaystyle=P\sigma^{i}_{s}(\sigma^{i}_{s})^{\prime}+\lambda^{\mathbb{G}}_{s}(P+U)F^{i}_{s}(F^{i}_{s})^{\prime}.$ From Theorems 3.5, 3.6 and Remark 3.7, we know that (5.5) admits a uniformly positive solution $(P^{i},\Lambda^{i},U^{i})_{i\in\mathcal{M}}$, such that $(P^{i},\Lambda^{i},U^{i})\in S^{\infty}_{\mathbb{G}}(0,T;\mathbb{R})\times L^{2}_{\mathbb{G}}(0,T;\mathbb{R}^{n})\times L^{\infty}_{\mathbb{G}}(0,T;\mathbb{R}))$ for all $i\in\mathcal{M}$. ###### Remark 5.2 For the solution $(P^{i},\Lambda^{i},U^{i})_{i\in\mathcal{M}}$ of (5.5) constructed in Theorem 3.8, we know $P^{i}_{t-}+U^{i}_{t}\equiv 1$, as the terminal value of (5.5) is identically equal to $1$. But we will keep $P^{i}_{t-}+U^{i}_{t}$ in the sequel as it is the case for general terminal value of (5.5). To construct a solution of the problem (5.4), we still need to consider the following system of linear BSDEs with jumps: $\displaystyle h^{i}_{t}$ $\displaystyle=H^{i}-\int_{t\wedge\tau}^{T\wedge\tau}\frac{1}{P^{i}_{s-}}\Big{[}\mathcal{N}(s,P_{s-},\Lambda_{s},U_{s},i)^{\prime}\mathcal{R}(s,P_{s-},U_{s},i)^{-1}\big{(}P^{i}_{s-}\sigma^{i}_{s}\eta^{i}_{s}+\lambda^{\mathbb{G}}_{s}(P^{i}_{s-}+U^{i}_{s})F^{i}_{s}\gamma^{i}_{s}\big{)}$ $\displaystyle\qquad\qquad\qquad-(\Lambda^{i}_{s})^{\prime}\eta^{i}_{s}-\lambda^{\mathbb{G}}_{s}U^{i}_{s}\gamma^{i}_{s}-\sum_{j=1}^{\ell}q_{ij}P^{j}_{s-}(h^{j}_{s-}-h^{i}_{s-})\Big{]}ds$ $\displaystyle\qquad\qquad-\int_{t\wedge\tau}^{T\wedge\tau}(\eta^{i}_{s})^{\prime}dW_{s}-\int_{t\wedge\tau}^{T\wedge\tau}\gamma^{i}_{s}dM_{s},\ \mbox{ for all $i\in\mathcal{M}$}.$ (5.6) Please note that the coefficients in (5) are unbounded as so are $\Lambda^{i}$, $i\in\mathcal{M}$. 
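Before turning to the solvability of (5), it may help to see the structure of (5.5) in the simplest possible setting. If the market coefficients are deterministic constants, the martingale parts vanish and the pre-default equation behind (5.5), namely (3.4) with $A=C=E=Q=R=0$, $B=\mu$, $D=\sigma^{\prime}$ and $G^{a}=G^{b}=1$, reduces to a system of coupled Riccati ODEs that can be integrated backwards in time. The sketch below uses illustrative parameter values and assumes SciPy is available; in the further degenerate case of one regime and no jump it recovers the classical value $P_{0}=e^{-(\mu/\sigma)^{2}T}$.

```python
# Minimal sketch (illustrative assumptions only): with constant deterministic
# market coefficients the martingale terms vanish, and the pre-default
# equation behind (5.5) -- (3.4) with A = C = E = Q = R = 0, B = mu,
# D = sigma, G^a = G^b = 1 -- becomes a system of coupled Riccati ODEs
# integrated backwards from P(T) = 1.
import numpy as np
from scipy.integrate import solve_ivp

T, lam = 1.0, 0.2
mu    = np.array([0.08, 0.03])
sigma = np.array([0.20, 0.35])
F     = np.array([-0.3, -0.5])
q     = np.array([[-0.5, 0.5], [0.8, -0.8]])   # Markov-chain generator

def rhs(t, P):
    N_hat = P * (mu - lam * F) + lam * F       # \hat{N} with Lambda = 0
    R_hat = lam * F ** 2 + P * sigma ** 2      # \hat{R}
    gen = -lam * P + lam + q @ P - N_hat ** 2 / R_hat
    return -gen                                # dP/dt = -(generator)

sol = solve_ivp(rhs, (T, 0.0), np.ones(2), rtol=1e-8)
print("P^{b,i}(0) per regime:", sol.y[:, -1])
# In the one-regime, no-jump limit this reduces to P_0 = exp(-(mu/sigma)^2 T).
```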
###### Theorem 5.3 The system of BSDEs (5) admits a solution $(h^{i},\eta^{i},\gamma^{i})_{i\in\mathcal{M}}$ such that $\displaystyle(h^{i},\eta^{i},\gamma^{i})\in S^{\infty}_{\mathbb{G}}(0,T;\mathbb{R})\times L^{2}_{\mathbb{G}}(0,T;\mathbb{R}^{n})\times L^{\infty}_{\mathbb{G}}(0,T;\mathbb{R}),\ \mbox{for all}\ i\in\mathcal{M}.$ Proof: Consider the following system of BSDEs without jumps: $\displaystyle\begin{cases}dK^{b,i}=\Big{[}\Big{(}P^{b,i}\mu^{i}+\sigma^{i}\Lambda^{b,i}+\lambda F^{i}(1-P^{b,i})\Big{)}^{\prime}\Big{(}P^{b,i}\sigma^{i}(\sigma^{i})^{\prime}+\lambda G^{a,i}F^{i}(F^{i})^{\prime}\Big{)}^{-1}\\\ \qquad\qquad\times\Big{(}K^{b,i}\mu^{i}+\sigma^{i}L^{b,i}+\lambda F^{i}(H^{a,i}-K^{b,i})\Big{)}-\lambda(H^{a,i}-K^{b,i})-\sum\limits_{j=1}^{\ell}q_{ij}K^{b,j}\Big{]}dt+(L^{b,i})^{\prime}dW,\\\ K^{i}_{T}=H^{b,i},\ \mbox{for all}\ i\in\mathcal{M}.\end{cases}$ (5.7) By Theorem 3.6 of [7], the system (5.7) admits a unique solution $(K^{b,i},L^{b,i})_{i\in\mathcal{M}}$ such that $(K^{b,i},L^{b,i})\in L^{\infty}_{\mathbb{F}}(0,T;\mathbb{R})\times L^{2}_{\mathbb{F}}(0,T;\mathbb{R}^{n}),\ \mbox{for all}\ i\in\mathcal{M}.$ Define $\displaystyle K^{i}_{t}:=K^{b,i}_{t}\mathbf{1}_{t<\tau}+H^{a,i}_{\tau}\mathbf{1}_{t\geq\tau},$ $\displaystyle L^{i}_{t}:=L^{b,i}_{t}\mathbf{1}_{t\leq\tau},$ $\displaystyle\zeta^{i}_{t}:=(H^{a,i}_{t}-K^{b,i}_{t})\mathbf{1}_{t\leq\tau},\ t\in[0,T],\ i\in\mathcal{M}.$ By Lemma 3.2, $(K^{i},L^{i},\zeta^{i})\in S^{\infty}_{\mathbb{G}}(0,T;\mathbb{R})\times L^{2}_{\mathbb{G}}(0,T;\mathbb{R}^{n})\times L^{\infty}_{\mathbb{G}}(0,T;\mathbb{R})$ is a solution of the following system of BSDEs with jumps: $\displaystyle K^{i}_{t}$ $\displaystyle=H^{i}-\int_{t\wedge\tau}^{T\wedge\tau}\Big{[}\mathcal{N}(s,P_{s-},\Lambda_{s},U_{s},i)^{\prime}\mathcal{R}(s,P_{s-},U_{s},i)^{-1}\big{(}K^{i}_{s-}\mu^{i}_{s}+\sigma^{i}_{s}L^{i}_{s}+\lambda^{\mathbb{G}}_{s}F^{i}_{s}\zeta^{i}_{s}\big{)}$ $\displaystyle\qquad\qquad\qquad\qquad-\sum_{j=1}^{\ell}q_{ij}K^{j}_{s-}\Big{]}ds-\int_{t\wedge\tau}^{T\wedge\tau}(L^{i}_{s})^{\prime}dW_{s}-\int_{t\wedge\tau}^{T\wedge\tau}\zeta^{i}_{s}dM_{s},\ \mbox{for all}\ i\in\mathcal{M}.$ Recall that $P^{i}_{t-}+U^{i}_{t}=1$, so we can define $h^{i}_{t}=\frac{K^{i}_{t}}{P^{i}_{t}},\ \eta^{i}_{t}=\frac{L^{i}_{t}}{P^{i}_{t-}}-\frac{h^{i}_{t-}\Lambda^{i}_{t}}{P^{i}_{t-}},\ \gamma^{i}_{t}=\frac{\zeta^{i}_{t}-h^{i}_{t-}U^{i}_{t}}{P^{i}_{t-}+U^{i}_{t}}$, $t\in[0,T]$. Applying Itô’s formula to $\frac{K^{i}}{P^{i}}$, it is easy to show that $(h^{i},\eta^{i},\gamma^{i})_{i\in\mathcal{M}}$ is a solution of (5). 
$\Box$ ###### Theorem 5.4 Let $(P^{i},\Lambda^{i},U^{i})_{i\in\mathcal{M}}$ be the solution of (5.5) constructed in Theorem 3.8 and $(h^{i},\eta^{i},\gamma^{i})_{i\in\mathcal{M}}$ be the solution of (5) in Theorem 5.3, then the mean-variance hedging problem (5.4) has an optimal feedback control $\displaystyle\pi^{*}(t,X_{t-},i)=-\mathcal{R}(t,P^{i}_{t-},U^{i}_{t},i)^{-1}\Big{[}\mathcal{N}(t,P^{i}_{t-},\Lambda^{i}_{t},U^{i}_{t},i)(X_{t-}-h_{t-})-P^{i}_{t-}\sigma^{i}_{t}\eta^{i}_{t}-\lambda^{\mathbb{G}}_{t}\gamma^{i}_{t}(P^{i}_{t-}+U^{i}_{t})F^{i}_{t}\Big{]}.$ (5.8) Moreover, the corresponding optimal value is $\displaystyle\min_{\pi\in L^{2}_{\mathbb{H}}(0,T\wedge\tau;\mathbb{R}^{m})}{\mathbb{E}}(X^{\pi}_{T\wedge\tau}-H^{\alpha_{T\wedge\tau}})^{2}$ $\displaystyle=P^{i_{0}}_{0}(x-h^{i_{0}}_{0})^{2}+{\mathbb{E}}\int_{0}^{T\wedge\tau}\sum_{j=1}^{\ell}q_{\alpha_{t}j}P^{j}_{t-}(h^{j}_{t-}-h^{\alpha_{t-}}_{t-})^{2}dt$ $\displaystyle\;\quad+{\mathbb{E}}\int_{0}^{T\wedge\tau}O(t,\alpha_{t})dt,$ (5.9) where $\displaystyle O(t,i)$ $\displaystyle=P^{i}_{t-}(\eta^{i}_{t})^{\prime}\eta^{i}_{t}+(\gamma^{i}_{t})^{2}\lambda^{\mathbb{G}}_{t}(P^{i}_{t-}+U^{i}_{t})-(P^{i}_{t-}\sigma^{i}_{t}\eta^{i}_{t}+\lambda^{\mathbb{G}}_{t}\gamma^{i}_{t}(P^{i}_{t-}+U^{i}_{t})F^{i}_{t})^{\prime}$ $\displaystyle\qquad\times(P^{i}_{t-}\sigma^{i}_{t}(\sigma^{i}_{t})^{\prime}+\lambda^{\mathbb{G}}_{t}(P^{i}_{t-}+U^{i}_{t})F^{i}_{t}(F^{i}_{t})^{\prime})^{-1}(P^{i}_{t-}\sigma^{i}_{t}\eta^{i}_{t}+\lambda^{\mathbb{G}}_{t}\gamma^{i}_{t}(P^{i}_{t-}+U^{i}_{t})F^{i}_{t})\geq 0.$ Proof: One can get (5.8) and(5.4) by applying Itô’s formula to $P^{\alpha_{t}}_{t}(X_{t}-h^{\alpha_{t}}_{t})^{2}$ with some tedious calculation. We shall show $O(t,i)\geq 0$. In light of the length of many equations, $``t,t-,i"$ may be suppressed. From the proof of Theorem 3.8, $P^{i}_{t}\geq c>0,\ P^{i}_{t-}+U^{i}_{t}=1$. If $P\eta^{\prime}\eta+\gamma^{2}\lambda^{\mathbb{G}}(P+U)=0$, then $\eta=0$ and $\gamma^{2}\lambda^{\mathbb{G}}=0$, hence $O(t,i)=0$. Otherwise, $P\eta^{\prime}\eta+\gamma^{2}\lambda^{\mathbb{G}}(P+U)>0$. 
Then by Lemma A.1, we have $\displaystyle\qquad\Big{[}P\eta^{\prime}\eta+\gamma^{2}\lambda^{\mathbb{G}}(P+U)-(P\sigma\eta+\lambda^{\mathbb{G}}\gamma(P+U)F)^{\prime}(P\sigma\sigma^{\prime}+\lambda^{\mathbb{G}}(P+U)FF^{\prime})^{-1}(P\sigma\eta+\lambda^{\mathbb{G}}\gamma(P+U)F)\Big{]}$ $\displaystyle\qquad\qquad\times\mathrm{det}(P\sigma\sigma^{\prime}+\lambda^{\mathbb{G}}(P+U)FF^{\prime})$ $\displaystyle=(P\eta^{\prime}\eta+\gamma^{2}\lambda^{\mathbb{G}}(P+U))$ $\displaystyle\qquad\times\mathrm{det}\Big{(}P\sigma\sigma^{\prime}+\lambda^{\mathbb{G}}(P+U)FF^{\prime}-\frac{1}{P\eta^{\prime}\eta+\gamma^{2}\lambda^{\mathbb{G}}(P+U)}(P\sigma\eta+\lambda^{\mathbb{G}}\gamma(P+U)F)(P\sigma\eta+\lambda^{\mathbb{G}}\gamma(P+U)F)^{\prime}\Big{)}.$ As $P\sigma\sigma^{\prime}+\lambda^{\mathbb{G}}(P+U)FF^{\prime}>0$ and $P\eta^{\prime}\eta+\gamma^{2}\lambda^{\mathbb{G}}(P+U)>0$, it suffices to show $\displaystyle(P\eta^{\prime}\eta+\gamma^{2}\lambda^{\mathbb{G}}(P+U))(P\sigma\sigma^{\prime}+\lambda^{\mathbb{G}}(P+U)FF^{\prime})-(P\sigma\eta+\lambda^{\mathbb{G}}\gamma(P+U)F)(P\sigma\eta+\lambda^{\mathbb{G}}\gamma(P+U)F)^{\prime}\geq 0.$ Actually for any $x\in\mathbb{R}^{n}$, $\displaystyle\qquad(P\eta^{\prime}\eta+\gamma^{2}\lambda^{\mathbb{G}}(P+U))x^{\prime}(P\sigma\sigma^{\prime}+\lambda^{\mathbb{G}}(P+U)FF^{\prime})x$ $\displaystyle\qquad\qquad-x^{\prime}(P\sigma\eta+\lambda^{\mathbb{G}}\gamma(P+U)F)(P\sigma\eta+\lambda^{\mathbb{G}}\gamma(P+U)F)^{\prime}x$ $\displaystyle=P^{2}x^{\prime}\sigma(\eta^{\prime}\eta I_{n}-\eta\eta^{\prime})\sigma^{\prime}x+P\lambda^{\mathbb{G}}(P+U)x^{\prime}(\gamma\sigma-F\eta^{\prime})(\gamma\sigma^{\prime}-\eta F^{\prime})x\geq 0,$ where we used the fact that $P\lambda^{\mathbb{G}}(P+U)\geq 0$ to get the last inequality. This completes the proof. $\Box$ ###### Remark 5.5 The existence of solutions to (3.1), (5.5) and (5) is proved by construction method (Theorems 3.8 and 5.3), unfortunately we cannot prove the uniqueness so far. We hope to address the uniqueness of solutions to (3.1) and (5) in our future research. ## Appendix A APPENDIX ###### Lemma A.1 If $a>0$, $e\in\mathbb{R}^{m}$, $A\in\mathbb{S}^{m}$ and $A>0$, then $(a-e^{\prime}A^{-1}e)\;\mathrm{det}(A)=a\;\mathrm{det}(A-\frac{1}{a}ee^{\prime}).$ Proof: By some calculation, we have $\displaystyle\begin{bmatrix}1&-e^{\prime}A^{-1}\\\ 0&I_{m}\end{bmatrix}\begin{bmatrix}a&e^{\prime}\\\ e&A\end{bmatrix}\begin{bmatrix}1&0\\\ -A^{-1}e&I_{m}\end{bmatrix}=\begin{bmatrix}a-e^{\prime}A^{-1}e&0\\\ 0&A\end{bmatrix},$ and $\displaystyle\begin{bmatrix}1&0\\\ -\frac{1}{a}e&I_{m}\end{bmatrix}\begin{bmatrix}a&e^{\prime}\\\ e&A\end{bmatrix}\begin{bmatrix}1&-\frac{1}{a}e^{\prime}\\\ 0&I_{m}\end{bmatrix}=\begin{bmatrix}a&0\\\ 0&A-\frac{1}{a}ee^{\prime}\end{bmatrix}.$ Taking determinant on both sides of the two equations, the result follows. $\Box$ ## References * [1] Aksamit A, Jeanblanc M. Enlargement of filtration with finance in view. Switzerland: Springer, 2017. * [2] Bielecki T, Rutkowski M. Credit Risk: Modeling, Valuation and Hedging. Springer Finance, 2002. * [3] Bismut M. Linear quadratic optimal stochastic control with random coefficients. SIAM J. Control Optim. 1976, 14(3):419-444. * [4] Chen S, Li X, and Zhou X. Stochastic linear quadratic regulators with indefinite control weight costs. SIAM J. Control Optim. 1998, 36(5):1685-1702. * [5] Hu Y, Liang G, Tang S. Systems of infinite horizon and ergodic BSDE arising in regime switching forward performance processes. SIAM J. Control optim., 2020, 58(4):2503-2534. 
* [6] Hu Y, Shi X, Xu Z. Constrained stochastic LQ control with regime switching and application to portfolio selection. arXiv:2004.11832. To appear in Ann. Appl. Probab.
* [7] Hu Y, Shi X, Xu Z. Mean-variance asset-liability management with regime switching. arXiv:2201.01433.
* [8] Hu Y, Zhou X. Indefinite stochastic Riccati equations. SIAM J. Control Optim. 2003, 42(1): 123-137.
* [9] Hu Y, Zhou X. Constrained stochastic LQ control with random coefficients, and application to portfolio selection. SIAM J. Control Optim. 2005, 44(2): 444-466.
* [10] Jeanblanc M, Mastrolia T, Possamai D, Reveillac A. Utility maximization with random horizon: a BSDE approach. International Journal of Theoretical and Applied Finance, 2015, 18(7): 43 pages.
* [11] Kharroubi I, Lim T. Progressive enlargement of filtrations and backward stochastic differential equations with jumps. J. Theoret. Probab., 2014, 27(3): 683-724.
* [12] Kharroubi I, Lim T, Ngoupeyou A. Mean-variance hedging on uncertain time horizon in a market with a jump. Appl. Math. Optim. 2013, 68(3): 413-444.
* [13] Kobylanski M. Backward stochastic differential equations and partial differential equations with quadratic growth. The Annals of Probability, 2000, 28(2): 558-602.
* [14] Kohlmann M, Tang S. Global adapted solution of one-dimensional backward stochastic Riccati equations, with application to the mean-variance hedging. Stochastic Process. Appl. 2002, 97: 255-288.
* [15] Kohlmann M, Zhou X. Relationship between backward stochastic differential equations and stochastic controls: a linear-quadratic approach. SIAM J. Control Optim. 2000, 38(5): 1392-1407.
* [16] Li X, Zhou X, Lim A. Dynamic mean-variance portfolio selection with no-shorting constraints. SIAM J. Control Optim. 2002, 40(5): 1540-1555.
* [17] Lim A. Mean-variance hedging when there are jumps. SIAM J. Control Optim. 2005, 44(5): 1893-1922.
* [18] Lim A, Zhou X. Mean-variance portfolio selection with random parameters in a complete market. Math. Oper. Res. 2002, 27(1): 101-120.
* [19] Lv S, Wu Z, Yu Z. Continuous-time mean-variance portfolio selection with random horizon in an incomplete market. Automatica. 2016, 69: 176-180.
* [20] Pham H. Stochastic control under progressive enlargement of filtrations and applications to multiple defaults risk management. Stochastic Process. Appl. 2010, 120: 1795-1820.
* [21] Tang S. General linear quadratic optimal stochastic control problems with random coefficients: linear stochastic Hamilton systems and backward stochastic Riccati equations. SIAM J. Control Optim. 2003, 42(1): 53-75.
* [22] Wonham W. On a matrix Riccati equation of stochastic control. SIAM J. Control. 1968, 6(4): 681-697.
* [23] Yu Z. Continuous-time mean-variance portfolio selection with random horizon. Appl. Math. Optim. 2013, 68: 333-359.
* [24] Zhang F, Dong Y, Meng Q. Backward stochastic Riccati equation with jumps associated with stochastic linear quadratic optimal control with jumps and random coefficients. SIAM J. Control Optim. 2020, 58(1): 393-424.
* [25] Zhou X, Li D. Continuous-time mean-variance portfolio selection: A stochastic LQ framework. Appl. Math. Optim. 2000, 42(1): 19-33.
* [26] Zhou X, Yin G. Markowitz’s mean-variance portfolio selection with regime switching: A continuous-time model. SIAM J. Control Optim. 2003, 42(4): 1466-1482.
# The Galactic cosmic ray intensity at the evolving Earth and young exoplanets D. Rodgers-Lee1, A. A. Vidotto1, A. M. Taylor2, P. B. Rimmer3,4,5, T. P. Downes6 1 School of Physics, Trinity College Dublin, University of Dublin, College Green, Dublin 2, D02 PN40, Ireland 2 DESY, D-15738 Zeuthen, Germany 3 Department of Earth Sciences, University of Cambridge, Downing St, Cambridge CB2 3EQ, United Kingdom 4 Astrophysics Group Cavendish Laboratory, JJ Thomson Ave, Cambridge CB3 0HE, United Kingdom 5 MRC Laboratory of Molecular Biology, Francis Crick Ave, Cambridge CB2 0QH, United Kingdom 6 Centre for Astrophysics & Relativity, School of Mathematical Sciences, Dublin City University, Glasnevin, D09 W6Y4, Ireland E-mail<EMAIL_ADDRESS> (Accepted 2020 September 3. Received 2020 August 24; in original form 2020 March 16) ###### Abstract Cosmic rays may have contributed to the start of life on Earth. Here, we investigate the evolution of the Galactic cosmic ray spectrum at Earth from ages $t=0.6-6.0$ Gyr. We use a 1D cosmic ray transport model and a 1.5D stellar wind model to derive the evolving wind properties of a solar-type star. At $t=1\,$Gyr, approximately when life is thought to have begun on Earth, we find that the intensity of $\sim$GeV Galactic cosmic rays would have been $\sim 10$ times smaller than the present-day value. At lower kinetic energies, Galactic cosmic ray modulation would have been even more severe. More generally, we find that the differential intensity of low energy Galactic cosmic rays decreases at younger ages and is well described by a broken power- law in solar rotation rate. We provide an analytic formula of our Galactic cosmic ray spectra at Earth’s orbit for different ages. Our model is also applicable to other solar-type stars with exoplanets orbiting at different radii. Specifically, we use our Galactic cosmic ray spectrum at 20 au for $t=600\,$Myr to estimate the penetration of cosmic rays in the atmosphere of HR 2562b, a directly imaged exoplanet orbiting a young solar-type star. We find that the majority of particles $<0.1$GeV are attenuated at pressures $\gtrsim 10^{-5}$ bar and thus do not reach altitudes below $\sim 100$ km. Observationally constraining the Galactic cosmic ray spectrum in the atmosphere of a warm Jupiter would in turn help constrain the flux of cosmic rays reaching young Earth-like exoplanets. ###### keywords: diffusion – (ISM:) cosmic rays – methods: numerical – Sun: evolution – stars: winds, outflows – planetary systems ††pagerange: The Galactic cosmic ray intensity at the evolving Earth and young exoplanets–The Galactic cosmic ray intensity at the evolving Earth and young exoplanets††pubyear: xxxx ## 1 Introduction Galactic cosmic rays have been considered as a source of ionisation for exoplanetary atmospheres (Rimmer & Helling, 2013). Depending on the orbital distance of an exoplanet from its host star it may be possible to disentangle the chemical signature of Galactic cosmic rays from other sources such as stellar radiation and stellar energetic particles. Ionisation by energetic particles, including both Galactic and stellar cosmic rays, is of great interest not only for the chemistry in exoplanetary atmospheres but also at even earlier stages when the protoplanetary disc is still present (Cleeves et al., 2013; Cleeves et al., 2015; Rab et al., 2017; Rodgers-Lee et al., 2017, 2020) and for star formation in general (see Padovani et al., 2020, for a recent review). 
In terms of the solar system, it is of interest to determine the intensity of Galactic cosmic rays incident on Earth at the time when life is thought to have begun (Mojzsis et al., 1996). Galactic cosmic rays influence and contribute to atmospheric electrical circuits (Rycroft & Harrison, 2012, in the case of the Earth), cloud cover (Svensmark et al., 2017) and biological mutation rates (see discussion in Grießmeier et al., 2005, for instance). Here, we focus on the interaction of Galactic cosmic rays with the stellar winds from solar-type stars specifically, and note that the effect of Galactic cosmic rays on close-in super-Earth exoplanets around M dwarf stars has also been considered (Grießmeier et al., 2005; Grießmeier et al., 2009; Grießmeier et al., 2015). We also investigate the Galactic cosmic ray spectrum impinging on exoplanets orbiting young solar-type stars at orbital distances different from that of the Earth.

The properties of the Sun and its stellar wind are thought to have varied over the lifetime of the Sun. This evolution is inferred from observations of other solar-type stars of different ages, since their evolution is thought to be similar. Young solar-type stars typically display much stronger magnetic fields (Vidotto et al., 2014; Folsom et al., 2016; Rosén et al., 2016) and higher X-ray luminosities (Wright et al., 2011; Tu et al., 2015), as well as faster rotation rates (Gallet & Bouvier, 2013), which are thought to result in higher mass-loss rates via stellar winds (Vidotto & Donati, 2017; Ó Fionnagáin et al., 2019). Thus, since the properties of the solar wind change with time, the interaction of Galactic cosmic rays with the solar wind will also vary with time. In this paper we investigate how the solar modulation of Galactic cosmic rays varies as a function of the Sun’s life from $0.6-6.0$ Gyr. This evolution of Galactic cosmic ray modulation should also be applicable to other solar-type stars of a similar mass.

Voyager 1 and 2 measurements have provided us with valuable information about the local interstellar spectrum (LIS) of Galactic cosmic rays outside of the heliosphere (Stone et al., 2013; Cummings et al., 2016; Stone et al., 2019), which is thought to be unaffected by the solar wind. How Galactic cosmic rays then propagate through the magnetised solar wind can be characterised, to first order, as a competitive process between the spatial diffusion of Galactic cosmic rays into the solar system, spatial advection of Galactic cosmic rays out of the system and adiabatic losses of Galactic cosmic rays as they do work against the solar wind (Parker, 1965). The suppression of the LIS of Galactic cosmic rays as they travel through the solar wind to Earth is known as the modulation of Galactic cosmic rays.

The present-day solar modulation of Galactic cosmic rays arriving at Earth has been extensively studied (Parker, 1965; Jokipii, 1971; Potgieter, 2013; Vos & Potgieter, 2015). Given that the solar wind has evolved during the Sun’s main-sequence lifetime, the flux of Galactic cosmic rays arriving at Earth is expected to have changed throughout the Sun’s life (Svensmark, 2006; Cohen et al., 2012). More specifically, Svensmark (2006) used relationships between the solar rotation rate and the magnetic field strength and velocity of the solar wind to estimate these quantities at different times during the Sun’s life. Cohen et al.
(2012) find that during the Archean eon (approximately the period when life is thought to have started on Earth) the Earth would have experienced a greatly reduced intensity of Galactic cosmic rays. Our approach is similar to that of Svensmark (2006), which uses a 1D transport equation for the Galactic cosmic rays. We build upon this work by using updated observationally derived relationships between the solar rotation rate and the magnetic field strength and velocity of the solar wind. We focus on a number of radii which are relevant for specific exoplanetary systems around solar-type stars. We discuss the differences between our results and previous findings in Section 5.

In this paper we also focus on the conditions in the early solar system to determine the effect that the Sun being a slow or fast rotator would have. In addition, we estimate the flux of Galactic cosmic rays as a function of radius, focusing on radii of particular interest where the signatures of Galactic cosmic rays in an exoplanetary atmosphere may dominate over other sources of ionisation from a solar-type star (i.e. photoionisation and stellar energetic particles). Note that the results presented in Section 3 mainly discuss the evolution of the GCR spectrum at Earth due to the evolution of the solar wind over the Sun’s life. However, the evolution of the GCR spectrum should be similar for other solar-type stars. Thus, in Section 4 we focus on a young solar-type star with a warm Jupiter exoplanet, HR 2562b (Konopacky et al., 2016), orbiting at 20 au. Assuming an unmagnetised exoplanet, we calculate the energy losses of the cosmic rays as they propagate through the upper atmosphere of the exoplanet.

The paper is structured as follows: in Section 2 we describe the stellar wind model and cosmic ray transport model that we use. We present our results in Sections 3 and 4. We discuss our results in comparison to other results in the literature in Section 5. Finally, we present our conclusions in Section 6.

## 2 Formulation

To model the propagation of Galactic cosmic rays from the interstellar medium (ISM) into the solar system (or into a solar-type star system) we solve the 1D transport equation for the cosmic rays, assuming spherical symmetry, given by

$\frac{\partial f}{\partial t}=\bm{\nabla}\cdot(\kappa\bm{\nabla}f)-\nabla\cdot(vf)+\frac{1}{3}(\nabla\cdot v)\frac{\partial f}{\partial\mathrm{ln}p}$ (1)

where $f(r,p,t)$ is the cosmic ray phase space density, $\kappa(r,p)$ is the spatial diffusion coefficient, $v(r)$ is the radial velocity of the stellar wind and $p$ is the momentum of the particles, which are taken to be protons. The first term on the right-hand side of Eq. 1 represents the spatial diffusion of cosmic rays through the stellar wind, which depends on the level of turbulence and the strength of the magnetic field (described in more detail in Sections 2.2 and 2.3). The second term represents spatial advection, which acts to suppress the flux of cosmic rays as they travel into the stellar system. The last term represents momentum advection, which pushes the cosmic rays to lower energies as they do work against the magnetised stellar wind to enter the stellar system. Fig. 1 shows a schematic of the Galactic cosmic rays diffusing into a stellar system from outside the astrosphere.
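The competition between the terms of Eq. 1 can be illustrated with a rough order-of-magnitude estimate: comparing the diffusive timescale $\sim r^{2}/\kappa$ with the advective timescale $\sim r/v$ at Earth's orbit shows that diffusion wins for $\sim$GeV particles in the present-day wind, while advection (and the associated adiabatic losses) becomes increasingly important at lower energies. The sketch below is illustrative only; it assumes the present-day 1 au wind speed and magnetic field strength listed later in Table 1, and the simple form of the diffusion coefficient introduced in Section 2.2 ($\eta_{0}=1$, $\gamma=1$).

```python
# Rough order-of-magnitude sketch (not from the paper): compare the diffusive
# and advective timescales of Eq. 1 at r = 1 au for the present-day wind,
# using the Table 1 values and kappa = beta*c*r_L (eta_0 = 1, gamma = 1).
import numpy as np

au    = 1.496e11          # m
c     = 2.998e8           # m/s
e     = 1.602e-19         # C
mp_c2 = 0.938             # proton rest mass energy [GeV]
GeV_over_c = 5.344e-19    # 1 GeV/c in kg m/s

B_1au = 3.8e-5 * 1e-4     # present-day |B| at 1 au [T] (Table 1, 3.8e-5 G)
v_1au = 450e3             # present-day wind speed at 1 au [m/s] (Table 1)

for T_kin in [0.01, 0.1, 1.0, 10.0]:             # kinetic energy [GeV]
    E_tot = T_kin + mp_c2
    p = np.sqrt(E_tot**2 - mp_c2**2)             # momentum [GeV/c]
    beta = p / E_tot
    r_L = p * GeV_over_c / (e * B_1au)           # Larmor radius [m]
    kappa = beta * c * r_L                       # Eq. 3 with eta_0 = 1, gamma = 1
    t_diff = au**2 / kappa / 86400.0             # days
    t_adv  = au / v_1au / 86400.0                # days
    print(f"T = {T_kin:5.2f} GeV: t_diff = {t_diff:7.2f} d, t_adv = {t_adv:5.2f} d")
```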
The velocity profile of the stellar wind is derived from the stellar wind model described in Section 2.3. We focus on the steady-state solution of Eq. 1 which is a reasonable approximation for solar minimum conditions. The fact that we study the steady-state solution of Eq. 1 and also assume azimuthal symmetry means that any short-term modulation effects, shorter than the rotation period of the star, are neglected (see discussion in Potgieter, 2013). We also do not include any drift motions of the cosmic rays (Jokipii et al., 1977) in Eq. 1. This implies that the known temporal variation of Galactic cosmic ray modulation due to the solar cycle cannot be studied here. The drift motion of the cosmic rays also results in latitudinal variations which we do not consider here. Thus, in the future a more complete study of these effects could be studied using a 2D cosmic ray transport code. These effects should be kept in mind when examining our results and that we are implicitly always investigating solar (or stellar) minimum conditions for these stars. It is also important to note that we also do not consider in our model the effect of the termination shock in the stellar wind and the stellar equivalent of the heliosheath. For the solar system, at $10-100$ MeV energies $\gtrsim 80\%$ of the modulation of Galactic cosmic rays occurs in the heliosheath (see Potgieter, 2013, for instance). At the same time, the termination shock can reaccelerate GeV Galactic cosmic rays depending on the magnetic polarity cycle of the Sun. To ascertain how the size and structure of the heliosheath evolves with stellar rotation rate 3D magnetohydrodynamic simulations would be required. Eq. 1 is numerically advanced using a forward in time, second order centred in space differencing scheme for the diffusion term and a first order in space upwinding scheme for the advection terms. The numerical scheme used is overall first order in time. The code that we use is an adapted version of the code presented in Rodgers-Lee et al. (2017, 2020) which now includes momentum advection and a different scheme for the advection terms. A full description of the numerical scheme is given in Appendix A including the implementation of the boundary conditions which is described in Appendix A.2. We validate our code by showing that it reproduces well observations of Galactic cosmic rays measured at Earth which is presented in Appendix A.4. A numerical convergence test for the scheme is given in Appendix A.5. For the boundary conditions, the spatial inner boundary condition is reflective. We use a fixed spatial outer boundary condition with the boundary cell taken to be the LIS value, described in Section 2.1. The momentum inner and outer boundary conditions are outflow. Figure 1: Here we show a schematic of a solar-type stellar system to illustrate Galactic cosmic rays diffusing into a stellar system from outside the astrosphere and eventually arriving at the location of planets. The axisymmetric shape of the astrosphere due to the motion of the star through the ISM is not incorporated in our stellar wind or Galactic cosmic ray model. Table 1: List of parameters for the simulations. The columns are, respectively: the age ($t$) of the Sun, its rotation rate ($\Omega$) in terms of the present-day value ($\Omega_{\odot}=2.67\times 10^{-6}\,\mathrm{rad\,s^{-1}}$), its rotation period ($P_{\mathrm{rot}}$), the heliospheric radius ($R_{\mathrm{h}}$, Eq. 
9), the radial velocity ($v_{1\mathrm{au}}$) and the magnitude of the total magnetic field ($|B_{1\mathrm{au}}|$) at $r=1$ au. $|B_{*}|$ and $T_{*}$ are the magnitude of the total magnetic field and the temperature at the base of the wind ($r_{\odot}$) and $\dot{M}$ is the mass-loss rate. The final column is the potential ($\phi$, from the modified force field approximation, Eq. 10) that we find for each of the simulations.

$t$ | $\Omega$ | $P_{\mathrm{rot}}$ | $R_{\mathrm{h}}$ | $v_{1\mathrm{au}}$ | $|B_{1\mathrm{au}}|$ | $|B_{*}|$ | $T_{*}$ | $\dot{M}$ | $\phi$
---|---|---|---|---|---|---|---|---|---
[Gyr] | $[\Omega_{\odot}]$ | [days] | [au] | $\mathrm{[km\,s^{-1}]}$ | [G] | [G] | [MK] | [$M_{\odot}\,\mathrm{yr}^{-1}$] | [GeV]
6.0 | 0.87 | 31 | 47 | 370 | $3.4\times 10^{-5}$ | 1.1 | 1.3 | $4.1\times 10^{-15}$ | 0.09
4.6 | 1.0 | 27 | 122 | 450 | $3.8\times 10^{-5}$ | 1.3 | 1.5 | $2.3\times 10^{-14}$ | 0.21
2.9 | 1.3 | 22 | 500 | 610 | $5.4\times 10^{-5}$ | 1.7 | 2.2 | $2.8\times 10^{-13}$ | 0.57
1.7 | 1.6 | 17 | 696 | 660 | $7.6\times 10^{-5}$ | 2.5 | 2.4 | $5.1\times 10^{-13}$ | 1.19
1.0 | 2.1 | 13 | 950 | 720 | $1.2\times 10^{-4}$ | 3.5 | 2.6 | $8.5\times 10^{-13}$ | 1.96
0.6 | 3.0 | 9 | 1324 | 790 | $1.8\times 10^{-4}$ | 5.5 | 3.0 | $1.5\times 10^{-12}$ | 5.1†
0.6 | 3.5 | 8 | 1530 | 820 | $2.8\times 10^{-4}$ | 6.7 | 3.2 | $2.0\times 10^{-12}$ | 7.45†
0.6 | 4.0 | 7 | 1725 | 850 | $3.5\times 10^{-4}$ | 8.0 | 3.3 | $2.4\times 10^{-12}$ | 10.3†

†These values for $\phi$ do not match our results very well below the peak of the Galactic cosmic ray spectrum (Section 3.1.1).

### 2.1 Local interstellar spectrum (LIS)

The LIS of Galactic cosmic rays is the spectrum that is thought to be unmodulated by the solar wind and therefore can only be observed outside of the heliosphere. The LIS has been measured by _Voyager 1_ from beyond the heliopause (Stone et al., 2013; Cummings et al., 2016). A model fit to the _Voyager 1_ observations of the LIS, from Vos & Potgieter (2015), is given as a differential intensity, $j_{\mathrm{LIS}}$, as

$j_{\mathrm{LIS}}(T)=2.70\frac{T^{1.12}}{\beta^{2}}\left(\frac{T+0.67}{1.67}\right)^{-3.93}\mathrm{m^{-2}}\,\mathrm{s^{-1}}\,\mathrm{sr^{-1}}\,\mathrm{MeV^{-1}}$ (2)

where $T$ is the kinetic energy of the cosmic rays in GeV and $\beta$ is the velocity of the particle divided by the speed of light $c$. In the model of Vos & Potgieter (2015) the very local interstellar spectrum (very LIS) is specified at the heliopause, taken to be 122 au. For our simulations the value at the outer boundary is taken to be the LIS, where the differential intensity of cosmic rays can be expressed in terms of the phase space density ($f$ from Eq. 1) as $j(T)=p^{2}f(p)$. Note that the expression for the LIS in Eq. 2 is different at low energies from the LIS used in Svensmark (2006) and Cohen et al. (2012), which is based on a model fit to older _Voyager 1_ data. At $\sim 1$ GeV and higher energies the spectra are the same, but below $\sim 1$ GeV the model fit from Vos & Potgieter (2015) is now more accurate as it is constrained by the more recent _Voyager 1_ data. However, since the difference in the adopted spectra is only at low energies, where solar modulation dominates, it is unlikely that the different spectra would affect the model results. We assume a constant LIS as a function of time in our simulations.
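For reference, Eq. 2 is straightforward to evaluate numerically. The short sketch below is a convenience function written for this text (not taken from the paper's code); it computes $j_{\mathrm{LIS}}(T)$ for a proton of kinetic energy $T$ in GeV, using $\beta^{2}=1-\left(m_{p}c^{2}/(T+m_{p}c^{2})\right)^{2}$.

```python
# Short sketch evaluating the model LIS of Eq. 2 (Vos & Potgieter 2015),
# with T the proton kinetic energy in GeV.  Spot-check values only.
def j_LIS(T_GeV, mp_c2=0.938):
    """Differential intensity of Eq. 2 in m^-2 s^-1 sr^-1 MeV^-1."""
    beta2 = 1.0 - (mp_c2 / (T_GeV + mp_c2)) ** 2   # beta^2 = 1 - (m_p c^2 / E_tot)^2
    return 2.70 * T_GeV ** 1.12 / beta2 * ((T_GeV + 0.67) / 1.67) ** (-3.93)

for T in [0.01, 0.1, 1.0, 10.0]:
    print(f"T = {T:5.2f} GeV  ->  j_LIS = {j_LIS(T):.3e}")
```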
The LIS may have evolved as a function of time, due to a corresponding temporal evolution of the star formation rate (SFR) of the Milky Way (Rocha-Pinto et al., 2000) and assuming that the majority of Galactic cosmic rays are produced by supernovae, as discussed in Svensmark (2006). However since the Milky Way’s SFR for the times that we consider ($t>0.6\,$Gyr), shown in Fig. 2 of Svensmark (2006), is within a factor of two of the current value for the Milky Way’s SFR, we do not vary the LIS as a function of time. ### 2.2 Diffusion coefficient The diffusion coefficient of the cosmic rays, in units of $c$, can be estimated from quasi-linear theory (Jokipii, 1966; Schlickeiser, 1989) as $\frac{\kappa(r,p,\Omega)}{\beta c}=\eta_{0}\left(\frac{p}{p_{0}}\right)^{1-\gamma}r_{\mathrm{L}}$ (3) where $r_{\mathrm{L}}=p/[eB(r,\Omega)]$ is the Larmor radius of the protons with $e$ representing the unit of electric charge, $\Omega$ is the adopted stellar rotation rate and $\eta_{0}=\left(\frac{B}{\delta B}\right)^{2}$ (4) where $B^{2}$ relates to the energy density of the large-scale magnetic field and $(\delta B)^{2}$ to the total energy density in the smaller scale magnetic field turbulent modes. The diffusion coefficient $\kappa/\beta c$ describes the scattering length of protons with momentum $p$, and in Eq. 3 is scaled to momentum $p_{0}$, corresponding to momentum of particles whose Larmor radii matches the length of the longest turbulent modes. We adopt $p_{0}=3\,\mathrm{GeV}/c$. The value of $\eta_{0}$ represents the level of turbulence present in the magnetic field (Eq. 4). The value of $\gamma$ is related to the turbulence power spectrum where $\gamma=5/3$ would represent Kolmogorov-type turbulence. The value of $\gamma=1$ was adopted by Svensmark (2006) and Cohen et al. (2012) which fits the present day observations of solar wind modulation quite well and which we also show in Fig. 7 using $\eta_{0}=1$. Thus, we adopt $\eta_{0}=1$ and $\gamma=1$ for all of the simulations. The magnetic field strength of a solar-type star increases with increasing stellar rotation rate (which is discussed further in Section 2.3.2). Given that we adopt a constant value for $\eta_{0}$, this means that the diffusion coefficient decreases with increasing magnetic field strength and therefore also with increasing stellar rotation rate. The possible implications of these assumptions are discussed briefly in Appendix B. The solar wind properties that we adopt for the present day simulation ($t=4.6\,$Gyr) are given in Table 1 and are also described in the subsequent sections. ### 2.3 Stellar wind parameters as a function of time A number of physical quantities relating to the wind of a solar-type star must be defined in order to solve Eq. 1, namely the velocity and magnetic field profile as a function of radius and time, as well as the heliospheric radius. Here, we describe our stellar wind model to simulate the long-term evolution of the wind of a solar-type star, based on empirical relations derived from samples of solar-type stars. In our model, we use rotation as a proxy for age, so that young solar-type stars rotate faster than more evolved solar-type stars. The term “solar-type” star is often used to refer to low-mass stars with masses in the range of $\sim 0.5-1.3\,M_{\odot}$ corresponding to low- mass stars with convective envelopes. We run our stellar wind model only for stars with $M_{*}=1\,M_{\odot}$ to be able to focus on the Sun’s evolution. Thus, it can also be applied to stars with similar masses. 
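Before describing the wind model in detail, it is worth noting the main consequence of holding $\eta_{0}$ and $\gamma$ fixed in Eq. 3: at a given momentum, $\kappa$ at Earth's orbit simply scales as $1/B_{1\,\mathrm{au}}$. The sketch below uses the $|B_{1\mathrm{au}}|$ values of Table 1 to show how much the diffusion coefficient shrinks for faster (younger) rotators, which is what strengthens the modulation; it is an illustration of the scaling only, not of the full transport calculation.

```python
# Illustrative scaling sketch: with eta_0 and gamma fixed (Eq. 3), kappa at a
# given momentum scales as 1/B.  Using the |B_1au| values of Table 1, the
# diffusion coefficient at Earth's orbit is smaller for faster rotators.
B_1au = {0.87: 3.4e-5, 1.0: 3.8e-5, 1.3: 5.4e-5, 1.6: 7.6e-5,
         2.1: 1.2e-4, 3.0: 1.8e-4, 3.5: 2.8e-4, 4.0: 3.5e-4}   # Gauss, Table 1

for omega, B in B_1au.items():
    ratio = B_1au[1.0] / B            # kappa(Omega) / kappa(present day) at 1 au
    print(f"Omega = {omega:4.2f} Omega_sun: kappa_1au / kappa_1au(present) = {ratio:5.2f}")
```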
The stellar wind model that we use to derive the stellar wind properties as a function of radius for different ages is a 1.5D Weber-Davis model (Weber & Davis, 1967), which assumes that the star is rotating and magnetised. The code that we use which implements this magneto-rotator model is presented in Johnstone et al. (2015) and Carolan et al. (2019), based on the Versatile Advection Code (VAC, Tóth, 1996). We assume that the magnetic field, temperature and density at the base of the stellar wind scale with the stellar rotation rate (Carolan et al., 2019). The surface of the Sun is located at $1\,r_{\odot}$ (i.e. one solar radius) corresponding to the photosphere while the corona is located at $\sim 1.003\,r_{\odot}$, slightly above the photosphere. Our stellar wind model launches the wind from the base of the corona which we approximate as $1\,r_{\odot}$. For any given rotation rate, the stellar wind model then solves for the distance profiles of the magnetic field (the radial and azimuthal components), radial and azimuthal velocity, pressure and mass density. The resulting radial profiles for the relevant physical quantities are then used in Eq. 1. Our stellar wind model is polytropic meaning that the pressure is related to the density via $P\propto\rho^{\alpha}$, where we assume here that $\alpha=1.05$. Therefore, the stellar wind temperature profile is close to being isothermal. The polytropic wind model assumes that the driving mechanism for the solar wind is thermal pressure gradients. More details of our adopted stellar wind model are shown in Carolan et al. (2019). It is important to note that the physical properties that we derive from the stellar wind model are applicable to the Sun and also to other solar-type stars. Therefore, throughout the paper we often refer more generally to stellar winds rather than to the solar wind since our results are equally valid for other solar- type stars. #### 2.3.1 Stellar rotation rate as a proxy for age The evolution in time of the rotation rate for a solar-type star can be derived from large observational samples of solar-type stars with different ages (Fig. 3, Gallet & Bouvier, 2013). At very young ages ($t<5-10\,$Myr), the presence of protoplanetary discs brake the spin up of young stars that would otherwise occur due to gravitational contraction. Once protoplanetary discs are dispersed young stars then continue to spin up at a faster rate until they reach the zero-age main sequence. After this, the spin down of solar-type stars is attributed to stellar winds, which carry away angular momentum. We limit our study to ages $t\geq 0.6\,$Gyr, as some of our assumptions for the properties of the stellar wind base may no longer hold at very young ages. From $\sim 0.6$ to $1$ Gyr, observations show a large spread in rotation rates of solar-type stars, which means that prior to $\sim 1$ Gyr it is not possible to determine the rotation rate of the Sun (e.g. fast or slow rotator). Therefore, for $t=0.6$ Gyr we investigate three scenarios, ranging from the case where the Sun was a slow rotator ($\Omega=3\Omega_{\odot}$) to a fast rotator ($\Omega=4\Omega_{\odot}$) scenario, with an intermediate rotator case of $\Omega=3.5\Omega_{\odot}$. However, after $\sim 1$ Gyr (corresponding to $\sim 2.1\,\Omega_{\odot}$), the rotation rate of the Sun is thought to have converged, such that $\Omega\propto t^{-0.5}$ (Skumanich, 1972). 
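A minimal sketch of this late-time spin-down is given below, normalised so that $\Omega=\Omega_{\odot}$ at $t=4.6$ Gyr. This is only a rough inversion of the convergent branch: the $t$-$\Omega$ pairs in Table 1 come from the Gallet & Bouvier (2013) rotational evolution tracks, which this simple power law reproduces only approximately, and it does not apply for $t\lesssim 1$ Gyr where the fast/slow rotator degeneracy remains.

```python
def omega_from_age(t_gyr, t_sun=4.6, omega_sun=1.0):
    """Skumanich spin-down, Omega ~ t^(-1/2), normalised to the present Sun.

    Only meaningful for t >~ 1 Gyr, once the rotation rates of solar-type
    stars have converged.
    """
    return omega_sun * (t_gyr / t_sun)**(-0.5)

def age_from_omega(omega, t_sun=4.6, omega_sun=1.0):
    """Inverse relation, t ~ Omega^-2."""
    return t_sun * (omega / omega_sun)**(-2.0)

# Rough comparison with the convergent rows of Table 1:
for om in (0.87, 1.0, 1.3, 1.6, 2.1):
    print(f"Omega = {om:4.2f} Omega_sun  ->  t ~ {age_from_omega(om):4.1f} Gyr")
```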
The values of $\Omega$ that we investigate here, as well as the age and other corresponding physical parameters of our simulations, are given in Table 1. We simulate the evolving solar wind for the following rotation rates: $\Omega=0.87,\,1.0,\,1.3,\,1.6,\,2.1,\,3.0,\,3.5,\,4.0\,\Omega_{\odot}$. #### 2.3.2 The evolving winds of solar-type stars Magnetic torques in the winds of solar-type stars are responsible for carrying away most of the stellar angular momentum. To prescribe the evolution of the magnetic field for solar-type stars, we use the empirical relationship between observationally derived values of the large-scale magnetic field strength for low-mass stars and stellar rotation rate (Vidotto et al., 2014) given by $B_{*}(\Omega)=1.3\left(\frac{\Omega}{\Omega_{\odot}}\right)^{1.32\pm 0.14}\,\mathrm{G}.$ (5) The field strength was obtained by averaging surface magnetic maps, which for stars was derived using the Zeeman Doppler Imaging (ZDI) technique. For the Sun the large scale component of solar synoptic maps (derived from Kitt Peak/National Solar Observatory data) was instead used. We use these observationally derived values for $B_{*}(\Omega)$ as the value of the radial component of the magnetic field strength, $B_{*,r}(\Omega)$, at the wind base for the stellar wind model. The initial condition used in the stellar wind model for the radial profile of the magnetic field is that $B_{r}(r,\Omega)=B_{*,r}(\Omega)(r_{\odot}/r)^{2}$ and $B_{\phi}(\Omega)=0$. As the stellar wind simulation evolves, an azimuthal component of the magnetic field develops due to stellar rotation. At large distances, this component falls off as $1/r$. Our steady-state stellar wind models extend out to 1 au. Carolan et al. (2019) showed that the magnetic field at Earth’s orbit from this stellar wind model ($\sim 3.8\times 10^{-5}$G) matches the observed values very well. Finley et al. (2019), for example, shows the observed open magnetic flux in the solar wind varying from $5-15\times 10^{22}$Mx, which results in magnetic field strengths at Earth’s orbit of $1.78-5.3\times 10^{-5}$G. The model also matches the observed values for the mass-loss rate, velocity and density of the solar wind at 1 au very well, as shown in Fig.1 of Carolan et al. (2019). We extrapolate the values of $B_{r}(\Omega)$ and $B_{\phi}(\Omega)$ beyond $1\mathrm{au}$ out to the edge of the heliosphere using power laws with distance such that $\displaystyle B_{r}(r>1\mathrm{au},\Omega)$ $\displaystyle=$ $\displaystyle B_{r,1\mathrm{au}}(\Omega)\left(\frac{1\mathrm{au}}{r}\right)^{2}$ (6) $\displaystyle B_{\phi}(r>1\mathrm{au},\Omega)$ $\displaystyle=$ $\displaystyle B_{\phi,1\mathrm{au}}(\Omega)\left(\frac{1\mathrm{au}}{r}\right)$ (7) Since the radial magnetic field falls as $1/r^{2}$ but the azimuthal field only decreases as $1/r$, this gives rise to the Parker spiral that becomes tighter at larger distances when $B_{\phi}$ dominates. The values we obtain for the total magnetic field strength as a function of orbital distance and stellar rotation rate are used to determine the diffusion coefficient given in Eq. 3. A fit to the values of $B_{\phi,1\,\mathrm{au}}$ and $B_{r,1\,\mathrm{au}}$ as a function of stellar rotation rate, derived from the stellar wind model values, is given in Eq. A1 of Carolan et al. (2019) in combination with the values quoted in their Tables A1-A2. Note, we use the best fit values for the magnetic field strength as a function of stellar rotation rate given in Eq. 
5 and thus we do not consider the effect of the uncertainty in the fit here. Note that ZDI only allows the large-scale field to be reliably reconstructed (Johnstone et al., 2010; Arzoumanian et al., 2011; Lang et al., 2014). Fortunately, the stellar wind flows through large-scale fields and therefore the limited resolution of ZDI magnetograms has been demonstrated not to affect the stellar wind (Jardine et al., 2017; Boro Saikia et al., 2020). Lehmann et al. (2019) performed a study of the ZDI technique using controlled input data and showed that the large-scale field morphologies are recovered well. The other two wind base parameters required in our stellar wind models are the base temperature and density. We use the relationship for the stellar wind base temperature as a function of rotation rate from Ó Fionnagáin & Vidotto (2018): $T_{*}=\begin{cases}1.50\left(\frac{\Omega}{\Omega_{\odot}}\right)^{1.2}\mathrm{MK}\hskip 11.38109pt\mathrm{for}\hskip 5.69054pt\Omega<1.4\Omega_{\odot}\\\ 1.98\left(\frac{\Omega}{\Omega_{\odot}}\right)^{0.37}\mathrm{MK\,}\hskip 5.69054pt\mathrm{\,for}\hskip 8.53581pt\Omega\geq 1.4\Omega_{\odot}\end{cases}$ (8) For the base density, we assume that $n_{*}=10^{8}(\Omega/\Omega_{\odot})^{0.6}$cm-3, following the work by Ivanova & Taam (2003). Overall, the radial velocity profile results from the magneto-rotator stellar wind model that we use given a particular set of values for the temperature, density and magnetic field strength at the base of the wind. The stellar wind model, by construction, matches the solar wind velocities observed at Earth well ($v_{\oplus}\simeq 450\,\mathrm{km\,s^{-1}}$, McComas et al., 2008; Usmanov et al., 2014). For each of the stellar wind simulations the wind has reached its terminal velocity by 1 au and so $v(r>1\mathrm{au})=v(1\mathrm{au})$ is used in Eq. 1. The values of $v$ at $1\mathrm{au}$ (denoted by $v_{1\mathrm{au}}$) are given in Table 1. Fig. 9 in Appendix C shows the magnetic field strength and velocity profiles, as a function of radius, derived from the magneto-rotator stellar wind model for $1\,\Omega_{\odot}$ and $4\,\Omega_{\odot}$. From mass conservation, it follows that $\dot{M}=4\pi r^{2}mnv$, with $m=0.5m_{\mathrm{p}}$ being the mean mass of the solar wind particle, considered to be composed of fully ionised hydrogen. At the present-day solar rotation rate, our model assumptions reproduce the present-day value of the solar wind mass-loss rate: $\dot{M}=2\times 10^{-14}M_{\odot}\,\mathrm{yr^{-1}}$. The mass-loss rates calculated at other ages are shown in Table 1 and are used to calculate the heliospheric (or more generally the astrospheric) radius. #### 2.3.3 Heliospheric radius The radius of a solar-type star’s astrosphere, $R_{\mathrm{h}}$, is determined as a balance of the stellar wind ram pressure ($P_{\odot}\propto\dot{M}_{\odot}v_{\odot}/r^{2}$) and the ambient ISM pressure, $P_{\mathrm{ISM}}$. The solar wind ram pressure evolves with time and so, by assuming a constant ISM pressure as a function of time (following Svensmark, 2006), we can estimate $R_{\mathrm{h}}$ as a function of time as $R_{\mathrm{h}}(t)=R_{\mathrm{h,\odot}}\sqrt{\frac{{\dot{M}(t)}v(t)}{{\dot{M}_{\odot}}v_{\odot}}}$ (9) where $R_{\mathrm{h,\odot}}$, $\dot{M}_{\odot}$ and $v_{\odot}$ are the values for the Sun’s current heliospheric radius, mass loss rate and wind velocity at $1\mathrm{au}$, respectively. The present day values for these parameters are given in Table 1, as well as the values for different times. 
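A compact sketch collecting the empirical scalings of Sections 2.3.2-2.3.3 is given below: the base field of Eq. 5, the base temperature of Eq. 8, the Ivanova & Taam (2003) base density, the field extrapolation of Eqs. 6-7 and the astrospheric radius of Eq. 9. The numerical values passed in the example (the radial/azimuthal split of the 1 au field and the illustrative mass-loss rates) are placeholders, not outputs of the authors' wind model; only the corresponding magnitudes quoted in Table 1 are taken from the text.

```python
import numpy as np

def b_base(omega):
    """Wind-base (radial) field in G, Eq. 5 (best-fit exponent 1.32)."""
    return 1.3 * omega**1.32

def t_base(omega):
    """Wind-base temperature in MK, Eq. 8 (O Fionnagain & Vidotto 2018)."""
    return 1.50 * omega**1.2 if omega < 1.4 else 1.98 * omega**0.37

def n_base(omega):
    """Wind-base density in cm^-3 (Ivanova & Taam 2003 scaling)."""
    return 1.0e8 * omega**0.6

def b_beyond_1au(r_au, br_1au, bphi_1au):
    """Extrapolation of Eqs. 6-7: B_r ~ 1/r^2 and B_phi ~ 1/r beyond 1 au."""
    r_au = np.asarray(r_au, dtype=float)
    br = br_1au / r_au**2
    bphi = bphi_1au / r_au
    return br, bphi, np.hypot(br, bphi)

def r_astrosphere(mdot, v_1au, mdot_sun=2e-14, v_sun=450.0, rh_sun=122.0):
    """Astrospheric radius in au from ram-pressure balance, Eq. 9."""
    return rh_sun * np.sqrt((mdot * v_1au) / (mdot_sun * v_sun))

# Present-day sanity checks against Table 1 (the component split of the
# 1 au field is a placeholder; only |B| ~ 3.8e-5 G is quoted there):
print(b_base(1.0), t_base(1.0), n_base(1.0))          # 1.3 G, 1.5 MK, 1e8 cm^-3
print(b_beyond_1au([1, 10, 122], 2.5e-5, 2.9e-5)[2])  # |B|(r) out to the heliopause
print(r_astrosphere(2e-14, 450.0))                    # 122 au by construction
print(r_astrosphere(8.5e-13, 720.0))                  # ~1000 au, roughly the 950 au of the t = 1 Gyr row
```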
### 2.4 Our combined stellar wind and cosmic ray propagation simulations

We use the output of our stellar wind simulations as input to our simulations of cosmic ray propagation. To recapitulate, we run a stellar wind simulation for each of a number of different times during a solar-type star's life (Table 1). For each time, we obtain the stellar wind velocity and magnetic field profiles from the wind base at $r_{\odot}$ out to 1 au, as well as the corresponding mass-loss rate. Beyond 1 au, we use the fact that the stellar wind has reached terminal speed and extrapolate the stellar wind conditions out to the astrospheric radius. For all the cosmic ray propagation simulations, the inner radial spatial boundary is set to $0.1\,$au. We use the mass-loss rate and the radial velocity of the stellar wind in Eq. 9 to derive the heliospheric radius as a function of time, and our outer radial boundary is set to this heliospheric radius, $R_{\mathrm{h}}(t)$. The logarithmically spaced radial bins for $i=0,...,N$ are given by $r_{i}=\exp\{i\,\ln(r_{N}/r_{0})/(N-1)+\ln r_{0}\}$, where $r_{0}=0.1\,$au and $r_{N}=R_{\mathrm{h}}(t)$ with $N=60$. Similarly, $p_{j}=\exp\{j\,\ln(p_{M}/p_{0})/(M-1)+\ln p_{0}\}$ for $j=0,...,M$ are the logarithmically spaced momentum bins for the cosmic rays, with $M=60$. The minimum and maximum momenta of the cosmic rays that we consider are $p_{0}=0.15\,\mathrm{GeV}/c$ and $p_{M}=100\,\mathrm{GeV}/c$, respectively. The same momentum range is used for all of the cosmic ray propagation simulations.

## 3 Results

We present the results of our numerical study, which investigates how the modulation of Galactic cosmic rays by the wind of a solar-type star evolves throughout the star's lifetime. We first investigate the evolution of the cosmic ray intensity with stellar rotation rate for a number of different cosmic ray energies. We then look specifically at the radial dependence of the Galactic cosmic ray spectrum at $\sim 1\,$Gyr, when life is thought to have begun on Earth. We also focus on the differential intensity of Galactic cosmic rays at $t=600\,$Myr, which is relevant for the warm Jupiter exoplanet HR 2562b, orbiting a solar-like star at a distance of 20 au.

### 3.1 Galactic cosmic ray spectrum as a function of time

Figure 2: Differential intensity of Galactic cosmic rays as a function of kinetic energy at 1 au for different stellar rotation rates, which approximately correspond to different ages during the Sun's life. The values of $\Omega$ plotted correspond to $t=0.6-6.0$ Gyr for the Sun. The solid black line represents a model fit of the Voyager 1 data for the LIS (Section 2.1), which is set to be the value at the outer boundary of the simulations. The parameters used for each simulation are given in Table 1.

We investigate the Galactic cosmic ray spectrum at the orbital distance of Earth as a function of a solar-type star's lifetime. We focus on a number of different times ranging from $0.6-6.0$ Gyr, as given in Table 1. We chose to investigate $t=1.0\,$Gyr because it approximately matches the time at which life is thought to have started on Earth ($\sim$3.8 Gyr ago, Mojzsis et al., 1996). It is therefore of interest to estimate the intensity of Galactic cosmic rays at this time.
The other time of particular interest that we focus on is $t=0.6\,$Gyr, since there are observations of a directly imaged exoplanet (HR 2562b) orbiting a star similar in mass to the Sun with an age estimate of $t\sim 0.6\,$Gyr. The impact of Galactic cosmic rays in this exoplanetary system is discussed further in Section 4.

Fig. 2 shows the differential intensity of Galactic cosmic rays as a function of their kinetic energy at 1 au for a number of different stellar rotation rates. The black dashed line represents the present-day values that we calculate, and the solid black line represents the LIS, which is the adopted value of the fixed outer spatial boundary condition. The magenta dashed line represents a solar-type star with a slower rotation rate ($\Omega=0.87\Omega_{\odot}$) than the Sun's present-day value and thus probes the intensity of Galactic cosmic rays in the future, when the Sun will be $\sim$6.0 Gyr old. The stellar wind properties present at this time (derived from the stellar wind model) result in an increase in the number of $\lesssim$GeV cosmic rays reaching Earth, ranging from a factor of $\sim 2$ up to a factor of $\sim 5$ for MeV cosmic rays. Examining the intensity of Galactic cosmic rays for faster stellar rotation rates, i.e. looking into the Sun's past, shows that the intensity decreases rapidly for all but the most energetic cosmic rays. The peak in the differential Galactic cosmic ray intensity shifts to higher energies with increasing stellar rotation as a result of the corresponding increase in the stellar magnetic field strength (which results in smaller diffusion coefficients), combined with the effect of larger stellar wind velocities. The red shaded region represents three simulations at $t=0.6\,$Gyr: because of the uncertainty in the rotation rate of the Sun at that time, we adopt three rotation rates, $\Omega=3.0,3.5$ and $4\Omega_{\odot}$. This indicates that at young ages, for $T\lesssim 5$ GeV, there is at least an order of magnitude difference in the differential intensity of Galactic cosmic rays that reached Earth, depending on whether the Sun was a fast or a slow rotator. Again, it is important to note that we do not include the drift motion of the Galactic cosmic rays in our simulations, which, depending on the solar cycle, would lead to a change in our results.

Figure 3: (a) Differential intensity of Galactic cosmic rays at 1 au as a function of stellar rotation rate, $\Omega$, for cosmic rays of different kinetic energies. (b) Comparison of the diffusive timescale with the advective timescale as a function of a solar-type star's rotation rate for cosmic rays with different kinetic energies.

#### 3.1.1 Modified force field approximation

The force field approximation (Gleeson & Axford, 1968) provides a simple analytic expression, depending only on a modulation potential $\phi$, that was developed to describe the solar modulation of Galactic cosmic rays. Here, we compare our results in Fig. 2 with a modified version of the force field approximation, because the canonical force field approximation does not fit our simulations well at $\sim$MeV energies (the fact that the force field approximation does not fit the low energy component of the Galactic cosmic ray spectrum at Earth was first noted by Gleeson & Urch, 1973, and is also discussed in detail in Caballero-Lopez & Moraal, 2004).
This modified force field approximation for the differential intensity of Galactic cosmic rays at Earth, $j_{\mathrm{1au}}(T)$, can be expressed as: $\frac{j_{\mathrm{1au}}(T)}{E^{2}-E_{p}^{2}}=\beta\left(\frac{j_{\mathrm{LIS}}(T+\phi)}{(E+\phi)^{2}-E_{p}^{2}}\right)$ (10) where $E$ is the proton energy and $E_{p}=0.938$ GeV is the proton rest energy. The difference between this modified force field approximation and the usual force field approximation is the factor of $\beta$ on the right hand side of Eq. 10 which increases the suppression at low energies. For the usual force field approximation, $\phi$ is effectively the average energy loss suffered by a cosmic ray reaching Earth coming in from infinity, i.e. the ISM. The values of $\phi$ which fit our data best are given in Table 1. For $\Omega\gtrsim 2.1\Omega_{\odot}$ the modified force field approximation does not fit the low energy cosmic ray intensities very well (see Fig. 10 in Appendix D for a comparison between the modified force field approximation and our results). On the other hand, for $\Omega\lesssim 2.1\Omega_{\odot}$ the modified force field approximation, along with the values of $\phi$ quoted in Table 1, can be used to well approximate our results at 1 au. It is also important to note that while the (modified) force field approximation can be used to well reproduce the Galactic cosmic ray spectrum at Earth for $\Omega\lesssim 2.1\Omega_{\odot}$, it fails to reproduce the Galactic cosmic ray spectrum at large radii (as discussed in Caballero-Lopez & Moraal, 2004). ### 3.2 Intensity at Earth as a function of time for different energies Fig. 3(a) shows the differential intensity of the cosmic rays at 1 au as a function of $\Omega$, for a number of different kinetic energies. As expected, the lowest energy cosmic rays show the largest decrease in intensity as a function of increasing rotation rate. For $T=0.015-10$ GeV a similar evolution with increasing rotation rate is observed. Using a least-squares fitting method, we find that the intensity of 1 GeV cosmic rays decreases as $\Omega^{-3.8}$ until $\Omega\sim 2\Omega_{\odot}$. For $\Omega\gtrsim 2\Omega_{\odot}$ the intensity decreases more rapidly following a power law of $\Omega^{-9.9}$. For $T=10\,$GeV, the modulation is relatively small until $\Omega\sim 2\Omega_{\odot}$ in comparison to the lower energy cosmic rays. The break in the power laws at $\Omega\sim 2\Omega_{\odot}$ can be understood by comparing the diffusive and advective timescales at 1 au, shown in Fig. 3(b), where $t_{\mathrm{dif}}=\frac{r^{2}}{\kappa(r,p,\Omega)},\hskip 14.22636ptt_{\mathrm{adv}}=\frac{r}{v(r,\Omega)}.$ (11) The diffusion timescale depends on the momentum of the cosmic rays whereas the advective timescale does not. Thus, for any given value of $\Omega$ in Fig. 3(b) the variation in the ratio of $t_{\mathrm{adv}}/t_{\mathrm{dif}}$ as a function of cosmic ray energy occurs because $\kappa\propto p$. The break in the power law occurs at the same rotation rate for all low-energy cosmic rays. In particular, it occurs approximately when $t_{\mathrm{adv}}/t_{\mathrm{dif}}\lesssim 1$ for GeV cosmic rays. The timescales for GeV cosmic rays determines the position of the power law break because cosmic rays with lower energies will always be related to higher energy cosmic rays via momentum advection (i.e. losses). 
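For reference, below is a small sketch of the two ingredients used in this discussion: the modified force-field spectrum of Eq. 10 and the diffusive/advective timescales of Eq. 11. The function names and the example potential value are ours; the potential $\phi=0.21$ GeV is simply taken from the present-day row of Table 1.

```python
import numpy as np

E_P = 0.938   # proton rest energy [GeV]

def beta_of_T(T):
    """Velocity/c for kinetic energy T [GeV]."""
    E = T + E_P
    return np.sqrt(E**2 - E_P**2) / E

def j_lis(T):
    """Vos & Potgieter (2015) LIS fit (Eq. 2), T in GeV."""
    return 2.70 * T**1.12 / beta_of_T(T)**2 * ((T + 0.67) / 1.67)**(-3.93)

def j_1au_modified_ff(T, phi):
    """Modified force field approximation, Eq. 10.

    The extra factor of beta relative to Gleeson & Axford (1968) strengthens
    the suppression at low energies; phi is the potential from Table 1 [GeV].
    """
    E = T + E_P
    return beta_of_T(T) * (E**2 - E_P**2) / ((E + phi)**2 - E_P**2) * j_lis(T + phi)

def timescales(r_cm, kappa_cm2s, v_cms):
    """Diffusive and advective timescales of Eq. 11 (t_dif, t_adv), in seconds."""
    return r_cm**2 / kappa_cm2s, r_cm / v_cms

# Present-day spectrum at 1 au from the fitted potential phi = 0.21 GeV:
T = np.logspace(-2, 2, 50)
j_now = j_1au_modified_ff(T, phi=0.21)
```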
Looking at the LIS spectrum, the differential intensity of $>$GeV cosmic rays is always lower than the intensity of $\lesssim$GeV energies and thus are unable to replace the GeV cosmic rays via momentum advection that are suppressed by spatial advection. Fig. 3(b) can be used to broadly understand the overall modulation of Galactic cosmic rays. For 10 GeV cosmic rays because their diffusive timescale is much shorter than the advective timescale they do not experience much modulation until sufficiently far into the Sun’s past when the magnetic field strength and the velocity of the solar wind have increased significantly. For GeV and MeV cosmic rays their diffusive timescales (for the present day Sun and the past physical values of the solar wind) are always close to, or longer than, the advective timescale. Thus, the modulation of Galactic cosmic rays with these energies as a function of the Sun’s lifetime has always been quite significant. In the future if the magnetic field strength and velocity of the solar wind continue to decrease the differential intensity of Galactic cosmic rays at Earth will converge towards the LIS, with $j(\mathrm{MeV})>j(\mathrm{GeV})$. The magenta dashed line in Fig. 2 shows the differential intensity of cosmic rays at Earth in the future for $t=6.0\,$Gyr, which is still at this time strongly suppressed by the solar wind at low energies. Note that the momentum advection term in Eq. 1 also has an associated timescale but it will always be longer than the spatial advection timescale and therefore would not be responsible for the observed power law break. ### 3.3 Galactic cosmic ray spectrum at the time when life is believed to have started on Earth Here we focus on the Galactic cosmic ray spectrum for a number of different radii at $t=1.0\,$Gyr, shown in Fig. 4, at approximately the time when life is thought to have begun on Earth. The first noticeable feature is that, because the heliosphere was much larger at this earlier time in the Sun’s life (950 au versus 122 au), the differential intensity of cosmic rays at 130 au (blue dashed line) is lower at most energies than the present-day values we observe at Earth (grey dashed line in Fig. 4). We chose 130 au, as this is approximately the present-day location of the edge of the heliosphere. The green dashed line denotes the values we find at 1 au. For energies less than $\sim 5\,$GeV these values are approximately 2 orders of magnitude smaller than the present-day values observed at Earth meaning that the young Earth was far better protected from Galactic cosmic rays than the present-day Earth. Figure 4: Differential intensity of Galactic cosmic rays as a function of kinetic energy at different radii for $t=1.0\,$Gyr, when life is thought to have begun on Earth. The solid black line represents a model fit of the Voyager 1 data for the LIS (Section 2.1) located at 950 au at this younger age. The coloured dashed lines represent the differential intensity found at different radii of interest in the simulation. In particular, the green dashed line corresponds to the values found at 1 au. For comparison, the grey dashed line indicates the values found at the present-day Earth. ## 4 Application to HR 2562b: Propagation of Galactic cosmic rays in the atmosphere of a young warm Jupiter In the previous section we showed the Galactic cosmic ray spectrum that may have been present at the time when life began on Earth, or present at another Earth-like exoplanet orbiting at 1 au from a young solar-type star. 
Observing the signatures of Galactic cosmic rays in the atmosphere of an Earth-like exoplanet would be important for understanding the origins of life on Earth and would also act as a constraint for the model we present. Unfortunately, it is unlikely with the current/near-future observing facilities that it would be possible to detect such a signature for an Earth-like exoplanetary atmosphere. Thus, in this section we focus on an exoplanetary system with a solar-type host star where we believe it may be possible to detect the signatures of Galactic cosmic rays with the James Webb Space Telescope (Gardner et al., 2006). Our model can be used to guide future observations. It is important to note that even if the chemical effect of Galactic cosmic rays remains unobservable in Earth-like exoplanetary atmospheres that Galactic cosmic rays can still be important for these systems. In order to detect an observable chemical effect driven by Galactic cosmic rays in an exoplanetary atmosphere with a solar-type host star we must first isolate the chemical effects of Galactic cosmic rays from other effects such as from photo-chemistry driven by stellar radiation or from stellar energetic particles. Stellar radiation and stellar energetic particles will generally dominate over Galactic cosmic rays in terms of observable signatures in the atmospheres of close-in exoplanets so we must focus on exoplanets at large orbital distances. Young exoplanets would also be easier to detect because exoplanets cool, and emit less flux, as they age. Thus, we apply our cosmic ray model to HR 2562, a young exoplanetary system with an estimated age of 300-900 Myr (see Konopacky et al., 2016, for a discussion of the different age estimates for the star). This system hosts a warm (therefore meaning young) Jupiter exoplanet at a large distance from its host solar-type star – HR 2562b is a directly imaged planet, observed as part of the Gemini Planet Imager Exoplanet Survey. HR 2562b has a mass of $30\pm 15M_{\mathrm{Jup}}$, orbiting a 1.3$M_{\odot}$ star (F5V) at a distance of $20.3\pm 0.3$ au (Konopacky et al., 2016). At this orbital distance it is possible that Galactic cosmic rays will be more important than photo-driven chemistry in determining the chemical (dis-)equilibrium in the exoplanet’s atmosphere. To estimate the Galactic cosmic ray flux incident on HR 2562b, we use our Galactic cosmic ray spectrum for different radii at $t=0.6\,$Gyr (using $\Omega=3.5\Omega_{\odot}$). Fig. 5 plots the differential intensity of Galactic cosmic rays at a number of different radii. The green dashed line corresponds to the orbital distance of the exoplanet HR 2562b. We then use this Galactic cosmic ray spectrum to trace the subsequent propagation, and energy losses, of the cosmic rays down through the exoplanet’s atmosphere using the Monte Carlo cosmic ray propagation model as described by Rimmer & Helling (2013). Here, we take into account energy losses due to inelastic (ionization and excitation) collisions (i.e. we neglect magnetic mirroring). Rimmer & Helling (2013) contain further details of the Monte Carlo code. The atmosphere we use for our model is a DRIFT-PHOENIX atmosphere (Helling et al., 2008a, b; Witte et al., 2009) for a substellar object with an effective temperature $T_{\rm eff}=1200$ K, surface gravity of $10^{4.5}$ cm s-2 and solar metallicity. The resulting spectra for a range of atmospheric pressures, which correspond to different atmospheric depths, are shown in Fig. 6. 
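As a rough aid to reading the pressure levels quoted below, the DRIFT-PHOENIX pressure scale can be related to the vertical column mass traversed by a vertically incident cosmic ray under hydrostatic equilibrium, using the quoted surface gravity of $10^{4.5}$ cm s$^{-2}$. This back-of-the-envelope conversion is ours and is not part of the Monte Carlo model of Rimmer & Helling (2013).

```python
def column_mass(P_bar, g_cgs=10**4.5):
    """Vertical column mass above pressure level P, sigma = P / g, in g cm^-2."""
    P_cgs = P_bar * 1.0e6          # 1 bar = 1e6 dyn cm^-2
    return P_cgs / g_cgs

for P in (1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1.0):
    print(f"P = {P:7.0e} bar  ->  column ~ {column_mass(P):9.3e} g cm^-2")
```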
The solid black line corresponds to the interpolation of the input spectrum (the green dashed line in Fig. 5) used to initialise the Monte Carlo code. The majority of particles $<0.1$ GeV are attenuated at pressures greater than $10^{-5}$ bar, and the majority of $0.1$ – $10$ GeV particles are attenuated at pressures greater than $10^{-4}$ bar. Much of the energy lost by cosmic rays will be deposited into the atmosphere by ionizing and dissociating various molecular species. This ionization and dissociation leads to the formation of the ions $\mathrm{H_{3}^{+}}$ and $\mathrm{H_{3}O^{+}}$ (Helling & Rimmer, 2019), and most of the formation will occur between 1 mbar and 1 bar. These species are rapidly destroyed by recombination with electrons, at a rate proportional to the pressure. The ions $\mathrm{H_{3}^{+}}$ and $\mathrm{H_{3}O^{+}}$ will be much more likely to survive at 1 mbar than 1 bar, and can then diffuse higher in the atmosphere. The results shown in Fig. 6 can be used to determine if the abundances of these molecules are observable using a chemical network model, such as the models presented in Rimmer et al. (2014); Helling & Rimmer (2019) and Moore et al. (2019), but is beyond the scope of this paper. Figure 5: Differential intensity of Galactic cosmic rays as a function of kinetic energy at different radii for $t=0.6\,$Gyr. The solid black line represents a model fit of the Voyager 1 data for the LIS (discussed in 2.1) located at 1530 au. The coloured dashed lines represent the differential intensity found at different radii of interest in the simulation. In particular, the red dashed line corresponds to the values found at 1 au and the green dashed line corresponds to the intensity of cosmic rays found at the same orbital distance as the exoplanet HR 2562b. For comparison, the grey dashed line indicates the values found at the present-day Earth. Figure 6: Differential intensity of Galactic cosmic rays as a function of kinetic energy for different atmospheric pressures, $P$, (and heights) within a model atmosphere for HR 2562b, based on a Monte Carlo cosmic ray propagation model (Rimmer & Helling, 2013). ## 5 Discussion: Comparison to the literature Our simulation for $t=1.7$ Gyr can be compared with the results of Svensmark (2006). The turquoise line in their Fig. 1 corresponds to the same time denoted by the cyan dots in our Fig. 2. The peak flux occurs at approximately the same energy, i.e. $\sim$GeV. On the other hand the peak flux is approximately a factor of three larger in our simulation and at the lowest energies there is approximately one order of magnitude difference between the simulations. This difference at low energies is very likely due to differences in the adopted radial magnetic field and velocity profiles at small radii. Svensmark (2006) assumed a constant solar wind velocity as a function of radius and that the magnetic field scales as $r^{-1}$. In our case the solar wind velocity is only constant as a function of radius once it has reached its terminal velocity. The magnetic field scales as $r^{-1}$ beyond $r\sim 1\,$au, whereas for $r<1\,$au it scales as $r^{-2}$ since the radial component of the magnetic field dominates at these radii. The evolution of the solar wind properties with time is also different between the two models which likely contributes to the differences seen between the two models. 
For $T\gtrsim\,$GeV, using a constant solar wind velocity and $B\propto 1/r$ appears sufficient, whereas at low energies it underestimates the differential intensity of Galactic cosmic rays.

Making a comparison with the results of Cohen et al. (2012) is less straightforward. We have used empirical relations from observations to estimate the temporal evolution of the solar wind properties as a function of rotation rate as input for our cosmic ray transport model. In contrast, Cohen et al. (2012) took an observed magnetic map of the Sun and modified it to mimic the presence of the high-latitude spots observed in young stars. Their Fig. 4 represents the physical set-up most similar to our model, where they increased the dipole and spot components of the magnetic field by a factor of 10. The green line in their Fig. 4, with a solar period of 10 days, is closest to our slowly rotating Sun scenario at $t=0.6\,$Gyr with $\Omega=3.0\Omega_{\odot}$. The peak intensity that they find is approximately a factor of 2 or 3 larger than our peak value, while the kinetic energy at which the peak is found is very similar. While here we do not consider the interaction of the Galactic cosmic rays with an exoplanetary magnetic field (as is the focus of Grießmeier et al., 2015, for a close-in exoplanet orbiting an M dwarf, for instance), the differential intensity of Galactic cosmic rays that we find for different radii, and times in a solar-type star's life, can be used in the future as an estimate for the boundary condition of simulations focusing on this interaction with exoplanets around other solar-type stars in more detail.

## 6 Conclusions

In this paper we investigated how the propagation of Galactic cosmic rays through the stellar systems of solar-type stars changes as a function of the star's lifetime, due to the varying physical conditions of the stellar wind with time. We modelled the modulation of Galactic cosmic rays by solving the associated 1D transport equation assuming diffusive transport, including spatial and momentum advection of Galactic cosmic rays by the stellar wind. We used a polytropic stellar wind model to derive the distance profile of the stellar wind for different stellar rotation rates. We found that for a solar-type star older than the Sun ($t=6.0\,$Gyr) the differential intensity of Galactic cosmic rays will increase by a factor of 2-5 at $T\lesssim\,$GeV. At early ages, at $t=0.6\,$Gyr for instance, the rotation rate of the Sun is unknown. We showed that the resulting difference in the differential intensity of Galactic cosmic rays at Earth, depending on whether the Sun was a fast or a slow rotator, is approximately an order of magnitude for $T\lesssim 5\,$GeV energies. Generally, for mildly relativistic cosmic rays ($\lesssim$GeV energies), the associated diffusion timescales have always been comparable to, or longer than, the advective timescale of the stellar winds of solar-type stars. This means that the past and present modulation of these low-energy cosmic rays in the solar system has always been severe. Only in the future, as the solar wind becomes weaker, will these low-energy cosmic rays begin to reach Earth from the ISM. For faster rotation rates, approximately corresponding to younger ages, 10 GeV cosmic rays also begin to be severely modulated, due to the increased magnetic field strength and velocity of the solar wind.
We compare our results to a modified version of the force field approximation and find that for rotation rates of $\Omega\lesssim 2.1\Omega_{\odot}$ the modified force field approximation can be used to fit our results at 1 au quite well. We provided an analytical fit to our derived spectra in Eq. 10. These fits could be easily incorporated in future models, such as for calculating the spectrum at the top of Earth’s atmosphere for the different ages that we focused on here. We looked specifically at the differential intensity of Galactic cosmic rays that would have been incident on Earth at $t=1.0\,$Gyr, approximately when life is thought have begun on Earth. For $T\lesssim 5\,$GeV the values for the differential intensity that we find are approximately 2 orders of magnitude smaller than the present-day values observed at Earth, similar to previous estimates by Cohen et al. (2012). Finally, we applied our model to the case of HR 2562b which is a warm Jupiter orbiting a young $\sim$solar-like star ($t=0.6$ Gyr) at 20 au. After calculating the differential intensity of Galactic cosmic rays at the orbital distance of this exoplanet, we determine how the cosmic rays would deposit their energy as they propagate through the exoplanet’s atmosphere. Here, we assumed the atmosphere to be unmagnetised. We found that the majority of cosmic ray particles with energies between 0.1 and 10 GeV are attenuated at pressures greater than $10^{-4}$ bar. Our results can be used to guide future searches for the chemical signatures of Galactic cosmic rays in exoplanetary atmospheres with, for example, the JWST. An observational signature of Galactic cosmic rays in an exoplanetary atmosphere of a warm Jupiter may help constrain the Galactic cosmic ray spectrum present around young Earth-like exoplanets. ## Acknowledgements The authors thank Dr Christiane Helling for providing the model atmosphere of HR 2562b. The authors also thank A. C. Cummings and B. Heikkila for providing the IMP 8 data. DRL and AAV acknowledge funding from the Irish Research Council Laureate Awards 2017/2018 and from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 817540, ASTROFLOW). P. B. R. thanks the Simons Foundation for support under SCOL awards 59963. This work has made use of DESY’s high- performance computing facility. We would like to thank the anonymous referee for helpful comments which improved the manuscript. ## Appendix A The numerical code In this section we give details of the numerical code that was used including a description of the numerical scheme, how the boundary conditions are implemented, a definition of the overall timestep for the code as well as a validation of our code using Galactic cosmic ray observations at Earth and a resolution test. The code presented here assumes spherical symmetry and is adapted version of the code that was originally presented in Rodgers-Lee et al. (2017) which had two spatial dimensions. The version of the code presented here uses a logarithmically spaced spatial grid (which was used in Rodgers-Lee et al., 2020), as well as a logarithmically spaced momentum grid which was not included in the previous version of the code. The last term in Eq. 1 describing momentum advection is also now included. We use a different numerical scheme for the advective terms which is described below. ### A.1 Numerical scheme Here we describe the numerical scheme used to discretise Eq. 1. 
Both the spatial and momentum bins are logarithmically spaced and so we introduce a change of variables such that $u\equiv\ln r$ and $w\equiv\ln p$. Let $\tilde{\kappa}$ be the diffusion coefficient when written as a function of $u$ and $w$. Given any variable $X$, the notation $X^{n}_{i,j}$ denotes the variable $X$ at $u_{i}$, $w_{j}$ and time $t^{n}$ with $u_{i}=i\Delta u,\hskip 14.22636ptw_{j}=j\Delta w,\hskip 14.22636ptt^{n}=n\Delta t$ (12) where $\Delta u$ ($\Delta w$) is the radial (momentum) logarithmic grid spacing and $\Delta t$ is the timestep. For the diffusive term in Eq. 1 we use a first order forward in time and second order centred in space scheme. The diffusion equation can be expressed in terms of $u$ as $\displaystyle\frac{\partial f}{\partial t}$ $\displaystyle=$ $\displaystyle\bm{\nabla}\cdot(\kappa\bm{\nabla}f)$ (13) $\displaystyle=$ $\displaystyle\frac{1}{r^{2}}\frac{\partial}{\partial r}\left(r^{2}\kappa\frac{\partial f}{\partial r}\right)$ $\displaystyle=$ $\displaystyle\frac{1}{r^{3}}\frac{\partial}{\partial u}\left(r\tilde{\kappa}\frac{\partial f}{\partial u}\right)=e^{-3u}\frac{\partial}{\partial u}\left(e^{u}\tilde{\kappa}\frac{\partial f}{\partial u}\right)$ We can discretise this using a forward in time, centred in space scheme as $\displaystyle f^{n+1}_{i,j}=f^{n}_{i,j}+\frac{\Delta t}{(\Delta u)^{2}}e^{-3u_{i}}\bigg{(}e^{-u_{i+1/2}}\tilde{\kappa}_{i+1/2,j}\left[f^{n}_{i+1,j}-f^{n}_{i,j}\right]$ $\displaystyle-e^{-u_{i-1/2}}\tilde{\kappa}_{i-1/2,j}\left[f^{n}_{i,j}-f^{n}_{i-1,j}\right]\bigg{)}.$ (14) For the spatial advective term we use a finite volume first order in time and space upwinding scheme. Thus, written in conservative form the advection equation becomes $\displaystyle\frac{\partial f}{\partial t}=-\bm{\nabla}\cdot(\bm{v}f)=-\bm{\nabla}\cdot\bm{F}$ (15) with $\bm{F}=\bm{v}f$. Written in terms of $u$ this becomes, $\frac{\partial f}{\partial t}=-e^{-3u}\frac{\partial}{\partial u}(e^{2u}vf).$ (16) Eq. 16 can then be expressed as $\frac{\Delta f}{\Delta t}\approx-\left(\frac{F^{n}_{i+1/2,j}-F^{n}_{i-1/2,j}}{\Delta u}\right)$ (17) where $F_{i\pm 1/2,j}=e^{2u_{i\pm 1/2}}v^{*}_{i\pm 1/2}f^{n,*}_{i\pm 1/2,j}$ with $v^{*}_{i\pm 1/2}$ and $f^{n,*}_{i\pm 1/2,j}$ being the so-called resolved states. Therefore $f^{n,*}_{i+1/2,j}=\begin{cases}f^{n}_{i,j}\hskip 25.60747pt\text{if }v^{*}_{i+1/2}\geq 0\\\ f^{n}_{i+1,j}\hskip 17.07164pt\text{if }v^{*}_{i+1/2}<0\\\ \end{cases}$ (18) and similarly for $f^{n,*}_{i-1/2,j}$ where $v^{*}_{i+1/2}=(v_{i+1}+v_{i})/2$. Thus, written as a difference scheme this is $\displaystyle f^{n+1}_{i,j}$ $\displaystyle=$ $\displaystyle f^{n}_{i,j}-\frac{\Delta te^{-3u_{i}}}{\Delta u}\bigg{(}\bigg{.}e^{2u_{i+1/2}}v^{*}_{i+1/2}f^{n,*}_{i+1/2,j}$ (19) $\displaystyle-$ $\displaystyle e^{2u_{i-1/2}}v^{*}_{i-1/2}f^{n,*}_{i-1/2,j}\bigg{.}\bigg{)}.$ The momentum advection term is discretised in a similar way to the spatial advection term. 
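A minimal NumPy sketch of one explicit update combining the diffusion and spatial-advection terms on a 2D (radius $\times$ momentum) array is given below, written from the continuum forms in Eqs. 13 and 16 (so the half-point diffusion factors carry $e^{u_{i\pm 1/2}}$, as Eq. 13 implies) together with the upwinding of Eqs. 17-19. The array layout, helper names and the vectorised form are ours, not the authors' code; boundary cells are left untouched here and are treated separately (Appendix A.2).

```python
import numpy as np

def step_diffusion_advection(f, u, kappa_tilde, v, dt):
    """One explicit update of the diffusion + spatial-advection terms.

    f           : (N_r, N_p) phase-space density on the log-radial grid u = ln r
    u           : (N_r,) uniformly spaced logarithmic radial coordinates
    kappa_tilde : (N_r, N_p) diffusion coefficient expressed in terms of u
    v           : (N_r,) radial wind speed (same length units as r, per time)
    dt          : timestep
    """
    du = u[1] - u[0]
    fn = f.copy()
    u_ph = 0.5 * (u[1:] + u[:-1])                        # u_{i+1/2}

    # --- diffusion: e^{-3u} d/du ( e^{u} kappa df/du ), FTCS -------------
    k_ph = 0.5 * (kappa_tilde[1:, :] + kappa_tilde[:-1, :])
    dflux = np.exp(u_ph)[:, None] * k_ph * (f[1:, :] - f[:-1, :]) / du
    fn[1:-1, :] += dt * np.exp(-3.0 * u[1:-1])[:, None] * (dflux[1:, :] - dflux[:-1, :]) / du

    # --- spatial advection: -e^{-3u} d/du ( e^{2u} v f ), first-order upwind
    v_ph = 0.5 * (v[1:] + v[:-1])                        # resolved velocity v*_{i+1/2}
    f_up = np.where(v_ph[:, None] >= 0.0, f[:-1, :], f[1:, :])   # resolved state f*
    aflux = np.exp(2.0 * u_ph)[:, None] * v_ph[:, None] * f_up
    fn[1:-1, :] -= dt * np.exp(-3.0 * u[1:-1])[:, None] * (aflux[1:, :] - aflux[:-1, :]) / du
    return fn
```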
Thus, the momentum advection term can be expressed as, $\frac{\partial f}{\partial t}=\frac{(\bm{\nabla}\cdot\bm{v})}{3}\frac{\partial f}{\partial\mathrm{ln}p}=\frac{(\bm{\nabla}\cdot\bm{v})}{3}\frac{\partial f}{\partial w}.$ (20) For the 1D spherical case this becomes $\frac{\partial f}{\partial t}=\frac{1}{3r^{2}}\frac{\partial}{\partial r}(r^{2}v)\frac{\partial f}{\partial w}=\frac{e^{-3u}}{3}\left(\frac{\partial e^{2u}v}{\partial u}\right)\frac{\partial f}{\partial w}$ (21) where we can rewrite this as a differencing scheme in terms of an effective velocity, $v^{\prime}_{i,j+1/2}$, as $f^{n+1}_{i,j}=f^{n}_{i,j}+\frac{\Delta te^{-3u_{i}}}{3\Delta w}v^{\prime}_{i,j+1/2}\left(f^{n,*}_{i,j+1/2}-f^{n,*}_{i,j-1/2}\right)$ (22) where $v^{\prime}_{i,j+1/2}=\left(\frac{\partial e^{2u}v}{\partial u}\right)_{i,j}=\frac{e^{2u_{i+1/2}}v_{i+1/2}-e^{2u_{i-1/2}}v_{i-1/2}}{\Delta u}$ (23) and is independent of the index $j$. Finally, $f^{n,*}_{i,j+1/2}=\begin{cases}f^{n}_{i,j}\hskip 28.45274pt\text{if }v^{\prime}_{i,j+1/2}\geq 0\\\ f^{n}_{i,j+1}\hskip 19.91692pt\text{if }v^{\prime}_{i,j+1/2}<0\\\ \end{cases}$ (24) and similarly for $f^{n,*}_{i,j-1/2}$. Thus the overall scheme for Eq. 1 is given by, $\displaystyle f^{n+1}_{i,j}$ $\displaystyle=$ $\displaystyle f^{n}_{i,j}+\frac{\Delta t}{(\Delta u)^{2}}e^{-3u_{i}}\bigg{(}e^{-u_{i+1/2}}\tilde{\kappa}_{i+1/2,j}\left[f^{n}_{i+1,j}-f^{n}_{i,j}\right]\bigg{.}$ (25) $\displaystyle-$ $\displaystyle e^{-u_{i-1/2}}\tilde{\kappa}_{i-1/2,j}\left[f^{n}_{i,j}-f^{n}_{i-1,j}\right]\bigg{.}\bigg{)}$ $\displaystyle-$ $\displaystyle\frac{\Delta te^{-3u_{i}}}{\Delta u}\bigg{(}e^{2u_{i+1/2}}v^{*}_{i+1/2}f^{n,*}_{i+1/2,j}\bigg{.}$ $\displaystyle-$ $\displaystyle e^{2u_{i-1/2}}v^{*}_{i-1/2}f^{n,*}_{i-1/2,j}\bigg{.}\bigg{)}$ $\displaystyle+$ $\displaystyle\frac{\Delta te^{-3u_{i}}}{3\Delta w}v^{\prime}_{i,j+1/2}\left(f^{n,*}_{i,j+1/2}-f^{n,*}_{i,j-1/2}\right)$ ### A.2 Boundary conditions The inner radial boundary condition is reflective meaning the cosmic rays cannot enter/leave via this boundary. To implement this boundary condition in the code we treat the spatial diffusion and advection terms separately. For the spatial advective term the velocity of the solar wind in the boundary cell is set to be the opposite of the velocity of the solar wind in the cell beside the boundary, i.e. $v_{0}=-v_{1}$ which ensures that the advective flux across the boundary is zero ($v^{*}_{1/2}f^{n,*}_{1/2,j}=0$). To implement a reflective boundary for the diffusion term we ensure that the diffusive flux across the boundary is zero, i.e. $\kappa_{1/2,j}\nabla f|_{1/2,j}=0$. Therefore, $f^{n}_{0,j}=f^{n}_{1,j}$. The outer radial boundary condition is a fixed boundary condition set to the LIS value in the radial boundary cell. This is implemented in the code by simply fixing the value of the boundary cell to the LIS value which is constant in time. Cosmic rays can enter/leave the spatial grid via the outer radial boundary condition but they do not decrease/increase the value of the boundary cell. The lower and upper momentum boundary conditions are both outflow. This means no momentum is advected onto the momentum grid via the momentum boundaries, but momentum may leave the computational domain via these boundaries which requires no change to the current upwind numerical scheme. To ensure that momentum is not advected onto the grid, for the lower momentum boundary this requires that if $v^{\prime}_{i,1/2}\geq 0$ then $f^{n,*}_{i,1/2}=0$. 
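Below is a short sketch of how the boundary conditions of Appendix A.2 could be applied to such an array; the function and variable names are illustrative and not taken from the authors' implementation.

```python
import numpy as np

def apply_boundaries(f, f_lis):
    """Apply the boundary conditions of Appendix A.2 (a sketch).

    f     : (N_r, N_p) phase-space density, including the radial boundary cells
    f_lis : (N_p,) LIS phase-space density held fixed at the outer radial boundary
    """
    # Inner radial boundary: reflective -> zero diffusive flux across r_0.
    f[0, :] = f[1, :]
    # Outer radial boundary: fixed to the (time-independent) LIS value.
    f[-1, :] = f_lis
    # Momentum boundaries are outflow: nothing is advected onto the grid,
    # which the upwind scheme enforces by setting the resolved state f* = 0
    # when the effective momentum-advection velocity points into the domain
    # (Eqs. 22-24).
    return f

def reflected_velocity(v):
    """Boundary-cell wind speed enforcing zero advective flux at the inner edge (v_0 = -v_1)."""
    v = np.asarray(v, dtype=float).copy()
    v[0] = -v[1]
    return v
```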
Similarly for the upper momentum boundary, if $v^{\prime}_{i,M-1/2}\leq 0$ then $f^{n,*}_{i,M-1/2}=0$. ### A.3 Timestep To define the timestep for our scheme we first define a Courant condition for each separate term in Eq. 1. Thus, the diffusive timestep is defined as $\Delta t_{\mathrm{dif}}=\mathrm{min}((\Delta x_{i})^{2}/4\kappa_{i,j})$, the spatial advection timestep is defined as $\Delta t_{\mathrm{adv}}=\mathrm{min}(\Delta x_{i}/v_{i})$ and the momentum advection timestep is defined as $\Delta t_{\mathrm{mom}}=\mathrm{min}(3\Delta x_{i}\mathrm{ln(}\Delta p_{j})/v_{i})$. Then, the overall timestep for the scheme is defined as $\Delta t=\left(\frac{1}{\alpha\Delta t_{\mathrm{dif}}}+\frac{1}{\Delta t_{\mathrm{adv}}}+\frac{1}{\Delta t_{\mathrm{mom}}}\right)^{-1}$ (26) where $\alpha=1/6$ is chosen. Since the diffusion coefficient and the velocity profile of the solar wind remain constant at a given simulated epoch the timestep for the scheme also remains constant for a given simulation run. ### A.4 Model validation using present-day data We use current observations of Galactic cosmic rays at Earth and in the local ISM to compare with and constrain our numerical model. The Earth observations consist of IMP 8 (McDonald, 1998), BESS (from Shikaze et al., 2007) and PAMELA (from Adriani et al., 2013) data spanning a number of years. The local ISM observations are taken from Voyager 1 (Cummings et al., 2016). Our model can be seen to fit the observations well. An average magnetic field strength of 1.3 G is used at the wind base, which is derived from a large scale magnetic field map of the Sun, as an input for the stellar wind model. We note that the value of 1.3 G agrees with the observed magnetic field strength of the dipolar component of the Sun averaged over solar cycles 21 to 23 (see Fig. 1 in Johnstone et al., 2015). Overall, the results from our model at 1 au match the observations quite well, with small discrepancies that are most likely due to the use of a simple 1D model to model an intrinsically asymmetric system. These small discrepancies could also be related to the variation of cosmic rays due to the solar cycle, which are not accounted for in the present paper. Figure 7: Differential intensity of Galactic cosmic rays as a function of kinetic energy. The solid black line represents a model fit of the Voyager 1 data (the green diamonds) for the LIS which are the values used at the spatial outer boundary (122 au). The red dots represent our simulation results for 1 au. The yellow, magenta and blue triangles are the IMP 8, BESS and PAMELA observations, respectively. ### A.5 Resolution test We perform a resolution study using the $||\ell||_{2}$ norm for the simulation set-up using the present day values for the solar wind (given in Table 1), shown in Fig. 8. The $||\ell||_{2}$ norm is defined as $||\ell(a,b)||_{2}=\sqrt{\frac{1}{n}\sum^{n}_{i=0}|x_{i,j;a}-x_{i,j;b}|^{2}}$ (27) where the indices $i$ and $j$ indicate the spatial and momentum positions. The indices $a,b$ correspond to two simulations with different resolutions. Five resolutions are considered increasing the number of bins in the radial (and momentum) direction with $N_{\mathrm{r}}(=N_{\mathrm{p}})=30,60,90,120,180$. The $||\ell||_{2}$ norm is calculated at the same time for each of the simulations. This time is chosen to be sufficiently large that the solution has effectively reached a steady state. 
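The sketch below collects the harmonic combination of the three Courant-type limits in Eq. 26 and the $||\ell||_{2}$ norm of Eq. 27. We interpret $\mathrm{ln}(\Delta p_{j})$ in the momentum-advection limit as the logarithmic momentum bin width; the function names and array shapes are our own illustrative choices.

```python
import numpy as np

def combined_timestep(dx, dlnp, kappa, v, alpha=1.0/6.0):
    """Overall timestep of Eq. 26.

    dx    : (N_r,) radial cell widths
    dlnp  : logarithmic momentum spacing (scalar)
    kappa : (N_r, N_p) diffusion coefficient
    v     : (N_r,) wind speed
    """
    dt_dif = np.min(dx[:, None]**2 / (4.0 * kappa))
    dt_adv = np.min(dx / np.abs(v))
    dt_mom = np.min(3.0 * dx * dlnp / np.abs(v))
    return 1.0 / (1.0 / (alpha * dt_dif) + 1.0 / dt_adv + 1.0 / dt_mom)

def l2_norm(x_a, x_b):
    """Eq. 27: rms difference between two runs sampled on a common grid."""
    return np.sqrt(np.mean(np.abs(x_a - x_b)**2))
```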
A plot of $||\ell(a,b)||_{2}$ on a log-log scale should yield a straight line with a slope between -1 and -2 for our scheme since it is second order in space for the diffusive term but first order in space for advective terms. It is also first order in time but since the solutions are close to steady-state, as noted above, this will not manifest itself in this resolution study. The least-squares fitted slope of the data gives -1.74 indicating that the code is converging as expected and we conclude that our results are well resolved. Figure 8: $||\ell||_{2}$ norm plotted as a function of resolution where $N_{r,p}$ means $N_{r}=N_{p}$ where $N_{r}$ is the number of grid zones in the spatial direction and $N_{p}$ is the number of momentum bins used. ## Appendix B Cosmic ray parameters Throughout the paper we have used the same transport properties for the Galactic cosmic rays as a function of time. Here, we briefly discuss what this assumption physically implies about the system. The power law index, $\gamma$, from Eq. 3, reflects the driving source of the turbulence in the solar wind which determines the turbulence power spectrum. The parameter $\eta_{0}$ describes the level of turbulence in the solar wind with a higher value meaning that the cosmic rays travel further before scattering. Events such as coronal mass ejections (CMEs) are thought to drive of the turbulence in the solar wind but the exact connections still remain debated (Cranmer, 2017). Small scale convective motions on the solar surface (McIntosh et al., 2011) could additionally be transferred via Alfvén waves to the large scale dipolar magnetic field structure and transported outwards in the solar wind but it is also possible that these waves will dissipate in the corona. Based on the solar flare-CME relation (Schmieder et al., 2015) it is thought that young stars could produce more CMEs (Osten & Wolk, 2015) because they have been found to have higher flare rates (Maehara et al., 2012). This may lead to a stronger turbulent component of the magnetic field in the stellar system. At the same time young stars also have stronger magnetic fields and so how the ratio of $(B/\delta B)^{2}$ might change for a star younger than the Sun is overall unclear, as well as the fact that the stronger stellar magnetic fields of young stars may confine stellar CMEs (Alvarado-Gómez et al., 2018). Generally though, a decrease in $(B/\delta B)^{2}$ means smaller diffusion coefficients which would increase the level of modulation suffered by Galactic cosmic rays. In our model we adopt $\eta_{0}=1$ which is already at the Bohm limit where the cosmic rays scatter once per gyroradius. Thus, in our model the magnetic field is already as turbulent as it can be using the diffusion approximation. If instead the level of turbulence in the magnetic field decreased as a function of increasing stellar rotation rate (i.e. larger values for $\eta_{0}$) the Galactic cosmic rays would not suffer as much modulation as presented here. For solar-type stars older than the Sun it is possible that a decrease in CME rates could result in less turbulence in the solar wind. This would lead to larger diffusion coefficients for Galactic cosmic rays and less modulation than is presented for $\Omega=0.87\Omega_{\odot}$ in Fig. 2, for instance. ## Appendix C Magnetic field and velocity profiles from the stellar wind model In Fig. 
9 we show the magnitude of the magnetic field components and the radial velocity as a function of radius for two of the rotation rates that we adopt ($1\Omega_{\odot}$ and $4\Omega_{\odot}$). The dashed lines represent values derived from the stellar wind model, as described in Section 2.3 which extend to 1 au. The solid lines represent the values that we use in the cosmic ray model which extend from 0.1 au out to the edge of the stellar astrosphere. From 0.1-1 au we use the values from the stellar wind model and beyond 1 au we extrapolate from the values of the quantities at 1 au as described in Section 2.3.2. (a) (b) Figure 9: (a) and (b) show the magnetic field and velocity profiles as a function of radius for two different stellar rotation rates, $1\Omega_{\odot}$ and $4\Omega_{\odot}$. The dashed lines represent the values derived from the stellar wind model (labelled as ‘SWM’ in the plots) which extend to 1 au. The solid lines represent the values used for the cosmic ray model (labelled as ‘CRM’ in the plots) which assume the values from the stellar wind model out to 1 au and then extrapolate the values from 1 au out to the edge of the astrosphere as described in Section 2.3.2. ## Appendix D Modified force field approximation comparison Here, in Fig. 10, we present the comparison of our simulation results with the modified force field approximation. Our simulations results showed more suppression at low energies than the normal force field approximation. This led us to provide a modified force field approximation, given in Eq. 10, which matches our results at 1 au well for $\Omega\leq 2.1\Omega_{\odot}$. Therefore Eq. 10, along with the values of $\phi$ given in Table 1, can be used to reproduce these results. For $\Omega>2.1\Omega_{\odot}$, the modified force field approximation matches the peak well but fails to reproduce our simulation results at the lowest kinetic energies. Figure 10: Differential intensity of Galactic cosmic rays as a function of kinetic energy. The solid black line represents a model fit of the Voyager 1 data for the LIS. The coloured dashed lines and the red shaded area represent our simulation results, as shown in Fig. 2. The open symbols represent the modified force field approximation which fits our simulation results well, with the value of $\phi$ that is used shown in the figure. See Section 3.1.1 for more details. ## Data Availability The data underlying this article will be shared on reasonable request to the corresponding author. ## References * Adriani et al. (2013) Adriani O., Barbarino G. C., Bazilevskaya G. A., Bellotti R., Boezio M., Bogomolov E. A., Bongi M., Bonvicini V., Borisov S., Bottai S., Bruno A., Cafagna F., Campana D., 2013, ApJ, 765, 91 * Alvarado-Gómez et al. (2018) Alvarado-Gómez J. D., Drake J. J., Cohen O., Moschou S. P., Garraffo C., 2018, ApJ, 862, 93 * Arzoumanian et al. (2011) Arzoumanian D., Jardine M., Donati J. F., Morin J., Johnstone C., 2011, MNRAS, 410, 2472 * Boro Saikia et al. (2020) Boro Saikia S., Jin M., Johnstone C. P., Lüftinger T., Güdel M., Airapetian V. S., Kislyakova K. G., Folsom C. P., 2020, A&A, 635, A178 * Caballero-Lopez & Moraal (2004) Caballero-Lopez R. A., Moraal H., 2004, Journal of Geophysical Research (Space Physics), 109, A01101 * Carolan et al. (2019) Carolan S., Vidotto A. A., Loesch C., Coogan P., 2019, MNRAS, 489, 5784 * Cleeves et al. (2013) Cleeves L. I., Adams F. C., Bergin E. A., 2013, ApJ, 772, 5 * Cleeves et al. (2015) Cleeves L. I., Bergin E. A., Qi C., Adams F. C., Öberg K. 
I., 2015, ApJ, 799, 204 * Cohen et al. (2012) Cohen O., Drake J. J., Kóta J., 2012, ApJ, 760, 85 * Cranmer (2017) Cranmer S. R., 2017, ApJ, 840, 114 * Cummings et al. (2016) Cummings A. C., Stone E. C., Heikkila B. C., Lal N., Webber W. R., Jóhannesson G., Moskalenko I. V., Orlando E., Porter T. A., 2016, ApJ, 831, 18 * Finley et al. (2019) Finley A. J., Hewitt A. L., Matt S. P., Owens M., Pinto R. F., Réville V., 2019, ApJ, 885, L30 * Folsom et al. (2016) Folsom C. P., Petit P., Bouvier J., Lèbre A., Amard L., Palacios A., Morin J., Donati J. F., Jeffers S. V., Marsden S. C., Vidotto A. A., 2016, MNRAS, 457, 580 * Gallet & Bouvier (2013) Gallet F., Bouvier J., 2013, A&A, 556, A36 * Gardner et al. (2006) Gardner J. P., Mather J. C., Clampin M., Doyon R., Greenhouse M. A., Hammel H. B., Hutchings J. B., Jakobsen P., Lilly S. J., et al. 2006, Space Science Reviews, 123, 485 * Gleeson & Axford (1968) Gleeson L. J., Axford W. I., 1968, ApJ, 154, 1011 * Gleeson & Urch (1973) Gleeson L. J., Urch I. H., 1973, Ap&SS, 25, 387 * Grießmeier et al. (2009) Grießmeier J. M., Stadelmann A., Grenfell J. L., Lammer H., Motschmann U., 2009, Icarus, 199, 526 * Grießmeier et al. (2005) Grießmeier J. M., Stadelmann A., Motschmann U., Belisheva N. K., Lammer H., Biernat H. K., 2005, Astrobiology, 5, 587 * Grießmeier et al. (2015) Grießmeier J. M., Tabataba-Vakili F., Stadelmann A., Grenfell J. L., Atri D., 2015, A&A, 581, A44 * Helling et al. (2008a) Helling C., Dehn M., Woitke P., Hauschildt P. H., 2008a, ApJ, 675, L105 * Helling et al. (2008b) Helling C., Dehn M., Woitke P., Hauschildt P. H., 2008b, ApJ, 677, L157 * Helling & Rimmer (2019) Helling C., Rimmer P. B., 2019, Philosophical Transactions of the Royal Society of London Series A, 377, 20180398 * Ivanova & Taam (2003) Ivanova N., Taam R. E., 2003, ApJ, 599, 516 * Jardine et al. (2017) Jardine M., Vidotto A. A., See V., 2017, MNRAS, 465, L25 * Johnstone et al. (2010) Johnstone C., Jardine M., Mackay D. H., 2010, MNRAS, 404, 101 * Johnstone et al. (2015) Johnstone C. P., Güdel M., Brott I., Lüftinger T., 2015, A&A, 577, A28 * Johnstone et al. (2015) Johnstone C. P., Güdel M., Lüftinger T., Toth G., Brott I., 2015, A&A, 577, A27 * Jokipii (1966) Jokipii J. R., 1966, ApJ, 146, 480 * Jokipii (1971) Jokipii J. R., 1971, Reviews of Geophysics and Space Physics, 9, 27 * Jokipii et al. (1977) Jokipii J. R., Levy E. H., Hubbard W. B., 1977, ApJ, 213, 861 * Konopacky et al. (2016) Konopacky Q. M., Rameau J., Duchêne G., et al. 2016, ApJ, 829, L4 * Lang et al. (2014) Lang P., Jardine M., Morin J., Donati J. F., Jeffers S., Vidotto A. A., Fares R., 2014, MNRAS, 439, 2122 * Lehmann et al. (2019) Lehmann L. T., Hussain G. A. J., Jardine M. M., Mackay D. H., Vidotto A. A., 2019, MNRAS, 483, 5246 * Maehara et al. (2012) Maehara H., Shibayama T., Notsu S., Notsu Y., Nagao T., Kusaba S., Honda S., Nogami D., Shibata K., 2012, Nature, 485, 478 * McComas et al. (2008) McComas D. J., Ebert R. W., Elliott H. A., Goldstein B. E., Gosling J. T., Schwadron N. A., Skoug R. M., 2008, Geophysical Research Letters, 35, L18103 * McDonald (1998) McDonald F. B., 1998, Space Science Reviews, 83, 33 * McIntosh et al. (2011) McIntosh S. W., de Pontieu B., Carlsson M., Hansteen V., Boerner P., Goossens M., 2011, Nature, 475, 477 * Mojzsis et al. (1996) Mojzsis S. J., Arrhenius G., McKeegan K. D., Harrison T. M., Nutman A. P., Friend C. R. L., 1996, Nature, 384, 55 * Moore et al. (2019) Moore L., Melin H., J. O., S. S. T., I. M. J., M. G., S. M., A. S. 
# Image reconstruction from tissue scattered events for ${\beta}^{+}\gamma$ coincidences in Compton-PET

Satyajit Ghosh Department of Physics, Indian Institute of Technology Bombay, Mumbai, INDIA Pragya Das Department of Physics, Indian Institute of Technology Bombay, Mumbai, INDIA

Abstract For a long time, non-pure beta emitters have been avoided in PET imaging because of the extra dose and the increased background from Compton scattering. However, the advent of high-resolution Compton camera systems opens up a new domain of imaging. Various non-pure beta emitters are produced as byproducts of beam irradiation in therapy and can be used for online beam range verification. In this case, the number of usable counts for imaging is generally 1-3 orders of magnitude lower than in a normal PET scan. On the other hand, it is known that in human PET scanners 30-60% of the coincidences in the 3D case can be tissue scattered, of which about 80% are single scattered events. In this work, we have investigated the feasibility of imaging using only single scattered coincidences of non-pure beta emitters in a Compton-PET system. The locus of the tissue scattering point can generally be reduced to two points by using the Compton cones from both ends of the 511 keV detections. Finally, the annihilation point is estimated using the Compton cone of the 1157 keV gamma and time-of-flight information for the 511 keV photons. We believe that an independent assessment of the underlying activity from single scattered data will increase confidence in image interpretation.

---

## 1 Introduction

For a long time, non-pure beta emitter radioisotopes (e.g., 44mSc, 94Tc, 14O, 68Ga, 124I, 10C) have not been used in PET imaging. This is because of the extra dose and the Compton scattering background produced by the quasi-simultaneously emitted extra gamma ray. With the development of Compton camera systems of excellent resolution, this situation has changed. A new imaging concept using triple coincidence data was proposed [1]. In this approach, the Compton cone drawn from the interaction points of the extra gamma is used to estimate the annihilation point on the LOR, similar to TOF-PET imaging [2]. Applications of these radioisotopes are generally of two types. They are used as conventional radiopharmaceuticals, e.g., [44Sc]Sc-PSMA-617 [3] in prostate cancer imaging. In addition, various non-pure beta emitters are formed as byproducts of beam irradiation in ion therapy [4] and can therefore serve as online or offline beam range monitoring agents. In this case, however, the emitted count is generally 1-3 orders of magnitude smaller than in a conventional PET scan [5]. On the other hand, it is known that tissue scattering can contribute 30-60% of the coincidences in human 3D-PET [6], of which 80% are single scattered [7]. Therefore, in this work we have investigated the feasibility of image reconstruction from these tissue scattered events in a Compton-PET system. Our aim is to produce a physically meaningful image from the single scattered data that is independent of the unscattered data. We believe that having two independent images of the same underlying activity distribution will assist in diagnosis. In this context, it is worth mentioning that the motivation of the whole gamma imaging (WGI) concept is indeed to use all types of data independently [8].

We performed a GATE [9] simulation with finite resolution parameters for a Compton-PET system with a silicon scatterer ring and a LaBr3:Ce absorber ring. The geometrical arrangement and parameters were chosen with the sensitivity and resolution in mind. A line source of 44Sc was used, and a cylindrical water phantom of 10 cm diameter was placed axially.
First, we show that the locus of the tissue scattering point of a single scattered coincidence (the single scatter surface) is a prolate spheroid for scattering angles ${\theta}_{s}<90^{\circ}$ and a spindle toroid for ${\theta}_{s}>90^{\circ}$. To acquire these single scattered coincidences, photo-peak and off-peak energy windows positioned in accordance with the detector resolution were used, and data acquisition was performed using an appropriately defined trigger logic. Compton cones from both ends of the 511 keV detection were projected onto the single scatter surface to obtain two 3D curves, which in general cut each other at two points, forming two possible broken LORs. Finally, the annihilation point was estimated by projecting the Compton cone of the 1157 keV gamma, and TOF information was used to choose between the two candidates. The image we obtained is physically meaningful and demonstrates the feasibility of single scattered imaging for non-pure beta emitters.

## 2 Materials and Methods

We first discuss the locus of the single scattering point in a Compton-PET system. To demonstrate feasibility, a GATE simulation was performed and a trigger logic was developed for data extraction. Finally, an image reconstruction algorithm for single scatter imaging is proposed.

### 2.1 Locus of scattering point

A typical Compton-PET setup is drawn in figure 1. From here on, we refer to the locus as the single scatter surface to avoid confusion. We assume a single scattered coincidence event in which the annihilation happened at point O and the tissue scattering at point C. To find the locus of the scattering point C, we first write down the equation of the locus, which depends only on the scattering angle in tissue (${\theta}_{s}$),

$\overrightarrow{AC}\cdot\overrightarrow{CB}=\left|\overrightarrow{AC}\right|\left|\overrightarrow{CB}\right|\cos{\theta}_{s}$

$\Rightarrow\left(\overrightarrow{r}-\overrightarrow{r_{A}}\right)\cdot\left(\overrightarrow{r_{B}}-\overrightarrow{r}\right)=\left|\overrightarrow{r}-\overrightarrow{r_{A}}\right|\left|\overrightarrow{r_{B}}-\overrightarrow{r}\right|\cos{\theta}_{s}$ (1)

where the tissue scattering angle is calculated as

${\theta}_{s}=\arccos\left(2-\frac{511}{{E_{1}}+{E_{2}}}\right)$

with $E_{1}$ and $E_{2}$ the energies deposited in the scatterer and absorber by the tissue scattered photon; here we have assumed full energy deposition.

Figure 1: Compton-PET setup with a single scatter event detected at points A, B (scatterer ring detection points) and points D, E (absorber ring detection points), where the annihilation happened at O and the tissue scattering at point C; the tissue scattering angle is ${\theta}_{s}$ and the scattering angles in the scatterer ring are ${\theta}_{a}$ and ${\theta}_{b}$; the locus of the scattering point is shown as the blue curve and the Compton cones from both ends of the 511 keV detection are shown as green cones.

Applying the Compton cone constraint from both sides of the 511 keV detection, the tissue scattering point can be further localised.
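To make the relation between the deposited energies and the single scatter surface concrete, the short Python sketch below (ours, not the authors' analysis code) evaluates the tissue scattering angle entering Eq. (1) and classifies the resulting surface, under the same full-energy-deposition assumption; the example energies are hypothetical.

```python
import numpy as np

M_E_C2 = 511.0  # electron rest energy in keV (initial photon energy)

def tissue_scatter_angle(e_scatterer_keV, e_absorber_keV):
    """Tissue scattering angle theta_s (rad) for an annihilation photon that
    Compton-scattered once in the patient, assuming the scattered photon then
    deposits its full energy E1 + E2 in the scatterer and absorber rings,
    so that cos(theta_s) = 2 - 511 / (E1 + E2)."""
    e_total = e_scatterer_keV + e_absorber_keV
    cos_ts = 2.0 - M_E_C2 / e_total
    if not -1.0 <= cos_ts <= 1.0:
        raise ValueError("deposited energies are inconsistent with a single 511 keV scatter")
    return float(np.arccos(cos_ts))

def single_scatter_surface_type(theta_s):
    """Shape of the locus of the tissue scattering point, Eq. (1)."""
    return "prolate spheroid" if theta_s < np.pi / 2 else "spindle toroid"

# Example: 150 keV deposited in the Si scatterer and 210 keV in the LaBr3 absorber.
theta_s = tissue_scatter_angle(150.0, 210.0)
print(np.degrees(theta_s), single_scatter_surface_type(theta_s))  # ~54.5 deg, prolate spheroid
```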
The equations of the Compton cones are

$\left(\overrightarrow{r}-\overrightarrow{r_{A}}\right)\cdot\widehat{n_{A}}=\left|\overrightarrow{r}-\overrightarrow{r_{A}}\right|\cos{\theta}_{a}$ (2)

and

$\left(\overrightarrow{r}-\overrightarrow{r_{B}}\right)\cdot\widehat{n_{B}}=\left|\overrightarrow{r}-\overrightarrow{r_{B}}\right|\cos{\theta}_{b}$ (3)

where ${\theta}_{a}$ and ${\theta}_{b}$ are the scattering angles in the scatterer ring and $\widehat{n_{A}}$ and $\widehat{n_{B}}$ are the unit vectors along the lines joining the absorption points to the scattering points, respectively. Equation (1), which represents the single scatter surface (blue curve in figure 1), is the surface equation of a prolate spheroid for ${\theta}_{s}<90^{\circ}$ and of a spindle toroid for ${\theta}_{s}>90^{\circ}$. To further constrain the locus of the scattering point, we solved eq. (1) together with eqs. (2) and (3), i.e., we intersected the single scatter surface with the two cones. We found the solutions to be closed 3D contour curves on the single scatter surface, and the two curves from both ends of the 511 keV detection generally cut each other at two points. Further discussion is given in section 3.

### 2.2 GATE simulation

We performed a GATE [9] simulation of a Compton-PET system. A silicon scatterer ring of thickness 2.5 cm and radius 20 cm was chosen; the radius was chosen relatively large since we are working with a human-scale scanner. A LaBr3:Ce absorber ring of radius 28 cm and thickness 3 cm was used. The axial width of each ring was 28 cm. The energy resolutions of the scatterer and absorber were 2.5% and 5% at 511 keV, the time resolutions were 1 ns and 200 ps, and the spatial resolutions were 2 mm and 5 mm, respectively. For the image resolution study, a 44Sc line source of activity 1 MBq, situated at the centre of the scanner, was used. The activity was chosen low to keep the number of random events small. A cylindrical water phantom of diameter 10 cm and height 28 cm was defined axially. The decision to work with a human-scale Compton-PET setup was made because the scatter fraction in a human PET scan is significant enough to motivate the proposed idea, whereas in small animal imaging the scatter fraction is not as high.

### 2.3 Trigger logic

After generating the data from the GATE simulation, we defined a trigger logic to select usable, valid triple gamma single scattered events. A coincidence time window of 10 ns was used for data selection. First, two different energy windows were used to select the scatterer detector interactions: 10-255 keV for the 511 keV photons and 255-818 keV for the 1157 keV gamma. If three hits in the above energy windows (two for 511 keV and one for 1157 keV) were obtained in the scatterer, we collected all the events in the absorber ring falling within the coincidence time window and kept only events with exactly three hits in the absorber ring. In the next stage, a correspondence between the individual scatterer hits and the absorber interactions was made. First, the absorber hit corresponding to the 1157 keV gamma was identified based on the closeness of the summed energy to 1157 keV; the remaining two absorber hits were then allocated based on their distance from the scatterer hits. Finally, single scattered coincidences were acquired using photo-peak and off-peak energy windows of 495-525 keV and 250-495 keV, respectively.

### 2.4 Image reconstruction

Image reconstruction was performed without applying any typical algorithm (e.g., MLEM, OSEM [7]).
Rather, the annihilation points were estimated independently for each event. First, we calculated the two possible scattering points on the single scatter surface as explained in section 2.1. The Compton cone of the 1157 keV gamma was then projected onto the two separate broken LORs to obtain at most four possible annihilation points (figure 2). One of these points was selected based on the FOV constraint and the TOF information.

Figure 2: A single scattered event is detected at points A, B in the scatterer ring and at points D, E in the absorber ring; the intersection between the single scatter surface and the two Compton cones for the 511 keV photons leaves us with two possible scattering points K, J; the intersection between the Compton cone of the 1157 keV gamma and the broken LORs gives four possible annihilation points; one of these is chosen based on the FOV constraint and the TOF information.

## 3 Results

As described in section 2.1, the locus can be further constrained using the Compton cones from both sides of the 511 keV detection. The intersection between the single scatter surface and each cone is a closed 3D curve (red and green curves in figure 3), and two such curves generally cut each other at two points. It is worth mentioning that the existence of two intersection points is not due to the finite resolution of the detectors, and hence the two points can be quite far apart (figure 3).

Figure 3: The intersection between the single scatter surface (yellow envelope) and the Compton cones from each end of the 511 keV detection is shown as the green and red curves; in general these two curves cut each other at two points; note that the two points are not a consequence of the finite detector resolution.

We performed the GATE simulation of the Compton-PET system with the parameters described in section 2.2. The ROOT output data were then processed using the trigger logic described in section 2.3, implemented through MATLAB scripts. Figures 4-6 show the 2D histograms of scatterer versus absorber energy deposition for the 511 keV and 1157 keV photons. For unscattered photons we find the x+y=511 keV line, where x and y are the absorber and scatterer energy depositions respectively, which shows that the proposed trigger logic is able to collect the 511 keV data (figure 4). The line is cut off at scatterer energies of 10 keV and 255 keV because of the energy window on the scatterer deposition (see section 2.3), and the width of the x+y=511 line is set by the photo-peak width chosen in the trigger logic. For the 1157 keV detection a similar x+y=1157 keV line can be seen (figure 5), with discontinuities at scatterer energies of 255 keV and 818 keV due to the energy window applied in the trigger logic. Note that, unlike the 511 keV case, there are counts below the x+y=1157 line. This is because no window is applied on the total energy, in contrast to the photo-peak energy window for 511 keV; we have assumed a full deposition of the 1157 keV gamma energy in the scatterer and absorber. Finally, for single scattered events, instead of a line we have an area bounded by x+y=250 keV, x+y=495 keV, x=10 keV, and x=255 keV (figure 6). The first two bounds are due to the off-peak window and the last two are applied in the initial stage of the trigger logic.

Figure 4: Count histogram (colour plot) of scatterer versus absorber energy deposition for 511 keV unscattered photon detection.

Figure 5: Count histogram (colour plot) of scatterer versus absorber energy deposition for 1157 keV photon detection.
Figure 6: Count histogram (colour plot) of scatterer versus absorber energy deposition for 511 keV single scattered photon detection.

Figure 7: Single scattered image (cross-sectional) of the line source; the pixel size was chosen to be $8\times 8$ $mm^{2}$.

Figure 8: Intensity line profile (horizontal) of the single scatter image through the (0,0) point; the pixel size was chosen to be $8\times 8\ mm^{2}$.

Finally, we produced an image from the single scattered data set (figure 7). For the image resolution study, we calculated the FWHM of the horizontal intensity line profile of the image. The profile is shown in figure 8, with an FWHM of 35.864 mm. To sum up, we were able to produce physically meaningful single scatter images, which demonstrates the feasibility of single scatter imaging for a Compton-PET system with a triple gamma source.

## 4 Conclusion

We have proposed and investigated the feasibility of imaging from single scattered (inside tissue) data in triple gamma imaging. Tissue scattered data in a human PET scan can amount to 40-60% of the coincidences. At the same time, triple gamma imaging suffers from low counts, especially in online ion range verification in ion therapy, and in that context the idea of imaging from scattered data is relevant. Although an image with better resolution than the unscattered image cannot be expected from scattered data due to inherent resolution effects, we believe that producing images from the two independent data sets (unscattered and single scattered) will improve diagnostic ability. We have shown the feasibility of the proposed concept: analysing GATE simulation data, we are able to produce physically meaningful images. The trigger logic used here is not claimed to be optimal; rather, simplicity was chosen to keep the task computationally manageable, as this work addresses only feasibility. We believe that the proposed idea can be beneficial for triple-gamma-imaging-based beam range monitoring and for late time-point imaging in the case of Scandium DOTA-TOC imaging.

## 5 Acknowledgement

The authors wish to gratefully acknowledge the Center for Development of Advanced Computing (C-DAC), Pune, India, for providing the supercomputing facilities [10] used for the data analysis.

## References

* [1] C Grignon et al. “Nuclear medical imaging using $\beta^{+}\gamma$ coincidences from 44Sc radio-nuclide with liquid xenon as detection medium” In _Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment_ 571.1-2 Elsevier, 2007, pp. 142–145
* [2] D Giovagnoli et al. “A Pseudo-TOF Image Reconstruction Approach for Three-Gamma Small Animal Imaging” In _IEEE Transactions on Radiation and Plasma Medical Sciences_ IEEE, 2020
* [3] Elisabeth Eppard “Pre-Therapeutic Dosimetry Employing Scandium-44 for Radiolabeling PSMA-617” In _Prostatectomy_ IntechOpen, 2018
* [4] K Parodi, A Ferrari, F Sommerer and H Paganetti “Clinical CT-based calculations of dose and positron emitter distributions in proton therapy using the FLUKA Monte Carlo code” In _Physics in Medicine & Biology_ 52.12 IOP Publishing, 2007, pp. 3369
* [5] Christopher Kurz et al. “Investigating the limits of PET/CT imaging at very low true count rates and high random fractions in ion-beam therapy monitoring” In _Medical Physics_ 42.7 Wiley Online Library, 2015, pp. 3979–3991
* [6] Habib Zaidi and Marie-Louise Montandon “Scatter compensation techniques in PET” In _PET clinics_ 2.2 Elsevier, 2007, pp. 219–234
* [7] Dale L Bailey, Michael N Maisey, David W Townsend and Peter E Valk “Positron emission tomography” Springer, 2005
* [8] Eiji Yoshida et al. “Whole gamma imaging: a new concept of PET combined with Compton imaging” In _Physics in Medicine & Biology_ 65.12 IOP Publishing, 2020, pp. 125013
* [9] Sébastien Jan et al. “GATE: a simulation toolkit for PET and SPECT” In _Physics in Medicine & Biology_ 49.19 IOP Publishing, 2004, pp. 4543
* [10] Param Yuva-II “India’s fastest supercomputer, 2013” C-DAC, http://cdac.in/index.aspx, 2013
# Astraios: Parameter-Efficient Instruction Tuning Code Large Language Models

Terry Yue Zhuo 1,2 Armel Zebaze 3 Nitchakarn Suppattarachai 1 Leandro von Werra 3 Harm de Vries 4 Qian Liu 5 Niklas Muennighoff 6

1 Monash University 2 CSIRO’s Data61 3 Hugging Face 4 ServiceNow Research 5 Sea AI Lab 6 Contextual AI

<EMAIL_ADDRESS> https://github.com/bigcode-project/astraios

The work was partially done at Hugging Face.

###### Abstract

The high cost of full-parameter fine-tuning (FFT) of Large Language Models (LLMs) has led to a series of parameter-efficient fine-tuning (PEFT) methods. However, it remains unclear which methods provide the best cost-performance trade-off at different model scales. We introduce Astraios, a suite of 28 instruction-tuned OctoCoder models using 7 tuning methods and 4 model sizes up to 16 billion parameters. Through investigations across 5 tasks and 8 different datasets encompassing both code comprehension and code generation tasks, we find that FFT generally leads to the best downstream performance across all scales, and that PEFT methods differ significantly in their efficacy depending on the model scale. LoRA usually offers the most favorable trade-off between cost and performance. Further investigation into the effects of these methods on both model robustness and code security reveals that larger models tend to demonstrate reduced robustness and less security. Finally, we explore the relationships among updated parameters, cross-entropy loss, and task performance. We find that the tuning effectiveness observed in small models generalizes well to larger models, and that the validation loss in instruction tuning can be a reliable indicator of overall downstream performance.

Figure 1: Mean task performance of Astraios models across 5 representative tasks and 8 datasets. We indicate the average percentage of total parameters updated for each PEFT method.

## 1 Introduction

Large language models (LLMs) (Zhao et al., 2023) trained on code (Code LLMs) have shown strong performance on various software engineering tasks (Hou et al., 2023). There are three main model paradigms: (A) Code LLMs for code completion (Nijkamp et al., 2022; Fried et al., 2022; Li et al., 2023); (B) Task-specific fine-tuned Code LLMs for a single task (Hou et al., 2023); and (C) Instruction-tuned (Ouyang et al., 2022) Code LLMs that excel at following human instructions and generalizing well to unseen tasks (Wang et al., 2023b; Muennighoff et al., 2023c). Recent instruction-tuned Code LLMs, including WizardCoder (Luo et al., 2023) and OctoCoder (Muennighoff et al., 2023a), have achieved state-of-the-art performance on various tasks without task-specific fine-tuning. However, as the parameter counts of Code LLMs increase, it becomes more expensive to perform full-parameter fine-tuning (FFT) to obtain instruction-tuned models. In practice, to save computational cost, parameter-efficient fine-tuning (PEFT) has been applied. This training strategy aims to achieve performance comparable to FFT while updating far fewer parameters. While there are many PEFT methods (Ding et al., 2022), the predominant one is still LoRA, which was proposed in 2021 (Hu et al., 2021). However, there is no empirical evidence showing that LoRA remains the best choice for instruction-tuned Code LLMs. In this paper, we investigate instruction-tuned Code LLMs with the following research question: what are the best PEFT methods for Code LLMs?
Existing analysis on PEFT methods presents several opportunities for further exploration: (1) Beyond Task-Specific LLMs. Most prior works (Zhou et al., 2022; Ding et al., 2023) only focus on the model paradigm (B), where the selected base models are fine-tuned on specific downstream tasks. While these studies provide insights into PEFT methods on task-specific LLMs, the transferability of their findings to the instruction tuning paradigm is unclear. (2) Diverse Domains. Studies on PEFT methods tend to evaluate in the predominant domains like vision (Sung et al., 2022; He et al., 2023) and text (Houlsby et al., 2019; He et al., 2021), leaving other domains like code underexplored. (3) Inclusive PEFT Methods. Prior investigations on PEFT mainly consider a limited number of methods, such as adapter-based tuning (Houlsby et al., 2019) or reparametric tuning (Aghajanyan et al., 2021), which does not capture the full breadth of available methods. (4) Multidimensional Evaluation. Previous works only consider limited evaluation on representative downstream tasks (Chen et al., 2022; Fu et al., 2023; Ding et al., 2023). We argue that other evaluation dimensions like model robustness (Han et al., 2021) and output code safety (Weidinger et al., 2021; Zhuo et al., 2023b; Pearce et al., 2022; Dakhel et al., 2023) are also important, especially in the era of LLM agents (Ouyang et al., 2022; Xie et al., 2023). (5) Scalability. Most prior PEFT work has only explored LLMs with insufficient scales of model sizes and training time, which makes their scalability questionable (Lester et al., 2021; Chen et al., 2022; Hu et al., 2023). To explore these identified opportunities further, we introduce Astraios, a suite of 28 instruction-tuned Code LLMs, which are fine-tuned with 7 tuning methods based on the StarCoder (Li et al., 2023) base models (1B, 3B, 7B, 16B). We instruction-tune the models based on the open-source dataset, CommitPackFT from OctoPack (Muennighoff et al., 2023a), to balance their downstream capabilities. We utilize PEFT configurations with Hugging Face’s best practices (Mangrulkar et al., 2022) and integrate a few PEFT methods from recent frameworks (Hu et al., 2023). We first inspect the scalability of different tuning methods through the lens of cross-entropy loss during instruction tuning. Specifically, we assess the scales of model size and training time. Our main evaluation focuses on 5 representative code tasks, including clone detection (Svajlenko and Roy, 2021), defect detection (Zhou et al., 2019), code synthesis (Muennighoff et al., 2023a), code repair Muennighoff et al. (2023a), and code explain (Muennighoff et al., 2023a). We further study the tuning methods from two aspects: model robustness (Wang et al., 2023a) and code security (Pearce et al., 2022). We assess how well models can generate code based on the perturbed examples and how vulnerable the generated code can be. The main experimental results can be found in Figure 1, where we observe that FFT generally leads to the best downstream performance across all scales. In addition, we find that PEFT methods differ significantly in their efficacy depending on the model scale. At 16B parameters, Parallel Adapter (He et al., 2021) and LoRA (Hu et al., 2021) are the most competitive methods with FFT. Meanwhile, at 1B parameters, they are both slightly outperformed by P-Tuning and (IA)3. Thus, the choice of the PEFT method should be considered along with the model scale at hand. 
Nevertheless, LoRA usually offers the most favourable trade-off between cost and performance. Meanwhile, we also observe that larger PEFT Code LLMs perform better on code generation tasks, while they do not show such a pattern on code comprehension tasks like clone detection and defect detection. In addition, increasing model size improves generation task performance but leads to greater vulnerability to adversarial examples and a bias towards insecure code. Additionally, we investigate the relationships among updated parameters, cross-entropy loss, and task performance. We find that the final loss of small PEFT models can be extrapolated to the larger ones. We also observe strong correlations between final loss and overall downstream task performance. Although the instruction dataset we choose is general and is not directly correlated with the benchmark downstream tasks, we suggest that the performance on such general data can serve as a proxy for downstream performance.

## 2 The Astraios Suite and Benchmark

In this section, we document our model choices, training configurations, and evaluations in detail so that our experimental results can be easily reproduced.

Figure 2: Percentage (%) of total parameters updated for each PEFT method in Astraios models.

Table 1: Summary of tuning methods and the trainable parameters at different model scales.

| Type | Name | 1B | 3B | 7B | 16B |
|---|---|---|---|---|---|
| Low-Rank | LoRA (Hu et al., 2021) | 3,588,096 | 7,372,800 | 12,472,320 | 17,776,640 |
| Prompt | P-Tuning (Liu et al., 2023) | 12,650,496 | 23,882,496 | 50,466,816 | 113,448,960 |
| Adapter | (IA)3 (Liu et al., 2022) | 251,904 | 516,096 | 870,912 | 1,239,040 |
| Adapter | AdapterH (Houlsby et al., 2019) | 50,331,648 | 103,809,024 | 176,160,768 | 251,658,240 |
| Adapter | AdapterP (Pfeiffer et al., 2020) | 25,165,824 | 51,904,512 | 88,080,384 | 125,829,120 |
| Adapter | Parallel (He et al., 2021) | 26,738,688 | 54,263,808 | 90,832,896 | 128,450,560 |
| FFT | FFT | 1,137,207,296 | 3,043,311,104 | 7,327,263,232 | 15,517,456,384 |

### 2.1 Model

#### Base Model

There are many Code LLMs available that could serve as a suitable base model. However, some of them are not fully open, such as Code-Llama (Roziere et al., 2023), where the training data is not discussed. To maximize transparency, we select the StarCoder series as our base models. Concretely, four model scales with 1B, 3B, 7B and 16B parameters are selected.

#### PEFT Model

We focus on three kinds of PEFT methods (Ding et al., 2022): (1) Adapter-based Tuning (Houlsby et al., 2019): an early approach that injects small-scale neural modules as adapters into LLMs and only tunes these adapters for model adaptation. (2) Prompt-based Tuning (Li and Liang, 2021): wraps the original input with additional context, introducing virtual task-specific tokens without adding layers or modules like adapters. (3) Intrinsic-rank-based Tuning (Aghajanyan et al., 2021): a representative method is LoRA, which assumes that the change of weights during model tuning has a low rank, so that low-rank updates to the weight matrices suffice. For all methods, we utilize the implementations in the open-source PEFT library (https://github.com/huggingface/peft; Mangrulkar et al., 2022) and the LLM-Adapters work (Hu et al., 2023) built on top of it. We benchmark 6 PEFT methods, comprising 4 adapter-based, 1 prompt-based, and 1 intrinsic-rank-based tuning methods, as shown in Table 1. We also include FFT for each model size. The ratio of updated parameters for each PEFT method is presented in Figure 2.
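To make the setup concrete, the sketch below shows one way to wrap a StarCoder base model with LoRA adapters using the PEFT library; the rank, target modules, and other hyperparameters here are illustrative assumptions, not the exact Astraios configuration.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load an 8-bit quantized StarCoder base model (PEFT models are trained in 8 bit, see Section 2.2).
model = AutoModelForCausalLM.from_pretrained(
    "bigcode/starcoderbase-1b",
    load_in_8bit=True,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters; only the low-rank adapter weights remain trainable.
lora_config = LoraConfig(
    r=8,                        # low-rank dimension (illustrative value)
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection in the GPTBigCode architecture (assumed target)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports trainable vs. total parameter counts (cf. Table 1)
```

The other tuning methods in Table 1 are wired in analogously, through their respective configuration classes in the PEFT and LLM-Adapters code bases, while the surrounding training loop stays the same.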
### 2.2 Instruction Tuning #### Dataset Following previous work, we select the dataset CommitPackFT+OASST from OctoPack (Muennighoff et al., 2023a) as the instruction tuning dataset, which helps StarCoder to achieve superior performance. We note that there could be other choices by utilizing other datasets (e.g., the publicly available dataset CodeAlpaca (Chaudhary, 2023)) . However, they usually focus on a certain aspect of code-related tasks and lack generality to different tasks. #### Configuration We train all models with a sequence length of 2048 tokens, with the batch size as 1, the warm-up step as 12, and the global steps as 200. We set the learning rate as $1\times 10^{-4}$ for PEFT models and $1\times 10^{-6}$ FFT models with a cosine scheduler in both cases. For PEFT methods, we use 8-bit- quantized models during training (Dettmers et al., 2022). ### 2.3 Evaluation #### Code Comprehension To evaluate code comprehension, we select two representative tasks: clone detection and defect detection. Clone detection aims to identify segments of code that are either exact duplicates or structurally similar with variations in identifiers, literals, types, layout, and comments, or even more broadly similar in terms of functionality. Defect detection targets for identifying bugs, vulnerabilities, or antipatterns in code. We select two widely-used datasets from CodeXGLUE benchmark Lu et al. (2021): BigCloneBench (Svajlenko and Roy, 2021) and Devign (Zhou et al., 2019). As the original BigCloneBench and Devign are designed to evaluate classification models, we prepend additional instructions to prompt the instruction-tuned models to complete such tasks. We follow the evaluation settings of CodeXGLUE and use F1 and Accuracy for BigClone and Devign, respectively. Due to the non-trivial number of test examples in these two datasets, we sample 2,000 from each to save costs. As BigCloneBench and Devign are in the binary classification tasks, we use temperature 0 for model inference to get deterministic outputs. #### Code Generation We use HumanEvalPack (Muennighoff et al., 2023a), a benchmark recently proposed that enables easy evaluation of instruction-tuned Code LLMs. The benchmark is structured around three core tasks in code generation, each designed to test different capabilities of the model. The first task, Code Synthesis, involves the model in synthesizing functional code given a function with a docstring detailing the desired code behavior. The second task, Code Repair, challenges the model to identify and fix a subtle bug in an otherwise correct code function, using provided unit tests as a guide. The third and final task, Code Explanation, requires the model to generate a clear and concise explanation for a correctly written code function. For the evaluation on HumanEvalPack, we use its Python and Java splits and compute Pass@1 for each task. We use temperature 0.2 and sample 20 outputs per test example. #### Model Robustness Evaluating the robustness of code generation models is crucial in understanding their real-world applicability and reliability. Models that can maintain high-performance levels despite variations and perturbations in input data are more likely to be effective in diverse and dynamic coding environments (Bielik and Vechev, 2020; Henkel et al., 2022; Wang et al., 2023a). Motivated by such model behaviors, we utilize ReCode (Wang et al., 2023a), a benchmark framework designed to assess the robustness of Code LLMs. 
We use HumanEval (Chen et al., 2021) as the base dataset, curated to mimic natural variations while preserving the semantic integrity of the original inputs. The perturbations cover a range of transformations (Zhuo et al., 2023c) on code format, function and variable names, code syntax, and docstrings. These transformations are not arbitrary but represent changes occurring naturally in coding practice. The quality of the perturbed data in ReCode is verified through human evaluation and objective similarity scores, ensuring the relevance and reliability of the dataset for robustness assessment. We use temperature 0.2 and 20 samples per test example for the generation. To quantify model robustness, we adopt Robust Pass@k (RP@k) from ReCode and also compute Robust Change@k (RC@k) as follows:

$RP@k:=\mathbb{E}_{x}\left[1-\frac{\binom{n-rc_{s}(x)}{k}}{\binom{n}{k}}\right]$ (1)

$RC@k:=\left|Pass@k-Robust\ Pass@k\right|$ (2)

where $n$ is the number of generated samples per problem and $rc_{s}(x)$ is the number of samples that remain correct on the worst-case perturbed variant of problem $x$.

#### Code Security

One limitation of Code LLMs is their tendency to generate code with potential security vulnerabilities, as various studies have highlighted (Dakhel et al., 2023; Asare et al., 2023). In our work, we aim to empirically examine how PEFT methods can influence the security aspects of Code LLM outputs. We utilize the “Asleep at the Keyboard” (AATK) benchmark (Pearce et al., 2022), which includes 89 security-centric scenarios, to provide a comprehensive evaluation across three distinct dimensions: Diversity of Weakness (DoW), encompassing 18 unique vulnerability classes from the MITRE Common Weakness Enumeration (CWE) taxonomy, sourced from the 2021 CWE Top 25 Most Dangerous Software Weaknesses; Diversity of Prompt (DoP), assessing responses to different prompts within the SQL injection vulnerability class; and Diversity of Domain (DoD), involving scenarios in Verilog, a hardware description language. Our analysis predominantly focuses on the DoW axis, comprising 54 scenarios (25 in C and 29 in Python) covering 18 CWEs. This focus is due to the automatic evaluation challenges associated with the other two dimensions. After filtering out scenarios that lack an automated test, we thoroughly examine 40 scenarios, including 23 in C and 17 in Python. We use temperature 0.2 and 20 samples per test example for the generation.

## 3 Preliminary Study: Cross-Entropy Loss

Cross-entropy loss has been used as the principal performance metric in training LLMs for NLP tasks (Brown et al., 2020; Hernandez et al., 2021; Zhang et al., 2022b). Most studies on modeling loss focus on either pre-training (Kaplan et al., 2020) or FFT (Chung et al., 2022). Previous studies report consistent findings on loss (Kaplan et al., 2020; Hoffmann et al., 2022; Aghajanyan et al., 2023): the final loss tends to decrease when the training computation (e.g., model size, training data and training time) increases. These observations indicate that more training time and more trainable model parameters can lead to better alignment with the tuning data. However, there is no systematic investigation for PEFT, especially for Code LLMs. Based on the updated parameters for each tuning method in Table 1, we hypothesize that each PEFT method follows a trend similar to these previous findings. Inspired by Kaplan et al. (2020), we study the loss change when instruction tuning Code LLMs, varying two factors: (1) Model Size (1B to 16B); and (2) Training Time (measured in global steps, up to a maximum of 200 steps). Due to the limited budget, we do not study how the amount of training data may affect the loss.
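As a concrete illustration of this sweep, the sketch below sets up the Section 2.2 tuning configuration with Hugging Face `TrainingArguments` so that train and test loss can be recorded over the 200 global steps; the output path and the logging/evaluation cadence are assumptions made for this sketch, not reported settings.

```python
from transformers import TrainingArguments

# Instruction-tuning configuration from Section 2.2 (PEFT learning rate; FFT uses 1e-6).
args = TrainingArguments(
    output_dir="astraios-1b-lora",      # hypothetical output path
    max_steps=200,                      # 200 global steps
    per_device_train_batch_size=1,
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    warmup_steps=12,
    logging_steps=25,                   # record train loss every 25 steps (assumption)
    evaluation_strategy="steps",
    eval_steps=25,                      # record test loss every 25 steps (assumption)
)
print(args.max_steps, args.learning_rate)

# A Trainer(model=..., args=args, train_dataset=..., eval_dataset=...) built on the
# PEFT-wrapped model from Section 2.1 then yields the loss curves of Figures 3 and 4.
```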
Figure 3: Final loss across model sizes.

#### Model Size Scaling

We present the final loss in Figure 3 when varying the model size from 1B to 16B. Our first observation is that train and test loss are well aligned, indicating that the models trained with the selected tuning methods are not overfitted. The second observation is that both train and test loss strictly decrease as the model size increases. Although these observations are in line with the findings cited above (Kaplan et al., 2020; Hoffmann et al., 2022), the tuning methods show different scales of loss change, suggesting that they may require different levels of tuning effort. Compared to the other tuning methods, FFT demonstrates a slightly better final loss than PEFT methods like LoRA and Parallel Adapter. As we notice that heavier PEFT methods (which update more parameters) tend to have a better final loss, we hypothesize that more trainable parameters may result in a smaller loss, regardless of how the parameters are updated during training.

Figure 4: Test loss of the 1B, 3B, 7B, and 16B Astraios models across training time, measured in global steps.

#### Training Time Scaling

We show the changes in test loss of the Astraios models when varying the training time in Figure 4. We notice that the loss continues to decrease when the model is trained longer, although the loss changes of (IA)3 are consistently small. Notably, the loss of P-Tuning decreases drastically up to 50 steps and then behaves similarly to the other methods. In terms of tuning stability, we observe that P-Tuning is less stable than the other methods, with a non-monotonic loss curve. When comparing FFT against the PEFT methods, we find that the FFT loss still tends to decrease after 200 steps, while the PEFT methods do not show a clear decreasing trend. We hypothesize that this may be due to the number of updated parameters, as FFT updates all parameters in the model.

Table 2: Results of Astraios models on Defect Detection (accuracy) and Clone Detection (F1). The best performance is highlighted in bold. The second best performance is underlined.

| Method | Defect Detection 1B | 3B | 7B | 16B | Clone Detection 1B | 3B | 7B | 16B |
|---|---|---|---|---|---|---|---|---|
| LoRA | 44.15 | 44.90 | 49.05 | 31.95 | 9.30 | 12.05 | 14.10 | 8.80 |
| P-Tuning | 53.70 | 27.75 | 40.55 | 11.00 | 19.27 | 23.52 | 13.35 | 3.24 |
| AdapterH | 45.75 | 45.80 | 46.25 | 41.75 | 8.59 | 8.17 | 12.05 | 8.18 |
| AdapterP | 45.55 | 46.05 | 46.85 | 27.35 | 8.88 | 8.63 | 12.05 | 9.00 |
| Parallel | 34.50 | 33.50 | 52.55 | 42.30 | 9.55 | 8.94 | 10.16 | 17.21 |
| (IA)3 | 53.90 | 33.55 | 37.20 | 23.70 | 8.28 | 11.76 | 23.19 | 8.13 |
| FFT | 50.80 | 44.20 | 48.30 | 43.65 | 8.34 | 12.68 | 8.04 | 12.62 |

## 4 Main Results: Task Performance

Figure 5: Accuracy results of Astraios models on Defect Detection.

Figure 6: F1 results of Astraios models on Clone Detection.

As cross-entropy loss only indicates how well Code LLMs can be aligned with the training data, it depends heavily on the specific training content and may not serve as a reliable proxy for performance on various source code tasks. Therefore, in this section we examine how well the selected PEFT methods contribute to task performance. To benchmark the performance, we leverage the following representative code downstream tasks: (1) Defect Detection, (2) Clone Detection, (3) Code Synthesis, (4) Code Repair, and (5) Code Explanation.
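For the generation tasks below, Pass@1 (and the Robust Pass@1 of Section 5.1) follows the standard unbiased pass@k estimator; a minimal sketch of that computation (ours, not the evaluation harness code) is:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): n generated samples, c of them pass the tests."""
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Pass@1 with 20 samples per problem, as generated for HumanEvalPack (Section 2.3):
print(pass_at_k(n=20, c=5, k=1))  # 0.25

# Robust Pass@k (Eq. 1) uses the same estimator with c replaced by the worst-case number of
# passing samples across the perturbed variants of a problem; RC@k is |Pass@k - Robust Pass@k|.
```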
For the first two code comprehension tasks, there is no existing study stating that the larger code LLMs result in a better understanding of code. We are the first to study this aspect when varying the model sizes. Regarding the latter three code generation tasks, previous power-law studies (Kaplan et al., 2020; Hoffmann et al., 2022) have shown that increasing model sizes can also lead to better task performance on generation tasks. We further validate this finding on the PEFT settings. #### Code Comprehension Table 2 shows the results of the two code comprehension tasks when varying the model sizes. Surprisingly, as shown in Figures 6 and 6, the results of both tasks are not well aligned with the patterns we observe on code generation tasks. All tuning methods consistently behave like the inverse scaling, which has been discussed in McKenzie et al. (2023). We hypothesize that Code LLMs have not seen enough task-specific training data and cannot generalize to those unseen tasks (Yadlowsky et al., 2023). As Astraios models are pre- trained on various source code from GitHub repositories for next token prediction and fine-tuned on GitHub commits for code refinement, they may not have a profound understanding of defects and cloned code. Table 3: Pass@1 results of Astraios models on HumanEvalPack Python and Java splits. The best performance is highlighted in bold. The second best performance is underlined. | Method | Code Synthesis | Code Repair | Code Explanation ---|---|---|---|--- | 1B | 3B | 7B | 16B | 1B | 3B | 7B | 16B | 1B | 3B | 7B | 16B Python | LoRA | 17.26 | 25.37 | 32.01 | 38.08 | 3.29 | 11.16 | 21.74 | 27.50 | 20.49 | 22.53 | 25.34 | 30.52 P-Tuning | 15.79 | 24.33 | 29.39 | 35.58 | 1.86 | 13.69 | 20.34 | 18.72 | 9.48 | 11.92 | 14.60 | 15.43 AdapterH | 15.70 | 23.87 | 28.26 | 33.29 | 3.14 | 15.55 | 22.50 | 22.28 | 17.77 | 22.35 | 24.24 | 26.07 AdapterP | 17.04 | 24.76 | 30.67 | 34.97 | 3.69 | 12.87 | 19.54 | 26.46 | 16.07 | 24.05 | 22.87 | 30.67 Parallel | 15.98 | 26.65 | 28.81 | 35.88 | 4.91 | 8.11 | 16.13 | 26.43 | 19.70 | 23.14 | 23.93 | 31.10 (IA)3 | 16.13 | 25.34 | 30.52 | 36.80 | 2.01 | 14.05 | 17.07 | 23.60 | 9.51 | 11.86 | 14.30 | 16.19 FFT | 16.95 | 25.21 | 32.38 | 38.47 | 3.26 | 14.45 | 21.40 | 29.88 | 15.37 | 23.45 | 26.13 | 30.85 Java | LoRA | 2.84 | 16.52 | 24.27 | 40.33 | 3.72 | 5.06 | 13.60 | 30.35 | 7.07 | 14.33 | 14.70 | 16.86 P-Tuning | 10.67 | 14.73 | 20.73 | 37.19 | 0.00 | 7.53 | 11.74 | 22.25 | 6.07 | 9.79 | 17.32 | 13.02 AdapterH | 8.99 | 13.45 | 17.53 | 33.41 | 0.12 | 6.89 | 14.70 | 24.91 | 6.74 | 9.57 | 13.99 | 14.85 AdapterP | 10.46 | 16.77 | 21.28 | 33.68 | 3.66 | 6.52 | 15.40 | 32.07 | 6.65 | 11.62 | 14.15 | 16.28 Parallel | 9.60 | 15.91 | 21.59 | 38.56 | 0.49 | 5.09 | 8.87 | 29.39 | 7.62 | 12.16 | 14.51 | 17.93 (IA)3 | 10.34 | 16.46 | 21.95 | 39.91 | 2.87 | 4.54 | 13.02 | 25.30 | 6.13 | 13.99 | 17.04 | 15.85 FFT | 10.18 | 17.04 | 23.87 | 41.16 | 0.00 | 5.61 | 16.10 | 32.47 | 7.16 | 13.60 | 15.12 | 16.62 #### Code Generation Table 3 demonstrates the performance on three different code generation tasks on the Python and Java splits in HumanEvalPack. Over the six benchmarks, we first observe that FFT results in consistent gains when the model parameters increase. When examining the PEFT methods, We find they can also provide reasonable performance scalability similar to FFT. Therefore, the lower test loss may lead to better performance across various downstream generation tasks for Code LLMs. 
However, we notice that the benefit of base model sizes may also differ from tasks and languages. For instance, 1B and 3B models typically underperform in code repair compared to code synthesis. When the model parameters expand to 7B and 16B, their performance across these tasks becomes more comparable. #### Overall Performance To compare the overall task performance of different tuning methods, we compute the mean cumulative scores for each tuning method per model size. We present the rankings in Figure 1. We show that FFT remains the best regarding overall task performance, while LoRA and Parallel Adapter are comparable to FFT. However, there is still a huge performance gap between most PEFT methods and FFT, suggesting that they cannot guarantee optimal performance. Regarding the tuning efficiency, we use updated parameters as the metric to summarize two more findings. Firstly, (IA)3 is efficient enough to perform reasonably by updating much fewer parameters than the other PEFT methods. Secondly, we notice that AdapterP always performs better than AdapterH, even though AdapterH updates more model parameters. The counter-intuitive observation indicates that AdapterH may not be worth deploying in real-world practice. ## 5 Further Analysis In this section, we further study two aspects of Code LLMs beyond task performance. Specifically, we highlight the importance of model robustness and generated code security, which indicate real-world practicality. We tend to understand the trend of model behavior across tuning methods and model sizes. ### 5.1 Model Robustness Table 4: RP@1 and RC@1 results of Astraios models on ReCode. The best performance is highlighted in bold. The second best performance is underlined. | Method | Format | Function | Syntax | Docstring ---|---|---|---|---|--- | 1B | 3B | 7B | 16B | 1B | 3B | 7B | 16B | 1B | 3B | 7B | 16B | 1B | 3B | 7B | 16B Robust Pass | LoRA | 28.05 | 35.98 | 43.29 | 51.22 | 12.80 | 15.24 | 23.78 | 29.27 | 8.54 | 13.41 | 15.85 | 18.29 | 10.98 | 15.24 | 17.68 | 20.73 P-Tuning | 18.29 | 29.88 | 39.63 | 48.78 | 7.32 | 15.85 | 21.34 | 23.78 | 6.71 | 11.59 | 14.02 | 17.68 | 6.71 | 14.63 | 18.29 | 21.34 AdapterH | 10.98 | 34.15 | 40.24 | 46.95 | 4.88 | 14.02 | 17.07 | 23.78 | 7.32 | 11.59 | 12.20 | 15.85 | 6.10 | 12.80 | 14.63 | 17.68 AdapterP | 9.76 | 35.37 | 43.90 | 50.00 | 1.22 | 15.85 | 21.34 | 26.22 | 4.88 | 12.20 | 14.63 | 18.29 | 3.05 | 15.24 | 19.51 | 20.12 Parallel | 26.22 | 32.32 | 42.68 | 50.00 | 10.37 | 11.59 | 21.95 | 26.83 | 7.93 | 12.80 | 14.63 | 17.07 | 8.54 | 15.24 | 17.68 | 21.95 (IA)3 | 26.83 | 33.54 | 42.07 | 50.61 | 12.80 | 17.07 | 21.34 | 26.83 | 7.93 | 12.20 | 14.63 | 17.07 | 10.37 | 15.85 | 18.90 | 22.56 FFT | 20.12 | 35.37 | 45.73 | 53.05 | 5.49 | 15.85 | 21.34 | 30.49 | 7.32 | 14.63 | 15.85 | 19.51 | 6.10 | 14.02 | 18.90 | 22.56 Robust Change | LoRA | 10.98 | 14.63 | 15.24 | 15.85 | 4.27 | 6.10 | 4.27 | 6.10 | 8.54 | 7.93 | 12.20 | 17.07 | 6.10 | 6.10 | 10.37 | 14.63 P-Tuning | 6.10 | 9.76 | 12.80 | 17.68 | 4.88 | 4.27 | 5.49 | 7.32 | 5.49 | 8.54 | 12.80 | 13.41 | 5.49 | 5.49 | 8.54 | 9.76 AdapterH | 0.61 | 15.85 | 15.85 | 15.85 | 5.49 | 4.27 | 7.32 | 7.32 | 3.05 | 6.71 | 12.20 | 15.24 | 4.27 | 5.49 | 9.76 | 13.41 AdapterP | 3.66 | 14.63 | 17.68 | 15.85 | 4.88 | 4.88 | 4.88 | 7.93 | 1.22 | 8.54 | 11.59 | 15.85 | 3.05 | 5.49 | 6.71 | 14.02 Parallel | 12.20 | 11.59 | 15.85 | 15.24 | 3.66 | 9.15 | 4.88 | 7.93 | 6.10 | 7.93 | 12.20 | 17.68 | 5.49 | 5.49 | 9.15 | 12.80 (IA)3 | 10.98 | 12.80 | 14.02 | 14.63 | 3.05 | 3.66 | 6.71 | 9.15 | 
7.93 | 8.54 | 13.41 | 18.90 | 5.49 | 4.88 | 9.15 | 13.41 FFT | 7.32 | 14.02 | 17.68 | 15.24 | 7.32 | 5.49 | 6.71 | 7.32 | 5.49 | 6.71 | 12.20 | 18.29 | 6.71 | 7.32 | 9.15 | 15.24 While the performance on downstream tasks is essential, we argue that the evaluation of model robustness is also necessary to characterize different tuning methods systematically. We therefore consider benchmarking the robustness of code synthesis, one of the most representative downstream tasks of source code. Table 4 reports each tuning method’s worst-case RP@1 and RC@1 of each perturbation category. Among the four types of perturbation, all models perform the worst on syntax transformation, confirming the findings in Wang et al. (2023a). Furthermore, RP@1 per tuning method increases when the model size is scaled up, indicating the generation capability is consistently improved. We noticed that FFT may not perform better than other PEFT methods on smaller models, such as 1B and 3B. However, it results in the best RP@1 on larger models like 16B. By comparing different model sizes, we observe that RC@1 consistently increases when the model gets bigger, indicating that larger models will be less robust. Figure 7: Mean RC@1 of Astraios on ReCode. Lower RC@1 indicates better robustness. We indicate the percentage of total parameters updated for each PEFT method. To rank among the tuning methods through the lens of robustness, we compute the mean RC@1 similar to Section 4 and illustrate in Figure 7. We observe that FFT and LoRA do not show strong robustness. Instead, adapter-based tuning seems more robust while having comparable performance to FFT, which is similar to what Han et al. (2021) have found in NLP tasks. ### 5.2 Code Security Table 5: Valid and Insecure rates of Astraios models on AATK benchmark. We note that the insecure rate is calculated based on the valid programs. The best performance is highlighted in bold. The second best performance is underlined. Method | Valid% ($\uparrow$) | Insecure% ($\downarrow$) ---|---|--- 1B | 3B | 7B | 16B | 1B | 3B | 7B | 16B LoRA | 85.9 | 89.1 | 75.9 | 87.1 | 23.1 | 26.2 | 20.9 | 35.0 P-Tuning | 70.1 | 68.6 | 86.8 | 82.0 | 32.8 | 25.9 | 28.1 | 34.5 AdapterH | 84.5 | 90.9 | 87.5 | 86.8 | 29.0 | 26.0 | 31.9 | 34.1 AdapterP | 83.9 | 92.1 | 82.8 | 86.3 | 31.7 | 25.2 | 26.6 | 37.8 Parallel | 88.9 | 94.1 | 70.0 | 86.0 | 30.2 | 19.3 | 22.3 | 32.6 (IA)3 | 78.0 | 62.1 | 77.4 | 86.6 | 34.8 | 25.2 | 23.1 | 30.4 FFT | 82.9 | 93.6 | 80.1 | 84.1 | 22.6 | 27.4 | 21.2 | 38.3 Previous studies (Dakhel et al., 2023; Asare et al., 2023). have shown that Code LLMs can generate code with security vulnerabilities, which can be exploited by malicious users. However, few studies have studied different tuning methods from the output security perspective. In this experiment, we intend to understand how tuning methods affect the capability to generate secure code on AATK benchmark. We follow the original setting in Pearce et al. (2022) and compute the valid and insecure rates, which are illustrated in Table 5. When comparing the valid rate of PEFT methods, it does not show better performance when the model size increases, indicating that current models may not learn the program validity intrinsically. However, we observe that the changes in the insecure rate show that larger models are more likely to generate insecure code. This observation suggests that the growth of learning capability can result in learning more data, including insecure programs. 
The study on the insecure rate among tuning methods further shows that FFT and LoRA are still better than the other tuning methods regarding the security level. While the other methods have a similar insecure rate, P-Tuning may have more chances to generate less secure programs, which may not be suitable for deploying in security-sensitive scenarios. ## 6 Discussion In this section, we seek to conduct a preliminary analysis of the performance of Code LLMs through the lens of updated parameters. Specifically, we ask two questions: (1) What is the relationship between the updated parameters and cross-entropy loss?; and (2) Can we utilize the performance of loss to predict the task performance of Code LLMs?. Figure 8: Relationships between cross-entropy loss and the number of updated parameters. #### Loss of small models can be projected to larger ones. The relationship between the updated parameters of Astraios models and their final loss is analyzed in Figure 8. Our analysis does not reveal a consistent pattern across different model sizes when it comes to the correlation between model loss and updated parameters. However, an interesting finding is the consistency in relative loss performance across different model sizes when comparing various tuning methods. This consistency suggests that the improvements achieved by each tuning method are likely to be similar regardless of the model’s size. Therefore, the loss observed in smaller models, when tuned with different methods, can be a useful predictor for the performance of the larger models. Figure 9: Relationships between cross-entropy loss and overall task performance. #### Instruct-tuning loss is a strong predictor of downstream performance. Assuming that the model has been instruction-tuned already but not yet done for the evaluation, we seek to understand if we can utilize such loss to predict its performance on downstream tasks. Despite our instruction data being derived from general sources like GitHub commits and broad NLP domains, which are not directly aligned with the downstream tasks discussed in Section 4, we find some strong correlations. Motivated by the aforementioned scenario, we aggregate all the data points of mean task performance and their corresponding final loss in Figure 9. We observe that the models with lower loss generally have better overall performance on downstream tasks. Specifically, the pattern is stronger on test loss than on train loss. We explain by the fact that the models do not learn to fit the test split and can present a more accurate determination of their actual performance. Our observation suggests that general instruction data can work as a good proxy of downstream tasks in Code LLMs, similar to the prior findings in NLP (Anil et al., 2023; Wei et al., 2023). ## 7 Related Work #### Code Large Language Models Many base Code LLMs have been proposed recently (Chen et al., 2021; Nijkamp et al., 2022; Fried et al., 2022; Allal et al., 2023; Zheng et al., 2023; Li et al., 2023; Roziere et al., 2023) mostly targeting code completion. With the help of these base Code LLMs, there have been extensive studies fine-tuning task-specific Code LLMs to perform software engineering tasks like automatic program repair (Xia and Zhang, 2023; Xia et al., 2023a), code translation (Pan et al., 2023) and code summarization (Wang et al., 2023b, 2022a). 
Later, a series of works has been proposed for instruction-tuning the base Code LLMs (Luo et al., 2023; Shen et al., 2023; Muennighoff et al., 2023a; Bai et al., 2023), aiming to enhance the generalization capabilities of these models on diverse tasks. As fine-tuning Code LLMs with full parameters is costly, most models have been tuned with LoRA (Hu et al., 2021), a parameter-efficient tuning method. In this work, we seek to answer how good LoRA is and if there are other comparable tuning methods. #### Model Analysis Across Scales Understanding why and how neural models behave is crucial for developing more advanced ones. Existing studies have investigated predictable patterns in the behavior of trained language models across scales (Kaplan et al., 2020; Henighan et al., 2020; Hernandez et al., 2021; Hoffmann et al., 2022; Wei et al., 2022; Muennighoff et al., 2023b; Xia et al., 2023b) and their learning dynamics (McGrath et al., 2022; Tirumala et al., 2022; Biderman et al., 2023). However, they either focus on pre-training or task-specific full-parameter fine-tuning. There is no attempt to understand the mechanism of parameter- efficient instruction tuning. In this paper, we work on this perspective and analyze Code LLMs (Wan et al., 2022; Troshin and Chirkova, 2022; Zhuo et al., 2023a). ## 8 Conclusion This work studies the parameter-efficient instruction-tuning of Code LLMs. We introduce a model suite consisting of 28 instruction-tuned OctoCoder across scales and PEFT methods. We characterize the tuning methods on representative downstream tasks, model robustness, and output security, highlighting the importance of understanding these models via comprehensive evaluation. We also discuss the relationships among updated parameters, cross-entropy loss, and task performance. We hope these analyses will inspire further follow-up work on understanding the mechanism of tuning methods and developing new approaches. ## Acknowledgements We thank Monash University and Hugging Face for providing compute instances. We are extremely grateful to Cristian Rojas for help on the initial exploration, Zhensu Sun for the discussion, Dmitry Abulkhanov for the paper review, Brendan Dolan-Gavitt for providing the evaluation script of “Asleep At The Keyboard” benchmark, the BigCode community for providing the base models (Li et al., 2023) and instruction tuning data (Muennighoff et al., 2023a) from GitHub commits, and Mangrulkar et al. (2022); Hu et al. (2023) for implementing PEFT methods. ## References * Aghajanyan et al. (2021) Armen Aghajanyan, Sonal Gupta, and Luke Zettlemoyer. 2021. Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 7319–7328. * Aghajanyan et al. (2023) Armen Aghajanyan, Lili Yu, Alexis Conneau, Wei-Ning Hsu, Karen Hambardzumyan, Susan Zhang, Stephen Roller, Naman Goyal, Omer Levy, and Luke Zettlemoyer. 2023. Scaling laws for generative mixed-modal language models. _arXiv preprint arXiv:2301.03728_. * Allal et al. (2023) Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, et al. 2023. SantaCoder: don’t reach for the stars! _arXiv preprint arXiv:2301.03988_. * Anil et al. 
(2023) Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. 2023. Palm 2 technical report. _arXiv preprint arXiv:2305.10403_. * Asare et al. (2023) Owura Asare, Meiyappan Nagappan, and N Asokan. 2023. Is github’s copilot as bad as humans at introducing vulnerabilities in code? _Empirical Software Engineering_ , 28(6):1–24. * Bai et al. (2023) Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. 2023. Qwen technical report. _arXiv preprint arXiv:2309.16609_. * Ben Allal et al. (2022) Loubna Ben Allal, Niklas Muennighoff, Logesh Kumar Umapathi, Ben Lipkin, and Leandro von Werra. 2022. A framework for the evaluation of code generation models. https://github.com/bigcode-project/bigcode-evaluation-harness. * Biderman et al. (2023) Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O’Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. 2023. Pythia: A suite for analyzing large language models across training and scaling. In _International Conference on Machine Learning_ , pages 2397–2430. PMLR. * Bielik and Vechev (2020) Pavol Bielik and Martin Vechev. 2020. Adversarial robustness for code. In _International Conference on Machine Learning_ , pages 896–907. PMLR. * Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. _Advances in neural information processing systems_ , 33:1877–1901. * Chaudhary (2023) Sahil Chaudhary. 2023. Code Alpaca: An Instruction-following LLaMA model for code generation. https://github.com/sahil280114/codealpaca. * Chen et al. (2022) Guanzheng Chen, Fangyu Liu, Zaiqiao Meng, and Shangsong Liang. 2022. Revisiting Parameter-Efficient Tuning: Are We Really There Yet? In _Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing_ , pages 2612–2626. * Chen et al. (2021) Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. _arXiv preprint arXiv:2107.03374_. * Chung et al. (2022) Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. _arXiv preprint arXiv:2210.11416_. * Dakhel et al. (2023) Arghavan Moradi Dakhel, Vahid Majdinasab, Amin Nikanjam, Foutse Khomh, Michel C Desmarais, and Zhen Ming Jack Jiang. 2023. Github copilot ai pair programmer: Asset or liability? _Journal of Systems and Software_ , 203:111734. * Dettmers et al. (2022) Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. 2022. Gpt3. int8 (): 8-bit matrix multiplication for transformers at scale. _Advances in Neural Information Processing Systems_ , 35:30318–30332. * Ding et al. (2022) Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, et al. 2022. Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models. _arXiv preprint arXiv:2203.06904_. * Ding et al. 
(2023) Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, et al. 2023. Parameter-efficient fine-tuning of large-scale pre-trained language models. _Nature Machine Intelligence_ , 5(3):220–235. * Edalati et al. (2022) Ali Edalati, Marzieh Tahaei, Ivan Kobyzev, Vahid Partovi Nia, James J Clark, and Mehdi Rezagholizadeh. 2022. Krona: Parameter efficient tuning with kronecker adapter. _arXiv preprint arXiv:2212.10650_. * Fried et al. (2022) Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Scott Yih, Luke Zettlemoyer, and Mike Lewis. 2022. InCoder: A Generative Model for Code Infilling and Synthesis. In _The Eleventh International Conference on Learning Representations_. * Fu et al. (2023) Zihao Fu, Haoran Yang, Anthony Man-Cho So, Wai Lam, Lidong Bing, and Nigel Collier. 2023. On the effectiveness of parameter-efficient fine-tuning. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 37, pages 12799–12807. * Han et al. (2021) Wenjuan Han, Bo Pang, and Ying Nian Wu. 2021. Robust Transfer Learning with Pretrained Language Models through Adapters. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)_ , pages 854–861. * He et al. (2021) Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, and Graham Neubig. 2021. Towards a Unified View of Parameter-Efficient Transfer Learning. In _International Conference on Learning Representations_. * He et al. (2023) Xuehai He, Chunyuan Li, Pengchuan Zhang, Jianwei Yang, and Xin Eric Wang. 2023. Parameter-efficient model adaptation for vision transformers. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 37, pages 817–825. * Henighan et al. (2020) Tom Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson, Heewoo Jun, Tom B Brown, Prafulla Dhariwal, Scott Gray, et al. 2020. Scaling laws for autoregressive generative modeling. _arXiv preprint arXiv:2010.14701_. * Henkel et al. (2022) Jordan Henkel, Goutham Ramakrishnan, Zi Wang, Aws Albarghouthi, Somesh Jha, and Thomas Reps. 2022. Semantic robustness of models of source code. In _2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)_ , pages 526–537. IEEE. * Hernandez et al. (2021) Danny Hernandez, Jared Kaplan, Tom Henighan, and Sam McCandlish. 2021. Scaling laws for transfer. _arXiv preprint arXiv:2102.01293_. * Hoffmann et al. (2022) Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. Training compute-optimal large language models. _arXiv preprint arXiv:2203.15556_. * Hou et al. (2023) Xinyi Hou, Yanjie Zhao, Yue Liu, Zhou Yang, Kailong Wang, Li Li, Xiapu Luo, David Lo, John Grundy, and Haoyu Wang. 2023. Large language models for software engineering: A systematic literature review. _arXiv preprint arXiv:2308.10620_. * Houlsby et al. (2019) Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In _International Conference on Machine Learning_ , pages 2790–2799. PMLR. * Hu et al. 
(2021) Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. 2021. LoRA: Low-Rank Adaptation of Large Language Models. In _International Conference on Learning Representations_. * Hu et al. (2023) Zhiqiang Hu, Yihuai Lan, Lei Wang, Wanyu Xu, Ee-Peng Lim, Roy Ka-Wei Lee, Lidong Bing, and Soujanya Poria. 2023. LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models. _arXiv preprint arXiv:2304.01933_. * Kaplan et al. (2020) Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. _arXiv preprint arXiv:2001.08361_. * Karimi Mahabadi et al. (2021) Rabeeh Karimi Mahabadi, James Henderson, and Sebastian Ruder. 2021. Compacter: Efficient low-rank hypercomplex adapter layers. _Advances in Neural Information Processing Systems_ , 34:1022–1035. * Lester et al. (2021) Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The Power of Scale for Parameter-Efficient Prompt Tuning. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 3045–3059. * Li et al. (2023) Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, et al. 2023. StarCoder: may the source be with you! _arXiv preprint arXiv:2305.06161_. * Li and Liang (2021) Xiang Lisa Li and Percy Liang. 2021. Prefix-Tuning: Optimizing Continuous Prompts for Generation. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 4582–4597. * Liu et al. (2022) Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin A Raffel. 2022. Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning. _Advances in Neural Information Processing Systems_ , 35:1950–1965. * Liu et al. (2023) Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2023. GPT understands, too. _AI Open_. * Lu et al. (2021) Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin Clement, Dawn Drain, Daxin Jiang, Duyu Tang, et al. 2021. CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation. In _Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1)_. * Luo et al. (2023) Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. 2023. WizardCoder: Empowering Code Large Language Models with Evol-Instruct. _arXiv preprint arXiv:2306.08568_. * Mangrulkar et al. (2022) Sourab Mangrulkar, Sylvain Gugger, Lysandre Debut, Younes Belkada, Sayak Paul, and Benjamin Bossan. 2022. PEFT: State-of-the-art Parameter-Efficient Fine-Tuning methods. https://github.com/huggingface/peft. * McGrath et al. (2022) Thomas McGrath, Andrei Kapishnikov, Nenad Tomašev, Adam Pearce, Martin Wattenberg, Demis Hassabis, Been Kim, Ulrich Paquet, and Vladimir Kramnik. 2022. Acquisition of chess knowledge in alphazero. _Proceedings of the National Academy of Sciences_ , 119(47):e2206625119. * McKenzie et al. (2023) Ian R McKenzie, Alexander Lyzhov, Michael Pieler, Alicia Parrish, Aaron Mueller, Ameya Prabhu, Euan McLean, Aaron Kirtland, Alexis Ross, Alisa Liu, et al. 2023. 
Inverse Scaling: When Bigger Isn’t Better. _arXiv preprint arXiv:2306.09479_. * Muennighoff et al. (2023a) Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, and Shayne Longpre. 2023a. Octopack: Instruction tuning code large language models. _arXiv preprint arXiv:2308.07124_. * Muennighoff et al. (2023b) Niklas Muennighoff, Alexander M Rush, Boaz Barak, Teven Le Scao, Aleksandra Piktus, Nouamane Tazi, Sampo Pyysalo, Thomas Wolf, and Colin Raffel. 2023b. Scaling Data-Constrained Language Models. _arXiv preprint arXiv:2305.16264_. * Muennighoff et al. (2023c) Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng Xin Yong, Hailey Schoelkopf, Xiangru Tang, Dragomir Radev, Alham Fikri Aji, Khalid Almubarak, Samuel Albanie, Zaid Alyafeai, Albert Webson, Edward Raff, and Colin Raffel. 2023c. Crosslingual Generalization through Multitask Finetuning. In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 15991–16111, Toronto, Canada. Association for Computational Linguistics. * Nijkamp et al. (2022) Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2022. CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis. In _The Eleventh International Conference on Learning Representations_. * Ouyang et al. (2022) Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. _Advances in Neural Information Processing Systems_ , 35:27730–27744. * Pan et al. (2023) Rangeet Pan, Ali Reza Ibrahimzada, Rahul Krishna, Divya Sankar, Lambert Pouguem Wassi, Michele Merler, Boris Sobolev, Raju Pavuluri, Saurabh Sinha, and Reyhaneh Jabbarvand. 2023. Understanding the Effectiveness of Large Language Models in Code Translation. _arXiv preprint arXiv:2308.03109_. * Pearce et al. (2022) Hammond Pearce, Baleegh Ahmad, Benjamin Tan, Brendan Dolan-Gavitt, and Ramesh Karri. 2022. Asleep at the keyboard? assessing the security of github copilot’s code contributions. In _2022 IEEE Symposium on Security and Privacy (SP)_ , pages 754–768. IEEE. * Perez et al. (2021) Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021. True few-shot learning with language models. _Advances in Neural Information Processing Systems_ , 34:11054–11070. * Pfeiffer et al. (2020) Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych, and Sebastian Ruder. 2020. MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 7654–7673. * (54) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. * Roziere et al. (2023) Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. 2023. Code llama: Open foundation models for code. _arXiv preprint arXiv:2308.12950_. * Shen et al. (2023) Bo Shen, Jiaxin Zhang, Taihong Chen, Daoguang Zan, Bing Geng, An Fu, Muhan Zeng, Ailun Yu, Jichuan Ji, Jingyang Zhao, et al. 2023. Pangu-coder2: Boosting large language models for code with ranking feedback. 
_arXiv preprint arXiv:2307.14936_. * Sung et al. (2022) Yi-Lin Sung, Jaemin Cho, and Mohit Bansal. 2022. Vl-adapter: Parameter-efficient transfer learning for vision-and-language tasks. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 5227–5237. * Svajlenko and Roy (2021) Jeffrey Svajlenko and Chanchal K Roy. 2021. Bigclonebench. _Code Clone Analysis: Research, Tools, and Practices_ , pages 93–105. * Tirumala et al. (2022) Kushal Tirumala, Aram Markosyan, Luke Zettlemoyer, and Armen Aghajanyan. 2022. Memorization without overfitting: Analyzing the training dynamics of large language models. _Advances in Neural Information Processing Systems_ , 35:38274–38290. * Troshin and Chirkova (2022) Sergey Troshin and Nadezhda Chirkova. 2022. Probing Pretrained Models of Source Codes. In _Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP_ , pages 371–383. * Wan et al. (2022) Yao Wan, Wei Zhao, Hongyu Zhang, Yulei Sui, Guandong Xu, and Hai Jin. 2022. What do they capture? a structural analysis of pre-trained language models for source code. In _Proceedings of the 44th International Conference on Software Engineering_ , pages 2377–2388. * Wang et al. (2022a) Chaozheng Wang, Yuanhang Yang, Cuiyun Gao, Yun Peng, Hongyu Zhang, and Michael R Lyu. 2022a. No more fine-tuning? an experimental evaluation of prompt tuning in code intelligence. In _Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering_ , pages 382–394. * Wang et al. (2023a) Shiqi Wang, Zheng Li, Haifeng Qian, Chenghao Yang, Zijian Wang, Mingyue Shang, Varun Kumar, Samson Tan, Baishakhi Ray, Parminder Bhatia, Ramesh Nallapati, Murali Krishna Ramanathan, Dan Roth, and Bing Xiang. 2023a. ReCode: Robustness Evaluation of Code Generation Models. In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 13818–13843, Toronto, Canada. Association for Computational Linguistics. * Wang et al. (2023b) Yue Wang, Hung Le, Akhilesh Deepak Gotmare, Nghi DQ Bui, Junnan Li, and Steven CH Hoi. 2023b. Codet5+: Open code large language models for code understanding and generation. _arXiv preprint arXiv:2305.07922_. * Wang et al. (2021) Yue Wang, Weishi Wang, Shafiq Joty, and Steven CH Hoi. 2021. CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 8696–8708. * Wang et al. (2022b) Zhen Wang, Rameswar Panda, Leonid Karlinsky, Rogerio Feris, Huan Sun, and Yoon Kim. 2022b. Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning. In _The Eleventh International Conference on Learning Representations_. * Wei et al. (2022) Jason Wei, Najoung Kim, Yi Tay, and Quoc V Le. 2022. Inverse scaling can become u-shaped. _arXiv preprint arXiv:2211.02011_. * Wei et al. (2023) Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, et al. 2023. Skywork: A more open bilingual foundation model. _arXiv preprint arXiv:2310.19341_. * Weidinger et al. (2021) Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, et al. 2021. Ethical and social risks of harm from language models. _arXiv preprint arXiv:2112.04359_. * Xia et al. 
(2023a) Chunqiu Steven Xia, Yuxiang Wei, and Lingming Zhang. 2023a. Automated program repair in the era of large pre-trained language models. In _Proceedings of the 45th International Conference on Software Engineering (ICSE 2023). Association for Computing Machinery_. * Xia and Zhang (2023) Chunqiu Steven Xia and Lingming Zhang. 2023. Conversational automated program repair. _arXiv preprint arXiv:2301.13246_. * Xia et al. (2023b) Mengzhou Xia, Mikel Artetxe, Chunting Zhou, Xi Victoria Lin, Ramakanth Pasunuru, Danqi Chen, Luke Zettlemoyer, and Veselin Stoyanov. 2023b. Training Trajectories of Language Models Across Scales. In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 13711–13738, Toronto, Canada. Association for Computational Linguistics. * Xie et al. (2023) Tianbao Xie, Fan Zhou, Zhoujun Cheng, Peng Shi, Luoxuan Weng, Yitao Liu, Toh Jing Hua, Junning Zhao, Qian Liu, Che Liu, Leo Z. Liu, Yiheng Xu, Hongjin Su, Dongchan Shin, Caiming Xiong, and Tao Yu. 2023. OpenAgents: An Open Platform for Language Agents in the Wild. _CoRR_ , abs/2310.10634. * Yadlowsky et al. (2023) Steve Yadlowsky, Lyric Doshi, and Nilesh Tripuraneni. 2023. Pretraining Data Mixtures Enable Narrow Model Selection Capabilities in Transformer Models. _arXiv preprint arXiv:2311.00871_. * Zaken et al. (2022) Elad Ben Zaken, Yoav Goldberg, and Shauli Ravfogel. 2022. BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 1–9. * Zhang et al. (2022a) Qingru Zhang, Minshuo Chen, Alexander Bukharin, Pengcheng He, Yu Cheng, Weizhu Chen, and Tuo Zhao. 2022a. Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning. In _The Eleventh International Conference on Learning Representations_. * Zhang et al. (2022b) Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022b. Opt: Open pre-trained transformer language models. _arXiv preprint arXiv:2205.01068_. * Zhao et al. (2023) Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. _arXiv preprint arXiv:2303.18223_. * Zheng et al. (2023) Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Lei Shen, Zihan Wang, Andi Wang, Yang Li, et al. 2023. Codegeex: A pre-trained model for code generation with multilingual benchmarking on humaneval-x. In _Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_ , pages 5673–5684. * Zhou et al. (2022) Xin Zhou, Ruotian Ma, Yicheng Zou, Xuanting Chen, Tao Gui, Qi Zhang, Xuan-Jing Huang, Rui Xie, and Wei Wu. 2022. Making parameter-efficient tuning more efficient: A unified framework for classification tasks. In _Proceedings of the 29th International Conference on Computational Linguistics_ , pages 7053–7064. * Zhou et al. (2019) Yaqin Zhou, Shangqing Liu, Jingkai Siow, Xiaoning Du, and Yang Liu. 2019. Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks. _Advances in neural information processing systems_ , 32. * Zhuo et al. (2023a) Terry Yue Zhuo, Xiaoning Du, Zhenchang Xing, Jiamou Sun, Haowei Quan, Li Li, and Liming Zhu. 2023a. Pop Quiz! 
Do Pre-trained Code Models Possess Knowledge of Correct API Names? _arXiv preprint arXiv:2309.07804_. * Zhuo et al. (2023b) Terry Yue Zhuo, Yujin Huang, Chunyang Chen, and Zhenchang Xing. 2023b. Red teaming chatgpt via jailbreaking: Bias, robustness, reliability and toxicity. _arXiv preprint arXiv:2301.12867_. * Zhuo et al. (2023c) Terry Yue Zhuo, Zhou Yang, Zhensu Sun, Yufei Wang, Li Li, Xiaoning Du, Zhenchang Xing, and David Lo. 2023c. Data Augmentation Approaches for Source Code Models: A Survey. _arXiv preprint arXiv:2305.19915_.

## Appendix A What is Astraios?

Astraios is a suite of 28 instruction-tuned StarCoder models, employing 7 different PEFT methods across 4 model sizes, with up to 16B parameters. Named after the Greek Titan god of the stars, Astraios, this model collection represents a vast array of “stars”, each model illuminating a path to understanding the cost-performance trade-offs in Code LLMs. Through extensive testing across various tasks and datasets, Astraios evaluates the efficacy of fine-tuning methods with an emphasis on understanding their performance implications at different model scales, robustness, and security aspects. The suite serves as a celestial guide in the Code LLM universe, helping to chart the most efficient and effective methods for model fine-tuning.

## Appendix B Artifacts

Name | Public Link
---|---
**Base Models** |
StarCoderBase 1B | https://huggingface.co/bigcode/starcoderbase-1b
StarCoderBase 3B | https://huggingface.co/bigcode/starcoderbase-3b
StarCoderBase 7B | https://huggingface.co/bigcode/starcoderbase-7b
StarCoderBase | https://huggingface.co/bigcode/starcoderbase
**Instruction Tuning Data** |
CommitPackFT + OASST | https://huggingface.co/datasets/bigcode/guanaco-commits
**Original PEFT Implementation** |
LoRA | https://github.com/huggingface/peft
P-Tuning | https://github.com/huggingface/peft
AdapterH | https://github.com/AGI-Edgerunners/LLM-Adapters
AdapterP | https://github.com/AGI-Edgerunners/LLM-Adapters
Parallel | https://github.com/AGI-Edgerunners/LLM-Adapters
(IA)3 | https://github.com/huggingface/peft
Prompt | https://github.com/huggingface/peft
AdaLoRA | https://github.com/huggingface/peft
**Evaluation Framework** |
Code Generation LM Evaluation Harness | https://github.com/bigcode-project/bigcode-evaluation-harness
**Astraios Models** |
Astraios LoRA 1B | https://huggingface.co/bigcode/astraios-1b-lora
Astraios P-Tuning 1B | https://huggingface.co/bigcode/astraios-1b-ptuning
Astraios AdapterH 1B | https://huggingface.co/bigcode/astraios-1b-adapterh
Astraios AdapterP 1B | https://huggingface.co/bigcode/astraios-1b-adapterp
Astraios Parallel 1B | https://huggingface.co/bigcode/astraios-1b-parallel
Astraios (IA)3 1B | https://huggingface.co/bigcode/astraios-1b-ia3
Astraios LoRA 3B | https://huggingface.co/bigcode/astraios-3b-lora
Astraios P-Tuning 3B | https://huggingface.co/bigcode/astraios-3b-ptuning
Astraios AdapterH 3B | https://huggingface.co/bigcode/astraios-3b-adapterh
Astraios AdapterP 3B | https://huggingface.co/bigcode/astraios-3b-adapterp
Astraios Parallel 3B | https://huggingface.co/bigcode/astraios-3b-parallel
Astraios (IA)3 3B | https://huggingface.co/bigcode/astraios-3b-ia3
Astraios LoRA 7B | https://huggingface.co/bigcode/astraios-7b-lora
Astraios P-Tuning 7B | https://huggingface.co/bigcode/astraios-7b-ptuning
Astraios AdapterH 7B | https://huggingface.co/bigcode/astraios-7b-adapterh
Astraios AdapterP 7B | https://huggingface.co/bigcode/astraios-7b-adapterp
Astraios Parallel 7B | https://huggingface.co/bigcode/astraios-7b-parallel
Astraios (IA)3 7B | https://huggingface.co/bigcode/astraios-7b-ia3
Astraios LoRA 16B | https://huggingface.co/bigcode/astraios-lora
Astraios P-Tuning 16B | https://huggingface.co/bigcode/astraios-ptuning
Astraios AdapterH 16B | https://huggingface.co/bigcode/astraios-adapterh
Astraios AdapterP 16B | https://huggingface.co/bigcode/astraios-adapterp
Astraios Parallel 16B | https://huggingface.co/bigcode/astraios-parallel
Astraios (IA)3 16B | https://huggingface.co/bigcode/astraios-ia3

Table 6: Used and produced artifacts.

## Appendix C Contributions

Terry Yue Zhuo trained the 1B and 3B models, conducted most evaluations, analyzed the results, wrote the paper, and led the project. Armel Zebaze trained the 7B and 15B models, evaluated the 15B models on Code Synthesis and Code Repair, analyzed the results, and helped edit the paper. Nitchakarn Suppattarachai evaluated the two comprehension tasks. Niklas Muennighoff advised on the experiments and helped with plotting. Niklas Muennighoff, Qian Liu, Harm de Vries, and Leandro von Werra provided suggestions and helped edit the paper.

## Appendix D Instruction Tuning

All instruction tuning experiments were conducted on A100 80G GPUs. For all PEFT strategies, we use the 8-bit quantized base models for training. For FFT, we use the original base models without quantization.

#### LoRA We use an attention dimension of 8, an alpha parameter of 16, a dropout probability of 0.05, and target modules "[c_proj, c_attn, q_attn]". We keep the other hyperparameters as default.

#### P-Tuning We use 30 virtual tokens and keep the other hyperparameters as default.

#### AdapterH We use target modules "[c_fc, mlp.c_proj]". We keep the other hyperparameters as default.

#### AdapterP We use target modules "[mlp.c_proj]". We keep the other hyperparameters as default.

#### Parallel We use target modules "[c_fc, mlp.c_proj]". We keep the other hyperparameters as default.

#### (IA)3 We use target modules "[c_attn, mlp.c_proj]" and feedforward modules "[mlp.c_proj]".

#### Prompt (Lester et al., 2021) We use 30 virtual tokens and keep the other hyperparameters as default.

#### AdaLoRA (Zhang et al., 2022a) We use a target average rank of 8 for the incremental matrix, an initial rank of 12 for each incremental matrix, 200 steps of initial fine-tuning warmup, 1000 steps of final fine-tuning, an alpha parameter of 16, a dropout probability of 0.05, a time interval of 10 between two budget allocations, an EMA of 0.85 for sensitivity smoothing, an EMA of 0.85 for uncertainty quantification, and target modules "[c_proj, c_attn, q_attn]". We keep the other hyperparameters as default.

## Appendix E Evaluation Setup

#### Devign We generate the outputs with a max length of 512 tokens using greedy decoding. All other parameters follow the defaults in Ben Allal et al. (2022). For the one-shot example, we randomly sample from the train set.

#### BigCloneBench We generate the outputs with a max length of 512 tokens using greedy decoding. All other parameters follow the defaults in Ben Allal et al. (2022). For the one-shot example, we randomly sample from the train set.

#### HumanEvalPack We generate 20 outputs per example with a max length of 2048 tokens and a temperature of 0.2. All other parameters follow the defaults in Ben Allal et al. (2022).

#### ReCode We generate the outputs with a max length of 1024 tokens using greedy decoding. All other parameters follow the defaults in Ben Allal et al. (2022).
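For concreteness, the sketch below illustrates how the two decoding regimes used in this appendix (greedy decoding for the comprehension and ReCode tasks, and temperature-0.2 sampling with 20 outputs per example for HumanEvalPack and Asleep At The Keyboard below) might be expressed directly with the Hugging Face transformers API. It is only an illustration: the reported numbers come from the bigcode-evaluation-harness defaults, the checkpoint name is a stand-in base model, and max_new_tokens is used here in place of the total-length budgets quoted above.

```python
# Hedged illustration of the two decoding regimes described in Appendix E.
# Assumptions: the checkpoint below is a stand-in (any Astraios model or its
# StarCoderBase base model could be substituted), and max_new_tokens is used
# instead of the total generation lengths quoted in the text.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoderbase-1b"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

prompt = "Question: Create a Python script for this problem.\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding (Devign, BigCloneBench, ReCode): one deterministic output.
greedy = model.generate(**inputs, max_new_tokens=512, do_sample=False)

# Sampled decoding (HumanEvalPack, Asleep At The Keyboard):
# 20 samples per example at temperature 0.2.
sampled = model.generate(
    **inputs,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.2,
    num_return_sequences=20,
)
print(tokenizer.decode(greedy[0], skip_special_tokens=True))
```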
#### Asleep At The Keyboard We generate 20 outputs per example with a max length of 1024 tokens and a temperature of 0.2. All other parameters follow the defaults in Ben Allal et al. (2022).

## Appendix F Failure of Scaling

Figure 10: Test loss of selected models across training time measured by Global Step (panels: 1B model and 3B models).

Figure 11: Final loss across model sizes.

During the initial experiments, we also train the models with Prompt Tuning (Lester et al., 2021) and AdaLoRA (Zhang et al., 2022a). Although the loss continues to decrease as the training time increases, we observe a model-size scaling behavior that contradicts the trend described in Section 2.2. As shown in Figure 11, the final loss of these two tuning strategies consistently increases as the model size increases, which is contrary to what we observe for the other PEFT methods. In the newer version of LLM-Adapters (Hu et al., 2023), we notice that the learning rate is called out explicitly. For Prompt Tuning, the authors use $3\times 10^{-2}$ instead of the $3\times 10^{-4}$ used for their other selected PEFT strategies. We therefore hypothesize that some tuning strategies may require a much higher learning rate to achieve optimal performance. We further try a few learning rates when training 1B and 3B StarCoderBase models and find that $3\times 10^{-2}$ works well for Prompt Tuning. In addition, $3\times 10^{-2}$ and $1\times 10^{-3}$ also work much better for AdaLoRA. With the new set of learning rates, these tuning strategies become aligned with our findings in Section 3. In contrast to the conclusion of Kaplan et al. (2020) that the choice of learning-rate schedule is mostly irrelevant in language model pre-training, we suggest that learning-rate hyperparameters may matter a great deal when scaling parameter-efficient fine-tuning of language models.

## Appendix G Visualization on HumanEvalPack

Figure 12: Pass@1 results of Astraios models on HumanEvalPack (panels: Python and Java Code Synthesize, Code Repair, and Code Explain).

## Appendix H Mitigating Inverse Scaling

Figure 13: Results on Defect Detection with 1-shot demonstration.

Figure 14: Results on Clone Detection with 1-shot demonstration.

We have attempted to see whether the inverse-scaling-like patterns in the code comprehension tasks can be mitigated and brought more in line with scaling laws. As Wei et al. (2022) have shown that 1-shot demonstrations can make all inverse scaling tasks U-shaped or flat, we test whether 1-shot examples can help with defect detection and clone detection. To select the 1-shot examples, we randomly draw one fixed example from the train set of each benchmark. We re-evaluate all Astraios models on the two tasks and present the results in Figures 13 and 14. For defect detection, all PEFT strategies become flatter than the previous patterns, which is similar to what Wei et al. (2022) observe. However, for clone detection, the patterns of some tuning strategies like LoRA and FFT do not turn flat. Although the performance of LoRA and FFT scales up to 7B, it decreases at 15B. We hypothesize that our range of model sizes is not large enough to show whether the pattern would turn upward again beyond 15B for LoRA and FFT with 1-shot demonstrations.

## Appendix I Further Discussion

We further measure the correlations among the final loss from Section 3, the overall task performance from Section 4, and the number of updated parameters, via three metrics: the Kendall ($\tau$), Pearson ($r_{p}$), and Spearman ($r_{s}$) coefficients.
Kendall's coefficient measures ordinal association and is robust against outliers, making it useful for non-normal data distributions. Pearson's coefficient assesses linear correlation, which is ideal for normally distributed data with expected linear relationships. Spearman's coefficient, like Kendall's, is a non-parametric measure of rank correlation, useful for identifying monotonic but non-linear relationships.

Table 7: Correlations between trainable parameters and final loss. $p$-values are provided in parentheses for the overall statistics.

Model Size | Train Loss $\tau$ | Train Loss $r_{p}$ | Train Loss $r_{s}$ | Test Loss $\tau$ | Test Loss $r_{p}$ | Test Loss $r_{s}$
---|---|---|---|---|---|---
1B | .4286 | .3113 | .6071 | .3333 | .3358 | .4643
3B | .5238 | .3433 | .7143 | .2381 | .3835 | .4286
7B | .5238 | .3555 | .7143 | .2381 | .4091 | .4286
16B | .5238 | .3524 | .7143 | .2381 | .3986 | .4286
Overall | .4339 (.00) | .3328 (.08) | .5616 (.00) | .3598 (.01) | .3308 (.09) | .4953 (.01)

We compute the correlations between the updated parameters of Astraios models and the final loss of the corresponding models in Table 7. From the table, we first observe that the updated parameters are more strongly correlated with the final train loss than with the test loss. Nevertheless, all three metrics indicate a moderate correlation, suggesting that the number of updated parameters carries some signal about the cross-entropy loss reached in training. We also observe that when we aggregate all statistics across model sizes, the correlations decrease slightly.

Table 8: Correlations between final loss and overall task performance. $p$-values are provided in parentheses for the overall statistics.

Model Size | Train Loss $\tau$ | Train Loss $r_{p}$ | Train Loss $r_{s}$ | Test Loss $\tau$ | Test Loss $r_{p}$ | Test Loss $r_{s}$
---|---|---|---|---|---|---
1B | -.2381 | -.4319 | -.285 | .04 | -.4328 | -.0357
3B | .5238 | .7819 | .7143 | .8095 | .7859 | .9286
7B | .5238 | .7165 | .6786 | .8095 | .8230 | .9286
16B | .3333 | .8096 | .5000 | .8095 | .9211 | .8929
Overall | .7302 (.00) | .9027 (.00) | .9201 (.00) | .8466 (.00) | .9277 (.00) | .9579 (.00)

We compute the correlations between the model loss and the mean downstream scores calculated in Section 4. We show the results in Table 8, where we report correlations for each model size as well as the aggregated statistics. The size-level correlations indicate that the task performance of 1B models is hard to align with the final loss, while bigger models correlate much more strongly with both train and test loss. Our hypothesis is that 1B models do not have enough capacity to learn from the instructions. When aggregating all the data points, we find the correlations are much stronger than the size-level ones. These strong correlations imply that the model loss on general instruction data can serve as a good proxy for downstream task performance in Code LLMs. When comparing the correlations on train loss to those on test loss, we observe that the latter are stronger. This can be explained by the fact that the models tend to fit the training data, so the loss on the train split does not generalize as well to unseen tasks and data. Moreover, we also ask: what is the relationship between downstream task performance and the number of updated parameters? We therefore investigate the correlation between the tuned parameters and the cumulative scores. The correlations are 0.3016 (.02), 0.4128 (.03), and 0.4138 (.03) for the Kendall, Pearson, and Spearman coefficients, respectively. Given these weak-to-moderate correlations, we conclude that such a prediction is possible, but only coarsely.
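For reference, the following is a minimal sketch of how these three coefficients and their $p$-values can be computed with SciPy; the arrays are illustrative placeholders, not the actual Astraios measurements.

```python
# Minimal sketch of the correlation analysis above; the two arrays are
# hypothetical placeholders standing in for the per-model statistics
# (e.g., number of updated parameters and final test loss).
import numpy as np
from scipy.stats import kendalltau, pearsonr, spearmanr

updated_params = np.array([1.2e6, 4.3e6, 8.9e6, 3.5e7, 1.6e9])  # placeholder values
final_test_loss = np.array([0.92, 0.88, 0.85, 0.83, 0.78])      # placeholder values

for name, fn in [("Kendall", kendalltau), ("Pearson", pearsonr), ("Spearman", spearmanr)]:
    stat, pval = fn(updated_params, final_test_loss)
    print(f"{name}: {stat:.4f} (p={pval:.2f})")
```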
## Appendix J Limitations and Future Work

#### Experiment Noise We note that our empirical results are based solely on a single run of each task, due to budget constraints that prevent us from tuning and evaluating the same Code LLMs multiple times. Although the single-evaluation approach limits the breadth of our results and may introduce unexpected experimental noise, it provides a preliminary insight into the performance and potential of PEFT in different scenarios. Future investigations with multiple runs are necessary to establish more robust conclusions and to understand the variance and reliability of our results.

#### Fair Evaluation To compare different PEFT strategies fairly, we have used the same training configurations described in Section 2.2. However, as we find in Section 3 that some PEFT strategies like Prompt Tuning can be sensitive to the training hyperparameters, using identical configurations may itself be unfair. On the other hand, finding the optimal hyperparameters for each PEFT strategy is impractical and can cost more than training with FFT. A more efficient approach is to reuse the hyperparameters from previous work, which motivates us to adopt the default settings in the PEFT library and the LLM-Adapters framework. Meanwhile, we believe there may be other practical approaches to benchmarking PEFT strategies, and we encourage the community to investigate further.

#### PEFT Strategy We note that many more PEFT strategies (Karimi Mahabadi et al., 2021; Zaken et al., 2022; Wang et al., 2022b; Edalati et al., 2022) have been proposed recently. Due to the limited computation budget, we do not include them all in our Astraios model suite. However, we have made all our source code, data, and models publicly available. We encourage future work that analyzes PEFT strategies on Code LLMs, which will help design more efficient strategies.

#### Data Scaling One limitation of our work is that we do not verify the validity of data scaling for PEFT strategies. However, this factor has been well studied in various works (Kaplan et al., 2020; Hoffmann et al., 2022; Muennighoff et al., 2023b) for model pre-training and fine-tuning. As we find that the performance of PEFT on Code LLMs monotonically increases when scaling up the model size and training time, the selected PEFT strategies are likely aligned with the previous findings on data scaling. We recommend further verification of this aspect.

#### Model Architecture Another limitation of our study is that we do not vary the model architecture of the Code LLMs. It is possible that some findings may not generalize to encoder-decoder Code LLMs like CodeT5 (Wang et al., 2021) and CodeT5+ (Wang et al., 2023b). However, as StarCoder is built upon an enhanced GPT-2 (Radford et al.) architecture, we believe that our observations can be transferred to other GPT-based LLMs.

#### Scaling Parameter-Constrained Language Models Although we demonstrate the possibility of predicting the final loss based on the updated parameters and vice versa, we note that a scaling law generally needs more than 100 models and their final losses. Ideally, the training experiments should be run consistently across the different PEFT strategies, meaning that hundreds of models would need to be trained. Furthermore, task performance is hard to predict, as there is much more noise in the downstream tasks than in the final loss. We foresee that predicting such overall performance is very challenging.

## Appendix K Prompts

The prompting format can significantly impact performance.
In the spirit of true few-shot learning (Perez et al., 2021), we do not optimize prompts and go with the format provided by the respective model authors or the most intuitive format if none is provided. For each task not designed for evaluating instruction-tuned Code LLMs, we define an instruction. The instruction is to ensure that models behave correctly and that their outputs can be parsed effortlessly.

Question: {context} Is there a defect in the Code, and respond to YES or NO.
Answer:

Figure 15: Prompt for Devign.

Question: Code 1: {context_1} . Code 2: {context_2} Is there a clone relation between the Code1 and Code2, and respond to YES or NO.
Answer:

Figure 16: Prompt for BigCloneBench.

Question: {instruction} {context}
Answer: {function_start}

Figure 17: Prompt for HumanEvalPack.

Question: Create a Python script for this problem.
Answer: {function_start}

Figure 18: Prompt for Code Completion on ReCode.

Question: Create a script for this problem.
Answer: {function_start}

Figure 19: Prompt for Asleep At The Keyboard.

## Appendix L Timeline

#### Sep/2023 Experiment Design; Model Training; Model Evaluation.

#### Oct/2023 Model Training; Evaluation Discussion; Model Evaluation.

#### Nov/2023 Model Evaluation; Result Discussion; Paper Writing.

#### Dec/2023 Paper Finalization; Codebase Construction.
# Tensor Factorization for Leveraging Cross-Modal Knowledge in Data-Constrained Infrared Object Detection

Manish Sharma1∗ Moitreya Chatterjee2 Kuan-Chuan Peng2 Suhas Lohit2 Michael Jones2

1Rochester Institute of Technology, NY 14623, USA 2Mitsubishi Electric Research Laboratories, Cambridge, MA 02139, USA

<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>

Equal Contributions. Work done while interning at MERL.

###### Abstract

While state-of-the-art object detection methods have reached some level of maturity for regular RGB images, there is still some distance to be covered before these methods perform comparably on Infrared (IR) images. The primary bottleneck towards accomplishing this goal is the lack of sufficient labeled training data in the IR modality, owing to the cost of acquiring such data. Realizing that object detection methods for the RGB modality are quite robust (at least for some commonplace classes, like person, car, etc.), thanks to the giant training sets that exist, in this work we seek to leverage cues from the RGB modality to scale object detectors to the IR modality, while preserving model performance in the RGB modality. At the core of our method is a novel tensor decomposition technique called _TensorFact_, which splits the convolution kernels of a layer of a Convolutional Neural Network (CNN) into low-rank factor matrices, with fewer parameters than the original CNN. We first pre-train these factor matrices on the RGB modality, for which plenty of training data are assumed to exist, and then augment them with only a few trainable parameters for training on the IR modality, so as to avoid over-fitting, while encouraging these new parameters to capture cues complementary to those learned from the RGB modality. We validate our approach empirically by first assessing how well our _TensorFact_-decomposed network performs at the task of detecting objects in RGB images vis-à-vis the original network, and then looking at how well it adapts to IR images of the FLIR ADAS v1 dataset. For the latter, we train models under scenarios that pose challenges stemming from data paucity. From the experiments, we observe that: (i) _TensorFact_ shows performance gains on RGB images; (ii) further, this pre-trained model, when fine-tuned, outperforms a standard state-of-the-art object detector on the FLIR ADAS v1 dataset by about $4\%$ in terms of mAP 50 score.

## 1 Introduction

Figure 1: Qualitative comparison of object detections by a state-of-the-art object detector (denoted as baseline) [51] and our TensorFact method on IR images. The orange, cyan, and green boxes denote the bicycle, person, and car classes respectively, while the associated numbers denote the confidence score of the prediction. The visualizations show that our proposed approach is better at capturing more objects, especially those that are of a smaller size, with higher precision.

The success of deep neural networks in core computer vision tasks, such as image denoising [42], image classification [44], object detection [8, 40], _etc_., can at least in part be attributed to the availability of large-scale labeled training data [46], which allows these models (with lots of parameters) to avoid over-fitting [11]. This has resulted in wide-ranging applicability of these methods in tasks such as pedestrian detection in vehicles [35], face detection [48], vehicle counting [57], _etc_. One key element that made such large-scale data available is the ubiquity of good-quality RGB cameras, which come at throwaway prices.
This coupled with the popularity of online platforms for sharing content widely, including social media sites such as YouTube or Meta, meant that sharing such images at a large-scale became commonplace. However, from the standpoint of certain applications, such as autonomous driving, regular RGB images fall short on some important counts. For instance, while RGB images can provide clear visualization of the surroundings during the day, at night, RGB images are only useful if there is sufficient street lighting, _etc_. In scenarios where the ambient light is insufficient, passive thermal Infrared (IR) cameras come in handy for tasks such as pedestrian detection, as thermal IR sensors capture scenes at wavelengths beyond the visible spectrum and are sensitive to warm objects, such as the human body [4]. Nonetheless, one catch that remains is that IR cameras are not as cheap as their RGB counterparts and are thus not as ubiquitous. This poses a major hurdle in acquiring the profuse amounts of images needed to train deep networks that could operate on IR images at performance levels similar to their RGB counterparts. In such conditions, an overparameterized model results in overfitting, which has an impact on model generalisation and performance. Therefore, a reduction in the number of parameters may be needed for improved performance. Low-rank factorization methods are among the most popular methods towards this end and are utilized for different deep learning applications [41, 22, 23]. While the success of deep neural networks today spans several computer vision tasks, the task of object detection is of particular interest in this paper. The task entails localizing the pixels which an object occupies in an image as well as labeling the cluster of pixels with the class to which the said object belongs. Solving this task is crucial, since it permits acquiring a greater understanding of what an image contains and is often a first step towards understanding the scene [32]. Given the importance of IR images, as a modality for the task of scene understanding, designing effective object detection models that work on such data becomes critical. Nonetheless, the paucity of sufficient training data (_i.e_., datasets with lots of IR images) continues to present a challenge to this end. In this work, we leverage the observation that while sufficient training data in the IR modality may be difficult to find, such data for the RGB modality is easily available. The key idea in our approach then, is to train an object detection model in the RGB modality and to then transfer the common cross- modal cues to the IR modality where only a few parameters can be trained to capture the complementary cues necessary for successfully detecting objects in the IR image space. Concretely, we devise a novel method called _TensorFact_ , which splits the convolution kernel weights of a CNN layer into low-rank factor matrices, with fewer trainable parameters. These factor matrices can be trained to capture the common cues for detecting objects, across modalities, by leveraging the RGB data. These weights can then be augmented with only a few, new learnable parameters to capture the cues specific to the IR modality. This design allows us to train only the relatively small number of IR modality-specific weights when training with IR images, allowing us to prevent over-fitting. 
Note that naïvely applying domain adaptation methods [1] to transfer from the RGB to the IR modality fails, because here the modality itself switches between the source (RGB) and the target (IR), which represents a big shift in the data distribution. We conduct experiments on the FLIR ADAS v1 dataset [49] of IR images to empirically validate the efficacy of our method. To derive the common object detection cues from RGB images, we use the FLIR Aligned RGB images [13]. Our experiments show that the _TensorFact_ decomposition helps achieve better object detection performance on both RGB and IR images, even when the latter has few training samples. In particular, on the IR dataset (FLIR ADAS v1), our method outperforms a competing state-of-the-art object detection model [51] by $4\%$ on mAP 50, underscoring its efficacy. Figure 1 contrasts detections obtained by our method with those of a recent state-of-the-art detection baseline, YOLOv7 [51], on the FLIR ADAS v1 dataset. From the figure, we see that our approach is more capable of detecting objects of different sizes than the state-of-the-art approach. We summarize below the core contributions of our work.

* We present _TensorFact_, a novel tensor decomposition-based method that can leverage both modality-specific and cross-modal cues for effective object detection in the IR modality, where acquiring sufficient training data is a challenge.

* Our experiments reveal that our proposed method outperforms competing approaches at the task of object detection in a data-sparse IR modality, with only 62 training images, by $4\%$ on mAP 50.

* Our formulation also offers a supplementary contribution to the RGB modality, yielding a compressed neural network that improves object detection in this modality.

## 2 Related works

In this section, we discuss prior works relevant to our paper and highlight the distinction between these approaches and our method.

Object detection approaches in IR images: The journey of object detection in RGB images using deep learning has come a long way [38, 36, 41, 51]. The inception of a two-stage object detection process involving proposal generation and object class prediction, initiated by the work of Girshick _et al_. [16] for RGB images, laid the foundation for the field, although the computational intensity of the process necessitated faster successors [15, 38, 50, 18, 47]. However, porting these approaches to the realm of IR image object detection has posed certain challenges. The studies by Ghose _et al_. [14] and Devagupta _et al_. [7] sought to enhance infrared image features using saliency maps and a multimodal Faster R-CNN, respectively. These efforts, however, encountered challenges such as slow inference speed, non end-to-end multitask training, and a lack of general applicability across different datasets. To overcome the limitations of two-stage detectors, the work by Redmon and Farhadi [36] introduced a one-stage detector, YOLO, which considered each image cell as a proposal for object detection and achieved end-to-end real-time detection. YOLO's evolution into YOLOv3 [37], YOLOv4 [3], and its subsequent variants, as documented by Kristo _et al_. [26], has accelerated the detection of objects in both RGB and IR images, though issues of omission of small-scale objects and low detection accuracy persist.
Innovative modifications like the SE block in SE-YOLO [27] and the attention module, CIoU loss, improved Soft-NMS, and depthwise separable convolution in YOLO-ACN [31] were proposed to improve detection accuracy, but these methods still grapple with challenges like large parameter sizes and applicability to embedded settings. Other one-stage models have been explored, including ThermalDet [5] and TIRNet [6], each of which offers different solutions to the aforesaid problems but falls short when tested on real-world, non-curated datasets. Song _et al_. [45] proposed a multispectral feature fusion network based on YOLOv3, showing promise for smaller-sized images. The YOLO series has shown considerable potential for IR object detection, and several variants of it have been proposed. These include the network of Shuigen _et al_. [43], an attention mechanism-infused YOLOv3 [14], and a YOLOv3 enhanced with a category balance loss term [30]. Further refinements in object detection have been achieved by using the SAF architecture [34] and the YOLO-FIRI model [29], which incorporate optimization parameters, introduce dilated convolutional block attention modules, and enable the detection of smaller IR targets. Zhao _et al_. [58] and Du _et al_. [10] have contributed to the field by improving the fusion method of YOLOv3 and leveraging YOLOv4 to enhance IR target features, respectively, paving a promising path for future IR object detection research. While we draw on these models when designing the backbone of our proposed approach, none of them provides a way to mitigate the data paucity issue in the IR modality, which we address front and center.

Domain adaptation methods: The community has explored domain adaptation methods to overcome the challenges associated with limited training data in certain domains. Towards this end, several works have been proposed [17, 54, 39, 56, 53], including those that progressively transition from one domain to another [21], transition through multiple levels of granularity [59], or use semi-supervised [52, 9] or unsupervised learning [55, 28] techniques for the same purpose. Nonetheless, these approaches tackle scenarios that represent reasonably minor shifts in the domain of the input data, say from clear RGB images to foggy RGB images [12] and so on. Our task, however, deals with much larger-scale shifts in the type of input, in particular from the RGB to the IR modality. The change is so stark that certain objects are visible in a given modality only under specific scenarios. For instance, warm-bodied, dimly lit objects are visible only in IR images but are very difficult to see in RGB images. This prevents us from trivially adapting these approaches for our task. While some more recent methods have looked into domain adaptation techniques for IR detection tasks, these are fairly limited in scope [24, 20] and focus mostly on detecting people, not other classes. Importantly, none of these approaches simulates the training-data-paucity scenario for the IR modality, something we consider in this work.

## 3 Proposed approach

In this work, we propose _TensorFact_, a novel tensor decomposition-based method designed to tackle the paucity of labeled training data in the IR modality. It effectively leverages knowledge learned from the RGB modality, where training data is abundant, and efficiently transfers this knowledge to the IR modality, overcoming the data scarcity challenge.
Initially, we learn two trainable low-rank factor matrices, whose product yields the weights of each layer of the CNN, and train them to detect objects in the source RGB modality. This representation cuts down on the number of learnable parameters in the network and facilitates the training of a more generalizable network (due to less over-fitting) on the RGB modality. Following this, in order to facilitate object detection in the IR modality, we enhance the network's capacity through a minor expansion of the number of trainable parameters. This is achieved by increasing the number of columns/rows of the factor matrices. The factor matrices that emerge from the added columns/rows effectively serve as a parallel trainable branch, enabling the network to leverage the complementary information gleaned from the RGB modality for object detection in the IR modality. In this way, _TensorFact_ affords us a practical solution to the challenge of limited training data in the IR modality, demonstrating how robust and transferable features can be effectively extracted and utilized across different modalities.

### 3.1 Notation

In this paper, we utilize the following conventions: lowercase letters such as $x$ denote scalar variables, vectors are symbolized by boldface lowercase letters like $\mathbf{x}$, and matrices are depicted by boldface uppercase letters such as $\mathbf{X}$. Tensors, on the other hand, are indicated by calligraphic uppercase letters (for instance, $\pazocal{X}$). $\mathbb{R}$ denotes the set of real numbers. To denote a component of a vector, matrix, or tensor, we adopt the $[\cdot]_{i}$ notation, where $i$ represents the set of indices of that component.

Figure 2: Decomposed convolutional layer.

### 3.2 Decomposed convolution layer

The weights of a convolutional layer in a CNN, denoted by $\pazocal{K}\in\mathbb{R}^{T\times S\times D_{2}\times D_{1}}$, form a $4$-way tensor, where $D_{1}$ and $D_{2}$ represent the width and height, respectively, of the spatial window of the convolution kernels, while $S$ and $T$ denote the number of input channels to the layer and the number of kernels learned in the layer. The number of trainable parameters in a standard convolutional layer is then given by $P=TSD_{2}D_{1}$. For a decomposed convolutional layer, we commence with two trainable factors $\mathbf{A}\in\mathbb{R}^{TS\times r}$ and $\mathbf{B}\in\mathbb{R}^{r\times D_{2}D_{1}}$, with $r$ serving as their inner dimension, as shown in Figure 2; $r$ upper-bounds the rank of the reconstructed weight matrix. These combine to form the intermediate matrix $\mathbf{M}=\mathbf{A}\mathbf{B}$, as follows:

$[\mathbf{M}]_{p,q}=\sum_{c=1}^{r}[\mathbf{A}]_{p,c}[\mathbf{B}]_{c,q},$ (1)

where $p=1,\ldots,TS$ and $q=1,\ldots,D_{2}D_{1}$. This matrix $\mathbf{M}$ holds, in flattened form, the weights that operate on the input to the layer. The convolutional filter $\pazocal{K}$ is derived from $\mathbf{M}$ as:

$[\pazocal{K}]_{t,s,d_{2},d_{1}}=[\mathbf{M}]_{(t-1)S+s,(d_{2}-1)D_{1}+d_{1}},$ (2)

where $t=1,\ldots,T$, $s=1,\ldots,S$, $d_{2}=1,\ldots,D_{2}$, and $d_{1}=1,\ldots,D_{1}$. Therefore, the number of trainable parameters in the decomposed convolutional layer formulation, $P_{fac}$, is a function of $r$, resulting in $P_{fac}=r(TS+D_{2}D_{1})$ trainable parameters. The value of $r$ can be altered to adapt to the necessary CNN complexity, but typically $r\leq r^{max}$, where $r^{max}=\min(TS,D_{2}D_{1})$.
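For illustration, the following is a minimal PyTorch sketch of such a decomposed convolutional layer, assuming a square spatial window; the class and argument names are ours, and a generic initialization is used here, whereas Section 3.4 describes an SVD-based initialization.

```python
# Minimal sketch of the decomposed convolutional layer of Eqs. (1)-(2).
# Assumptions: square kernels, no bias term, and generic (Kaiming) initialization
# in place of the SVD-based initialization described in Section 3.4.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FactorizedConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, rank, stride=1, padding=0):
        super().__init__()
        T, S, D = out_channels, in_channels, kernel_size
        self.kernel_shape = (T, S, D, D)
        self.stride, self.padding = stride, padding
        # Factor matrices A (TS x r) and B (r x D2*D1), cf. Eq. (1).
        self.A = nn.Parameter(torch.empty(T * S, rank))
        self.B = nn.Parameter(torch.empty(rank, D * D))
        nn.init.kaiming_uniform_(self.A)
        nn.init.kaiming_uniform_(self.B)

    def forward(self, x):
        M = self.A @ self.B             # intermediate matrix M = AB
        K = M.view(*self.kernel_shape)  # reshape into the 4-way kernel K, cf. Eq. (2)
        return F.conv2d(x, K, stride=self.stride, padding=self.padding)


# Parameter count example: for S = 64, T = 128 and a 3x3 kernel, a dense layer has
# P = 128 * 64 * 3 * 3 = 73,728 weights, whereas rank r = 4 gives
# P_fac = 4 * (128 * 64 + 9) = 32,804 trainable parameters.
```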
Since CNNs are known to be over-parameterized [11], one could choose $r$ such that the number of learnable parameters is smaller than the number of entries in $\mathbf{M}$, to avoid the risk of over-fitting.

### 3.3 Capacity augmentation

To augment the network capacity to accommodate the new modality, we increase $r$ by $\Delta{r}$ (where $\Delta{r}>0$) for both matrices $\mathbf{A}$ and $\mathbf{B}$, thereby producing $\mathbf{A^{\prime}}\in\mathbb{R}^{TS\times(r+\Delta{r})}$ and $\mathbf{B^{\prime}}\in\mathbb{R}^{(r+\Delta{r})\times D_{2}D_{1}}$ with $r+\Delta{r}$ serving as their new inner dimension. Now, $\mathbf{A^{\prime}}$ and $\mathbf{B^{\prime}}$ can be interpreted as $\mathbf{A^{\prime}}=\begin{bmatrix}\mathbf{A}\ ||\Delta{\mathbf{A}}\end{bmatrix}$ and $\mathbf{B^{\prime}}=\begin{bmatrix}\mathbf{B}^{T}\ ||\Delta{\mathbf{B}}^{T}\end{bmatrix}^{T}$, such that $\Delta{\mathbf{A}}\in\mathbb{R}^{TS\times\Delta{r}}$ and $\Delta{\mathbf{B}}\in\mathbb{R}^{\Delta{r}\times D_{2}D_{1}}$, where $||$ denotes column-wise concatenation (so the rows of $\Delta{\mathbf{B}}$ are appended below those of $\mathbf{B}$). Subsequently, $\mathbf{A^{\prime}}$ and $\mathbf{B^{\prime}}$ merge to form $\mathbf{M^{\prime}}=\mathbf{A^{\prime}}\mathbf{B^{\prime}}=\mathbf{M}+\Delta{\mathbf{M}}$, where $\Delta{\mathbf{M}}=\Delta{\mathbf{A}}\Delta{\mathbf{B}}$, as shown in Figure 3. Similar to Equation 2, $\Delta{\pazocal{K}}\in\mathbb{R}^{T\times S\times D_{2}\times D_{1}}$ can be derived from $\Delta{\mathbf{M}}$. Hence, increasing $r$ by $\Delta{r}$ results in a parallel architectural branch, as depicted in Figure 4. Therefore, the increase in the number of trainable parameters in a decomposed convolutional layer after capacity augmentation is given by $\Delta{P_{fac}}=\Delta{r}(TS+D_{2}D_{1})$. We seek to augment as few parameters as possible to ensure the detection network does not suffer from challenges related to over-fitting in the new modality. In particular, we ensure that the total number of network parameters of our proposed framework (considering those trained using only RGB and the augmented set) is less than that of the original unfactorized network.

Figure 3: Decomposed convolutional layer with capacity augmentation.

### 3.4 Training

For an object detector CNN with $L$ convolutional layers, let $\mathbf{A}_{l}$ and $\mathbf{B}_{l}$ represent the left and right factor matrices, respectively, for the $l^{th}$ decomposed convolutional layer, with $r_{l}$ representing their inner dimension and $l=1,\ldots,L$. When training for the data-rich source RGB modality, the network weights for the decomposed convolutional layers are SVD-initialized, leading to orthogonal column and row vectors in $\mathbf{A}_{l}$ and $\mathbf{B}_{l}$, respectively, with $r_{l}=\lfloor\alpha r_{l}^{max}\rfloor$. Here, $r_{l}^{max}=\min(TS,D_{2}D_{1})_{l}$ and $\alpha\in(0,1)$ controls the number of trainable parameters across layers. With $\alpha\leq 1$, the training process is straightforward and similar to that of a typical object detector network, leading to the learning of both generic and modality-specific features for the RGB data. Next, to train for the data-scarce IR modality, we augment the network capacity by increasing the value of $\alpha$, which introduces new trainable parameters and creates a parallel path for each decomposed convolutional layer. During this training phase, we freeze the trainable parameters learned during the training of the RGB modality, thereby architecturally promoting the learning of complementary features for the IR modality branch.
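To make the formulation of Sections 3.2 and 3.3 concrete, the following PyTorch-style sketch shows one possible implementation of a decomposed convolutional layer with optional capacity augmentation. It is a minimal illustration, not code released with the paper: the class name and hyperparameters are assumptions, kernels are assumed square, biases are omitted, and the factors are randomly initialized rather than SVD-initialized as described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecomposedConv2d(nn.Module):
    """Conv layer whose T x S x D x D kernel is built from low-rank factors
    A (TS x r) and B (r x D*D), following Eqs. (1)-(2)."""

    def __init__(self, in_ch, out_ch, ksize, rank, stride=1, padding=0):
        super().__init__()
        self.T, self.S, self.D = out_ch, in_ch, ksize
        self.stride, self.padding = stride, padding
        # RGB factors (the paper SVD-initializes these; random init here).
        self.A = nn.Parameter(0.01 * torch.randn(out_ch * in_ch, rank))
        self.B = nn.Parameter(0.01 * torch.randn(rank, ksize * ksize))
        # Extra IR factors, created later by augment().
        self.register_parameter("dA", None)
        self.register_parameter("dB", None)

    def augment(self, delta_rank):
        """Freeze the RGB factors and add delta_rank extra columns/rows,
        i.e. the parallel branch of Section 3.3."""
        self.A.requires_grad_(False)
        self.B.requires_grad_(False)
        self.dA = nn.Parameter(0.01 * torch.randn(self.A.shape[0], delta_rank))
        self.dB = nn.Parameter(0.01 * torch.randn(delta_rank, self.B.shape[1]))

    def _kernel(self, A, B):
        # M = A B, reshaped to a T x S x D x D convolution kernel (Eq. 2).
        return (A @ B).view(self.T, self.S, self.D, self.D)

    def forward(self, x):
        y = F.conv2d(x, self._kernel(self.A, self.B),
                     stride=self.stride, padding=self.padding)
        if self.dA is not None:  # parallel IR branch, summed as in Figure 4
            y = y + F.conv2d(x, self._kernel(self.dA, self.dB),
                             stride=self.stride, padding=self.padding)
        return y
```

A plausible training recipe with such layers would first train on RGB data, then call `augment()` on every layer before fine-tuning on the IR data, so that only the $\Delta$ factors receive gradients.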
Akin to skip-connections in ResNets [19], which permit the learning of residual mappings, our proposed method leverages cross-modal cues and promotes the learning of features specific to the IR modality that were not learned during RGB modality training. As the factor matrices trained on the RGB data capture several cues essential for object detection, only a small percentage of augmented capacity is required for capturing the facets of object detection in the IR modality. This is an essential requirement for training the model without over-fitting in a data-scarce modality. Additionally, to explicitly capture complementary cues between the RGB and IR modalities, we maximize the $L_{2}$ or $L_{1}$ distance between the feature activation maps output by each branch (RGB and IR) of a layer and include this as an additional term in the training objective. This can be implemented by the following loss $L_{c}$: $L_{c}=-||\pazocal{K}*\pazocal{X}-\Delta{\pazocal{K}}*\pazocal{X}||_{p},$ (3) where $p\in\\{1,2\\}$ and $*$ denotes convolution. Note that the dimensions of $\pazocal{K}$ and $\Delta{\pazocal{K}}$ are the same. The final loss function $L_{f}$ of _TensorFact_ can be written as follows: $L_{f}=L_{d}+\omega_{c}L_{c},$ (4) where $L_{d}$ is the object detection loss used in YOLOv7 [51], and $\omega_{c}$ is the weight of $L_{c}$. We minimize this loss using the ADAM optimizer [25].

Figure 4: The flow of data in our proposed _TensorFact_ approach in every layer. The input $\pazocal{X}$ convolves with $\pazocal{K}$ (top branch) and $\Delta{\pazocal{K}}$ (bottom branch) and results in output $\pazocal{Y}$ after summation.

## 4 Experiments

In this section we lay out the empirical evaluation conducted to validate the efficacy of our proposed approach.

Figure 5: Comparison of object detection results between the state-of-the-art YOLOv7 [51] and our proposed approach. We show the ground truth (left column), baseline (middle column), and proposed method’s ($\alpha=0.1$, right column) detections as rectangular bounding boxes. We show detections on two different images from the FLIR ADAS v1 IR validation dataset, one in each row. The orange, cyan, and green boxes denote the bicycle, person, and car classes, respectively, while the associated numbers denote the confidence score of the prediction.

Figure 6: Comparison of object detection results for the proposed method without (left column) and with (right column) $L_{1}$ regularization. The orange, cyan, and green boxes denote the bicycle, person, and car classes, respectively, while the associated numbers denote the confidence score of the prediction. We obtain better object detections using the $L_{1}$ regularization, as compared to the vanilla model, as manifested by the higher confidence scores for the predicted bounding boxes.

### 4.1 Experimental setup

Datasets: In our object detection experiments, we make use of two datasets: FLIR Aligned [13] and FLIR ADAS v1 [49]. The FLIR Aligned dataset contains RGB images, with ground-truth comprising bounding-box coordinates around objects in the image as well as class labels. This dataset includes $4129$ images for training and $1013$ images for validation, and features three classes: person, bicycle, and car, with the distribution of instances provided in Table 1. The FLIR ADAS v1 dataset consists of IR images. The ground-truth for this dataset includes bounding-box coordinates around objects in the image and their class labels, drawn from the same three classes: person, bicycle, and car.
The dataset includes $7859$ images for training and $1360$ images for validation. However, for fair comparative studies, we randomly split the original training set in an 80:20 ratio to create new train and validation sets consisting of $6287$ and $1572$ images, respectively. To mimic a data-scarce environment, we use $62$ randomly selected images ($1\%$ of the images) from the training set. Table 2 details the distribution of the FLIR ADAS v1 IR dataset classes, as used in our experiment.

Class | Training Instances | Validation Instances
---|---|---
Person | $8987$ | $4107$
Bicycle | $2566$ | $360$
Car | $20608$ | $4124$

Table 1: Distribution of class instances for the training and validation sets of the FLIR Aligned RGB dataset [13].

Class | Training Instances | Validation Instances
---|---|---
Person | $161$ | $4611$
Bicycle | $24$ | $842$
Car | $351$ | $8472$

Table 2: Distribution of class instances for the training ($1\%$) and validation sets of the FLIR ADAS v1 IR dataset [49].

Baseline network and evaluation metrics: We use YOLOv7 [51], a state-of-the-art object detector with over 37M trainable parameters, as our baseline network. To determine appropriate anchor box sizes for the detector, we use the K-Means++ method [2]. In evaluating the performance of our object detection model, we employ the Mean Average Precision (mAP), a widely used and robust metric in the field. mAP considers both precision and recall, ensuring a balance between detecting as many objects as possible and minimizing false positives. This is achieved by generating Precision-Recall (PR) curves for each object class in two different settings. In the first, the Intersection over Union (IoU) between predicted and ground-truth bounding boxes must exceed 0.5 for a prediction to be counted as a true positive, while in the second setting, multiple evaluations are performed with increasing thresholds from 0.5 to 0.95 in increments of 0.05. The Average Precision (AP) is then calculated as the area under each PR curve for every class, under each of these settings. We then take the mean of these APs across the different classes to get the mAP. The IoU threshold of 0.5 is used for the mAP 50 metric, while the range of IoU thresholds from 0.5 to 0.95 (in steps of 0.05) is used for the mAP 50-95 metric.

Implementation details: We train all models for $200$ epochs with a mini-batch size of $40$ images, where gradients are accumulated over $2$ mini-batch iterations prior to each parameter update. We use the ADAM optimizer [25] with a learning rate of $10^{-5}$ when training on the RGB modality and $10^{-3}$ when training on the IR modality. We use “reduce on plateau” as the learning rate scheduler, which reduces the learning rate by a factor of $0.1$ if the validation loss does not improve over $10$ epochs. Rather than initializing the RGB network from scratch, we initialize it with weights pre-trained for object detection on the MS-COCO dataset [33]. When explicitly encouraging complementarity between the RGB and IR branches, we set the weight $\omega_{c}=0.01$ in Eq. 4 such that both terms have a comparable range.
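Before turning to the results, the snippet below sketches how the complementarity term of Eq. (3) and the combined objective of Eq. (4) could be assembled for layers built as in the earlier `DecomposedConv2d` sketch. It is only an illustration under stated assumptions: `detection_loss` stands in for the full YOLOv7 detection loss, and the per-layer complementarity terms are simply summed, a choice the paper does not spell out.

```python
import torch
import torch.nn.functional as F

def complementarity_loss(x, layer, p=1):
    """Eq. (3): negative L_p distance between the RGB-branch and IR-branch
    feature maps of one augmented DecomposedConv2d layer."""
    rgb_feat = F.conv2d(x, layer._kernel(layer.A, layer.B),
                        stride=layer.stride, padding=layer.padding)
    ir_feat = F.conv2d(x, layer._kernel(layer.dA, layer.dB),
                       stride=layer.stride, padding=layer.padding)
    return -torch.norm(rgb_feat - ir_feat, p=p)

def tensorfact_loss(detection_loss, comp_terms, omega_c=0.01):
    """Eq. (4): L_f = L_d + omega_c * L_c, with L_c summed over layers here."""
    return detection_loss + omega_c * sum(comp_terms)
```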
### 4.2 Results and analysis

Model | # Parameters$\downarrow$ | Compression ($\%$)$\uparrow$ | mAP 50$\uparrow$ | mAP 50-95$\uparrow$
---|---|---|---|---
Baseline | 37,205,480 | 0 | $0.6826$ | 0.3173
_TensorFact_ ($\alpha=0.9$) | 35,400,800 | $4.8506$ | 0.6948 | $0.3162$
_TensorFact_ ($\alpha=0.8$) | 33,594,257 | 9.7062 | $0.6879$ | $0.3168$

Table 3: Results for the FLIR Aligned RGB validation dataset.

Model | # Trainable Params$\downarrow$ | Compression ($\%$)$\uparrow$ | mAP 50$\uparrow$ | mAP 50-95$\uparrow$
---|---|---|---|---
Baseline | 37,205,480 | 0 | $0.5849$ | $0.2807$
_TensorFact_ ($\alpha=0.1$) | 1,856,343 | 95.01 | $0.6205$ | 0.2807
_TensorFact_ ($\alpha=0.2$) | 3,662,886 | $90.16$ | 0.6213 | $0.2794$

Table 4: Results for the FLIR ADAS v1 IR validation dataset.

In Table 3, we present the evaluation results of the proposed and baseline methods for the FLIR Aligned RGB validation dataset for the task of object detection in RGB images. From the table, we observe that our proposed method demonstrates comparable, if not superior, performance to the baseline model in terms of both the mAP 50 and mAP 50-95 evaluation metrics, across varying values of $\alpha$. Interestingly, while a reduction in the value of $\alpha$ leads to significant compression in the model’s size, our proposed method successfully maintains, and in certain instances enhances, model performance. We hypothesize that this is because the decrease in the number of trainable parameters reduces the chance of over-fitting. Table 4 presents the comparison between the baseline and the proposed _TensorFact_ method on the FLIR ADAS v1 IR validation dataset. For the proposed _TensorFact_ method, we employ two different $\alpha$ configurations, $0.1$ and $0.2$, such that the ratios $\\{\Delta r_{l}:r_{l}\\}_{l=1}^{L}$ are $1:9$ and $1:4$, respectively, for $l=1,2,\ldots,L$. We observe that both proposed model configurations outperform the baseline on the mAP 50 evaluation metric, with only a few additional trainable parameters in the IR branch. These results underscore the potential of our proposed method to efficiently learn and generalize with significantly fewer trainable parameters in a data-scarce environment like the IR modality, while leveraging cross-modal cues from the data-rich RGB modality. Lastly, in Table 5, we present results for augmenting the training objective with an explicit complementarity criterion, for $\alpha=0.1$ on the FLIR ADAS v1 IR validation dataset, to determine the impact of regularization that promotes the learning of complementary features for the IR modality beyond the pre-trained RGB modality. We observe that both the $L_{1}$ and $L_{2}$ regularization methods show slight improvements in detection performance compared to the model without explicit regularization.

Regularization | mAP 50$\uparrow$ | mAP 50-95$\uparrow$
---|---|---
none | $0.6205$ | $0.2807$
$L_{1}$ | 0.6234 | 0.2823
$L_{2}$ | $0.6222$ | $0.2815$

Table 5: Results for _TensorFact_ with explicit complementarity regularization for $\alpha=0.1$ on the FLIR ADAS v1 IR validation dataset.

Qualitative results: In Figure 5, we compare the object detections of the baseline (middle column) and the proposed method ($\alpha=0.1$, right column) against the ground truth (left column). The results are displayed vertically for two different images from the FLIR ADAS v1 IR validation dataset.
We observe that the baseline method fails to detect small, distant objects and objects whose backgrounds have a texture similar to the foreground, whereas the proposed method detects them accurately. This shows that the proposed method is more robust against false negatives than the baseline. Next, in Figure 6, we compare the object detection results for the proposed method without (left column) and with (right column) $L_{1}$ regularization and observe that this explicit regularization leads to more confident bounding-box detections.

## 5 Conclusions

In this work, we proposed _TensorFact_ – a novel object detection approach that captures cross-modal cues so as to generalize better to modalities with scarce training data. _TensorFact_ benefits from pre-training on modalities where plenty of training data is available (such as RGB), mitigating the data-scarcity challenges posed by the target modality (such as IR). In our formulation, the data-rich RGB modality is first used to learn the common cross-modal cues using a low-rank tensor factorization of the network weights. We then use the IR training data to learn only the cues complementary to the RGB modality (either explicitly or implicitly), thereby requiring fewer trainable parameters. We empirically validate the efficacy of our method on the task of object detection in IR images by pre-training our network on RGB object detection datasets, and show that _TensorFact_ yields performance boosts for object detection in both RGB and IR images without an increase in the total number of network parameters.

## References

* [1] Sk Miraj Ahmed, Suhas Lohit, Kuan-Chuan Peng, Michael J Jones, and Amit K Roy-Chowdhury. Cross-modal knowledge transfer without task-relevant source data. In European Conference on Computer Vision, pages 111–127. Springer, 2022. * [2] David Arthur and Sergei Vassilvitskii. K-means++: The advantages of careful seeding. In Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms, pages 1027–1035, 2007. * [3] Alexey Bochkovskiy, Chien-Yao Wang, and Hong-Yuan Mark Liao. YOLOv4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934, 2020. * [4] Angela L Campbell, Rajesh R Naik, Laura Sowards, and Morley O Stone. Biological infrared imaging and sensing. Micron, 33(2):211–225, 2002. * [5] Yu Cao, Tong Zhou, Xinhua Zhu, and Yan Su. Every feature counts: An improved one-stage detector in thermal imagery. In 2019 IEEE 5th International Conference on Computer and Communications (ICCC), pages 1965–1969. IEEE, 2019. * [6] Xuerui Dai, Xue Yuan, and Xueye Wei. TIRNet: Object detection in thermal infrared images for autonomous driving. Applied Intelligence, 51:1244–1261, 2021. * [7] Chaitanya Devaguptapu, Ninad Akolekar, Manuj M Sharma, and Vineeth N Balasubramanian. Borrow from anywhere: Pseudo multi-modal object detection in thermal imagery. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 0–0, 2019. * [8] Mayur Dhanaraj, Manish Sharma, Tiyasa Sarkar, Srivallabha Karnam, Dimitris G Chachlakis, Raymond Ptucha, Panos P Markopoulos, and Eli Saber. Vehicle detection from multi-modal aerial imagery using YOLOv3 with mid-level fusion. In Big data II: learning, analytics, and applications, volume 11395, pages 22–32. SPIE, 2020. * [9] Jeff Donahue, Judy Hoffman, Erik Rodner, Kate Saenko, and Trevor Darrell. Semi-supervised domain adaptation with instance constraints.
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 668–675, 2013. * [10] Shuangjiang Du, Baofu Zhang, Pin Zhang, Peng Xiang, and Hong Xue. FA-YOLO: An improved YOLO model for infrared occlusion object detection under confusing background. Wireless Communications and Mobile Computing, 2021:1–10, 2021. * [11] Abhimanyu Dubey, Moitreya Chatterjee, and Narendra Ahuja. Coreset-based neural network compression. In Proceedings of the European Conference on Computer Vision (ECCV), pages 454–470, 2018. * [12] Özgür Erkent and Christian Laugier. Semantic segmentation with unsupervised domain adaptation under varying weather conditions for autonomous vehicles. IEEE Robotics and Automation Letters, 5(2):3580–3587, 2020. * [13] FLIR aligned. FLIR Aligned Dataset, 2020. Accessed: August 20, 2022. * [14] Debasmita Ghose, Shasvat M Desai, Sneha Bhattacharya, Deep Chakraborty, Madalina Fiterau, and Tauhidur Rahman. Pedestrian detection in thermal images using saliency maps. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 0–0, 2019. * [15] Ross Girshick. Fast R-CNN. In Proceedings of the IEEE international conference on computer vision, pages 1440–1448, 2015. * [16] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 580–587, 2014. * [17] Dayan Guan, Jiaxing Huang, Aoran Xiao, Shijian Lu, and Yanpeng Cao. Uncertainty-aware unsupervised domain adaptation in object detection. IEEE Transactions on Multimedia, 24:2502–2514, 2021. * [18] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In Proceedings of the IEEE international conference on computer vision, pages 2961–2969, 2017. * [19] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016. * [20] Christian Herrmann, Miriam Ruf, and Jürgen Beyerer. CNN-based thermal infrared person detection by domain adaptation. In Autonomous Systems: Sensors, Vehicles, Security, and the Internet of Everything, volume 10643, pages 38–43. SPIE, 2018. * [21] Han-Kai Hsu, Chun-Han Yao, Yi-Hsuan Tsai, Wei-Chih Hung, Hung-Yu Tseng, Maneesh Singh, and Ming-Hsuan Yang. Progressive domain adaptation for object detection. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 749–757, 2020. * [22] Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2021. * [23] Siddhartha Rao Kamalakara, Acyr Locatelli, Bharat Venkitesh, Jimmy Ba, Yarin Gal, and Aidan N Gomez. Exploring low rank training of deep neural networks. arXiv preprint arXiv:2209.13569, 2022. * [24] My Kieu, Andrew D Bagdanov, Marco Bertini, and Alberto Del Bimbo. Task-conditioned domain adaptation for pedestrian detection in thermal imagery. In European Conference on Computer Vision, pages 546–562. Springer, 2020. * [25] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. * [26] Mate Krišto, Marina Ivasic-Kos, and Miran Pobar. Thermal object detection in difficult weather conditions using YOLO. IEEE access, 8:125459–125476, 2020. 
* [27] M e Li, Tao Zhang, and W Cui. Research of infrared small pedestrian target detection based on YOLOv3. Infrared Technol, 42:176–181, 2020. * [28] Shuai Li, Jianqiang Huang, Xian-Sheng Hua, and Lei Zhang. Category dictionary guided unsupervised domain adaptation for object detection. In Proceedings of the AAAI conference on artificial intelligence, volume 35, pages 1949–1957, 2021. * [29] Shasha Li, Yongjun Li, Yao Li, Mengjun Li, and Xiaorong Xu. YOLO-FIRI: Improved YOLOv5 for infrared image object detection. IEEE access, 9:141861–141875, 2021. * [30] Wei Li. Infrared image pedestrian detection via yolo-v3. In 2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), volume 5, pages 1052–1055. IEEE, 2021\. * [31] Yongjun Li, Shasha Li, Haohao Du, Lijia Chen, Dongming Zhang, and Yao Li. YOLO-ACN: Focusing on small target and occluded object detection. IEEE access, 8:227288–227303, 2020. * [32] Yikang Li, Wanli Ouyang, Bolei Zhou, Kun Wang, and Xiaogang Wang. Scene graph generation from objects, phrases and region captions. In Proceedings of the IEEE international conference on computer vision, pages 1261–1270, 2017. * [33] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740–755. Springer, 2014. * [34] Samah AF Manssor, Shaoyuan Sun, Mohammed Abdalmajed, and Shima Ali. Real-time human detection in thermal infrared imaging at night using enhanced Tiny-yolov3 network. Journal of Real-Time Image Processing, pages 1–14, 2022. * [35] Wanli Ouyang and Xiaogang Wang. Joint deep learning for pedestrian detection. In Proceedings of the IEEE international conference on computer vision, pages 2056–2063, 2013. * [36] Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 779–788, 2016. * [37] Joseph Redmon and Ali Farhadi. YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767, 2018. * [38] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28, 2015. * [39] Adrian Lopez Rodriguez and Krystian Mikolajczyk. Domain adaptation for object detection via style consistency. arXiv preprint arXiv:1911.10033, 2019. * [40] Manish Sharma, Mayur Dhanaraj, Srivallabha Karnam, Dimitris G Chachlakis, Raymond Ptucha, Panos P Markopoulos, and Eli Saber. YOLOrs: Object detection in multimodal remote sensing imagery. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 14:1497–1508, 2020. * [41] Manish Sharma, Panos P Markopoulos, and Eli Saber. Yolors-lite: A lightweight cnn for real-time object detection in remote-sensing. In 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, pages 2604–2607. IEEE, 2021. * [42] Manish Sharma, Panos P Markopoulos, Eli Saber, M Salman Asif, and Ashley Prater-Bennette. Convolutional auto-encoder with tensor-train factorization. In Proceedings of the IEEE/CVF international conference on computer vision, pages 198–206, 2021. * [43] Wei Shuigen, Wang Chengwei, Chen Zhen, Z Congxuan, and Z Xiaoyu. Infrared dim target detection based on human visual mechanism. 
Acta Photonica Sinica, 50(1):0110001, 2021. * [44] Saurav Singh, Manish Sharma, Jamison Heard, Jesse D Lew, Eli Saber, and Panos P Markopoulos. Multimodal aerial view object classification with disjoint unimodal feature extraction and fully-connected-layer fusion. In Big Data V: Learning, Analytics, and Applications, volume 12522, page 1252206. SPIE, 2023. * [45] Xiaoru Song, Song Gao, and Chaobo Chen. A multispectral feature fusion network for robust pedestrian detection. Alexandria Engineering Journal, 60(1):73–85, 2021. * [46] Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In Proceedings of the IEEE international conference on computer vision, pages 843–852, 2017. * [47] Peize Sun, Rufeng Zhang, Yi Jiang, Tao Kong, Chenfeng Xu, Wei Zhan, Masayoshi Tomizuka, Lei Li, Zehuan Yuan, Changhu Wang, et al. Sparse R-CNN: End-to-end object detection with learnable proposals. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14454–14463, 2021. * [48] Yi Sun, Ding Liang, Xiaogang Wang, and Xiaoou Tang. DeepID3: Face recognition with very deep neural networks. arXiv preprint arXiv:1502.00873, 2015. * [49] Teledyne Technologies Incorporated. FLIR ADAS v1 Dataset, 2020. Accessed: August 20, 2022. * [50] Evgeniya Ustinova and Victor Lempitsky. Learning deep embeddings with histogram loss. Advances in neural information processing systems, 29, 2016. * [51] Chien-Yao Wang, Alexey Bochkovskiy, and Hong-Yuan Mark Liao. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7464–7475, 2023. * [52] Yan Wang, Junbo Yin, Wei Li, Pascal Frossard, Ruigang Yang, and Jianbing Shen. SSDA3D: Semi-supervised domain adaptation for 3D object detection from point cloud. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 2707–2715, 2023. * [53] Xing Wei, Shaofan Liu, Yaoci Xiang, Zhangling Duan, Chong Zhao, and Yang Lu. Incremental learning based multi-domain adaptation for object detection. Knowledge-Based Systems, 210:106420, 2020. * [54] Xingxu Yao, Sicheng Zhao, Pengfei Xu, and Jufeng Yang. Multi-source domain adaptation for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3273–3282, 2021. * [55] Fuxun Yu, Di Wang, Yinpeng Chen, Nikolaos Karianakis, Tong Shen, Pei Yu, Dimitrios Lymberopoulos, Sidi Lu, Weisong Shi, and Xiang Chen. Unsupervised domain adaptation for object detection via cross-domain semi-supervised learning. arXiv preprint arXiv:1911.07158, 2019. * [56] Dan Zhang, Mao Ye, Yiguang Liu, Lin Xiong, and Lihua Zhou. Multi-source unsupervised domain adaptation for object detection. Information Fusion, 78:138–148, 2022. * [57] Shanghang Zhang, Guanhang Wu, Joao P Costeira, and José MF Moura. FCN-rLSTM: Deep spatio-temporal neural networks for vehicle counting in city cameras. In Proceedings of the IEEE international conference on computer vision, pages 3667–3676, 2017. * [58] Xiaofeng Zhao, Yebin Xu, Fei Wu, Wei Cai, and Zhili Zhang. IYOLO: Multi-scale infrared target detection method based on bidirectional feature fusion. In Journal of Physics: Conference Series, volume 1873, page 012020\. IOP Publishing, 2021. * [59] Wenzhang Zhou, Dawei Du, Libo Zhang, Tiejian Luo, and Yanjun Wu. Multi-granularity alignment domain adaptation for object detection. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9581–9590, 2022.
We now prove <ref> by induction on $h$. If $h=1$, then <ref> follows from <ref> as $\Sigma_1=\Sigma_1^+$ by definition. Suppose then $h\geq 1$. We want to show that $\bigsqcup_{i=1}^{h+1} P_i=\bigsqcup_{\gamma\in\Sigma_{h+1}}\{\m F_\gamma(X,Y)\}$. But $\Sigma_{h+1}=\Sigma_{h+1}^+\sqcup\tilde\Sigma_{h}$, so, by <ref> and the inductive hypothesis, it suffices to show that \[\bigsqcup_{\gamma\in\tilde\Sigma_{h}}\{\m F_\gamma(X,Y)\}=\bigsqcup_{\gamma\in\Sigma_{h}}\{\m F_\gamma(X,Y)\}.\] But this easily follows from the definition of $\tilde\Sigma_{h}$ since $\m F_{\tilde\alpha}=\m F_\alpha$ for any $\alpha\in\Sigma_{\singpointnt_h,h}$ (Notation <ref>). To prove <ref>, first note that from <ref>, for any $h\leq n$ the indexed set $P_h$ is non-empty. Then <ref> is implied by <ref> if for any $f_\ell\in P_n$, the polynomial $f|_\ell(X)=f_\ell(X,0)$ has no non-zero multiple roots. But this follows from <ref> since $\bar C_\alpha$ is regular for any $\alpha\in\Sigma_n$, and so $f|_\gamma$ has no multiple roots in $\bar k^\times$ for any $\gamma\in\Sigma_h^+$ as $D_\gamma=k[X_{j_\gamma}^{\pm 1}]$ in this case (Lemma <ref>). As already observed, this concludes the proof of Theorem <ref>.
§ SUPERELLIPTIC EQUATIONS
Let $k$ be a perfect field and let $\bar k$ be an algebraic closure of $k$. Denote by $G_k$ the absolute Galois group of $k$. As an application of the construction presented in the previous sections, we consider a curve $C_{0,k}$ in $\G_{m,k}^2$ defined by an equation \[y^s=h(x),\] for some polynomial $h\in k[x]$ and some $s\in\Z_+$ not divisible by $\mathrm{char}(k)$. By convention the polynomial $f(x,y)$ defining $C_{0,k}$ will be $y^s-h(x)$. Denote by $C_0$ the curve $C_{0,k}\times_{k}\bar k$. Note that $C_0$ is smooth, but may not be connected, e.g. when $h(x)$ is an $s$-th power. Write \[h(x)=\sum_{i=m_0}^d c_i x^i,\qquad c_i\in k,\] where $c_{m_0}$ and $c_d$ are non-zero. We want to study a Baker's resolution of $C_0$ \[\dots\stackrel{s_{n+1}}{\twoheadrightarrow}C_{n+1}\stackrel{s_n}{\twoheadrightarrow}C_{n}\stackrel{s_{n-1}}{\twoheadrightarrow}\dots\stackrel{s_1}{\twoheadrightarrow}C_1\] as in Theorem <ref>, where the Galois-invariant sets $\singpointnt_n$ which the birational morphisms $s_n$ resolve are as large as possible, i.e. $\singpointnt_n=\bigsqcup_{\alpha\in\Sigma_n}\Sing(\bar C_\alpha)$. For the purpose of the construction of the Baker's resolution, set $x_1=x$. The Newton polygon $\Delta$ of $f$ always has at least two edges: $\ell_1$ with endpoints $(m_0,0)$, $(0,s)$ and normal vector $\gcd(m_0,s)^{-1}(s,m_0)$, and $\ell_2$ with endpoints $(d,0)$, $(0,s)$ and normal vector $\gcd(d,s)^{-1}(-s,-d)$. If $h$ is a monomial then $\Delta$ is a segment, otherwise $\Delta$ is a triangle. In the latter case, the third edge $\ell$ has endpoints $(m_0,0), (d,0)$ and normal vector $(0,1)$. Construct the completion $C_1$ of $C_0$ with respect to $\Delta$, as described in <ref>. For any $i=1,2$ let $v_i\neq(0,1)$ be the normal vector of $\ell_i$ and set $\alpha_i=(v_i,())\in\Sigma_1$. From Proposition <ref> it follows that \[f|_{\alpha_i}= X_1^\ast\cdot (a_l X_1^l+a_0),\qquad l\in\Z_+,\,\,\,a_0,a_l\in k^\times,\] where $\mathrm{char}(k)\nmid l$. In fact, if $i=1$ then $l=\gcd(m_0,s)$, $a_l=-c_{m_0}$, $a_0=1$, while if $i=2$ then $l=\gcd(d,s)$, $a_l=1$, $a_0=-c_d$. In particular, $f|_{\alpha_i}$ has no multiple roots in $\bar k^\times$. Suppose now that $h$ is not a monomial. Let $v=(0,1)$ be the normal vector of $\ell$ and let $\alpha=(v,())$ be the corresponding element of $\Sigma_1$.
Consider $\m F_\alpha\in\bar k[X_1,Y]$. Note that since $v=(0,1)$, we can choose $M_\alpha=\lb\begin{smallmatrix}\!1\!&0\!\\\!0\!&1\!\end{smallmatrix}\rb$ and so $\m F_\alpha=f(X_1,Y)$. In particular, $f|_\alpha=f(X_1,0)=-h(X_1)$. Since $D_\alpha=\bar k[X_1^{\pm 1}]$, the singular points of $\bar C_\alpha$ correspond to the non-zero multiple roots of $f|_\alpha$, or, equivalently, to the non-zero multiple roots of $h$. Hence $\singpointnt_1$ is the set of those points. If $\singpointnt_1=\varnothing$, then $C_1$ is (outer) regular. We deduce the following lemma. If $h$ has no multiple root in $\bar k^\times$, then $C_1$ is an outer regular (generalised) Baker's model of the smooth completion of $C_0$. Suppose $\singpointnt_1\neq\varnothing$. Construct the morphism $s_1:C_2\rightarrow C_1$ resolving $\singpointnt_1$. Let $v$ and $\alpha$ as above. Rename the variable $Y$ of $\m F_\alpha$ to $\tilde Y$, so that $\m F_\alpha\in \bar k[X_1,\tilde Y]$. Let $p\in\singpointnt_1$ and let $r\in\bar k^\times$ be the multiple root of $h$ corresponding to $p$. One has $\bar{\m G}_p=X_1-r$. Note that $\bar{\m G}_p$ does not divide $\m F_\alpha$, so choose $\tilde{\m G}_p=\bar{\m G}_p$. \[\m F_{\alpha,p}(\tilde X_2,\tilde Y)=\m F_\alpha(\tilde X_2+r,\tilde Y)=f(\tilde X_2+r,\tilde Y)=\tilde Y^s-h(\tilde X_2+r).\] It follows that the Newton polygon $\Delta_{\alpha,p}$ of $\m F_{\alpha,p}$ has a unique edge $\ell_r$ with normal vector in $\Z_+^2$. Denoting by $m_r$ the multiplicity of the root $r$ of $h$, the endpoints of $\ell_r$ are $(m_r,0)$, $(0,s)$ and $\beta_r=\gcd(m_r,s)^{-1}(s,m_r)$ is its normal vector. Let $\gamma_r=\beta_r\circ_{g_p}\alpha$, where $g_p$ is the polynomial related to $\tilde{\m G}_p$ by $M_\alpha$. Define $h_r(x)=h(x)/(x-r)^{m_r}\in\bar k[x]$. Then Proposition <ref> implies \[f|_{\gamma_r}(X_2)=X_2^\ast\cdot(-a_rX_2^{\gcd(m_r,s)}+1),\] where $a_r=h_r(r)$. In particular, since $\mathrm{char}(k)\nmid s$, the polynomial $f|_{\gamma_r}$ has no multiple root in $\bar k^\times$. Therefore $\bar C_{\gamma_r}$ is regular for any non-zero multiple root $r$ of $h$. Moreover, $\bar C_{\tilde\alpha}$ is also regular as $\bar C_{\tilde\alpha}\simeq \bar C_\alpha\setminus \singpointnt_1$. Recall the notation $\tilde\Sigma_1=\hat\Sigma_1\cup\{\tilde\alpha\}$, where $\hat\Sigma_1=\Sigma_1\setminus\{\alpha\}$. Since \[\Sigma_2=\{\gamma_r\mid r\,\,\text{multiple root of }h\}\cup\tilde\Sigma_1\] the schemes $\bar C_\gamma$ are regular for all $\gamma\in\Sigma_2$. We obtain the following result. If $h$ has multiple roots in $\bar k^\times$, then $C_1$ is singular, but $C_2$ is an outer regular generalised Baker's model of the smooth completion of $C_0$. Note that $C_2=\bigcup_{\gamma\in\Sigma_2} C_\gamma$ since $C_0\subseteq C_\gamma$ for any $\gamma\in\Sigma_2$. We want to give an explicit description of the curve $C_{2,k}=C_2/G_k$, when $h$ has multiple roots in $\bar k^\times$. First note that for any $\gamma\in\tilde\Sigma_1$ the polynomials defining the curves $C_\gamma$ have coefficients in $k$. Therefore $G_k\subseteq\mathrm{Aut}(C_\gamma)$ for all $\gamma\in\tilde\Sigma_1$ and the charts $C_\gamma/G_k$ of $C_{2,k}$ easily follows. It remains to describe the curve $\big(\bigcup_{\sigma\in G_k}C_{\gamma_{\sigma(r)}}\big)/G_k$ for any non-zero multiple root $r$ of $h$. Let $g\in k[x]$ be the minimal polynomial of a multiple root $r\in\bar k^\times$ of $h$. Let $m_r$, $h_r$, $\beta_r$, $\gamma_r$ as above. Set $s_r=\gcd(m_r,s)$. Note that $\ord_g(h)=m_r$. 
If $\lb\begin{smallmatrix}\!\delta_1\!&\delta_2\!\\\!\beta_1\!&\beta_2\!\end{smallmatrix}\rb$ is the matrix attached to $\beta_r$ used for the construction of $C_{\gamma_r}$ then \[\O_{C_{\gamma_r}}(C_{\gamma_r})=\sfrac{\bar k[X_1^{\pm 1},X_2^{\pm 1},Y]}{(1-X_2^{s_r}\cdot h_r(X_1), X_2^{\delta_1}Y^{\beta_1}-X_1+r)}.\] Define $g_r, h_g\in \bar k[x]$ by $g_r(x)=g(x)/(x-r)$, $h_g(x)=h(x)/g(x)^{m_r}$. Note that $g_r(X_1)$ is invertible in $\O_{C_{\gamma_r}}(C_{\gamma_r})$. Consider the homomorphism \[\phi_r:\sfrac{\bar k[X_1^{\pm 1},X_2^{\pm 1},Y]}{(1-X_2^{s_r}\cdot h_g(X_1), X_2^{\delta_1}Y^{\beta_1}-g(X_1))}\longrightarrow \sfrac{\bar k[X_1^{\pm 1},X_2^{\pm 1},Y]}{(1-X_2^{s_r}\cdot h_r(X_1), X_2^{\delta_1}Y^{\beta_1}-X_1+r)},\] taking $X_1\mapsto X_1$, $X_2\mapsto X_2\cdot g_r(X_1)^{\beta_2}$, $Y\mapsto Y\cdot g_r(X_1)^{-\delta_2}$. Let $A_g:=\mathrm{Dom}(\phi_r)$. Note that $\Spec A_g=C_{\gamma_g}$, where $\gamma_g=\beta_r\circ_{g}\alpha\in\Omega$. Then $\phi_r$ induces an open immersion $\iota_r:C_{\gamma_r}\hookrightarrow C_{\gamma_g}$. The glueing of the open immersions $\iota_{\sigma(r)}$, for $\sigma\in G_k$, gives an isomorphism \[\big(\textstyle\bigcup_{\sigma\in G_k}C_{\gamma_{\sigma(r)}}\big)\simeq C_{\gamma_g},\] commuting with the Galois action. Since $C_{\gamma_g}$ is defined by polynomials with coefficients in $k$, the quotient $C_{\gamma_g}/G_k$ is easy to describe, as required. § EXAMPLE Let $C_{0.\F_2}:f=0\subset \G_{m,\F_2}^2$ with $f=x_1^4+1+y^2+y^3$. Note that $C_{0,\F_2}$ is smooth. Write $C_0=C_{0,\F_2}\times_{\F_2}\bar \F_2$, where $\bar \F_2$ is an algebraic closure of $\F_2$. §.§ Construction of $C_1$ The Newton polygon $\Delta$ of $f$ is \[\begin{tikzpicture}[scale=0.5] \draw[->] (-0.6,0) -- (4.6,0) node[right] {$x_1$}; \draw[->] (0,-0.6) -- (0,3.6) node[above] {$y$}; \tkzDefPoint(4,0){A} \tkzDefPoint(0,2){B} \tkzDefPoint(0,3){C} \tkzDefPoint(0,0){O} \tkzLabelPoint[below](A){$(4,0)$} \tkzLabelPoint[left](C){$(0,3)$} \foreach \n in {O,A,B,C} \node at (\n)[circle,fill,inner sep=1.5pt]{}; \draw (A) -- (O) node [midway,below, fill=none] {$\ell_3$}; \draw (O) -- (C) node [midway,left, fill=none] {$\ell_1$}; \draw (C) -- (A) node [midway,right, above, fill=none] {$\ell_2$}; \end{tikzpicture}\] We want to construct the completion $C_1$ of $C_0$ with respect to $\Delta$ as explained in <ref>. For any edge $\ell_i$ of $\Delta$ let $\beta_i$ be the normal vector of $\ell_i$. Then $\beta_1=(1,0)$, $\beta_2=(-3,-4)$, $\beta_3=(0,1)$. Let $\alpha_i=(\beta_i,())\in\Sigma_1$ for $i=1,2,3$. Then $\Sigma_1=\{\alpha_1,\alpha_2,\alpha_3\}$ and \[C_1=C_{\alpha_1}\cup C_{\alpha_2}\cup C_{\alpha_3},\] where we omitted $C_0$ as $C_0\subset C_\alpha$ for every $\alpha\in\Sigma_1$. From Proposition <ref> the polynomials $f|_{\alpha_1}$ and $f|_{\alpha_2}$ are separable (up to a power of $X_1$) and so the corresponding curves $C_{\alpha_1}$ and $C_{\alpha_2}$ are regular. On the other hand, $1\in \F_2$ is a non-zero multiple root of $f|_{\alpha_3}$, so $C_{\alpha_3}$ may be singular. Let us compute the defining polynomial $\m{F}_{\alpha_3}$. The identity matrix $I\in\SL_2(\Z)$ is attached to $\beta_3$, so we fix $M_{\alpha_3}=I$. Via $I$ we get \[\m{F}_{\alpha_3}=X_1^4+1+Y^2+Y^3.\] Then $C_{\alpha_3}=\Spec \bar\F_2[X_1^{\pm1},Y]/(\m{F}_{\alpha_3})$ is singular. Thus $C_1$ is not smooth, having $1$ singular point, visible on $C_{\alpha_3}$. §.§ Construction of $C_2$ Rename the variable $Y$ of $C_{\alpha_3}$ to $\tilde Y$. Let $p$ be the singular point of $C_{\alpha_3}$. 
Then $\bar{\m G}_{p}=X_1+1$. Choose $\tilde{\m G}_{p}=\bar{\m G}_{p}$. We will construct the morphism $s_1:C_2\rightarrow C_1$ resolving the set $\singpointnt_1=\{p\}$. Note that $\singpointnt_1=\bigsqcup_{\alpha\in\Sigma_1}\Sing(\bar C_\alpha)$. Let $\alpha=\alpha_3$ and $\beta=\beta_3$. \[\tilde{\m{G}}_{p}\lb(x_1,y)\bullet M_{\alpha}^{-1}\rb=x_1+1,\] so $g_p=x_1+1\in \F_2[x_1,y]$ is the polynomial related to $\tilde{\m G}_{p}$ by $M_\alpha$. Define $g_2=g_p$ and $f_2=x_2-g_2$. Note that since $\singpointnt_1$ consists of a single point, we have $\tilde{\m G}_{\singpointnt_1}=\tilde{\m G}_p$ and $g_{\singpointnt_1}=g_p$. Then $\alpha_p=\tilde\alpha$. Compute $\ord_{\beta}(g_p)=0$ and $\tilde\alpha=\alpha_p=(0,1)\circ_{g_{\singpointnt_1}}\alpha=((0,0,1),(g_2))$. Then \[C_{\tilde\alpha}=C_{\alpha_p}=\Spec\frac{\bar \F_2[X_1^{\pm 1},\tilde X_2^{\pm 1},\tilde Y]}{(\m{F}_{\alpha_3}, \tilde X_2+X_1+1)}.\] The normal form of $\m{F}_{\alpha_3}$ by $\tilde X_2-\m{G}$ with respect to the lexicographic order given by $X_1>\tilde X_2>\tilde Y$ is \[\m{F}_{\alpha,p}=\m F_\alpha\big(\tilde X_2+1,\tilde Y\big)=\tilde X_2^4+\tilde Y^2+\tilde Y^3.\] The Newton polygon of $\m{F}_{\alpha,p}$ is \[\begin{tikzpicture}[scale=0.5] \draw[->] (-0.6,0) -- (4.6,0) node[right] {$\tilde X_2$}; \draw[->] (0,-0.6) -- (0,3.6) node[above] {$\tilde Y$}; \tkzDefPoint(4,0){A} \tkzDefPoint(0,2){B} \tkzDefPoint(0,3){C} \tkzLabelPoint[below](A){$(4,0)$} \tkzLabelPoint[below left](B){$(0,2)$} \tkzLabelPoint[right](C){$(0,3)$} \foreach \n in {A,B,C} \node at (\n)[circle,fill,inner sep=1.5pt]{}; \draw (A) -- (B) node [midway,below, fill=none] {$\ell_4$}; \draw (B) -- (C); \draw (C) -- (A); \end{tikzpicture}\] There is only $1$ edge, denoted $\ell_4$, with normal vector in $\Z_+^2$. The normal vector of $\ell_4$ is $\beta_4=(1,2)$. It follows that $v_4=\beta_4\circ_{g_p}\beta=(0,1,2)$. Hence $\gamma_4=\beta_4\circ_{g_p}\alpha=(v_4,(g_2))$ is the corresponding element of $\Sigma_p$. Then $\Sigma_2=\{\alpha_1,\alpha_2,\tilde\alpha_3,\gamma_4\}$. To check whether $C_{\gamma_4}$ is regular, compute $\m{F}_{\gamma_4}$. The matrix $M_{\beta_4}=\left(\begin{smallmatrix}\!1\!&1\!\\\!1\!&2\!\end{smallmatrix}\right)$, attached to $\beta_4$, defines the change of variables $\tilde X_2=X_2Y$, $\tilde Y=X_2Y^2$, from which we get \begin{align*} &\tilde X_2-\tilde{\m{G}}_p=\m{F}_2,&&\m{F}_2=X_2Y+X_1+1,& \end{align*} where $\m F_2$ is the generator of the ideal $\got a_{\gamma_4}$. Therefore the curve \[C_{\gamma_4}=\Spec \frac{\bar \F_2[X_1^{\pm 1},X_2^{\pm 1},Y]}{(\m{F}_{\gamma_4})+\got a_{\gamma_4}}\] is singular, and so is the projective curve $C_2=C_{\alpha_1}\cup C_{\alpha_2}\cup C_{\tilde\alpha_3} \cup C_{\gamma_4}$. In the union we omitted $C_0$, as $C_0\subset C_{\alpha_1}$. §.§ Construction of $C_3$ Let $q$ be the singular point of $C_{\gamma_4}$. We now construct the morphism $s_2:C_3\rightarrow C_2$ resolving $\singpointnt_2=\{q\}$. Let $\gamma=\gamma_4$. Rename the variable $Y$ of $C_{\gamma}$ to $\tilde Y$. Choose $\tilde{\m{G}}_q=\bar{\m{G}}_q=X_2+1$. 
By definition \[M_{\gamma}=((1)\oplus M_{\beta_4})\cdot M_{\alpha_p}=\left(\begin{smallmatrix}1&0&0\\0&1&1\\0&1&2\end{smallmatrix}\right)\cdot \left(\begin{smallmatrix}1&0&0\\0&1&0\\0&0&1\end{smallmatrix}\right)=\left(\begin{smallmatrix}1&0&0\\0&1&1\\0&1&2\end{smallmatrix}\right),\quad M_\gamma^{-1}=\left(\begin{smallmatrix}1&0&0\\0&2&-1\\0&-1&1\end{smallmatrix}\right).\] Then $g_q=x_2^2+y\in \F_2[x_1,x_2,y]$ is the polynomial related to $\tilde{\m G}_q$ by $M_\gamma$, as \[\tilde{\m{G}}_q\lb(x_1,x_2,y)\bullet M_{\gamma}^{-1}\rb=x_2^2y^{-1}+1.\] Let $g_3=(x_1+1)^2+y$ be the Laurent polynomial in $k[x_1^{\pm 1}, y^{\pm 1}]$ congruent to $g_q$ modulo $f_2$. Compute $\ord_{v_4}(g_q)=2$. Then \[\tilde\gamma=\gamma_q=(0,1)\circ_{g_q}\gamma=((0,1,2,2),(g_2,g_3)).\] The normal form of $\m{F}_{\gamma}$ by $\tilde X_3-\tilde{\m{G}}_q$ with respect to the lexicographic order given by $X_2>\tilde X_3>\tilde Y$ is \[\m{F}_{\gamma,q}=\tilde X_3^2+(\tilde X_3+1)\tilde Y^2.\] The Newton polygon of $\m{F}_{\gamma,q}$ is \[\begin{tikzpicture}[scale=0.5] \draw[->] (-0.6,0) -- (2.6,0) node[right] {$\tilde X_3$}; \draw[->] (0,-0.6) -- (0,2.6) node[above] {$\tilde Y$}; \tkzDefPoint(2,0){A} \tkzDefPoint(0,2){B} \tkzDefPoint(1,2){C} \tkzLabelPoint[below](A){$(2,0)$} \tkzLabelPoint[left](B){$(0,2)$} \tkzLabelPoint[right](C){$(1,2)$} \foreach \n in {A,B,C} \node at (\n)[circle,fill,inner sep=1.5pt]{}; \draw (A) -- (B) node [midway,below, fill=none] {$\ell_5$}; \draw (B) -- (C); \draw (C) -- (A); \end{tikzpicture}\] There is only $1$ edge, denoted $\ell_5$, with normal vector in $\Z_+^2$. The normal vector of $\ell_5$ is $\beta_5=(1,1)$ and so the corresponding element of $\Sigma_q$ is \[\gamma_5=\beta_5\circ_{g_q}\gamma=((0,1,3,2),(g_2,g_3)).\] Hence $\Sigma_3=\{\alpha_1,\alpha_2,\tilde\alpha_3,\tilde\gamma_4,\gamma_5\}$. The matrix $M_{\beta_5}=\left(\begin{smallmatrix}\!1\!&0\!\\\!1\!&1\!\end{smallmatrix}\right)$, attached to $\beta_5$, defines the change of variables $\tilde X_3=X_3Y$, $\tilde Y=Y$ from which we get \begin{align*} &\tilde X_3-\tilde{\m{G}}_q=\m{F}_3&&\m{F}_3=X_3Y+X_2+1,& \end{align*} and $\m F_2=X_2Y+X_1+1$ is the image of the generator of $\got a_\gamma$ under $M_{\beta_5}$. Then $\got a_{\gamma_5}=(\m F_2,\m F_3)$ and \[C_{\gamma_5}=\Spec \frac{\bar \F_2[X_1^{\pm 1},X_2^{\pm 1},X_3^{\pm 1},Y]}{(\m{F}_{\gamma_5})+\got a_{\gamma_5}}\] is regular (even if $f|_{\gamma_5}$ is not separable). Therefore the curve \[C_3=C_{\alpha_2}\cup C_{\alpha_3}\cup C_{\tilde\alpha_3} \cup C_{\tilde\gamma_4} \cup C_{\gamma_5}\] is regular as well, and is a generalised Baker's model of the smooth completion of $C_0$. It is not outer regular, since $\bar C_{\gamma_5}$ has a singular point. One more step is therefore necessary (and sufficient by Proposition <ref>) to construct an outer regular generalised Baker's model. Note that in the description of $C_3$ we omitted $C_0$, as $C_0\subset C_{\alpha_1}$. Finally, the polynomials defining the charts $C_\gamma$, $\gamma\in\Sigma_3$ have coefficients in $\F_2$, so the construction of the generalised Baker's model $C_3/G_{\F_2}$ of the smooth completion of $C_{0,\F_2}$ easily follows. § BIRATIONAL SMOOTH HYPERSURFACE OF A VARIETY Let $k$ be a perfect field. Recall that an algebraic variety $Z$ over $k$, denoted $Z/k$, is a scheme $Z\rightarrow \Spec k$ of finite type. Let $Z/k$ be a geometrically reduced algebraic variety, pure of dimension $n$. Suppose either $n>0$ or $k$ infinite. 
Then there exists a separable polynomial $f\in k(x_1,\dots,x_n)[y]$, such that $k(Z)=k(x_1,\dots,x_n)[y]/(f)$. Let $Z_1,\dots, Z_m$ be the irreducible components of $Z$. From <cit.>, <cit.> it follows that $k(Z)\simeq \bigoplus_{i=1}^mk(Z_i)$. Let $i=1,\dots,m$. As $Z$ is pure, $\dim Z_i=\dim Z=n$. Since $Z_i$ is geometrically reduced and integral, it follows from <cit.> that the field of functions $k(Z_i)$ is a finite separable extension of a purely trascendental extension $k(x_1,\dots,x_n)$. Hence there exists a monic irreducible separable polynomial $f_i\in k(x_1,\dots,x_n)[y]$ such that \[k(Z_i)\simeq k(x_1,\dots,x_n)[y]/(f_i).\] We want to show that we can inductively choose the polynomials $f_i$ above such that $\gcd(f_i,f_j)=1$ for all $j<i$. Suppose we have fixed $f_1,\dots,f_{i-1}$ for some $i\geq 1$, and let $g_i\in k(x_1,\dots,x_n)[y]$ be any monic irreducible polynomial such that $k(Z_i)\simeq k(x_1,\dots,x_n)[y]/(g_i)$. Since $k(x_1,\dots,x_n)$ is infinite, there exists $c\in k(x_1,\dots,x_n)$ such that $\tau_c g_i\neq f_j$ for any $j<i$, where $\tau_c g_i$ is the polynomial defined by $\tau_c g_i(y)=g_i(y-c)$. But $\tau_c g_i$ and $f_j$ are irreducible monic polynomials, so $\gcd(\tau_c g_i,f_j)=1$. Moreover, $\tau_c g_i$ is separable and \[k(x_1,\dots,x_n)[y]/(g_i)\simeq k(x_1,\dots,x_n)[y]/(\tau_c g_i)\] via the map taking $y\mapsto y-c$. Then choose $f_i=\tau_cg_i$. Thus assume $\gcd(f_i, f_j)=1$ for any $i,j=1,\dots,m$. From the Chinese Remainder Theorem it follows that \[k(Z)\simeq \bigoplus_{i=1}^mk(Z_i)\simeq \bigoplus_{i=1}^m\frac{k(x_1,\dots,x_n)[y]}{(f_i)}\simeq \frac{k(x_1,\dots,x_n)[y]}{(f)},\] where $f=\prod_{i=1}^mf_i$. The following result is a variant of <cit.>. Let $Z/k$ be a geometrically reduced, separated algebraic variety, pure of dimension $n$. Suppose either $n>0$ or $k$ infinite. Then there exists a smooth affine hypersurface $V$ in $\A_k^{n+1}$ birational to $Z$. Lemma <ref> shows that there exists a separable polynomial $f\in k(x_1,\dots,x_n)[y]$ such that $k(Z)\simeq k(x_1,\dots,x_n)[y]/(f)$. Rescaling $f$ by an element of $k(x_1,\dots,x_n)$ if necessary, we can assume that $f$ is a polynomial in $k[x_1,\dots,x_n,y]$ with no irreducible factors in $k[x_1,\dots,x_n]$. Hence the total quotient ring of $k[x_1,\dots,x_n,y]/(f)$ is $k(x_1,\dots,x_n)[y]/(f)$. It follows that there exists a birational map $Z\-->Z_0$, where $Z_0$ is the affine hypersurface defined by $f(x_1,\dots,x_n,y)=0$. Let $A=k[x_1,\dots,x_n,y]/(f)$ be the coordinate ring of $Z_0$. If $Z_0$ is smooth then we are done. Suppose $Z_0$ is not smooth. Then there exists $h\in J\cap k[x_1,\dots,x_n]$, where $J\subset k[x_1,\dots,x_n,y]$ is the ideal defining the singular locus of $Z_0$. The rest of the proof follows the spirit of <cit.>. Expand $f=\sum_{i=0}^dc_iy^i$, where $c_i\in k[x_1,\dots,x_n]$, and $c_0\neq 0$. Via the change of variable $(hc_0^2)y'=y$ we get $f=\sum_{i=0}^d c_i(hc_0^2)^i(y')^i$. Dividing by $c_0$, we define $f'=1+\sum_{i=1}^d c_ic_0^{i-1}(hc_0y')^i$ and $Z_0'=\Spec k[x_1,\dots,x_n,y']/(f')$. Then via the homomorphism $y\mapsto (hc_0^2)y'$ we see that $Z_0'$ is isomorphic to the smooth dense open subvariety $D(hc_0)$ of $Z_0$. Thus $Z_0'$ is a smooth affine hypersurface in $\A_k^{n+1}$ birational to $Z$. If a smooth affine curve $C_0/k$ is birational to a smooth projective curve $C/k$, then $C$ is isomorphic to the smooth completion of $C_0$. Equivalently, there exists an open immersion with dense image $C_0\hookrightarrow C$. 
Since $C$ is complete and $C_0$ is smooth and separated (as affine), the birational map $C_0\--> C$ uniquely extends to a separated birational morphism $\iota: C_0\rightarrow C$. Denoting by $\tilde C$ the smooth completion of $C_0$, note that $\iota$ decomposes into the canonical open immersion $C_0\hookrightarrow \tilde C$ and the morphism $\tilde\iota:\tilde C\rightarrow C$ extending the rational map given by $\iota$. Therefore it suffices to prove that $\tilde\iota$ is an isomorphism. First note that $\tilde \iota$ is proper by <cit.> since $\tilde C$ and $C$ are complete. Furthermore, both $\tilde C$ and $C$ are smooth, so they are geometrically reduced and have irreducible connected components. For any connected component $\tilde U$ of $\tilde C$ there is a connected component $U$ of $C$ such that $\tilde \iota$ restricts to a morphism $\iota_U:\tilde U\rightarrow U$. Note that $\iota_U$ is a proper birational morphism, as $\tilde U$ is a closed subscheme of $\tilde C$ and $\tilde\iota$ is proper birational. Since both $\tilde U$ and $U$ are integral and smooth of dimension $1$, and so normal, <cit.> implies that $\iota_{U}:\tilde U\rightarrow U$ is an isomorphism. It follows that $\tilde \iota: \tilde C\rightarrow C$ is an isomorphism. Every smooth projective curve $C/k$ has a dense affine open which is isomorphic to a smooth plane curve. From Theorem <ref> there exists a smooth affine plane curve $C_0$ birational to $C$. Then Lemma <ref> concludes the proof.
§ EXISTENCE OF A BAKER'S MODEL
Let $k$ be a perfect field. We say that a curve $C/k$ is nice if it is geometrically connected, smooth and projective over $k$. In this appendix we slightly extend some results in <cit.> for studying the existence of a Baker's model of a nice curve. Define the index of a nice curve $C/k$ to be the smallest degree of a field extension $K/k$ such that $C(K)\neq \varnothing$. Let $C$ be a nice curve of genus $1$. Then $C$ admits a Baker's model if and only if $C$ has index at most $3$. Suppose $C$ has index at most $3$. Then by <cit.> the curve $C$ is nondegenerate. Hence $C$ has an outer regular Baker's model. Suppose now that $C$ admits a Baker's model. Then there exists a smooth curve $C_0\hookrightarrow C$ defined in $\G_{m,k}^2$ by $f\in k[x^{\pm 1},y^{\pm 1}]$ such that the completion $C_1$ of $C_0$ with respect to the Newton polygon $\Delta$ of $f$ is regular. We follow the spirit of the proof of <cit.>. Since the arithmetic genus of $C$ is $1$, there is exactly $1$ interior integer point of $\Delta$. There are $16$ equivalence classes of integral polytopes with this condition (see <cit.>). Then without loss of generality we can assume $\Delta$ is in this list. Note that there is an edge $\ell\subseteq\partial\Delta$ such that $\#(\ell\cap\Z^2)\leq 4$. Let $v$ be the normal vector of $\ell$ and $\alpha=(v,())\in\Sigma_1$. Then $f|_{\alpha}$ has at most $3$ roots in $\bar k^\times$ by Proposition <ref>. Therefore the splitting field $K$ of $f|_{\alpha}$ has degree $\leq 3$ over $k$. Furthermore, by definition $C_1$ has at least one point defined over $K$ visible on $C_\alpha$. Thus $C_1$, and so $C$, has index at most $3$. The lemma above implies that there are nice curves which do not have a Baker's model. Indeed, if $k$ is a number field, <cit.> proves there exist nice curves of genus $1$ of any index. Let $C$ be a nice curve of genus $g\leq 3$. If $k$ is finite or $C(k)\neq\varnothing$ then $C$ admits a Baker's model.
The first theorem in <cit.> and <cit.> show $C$ is nondegenerate except when $C$ is birational to a curve $C_0$ given in $\G_{m,k}^2$ by \begin{align*} f^{(2)}&=(x+y)^4+(xy)^2+xy(x+y+1)+(x+y+1)^2,& &\text{with }k=\F_2,\text{ or}\\ f^{(3)}&=(x^2+1)^2+y-y^3, & &\text{with }k=\F_3. \end{align*} Recall that if $C$ is nondegenerate then it has an outer regular Baker's model. Therefore it suffices to show that in the two exceptional cases above the completion $C_1$ of the curve $C_0$ with respect to its Newton polygon is smooth. We use the notation of <ref>. Suppose $k=\F_2$ and $C_0: f^{(2)}=0$ over $\G_{m,\F_2}^2$. Note that $C_0$ is smooth. Denote $f=f^{(2)}$. The Newton polygon $\Delta$ of $f$ is \[\begin{tikzpicture}[scale=0.4] \draw[->] (-0.6,0) -- (4.6,0) node[right] {$x$}; \draw[->] (0,-0.6) -- (0,4.6) node[above] {$y$}; \tkzDefPoint(4,0){A} \tkzDefPoint(0,4){B} \tkzDefPoint(2,2){C} \tkzDefPoint(2,1){D} \tkzDefPoint(1,2){E} \tkzDefPoint(1,1){F} \tkzDefPoint(2,0){G} \tkzDefPoint(0,2){H} \tkzDefPoint(0,0){O} \tkzLabelPoint[below](A){$(4,0)$} \tkzLabelPoint[left](B){$(0,4)$} \foreach \n in {O,A,B,C,D,E,F,G,H} \node at (\n)[circle,fill,inner sep=1.5pt]{}; \draw (A) -- (O) node [midway,below, fill=none] {$\ell_1$}; \draw (O) -- (B) node [midway,left, fill=none] {$\ell_2$}; \draw (B) -- (A) node [midway,right, above, fill=none] {$\ell_3$}; \end{tikzpicture}\] where the normal vectors of the edges $\ell_1$, $\ell_2$, $\ell_3$ of $\Delta$ are respectively $\beta_1=(0,1)$, $\beta_2=(1,0)$, $\beta_3=(-1,-1)$. Then by fixing $\delta_{\beta_1}=(1,0)$, $\delta_{\beta_2}=(-1,-1)$, $\delta_{\beta_3}=(0,1)$ we have \[ \] for every $i=1,2,3$. Note that the points on $Y=0$ are regular points of $C_{\ell_i}$. Thus $C_{\ell}$ is smooth for any edge $\ell$ of $\Delta$ and so $C_1$ is smooth. Suppose $k=\F_3$ and $C_0: f^{(3)}=0$ over $\G_{m,\F_3}^2$. Note that $C_0$ is smooth. Denote $f=f^{(3)}$. The Newton polygon $\Delta$ of $f$ is \[\begin{tikzpicture}[scale=0.4] \draw[->] (-0.6,0) -- (4.6,0) node[right] {$x$}; \draw[->] (0,-0.6) -- (0,3.6) node[above] {$y$}; \tkzDefPoint(4,0){A} \tkzDefPoint(0,3){B} \tkzDefPoint(2,0){C} \tkzDefPoint(0,1){D} \tkzDefPoint(0,0){O} \tkzLabelPoint[below](A){$(4,0)$} \tkzLabelPoint[left](B){$(0,4)$} \foreach \n in {O,A,B,C,D} \node at (\n)[circle,fill,inner sep=1.5pt]{}; \draw (A) -- (O) node [midway,below, fill=none] {$\ell_1$}; \draw (O) -- (B) node [midway,left, fill=none] {$\ell_2$}; \draw (B) -- (A) node [midway,right, above, fill=none] {$\ell_3$}; \end{tikzpicture}\] where the normal vectors of the edges $\ell_1$, $\ell_2$, $\ell_3$ of $\Delta$ are respectively $\beta_1=(0,1)$, $\beta_2=(1,0)$, $\beta_3=(-3,-4)$. We can choose $\delta_{\beta_1}=(1,0)$ so that \[ \] The points on $Y=0$ are regular points of $C_{\ell_1}$ and so $C_{\ell_1}$ is smooth. Furthermore, up to a power of $X$ the polynomials $f|_{\ell_2}$ and $f|_{\ell_3}$ equal $X^3+X^2-1$ and $-X+1$ respectively. It follows that the charts $C_{\ell_2}$ and $C_{\ell_3}$ of $C_1$ are regular. Thus $C_1$ is smooth.
# Energy-Based Open-World Uncertainty Modeling for Confidence Calibration

Yezhen Wang1 Bo Li1 Tong Che2 Kaiyang Zhou3 Ziwei Liu3 Dongsheng Li1 1Microsoft Research Asia 2MILA 3S-Lab, Nanyang Technological University Equal contribution. Ordered by dice rolling. Correspondence to <EMAIL_ADDRESS>

###### Abstract

Confidence calibration is of great importance to the reliability of decisions made by machine learning systems. However, discriminative classifiers based on deep neural networks are often criticized for producing overconfident predictions that fail to reflect the true likelihood of correct classification. We argue that such an inability to model uncertainty is mainly caused by the closed-world nature of softmax: a model trained by the cross-entropy loss will be forced to classify input into one of $K$ pre-defined categories with high probability. To address this problem, we for the first time propose a novel $K$+1-way softmax formulation, which incorporates the modeling of open-world uncertainty as the extra dimension. To unify the learning of the original $K$-way classification task and the extra dimension that models uncertainty, we 1) propose a novel energy-based objective function, and moreover, 2) theoretically prove that optimizing such an objective essentially forces the extra dimension to capture the marginal data distribution. Extensive experiments show that our approach, Energy-based Open-World Softmax (EOW-Softmax), is superior to existing state-of-the-art methods in improving confidence calibration.

## 1 Introduction

Given the considerable success achieved so far by deep neural networks (DNNs), one might wonder whether DNN-based systems can be readily deployed to solve real-world problems. On the one hand, DNNs can achieve high accuracy if trained with large-scale datasets [15]. But on the other hand, contemporary DNNs are often criticized for producing overconfident predictions [14], which fail to represent the true likelihood of being correct. This has raised concerns over the safety and reliability of using machine learning systems in real-world scenarios. Having a confidence-calibrated system is critical. For instance, in healthcare applications, the intelligence system should produce low-confidence predictions when it is uncertain about the input, say when the input differs significantly from the training data, so that the decision-making process can be transferred to human doctors for more accurate diagnosis and safer handling. Research on confidence calibration for DNNs has received increasing attention in recent years [14, 27, 19, 23, 27]. Since most classifiers are based on softmax, a common practice to improve calibration is to insert a temperature scaling parameter into the softmax function and adjust it on a validation set [14]. Besides, methods like label smoothing [36, 28], which essentially combines the one-hot ground-truth vector with a uniform distribution, have also been shown effective in improving calibration.

Figure 1: Comparison between (a) the conventional softmax and (b) our proposed Energy-based Open-World softmax (EOW-Softmax). Our new formulation introduces an extra dimension to model uncertainty, which is supposed to produce high scores when the input deviates from the training data distribution. In this way, the original $K$ classification scores can be well calibrated.
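As a purely illustrative sketch of the idea in Figure 1, and not of the energy-based training objective developed later in the paper, the extra uncertainty dimension can be realized by simply widening the final classification layer to produce $K+1$ logits; how that extra logit is supervised is exactly what the proposed objective addresses.

```python
import torch
import torch.nn as nn

class OpenWorldHead(nn.Module):
    """Classification head with K class logits plus one extra logit,
    as sketched in Figure 1(b). Names and structure are illustrative."""

    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_classes + 1)

    def forward(self, feats):
        probs = torch.softmax(self.fc(feats), dim=-1)
        class_scores = probs[:, :-1]  # the original K classification scores
        # Extra score, intended (after training with the energy-based
        # objective) to be high for inputs far from the training data.
        uncertainty = probs[:, -1]
        return class_scores, uncertainty
```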
However, most existing confidence calibration methods have overlooked the underlying problem that causes neural network classifiers to generate overconfident predictions, i.e., the inability to model _uncertainty_ in the output probabilities. We argue that the culprit behind this problem is the closed-world nature of softmax [2, 34]. This is easy to understand: during training the model is asked to classify an input into one of $K$ pre-defined categories with high probability (due to the cross-entropy loss), and as such, the model has no choice but to assign one of the $K$ categories to any unseen data, likely with high probability as well.

A potential countermeasure is to adopt a $K+1$-way formulation where the new category can represent uncertainty about the input data. In this way, the $K$ classification scores might be better regularized, and hence better calibrated. However, learning such a classifier is challenging as we do not have access to data carrying the $(K+1)$-th label, and thus lack supervision to teach the network when to give low/high confidence. Furthermore, designing the extra dimension is a non-trivial task as it is directly linked to the formulation of the learning objective. It is also unclear how such a dimension should be constructed, e.g., whether to design it as another logit produced by the same network or as an independent branch that regresses to uncertainty.

In this paper, we propose _Energy-based Open-World Softmax_ (EOW-Softmax), a novel approach that introduces a $K+1$-way softmax based on energy functions [25]. Specifically, the neural network classifier is designed to produce $K+1$ logits, where the first $K$ dimensions encode the scores for the original $K$-way classification task, while the extra dimension aims to model open-world uncertainty. See Figure 1 for a comparison between a model based on the conventional softmax and one based on our EOW-Softmax. Besides, we resort to an energy-based $K+1$-way classification objective function to unify the learning of the $K$-way classification task and the uncertainty modeling. Furthermore, we theoretically justify that optimizing the proposed objective function essentially forces the summation of the original $K$ softmax scores (out of the $K+1$ scores in total) to be directly proportional to the marginal density $p(x)$, hence explaining why our EOW-Softmax helps calibrate a model's confidence estimates.

The contributions of this paper are summarized as follows. 1) First, we overcome the closed-world softmax problem by transforming the conventional $K$-way softmax into a novel $K+1$-way formulation, where the extra dimension is designed to model open-world uncertainty. 2) Second, a novel energy-based objective function is developed to unify the learning of the original $K$-way classification task and the uncertainty modeling. 3) A theoretical proof is further provided to explain why our learning objective can help the network capture uncertainty. 4) Finally, we conduct extensive experiments on standard benchmark datasets to demonstrate that our method can lead to a better calibrated model compared with other state-of-the-art methods.

## 2 Related Works

Confidence Calibration With the emergence of deep learning technologies and their wide successes, concerns over whether they can be reliably deployed in practice have also arisen. This is because researchers have found that contemporary deep neural networks (DNNs) often produce overconfident predictions [14], even on input images that are totally unrecognizable to humans [32].
Many approaches for improving confidence calibration have been developed. A widely used method is temperature scaling [14, 27, 19], which inserts a scaling parameter to the softmax formulation (called ‘temperature’) and adjusts it in a validation set with a goal to ‘soften’ the softmax probabilities. Regularization methods, such as label smoothing [36] and Mixup [37], have also been demonstrated effective in improving calibration. In particular, label smoothing modifies the ground-truth labels by fusing them with a uniform distribution, essentially forcing neural networks to produce ‘more flattened’ probabilities; whereas Mixup is a data augmentation method that randomly mixes two instances at both the image and label space, with a byproduct effect of improving calibration. Bayesian methods have also been explored for calibration. For instance, Monte Carlo Dropout [7] applies dropout in both training and testing; Deep Ensembles [23] uses as prediction the output averaged over an ensemble of models. Adding adversarial perturbations to the input has been found effective in smoothing the output probabilities [23, 27]. In [26], a GAN model [11] is trained to generate out- of-distribution (OOD) data and the classifier is encouraged to produce low- confidence probabilities on these data. Such an idea has also been investigated in [16] where adversarial perturbations are utilized to synthesize OOD data. In [12], a Joint Energy-based Model (JEM) is proposed to improve calibration by learning the joint distribution based on energy functions [25]. A recent work [38] suggests that calibrating confidence across multiple domains is beneficial to OOD generalization [42]. Studies on why neural networks produce overconfident predictions have also been covered in the literature. In [16], the authors suggest that ReLU neural networks are essentially piecewise linear functions, thus explaining why OOD data can easily cause softmax classifiers to generate highly confident output. In [30], the authors identify that data variance and model curvature cause most generative models to assign high density to OOD data. The authors in [34] point out that the overconfidence issue is related to the closed-world assumption in softmax, and design a distance-based one-vs-all (OvA) classifier as the countermeasure. Two works related to ours are JEM [12] and the OvA classifier [34]. Compared with JEM, our approach is much easier to train because we only need to optimize a _single_ classification objective to achieve both discriminative classifier learning and generative modeling (see Sec. 3.3), while JEM has to simultaneously optimize two separate objectives. Moreover, JEM has ignored the closed-world softmax issue, which is addressed in this work with an augmented softmax. Compared with the OvA classifier, our approach is significantly different: we endow the classifier with the ability to model open-world uncertainty, which is attributed to the extra dimension in softmax learned via a novel energy-based objective function to capture the marginal data distribution; in contrast, the OvA classifier converts the $K$-way classification problem into multiple binary classification problems. Energy-Based Models (EBMs) have been widely used in the area of generative modeling [41, 8, 1, 4]. The basic idea in EBMs is to learn dependencies between variables (e.g., images and labels) represented using energy functions; and to assign low energies to correct configurations while give high energies to incorrect ones [25]. 
However, training EBMs, especially on high-dimensional data like images, has been notoriously hard due to sampling issues [21, 13]. A widely used sampler is Stochastic Gradient Langevin Dynamics (SGLD) [39], which injects noises to the parameter update and anneals the stepsize during the course of training. Following prior work [33, 6, 12], we also leverage SGLD to optimize our energy-based objective function. Figure 2: Model architecture of our approach _EOW-Softmax_. The extra dimension introduced in the augmented softmax (dashed) is learned using an energy-based function to model open-world uncertainty, such that it can assign high uncertainty scores to abnormal input far away from the training data distribution, which in turn lower the classifier’s confidence on the original $K$-way classification task. Note that the sampling in SGLD is performed on the latent (feature) space rather than the input (image) space. ## 3 Methodology According to [34], we argue that the culprit for causing the overconfidence problem in most neural network classifiers’ output is the closed-world nature in softmax. As a result, a model trained by the cross-entropy loss has to pick one of $K$ pre-defined categories with high confidence. To overcome this problem, we propose _Energy-based Open-World Softmax_ , or EOW-Softmax, to regularize the $K$-way classification scores in such a way that allows their confidence to be calibrated. The main idea in EOW-Softmax is to re-formulate the original $K$-way classification task as a novel $K$+1-way classification problem, where the extra dimension is designed to model _open-world uncertainty_. To learn such a $K$+1-way classifier in an end-to-end manner, we propose a novel energy-based objective function, which essentially forces the extra dimension to be negatively correlated to the marginal data distribution. In this way, the $K$ classification scores are automatically calibrated to be less confident over input fallen beyond the training data distribution. See Figure 2 for an overview of our model architecture. The rest of this section are organized as follows. In Sec. 3.1, we provide a brief background on energy-based models (EBMs), which are required by our approach to construct the objective function. In Sec. 3.2, we discuss in detail the design of EOW- Softmax. Sec. 3.3 gives a theoretical insight on why EOW-Softmax can help calibrate a model’s confidence estimates. ### 3.1 A Brief Background on EBMs The main building block in EBMs is an energy function $E_{\theta}:\mathbb{R}^{D}\to\mathbb{R}$ (parameterized by $\theta$), which aims to map a $D$-dimensional datapoint to a scalar.111When the input is an image, $D$ can be understood as the length of the flattened tensor. The learning is designed in such a way that $E_{\theta}$ can assign low energies to observed configurations of variables while give high energies to unobserved ones [25]. With $E_{\theta}$, any probability density $p(x)$ for $x\in\mathbb{R}^{D}$ in an EBM can be written as $p_{\theta}(x)=\frac{\exp(-E_{\theta}(x))}{Z(\theta)},$ (1) where $Z(\theta)=\int_{x}\exp(-E_{\theta}(x))$ denotes the normalizing constant, also known as the partition function. An EBM can be represented by using any function as long as the function can generate a single scalar given some input $x$. In this work, we assume $E_{\theta}$ is represented by a deep neural network. People usually adopt gradient estimation to optimize an EBMs [21, 13] and sample data from it by Markov Chain Monte Carlo (MCMC) methods [9, 10, 18, 39]. 
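As a minimal illustration of Eq. (1) (our sketch; the architecture is an arbitrary assumption), an energy function can be any network that maps an input to a scalar, and the unnormalized log-density is simply its negation. The partition function $Z(\theta)$ is never computed explicitly, which is exactly why the gradient-estimation and MCMC techniques cited above are needed.

```python
# Illustrative energy-based model (EBM): E_theta maps an input to a single scalar
# energy, and Eq. (1) defines p_theta(x) = exp(-E_theta(x)) / Z(theta).  Z(theta)
# is intractable and never computed explicitly.
import torch.nn as nn

class EnergyNet(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),            # single scalar energy per input
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)       # E_theta(x), shape (batch,)

def unnormalized_log_density(energy_net, x):
    return -energy_net(x)                    # equals log p_theta(x) + log Z(theta)
```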
### 3.2 Energy-Based Open-World Softmax Open-World Softmax As discussed before, conventional softmax-based classifiers lack the ability to model open-world uncertainty. To address this problem, we design a neural network classifier to output probabilities on $K+1$ categories (see Figure 2), with the $K$+1-th score representing open-world uncertainty—the network should be able to produce high uncertainty scores to abnormal input, which in turn can lower the confidence on the original $K$ categories’ prediction. Let $f_{\theta}:\mathbb{R}^{D}\to\mathbb{R}^{K+1}$ be our neural network model (excluding the softmax layer), which produces $K+1$ logits, and $f_{\theta}(x)[i]$ the $i$-th logit given input $x$, with $i\in\\{1,...,K,K+1\\}$. The output probabilities can then be obtained by passing these $K+1$ logits to a softmax normalization layer, i.e. $h_{\theta}(x)[i]=\frac{\exp(f_{\theta}(x)[i])}{\sum_{j=1}^{K+1}\exp(f_{\theta}(x)[j])},$ (2) where $h_{\theta}$ is the combination of the neural network $f_{\theta}$ and the softmax normalization layer. Energy-Based Learning Objective Now the question is how to design a learning objective that allows $h_{\theta}(x)[K+1]$ to encode uncertainty? Our idea here is to associate the score of $h_{\theta}(x)[K+1]$ to the marginal data distribution. Intuitively, when the input comes from within the training data distribution $p(x)$, the model is supposed to be confident in its decision, and therefore, $h_{\theta}(x)[K+1]$ should be low (conversely, $\sum_{i=1}^{K}h_{\theta}(x)[i]$ should be high). If the input deviates from the training data distribution, the model should become uncertain about whether its decision is correct. In this case, $h_{\theta}(x)[K+1]$ should be high to indicate a higher level of uncertainty, which naturally forces $\sum_{i=1}^{K}h_{\theta}(x)[i]$ to stay low (due to the softmax normalization). However, directly training $h_{\theta}(x)[K+1]$ to capture the marginal distribution $p(x)$ (i.e. generative modeling) is difficult [30]. Instead, we propose a novel learning objective with the help of EBMs [25]. First, we define our energy function as $E_{\theta}(x)=\log h_{\theta}(x)[K+1].$ (3) Then, our energy-based objective function is defined as $\min\limits_{\theta}\mathbb{E}_{p(x)}\bigg{[}-\log h_{\theta}(x)[y]\bigg{]}+\lambda\mathbb{E}_{p_{\bar{\theta}}(x)}\bigg{[}-\log h_{\theta}(x)[K+1]\bigg{]},$ (4) where $\lambda>0$ is a hyper-parameter; the first term is the maximum log- likelihood objective for the $K$-way classification task using the ground- truth label $y$; the second term can also be seen as maximum log-likelihood objective—for recognizing data sampled from $p_{\bar{\theta}}(x)$. Note that $p_{\bar{\theta}}(x)$ denotes the model distribution with frozen parameters of $p_{\theta}(x)$ of current iteration ($p_{\bar{\theta}}(x)$ will always be the same as $p_{\theta}(x)$ but without the gradient calculation of $\theta$ since parameter $\theta$ is frozen here and $\bar{\theta}$ should be regarded as a constant in each updates). We will show later in Sec. 3.3 that optimizing Eq. (4) can actually lead the summation of rest $K$ softmax scores of original classes to be directly proportional to the marginal density $p(x)$, which in turn can make the $K+1$-th softmax score be negatively correlated to $p(x)$. SGLD-Based Optimization We approximate the expectation in the second term in Eq. (4) using a sampler based on Stochastic Gradient Langevin Dynamics (SGLD) [39]. 
Specifically, the SGLD sampling process follows $z_{t+1}=z_{t}-\frac{\alpha}{2}\frac{\partial E_{\theta}(z_{t})}{\partial z_{t}}+\sqrt{\alpha}\epsilon,\quad\epsilon\sim\mathcal{N}(0,I),$ (5) where $t$ denotes the SGLD iteration, $\alpha$ the step-size, and $\epsilon$ a random noise sampled from a normal distribution. In practice, $\alpha$ is usually fixed as a constant. Most SGLD-based methods draw samples from the image space. This forces the information to flow through the entire neural network, which is computationally expensive. Inspired by [4], we choose to draw samples from the latent space (see Figure 2). Therefore, $z$ in Eq. (5) represents features rather than an image. Such a design significantly accelerates the training since the information only goes partially through the network model, which allows much deeper architectures such as ResNet-101 [15] to fit into limited resources. Moreover, the latent space is typically smoother than the image space [3], which facilitates the estimate of gradients. ### 3.3 Theoretical Insight In order to prove that our objective can force $h_{\theta}(x)[K+1]$ to be negatively correlated with $p(x)$, i.e. representing uncertainty, we tend to show that in theory, optimizing our objective in Eq. (4) is equivalent to minimize the KL divergence between $p(x)$ and another EBM-modeled distribution $q_{\theta}(x)$222It is worth noting that this $q_{\theta}(x)$ is not the modeled EBMs $p_{\theta}(x)$ in Sec. 3.2., where $q_{\theta}(x)$ is defined by energy function $E^{\prime}_{\theta}(x)=-\log\sum\limits_{i=1}^{K}h_{\theta}(x)[i].$ (6) To this end, we introduce an extra objective $\min\limits_{\theta}\mathbb{E}_{p(x)}\bigg{[}-\log h_{\theta}(x)[y]\bigg{]}+\mathbb{E}_{q_{\bar{\theta}}(x)}\bigg{[}\log\sum\limits_{i=1}^{K}h_{\theta}(x)[i]\bigg{]},$ (7) Similar to Eq. (4), here $\bar{\theta}$ means the parameters are frozen. We will show that optimizing Eq. (4) essentially optimize Eq. (7) and optimizing Eq. (7) is an equivalent of $\min D_{KL}(p(x)||q_{\theta}(x))$ in following Proposition 1 and Theorem 1, respectively. ###### Proposition 1. Given two EBMs $p_{\bar{\theta}}(x)$ and $q_{\bar{\theta}}(x)$ with energy functions defined in Eqs. (3) and (6) respectively, the optimization of Eq. (4) is actually equivalent to optimize a combination of one $K$-way classification objective and Eq. (7) with some suitable coefficient $\mu$. ###### Proof. Since the first optimization term in Eq. (7) is identical with Eq. (4) as well as the objective of maximum log-likelihood of $K$-way classification problems, we only need to consider the second terms in both equations and prove that they are equivalent to each other. Specifically, for the gradient of the second term of Eq. (7), we have333The equality between Eqs. (9) and (10) holds because $q_{\bar{\theta}}(x)=q_{\theta}(x)=\frac{\sum\limits_{i=1}^{K}h_{\theta}(x)[i]}{Z^{\prime}(\theta)}$ and $\frac{\partial}{\partial\theta}\log\sum\limits_{i=1}^{K}h_{\theta}(x)[i]=\frac{\frac{\partial}{\partial\theta}\sum\limits_{i=1}^{K}h_{\theta}(x)[i]}{\sum\limits_{i=1}^{K}h_{\theta}(x)[i]}$. 
$\displaystyle\frac{\partial}{\partial\theta}\ \mathbb{E}_{q_{\bar{\theta}}(x)}\Big{[}\log\sum\limits_{i=1}^{K}h_{\theta}(x)[i]\Big{]}$ (8) $\displaystyle=\ $ $\displaystyle\int_{x}q_{\bar{\theta}}(x)\frac{\partial}{\partial\theta}\log\sum\limits_{i=1}^{K}h_{\theta}(x)[i]$ (9) $\displaystyle=\ $ $\displaystyle\int_{x}\frac{\sum\limits_{i=1}^{K}h_{\theta}(x)[i]}{Z^{\prime}(\theta)}\cdot\frac{\frac{\partial}{\partial\theta}\sum\limits_{i=1}^{K}h_{\theta}(x)[i]}{\sum\limits_{i=1}^{K}h_{\theta}(x)[i]}$ (10) $\displaystyle=\ $ $\displaystyle\frac{1}{Z^{\prime}(\theta)}\int_{x}\frac{\partial}{\partial\theta}\sum\limits_{i=1}^{K}h_{\theta}(x)[i]$ (11) $\displaystyle=\ $ $\displaystyle-\frac{1}{Z^{\prime}(\theta)}\int_{x}\frac{\partial}{\partial\theta}h_{\theta}(x)[K+1]$ (12) $\displaystyle=\ $ $\displaystyle-\frac{Z(\theta)}{Z^{\prime}(\theta)}\int_{x}\frac{h_{\theta}(x)[K+1]}{Z(\theta)}\cdot\frac{\frac{\partial}{\partial\theta}h_{\theta}(x)[K+1]}{h_{\theta}(x)[K+1]}$ (13) $\displaystyle=\ $ $\displaystyle-\frac{Z(\theta)}{Z^{\prime}(\theta)}\int_{x}p_{\bar{\theta}}(x)\frac{\partial}{\partial\theta}\log h_{\theta}(x)[K+1],$ (14) where $Z^{\prime}(\theta)$ and $Z(\theta)$ represents the partition functions of $q_{\theta}(x)$ and $p_{\theta}(x)$ respectively. If we use $\mu$ to denote $\frac{Z(\theta)}{Z^{\prime}(\theta)}$, we can restate the Eq. (14) as $\mu\frac{\partial}{\partial\theta}\mathbb{E}_{p_{\bar{\theta}}(x)}\bigg{[}-\log h_{\theta}(x)[K+1]\bigg{]},$ which is exactly the gradient of Eq. (4)’s second term. As a result, the objective of Eq. (4) and of Eq. (7) are same under a suitable coefficient $\mu$ in each iteration. Further more, remembering that the first term of Eq. (4) is an equivalent of the objective of maximum log-likelihood of $K$-way classification problems, we consequently conclude that optimizing our objective of Eq. (4) essentially optimize Eq. (7) and a $K$-way classification objective. ∎ According to Proposition 1, we know that our learning objective of Eq. (4) can optimize the discriminative $K$-way classification objective and the generative modeling objective in a unified discriminative objective, which has never been explored in existing confidence calibration work. Moreover, if we can further prove that optimizing Eq. (7) is de facto an equivalent of minimizing the KL-divergence between $p(x)$ and the distribution $q_{\theta}(x)$, then our objective of Eq. (4) could minimize this KL- divergence either due to Proposition 1. To prove that, we need to recur to Lemma 1 based on [21], which shows how to efficiently compute the gradient of the KL divergence between the real distribution and an approximated distribution modeled via EBMs. ###### Lemma 1. Given a training dataset with data $\\{x\\}$ sampled from distribution $r(x)$, and an energy model distribution $r_{\phi}(x)$ parameterized by $\phi$ and associated to an energy function $E_{\phi}(x)$, the objective of minimizing $D_{KL}(r(x)||r_{\phi}(x))$ can be optimized by descending the following gradient w.r.t $\phi$, $\mathop{\mathbb{E}}\limits_{x^{+}\sim r(x)}\Bigg{[}\frac{\partial E_{\phi}(x^{+})}{\partial\phi}\Bigg{]}-\mathop{\mathbb{E}}\limits_{x^{-}\sim r_{\phi}(x)}\Bigg{[}\frac{\partial E_{\phi}(x^{-})}{\partial\phi}\Bigg{]}.$ (15) The proof can refer [21]. By descending the gradient in Eq. (15), the first term decreases the energy of samples $x^{+}$ drawn from the data distribution, while the second term increases the energy of samples $x^{-}$ drawn from the energy model distribution. 
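The two terms of our objective in Eq. (4) play exactly the roles of the positive and negative phases in Eq. (15). A simplified sketch of one training update is shown below; it is illustrative only—the SGLD sampler follows Eq. (5) with the constants from Sec. 4.1, while initializing the chain from the current batch's latent features and the backbone/head split are our assumptions, not details from the paper.

```python
# Simplified sketch of one training update for Eq. (4) (not the authors' exact code).
# `backbone` maps inputs to latent features z; `head` maps z to K+1 logits.
import torch
import torch.nn.functional as F

def sgld_sample(head, z_init, n_steps=100, step_size=2.0, noise_std=1e-3):
    """Latent-space SGLD (Eq. (5)) with energy E_theta(z) = log h_theta(z)[K+1] (Eq. (3))."""
    z = z_init.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        energy = head(z).log_softmax(dim=-1)[:, -1].sum()      # sum of log h_theta(z)[K+1]
        grad = torch.autograd.grad(energy, z)[0]
        z = (z - 0.5 * step_size * grad
             + noise_std * torch.randn_like(z)).detach().requires_grad_(True)
    return z.detach()

def eow_training_step(backbone, head, optimizer, x, y, lam=0.1):
    z = backbone(x)
    log_h = head(z).log_softmax(dim=-1)                  # log h_theta(x), shape (B, K+1)
    cls_term = F.nll_loss(log_h, y)                      # first term of Eq. (4): -log h_theta(x)[y]
    z_neg = sgld_sample(head, z.detach())                # negatives from the frozen model distribution
    unc_term = -head(z_neg).log_softmax(dim=-1)[:, -1].mean()   # second term: -log h_theta(.)[K+1]
    loss = cls_term + lam * unc_term
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```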
Based on Lemma 1, we can introduce the following theorem, followed by a proof.

###### Theorem 1.

Let $p(x)$ denote the training data distribution, and $q_{\theta}(x)$ the energy model distribution represented by the energy function $E^{\prime}_{\theta}(x)$ defined in Eq. (6). Then the minimization of $D_{KL}(p(x)||q_{\theta}(x))$ can be achieved by optimizing Eq. (7).

###### Proof.

According to Lemma 1, the KL divergence between $p(x)$ and $q_{\theta}(x)$ can be optimized by descending the gradient in Eq. (15). We can replace the need of computing the expectation over the parameterized density $q_{\theta}(x)$ in Eq. (15) by fixing the parameters $\theta$, denoted by $q_{\bar{\theta}}(x)$ (only the numerical value of Eq. (15) matters in the optimization process). As such, Eq. (15) is converted to $\mathbb{E}_{p(x)}\Bigg{[}\frac{\partial E^{\prime}_{\theta}(x)}{\partial\theta}\Bigg{]}-\mathbb{E}_{q_{\bar{\theta}}(x)}\Bigg{[}\frac{\partial E^{\prime}_{\theta}(x)}{\partial\theta}\Bigg{]}.$ (16) Now we can optimize $D_{KL}(p(x)||q_{\theta}(x))$ via the objective $\mathbb{E}_{p(x)}\Big{[}E^{\prime}_{\theta}(x)\Big{]}-\mathbb{E}_{q_{\bar{\theta}}(x)}\Big{[}E^{\prime}_{\theta}(x)\Big{]}$, which has the same numerical gradient with respect to $\theta$ as Eqs. (16) and (15). Then we have (the inequality below holds because $\forall y\leq K,\ \ \sum\limits_{i=1}^{K}h_{\theta}(x)[i]\geq h_{\theta}(x)[y]$) $\displaystyle\mathbb{E}_{p(x)}\Bigg{[}E^{\prime}_{\theta}(x)\Bigg{]}-\mathbb{E}_{q_{\bar{\theta}}(x)}\Bigg{[}E^{\prime}_{\theta}(x)\Bigg{]}$ (17) $\displaystyle=$ $\displaystyle\mathbb{E}_{p(x)}\Bigg{[}-\log\sum\limits_{i=1}^{K}h_{\theta}(x)[i]\Bigg{]}+\mathbb{E}_{q_{\bar{\theta}}(x)}\Bigg{[}\log\sum\limits_{i=1}^{K}h_{\theta}(x)[i]\Bigg{]}$ $\displaystyle\leq$ $\displaystyle\mathbb{E}_{p(x)}\Bigg{[}-\log h_{\theta}(x)[y]\Bigg{]}+\mathbb{E}_{q_{\bar{\theta}}(x)}\Bigg{[}\log\sum\limits_{i=1}^{K}h_{\theta}(x)[i]\Bigg{]}.$ Therefore, Eq. (7) is an upper bound of an equivalent variant of the KL-divergence between $p(x)$ and $q_{\theta}(x)$. ∎

Combining Proposition 1 and Theorem 1, we can conclude that our objective of Eq. (4) can minimize $D_{KL}(p(x)||q_{\theta}(x))$. Therefore, once our objective has converged, we obtain $p(x)\simeq q_{\theta}(x)=\frac{\exp(-E^{\prime}_{\theta}(x))}{Z^{\prime}(\theta)}\propto\sum\limits_{i=1}^{K}h_{\theta}(x)[i]$, which makes the summation of the $K$ softmax scores of the original classes directly proportional to the marginal density $p(x)$ and, in turn, makes the $(K+1)$-th softmax score negatively correlated with $p(x)$.

Table 1: Comparison between our approach EOW-Softmax and nine baselines on four benchmark datasets. It is clear that our approach generally leads to a better calibrated model than the baselines (lower ECE & NLL), while maintaining the accuracy. $\uparrow$: the higher the better. $\downarrow$: the lower the better.
| Method | MNIST (MLP) Acc% $\uparrow$ | ECE% $\downarrow$ | NLL $\downarrow$ | CIFAR10 (VGG11) Acc% $\uparrow$ | ECE% $\downarrow$ | NLL $\downarrow$ | CIFAR100 (ResNet50) Acc% $\uparrow$ | ECE% $\downarrow$ | NLL $\downarrow$ | Tiny-ImageNet (ResNet50) Acc% $\uparrow$ | ECE% $\downarrow$ | NLL $\downarrow$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Vanilla Training | 98.32 | 1.73 | 0.29 | 90.48 | 6.30 | 0.43 | 71.57 | 19.1 | 1.58 | 46.71 | 25.2 | 2.95 |
| TrustScore [20] | 98.32 | 2.14 | 0.26 | 90.48 | 5.30 | 0.40 | 71.57 | 10.9 | 1.43 | 46.71 | 19.2 | 2.75 |
| MC-Dropout [7] | 98.32 | 1.71 | 0.34 | 90.48 | 3.90 | 0.47 | 71.57 | 9.70 | 1.48 | 46.72 | 17.4 | 3.17 |
| Label Smoothing [36] | 98.77 | 1.68 | 0.30 | 90.71 | 2.70 | 0.38 | 71.92 | 3.30 | 1.39 | 47.19 | 5.60 | 2.93 |
| Mixup [37] | 98.83 | 1.74 | 0.24 | 90.59 | 3.30 | 0.37 | 71.85 | 2.90 | 1.44 | 46.89 | 6.80 | 2.66 |
| JEM [12] | 97.23 | 1.56 | 0.21 | 90.36 | 3.30 | 0.34 | 70.28 | 2.46 | 1.31 | 45.97 | 5.42 | 2.47 |
| OvA DM [34] | 96.67 | 1.78 | 0.27 | 89.56 | 3.55 | 0.37 | 70.11 | 3.58 | 1.40 | 45.55 | 4.22 | 2.50 |
| Temperature Scaling [14] | 95.14 | 1.32 | 0.17 | 89.83 | 3.10 | 0.33 | 69.84 | 2.50 | 1.23 | 45.03 | 4.80 | 2.59 |
| DBLE [40] | 98.69 | 0.97 | 0.12 | 90.92 | 1.50 | 0.29 | 71.03 | 1.10 | 1.09 | 46.45 | 3.60 | 2.38 |
| EOW-Softmax (_ours_) | 98.91 | 0.88 | 0.15 | 90.24 | 1.57 | 0.25 | 71.33 | 1.08 | 1.03 | 46.97 | 3.45 | 2.22 |

Figure 3: Results on CIFAR-100-C. Each bar represents the mean ECE on 19 different corruptions, with the vertical line segment denoting the standard deviation. In general, EOW-Softmax achieves the lowest ECE under five different corruption levels.

## 4 Experiments

### 4.1 Experimental Setup

Settings We adopt three settings to evaluate our approach. 1) _Confidence Calibration_ : This aims to evaluate the effectiveness of a method in improving confidence calibration. Following [40], we use four datasets: MNIST [24] (MLP), CIFAR-10 [22] (VGG11 [35]), CIFAR-100 [22] (ResNet50 [15]), and Tiny-ImageNet [5] (ResNet50). Network architectures used for these datasets are indicated in the parentheses. 2) _OOD Detection_ : A ResNet50 classifier is trained on CIFAR-100 and tested on the combination of CIFAR-100's test split and an OOD dataset, i.e., CIFAR-10 or SVHN [31]. The goal for the classifier is to assign low confidence to as many OOD samples as possible. The accuracy is computed on predictions with confidence higher than a threshold. 3) _Robustness under Corruption_ : A classifier is trained on CIFAR-100. Its calibration performance is evaluated on CIFAR-100-C [17], where the images are perturbed by 19 different corruptions with five intensity levels.

Baselines We compare our approach with nine baseline methods: vanilla training, MC-Dropout [7], Temperature Scaling [14], Mixup [37], Label Smoothing [36], TrustScore [20], JEM [12], DBLE [40], and OvA DM [34].

Evaluation Metrics Following prior work [14], we use two metrics to assess how well the confidence in a model's predictions is calibrated, namely Expected Calibration Error (ECE) [29] and Negative Log-Likelihood (NLL). The lower the ECE/NLL, the better the calibration performance. Below we explain in detail how these two metrics are calculated. ECE approximates the expectation of the difference between accuracy and confidence (i.e., the likelihood on the predicted label $\hat{y}$), which can reflect how well the prediction confidence aligns with the true accuracy.
Specifically, the confidence estimates made on all test samples are partitioned into $L$ equally spaced bins (following [14, 40, 34], $L=15$), and the difference between the average confidence and accuracy within each bin $I_{l}$ is calculated, $\displaystyle\text{ECE}=\sum\limits_{l=1}^{L}\frac{1}{N}|\sum\limits_{x\in I_{l}}p(\hat{y}|x)-\sum\limits_{x\in I_{l}}1(\hat{y}=y)|,$ (18) where $N$ denotes the total number of samples in the test set. NLL computes the average negative log-likelihood on all test samples, $\displaystyle\text{NLL}=-\frac{1}{N}\sum\limits_{m=1}^{N}\log p(\hat{y}_{m}|x_{m}).$ (19) Implementation Details We use the SGD optimizer with the learning rate of 1e-4, the momentum of 0.9, and the weight decay of 5e-4. The batch size is set to 64. The number of epochs is 200. The learning rate is decayed by 0.1 at the 100-th and 150-th epoch, respectively. For SGLD, we use a constant step size of 2 and a standard deviation of 1e-3 (see Eq (5)). The number of updates in each SGLD round is set to 100. To ensure that the results are convincing, we run each experiment 5 times with different random seeds and average their results. ### 4.2 Main Results Confidence Calibration We first evaluate the calibration performance on four standard datasets, namely MNIST, CIFAR-10/100 and Tiny-ImageNet. The results are shown in Table 1. In general, our approach EOW-Softmax obtains the best overall calibration performance on most datasets. Comparing with Vanilla Training, we observe that our EOW-Softmax achieves a similar test accuracy, while significantly improves the calibration performance in terms of ECE, especially on the three challenging datasets with natural images—6.30%$\to$1.57% on CIFAR-10, 19.1%$\to$1.08% on CIFAR-100, and 25.2%$\to$3.45% on Tiny-ImageNet. These results strongly suggest that our energy-based open-world uncertainty modeling has great potential for real- world applications as it shows a good balance between test accuracy and calibration performance. JEM and OvA DM are two related methods to ours. JEM is based on joint distribution modeling while OvA DM transforms the conventional softmax classifier to a distance-based one-vs-all classifier. The comparison with these two methods shows that our approach is clearly better in all metrics, which advocates our design of the $K$+1-way softmax for modeling open-world uncertainty. Compared with the top-performing baselines, i.e. Temperature Scaling and DBLE, our EOW-Softmax is highly competitive—it obtains the best ECE on all datasets except CIFAR-10 where the performance is only slightly worse than DBLE. Among these three methods, EOW-Softmax performs the best in maintaining the original test accuracy, whereas Temperature Scaling has to sacrifice the test accuracy in exchange for improvement on ECE. This is because for fair comparison (all methods only have access to the training set), Temperature Scaling has to separate a validation set out from the original training set for tuning its scaling parameter, which reduces the amount of training data. Table 2: Test accuracy on the combination of in-distribution and OOD test set using ResNet50 trained on CIFAR-100. 
| OOD dataset | Method | Prob. threshold 0 | .25 | .5 | .75 |
|---|---|---|---|---|---|
| CIFAR10 | Vanilla Training | 37.24 | 42.31 | 47.67 | 58.90 |
| CIFAR10 | JEM [12] | 42.55 | 46.88 | 0.53 | 63.21 |
| CIFAR10 | OvA DM [34] | 39.78 | 48.54 | 54.31 | 65.20 |
| CIFAR10 | EOW-Softmax | 37.65 | 50.11 | 57.32 | 69.00 |
| SVHN | Vanilla Training | 20.24 | 21.33 | 24.38 | 26.90 |
| SVHN | JEM [12] | 19.87 | 22.57 | 26.22 | 30.75 |
| SVHN | OvA DM [34] | 20.08 | 23.99 | 26.08 | 30.32 |
| SVHN | EOW-Softmax | 20.21 | 25.58 | 28.13 | 32.68 |

OOD Detection We follow [34] to simulate real-world scenarios by training a classifier on CIFAR-100 and testing on the combination of CIFAR-100's test set and an OOD dataset (CIFAR-10/SVHN). The classifier is required to assign low confidence to OOD samples such that their predictions can be rejected by a pre-defined threshold. The results are reported in Table 2, where we compare our approach with JEM and OvA DM (as these two methods are most related to ours), as well as the vanilla training method. The probability threshold is linearly increased from 0 to 0.75. Only predictions with confidence higher than this threshold are kept. From the results, we observe that EOW-Softmax outperforms all baselines by clear margins at different thresholds (except 0). This indicates that a classifier trained with EOW-Softmax suffers much less from the overconfidence problem than the competitors, and is thus safer to deploy in practical applications.

Robustness under Corruption We also evaluate our approach on corrupted images, i.e., CIFAR-100-C, and compare with JEM and OvA DM. Specifically, there are five different intensities of corruption as defined in [17], each with 19 different corruption types. Under each intensity level of corruption, we test a model's ECE on images of all corruption types, and report the average result and the standard deviation. The comparison is illustrated in Figure 3. Overall, EOW-Softmax performs favorably against JEM and OvA DM, as well as the vanilla training baseline. Though both EOW-Softmax and JEM are based on energy models for generative modeling, JEM clearly has a larger variation in performance among different corruption types. This is because JEM is more difficult to train—it optimizes the log-likelihood of the data distribution independently of the classifier learning. In contrast, the ‘generative modeling’ in EOW-Softmax is seamlessly integrated into the $K$+1-way classification task, which leads to better training stability.

### 4.3 Ablation study

Hyper-parameter Recall that $\lambda$ in Eq. (4) balances between the standard $K$-way classification loss and the energy-based loss for uncertainty modeling. We experiment with different values (1, 0.1, and 0.01) on CIFAR-10/100 to see the impact of this hyper-parameter. The results in Table 3 show that $\lambda=0.1$ leads to the best calibration performance (lowest ECE values) on both datasets.

Where to Apply the SGLD Sampling? In our approach, SGLD sampling is applied to the latent feature space rather than the pixel space as in most EBMs. To justify this design, we experiment on CIFAR-100 with different variants of our approach where the SGLD sampling is applied to different positions, including the pixel space, stage-1, stage-2, and stage-3 (in the neural network). In addition to the test accuracy and the ECE, we also report the training speed (seconds per iteration) measured using a Tesla K80 GPU.
Table 4 shows that applying the SGLD sampling to the pixel space incurs a huge computation overhead, while shifting the sampling to the latent space significantly improves the training speed without sacrificing the calibration performance.

Table 3: Ablation study on the impact of $\lambda$ in Eq. (4).

| Model | $\lambda$ | Acc% $\uparrow$ | ECE% $\downarrow$ | NLL $\downarrow$ |
|---|---|---|---|---|
| VGG11 (CIFAR10) | 1 | 89.81 | 2.11 | 0.28 |
| VGG11 (CIFAR10) | 0.1 | 90.24 | 1.57 | 0.25 |
| VGG11 (CIFAR10) | 0.01 | 90.11 | 3.37 | 0.36 |
| ResNet50 (CIFAR100) | 1 | 70.26 | 1.33 | 1.24 |
| ResNet50 (CIFAR100) | 0.1 | 71.33 | 1.08 | 1.03 |
| ResNet50 (CIFAR100) | 0.01 | 71.51 | 2.31 | 1.31 |

Table 4: Ablation on where to apply the SGLD sampling.

| Position | Acc% $\uparrow$ | ECE% $\downarrow$ | Second/Iter $\downarrow$ |
|---|---|---|---|
| Pixel space | 73.94 | 1.89 | 25.15 |
| Feature stage 1 | 73.21 | 2.09 | 10.31 |
| Feature stage 2 | 73.48 | 1.73 | 5.21 |
| Feature stage 3 | 73.05 | 2.17 | 2.37 |

## 5 Conclusion

This paper has addressed the closed-world problem in softmax, which causes neural network classifiers to produce overconfident predictions, by introducing an energy-based objective to model open-world uncertainty.

Acknowledgement. This study is supported by NTU NAP and Microsoft Research Lab Asia, and under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s).

## References

* [1] Michael Arbel, Liang Zhou, and Arthur Gretton. Generalized energy based models. In ICLR, 2021.
* [2] Abhijit Bendale and Terrance E Boult. Towards open set deep networks. In CVPR, 2016.
* [3] Yoshua Bengio, Grégoire Mesnil, Yann N. Dauphin, and Salah Rifai. Better mixing via deep representations. CoRR, abs/1207.4404, 2012.
* [4] Tong Che, Ruixiang Zhang, Jascha Sohl-Dickstein, Hugo Larochelle, Liam Paull, Yuan Cao, and Yoshua Bengio. Your GAN is secretly an energy-based model and you should use discriminator driven latent sampling. arXiv preprint arXiv:2003.06060, 2020.
* [5] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
* [6] Yilun Du and Igor Mordatch. Implicit generation and modeling with energy based models. In NeurIPS, 2019.
* [7] Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pages 1050–1059, 2016.
* [8] Ruiqi Gao, Yang Song, Ben Poole, Ying Nian Wu, and Diederik P Kingma. Learning energy-based models by diffusion recovery likelihood. In ICLR, 2021.
* [9] Charles J Geyer. Markov chain Monte Carlo maximum likelihood. 1991.
* [10] Charles J Geyer. Practical Markov chain Monte Carlo. Statistical Science, pages 473–483, 1992.
* [11] Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. In NeurIPS, 2014.
* [12] Will Grathwohl, Kuan-Chieh Wang, Jörn-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, and Kevin Swersky. Your classifier is secretly an energy based model and you should treat it like one. arXiv preprint arXiv:1912.03263, 2019.
* [13] Will Sussman Grathwohl, Jacob Jin Kelly, Milad Hashemi, Mohammad Norouzi, Kevin Swersky, and David Duvenaud. No MCMC for me: Amortized sampling for fast and stable training of energy-based models. In ICLR, 2021.
* [14] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In ICML, 2017. * [15] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016. * [16] Matthias Hein, Maksym Andriushchenko, and Julian Bitterwolf. Why relu networks yield high-confidence predictions far away from the training data and how to mitigate the problem. In CVPR, 2019. * [17] Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. Proceedings of the International Conference on Learning Representations, 2019. * [18] Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural computation, 14(8):1771–1800, 2002. * [19] Yen-Chang Hsu, Yilin Shen, Hongxia Jin, and Zsolt Kira. Generalized odin: Detecting out-of-distribution image without learning from out-of-distribution data. In CVPR, 2020. * [20] Heinrich Jiang, Been Kim, Melody Guan, and Maya Gupta. To trust or not to trust a classifier. In Advances in Neural Information Processing Systems, volume 31, 2018. * [21] Taesup Kim and Yoshua Bengio. Deep directed generative models with energy-based probability estimation. In ICLR-W, 2016. * [22] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009\. * [23] Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In NeurIPS, 2017. * [24] Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. * [25] Yann LeCun, Sumit Chopra, Raia Hadsell, M Ranzato, and F Huang. A tutorial on energy-based learning. Predicting structured data, 1(0), 2006. * [26] Kimin Lee, Honglak Lee, Kibok Lee, and Jinwoo Shin. Training confidence-calibrated classifiers for detecting out-of-distribution samples. In ICLR, 2018. * [27] Shiyu Liang, Yixuan Li, and Rayadurgam Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. In ICLR, 2018. * [28] Rafael Müller, Simon Kornblith, and Geoffrey Hinton. When does label smoothing help? In NeurIPS, 2019. * [29] Mahdi Pakdaman Naeini, Gregory Cooper, and Milos Hauskrecht. Obtaining well calibrated probabilities using bayesian binning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29, 2015. * [30] Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. Do deep generative models know what they don’t know? In ICLR, 2019. * [31] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. In NeurIPS-W, 2011. * [32] Anh Mai Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. CoRR, abs/1412.1897, 2014. * [33] Erik Nijkamp, Mitch Hill, Song-Chun Zhu, and Ying Nian Wu. Learning non-convergent non-persistent short-run mcmc toward energy-based model. arXiv preprint arXiv:1904.09770, 2019. * [34] Shreyas Padhy, Zachary Nado, Jie Ren, Jeremiah Liu, Jasper Snoek, and Balaji Lakshminarayanan. Revisiting one-vs-all classifiers for predictive uncertainty and out-of-distribution detection in neural networks. arXiv preprint arXiv:2007.05134, 2020. * [35] Karen Simonyan and Andrew Zisserman. 
Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. * [36] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818–2826, 2016. * [37] Sunil Thulasidasan, Gopinath Chennupati, Jeff Bilmes, Tanmoy Bhattacharya, and Sarah Michalak. On mixup training: Improved calibration and predictive uncertainty for deep neural networks. arXiv preprint arXiv:1905.11001, 2019. * [38] Yoav Wald, Amir Feder, Daniel Greenfeld, and Uri Shalit. On calibration and out-of-domain generalization. arXiv preprint arXiv:2102.10395, 2021. * [39] Max Welling and Yee W Teh. Bayesian learning via stochastic gradient langevin dynamics. In ICML, 2011. * [40] Chen Xing, Sercan Arik, Zizhao Zhang, and Tomas Pfister. Distance-based learning from errors for confidence calibration. In ICLR, 2020. * [41] Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. In ICLR, 2017. * [42] Kaiyang Zhou, Ziwei Liu, Yu Qiao, Tao Xiang, and Chen Change Loy. Domain generalization in vision: A survey. arXiv preprint arXiv:2103.02503, 2021.
# Deep Learning Based EDM Subgenre Classification using Mel-Spectrogram and Tempogram Features

Wei-Han Hsu, Bo-Yu Chen, and Yi-Hsuan Yang
Research Center for IT Innovation, Academia Sinica, Taipei, Taiwan
{ddmanddman, bernie40916<EMAIL_ADDRESS>

###### Abstract

Along with the evolution of music technology, a large number of styles, or "subgenres," of Electronic Dance Music (EDM) have emerged in recent years. While the classification task of distinguishing between EDM and non-EDM has often been studied in the context of music genre classification, little work has been done on the more challenging task of EDM subgenre classification. The state-of-the-art model is based on extremely randomized trees and could be improved by deep learning methods. In this paper, we extend the state-of-the-art music auto-tagging model "short-chunk CNN$+$Resnet" to EDM subgenre classification, with the addition of two mid-level tempo-related feature representations, called the Fourier tempogram and autocorrelation tempogram. We also explore two fusion strategies, early fusion and late fusion, to aggregate the two types of tempograms. We evaluate the proposed models using a large dataset consisting of 75,000 songs for 30 different EDM subgenres, and show that the adoption of deep learning models and tempo features indeed leads to higher classification accuracy.

###### Index Terms: EDM subgenre classification, deep learning, feature fusion, tempogram, convolutional neural network

## I Introduction

Electronic Dance Music (EDM) is a kind of dance and club music. Disc Jockeys (DJs) often need to classify EDM tracks by subgenre to find songs with a similar style for many purposes, for example when building the transitions in a DJ set. An automatic program for EDM subgenre classification can be useful for human DJs [1]. It is also a fundamental building block towards realizing an automatic AI DJ [2, 3]. The task of _automatic EDM subgenre classification_ can in general be considered an instance of the music auto-tagging problem [4]. Therefore, methodology-wise we can build on research that has been done for general music auto-tagging and classification. However, we note that, due to the similarity among the EDM subgenres, EDM subgenre classification can sometimes be difficult even for human DJs. The decision between subgenres can be fuzzy. For example, "tech-house," "deep-house," and "progressive-house" music may sound fairly similar as they are all "house" music.

We present in this paper a deep learning based approach to automatic EDM subgenre classification, extending the recent work by Caparrini _et al._ [5], which uses a non-deep-learning approach. Following their work, we compile our training and test sets from Beatport (https://www.beatport.com/), a principal worldwide source of music for DJs. The Beatport website assigns only a single subgenre label to each song, so we can formulate the task as a multi-class classification problem. Our dataset contains 30 different subgenres, each with 2,500 songs and hence 75,000 songs in total. We thus treat it as a 30-class classification problem. While the classifier used by Caparrini _et al._ [5] was based on extremely randomized trees [6], a non-deep-learning algorithm, our classifier is based on the "short-chunk convolutional neural network (CNN)$+$Resnet" deep architecture proposed by Won _et al._ [7], which represents the state-of-the-art in music auto-tagging (see Figure 2). However, the original short-chunk CNN model takes only the Mel-spectrograms as input for feature learning.
While this may be sufficient for classifying broader genre classes such as Pop, Rock, Jazz, and EDM, it is unclear how it performs for a subgenre classification task. Figure 1: Boxplots of the tempo values (in beat-per-minute; or BPM) of different EDM subgenres considered in our work. In particular, we note that, among the 92 hand-crafted audio features employed by Caparrini _et al._ [5], _tempo_ -related features were found to be the top- four important features. By estimating the tempo values for all the songs in our dataset using the algorithm of Grosche _et al._ [8], and plotting the resulting distribution of tempo values for 100 randomly picked songs from each genre, as shown in Figure 1, we can see that different EDM subgenres do prefer different tempo values. For example, the tempo of “psy-trance” music is usually around 140 beat-per-minute (BPM), while “deep-house” is usually around 120 BPM. Different subgenres also have different tempo ranges. For example, the tempo of “jumpup-drum-and-bass” can vary widely from 90 to 170 BPM, while the tempo values of “tech-house” typically fall within 120 to 130 BPM. In light of this, we propose simple extensions of the short-chunk CNN$+$Resnet model [7] to learn features from not only the Mel-spectrogram but also the “tempogram” [8], a feature representation of the possible tempo values of each short-time frame of a signal. We consider both the autocorrelation- and Fourier-tempogram proposed by Grosche _et al._ [8], and experiment with both an early-fusion architecture and a late-fusion architecture. Our experiment shows that the Mel-spectrogram only baseline attains 55.4% song-level accuracy, and that our best model incorporating additionally tempograms achieves 60.6% accuracy. We find in particular salient improvement in subgenres such as “future-house,” “leftfield-house-and-techno,” and “uplifing- trance”. While we cannot re-distribute the audio files of the data due to copyright issues, we release our code and model checkpoints at https://github.com/mir- aidj/EDM-subgenre-classifier. ## II Related Work Music genre classification is a well-researched task in the field of music information retrieval (MIR), usually formulated as a multi-class classification problem. For example, the most famous and widely-used dataset in music genre classification, the GTZAN dataset [9], includes in total 1K songs covering the following 10 genres: Blues, Classical, Country, Disco, Hiphop, Jazz, Metal, Pop, Reggae and Rock. In contrast, relatively less research has been done on music subgenre classification, which aims at distinguishing between subgenres belonging to the same “super” genre such as EDM and Jazz. We review three such existing work below. EDM subgenre classification. Caparrini _et al._ [5] presented a non-deep learning method for EDM subgenre classification. They compiled two different collections of EDM subgenre data from Beatport, the first with 23 subgenres and the second expanded version with 29 subgenres. They retrieved 100 songs for each subgenre from the Beatport website for both datasets. Then, they employed 92 hand-crafted audio features and tested several non-deep learning based classifiers under 10-fold cross validation. For the first set, the gradient tree boosting [10] performed the best, reaching 59.2% song-level classification accuracy. For the second set, which is more challenging due to the larger number of classes, the extremely randomised trees [6] performed the best, reaching 48.2% accuracy. 
They also drew two confusion matrices for both sets, as well as a directed graph to show which subgenres were easily misclassified. They further measured the importance of each feature in their tree-based classifier, finding that the top four most important features are all related to tempo.

Figure 2: System diagram of the short-chunk CNN$+$Resnet model [7].

Figure 3: System diagram of the proposed architecture for (a) employing both Mel-spectrograms and two different tempograms for music subgenre classification; (b) zoom-in of the early-fusion module for fusing the two tempograms; (c) the late-fusion variant. The ‘SCcnn’ in (a) denotes the stack of 7 copies of feature extraction blocks (each with two 2-D convolutions) shown in Figure 2.

Heavy metal subgenre classification. Tsatsishvili [11] investigated subgenre classification for heavy metal music, also using non-deep learning methods. The author compiled a dataset of 210 songs, comprising 30 songs for each of the following 7 heavy metal subgenres: “black,” “death,” “melodic death,” “gothic,” “heavy,” “power” and “progressive.” Half the songs were used for training and the other half for testing. Audio features employed in the classifiers were automatically chosen from 200 hand-crafted features with either a correlation-based feature selection method or a wrapper selection method. The best classification accuracy was achieved by AdaBoost [12], reaching 45.7%.

Jazz subgenre classification. Quinto _et al._ [13] employed deep learning methods for Jazz subgenre classification, considering only three subgenres: “acid-jazz,” “bebop,” and “swing/electroswing.” They considered a simpler multi-layer perceptron (MLP) with 1–3 layers, as well as a more sophisticated 3-layer recurrent neural network employing long short-term memory (LSTM) with 32 neurons per layer. They employed Mel-frequency cepstral coefficients (MFCC) as the input feature. Their dataset includes 254 minutes of “acid-jazz,” 141 minutes of “bebop,” and 245 minutes of “swing/electroswing.” 60% of the data were used for training and the rest for testing. Finally, they obtained 90% testing accuracy with the LSTM classifier, and 79% with the MLP classifier. The accuracy was fairly high, possibly because of the small number of classes.

Unlike music genre/subgenre classification, the music auto-tagging task [7, 14, 15, 16, 17, 18, 19, 20] is often formulated as a multi-label classification problem, where a song can be labeled with multiple tags. The performance of an auto-tagging model is usually evaluated in terms of metrics such as PR-AUC and ROC-AUC [4]. Won _et al._ [7] experimented with a large number of different CNN-based models on three widely-used public datasets for this task: MagnaTagATune (MTAT) [14], million song dataset (MSD) [21], and MTG-Jamendo [19], finding that a particular architecture called “short-chunk CNN$+$Resnet” performs in general the best. As it is straightforward to modify this architecture for multi-class classification, we employ it as the backbone architecture of our models in this work.

## III Methods

### III-A Dataset

Following Caparrini _et al._ [5], we build our dataset by crawling the audio previews of songs and their corresponding class labels from Beatport. Different from their work, we consider a larger set of classes, covering the 30 EDM subgenres listed in Figure 1, and a much larger collection of songs, with consistently 2,500 songs per subgenre (this Beatport dataset was initially created in one of our prior works [24]).
We split the dataset by the ratio of 8:1:1 per subgenre to get the training, validation, and test sets. Namely, the test set contains 250 tracks per subgenre. Every audio preview made available by Beatport is 2 minutes long. We make them consistently mono channel with 22,050 Hz sampling rate. ### III-B Input Features: Mel-spectrogram and Tempograms Instead of using hand-crafted features as done by Caparrini _et al._ [5], we employ “raw” features as input to our deep neural network for feature learning. The basic feature representation, also one of the most widely-used one in MIR, is the Mel-spectrogram, a time-frequency representation that is computed by applying the perceptually-motivated Mel filter bank to the spectrogram of an audio waveform. With librosa [22], we compute the Mel- spectrogram using a Hamming window of 2,048-sample long and 512-sample hop length for the short-time Fourier transform (STFT), and 128 Mel filters. We also employ the tempogram [8] as input feature, a “time-tempo” representation that contains local tempo information for each frame of an audio signal. Grosche _et al._ [8] proposed two types of tempogram: the _Fourier tempogram_ and the _autocorrelation tempogram_. The former converts frequency (Hz) to tempo (beat-per-minute; BPM), emphasizing the harmonics, while the latter converts time-lag (seconds) to tempo, emphasizing instead the subharmonics. We consider both here. The _Fourier tempogram_ 222https://www.audiolabs- erlangen.de/resources/MIR/FMP/C6/C6S2_TempogramFourier.html is computed by firstly estimating from the audio waveform a “novelty curve” indicating note onset candidates, and then computing the Fourier representation of the novelty curve (not the original audio waveform) using STFT. The resulting representation is assumed to capture local periodic patterns of the input signal. Its frequency axis is finally mapped to a BPM axis. The _autocorrelation tempogram_ ,333https://www.audiolabs- erlangen.de/resources/MIR/FMP/C6/C6S2_TempogramAutocorrelation.html on the other hand, is computed by calculating the local autocorrelation function for different time lags from the also the novelty curve, and then converting the time-lag to a linear BPM axis using interpolation and resampling. We also employ librosa [22] to compute these two variants of the tempogram, with 512-sample hop size and a Hamming window of 2,048 samples. Figure 6 provides examples of the Fourier tempogram and autocorrelation tempogram of songs in our dataset.444Grosche _et al_ [8] also proposed the “cyclic tempogram,” where tempi differing by a power of two are identified, but we do not use it here. We use two different length per song to compute the Mel-spectrograms: the 30-second segment from 15s to 45s of a song, or the whole two minutes. For the tempograms, we use only the segment a song from 15s to 45s for simplicity. The size of a Fourier tempogram and an autocorrelation tempogram would be 193$\times$1,293 (BPM$\times$time) and 384$\times$1,292, respectively. Audio is fed to our models using “chunks” (see below) of such fixed-length Mel- spectrograms and tempograms. Every feature dimension is zscore-normalized. ### III-C Short-chunk CNN with Resnet We employ the short-chunk CNN$+$Resnet as the backbone architecture of our models, as it has been shown to outperform competing models in music auto- tagging across different benchmark datasets [7]. 
As depicted in Figure 2, this model contains 7 copies of feature extraction layers, each comprising two 2-D convolutional layers, two batch normalization layers (after the convolutions), and one ReLU activation layer in between the convolutions. The output of this stack of layers goes through max pooling and then two dense layers and a final softmax layer for classification. The name “short-chunk” stems from the fact that the model is designed to take as input short segments of the Mel-spectrogram of size 128$\times$200, which in other words divides our Mel-spectrogram into 25 chunks (neglecting the last 168 frames). We similarly divide the tempograms into 25 chunks. Following [7], we assume that the subgenre label of each chunk is the same as the subgenre label of the song the chunk comes from.

TABLE I: Chunk- and song-level testing accuracy of 30-class EDM subgenre classification of the evaluated short-chunk CNN-based models; the first two models use only the Mel-spectrograms, while the last two use both the Mel-spectrograms and tempograms

| Model | chunk-level | song-level |
|---|---|---|
| Mel-spectrogram only (30 sec) [7] | 46.1% | 50.4% |
| Mel-spectrogram only (120 sec) [7] | 46.1% | 55.4% |
| Fourier tempogram only (30 sec) | 32.0% | 34.9% |
| autocorrelation tempogram only (30 sec) | 28.3% | 31.2% |
| early-fusion | 53.4% | 60.3% |
| late-fusion | 53.3% | 60.6% |

### III-D Proposed Fusion Models

Figure 3(a) shows the architecture we propose to integrate the (chunk-level) Mel-spectrogram, Fourier tempogram, and autocorrelation tempogram of a song for classification. We first fuse the two tempograms into a combined representation, and then concatenate it with the output of the feature extraction blocks of the short-chunk CNN branch that deals with the Mel-spectrogram. After feature concatenation, we use the same classification block as the short-chunk CNN.

Figures 3(b) and (c) depict the two fusion strategies considered in our work to combine the two tempograms. They both use four parallel 1-D convolutional layers with different kernel sizes (3, 3, 5, 5) and strides (2, 3, 3, 5) for feature extraction, a design that is inspired by the work of Pons _et al._ [23]. The outputs of the 1-D convolution layers are concatenated, mean-pooled, further processed with a 2-D convolution layer, and finally max-pooled to yield a combined representation. The _early-fusion_ variant (Figure 3(b)) combines the two tempograms at the very beginning with feature-wise concatenation, while the _late-fusion_ variant (Figure 3(c)) combines them after the 1-D convolutional layers.

Figure 4: Per-subgenre testing classification accuracy of different models.

## IV Experiments

We train the following deep models by using cross-entropy as the loss function and Adam as the optimizer.

* • _Mel-spectrogram only_ : the short-chunk CNN$+$Resnet baseline [7] shown in Figure 2.
* • _Proposed early- and late-fusion_ : the proposed models that use both the Mel-spectrograms and the two tempograms, shown in Figure 3.
* • _Fourier tempogram or autocorrelation tempogram only_ : the ablated variants that use only one of the tempograms, without using the Mel-spectrograms. This is done with a simplified version of the fusion models, with four 1-D convolutional layers and a 2-D convolutional layer for feature extraction, and two dense layers for classification.

We set the batch size to 256, and train the models for 200 epochs with learning rate $0.005$. We use 50% dropout before the last dense layer and after the ReLU layer.
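To make the input pipeline of Secs. III-B and III-C concrete, a possible librosa-based sketch is shown below. The hop size, window, number of Mel bands, and 200-frame chunks follow the text; any parameter not stated there (e.g., the tempogram window length, left at the librosa default of 384, which yields the 193- and 384-bin sizes mentioned in Sec. III-B) is an assumption.

```python
# Illustrative sketch of the input features of Sec. III-B and the chunking of Sec. III-C.
import librosa
import numpy as np

SR, N_FFT, HOP, N_MELS = 22050, 2048, 512, 128

def extract_features(path, offset=15.0, duration=30.0):
    y, _ = librosa.load(path, sr=SR, mono=True, offset=offset, duration=duration)
    mel = librosa.feature.melspectrogram(y=y, sr=SR, n_fft=N_FFT, hop_length=HOP,
                                         n_mels=N_MELS, window="hamming")      # (128, T)
    oenv = librosa.onset.onset_strength(y=y, sr=SR, hop_length=HOP)            # novelty curve
    tg_fourier = np.abs(librosa.feature.fourier_tempogram(
        onset_envelope=oenv, sr=SR, hop_length=HOP))                           # (193, T)
    tg_autocorr = librosa.feature.tempogram(
        onset_envelope=oenv, sr=SR, hop_length=HOP)                            # (384, T)
    return mel, tg_fourier, tg_autocorr

def chunk(feat, chunk_len=200):
    """Split a (bins, T) matrix into non-overlapping 200-frame chunks (Sec. III-C)."""
    n = feat.shape[1] // chunk_len
    return [feat[:, i * chunk_len:(i + 1) * chunk_len] for i in range(n)]
```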
We report the chunk-level classification accuracy by taking the class with the highest softmax value as the prediction result. In addition, we report the song-level accuracy obtained by a majority voting mechanism over the prediction results of the short chunks of a song (a code sketch of this aggregation is given at the end of this section). Table I shows that the early-fusion and late-fusion models are both better than the Mel-spectrogram-only models in both chunk-level and song-level accuracy. The late-fusion model performs the best overall, reaching 60.6% song-level testing accuracy in our 30-class classification setting, which is much higher than the 48.2% accuracy obtained by Caparrini _et al._ [5] for 29-class classification using a non-deep learning method. This also amounts to a $+$5 to $+$10% performance gain relative to the Mel-spectrogram-only baseline, empirically validating the benefit of using tempo-related features for EDM subgenre classification. Table I also shows that the Fourier-tempogram-only model outperforms its autocorrelation-tempogram counterpart. Figure 5: The confusion matrices of the testing results of two evaluated models: (a) the short-chunk CNN baseline (120 sec); (b) the proposed late-fusion model. Each row shows whether the songs from a subgenre are misclassified as other subgenres; those on the diagonal are correctly classified. As our test set contains 250 songs per subgenre, each row sums to 250. Figure 4 shows the per-genre results of these five models. We can see a salient performance improvement for genres such as “future-house,” “leftfield-house-and-techno” and “uplifting-trance” for the proposed fusion models over the baseline Mel-spectrogram-only model. Figure 5 shows the 30$\times$30 confusion matrices of the baseline model and the proposed late-fusion model. We can see that the proposed model still gets confused by some pairs of subgenres, but less seriously so than the baseline model. Finally, Figure 6 visualizes the Mel-spectrograms and tempograms of four songs from our test set. The songs shown in Figures 6(a) and (b) are both “uplifting-trance” songs. They can both be correctly classified by our late-fusion model, but the first song would be wrongly recognized as a “tech-trance” song by the baseline model. We see that these two songs seem to have fairly different patterns in the Mel-spectrograms, but similar patterns in the autocorrelation tempograms. The baseline model does not work well for this song, possibly because it only has access to the Mel-spectrograms. On the other hand, the songs shown in Figures 6(c) and (d) are an “uplifting-trance” song and a “tech-trance” song, respectively. Again, they can both be correctly classified by our late-fusion model, but the first song in this pair (i.e., (c)) would be wrongly regarded as a “tech-trance” song by the baseline model. We see that these two songs, despite being associated with different subgenres, have some similar local patterns in their Mel-spectrograms, which might have caused confusion for the baseline model.
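The chunk-to-song aggregation mentioned at the beginning of this section can be sketched as follows; the tie-breaking rule is our own choice, as the text does not specify one.

```python
import numpy as np

def song_level_prediction(chunk_logits):
    """Majority vote over the chunk predictions of one song.

    chunk_logits: array of shape (n_chunks, n_classes). Ties are broken by the
    summed softmax scores of the tied classes (our choice).
    """
    chunk_pred = chunk_logits.argmax(axis=1)
    votes = np.bincount(chunk_pred, minlength=chunk_logits.shape[1])
    winners = np.flatnonzero(votes == votes.max())
    if len(winners) == 1:
        return int(winners[0])
    # tie-break: largest total softmax score among the tied classes
    probs = np.exp(chunk_logits - chunk_logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    return int(winners[np.argmax(probs.sum(axis=0)[winners])])
```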
Figure 6: The Mel-spectrogram, Fourier tempogram and autocorrelation tempogram for (a) an “uplifting-trance” song misclassified as “tech-trance” by the baseline model but correctly classified by our late-fusion model; (b) an “uplifting-trance” song correctly classified by both the baseline model and our fusion model; (c) another “uplifting-trance” song misclassified as “tech-trance” by the baseline model but correctly classified by our late-fusion model; (d) a “tech-trance” song correctly classified by both the baseline model and our fusion model. ## V Conclusion In this paper, we have presented a deep learning approach for EDM subgenre classification, achieving 60.6% testing accuracy for 30-class classification when using not only the Mel-spectrograms but also the tempograms as input features. We found that the proposed late-fusion model is about 10% more accurate than a Mel-spectrogram-only deep learning baseline model. This suggests that, for a fine-grained classification problem such as EDM subgenre classification, it is beneficial to consider multiple inputs for feature learning. We have a few ideas for future extensions of this work. First, we would like to explore different architectures for processing the tempograms, such as recurrent layers or temporal convolutional layers [25]. Second, we plan to incorporate more features as input, including other mid-level features [26] as well as high-level features [17, 24, 27]. Third, we intend to explore other feature fusion strategies. Finally, we would like to employ metric-based learning [28], which has been shown to be promising in recent work on music auto-tagging. ## References * [1] S. Sankalp, T. Baruah, S. Tiwari and S. Ganesh, “Intelligent classification of electronic music,” in _Proc. IEEE Int. Symposium on Signal Processing and Information Technology_, pp. 31–35, 2014. * [2] Y.-S. Huang, S.-Y. Chou, and Y.-H. Yang, “DJnet: A dream for making an automatic DJ,” in _Proc. Int. Soc. Music Information Retrieval Conf._, late-breaking and demo paper, 2017. * [3] B.-Y. Chen, W.-H. Hsu, W.-H. Liao, M. A. M. Ramírez, Y. Mitsufuji, and Y.-H. Yang, “Automatic DJ transitions with differentiable audio effects and generative adversarial networks,” _arXiv preprint:2110.06525_, 2021. * [4] J. Nam, K. Choi, J. Lee, S.-Y. Chou, and Y.-H. Yang, “Deep learning for audio-based music classification and tagging,” _IEEE Signal Processing Magazine_, vol. 36, no. 1, pp. 41–51, Jan. 2019. * [5] A. Caparrini, J. Arroyo, L. Pérez-Molina, and J. Sánchez-Hernández, “Automatic subgenre classification in an electronic dance music taxonomy,” _Journal of New Music Research_, vol. 49, no. 3, pp. 269–284, 2020. * [6] P. Geurts, D. Ernst, and L. Wehenkel, “Extremely randomized trees,” _Machine Learning_, pp. 3–42, 2006. * [7] M. Won, A. Ferraro, D. Bogdanov, and X. Serra, “Evaluation of CNN-based automatic music tagging models,” in _Proc. Sound and Music Computing Conf._, 2020. * [8] P. Grosche, M. Müller, and F. Kurth, “Cyclic tempogram: A mid-level tempo representation for music signals,” in _Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing_, pp. 5522–5525, 2010. * [9] G. Tzanetakis and P. Cook, “Musical genre classification of audio signals,” _IEEE Transactions on Speech and Audio Processing_, vol. 10, no. 5, pp. 293–302, Jul. 2002. [Online] https://www.kaggle.com/andradaolteanu/gtzan-dataset-music-genre-classification. * [10] J. H. Friedman, “Stochastic gradient boosting,” _Computational Statistics & Data Analysis_, vol. 38, no. 4, pp. 367–378, 2002. * [11] V.
Tsatsishvili, “Automatic subgenre classification of heavy metal music,” University of Jyväskylä, master’s thesis, 2011. * [12] J. Bergstra, N. Casagrande, D. Erhan, D. Eck, and B. Kégl, “Aggregate features and AdaBoost for music classification,” _Machine Learning_, vol. 65, no. 2–3, pp. 473–484, 2006. * [13] R. J. M. Quinto, R. O. Atienza, and N. M. C. Tiglao, “Jazz music subgenre classification using deep learning,” in _Proc. IEEE Region 10 Conf._, pp. 3111–3116, 2017. * [14] E. Law, K. West, M. I. Mandel, M. Bay, and J. S. Downie, “Evaluation of algorithms using games: The case of music tagging,” in _Proc. Int. Soc. Music Information Retrieval Conf._, pp. 387–392, 2009. * [15] D. Turnbull, L. Barrington, D. Torres, and G. Lanckriet, “Semantic annotation and retrieval of music and sound effects,” _IEEE Transactions on Audio, Speech, and Language Processing_, vol. 16, no. 2, pp. 467–476, Feb. 2008. * [16] J.-Y. Liu and Y.-H. Yang, “Event localization in music auto-tagging,” in _Proc. ACM Multimedia_, pp. 1048–1057, 2016. * [17] J. Pons and X. Serra, “musicnn: Pre-trained convolutional neural networks for music audio tagging,” _arXiv preprint:1909.06654_, 2019. * [18] M. Won, S. Chun, O. Nieto Caballero, and X. Serra, “Automatic music tagging with harmonic CNN,” in _Proc. Int. Soc. Music Information Retrieval Conf._, 2019. * [19] D. Bogdanov, M. Won, P. Tovstogan, A. Porter, and X. Serra, “The MTG-Jamendo dataset for automatic music tagging,” in _Proc. Machine Learning for Music Discovery Workshop_, 2019. * [20] M. Won, S. Chun, and X. Serra, “Toward interpretable music tagging with self-attention,” _arXiv preprint:1906.04972_, 2019. * [21] T. Bertin-Mahieux, D. P. Ellis, B. Whitman, and P. Lamere, “The million song dataset,” in _Proc. Int. Soc. Music Information Retrieval Conf._, 2011. * [22] B. McFee, C. Raffel, D. Liang, D. P. Ellis, M. McVicar, E. Battenberg, and O. Nieto, “librosa: Audio and music signal analysis in Python,” in _Proc. Python in Science Conf._, vol. 8, pp. 18–25, 2015. * [23] J. Pons, O. Slizovskaia, and R. Gong, “Timbre analysis of music audio signals with convolutional neural networks,” in _Proc. European Signal Processing Conf._, pp. 2744–2748, 2017. * [24] Y.-S. Huang, S.-Y. Chou, and Y.-H. Yang, “Pop music highlighter: Marking the emotion keypoints,” _Transactions of the International Society for Music Information Retrieval_, vol. 1, no. 1, pp. 68–78, 2018. * [25] S. Bai, J. Z. Kolter, and V. Koltun, “An empirical evaluation of generic convolutional and recurrent networks for sequence modeling,” _arXiv preprint:1803.01271_, 2018. * [26] H. Foroughmand and G. Peeters, “Deep-Rhythm for global tempo estimation in music,” in _Proc. Int. Soc. Music Information Retrieval Conf._, pp. 636–643, 2019. * [27] E. Zangerle, M. Vötter, R. Huber, and Y.-H. Yang, “Hit song prediction: Leveraging low- and high-level audio features,” in _Proc. Int. Soc. Music Information Retrieval Conf._, pp. 319–326, 2019. * [28] M. Won, S. Oramas, O. Nieto, F. Gouyon, and X. Serra, “Multimodal metric learning for tag-based music retrieval,” in _Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing_, pp. 591–595, 2021.
# Perturbation analysis of bounded homogeneous generalized inverses on Banach spaces

Jianbing Cao, Department of Mathematics, Henan Institute of Science and Technology, Xinxiang, Henan, 453003, P.R. China, and Department of Mathematics, East China Normal University, Shanghai 200241, P.R. China. Email: <EMAIL_ADDRESS>. Yifeng Xue, Department of Mathematics, East China Normal University, Shanghai 200241, P.R. China. Email: <EMAIL_ADDRESS>. Corresponding author.

###### Abstract Let $X,Y$ be Banach spaces and $T:X\to Y$ be a bounded linear operator. In this paper, we initiate the study of the perturbation problems for the bounded homogeneous generalized inverse $T^{h}$ and the quasi–linear projector generalized inverse $T^{H}$ of $T$. Some applications to the representations and perturbations of the Moore–Penrose metric generalized inverse $T^{M}$ of $T$ are also given. The results obtained in this paper extend some well–known results on generalized inverses of linear operators in this field.

2010 Mathematics Subject Classification: Primary 47A05; Secondary 46B20

Key words: homogeneous operator, stable perturbation, quasi–additivity, generalized inverse.

## 1 Introduction The expression and perturbation analysis of the generalized inverses (resp. the Moore–Penrose inverses) of bounded linear operators on Banach spaces (resp. Hilbert spaces) have been widely studied since Nashed’s book [18] was published in 1976. Ten years ago, in [8], Chen and Xue proposed the notion of the stable perturbation of a bounded operator in place of the rank–preserving perturbation of a matrix. Using this new notion, they established the perturbation analyses for the Moore–Penrose inverse and the least squares problem on Hilbert spaces in [6, 9, 26]. Meanwhile, Castro–González and Koliha established the perturbation analysis for the Drazin inverse by using the gap function in [4, 5, 14]. Later, some of their results were generalized by Chen and Xue in [27, 28] in terms of stable perturbation. Throughout this paper, $X,Y$ are always Banach spaces over the real field $\mathbb{R}$ and $B(X,Y)$ is the Banach space consisting of bounded linear operators from $X$ to $Y$. For $T\in B(X,Y)$, let $\mathcal{N}(T)$ (resp. $\mathcal{R}(T)$) denote the null space (resp. range) of $T$. It is well–known that if $\mathcal{N}(T)$ and $\mathcal{R}(T)$ are topologically complemented in the spaces $X$ and $Y$, respectively, then there exists a (projector) generalized inverse $T^{+}\in B(Y,X)$ of $T$ such that $TT^{+}T=T,\quad T^{+}TT^{+}=T^{+},\quad T^{+}T=I_{X}-P_{\mathcal{N}(T)},\quad TT^{+}=Q_{\mathcal{R}(T)},$ where $P_{\mathcal{N}(T)}$ and $Q_{\mathcal{R}(T)}$ are the bounded linear projectors from $X$ and $Y$ onto $\mathcal{N}(T)$ and $\mathcal{R}(T)$, respectively (cf. [6, 18, 25]). But, in general, not every closed subspace in a Banach space is complemented. Thus the linear generalized inverse $T^{+}$ of $T$ may not exist. In this case, we may seek other types of generalized inverses for $T$. Motivated by the ideas of linear generalized inverses and metric generalized inverses (cf. [18, 20]), and by using the so–called homogeneous (resp. quasi–linear) projectors in Banach spaces, Wang and Li defined the homogeneous (resp. quasi–linear) generalized inverses in [22]. Further studies of these types of generalized inverses in Banach spaces were then given in [1, 17].
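To fix ideas, here is a minimal finite-dimensional illustration of the four relations displayed above (our own toy example, not taken from the cited references). On $X=Y=\mathbb{R}^{2}$ with the Euclidean norm, take $T=T^{+}=\mathrm{diag}(1,0)$, $P_{\mathcal{N}(T)}=\mathrm{diag}(0,1)$ and $Q_{\mathcal{R}(T)}=\mathrm{diag}(1,0)$; then $TT^{+}T=T$, $T^{+}TT^{+}=T^{+}$, $T^{+}T=I_{X}-P_{\mathcal{N}(T)}$ and $TT^{+}=Q_{\mathcal{R}(T)}$ can be checked entry by entry. In $\mathbb{R}^{2}$ every closed subspace is complemented, which is precisely the property that may fail in a general Banach space and which motivates the homogeneous and quasi–linear generalized inverses studied below.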
More importantly, from the results in [17, 20], we know that, in some reflexive Banach spaces $X$ and $Y$, an operator $T\in B(X,Y)$ may have a bounded quasi–linear (projector) generalized inverse which is, in general, neither a linear nor a metric generalized inverse of $T$. So, from this point of view, it is important and necessary to study homogeneous and quasi–linear (projector) generalized inverses in Banach spaces. Since homogeneous (or quasi–linear) projectors in Banach spaces are no longer linear, the linear projector generalized inverse and the homogeneous (or quasi–linear) projector generalized inverse of a linear operator in Banach spaces are quite different. Motivated by the new perturbation results on generalized inverses of closed linear operators [12], in this paper we initiate the study of the following problems for bounded homogeneous (resp. quasi–linear projector) generalized inverses: given $T\in B(X,Y)$ with a bounded homogeneous (resp. quasi–linear projector) generalized inverse $T^{h}$ (resp. $T^{H}$), what conditions on a small perturbation $\delta T$ guarantee that the bounded homogeneous (resp. quasi–linear projector) generalized inverse $\bar{T}^{h}$ (resp. $\bar{T}^{H}$) of the perturbed operator $\bar{T}=T+\delta T$ exists? Furthermore, if it exists, when does $\bar{T}^{h}$ (resp. $\bar{T}^{H}$) have the simplest expression $(I_{X}+T^{h}\delta T)^{-1}T^{h}$ (resp. $(I_{X}+T^{H}\delta T)^{-1}T^{H}$)? Using the concept of quasi–additivity and the notion of stable perturbation from [8], we present some perturbation results on homogeneous generalized inverses and quasi–linear projector generalized inverses in Banach spaces. Explicit representations and perturbation results for the Moore–Penrose metric generalized inverse of the perturbed operator are also given. ## 2 Preliminaries Let $T\in B(X,Y)\backslash\\{0\\}$. The reduced minimum modulus $\gamma(T)$ of $T$ is given by $\gamma(T)=\inf\\{\|Tx\|\,|\,x\in X,\mathrm{dist}(x,\mathcal{N}(T))=1\\},$ (2.1) where $\mathrm{dist}(x,\mathcal{N}(T))=\inf\\{\|x-z\|\,|\,z\in\mathcal{N}(T)\\}$. It is well–known that $\mathcal{R}(T)$ is closed in $Y$ iff $\gamma(T)>0$ (cf. [16, 28]). From (2.1), we obtain the following useful inequality: $\|Tx\|\geq\gamma(T)\,\mathrm{dist}(x,\mathcal{N}(T)),\quad\forall\,x\in X.$ Recall from [1, 23] that a subset $D$ of $X$ is said to be homogeneous if $\lambda\,x\in D$ whenever $x\in D$ and $\lambda\in\mathbb{R}$; a mapping $T\colon X\rightarrow Y$ is said to be a bounded homogeneous operator if $T$ maps every bounded set in $X$ into a bounded set in $Y$ and $T(\lambda\,x)=\lambda\,T(x)$ for every $x\in X$ and every $\lambda\in\mathbb{R}$. Let $H(X,Y)$ denote the set of all bounded homogeneous operators from $X$ to $Y$. Equipped with the usual linear operations and the norm defined by $\|T\|=\sup\\{\|Tx\|\,|\,\|x\|=1,x\in X\\}$ for $T\in H(X,Y)$, one can easily check that $(H(X,Y),\|\cdot\|)$ is a Banach space (cf. [20, 23]). ###### Definition 2.1. Let $M$ be a subset of $X$ and $T\colon X\rightarrow Y$ be a mapping. We say that $T$ is quasi–additive on $M$ if $T$ satisfies $T(x+z)=T(x)+T(z),\qquad\forall\;x\in X,\;\forall\;z\in M.$ Now we give the concept of a quasi–linear projector in Banach spaces. ###### Definition 2.2 (cf. [17, 20]). Let $P\in H(X,X)$. If $P^{2}=P$, we call $P$ a homogeneous projector.
In addition, if $P$ is also quasi–additive on $\mathcal{R}(P)$, i.e., for any $x\in X$ and any $z\in\mathcal{R}(P)$, $P(x+z)=P(x)+P(z)=P(x)+z,$ then we call $P$ a quasi–linear projector. Clearly, from Definition 2.2, we see that bounded linear projectors and orthogonal projectors in Hilbert spaces are all quasi–linear projectors. Let $P\in H(X,X)$ be a quasi–linear projector. Then by [17, Lemma 2.5], $\mathcal{R}(P)$ is a closed linear subspace of $X$ and $\mathcal{R}(I-P)=\mathcal{N}(P)$. Thus, we can define the quasi–linear complement of a closed linear subspace as follows. Let $V$ be a closed subspace of $X$. If there exists a bounded quasi–linear projector $P$ on $X$ such that $V=\mathcal{R}(P)$, then $V$ is said to be bounded quasi–linearly complemented in $X$ and $\mathcal{N}(P)$ is the bounded quasi–linear complement of $V$ in $X$. In this case, as usual, we may write $X=V\dotplus\mathcal{N}(P)$, where $\mathcal{N}(P)$ is a homogeneous subset of $X$ and “$\dotplus$” means that $V\cap\mathcal{N}(P)=\\{0\\}$ and $X=V+\mathcal{N}(P)$. ###### Definition 2.3. Let $T\in B(X,Y)$. If there is $T^{h}\in H(Y,X)$ such that $TT^{h}T=T,\quad T^{h}TT^{h}=T^{h},$ then we call $T^{h}$ a bounded homogeneous generalized inverse of $T$. Furthermore, if $T^{h}$ is also quasi–additive on $\mathcal{R}(T)$, i.e., for any $y\in Y$ and any $z\in\mathcal{R}(T)$, we have $T^{h}(y+z)=T^{h}(y)+T^{h}(z),$ then we call $T^{h}$ a bounded quasi–linear generalized inverse of $T$. Obviously, the concept of a bounded homogeneous (or quasi-linear) generalized inverse is a generalization of the bounded linear generalized inverse. Definition 2.3 was first given in [1] for linear transformations and bounded linear operators. The existence of a homogeneous generalized inverse of $T\in B(X,Y)$ was also established in [1]. In the following, we give a new proof of the existence of a homogeneous generalized inverse of a bounded linear operator. ###### Proposition 2.4. Let $T\in B(X,Y)\backslash\\{0\\}$. Then $T$ has a homogeneous generalized inverse $T^{h}\in H(Y,X)$ iff $\mathcal{R}(T)$ is closed and there exist a bounded quasi–linear projector $P_{\mathcal{N}(T)}\colon X\to\mathcal{N}(T)$ and a bounded homogeneous projector $Q_{\mathcal{R}(T)}:Y\to\mathcal{R}(T)$. ###### Proof. Suppose that there is $T^{h}\in H(Y,X)$ such that $TT^{h}T=T$ and $T^{h}TT^{h}=T^{h}$. Put $P_{\mathcal{N}(T)}=I_{X}-T^{h}T$ and $Q_{\mathcal{R}(T)}=TT^{h}$. Then $P_{\mathcal{N}(T)}\in H(X,X)$, $Q_{\mathcal{R}(T)}\in H(Y,Y)$ and $\displaystyle P_{\mathcal{N}(T)}^{2}$ $\displaystyle=(I_{X}-T^{h}T)(I_{X}-T^{h}T)=I_{X}-T^{h}T-T^{h}T(I_{X}-T^{h}T)=P_{\mathcal{N}(T)},$ $\displaystyle Q_{\mathcal{R}(T)}^{2}$ $\displaystyle=TT^{h}TT^{h}=TT^{h}=Q_{\mathcal{R}(T)}.$ From $TT^{h}T=T$ and $T^{h}TT^{h}=T^{h}$, we get that $\mathcal{N}(T)=\mathcal{R}(P_{\mathcal{N}(T)})$ and $\mathcal{R}(T)=\mathcal{R}(Q_{\mathcal{R}(T)})$. Since for any $x\in X$ and any $z\in\mathcal{N}(T)$, $\displaystyle P_{\mathcal{N}(T)}(x+z)$ $\displaystyle=x+z-T^{h}T(x+z)=x+z-T^{h}Tx$ $\displaystyle=P_{\mathcal{N}(T)}x+z=P_{\mathcal{N}(T)}x+P_{\mathcal{N}(T)}z,$ it follows that $P_{\mathcal{N}(T)}$ is quasi–linear. Clearly, $Q_{\mathcal{R}(T)}:Y\to\mathcal{R}(T)$ is a bounded homogeneous projector. Now for any $x\in X$, $\mathrm{dist}(x,\mathcal{N}(T))\leq\|x-P_{\mathcal{N}(T)}x\|=\|T^{h}Tx\|\leq\|T^{h}\|\|Tx\|.$ Thus, $\gamma(T)\geq\dfrac{1}{\|T^{h}\|}>0$ and hence $\mathcal{R}(T)$ is closed in $Y$. 
Conversely, for $x\in X$, let $[x]$ stand for equivalence class of $x$ in $X/\mathcal{N}(T)$. Define mappings $\phi\colon\mathcal{R}(I-P_{\mathcal{N}(T)})\rightarrow X/\mathcal{N}(T)$ and $\hat{T}\colon X/\mathcal{N}(T)\rightarrow\mathcal{R}(T)$ respectively, by $\phi(x)=[x],\quad\forall\,x\in\mathcal{R}(I-P_{\mathcal{N}(T)})\ \text{and}\ \hat{T}([z])=Tz,\quad\forall\,z\in X.$ Clearly, $\hat{T}$ is bijective. Noting that the quotient space $X/\mathcal{N}(T)$ with the norm $\|[x]\|=\mathrm{dist}(x,\mathcal{N}(T))$, $\forall\,x\in X$, is a Banach space (cf. [25]) and $\|Tx\|\geq\gamma(T)\,\mathrm{dist}(x,\mathcal{N}(T))$ with $\gamma(T)>0$, $\forall\,x\in X$, we have $\|\hat{T}[x]\|\geq\gamma(T)\|[x]\|$, $\forall\,x\in X$. Therefore, $\|\hat{T}^{-1}y\|\leq\dfrac{1}{\gamma(T)}\|y\|$, $\forall\,y\in\mathcal{R}(T)$. Since $P_{\mathcal{N}(T)}$ is a quasi–linear projector, it follows that $\phi$ is bijective and $\phi^{-1}([x])=(I-P_{\mathcal{N}(T)})x$, $\forall\,x\in X$. Obviously, $\phi^{-1}$ is homogeneous and for any $z\in\mathcal{N}(T)$, $\|\phi^{-1}([x])\|=\|(I-P_{\mathcal{N}(T)})(x-z)\|\leq(1+\|P_{\mathcal{N}(T)}\|)\|x-z\|$ which implies that $\|\phi^{-1}\|\leq 1+\|P_{\mathcal{N}(T)}\|$. Put $T_{0}=\hat{T}\circ\phi\colon\mathcal{R}(I-P_{\mathcal{N}(T)})\rightarrow\mathcal{R}(T)$. Then $T_{0}^{-1}=\phi^{-1}\circ\hat{T}^{-1}\colon\mathcal{R}(T)\rightarrow\mathcal{R}(I-P_{\mathcal{N}(T)})$ is homogeneous and bounded with $\|T_{0}^{-1}\|\leq\gamma(T)^{-1}(1+\|P_{\mathcal{N}(T)}\|)$. Set $T^{h}=(I-P_{\mathcal{N}(T)})T_{0}^{-1}Q_{\mathcal{R}(T)}$. Then $T^{h}\in H(Y,X)$ and $TT^{h}T=T,\ T^{h}TT^{h}=T^{h},\ TT^{h}=Q_{\mathcal{R}(T)},\ T^{h}T=I_{X}-P_{\mathcal{N}(T)}.$ This finishes the proof. ∎ Recall that a closed subspace $V$ in $X$ is Chebyshev if for any $x\in X$, there is a unique $x_{0}\in V$ such that $\|x-x_{0}\|=\mathrm{dist}(x,V)$. Thus, for the closed Chebyshev space $V$, we can define a mapping $\pi_{V}\colon X\rightarrow V$ by $\pi_{V}(x)=x_{0}$. $\pi_{V}$ is called to be the metric projector from $X$ onto $V$. From [20], we know that $\pi_{V}$ is a quasi–linear projector with $\|\pi_{V}\|\leq 2$. Then by Proposition 2.4, we have ###### Corollary 2.5 ([19, 20]). Let $T\in B(X,Y)\backslash\\{0\\}$ with $\mathcal{R}(T)$ closed. Assume that $\mathcal{N}(T)$ and $\mathcal{R}(T)$ are Chebyshev subspaces in $X$ and $Y$, respectively. Then there is $T^{h}\in H(Y,X)$ such that $TT^{h}T=T,\ T^{h}TT^{h}=T^{h},\ TT^{h}=\pi_{\mathcal{R}(T)},\ T^{h}T=I_{X}-\pi_{\mathcal{N}(T)}.$ (2.2) The bounded homogeneous generalized inverse $T^{h}$ in (2.2) is called to be the Moore–Penrose metric generalized inverse of $T$. Such $T^{h}$ in (2.2) is unique and is denoted by $T^{M}$ (cf. [20]). ###### Corollary 2.6. Let $T\in B(X,Y)\backslash\\{0\\}$ such that the bounded homogeneous generalized inverse $T^{h}$ exists. Assume that $\mathcal{N}(T)$ and $\mathcal{R}(T)$ are Chebyshev subspaces in $X$ and $Y$, respectively. Then $T^{M}=(I_{X}-\pi_{\mathcal{N}(T)})T^{h}\pi_{\mathcal{R}(T)}$. ###### Proof. Since $\mathcal{N}(T)$ and $\mathcal{R}(T)$ are Chebyshev subspaces, it follows from Corollary 2.5 that $T$ has the unique Moore–Penrose metric generalized inverse $T^{M}$ which satisfy $TT^{M}T=T,\ T^{M}TT^{M}=T^{M},\ TT^{M}=\pi_{\mathcal{R}(T)},\ T^{M}T=I_{X}-\pi_{\mathcal{N}(T)}.$ Set $T^{\natural}=(I_{X}-\pi_{\mathcal{N}(T)})T^{h}\pi_{\mathcal{R}(T)}$. 
Then $T^{\natural}=T^{M}TT^{h}TT^{M}=T^{M}TT^{M}=T^{M}.$ ∎ ## 3 Perturbations for bounded homogeneous generalized inverse In this section, we extend some perturbation results of linear generalized inverses to bounded homogeneous generalized inverses. We start our investigation with some lemmas, which are prepared for the proof of our main results. The following result is well–known for bounded linear operators, we generalize it to the bounded homogeneous operators in the following form. ###### Lemma 3.1. Let $T\in H(X,Y)$ and $S\in H(Y,X)$ such that $T$ is quasi–additive on $\mathcal{R}(S)$ and $S$ is quasi–additive on $\mathcal{R}(T)$, then $I_{Y}+TS$ is invertible in $H(Y,Y)$ if and only if $I_{X}+ST$ is invertible in $H(X,X)$. ###### Proof. If there is a $\Phi\in H(Y,Y)$ be such that $(I_{Y}+TS)\Phi=\Phi(I_{Y}+TS)=I_{Y}$, then $\displaystyle I_{X}$ $\displaystyle=I_{X}+ST-ST=I_{X}+ST-S((I_{Y}+TS)\Phi)T$ $\displaystyle=I_{X}+ST-((S+STS)\Phi)T\quad(S\ \text{quasi-additive on}\ \mathcal{R}(T))$ $\displaystyle=I_{X}+ST-((I_{X}+ST)S\Phi)T$ $\displaystyle=(I_{X}+ST)(1_{X}-S\Phi T)\quad(T\ \text{quasi--additive on}\ \mathcal{R}(S)).$ Similarly, we also have $I_{X}=(I_{X}-S\Phi T)(I_{X}+ST)$. Thus, $I_{X}+ST$ is invertible on $X$ with $(I_{X}+ST)^{-1}=(1_{X}-S\Phi T)\in H(X,X)$. The converse can also be proved by using the same way as above. ∎ ###### Lemma 3.2. Let $T\in B(X,Y)$ such that $T^{h}\in H(Y,X)$ exists and let $\delta T\in B(X,Y)$ such that $T^{h}$ is quasi–additive on $\mathcal{R}(\delta T)$ and $(I_{X}+T^{h}\delta T)$ is invertible in $B(X,X)$. Then $I_{Y}+\delta TT^{h}:Y\rightarrow Y$ is invertible in $H(Y,Y)$ and $\Phi=T^{h}(I_{Y}+\delta TT^{h})^{-1}=(I_{X}+T^{h}\delta T)^{-1}T^{h}$ (3.1) is a bounded homogeneous operator with $\mathcal{R}(\Phi)=\mathcal{R}(T^{h})$ and $\mathcal{N}(\Phi)=\mathcal{N}(T^{h})$. ###### Proof. By Lemma 3.1, $I_{Y}+\delta TT^{h}:Y\rightarrow Y$ is invertible in $H(Y,Y)$. Clearly, $I_{X}+T^{h}\delta T$ is linear bounded operator and $I_{Y}+\delta TT^{h}\in H(Y,Y)$. From the equation $(I_{X}+T^{h}\delta T)T^{h}=T^{h}(I_{Y}+\delta TT^{h})$ and $T^{h}\in H(Y,X)$, we get that $\Phi$ is a bounded homogeneous operator. Finally, from (3.1), we can obtain that $\mathcal{R}(\Phi)=\mathcal{R}(T^{h})$ and $\mathcal{N}(\Phi)=\mathcal{N}(T^{h})$. ∎ Recall from [8] that for $T\in B(X,Y)$ with bounded linear generalized inverse $T^{+}\in B(Y,X)$, we say that $\bar{T}=T+\delta T\in B(X,Y)$ is a stable perturbation of $T$ if $\mathcal{R}(\bar{T})\cap\mathcal{N}(T^{+})=\\{0\\}$. Now for $T\in B(X,Y)$ with $T^{h}\in H(Y,X)$, we also say that $\bar{T}=T+\delta T\in B(X,Y)$ is a stable perturbation of $T$ if $\mathcal{R}(\bar{T})\cap\mathcal{N}(T^{h})=\\{0\\}$. ###### Lemma 3.3. Let $T\in B(X,Y)$ such that $T^{h}\in H(Y,X)$ exists. Suppose that $\delta T\in B(X,Y)$ such that $T^{h}$ is quasi–additive on $\mathcal{R}(\delta T)$ and $I_{X}+T^{h}\delta T$ is invertible in $B(X,X)$ Put $\bar{T}=T+\delta T$. If $\mathcal{R}(\bar{T})\cap\mathcal{N}(T^{h})=\\{0\\}$, then $\mathcal{N}(\bar{T})=(I_{X}+T^{h}\delta T)^{-1}\mathcal{N}(T)\ \text{and}\ \mathcal{R}(\bar{T})=(I_{Y}+\delta TT^{h})\mathcal{R}(T).$ ###### Proof. Set $P=(I_{X}+T^{h}\delta T)^{-1}(I_{X}-T^{h}T)$. We first show that $P^{2}=P$ and $\mathcal{R}(P)=\mathcal{N}(\bar{T})$. 
Since $T^{h}TT^{h}=T^{h}$, we get $(I_{X}-T^{h}T)T^{h}\delta T=0$ and then $\displaystyle(I_{X}-T^{h}T)(I_{X}+T^{h}\delta T)=I_{X}-T^{h}T$ (3.2) and so that $\displaystyle I_{X}-T^{h}T=(I_{X}-T^{h}T)(I_{X}+T^{h}\delta T)^{-1}.$ (3.3) Now, by using (3.2) and (3.3), it is easy to get $P^{2}=P$. Since $T^{h}$ is quasi–additive on $\mathcal{R}(\delta T)$, we see $I_{X}-T^{h}T=(I_{X}+T^{h}\delta T)-T^{h}\bar{T}$. Then for any $x\in X$, we have $\displaystyle Px$ $\displaystyle=(I_{X}+T^{h}\delta T)^{-1}(I_{X}-T^{h}T)x$ $\displaystyle=(I_{X}+T^{h}\delta T)^{-1}[(I_{X}+T^{h}\delta T)-T^{h}\bar{T}]x$ $\displaystyle=x-(I_{X}+T^{h}\delta T)^{-1}T^{h}\bar{T}x.$ (3.4) From (3), we get that if $x\in\mathcal{N}(\bar{T})$, then $x\in\mathcal{R}(P)$. Thus, $\mathcal{N}(\bar{T})\subset\mathcal{R}(P)$. Conversely, let $z\in\mathcal{R}(P)$, then $z=Pz$. From (3), we get $(I_{X}+T^{h}\delta T)^{-1}T^{h}\bar{T}x=0$. Therefore, we have $\bar{T}x\in\mathcal{R}(\bar{T})\cap\mathcal{N}(T^{h})=\\{0\\}$. Thus, $x\in\mathcal{N}(\bar{T})$ and then $\mathcal{R}(P)=\mathcal{N}(\bar{T})$. From the Definition of $T^{h}$, we have $\mathcal{N}(T)=\mathcal{R}(I_{X}-T^{h}T)$. Thus, $(I_{X}+T^{h}\delta T)^{-1}\mathcal{N}(T)=(I_{X}+T^{h}\delta T)^{-1}\mathcal{R}(I_{X}-T^{h}T)=\mathcal{R}(P)=\mathcal{N}(\bar{T}).$ Now, we prove that $\mathcal{R}(\bar{T})=(I_{Y}+\delta TT^{h})\mathcal{R}(T)$. From $(I_{Y}+\delta TT^{h})T=\bar{T}T^{h}T$, we get that $(I_{Y}+\delta TT^{h})\mathcal{R}(T)\subset\mathcal{R}(\bar{T})$. On the other hand, since $T^{h}$ is quasi–additive on $\mathcal{R}(\delta T)$ and $\mathcal{R}(P)=\mathcal{N}(\bar{T})$, we have for any $x\in X$, $\displaystyle 0$ $\displaystyle=\bar{T}Px=\bar{T}(I_{X}+T^{h}\delta T)^{-1}(I_{X}-T^{h}T)x$ $\displaystyle=\bar{T}x-\bar{T}(I_{X}+T^{h}\delta T)^{-1}(T^{h}\delta Tx+T^{h}{T}x)$ $\displaystyle=\bar{T}x-\bar{T}(I_{X}+T^{h}\delta T)^{-1}T^{h}\bar{T}x=\bar{T}x-\bar{T}T^{h}(I_{Y}+\delta TT^{h})^{-1}\bar{T}x$ $\displaystyle=\bar{T}x-(I_{Y}+\delta TT^{h}-I_{Y}+TT^{h})(I_{Y}+\delta TT^{h})^{-1}\bar{T}x$ $\displaystyle=(I_{Y}-TT^{h})(I_{Y}+\delta TT^{h})^{-1}\bar{T}x.$ (3.5) Since $\mathcal{N}(I_{Y}-TT^{h})=\mathcal{R}(T)$, it follows (3) that $(I_{Y}+\delta TT^{h})^{-1}\mathcal{R}(\bar{T})\subset\mathcal{R}(T)$, that is, $\mathcal{R}(\bar{T})\subset(I_{Y}+\delta TT^{h})\mathcal{R}(T)$. Consequently, $\mathcal{R}(\bar{T})=(I_{Y}+\delta TT^{h})\mathcal{R}(T)$. ∎ Now we can present the main perturbation result for bounded homogeneous generalized inverse on Banach spaces. ###### Theorem 3.4. Let $T\in B(X,Y)$ such that $T^{h}\in H(Y,X)$ exists. Suppose that $\delta T\in B(X,Y)$ such that $T^{h}$ is quasi–additive on $\mathcal{R}(\delta T)$ and $I_{X}+T^{h}\delta T$ is invertible in $B(X,X)$. Put $\bar{T}=T+\delta T$. Then the following statements are equivalent: 1. $(1)$ $\Phi=T^{h}(I_{Y}+\delta TT^{h})^{-1}$ is a bounded homogeneous generalized inverse of $\bar{T}$; 2. $(2)$ $\mathcal{R}(\bar{T})\cap\mathcal{N}(T^{h})=\\{0\\}$; 3. $(3)$ $\mathcal{R}(\bar{T})=(I_{Y}+\delta TT^{h})\mathcal{R}(T)$; 4. $(4)$ $(I_{Y}+T^{h}\delta T)\mathcal{N}(\bar{T})=\mathcal{N}(T)$; 5. $(5)$ $(I_{Y}+\delta TT^{h})^{-1}\bar{T}\mathcal{N}(T)\subset\mathcal{R}(T)$; ###### Proof. We prove our theorem by showing that $(3)\Rightarrow(5)\Rightarrow(4)\Rightarrow(2)\Rightarrow(3)\Rightarrow(1)\Rightarrow(3).$ $(3)\Rightarrow(5)$ This is obvious since $(I+\delta TT^{h})$ is invertible and $\mathcal{N}(T)\subset X$. $(5)\Rightarrow(4)$. 
Let $x\in\mathcal{N}(\bar{T})$, then we see $(I_{X}+T^{h}\delta T)x=x-T^{h}Tx\in\mathcal{N}(T)$. Hence $(I_{X}+T^{h}\delta T)\mathcal{N}(\bar{T})\subset\mathcal{N}(T)$. Now for any $x\in\mathcal{N}(T)$, then by (5), there exists some $z\in X$ such that $\bar{T}x=(I_{Y}+\delta TT^{h})Tz=\bar{T}T^{h}Tz.$ So $x-T^{h}Tz\in\mathcal{N}(\bar{T})$ and hence $(I_{X}+T^{h}\delta T)(x-T^{h}Tz)=(I_{X}-T^{h}T)(x-T^{h}Tz)=x.$ Consequently, $(I_{X}+T^{h}\delta T)\mathcal{N}(\bar{T})=\mathcal{N}(T)$. $(4)\Rightarrow(2)$. Let $y\in R(\overline{T})\cap N(T^{h})$, then there exists an $x\in X$ such that $y=\bar{T}x$ and $T^{h}\bar{T}x=0$. We can check that $T(I_{X}+T^{h}\delta T)x=Tx+TT^{h}\delta Tx=Tx+TT^{h}\bar{T}x-TT^{h}Tx=0.$ Thus, $(I_{X}+T^{h}\delta T)x\in\mathcal{N}(T)$. By (4), $x\in\mathcal{N}(\bar{T})$ and so that $y=\bar{T}x=0$. $(2)\Rightarrow(3)$ follows from Lemma 3.3. $(3)\Rightarrow(1)$ Noting that by Lemma 3.2, we have $\Phi=T^{h}(I_{Y}+\delta TT^{h})^{-1}=(I_{X}+T^{h}\delta T)^{-1}T^{h}$ is a bounded homogeneous operator with $\mathcal{R}(\Phi)=\mathcal{R}(T^{h})$ and $\mathcal{N}(\Phi)=\mathcal{N}(T^{h})$. We need to prove that $\Phi\bar{T}\Phi=\Phi$ and $\bar{T}\Phi\bar{T}=\bar{T}$. Since $T^{h}$ is quasi–additive on $\mathcal{R}(\delta T)$, we have $T^{h}\bar{T}=T^{h}T+T^{h}\delta T$. Therefore, $\displaystyle\Phi\bar{T}\Phi$ $\displaystyle=(I_{Y}+T^{h}\delta T)^{-1}T^{h}\bar{T}(I_{Y}+T^{h}\delta T)^{-1}T^{h}$ $\displaystyle=(I_{Y}+T^{h}\delta T)^{-1}T^{h}[(I_{X}+T^{h}\delta T)-(I_{X}-T^{h}T)](I_{X}+T^{h}\delta T)^{-1}T^{h}$ $\displaystyle=(I_{X}+T^{h}\delta T)^{-1}T^{h}-(I_{X}+T^{h}\delta T)^{-1}(I_{X}-T^{h}T)(I_{X}+T^{h}\delta T)^{-1}T^{h}$ $\displaystyle=(I_{X}+T^{h}\delta T)^{-1}T^{h}-(I_{X}+T^{h}\delta T)^{-1}(I_{X}-T^{h}T)T^{h}(I_{X}+\delta TT^{h})^{-1}$ $\displaystyle=\Phi.$ $\mathcal{R}(\bar{T})=(I_{Y}+\delta TT^{h})\mathcal{R}(T)$ means that $(I_{Y}-TT^{h})(I_{Y}+\delta TT^{h})^{-1}\bar{T}=0$. So $\displaystyle\bar{T}\Phi\bar{T}$ $\displaystyle=(T+\delta T)T^{h}(I_{Y}+T^{h}\delta T)^{-1}\bar{T}$ $\displaystyle=(I_{Y}+\delta TT^{h}+TT^{h}-I_{Y})(I_{Y}+T^{h}\delta T)^{-1}\bar{T}$ $\displaystyle=\bar{T}.$ $(1)\Rightarrow(3)$ From $\bar{T}\Phi\bar{T}=\bar{T}$, we have $(I_{Y}-TT^{h})(I_{Y}+\delta TT^{h})^{-1}\bar{T}=0$ by the proof of $(3)\Rightarrow(1)$. Thus, $(I_{Y}+\delta TT^{h})^{-1}\mathcal{R}(\bar{T})\subset\mathcal{R}(T)$. From $(I_{Y}+\delta TT^{h})T=\bar{T}T^{h}T$, we get that $(I_{Y}+\delta TT^{h})\mathcal{R}(T)\subset\mathcal{R}(\bar{T})$. So $(I_{Y}+\delta TT^{h})\mathcal{R}(T)=\mathcal{R}(\bar{T})$. ∎ ###### Corollary 3.5. Let $T\in B(X,Y)$ such that $T^{h}\in H(Y,X)$ exists. Suppose that $\delta T\in B(X,Y)$ such that $T^{h}$ is quasi–additive on $\mathcal{R}(\delta T)$ and $\|T^{h}\delta T\|<1$. Put $\bar{T}=T+\delta T$. If $\mathcal{N}(T)\subset\mathcal{N}(\delta T)$ or $\mathcal{R}(\delta T)\subset\mathcal{R}(T)$, then $\bar{T}$ has a homogeneous bounded generalized inverse $\bar{T}^{h}=T^{h}(I_{Y}+\delta TT^{h})^{-1}=(I_{X}+T^{h}\delta T)^{-1}T^{h}.$ ###### Proof. If $N(T)\subset N(\delta T)$, then $N(T)\subset N(\bar{T})$. So Condition (5) of Theorem 3.4 holds. If $\mathcal{R}(\delta T)\subset R(T)$, then $R(\bar{T})\subset\mathcal{R}(T)$. So $\mathcal{R}(\bar{T})\cap\mathcal{N}(T)\subset\mathcal{R}(T)\cap\mathcal{N}(T^{h})=\\{0\\}$ and consequently, $\bar{T}$ has the homogeneous bounded generalized inverse $T^{h}(I_{Y}+\delta TT^{h})^{-1}=(I_{X}+T^{h}\delta T)^{-1}T^{h}$ by Theorem 3.4. ∎ ###### Proposition 3.6. Let $T\in B(X,Y)$ with $\mathcal{R}(T)$ closed. 
Assume that $\mathcal{N}(T)$ and $\mathcal{R}(T)$ are Chebyshev subspaces in $X$ and $Y$, respectively. Let $\delta T\in B(X,Y)$ such that $T^{M}$ is quasi–additive on $\mathcal{R}(\delta T)$ and $\|T^{M}\delta T\|<1$. Put $\bar{T}=T+\delta T$. Suppose that $\mathcal{N}(\bar{T})$ and $\overline{\mathcal{R}(\bar{T})}$ are Chebyshev subspaces in $X$ and $Y$, respectively. If $\mathcal{R}(\bar{T})\cap\mathcal{N}(T^{M})=\\{0\\}$, then $\mathcal{R}(\bar{T})$ is closed in $Y$ and $\bar{T}$ has the Moore–Penrose metric generalized inverse $\bar{T}^{M}=(I_{X}-\pi_{\mathcal{N}(\bar{T})})(I_{X}+T^{M}\delta T)^{-1}T^{M}\pi_{\mathcal{R}(\bar{T})}$ with $\|\bar{T}^{M}\|\leq\dfrac{2\|T^{M}\|}{1-\|T^{M}\delta T\|}$. ###### Proof. $T^{M}$ exists by Corollary 2.5. Since $T^{M}\delta T$ is $\mathbb{R}$–linear and $\|T^{M}\delta T\|<1$, we have $I_{X}+T^{M}\delta T$ is invertible in $B(X,X)$. By Theorem 3.4 and Proposition 2.4, $\mathcal{R}(\bar{T})\cap\mathcal{N}(T^{M})=\\{0\\}$ implies that $\mathcal{R}(\bar{T})$ is closed and $\bar{T}$ has a bounded homogeneous generalized inverse $\bar{T}^{h}=(I_{X}+T^{M}\delta T)^{-1}T^{M}$. Then by Corollary 2.6, $\bar{T}^{M}$ has the form $\bar{T}^{M}=(I_{X}-\pi_{\mathcal{N}(\bar{T})})(I_{X}+T^{M}\delta T)^{-1}T^{M}\pi_{\mathcal{R}(\bar{T})}.$ Note that $\|x-\pi_{\mathcal{N}(\bar{T})}x\|=\mathrm{dist}(x,\mathcal{N}(\bar{T}))\leq\|x\|$, $\forall\,x\in X$. So $\|I_{X}-\pi_{\mathcal{N}(\bar{T})}\|\leq 1$. Therefore, $\|\bar{T}^{M}\|\leq\|I_{X}-\pi_{\mathcal{N}(\bar{T})}\|\|(I_{X}+T^{M}\delta T)^{-1}T^{M}\|\|\pi_{\mathcal{R}(\bar{T})}\|\leq\frac{2\|T^{M}\|}{1-\|T^{M}\delta T\|}.$ This completes the proof. ∎ ## 4 Perturbation for quasi–linear projector generalized inverse We have known that the range of a bounded qausi–linear projector on a Banach space is closed(see [17, Lemma 2.5]). Thus, from Definition 2.3 and the proof of Proposition 2.4, the following result is obvious. ###### Proposition 4.1. Let $T\in B(X,Y)\backslash\\{0\\}$. Then $T$ has a bounded quasi–linear generalized inverse $T^{h}\in H(Y,X)$ iff there exist a bounded linear projector $P_{\mathcal{N}(T)}\colon X\to\mathcal{N}(T)$ and a bounded quasi–linear projector $Q_{\mathcal{R}(T)}:Y\to\mathcal{R}(T)$. Motivated by related results in papers [1, 17, 22] and the definition of the oblique projection generalized inverses in Banach space(see [18, 25]), based on Proposition 4.1, we can give the following definition of quasi–linear projector generalized inverse of a bounded linear operator on Banach space. ###### Definition 4.2. Let $T\in B(X,Y)$. Let $T^{H}\in H(Y,X)$ be a bounded homogeneous operator. If there exist a bounded linear projector $P_{\mathcal{N}(T)}$ from $X$ onto $\mathcal{N}(T)$ and a bounded quasi–linear projector $Q_{\mathcal{R}(T)}$ from $Y$ onto $\mathcal{R}(T)$, respectively, such that $\displaystyle(1)\,TT^{H}T=T;\quad(2)\,T^{H}TT^{H}=T^{H};\quad(3)\,T^{H}T=I_{X}-P_{\mathcal{N}(T)};\quad(4)\,TT^{H}=Q_{\mathcal{R}(T)}.$ Then $T^{H}$ is called a quasi–linear projector generalized inverse of $T$. For $T\in B(X,Y)$, if $T^{H}$ exists, then from Proposition 4.1 and Definition 2.3, we see that $\mathcal{R}(T)$ is closed and $T^{H}$ is quasi–additive on $\mathcal{R}(T)$, in this case, we may call $T^{H}$ is a quasi–linear operator. Choose $\delta T\in B(X,Y)$ such that $T^{H}$ is also quasi–additive on $\mathcal{R}(\delta T)$, then $I_{X}+T^{H}\delta T$ is a bounded linear operator and $I_{Y}+\delta TT^{H}$ is a bounded linear operator on $\mathcal{R}(\bar{T})$. ###### Lemma 4.3. 
Let $T\in B(X,Y)$ such that $T^{H}$ exists and let $\delta T\in B(X,Y)$ such that $T^{H}$ is quasi–additive on $\mathcal{R}(\delta T)$. Put $\bar{T}=T+\delta T$. Assumes that $X=\mathcal{N}(\bar{T})\dotplus\mathcal{R}(T^{H})$ and $Y=\mathcal{R}(\bar{T})\dotplus\mathcal{N}(T^{H})$. Then 1. $(1)$ $I_{X}+T^{H}\delta T:X\rightarrow X$ is a invertible bounded linear operator; 2. $(2)$ $I_{Y}+\delta TT^{H}:Y\rightarrow Y$ is a invertible quasi–linear operator; 3. $(3)$ $\Upsilon=T^{H}(I_{Y}+\delta TT^{H})^{-1}=(I_{X}+T^{H}\delta T)^{-1}T^{H}$ is a bounded homogeneous operator. ###### Proof. Since $I_{X}+T^{H}\delta T\in B(X,X)$, we only need to show that $\mathcal{N}(I_{X}+T^{H}\delta T)=\\{0\\}$ and $\mathcal{R}(I_{X}+T^{H}\delta T)=X$ under the assumptions. We first show that $\mathcal{N}(I_{X}+T^{H}\delta T)=\\{0\\}$. Let $x\in\mathcal{N}(I_{X}+T^{H}\delta T)$, then $(I_{X}+T^{H}\delta T)x=(I_{X}-T^{H}T)x+T^{H}\bar{T}x=0$ since $T^{H}$ is quasi–linear. Thus $(I_{X}-T^{H}T)x=0=T^{H}\bar{T}x$ and hence $\bar{T}x\in\mathcal{R}(\bar{T})\cap\mathcal{N}(T^{H})$. Noting that $Y=\mathcal{R}(\bar{T})\dotplus\mathcal{N}(T^{H})$, we have $\bar{T}x=0$ and hence $x\in\mathcal{R}(T^{H})\cap\mathcal{N}(\bar{T})$. From $X=\mathcal{N}(\bar{T})\dotplus\mathcal{R}(T^{H})$, we get that $x=0$. Now, we prove that $\mathcal{R}(I_{X}+T^{H}\delta T)=X$. Let $x\in X$ and put $x_{1}=(I_{X}-T^{H}T)x$, $x_{2}=T^{H}Tx$. Since $Y=\mathcal{R}(\bar{T})\dotplus\mathcal{N}(T^{H})$, we have $\mathcal{R}(T^{H})=T^{H}\mathcal{R}(\bar{T})$. Therefore, from $X=\mathcal{N}(\bar{T})\dotplus\mathcal{R}(T^{H})$, we get that $\mathcal{R}(T^{H})=T^{H}\mathcal{R}(\bar{T})=T^{H}\bar{T}\mathcal{R}(T^{H})$. Consequently, there is $z\in Y$ such that $T^{H}(Tx_{2}-\bar{T}x_{1})=T^{H}\bar{T}T^{H}z$. Set $y=x_{1}+T^{H}z\in X$. Noting that $T^{H}$ is quasi–additive on $\mathcal{R}(T)$ and $\mathcal{R}(\delta T)$, respectively. we have $\displaystyle(I_{X}+T^{H}\delta T)y$ $\displaystyle=(I_{X}-T^{H}T+T^{H}\bar{T})(x_{1}+T^{H}z)$ $\displaystyle=x_{1}+T^{H}\bar{T}x_{1}+T^{H}\bar{T}T^{H}z$ $\displaystyle=x_{1}+T^{H}\bar{T}x_{1}+T^{H}(Tx_{2}-\bar{T}x_{1})$ $\displaystyle=x.$ Therefore, $X=\mathcal{R}(I_{Y}+T^{H}\delta T)$. Similar to Lemma 3.2, we have $\Upsilon=T^{H}(I_{Y}+\delta TT^{H})^{-1}=(I_{X}+T^{H}\delta T)^{-1}T^{H}$ is a bounded homogeneous operator. ∎ ###### Theorem 4.4. Let $T\in B(X,Y)$ such that $T^{H}$ exists and let $\delta T\in B(X,Y)$ such that $T^{H}$ is quasi–additive on $\mathcal{R}(\delta T)$. Put $\bar{T}=T+\delta T$. Then the following statements are equivalent: 1. $(1)$ $I_{X}+T^{H}\delta T$ is invertible in $B(X,X)$ and $\mathcal{R}(\bar{T})\cap\mathcal{N}(T^{H})=\\{0\\};$ 2. $(2)$ $I_{X}+T^{H}\delta T$ is invertible in $B(X,X)$ and $\Upsilon=T^{H}(I_{Y}+\delta TT^{H})^{-1}=(I_{X}+T^{H}\delta T)^{-1}T^{H}$ is a quasi–linear projector generalized inverse of $\bar{T};$ 3. $(3)$ $X=\mathcal{N}(\bar{T})\dotplus\mathcal{R}(T^{H})$ and $Y=\mathcal{R}(\bar{T})\dotplus\mathcal{N}(T^{H})$, i.e., $\mathcal{N}(\bar{T})$ is topological complemented in $X$ and $\mathcal{R}(\bar{T})$ is quasi–linearly complemented in $Y$. ###### Proof. $(1)\Rightarrow(2)$ By Theorem 3.4, $\Upsilon=T^{H}(I_{Y}+\delta TT^{H})^{-1}=(I_{X}+T^{H}\delta T)^{-1}T^{H}$ is a bounded homogeneous generalized inverse of $T$. Let $y\in Y$ and $z\in\mathcal{R}(\bar{T})$. Then $z=Tx+\delta Tx$ for some $x\in X$. 
Since $T^{H}$ is quasi–additive on $\mathcal{R}(T)$ and $\mathcal{R}(\delta T)$, it follows that $T^{H}(y+z)=T^{H}(y+Tx+\delta Tx)=T^{H}(y)+T^{H}(Tx)+T^{H}(\delta Tx)=T^{H}y+T^{H}z,$ i.e., $T^{H}$ is quasi–additive on $\mathcal{R}(\bar{T})$ and hence $\Upsilon$ is quasi–linear. Set $\bar{P}=(I_{X}+T^{H}\delta T)^{-1}(I_{X}-T^{H}T),\qquad\bar{Q}=\bar{T}(I_{X}+T^{H}\delta T)^{-1}T^{H}.$ Then, by the proof of Lemma 3.3, $\bar{P}\in H(X,X)$ is a projector with $\mathcal{R}(\bar{P})=\mathcal{N}(\bar{T})$. Noting that $(I_{X}+T^{H}\delta T)^{-1}$ and $I_{X}-T^{H}T$ are all linear. So $\bar{P}$ is linear. Furthermore, $\displaystyle\Upsilon\bar{T}\\!$ $\displaystyle=(I_{X}+T^{H}\delta T)^{-1}T^{H}(T+\delta T)\\!$ $\displaystyle=(I_{X}+T^{H}\delta T)^{-1}(I_{X}+T^{H}\delta T+T^{H}T-I_{X})\\!$ $\displaystyle=I_{X}-\bar{P}.$ Since $T^{H}$ is quasi–additive on $\mathcal{R}(\bar{T})$, it follow that $\bar{Q}=\bar{T}(I+T^{H}\delta T)^{-1}T^{H}=\bar{T}\Upsilon$ is quasi–linear and bounded with $\mathcal{R}(\bar{Q})\subset\mathcal{R}(\bar{T})$. Noting that $\displaystyle\bar{Q}$ $\displaystyle=\bar{T}T^{H}(I_{Y}+\delta TT^{H})^{-1}=(I_{Y}+\delta TT^{H}+TT^{H}-I_{Y})(I_{Y}+\delta TT^{H})^{-1}$ $\displaystyle=I_{Y}-(I_{Y}-TT^{H})(I_{Y}+\delta TT^{H})^{-1}$ and $(I_{Y}+\delta TT^{H})^{-1}\mathcal{R}(\bar{T})=\mathcal{R}(T)$ by Lemma 3.3, we have $\mathcal{R}(\bar{T})=\bar{Q}(\mathcal{R}(\bar{T}))\subset\mathcal{R}(\bar{Q})$. Thus, $\mathcal{R}(\bar{Q})=\mathcal{R}(\bar{T})$. From $\Upsilon\bar{T}=I_{X}-\bar{P}$ and $\mathcal{R}(\bar{P})=\mathcal{N}(\bar{T})$, we see that $\Upsilon\bar{T}\Upsilon=\Upsilon$, then we have $\bar{Q}^{2}=\bar{T}(I_{X}+T^{H}\delta T)^{-1}T^{H}\bar{T}(I_{X}+T^{H}\delta T)^{-1}T^{H}=\bar{T}\Upsilon\bar{T}\Upsilon=\bar{Q}.$ Therefore, by Definition 4.2, we get $\bar{T}^{H}=\Upsilon$. $(2)\Rightarrow(3)$ From $\bar{T}^{H}=T^{H}(I_{Y}+\delta TT^{h})^{-1}=(I_{X}+T^{H}\delta T)^{-1}T^{H}$, we obtain that $\mathcal{R}(\bar{T}^{H})=\mathcal{R}(T^{H})$ and $\mathcal{N}(\bar{T}^{H})=\mathcal{N}(T^{H})$. From $\bar{T}\bar{T}^{H}\bar{T}=\bar{T}$, $\bar{T}^{H}\bar{T}\bar{T}^{H}=\bar{T}^{H}$, we get that $\mathcal{R}(I_{X}-\bar{T}^{H}\bar{T})=\mathcal{N}(\bar{T}),\ \mathcal{R}(\bar{T}^{H}\bar{T})=\mathcal{R}(\bar{T}^{H}),\ \mathcal{R}(\bar{T}\bar{T}^{H})=\mathcal{R}(\bar{T}),\ \mathcal{R}(I_{Y}-\bar{T}\bar{T}^{H})=\mathcal{N}(\bar{T}^{H})$ Thus $\mathcal{R}(\bar{T}^{H}\bar{)}=\mathcal{R}(T^{H})$ and $\mathcal{R}(I_{Y}-\bar{T}\bar{T}^{H})=\mathcal{N}(T^{H})$. Therefore, $\displaystyle X$ $\displaystyle=\mathcal{R}(I_{X}-\bar{T}^{H}\bar{T})\dotplus\mathcal{R}(\bar{T}^{H}\bar{T})=\mathcal{N}(\bar{T})\dotplus\mathcal{R}(T^{H}),$ $\displaystyle Y$ $\displaystyle=\mathcal{R}(\bar{T}\bar{T}^{H})\dotplus\mathcal{R}(I_{Y}-\bar{T}\bar{T}^{H})=\mathcal{R}(\bar{T})\dotplus\mathcal{N}(T^{H}).$ $(3)\Rightarrow(1)$ By Lemma 4.3, $I_{X}+T^{H}\delta T$ is invertible in $H(X,X)$. Now from $Y=\mathcal{R}(\bar{T})\dotplus\mathcal{N}(T^{H})$, we get that $\mathcal{R}(\bar{T})\cap\mathcal{N}(T^{H})=\\{0\\}$. ∎ ###### Lemma 4.5 ([2]). Let $A\in B(X,X)$. Suppose that there exist two constants $\lambda_{1},\lambda_{2}\in[0,1)$ such that $\|Ax\|\leq\lambda_{1}\|x\|+\lambda_{2}\|(I+A)x\|,\quad\quad(\forall\;x\in X).$ Then $I+A\colon X\rightarrow X$ is bijective. 
Moreover, for any $x\in X$, $\displaystyle\frac{1-\lambda_{1}}{1+\lambda_{2}}\|x\|\leq\|(I+A)x\|\leq\frac{1+\lambda_{1}}{1-\lambda_{2}}\|x\|,\quad\frac{1-\lambda_{2}}{1+\lambda_{1}}\|x\|\leq\|(I+A)^{-1}x\|\leq\frac{1+\lambda_{2}}{1-\lambda_{1}}\|x\|.$ Let $T\in B(X,Y)$ such that $T^{H}$ exists. Let $\delta T\in B(X,Y)$ such that $T^{H}$ is quasi–additive on $\mathcal{R}(\delta T)$ and satisfies $\displaystyle\|T^{H}\delta Tx\|\leq\lambda_{1}\|x\|+\lambda_{2}\|(I+T^{H}\delta T)x\|\quad(\forall\;x\in X),$ (4.1) where $\lambda_{1},\lambda_{2}\in[0,1)$. ###### Corollary 4.6. Let $T\in B(X,Y)$ such that $T^{H}$ exists. Suppose that $\delta T\in B(X,Y)$ such that $T^{H}$ is quasi–additive on $\mathcal{R}(\delta T)$ and satisfies (4.1). Put $\bar{T}=T+\delta T$. Then $I_{X}+T^{H}\delta T$ is invertible in $H(X,X)$ and $\bar{T}^{H}=(I_{X}+T^{H}\delta T)^{-1}T^{H}$ is well defined with $\dfrac{\|\bar{T}^{H}-T^{H}\|}{\|T^{H}\|}\leq\dfrac{(2+\lambda_{1})(1+\lambda_{2})}{(1-\lambda_{1})(1-\lambda_{2})}.$ ###### Proof. By using Lemma 4.5, we get that $I_{X}+T^{H}\delta T$ is invertible in $H(X,X)$ and $\displaystyle\|(I_{X}+T^{H}\delta T)^{-1}\|\leq\dfrac{1+\lambda_{2}}{1-\lambda_{1}},\qquad\|I_{X}+T^{H}\delta T\|\leq\dfrac{1+\lambda_{1}}{1-\lambda_{2}}.$ (4.2) From Theorem 4.4, we see $\bar{T}^{H}=T^{H}(I_{Y}+\delta TT^{H})^{-1}=(I_{X}+T^{H}\delta T)^{-1}T^{H}$ is well–defined. Now we can compute $\displaystyle\dfrac{\|\bar{T}^{H}-T^{H}\|}{\|T^{H}\|}$ $\displaystyle\leq\dfrac{\|(I_{X}+T^{H}\delta T)^{-1}T^{H}-T^{H}\|}{\|T^{H}\|}$ $\displaystyle\leq\dfrac{\|(I_{X}+T^{H}\delta T)^{-1}[I_{X}-(I_{X}+T^{H}\delta T)]T^{H}\|}{\|T^{H}\|}$ $\displaystyle\leq\|(I_{X}+T^{H}\delta T)^{-1}\|\|T^{H}\delta T\|.$ (4.3) Since $\lambda_{2}\in[0,1)$, then from the second inequality in (4.2), we get that $\|T^{H}\delta T\|\leq\dfrac{2+\lambda_{1}}{1-\lambda_{2}}$. Now, by using (4) and (4.2), we can obtain $\dfrac{\|\bar{T}^{H}-T^{H}\|}{\|T^{H}\|}\leq\dfrac{(2+\lambda_{1})(1+\lambda_{2})}{(1-\lambda_{1})(1-\lambda_{2})}.$ This completes the proof. ∎ ###### Corollary 4.7. Let $T\in B(X,Y)$ with $\mathcal{R}(T)$ closed. Assume that $\mathcal{R}(T)$ and $\mathcal{N}(T)$ are Chebyshev subspaces in $Y$ and $X$, respectively. Let $\delta T\in B(X,Y)$ such that $\mathcal{R}(\delta T)\subset\mathcal{R}(T)$, $\mathcal{N}(T)\subset\mathcal{N}(\delta T)$ and $\|T^{M}\delta T\|<1$. Put $\bar{T}=T+\delta T$. If $T^{M}$ is quasi–additive on $\mathcal{R}(T)$, then $\bar{T}^{M}=T^{M}(I_{Y}+\delta TT^{M})^{-1}=(I_{X}+T^{M}\delta T)^{-1}T^{M}$ with $\dfrac{\|\bar{T}^{M}-T^{M}\|}{\|T^{M}\|}\leq\dfrac{\|T^{M}\delta T\|}{1-\|T^{M}\delta T\|}.$ ###### Proof. From $\mathcal{R}(\delta T)\subset\mathcal{R}(T)$ and $\mathcal{N}(T)\subset\mathcal{N}(\delta T)$, we get that $\pi_{\mathcal{R}(T)}\delta T=\delta T$ and $\delta T\pi_{\mathcal{N}(T)}=0$, that is, $TT^{M}\delta T=\delta T=\delta TT^{M}T$. Consequently, $\bar{T}=T+\delta T=T(I_{X}+T^{M}\delta T)=(I_{Y}+\delta TT^{M})T$ (4.4) Since $T^{M}$ is quasi–additive on $\mathcal{R}(T)$ and $\|T^{M}\delta T\|<1$, we get that $I_{X}+T^{M}\delta T$ and $I_{Y}+\delta TT^{M}$ are all invertible in $H(X,X)$. So from (4.4), we have $\mathcal{R}(\bar{T})=\mathcal{R}(T)$ and $\mathcal{N}(\bar{T})=\mathcal{N}(T)$ and hence $\bar{T}^{H}=T^{M}(I_{Y}+\delta TT^{M})^{-1}=(I_{X}+T^{M}\delta T)^{-1}T^{M}$ by Theorem 4.4. 
Finally, by Corollary 2.6, $\displaystyle\bar{T}^{M}$ $\displaystyle=(I_{X}-\pi_{\mathcal{N}(\bar{T})})\bar{T}^{H}\pi_{\mathcal{R}(\bar{T})}=(I_{X}-\pi_{\mathcal{N}(T)})T^{M}(I_{Y}+\delta TT^{M})^{-1}\pi_{\mathcal{R}(T)}$ $\displaystyle=(I_{X}+T^{M}\delta T)^{-1}T^{M}\pi_{\mathcal{R}(T)}=(I_{X}+T^{M}\delta T)^{-1}T^{M}=T^{M}(I_{Y}+\delta TT^{M})^{-1}$ and then $\|\bar{T}^{M}-T^{M}\|\leq\|(I_{X}-T^{M}\delta T)^{-1}-I_{X}\|\|T^{M}\|\leq\frac{\|T^{M}\delta T\|\|T^{M}\|}{1-\|T^{M}\delta T\|}.$ The proof is completed. ∎ ## References * [1] X. Bai, Y. Wang, G. Liu and J. Xia, Definition and criterion of homogeneous generalized inverse, Acta Math. Sinica (Chin. Ser.),, 52 (2) (2009), 353–360. * [2] P. Cazassa and O. Christensen, Perturbation of operators and applications to frame theory, J. Fourier Anal. Appl., 3 (5) (1997) 543–557. * [3] N. Castro–González and J. Koliha, Perturbation of Drazin inverse for closed linear operators, Integral Equations and Operator Theory, 36 (2000), 92–106. * [4] N. Castro–González, J. Koliha and V. Rakočević, Continuity and general perturbation of Drazin inverse for closed linear operators, Abstr. Appl. Anal., 7 (2002), 355–347. * [5] N. Castro–González, J. Koliha and Y. Wei, Error bounds for perturbation of the Drazin inverse of closed operators with equal spectral idempotents, Appl. Anal., 81 (2002), 915–928. * [6] G. Chen, M. Wei and Y. Xue, Perturbation analysis of the least square solution in Hilbert spaces, Linear Algebra Appl., 244 (1996), 69–80. * [7] G. Chen, Y. Wei and Y. Xue, The generalized condition numbers of bounded linear operators in Banach spaces, J. Aust. Math. Soc., 76 (2004), 281–290. * [8] G. Chen and Y. Xue, Perturbation analysis for the operator equation $Tx=b$ in Banach spaces, J. Math. Anal. Appl., 212 (1997), no. 1, 107–125. * [9] G. Chen and Y. Xue, The expression of generalized inverse of the perturbed operators under type I perturbation in Hilbert spaces, Linear Algebra Appl., 285 (1998), 1–6. * [10] J. Ding, New perturbation results on pseudo-inverses of linear operators in Banach spaces, Linear Algebra Appl., 362 (2003), no. 1, 229–235. * [11] J. Ding, On the expression of generalized inverses of perturbed bounded linear operators, Missouri J. Math. Sci., 15 (2003), no. 1, 40–47. * [12] F. Du and Y. Xue, The characterizations of the stable perturbation of a closed operator by a linear operator in Banach spaces, Linear Algebra Appl., (Accepted). * [13] Q. Huang, On perturbations for oblique projection generalized inverses of closed linear operators in Banach spaces, Linear Algebra Appl., 434 (2011), no. 12, 2468–2474. * [14] J. Koliha, Error bounds for a general perturbation of Drazin inverse, Appl. Math. Comput., 126 (2002), 181–185. * [15] I. Singer, The Theory of Best Approximation and Functional Analysis, Springer-Verlag, New York, 1970. * [16] T. Kato. Perturbation Theory for Linear Operators. Springer-Verlag, New York, 1984. * [17] P. Liu and Y. Wang, The best generalized inverse of the linear operator in normed linear space, Linear Algebra Appl., 420 (2007) 9–19. * [18] M. Z. Nashed (Ed.), Generalized inverse and Applications ,Academic Press, New York, 1976. * [19] R. Ni, Moore-Penrose metric generalized inverses of linear operators in arbitrary Banach spaces. Acta Math. Sinica (Chin. Ser.), 49 (2006), no. 6, 1247–1252. * [20] Y. Wang, Generalized Inverse of Operator in Banach Spaces and Applications, Science Press, Beijing, 2005. * [21] H. Wang and Y. Wang, Metric generalized inverse of linear operator in Banach space, Chin. 
Ann. Math. B, 24 (4) (2003) 509–520. * [22] Y. Wang and S. Li, Homogeneous generalized inverses of linear operators in Banach spaces, Acta Math. Sinica, 48 (2) (2005), 253–258. * [23] Y. Wang and Sh. Pan, An approximation problem of the finite rank operator in Banach spaces, Sci. Chin. A, 46 (2) (2003) 245–250. * [24] Y. Wei and J. Ding, Representations for Moore–Penrose inverses in Hilbert spaces, Appl. Math. Lett., 14 (2001) 599–604. * [25] Y. Xue, Stable Perturbations of Operators and Related Topics, World Scientific, 2012. * [26] Y. Xue and G. Chen, Some equivalent conditions of stable perturbation of operators in Hilbert spaces, Applied Math. Comput., 147 (2004), 765–772. * [27] Y. Xue and G. Chen, Perturbation analysis for the Drazin inverse under stable perturbation in Banach space, Missouri J. Math. Sci., 19 (2007), 106–120. * [28] Y. Xue, Stable perturbation in Banach algebras, J. Aust. Math. Soc., 83 (2007), 1–14.
# Milky Way and M31 rotation curves: $\Lambda$CDM vs. MOND De-Chang Dai1,3111communicating author: De-Chang Dai, email<EMAIL_ADDRESS>Glenn Starkman3, Dejan Stojkovic2 1 Center for Gravity and Cosmology, School of Physics Science and Technology, Yangzhou University, 180 Siwangting Road, Yangzhou City, Jiangsu Province, P.R. China 225002 2 HEPCOS, Department of Physics, SUNY at Buffalo, Buffalo, NY 14260-1500 3 CERCA/Department of Physics/ISO, Case Western Reserve University, Cleveland OH 44106-7079 ###### Abstract We analyze the existing rotation-curve data of the Milky Way and M31 galaxies that extends to very large distances and low accelerations. We find a systematic downward trend in the weak acceleration (large distances) segment of the radial acceleration. A similar downward trend has been noticed in the $\Lambda$CDM EAGLE simulation [1], while the deviation from the generic MOND prediction would need to be ascribed to an external field effect, or possibly a post facto selected acceleration function $\mu(x)$. Galaxy rotation curves were the first widely accepted evidence for the existence of dark matter [3]. Outside the maximum radius where the luminous matter contributes significantly to the total mass of a galaxy, Newtonian gravity (and General Relativity (GR)) predict that the centripetal acceleration binding tracers in orbit around the galaxy, and hence the rotational velocity, should fall, yet consistently among all galaxies it plateaus [3, 4, 5, 6, 7, 8, 9]. The dark matter hypothesis interprets this plateau as evidence for the presence of additional non-luminous “dark” matter. A number of lines of argument suggest that most, if not all, of this dark matter would need to be non-relativistic – what cosmologists refer to as cold dark matter (CDM) [10]. CDM is at the core of the concordance $\Lambda$CDM model of cosmology, and the success of $\Lambda$CDM in predicting the details of the power spectrum of temperature and polarization fluctuations of the cosmic microwave background radiation [11] can be regarded as a stringent test that the dark matter hypothesis has passed. Meanwhile, considerable theoretical, experimental and observational effort has been invested in the search for the nature of this dark matter through non-gravitational signals and distinctive gravitational signals [12]. A competing hypothesis for the plateau in galaxy rotation curves, Modified Newtonian Dynamics (MOND) [13], is that Newton’s law (and hence General Relativity) is incorrect at small accelerations. Specific implementations of the MOND paradigm at the level of phenomenological alternatives to Newton’s law or of Poisson’s law have long been considered. While these phenomenological alternatives are limited in their applicability (weak field, non-relativistic, isolated systems, often requiring spherical or axial symmetry), for those galaxies to which they can be applied they fit the rotation curves with only a single parameter per system, the mass-to-light ratio of the luminous matter, and a single universal function to match standard Newtonian behavior at large acceleration to the desired behavior at low acceleration [14]. However, the applicability of MOND to generic systems, including cosmology, has been hampered by the absence of a suitable Relativistic MOND (RMOND) theory – a modification of GR that reduces to MOND in appropriate limits and is consistent with key cosmological observables and other tests of GR. 
Recent work on such Relativistic MOND (RMOND) theories [15, 16] may have removed that difficulty. The discovery [17] of the Radial Acceleration Relation (RAR) – a tight correlation between the observed acceleration of low-acceleration tracers in galaxies and the acceleration expected just from the baryonic matter – was expected in the context of MOND. On the other hand, it is widely regarded as a mystery in $\Lambda$CDM, and therefore a failure of that theory. However, [18, 29] and one of the authors [1] independently examined data from the EAGLE simulation [19, 20, 21], one of the largest cosmological hydrodynamical simulations, and found that it unexpectedly demonstrated such a tight correlation. Its origin must lie in physics emergent from the complex interactions of baryonic matter and the non-linear structures it forms (e.g., stars), which are not directly captured by the standard particle interactions of an N-body simulation, but which the EAGLE Project attempts to model phenomenologically. Unsurprisingly, the EAGLE correlation does not precisely match the observed RAR – the underlying theory is too complicated to expect an accurate prediction from a phenomenological model that was not calibrated to it – however, this should probably be regarded as a very unexpected success of $\Lambda$CDM, though perhaps a less compelling one than for MOND. While the dark matter and MOND hypotheses may agree, if only by construction, on galaxy rotation curves at radii outside the luminous matter, there is no reason for them to continue to do so at larger radii, where the centripetal acceleration falls much lower. Consider a spiral galaxy. Within the $\Lambda$CDM model, there are three main structures in the galaxy: a bulge, a disk and a dark halo. The innermost part is the bulge, which is the gravitationally dominant component within several kpc of the center. The stellar disk is then dominant out to distances of several tens of kpc, where the CDM halo takes over. The presence of the dark matter halo is the primary reason why the rotational velocity does not decrease dramatically after the effects of ordinary matter are saturated. However, within $\Lambda$CDM, the effects of dark matter also eventually saturate. At larger radii, there is no additional component to maintain the rotational velocity at a constant value, so it should decrease outside some characteristic radius of the halo. Within MOND, no such transition is predicted for isolated galaxies. Most radial velocity data in galaxies cover only accelerations larger than $a_{min}\sim 10^{-11}m/s^{2}$, as pointed out in [18]. The region of lower accelerations has not been fully explored yet. However, as we discuss below, recent data on the rotation curves of the Milky Way (MW) and M31 [22, 23, 24] extend to larger distances and hence lower accelerations. Intriguingly, as expected in $\Lambda$CDM, after plateauing at intermediate distances (several tens of kpc from the galactic center), the rotation curve does not remain flat at larger radii. Rotation velocities decrease by tens of km/s, down to values that are much smaller than the plateau value. In the MW, this happens at distances larger than several tens of kpc. Similar trends can also be found in the galaxy M31 (Fig. 5 in [24]). As we discuss below, this decrease in the rotation velocity at large radius is anticipated quantitatively in the results of the EAGLE simulation. It therefore appears to be consistent with the general expectations of $\Lambda$CDM. 
On the other hand, this is not what is expected for an isolated galaxy in MOND – there are strong indications that the rotation curves deviate from the MOND prediction for accelerations below $10^{-11}m/s^{2}$ [1]. Such deviations are ascribed by [2] to the “external field effect” (EFE) in MOND – a consequence of a galaxy’s environment upsetting the normal MOND expectation (see also studies based on other modified gravity models [38, 39, 40, 41, 42, 43, 44]). Below we present rotation curves for the Milky Way and M31, and compare them to the results of the EAGLE simulation and to the predictions of MOND for isolated galaxies. ## I The Milky Way and M31 as spiral galaxies in $\Lambda$CDM In the $\Lambda$CDM model, the rotation curve of a spiral galaxy can be well approximated as arising from the combined contributions of an axisymmetric bulge, disk and dark halo. (Of course both real and simulated galaxies are more complicated.) This allows us to represent the rotational velocity squared as the sum of three components, $v^{2}(r)\equiv v_{b}^{2}(r)+v_{d}^{2}(r)+v_{h}^{2}(r)\,,$ (1) where $r$ is the distance from the center of the galaxy, while $v_{b}$, $v_{d}$ and $v_{h}$ are the contributions to $v^{2}$ due to the bulge, disk and dark halo respectively. Newtonian mechanics simply relates $v_{i}(r)$ (for $i=b,d,h$) to the mass of that component within the radius $r$, $v_{i}^{2}=\frac{GM_{i}(r)}{r}\,.$ (2) A galactic bulge is taken to be spherically symmetric with a de Vaucouleurs profile [25]. Its surface mass density is $\Sigma_{B}(r)=\Sigma_{be}e^{-\kappa\left((\frac{r}{a_{B}})^{1/4}-1\right)},$ (3) where $\kappa=7.6695$, and $\Sigma_{be}$ is the surface mass density at the half-mass radius $r=a_{B}$. The mass density of the bulge is then $\rho_{b}(r)=-\frac{1}{\pi}\int_{r}^{\infty}\frac{d\Sigma_{B}(x)}{dx}\frac{1}{\sqrt{x^{2}-r^{2}}}dx\,,$ (4) so that the bulge mass within the radius $r$ is $M_{b}(r)=4\pi\int^{r}_{0}\rho_{b}(r^{\prime})r^{\prime 2}dr^{\prime}\,$ (5) and $v^{2}_{b}(r)=\frac{4\pi G}{r}\int^{r}_{0}\rho_{b}(r^{\prime})r^{\prime 2}dr^{\prime}\,.$ (6) A galactic disk can be approximated by an exponential disk [26, 27] with surface mass density $\Sigma_{d}(r)=\Sigma_{0}\exp\left(-\frac{r}{a_{d}}\right)\,.$ (7) $\Sigma_{0}$ is the central value and $a_{d}$ is a scale radius. The rotation velocity squared due to the disk [27] can be written explicitly, $v_{d}^{2}=\pi G\Sigma_{0}\frac{r^{2}}{a_{d}}\left[I_{0}\left(\frac{r}{2a_{d}}\right)K_{0}\left(\frac{r}{2a_{d}}\right)-I_{1}\left(\frac{r}{2a_{d}}\right)K_{1}\left(\frac{r}{2a_{d}}\right)\right]\,,$ (8) where $I_{i}$ and $K_{i}$ are modified Bessel functions. A dark halo is taken to follow the NFW profile [28], $\rho_{h}(r)=\frac{\rho_{0}}{\frac{r}{h}\left(1+\frac{r}{h}\right)^{2}}\,,$ (9) where $\rho_{0}$ and $h$ are the scale density and scale radius of the dark halo respectively. This leads to $M_{h}(r)=4\pi\rho_{0}h^{3}\left[\ln\left(1+\frac{r}{h}\right)-\frac{r}{r+h}\right]\,,$ (10) and $v_{h}^{2}=\frac{GM_{h}(r)}{r}\,.$ (11) The total acceleration is then $a_{t}=\frac{v_{b}^{2}+v_{d}^{2}+v_{h}^{2}}{r}\,,$ (12) and the expected rotation curve is $v_{t}(r)=\sqrt{v_{b}^{2}+v_{d}^{2}+v_{h}^{2}}\,.$ (13) Figure 1: The rotation curve of the Milky Way. The data (solid dark circles with error bars) for $r<100$kpc come from [22], while for $r>100$kpc from [23]. The solid, dashed and dotted lines describe the contribution from the bulge, stellar disk and dark matter halo respectively, within a $\Lambda$CDM model of the galaxy. 
The dashed-dot line is the total contribution of all three components. The parameters of each component are taken from [24]. For comparison, the Milky Way rotation curve from GAIA data release II is shown in color. The red dots are data from [34], the blue upward-pointing triangles are from [35], while the cyan downward-pointing triangles are from [36]. Figure 2: The rotation curve of M31. The data (solid purple squares with error bars) come from [24]. The solid, dashed and dotted lines describe the contribution from the bulge, stellar disk and dark matter halo respectively, within a $\Lambda$CDM model of the galaxy. The dashed-dot line is the total contribution of all three components. The parameters of each component are taken from [24]. There are $6$ parameters ($a_{B}$, $a_{d}$, $h$, $\Sigma_{be}$, $\Sigma_{0}$ and $\rho_{0}$) that fully specify all three components of a rotationally supported galaxy in $\Lambda$CDM. We use the values for the MW and M31 from [24]. For the MW they are $a_{B}=0.87\pm 0.07$kpc, $a_{d}=5.73\pm 1.23$kpc, $h=10.7\pm 2.9$kpc, $\Sigma_{be}=0.25\pm 0.02\times 10^{11}M_{\odot}$, $\Sigma_{0}=1.12\pm 0.4\times 10^{11}M_{\odot}$ and $\rho_{0}=18.2\pm 7.4\times 10^{-3}M_{\odot}pc^{-3}$. For M31 they are $a_{B}=1.35\pm 0.02$kpc, $a_{d}=5.28\pm 0.25$kpc, $h=34.6\pm 2.1$kpc, $\Sigma_{be}=0.35\pm 0.004\times 10^{11}M_{\odot}$, $\Sigma_{0}=1.26\pm 0.08\times 10^{11}M_{\odot}$ and $\rho_{0}=2.23\pm 0.24\times 10^{-3}M_{\odot}pc^{-3}$. Figures 1 and 2 show the rotation curves of the Milky Way and M31 respectively, and their resolution into the bulge, disk and halo components. The data for $r<100$kpc come from [22], while for $r>100$kpc from [23]. As expected of a low-dimensional axisymmetric approximation of a spiral galaxy, the fit to the model reproduces the gross features of the rotation curve reasonably well, but not the fine details. Recent Milky Way rotation-curve measurements from GAIA data release II are plotted in Figure 1 in color [34, 35, 36, 37]. The general trend is very similar to the rotation curve based on the older data [24]. We see that while the GAIA rotation curve at modest galactic radii differs significantly from earlier data, so far there is no statistically significant change at larger radii, where the model suggests that the dark matter halo dominates the kinematics. One could try to model the galaxy with GAIA data in the regime where such data are available; however, the behavior at modest galactic radii does not significantly influence the larger radii on which our analysis is concentrated. We also note that there is no such discrepancy in the M31 data. ## II Comparing to $\Lambda$CDM and MOND To compare observations with $\Lambda$CDM and MOND predictions, the Newtonian gravitational acceleration due to the ordinary matter must be separated from the total acceleration. Presumably the acceleration due to baryons, $a_{B}$, comes from the bulge and the disk (with no dark matter halo contribution), i.e. $a_{B}(r)=\frac{v_{b}^{2}(r)+v_{d}^{2}(r)}{r}$ (14) The MOND-predicted acceleration is obtained from [13] $\mu(a/a_{0})\vec{a}=\vec{a}_{B}\,,$ (15) where $a_{0}$ is a critical acceleration constant, which we will set to $a_{0}=1.2\times 10^{-10}m/s^{2}$, while $\mu$ is an empirical function which is often modeled with $\mu(x)=\frac{x}{1+x}\,.$ (16) Solving Eq. (15) with this choice of $\mu$, i.e. $a^{2}/(a+a_{0})=a_{B}$, MOND then predicts the total acceleration to be $a_{M}=\frac{a_{B}+\sqrt{a_{B}^{2}+4a_{0}a_{B}}}{2}\,,$ (17) which is to be compared with (13) in $\Lambda$CDM. 
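To make Eqs. (8), (10)-(14) and (17) concrete, the following is a minimal numerical sketch (not code from this work). The function names, the unit bookkeeping, and two simplifications are ours: the quoted $\Sigma_{be}$ and $\Sigma_{0}$ are interpreted as total bulge and disk masses, and the compact bulge is treated as a point mass, which is adequate at the radii $r\gg a_{B}$ relevant to the comparison below.

```python
# Sketch of the LambdaCDM rotation-curve decomposition and the MOND prediction
# for the Milky Way parameters quoted above (assumptions noted in the lead-in).
import numpy as np
from scipy.special import i0e, i1e, k0e, k1e  # exponentially scaled Bessel functions

G_KPC = 4.30091e-6        # G in kpc (km/s)^2 / Msun
KPC_M = 3.0857e19         # metres per kpc
A0 = 1.2e-10              # MOND acceleration scale a_0 [m/s^2]

M_BULGE = 0.25e11         # bulge mass [Msun]   (interpreting Sigma_be as a total mass)
M_DISK = 1.12e11          # disk mass [Msun]    (interpreting Sigma_0 as a total mass)
A_D = 5.73                # disk scale radius a_d [kpc]
H = 10.7                  # NFW scale radius h [kpc]
RHO0 = 18.2e-3 * 1e9      # NFW scale density rho_0, converted from Msun/pc^3 to Msun/kpc^3

def v2_bulge(r):
    """Bulge term of Eq. (6), point-mass approximation valid for r >> a_B."""
    return G_KPC * M_BULGE / r

def v2_disk(r):
    """Freeman exponential disk, Eq. (8), with Sigma_0 = M_disk / (2 pi a_d^2)."""
    sigma0 = M_DISK / (2.0 * np.pi * A_D**2)
    y = r / (2.0 * A_D)
    # i0e(y) * k0e(y) equals I0(y) * K0(y): the exponential scalings cancel.
    return np.pi * G_KPC * sigma0 * r**2 / A_D * (i0e(y) * k0e(y) - i1e(y) * k1e(y))

def v2_halo(r):
    """NFW halo, Eqs. (10)-(11)."""
    m_h = 4.0 * np.pi * RHO0 * H**3 * (np.log(1.0 + r / H) - r / (r + H))
    return G_KPC * m_h / r

def accel(v2, r):
    """Convert v^2 [(km/s)^2] at radius r [kpc] into an acceleration in m/s^2."""
    return v2 * 1.0e6 / (r * KPC_M)

def mond_accel(a_baryon):
    """Eq. (17): MOND acceleration for the 'simple' mu(x) = x / (1 + x)."""
    return 0.5 * (a_baryon + np.sqrt(a_baryon**2 + 4.0 * A0 * a_baryon))

if __name__ == "__main__":
    for r in (10.0, 50.0, 100.0, 300.0):                      # kpc
        a_b = accel(v2_bulge(r) + v2_disk(r), r)              # Eq. (14)
        a_t = accel(v2_bulge(r) + v2_disk(r) + v2_halo(r), r) # Eq. (12)
        print(f"r={r:6.1f} kpc  a_B={a_b:.2e}  a_LCDM={a_t:.2e}  a_MOND={mond_accel(a_b):.2e} m/s^2")
```

Under these assumptions the baryonic acceleration falls below $10^{-11}m/s^{2}$ at roughly 40-50 kpc, consistent with the several-tens-of-kpc scale quoted in the text.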
In fact, this prediction assumes that the galaxies being observed are isolated. The MOND isolated galaxy (MIG) prediction would be expected to fail where tidal accelerations due to other structures in the environment disrupt the symmetries of the system. This external field effect (EFE) is a longstanding prediction of MOND, but its details – what specific tidal field modifies (17) and how – would depend on how (15) emerges from a specific relativistic theory of broader applicability. In $\Lambda$CDM, disks of galaxies usually form at the centers of dark matter halos. They span a relatively narrow range of acceleration, and the acceleration profiles are self-similar from $a_{0}\sim 10^{-10}m/s^{2}$ to $a_{min}\sim 10^{-11}m/s^{2}$ [18]. $\Lambda$CDM predictions for the rotation curve then clearly deviate from MOND predictions only for $a\lesssim a_{min}$ [1]. Because this is within the dark-matter-dominated region, it is not easily accessible to observations. Since there are very few distant galaxies in which we can explore accelerations below $a_{min}$, this deviation frequently goes unnoticed. However, in nearby galaxies like our own Milky Way and M31, we can explore such low accelerations and the distinguishable predictions of MOND and $\Lambda$CDM, as shown in Fig. 3. Figure 3: The total acceleration, $a$, vs. the Newtonian acceleration due to baryons, $a_{B}$, for data and models. The black circles with error bars represent Milky Way data. The purple squares with error bars represent M31. The $\Lambda$CDM fit to them is the short-dashed line. (The dash-dot line is the $\Lambda$CDM fitting curve of M31.) The dotted line is the reference line for $a=a_{B}$. The dashed and solid lines are predicted by MOND with $a_{0}=1.2\times 10^{-10}m/s^{2}$ and $a_{0}=1.9\times 10^{-10}m/s^{2}$. The gray dots are from the EAGLE simulation (data file RefL0025N0752 on EAGLE’s website) of $\Lambda$CDM in [1]. The thick horizontal (vertical) line crossing the $a$ ($a_{B}$) axis marks the acceleration (baryonic acceleration) below which very little data is available – other than for the MW and M31. Accelerations in M31 and the MW are smaller than the MOND prediction at lower values of $a_{B}$. A potentially observable discrepancy appears at the radius of several tens of kpc for both galaxies. We plot the results out to $500$ kpc. The galaxy mass in EAGLE’s data is chosen to be between $5\times 10^{10}M_{\odot}$ and $5\times 10^{11}M_{\odot}$. For comparison, the Milky Way rotation curve from GAIA data release II is shown in color. The red dots are data from [34], the blue triangles are from [35], while the cyan downward-pointing triangles are from [36]. Fig. 3 shows that for smaller radii, i.e. larger accelerations, MOND does not significantly deviate from expected $\Lambda$CDM model galaxies, but that this changes for lower accelerations where the difference is clear. This feature may not be easy to observe for galaxies that are further away, because the discrepancy becomes clear only in dark halo regions far away from the bulk of the visible matter. It should be noted that the model rotation curve of M31 is closer to the MOND predictions than that of the Milky Way at lower accelerations. This is because M31’s halo is more spread out ($h=34.6$ kpc for M31 compared to $h=10.7$ kpc for the Milky Way). These lower accelerations are also where tidal fields are more likely to induce an external field effect in the MOND theory. 
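To trace where the two predictions separate, the short snippet below reuses the hypothetical helpers from the sketch above (it assumes they are already defined in the same session) and scans the MW model out to 500 kpc, evaluating the quantities plotted in Figs. 3 and 4.

```python
# Scan the MW model curves to low accelerations (reuses the helpers sketched above).
import numpy as np

r = np.logspace(np.log10(5.0), np.log10(500.0), 60)          # radii in kpc
a_b = accel(v2_bulge(r) + v2_disk(r), r)                      # baryonic acceleration, Eq. (14)
a_lcdm = accel(v2_bulge(r) + v2_disk(r) + v2_halo(r), r)      # total LambdaCDM acceleration
a_mig = mond_accel(a_b)                                       # MOND isolated-galaxy prediction

dev = a_lcdm / a_mig - 1.0                                    # the quantity plotted in Fig. 4
low = a_b < 1.0e-11                                           # regime where a clear separation is expected
print(f"lowest a_B reached: {a_b.min():.1e} m/s^2")
print(f"median a/a_M - 1 for a_B < 1e-11: {np.median(dev[low]):+.2f}")
```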
Fig. 3 also compares the total observed acceleration $a$ with the acceleration $a_{B}$ caused by the baryons for the case of the MW (for which the low-acceleration data is much less noisy than for M31). The observational data (black circles with error bars) follow the MOND isolated galaxies (MIG) prediction (dashed and solid lines) very well for larger values of acceleration (down to a few $10^{-10}m/s^{2}$); however, for smaller accelerations, the observed acceleration is lower than the MIG prediction. Fig. 1 of [2] does not extend to the same low accelerations as does this plot with Milky Way data, which might be the reason why the discrepancy is not striking in [2]. Fig. 4 explicitly shows this difference by plotting $a/a_{M}-1$: MOND predicts accelerations that are much larger than observed, and the theory and data appear well-separated for $a<10^{-11}m/s^{2}$. This is the effect that was found in data in [2] and ascribed to the expected external field effect (EFE) – the fact that tidal fields due to nearby mass concentrations (e.g. other galaxies) disrupt the MIG prediction (17). In contrast, the MW and M31 models, like the MW data, are all alike in falling below the MOND prediction at low $a_{B}$. One might object that it is not surprising that the MW model matches the data; however, Fig. 3 also includes the data from EAGLE galaxies [19, 20, 21] as presented in [1]. The turndown in $a$ at low $a_{B}$ matches the feature previously pointed out by one of us [1] in the EAGLE simulation [19, 20, 21]. We see that for EAGLE, $a$ vs. $a_{B}$ follows a trend similar to that in MOND for $a\gtrsim a_{min}$ with reasonably low dispersion, but then $a$ falls faster at low $a_{B}$ in EAGLE than predicted by MOND, and the dispersion in the simulated data increases. This broad feature is consistent with the data, and with the fits to simple dark-matter models. We note here that the EAGLE simulation data points in Fig. 4 are a combination of many independent galaxies. The rotation curves of galaxies with less dark matter are closer to Newtonian physics. On the other hand, the rotation curves of galaxies with more dark matter are well above the MOND prediction. In general, MOND predicts some average behavior, instead of a precise behavior for every single galaxy. This is the reason why one can see single-galaxy deviations from MOND clearly, while the deviation is not so clear if many galaxies are included. It would be attractive to be able to compare the data to the EAGLE simulation quantitatively at these low $a$ and $a_{B}$; however this is not necessarily justified, and we certainly resist doing so here. The baryonic physics that, in EAGLE, results in the MOND-like radial acceleration relation and in the downturn and increased (fractional) dispersion at low $a_{B}$, is microphysics that is modeled phenomenologically. Even if we were convinced that all and only the correct relevant physics has been captured in EAGLE, the appropriate Bayesian hierarchical model comparison needed to draw a statistically robust conclusion would certainly require more sophisticated simulation or emulation infrastructure than is currently extant. Nevertheless, we think the case is clear that where ultra-low-acceleration galaxy-rotation-curve data exist, they display both the RAR-like correlation of $a$ and $a_{B}$ expected from EAGLE at moderately low accelerations, and the stronger tail-off expected at the lowest accelerations. 
Figure 4: Figure 3 recast as a comparison between the total acceleration, $a$, and the MOND prediction, $a_{M}$, as a function of the acceleration due to baryons $a_{B}$. The solid horizontal line is $a=a_{M}$. The circles and squares with error bars represent the Milky Way and M31 data, while the gray dots are from the EAGLE simulation of $\Lambda$CDM in [1]. For $a_{B}\gtrsim 10^{-10}m/s^{2}$ any difference between $a$ and $a_{M}$ is unclear. However, once $a_{B}$ drops well below $10^{-11}m/s^{2}$, the discrepancy emerges. The short-dashed line is the $\Lambda$CDM fitting curve of the MW. The dash-dot line is the $\Lambda$CDM fitting curve of M31. The mass range of galaxies in EAGLE’s data is chosen to be between $5\times 10^{10}M_{\odot}$ and $5\times 10^{11}M_{\odot}$. For comparison, the Milky Way rotation curve from GAIA data release II is shown in color. The red dots are data from [34], the blue triangles are from [35], while the cyan downward-pointing triangles are from [36]. While the EAGLE simulation does not match the data perfectly, these plots indicate that it is much easier to accommodate a systematic downward trend with the $\Lambda$CDM model than with MOND. Similar indications come from other studies. The authors of [30] found that $\Lambda$CDM reproduces the acceleration relation predicted by MOND, and conjectured that this relation was a consequence of dissipative collapse of baryons. Such baryonic physics is not directly obtained by individual baryonic particles in the simulations colliding and emitting photons into the simulation volume; rather, it is modeled phenomenologically. However, their study extended only to accelerations $\gtrsim 10^{-11.5}m/s^{2}$, where we do not expect to see a clear deviation from the RAR. In [31] the authors found that the deviation from the RAR becomes clear for radii larger than 100 kpc, which agrees with our findings. Meanwhile, the RAR has been studied using weak-lensing data released by the Kilo-Degree Survey, first [32] using KiDS galaxy-galaxy lensing on isolated foreground galaxies from 180 deg$^{2}$ of the Galaxy and Mass Assembly (GAMA) survey, and more recently [33] using a selection of 1 million foreground galaxies from KiDS-1000 to achieve a fivefold increase in survey area. The GAMA survey results were broadly consistent with the MOND prediction for an extension of the RAR down to $a\simeq 10^{-12.5}m/s^{2}$ with the observed stellar baryons plus inferred cold gas; the KiDS-1000 analysis yielded values of $a/a_{B}$ that were systematically higher, matching the MOND prediction with the additional inclusion of a hot gas component. All of them are well above the MW and M31 RARs at $10^{-12.5}-10^{-11}m/s^{2}$, and only the mean RAR is reported. These results appear to be in tension with our analysis here. However, as the authors of [32] state, a fundamental limitation of their analysis is that the additional diffuse gas surrounding galaxies is difficult to measure, and has not been included in their study. The existence of gaseous halos not included in their study would enhance the acceleration from baryons (and thus $a_{M}$), and the measured rotation curve would then be well below what MOND predicts. In summary, flat galactic rotation curves can be explained either by introducing a dark matter component, as in the $\Lambda$CDM model, or by modifying Newtonian dynamics, as in MOND theories. However, it has been noticed that the acceleration systematically goes down at large distances, i.e. low accelerations. 
This systematic downward trend in the weak gravity segment of the galactic rotation curves was interpreted in [2] as an external field effect within the framework of the MOND theories. However, this feature was observed in the $\Lambda$CDM model without invoking any modification of Newtonian gravity, as was first found by one of the authors in the EAGLE simulation [1]. They showed that dark halos in $\Lambda$CDM generate an acceleration feature like MOND predicts, but that in the even-larger-radius, even-lower-acceleration regions, the dark halos are saturated and the acceleration then decreases as Kepler’s Laws, i.e. Newtonian gravity, but not MOND predict. We corroborated this finding here by analyzing the Milky Way and M31 data. At very large distances, these data are easier to accommodate in the $\Lambda$CDM model. ###### Acknowledgements. D.C Dai is supported by the National Natural Science Foundation of China (Grant No. 11775140). D.S. is partially supported by the US National Science Foundation, under Grant No. PHY-2014021. GDS is supported by a Department of Energy grant DE-SC0009946 to the particle astrophysics theory group at CWRU. ## References * [1] D. C. Dai and C. Lu, Phys. Rev. D 96, no.12, 124016 (2017) doi:10.1103/PhysRevD.96.124016 [arXiv:1712.01654 [gr-qc]]. * [2] K. H. Chae, F. Lelli, H. Desmond, S. S. McGaugh, P. Li and J. M. Schombert, Astrophys. J. 904, no.1, 51 (2020) [erratum: Astrophys. J. 910, no.1, 81 (2021)] doi:10.3847/1538-4357/abbb96 [arXiv:2009.11525 [astro-ph.GA]]. * [3] V. C. Rubin and W. K. Ford, Jr., Astrophys. J. 159, 379-403 (1970) doi:10.1086/150317 * Rubin et al. [1980] V. C. Rubin, W. K. Ford, & N. Thonnard, apj, 238, 471. (1980) doi:10.1086/158003 * Bosma [1981] A. Bosma, aj, 86, 1791 (1981). doi:10.1086/113062 * Persic & Salucci [1988] M. Persic & P. Salucci, Mon. Not. Roy. Astron. Soc. 234, 131 (1988). doi:10.1093/mnras/234.1.131 * [7] M. Persic, P. Salucci and F. Stel, Mon. Not. Roy. Astron. Soc. 281, 27 (1996) doi:10.1093/mnras/278.1.27 [arXiv:astro-ph/9506004 [astro-ph]]. * [8] J. F. Navarro, C. S. Frenk and S. D. M. White, Astrophys. J. 490, 493-508 (1997) doi:10.1086/304888 [arXiv:astro-ph/9611107 [astro-ph]]. * [9] E. Corbelli and P. Salucci, Mon. Not. Roy. Astron. Soc. 311, 441-447 (2000) doi:10.1046/j.1365-8711.2000.03075.x [arXiv:astro-ph/9909252 [astro-ph]]. * [10] P. J. E. Peebles, Astrophys. J. Lett. 263, L1-L5 (1982) doi:10.1086/183911 * [11] N. Aghanim et al. [Planck], Astron. Astrophys. 641, A1 (2020) doi:10.1051/0004-6361/201833880 [arXiv:1807.06205 [astro-ph.CO]]. * [12] C. Pérez de los Heros, Symmetry 12, no.10, 1648 (2020) doi:10.3390/sym12101648 [arXiv:2008.11561 [astro-ph.HE]]. * Milgrom [1983] Milgrom, M. 1983, Astrophys. J. , 270, 365 * [14] R. H. Sanders and S. S. McGaugh, Ann. Rev. Astron. Astrophys. 40, 263-317 (2002) doi:10.1146/annurev.astro.40.060401.093923 [arXiv:astro-ph/0204521 [astro-ph]]. * [15] J. D. Bekenstein, Phys. Rev. D 70, 083509 (2004) [erratum: Phys. Rev. D 71, 069901 (2005)] doi:10.1103/PhysRevD.70.083509 [arXiv:astro-ph/0403694 [astro-ph]]. * [16] C. Skordis and T. Zlosnik, [arXiv:2007.00082 [astro-ph.CO]]. * [17] S. McGaugh, F. Lelli and J. Schombert, Phys. Rev. Lett. 117, no.20, 201101 (2016) * [18] J. F. Navarro, A. Benítez-Llambay, A. Fattahi, C. S. Frenk, A. D. Ludlow, K. A. Oman, M. Schaller and T. Theuns, Mon. Not. Roy. Astron. Soc. 471, no.2, 1841-1848 (2017) doi:10.1093/mnras/stx1705 [arXiv:1612.06329 [astro-ph.GA]]. * [19] R. A. Crain, J. Schaye, R. G. Bower, M. Furlong, M. Schaller, T. 
Theuns, C. D. Vecchia, C. S. Frenk, I. G. McCarthy and J. C. Helly, et al. Mon. Not. Roy. Astron. Soc. 450, no.2, 1937-1961 (2015) doi:10.1093/mnras/stv725 [arXiv:1501.01311 [astro-ph.GA]]. * [20] S. McAlpine, J. C. Helly, M. Schaller, J. W. Trayford, Y. Qu, M. Furlong, R. G. Bower, R. A. Crain, J. Schaye and T. Theuns, et al. Astron. Comput. 15, 72-89 (2016) doi:10.1016/j.ascom.2016.02.004 [arXiv:1510.01320 [astro-ph.GA]]. * [21] J. Schaye, R. A. Crain, R. G. Bower, M. Furlong, M. Schaller, T. Theuns, C. Dalla Vecchia, C. S. Frenk, I. G. McCarthy and J. C. Helly, et al. Mon. Not. Roy. Astron. Soc. 446, 521-554 (2015) doi:10.1093/mnras/stu2058 [arXiv:1407.7040 [astro-ph.GA]]. * Sofue [2020] Sofue, Y. 2020, Galaxies, 8, 37 * Sofue [2012] Sofue, Y. 2012, pasj, 64, 75 * Sofue [2015] Sofue, Y. 2015, pasj, 67, 75 * de Vaucouleurs [1948] G. de Vaucouleurs, Annales d’Astrophysique, 11, 247 (1948) * de Vaucouleurs [1959] G. de Vaucouleurs, Handbuch der Physik, 53, 311 (1959). doi:10.1007/978-3-642-45932-0_8 * Freeman [1970] K. C. Freeman, , apj, 160, 811 (1970). doi:10.1086/150474 * Navarro et al. [1996] Navarro, J. F., Frenk, C. S., & White, S. D. M. 1996, apj, 462, 563 * [29] A. D. Ludlow, A. Benitez-Llambay, M. Schaller, T. Theuns, C. S. Frenk, R. Bower, J. Schaye, R. A. Crain, J. F. Navarro and A. Fattahi, et al. Phys. Rev. Lett. 118, no.16, 161103 (2017) doi:10.1103/PhysRevLett.118.161103 [arXiv:1610.07663 [astro-ph.GA]]. * Keller Wadsley [2017] Keller, B. W. & Wadsley, J. W. 2017, Astrophysical Journal Letter, 835, L17. doi:10.3847/2041-8213/835/1/L17 * Wheeler et al. [2019] Wheeler, C., Hopkins, P. F., & Doré, O. 2019, Astrophys. J. , 882, 46. doi:10.3847/1538-4357/ab311b * [32] M. M. Brouwer, M. R. Visser, A. Dvornik, H. Hoekstra, K. Kuijken, E. A. Valentijn, M. Bilicki, C. Blake, S. Brough and H. Buddelmeijer, et al. Mon. Not. Roy. Astron. Soc. 466, no.3, 2547-2559 (2017) doi:10.1093/mnras/stw3192 [arXiv:1612.03034 [astro-ph.CO]]. * [33] M. M. Brouwer, K. A. Oman, E. A. Valentijn, M. Bilicki, C. Heymans, H. Hoekstra, N. R. Napolitano, N. Roy, C. Tortora and A. H. Wright, et al. Astron. Astrophys. 650, A113 (2021) doi:10.1051/0004-6361/202040108 [arXiv:2106.11677 [astro-ph.GA]]. * Eilers et al. [2019] Eilers, A.-C., Hogg, D. W., Rix, H.-W., et al. 2019, apj, 871, 120. doi:10.3847/1538-4357/aaf648 * Eadie & Jurić [2019] Eadie, G. & Jurić, M. 2019, apj, 875, 159. doi:10.3847/1538-4357/ab0f97 * [36] T. Callingham, M. Cautun, A. J. Deason, C. S. Frenk, W. Wang, F. A. Gómez, R. J. J. Grand, F. Marinacci and R. Pakmor, doi:10.1093/mnras/stz365 [arXiv:1808.10456 [astro-ph.GA]]. * Cautun et al. [2020] Cautun, M., Benítez-Llambay, A., Deason, A. J., et al. 2020, mnras, 494, 4291. doi:10.1093/mnras/staa1017 * [38] J. W. Moffat and V. T. Toth, Phys. Rev. D 91, no.4, 043004 (2015) doi:10.1103/PhysRevD.91.043004 [arXiv:1411.6701 [astro-ph.GA]]. * [39] J. W. Moffat and V. T. Toth, Astrophys. J. 680, 1158 (2008) doi:10.1086/587926 [arXiv:0708.1935 [astro-ph]]. * [40] Z. Davari and S. Rahvar, Mon. Not. Roy. Astron. Soc. 496, no.3, 3502-3511 (2020) doi:10.1093/mnras/staa1660 [arXiv:2006.06032 [astro-ph.GA]]. * [41] J. W. Moffat and V. T. Toth, Eur. Phys. J. C 81, no.9, 836 (2021) doi:10.1140/epjc/s10052-021-09632-5 [arXiv:2109.11133 [gr-qc]]. * [42] P. D. Mannheim and J. W. Moffat, Int. J. Mod. Phys. D 30, no.14, 2142009 (2021) doi:10.1142/S0218271821420098 [arXiv:2103.13972 [gr-qc]]. * [43] Z. Davari and S. Rahvar, Mon. Not. Roy. Astron. Soc. 
507, no.3, 3387-3399 (2021) doi:10.1093/mnras/stab2350 [arXiv:2108.00266 [astro-ph.CO]]. * [44] J. W. Moffat and V. Toth, Universe 7, no.10, 358 (2021) doi:10.3390/universe7100358 [arXiv:2104.12806 [gr-qc]].
# Perturbation-based Self-supervised Attention for Attention Bias in Text Classification Huawen Feng, Zhenxi Lin, Qianli Ma Huawen Feng, Zhenxi Lin, and Qianli Ma are with the School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510006, China (e-mail<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>*corresponding author) ###### Abstract In text classification, the traditional attention mechanisms usually focus too much on frequent words, and need extensive labeled data in order to learn. This paper proposes a perturbation-based self-supervised attention approach to guide attention learning without any annotation overhead. Specifically, we add as much noise as possible to all the words in the sentence without changing their semantics and predictions. We hypothesize that words that tolerate more noise are less significant, and we can use this information to refine the attention distribution. Experimental results on three text classification tasks show that our approach can significantly improve the performance of current attention-based models, and is more effective than existing self- supervised methods. We also provide a visualization analysis to verify the effectiveness of our approach. ###### Index Terms: Attention bias, perturbation, self-supervised learning, text classification. ## I Introduction Attention mechanisms [1, 2, 3] play an essential role in Natural Language Processing (NLP) and have been shown to be effective in various text classification tasks, such as sentiment analysis [4, 5, 6], document classification [7] and natural language inference [8]. They achieve significant performance gains, and can be used to provide insights into the inner workings of the model. Generally, the attention learning procedure is conditioned on access to large amounts of training data without additional supervision information. Although the current attention mechanisms have achieved remarkable performance, several problems remain unsolved. First, learning a good attention distribution without spurious correlations for neural networks requires large volumes of informative labeled data [9, 10]. As described in the work of Wallace et al. [11], after inserting 50 poison examples with the name “James Bond” into its training set, a sentiment model will frequently predict a positive whenever the input contains this name, even though there is no correlation between the name and the prediction. Second, attention mechanisms are prone to focus on high-frequency words with sentiment polarities and assign relatively high weights to them [12, 13, 5], while the higher frequency does not imply greater importance. Especially when there’s an adversative relation in a text, some high-frequency words with strong sentiment valence need to be selectively ignored based on the context of the whole text. In these cases, these words will mislead the model because the important words don’t get enough attention. The sentences in Figure 1 illustrate this problem. In most training sentences, as shown in the first four rows, “better” and “free” appear with positive sentiment, which makes the attention mechanism accustomed to attaching great importance to them and relating them to positive predictions. However, the two words are used ironically in the fifth sentence, and the model pays the most attention to them while the critical word – “leave” – is not attended to, resulting in an incorrect prediction. 
Based on these observations, there’s reason to believe that the attention mechanisms could be improved for text classification. Figure 1: The attention visualization for five sentences. The ”A/B” style tags before each row mean the model’s prediction is A and the label is B. The first four sentences are selected from training sets as representatives containing high-frequency words - ”better” (yellow box) and ”free” (green box). The last sentence including both of the two words is selected from testing sets, typically showing that the distribution of attention weights when some words in the sentence appear frequently in the corpus but are unimportant to the current prediction. To tackle this problem the most direct solution is to add human supervision collected by manual annotation [14, 10, 15] or special instruments [9, 16, 17, 18] (e.g., eye-tracking), to provide an inductive bias for attention. These approaches are costly, the labeling is entirely subjective, and there is often high variance between annotators. In particular, Sen et al. [19] point out that there is a huge difference between machine and human attention and it is difficult to map human attention to machine attention. Another flexible solution is to measure attribution scores, i.e., how much each token in a text contributes to the final prediction, to approximate an importance distribution as an attention supervision signal [20, 21, 5, 6]. Generally, the attribution scores are obtained by masking each token one by one to generate counterfactual examples, reflecting the difference in the softmax probability of the model after masking each token. These approaches have little or no additional annotation overhead and augment supervision information from the training corpus to refine the attention distribution. Despite their success, masking schemes can give rise to an out-of-distribution (OOD) problem [22, 23, 24]. That is, the generated counterfactuals deviate from the training data distribution of the target model, resulting in an overestimation of the contribution of unimportant tokens. The OOD problem induced by existing masking schemes makes it difficult to identify whether high-scoring tokens contribute significantly to the prediction. Furthermore, most of them are limited to generating uniform attention weights for the selected important words. Obviously, the contribution of different important words to the model should also be different according to the context, e.g., the word leave should have a higher attention weight than better and free for the fifth sentence in Figure 1. Some efforts reveal that the output of neural networks can be theoretically guaranteed to be invariant for a certain magnitude of input perturbations through establishing the concept of maximum safety radius [25, 26] or minimum disturbance rejection [27]. In simple terms, these approaches evaluate the minimum distance of the nearest perturbed text in the embedding space that is classified differently from the original text. Inspired by this work, we propose a novel perturbation-based self-supervised attention learning method without any additional annotation overhead for text classification. Specifically, we design an attention supervision mining mechanism called Word- based Concurrent Perturbation (WBCP), which effectively calculates an explainable word-level importance distribution for the input text. 
Concretely, WBCP tries to concurrently add as much noise as possible to perturb each word embedding of the input, while ensuring that the semantics of input and the classification outcome is not changed. Under this condition, the words that tolerate more noise are less important and the ones sensitive to noise deserve more attention. We can use the permissible perturbation amplitude as a measure of the importance of a word, where small amplitude indicates that minor perturbations of that word can have a significant influence on the semantic understanding of input text and easily lead to prediction error. According to the inverse distribution of perturbation amplitude, we can get sample-specific attention supervision information. Later, we use this supervision information to refine the attention distribution of the target model and iteratively update it. Notably, our method is model-agnostic and can be applied to any attention-based neural network. It generates attention supervision signals in a self-supervised manner to improve text classification performance without any manual labeling and incorporates Perturbation-based Self-supervised Attention (PBSA) to avoid the OOD problem caused by the masking scheme. In addition, it can also generate special attention supervision weights adaptively for each sample based on the perturbation amplitude, rather than allocate them uniformly. In summary, the contributions of this paper are as follows: (1) Through analysis of current methods, we point out the disadvantages and drawbacks of current attention mechanisms for text classification. (2) We propose a simple yet effective approach to automatically mine the attribution scores for the input text, and use it as supervision information to guide the learning of attention weights of target models. (3) We apply our approach to various text classification tasks, including sentence classification, document categorization, and aspect-level sentiment analysis. Extensive experiments and visualization analysis show the effectiveness of the proposed method in improving both model prediction accuracy and robustness. (4) Theoretically, our algorithm can be applied to the models with attention mechanisms, but it is impossible to compare with all of them. Considering this, we conduct our experiments on several typical baselines (LSTM, BERT [28], DEBERTA [29], ELECTRA [30], Memory Net [31], etc.) to justify the effectiveness of our method. Notably, we also compared our algorithm with other advanced attention self-supervised methods (PGAS [32], AWAS [5], SANA [6]). ## II Related work Figure 2: The diagram of WBCP. The left part of the figure corresponds to the last term of Eq. (2), which illustrates the process of adding noise that follows a Gaussian distribution to each word. The right part of the figure corresponds to the first two terms of Eq. (2), indicating the constraint of trying to not change the semantics and predictions after the noise is introduced. Work related to our method can be categorized into three types: Introducing human attention; using external resources or tools; and using self- supervision. Introducing human attention Adding human supervision to attention has been shown to effectively alleviate attention bias and improve model prediction accuracy on a range of tasks [14, 15, 16, 17, 18]. In general, the annotators need to explicitly highlight the important words or rationales [14, 10, 15] for the given sample. 
Obviously, the annotation is very labor-intensive and expensive in real-world scenarios, so an alternative is to use implicit signals such as eye gaze [9, 16, 17, 18]. For these methods, it is expected that the model can generate similar attention to human supervision. However, human recognition and model reasoning processes may be inconsistent [33], and aligning the two is challenging [19]. Using external resources or tools With the development of NLP, many corpora and tools, such as Dependency Tree and Synonym Dictionary, are created to obtain a deeper understanding of words and sentences. Therefore, some methods [34, 35, 36, 37] that generate attention supervision information according to existing corpora and tools emerge. For example, Nguyen et al. [36] introduce attention supervision information based on important words selected by semantic word lists and dependency trees. Similarly, Zhao et al. [37] first train the model on the document-level sentiment classification and then transfer the attention knowledge to a fine-grained one for aspect-level sentiment classification. And Hu et al. [38] introduce the tree structure’s representation into attention computations. However, these methods still rely on annotations based on parsers or external resources, and the performance depends heavily on the quality of the parser. Self-supervised attention learning Currently, self-supervised attention learning frameworks [20, 21, 5, 6, 32] have become the mainstream method because they do not require additional annotation overhead. They usually mask or erase each token one by one and quantify the difference in predictions of the model after masking each token, to approximate an importance distribution as attention supervision information. For example, Tang et al. [5] divide the words in sentences into the active set and the misleading set by progressively masking each word with respect to the maximum attention weight, and augment them to make the model focus on the active context words. Similarly, Choi et al. [6] adopt the masking method to find the unimportant words and gradually reduce their weights. These methods use a self-supervised paradigm to mine important words, which can greatly reduce the annotation cost and improve the robustness of the model. Nevertheless, the masking scheme they follow has an OOD problem. The counterfactuals generated by the mask operation deviate from the original training set distribution, which easily leads to the over- evaluation of unimportant words. In addition, the above methods usually assign the same weight to the extracted important words, but in our opinion, different words should have different contributions to the classification. ## III Proposed method In this section, we propose a Perturbation-based Self-supervised Attention (PBSA) mechanism to enhance the attention learning process and provide a good inductive bias. We first design a Word-based Concurrent Perturbation (WBCP) to automatically mine the attribution score for each word and use this as a measure of its degree of importance. Then we use the measure mentioned above to compute a word-level importance distribution as supervision information. Finally, we describe how to use the supervision information to refine the attention mechanism of the target model, improving the accuracy and robustness of text classification tasks. 
### III-A Word-based Concurrent Perturbation The basic assumption of our design is based on the following fact: under the premise of trying not to change the semantics of the input text, unimportant words can withstand more changes than more significant ones. Specifically, a little noise on keywords can lead to dramatic changes in the final results, while greater noise on the unimportant ones won’t easily lead to changes in results. Therefore, we can estimate the importance distribution of the words according to the maximum amount of noise they can tolerate. To be specific, we try to concurrently add as much noise as possible to perturb each word embedding without changing the latent representations (e.g., the hidden states for classification) of the text and the prediction result. The above process can be optimized according to the maximum entropy principle. Given a sentence consisting of $n$ words $s=\\{w_{1},w_{2},...,w_{m}\\}$, we map each word into its embedding vector $\boldsymbol{X}=\\{\boldsymbol{x}_{1},\boldsymbol{x}_{2},...,\boldsymbol{x}_{n}\\}$. Actually, WBCP (Word-based Concurrent Perturbation) is based on the embedding of each token $\boldsymbol{X}$ but not each word $s$. Intuitively, one word can be tokenized into several parts, and various parts have various influences on the representation. Considering that, in experiments, the perturbation is added to each token generated by the tokenizer, which means each token has its own $\sigma_{i}$ (maximum safety radius). For ease of explanation and comprehension, here we take the traditional embedding where $m=n$ (each word has only one embedding, e.g. word2vec, glove, and so on) as an example in Figure 2 and Section III-A. We assume that the noise on word embeddings obeys a Gaussian distribution $\boldsymbol{\epsilon}_{i}\sim\mathcal{N}\left(\mathbf{0},\mathbf{\Sigma}_{i}=\sigma_{i}^{2}\mathbf{I}\right)$ and let $\widetilde{\boldsymbol{x}_{i}}=\boldsymbol{x}_{i}+\boldsymbol{\epsilon}_{i}$ denote an input with noise $\boldsymbol{\epsilon}_{i}$. We use $\boldsymbol{h}$, $\boldsymbol{y}$ and $\widetilde{\boldsymbol{h}}$, $\widetilde{\boldsymbol{y}}$ to indicate the hidden state for classification and the prediction result of a pre-trained model with no noise and with noise respectively. Then we can write the loss function of WBCP as follows: $\displaystyle\begin{split}\mathcal{L}_{WBCP}=||\widetilde{\boldsymbol{h}}-\boldsymbol{h}||^{2}_{2}+||\widetilde{\boldsymbol{y}}-\boldsymbol{y}||^{2}_{2}\\\ -{\lambda}{\sum}_{i=1}^{n}H({\boldsymbol{\epsilon}_{i}})|_{\boldsymbol{\epsilon}_{i}\sim\mathcal{N}\left(\mathbf{0},\mathbf{\Sigma}_{i}=\sigma_{i}^{2}\mathbf{I}\right)},\end{split}$ (1) where $\lambda$ is a hyperparameter that balances the strength of noise. The first and the second term of Eq. (1) mean that we need to minimize the L2-normalized euclidean distance between the two hidden states and between the two predictions respectively, to quantify the change of information [39]. The first term maintains latent representations to prevent modification of the text semantics, and the second term prevents excessive perturbations from causing the model to mispredict. The last term indicates that we need to maximize the entropy $H(\boldsymbol{\epsilon}_{i})|_{\boldsymbol{\epsilon}_{i}\sim\mathcal{N}\left(\mathbf{0},\mathbf{\Sigma}_{i}=\sigma_{i}^{2}\mathbf{I}\right)}$ to encourage adding as much noise as possible to each word embedding. 
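As a concrete illustration of this objective, here is a minimal PyTorch-style sketch of Eq. (1); the TinyEncoder stand-in, the log-$\sigma$ parameterisation (used only to keep $\sigma_{i}$ positive), and all function names are ours and are not part of any released implementation.

```python
# Minimal sketch of the WBCP objective: learn a per-token noise scale sigma_i
# that is as large as possible while keeping the frozen model's sentence
# representation h and prediction y (almost) unchanged.
import torch


class TinyEncoder(torch.nn.Module):
    """Stand-in for a frozen pre-trained classifier (for illustration only)."""

    def __init__(self, dim=8, n_classes=2):
        super().__init__()
        self.proj = torch.nn.Linear(dim, dim)
        self.head = torch.nn.Linear(dim, n_classes)

    def forward(self, emb):                                   # emb: (seq_len, dim)
        h = torch.tanh(self.proj(emb)).mean(dim=0)            # pooled sentence representation
        return h, torch.softmax(self.head(h), dim=-1)         # (h, y)


def wbcp_loss(encoder, embeddings, log_sigma, lam=0.1):
    """Eq. (1): ||h~ - h||^2 + ||y~ - y||^2 - lam * sum_i H(eps_i), up to constants."""
    with torch.no_grad():
        h_clean, y_clean = encoder(embeddings)                # noise-free reference

    sigma = log_sigma.exp().unsqueeze(-1)                     # (seq_len, 1), positive by construction
    noisy = embeddings + sigma * torch.randn_like(embeddings) # eps_i ~ N(0, sigma_i^2 I)
    h_noisy, y_noisy = encoder(noisy)

    keep_h = (h_noisy - h_clean).pow(2).sum()                 # preserve the latent representation
    keep_y = (y_noisy - y_clean).pow(2).sum()                 # preserve the prediction
    entropy = log_sigma.sum()                                 # maximising H(eps_i) == maximising log sigma_i
    return keep_h + keep_y - lam * entropy


if __name__ == "__main__":
    torch.manual_seed(0)
    enc = TinyEncoder()
    for p in enc.parameters():
        p.requires_grad_(False)                   # the pre-trained model stays frozen
    emb = torch.randn(5, 8)                       # embeddings of a 5-token sentence
    log_sigma = torch.zeros(5, requires_grad=True)
    opt = torch.optim.AdamW([log_sigma], lr=0.01)
    for _ in range(200):                          # a few hundred steps are reported to suffice
        opt.zero_grad()
        wbcp_loss(enc, emb, log_sigma).backward()
        opt.step()
    print("learned sigma per token:", log_sigma.exp().detach())
```

Tokens whose learned $\sigma_{i}$ remains large can absorb substantial noise without disturbing $h$ or $y$, and are therefore treated as less important in what follows.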
We can simplify the maximum entropy of the Gaussian distribution as follows: $\displaystyle Maximize(H({\boldsymbol{\epsilon}}_{i}))$ $\displaystyle=Maximize(-\int p({\boldsymbol{\epsilon}}_{i})\ln p({\boldsymbol{\epsilon}}_{i})d{\boldsymbol{\epsilon}}_{i})$ $\displaystyle=Maximize(\frac{1}{2}(\ln(2{\pi}{\sigma_{i}}^{2})+1))$ $\displaystyle=Maximize(\ln 2(\frac{1}{2}\log(2\pi e)+\log{\sigma}_{i}))$ $\displaystyle=Maximize(\log{\sigma}_{i})$ Finally we can use Eq. (III-A) to rewrite our final objective function: $\begin{split}\mathcal{L}_{WBCP}=||\widetilde{\boldsymbol{h}}-\boldsymbol{h}||^{2}_{2}+||\widetilde{\boldsymbol{y}}-\boldsymbol{y}||^{2}_{2}+{\lambda}{\sum}_{i=1}^{n}\log({-\sigma_{i}})\end{split}$ (2) The illustration of WBCP is given in Figure 2. After fixing the parameters of the pre-trained model, the only learnable parameters ${\sigma}=\\{{\sigma}_{1},{\sigma}_{2},{\sigma}_{3},{\sigma}_{4}\\}$ can be considered as the perturbation radii, which is positively associated with the perturbation amplitude. Specifically, the larger ${\sigma}_{i}$ WBCP gets, the more likely ${\boldsymbol{\epsilon}}_{i}$ is a big number, the more noise is added to $\boldsymbol{x}_{i}$, and the less important it is. As what is shown in the picture, it is obvious that ${\sigma}_{2}>{\sigma}_{1}>{\sigma}_{4}>{\sigma}_{3}$. According to the analysis listed above, we know that $\boldsymbol{w}_{2}$ (a) is the least important word and $\boldsymbol{w}_{3}$ (nice) is the most significant one, for $\boldsymbol{x}_{2}$ can tolerate the most noise while $\boldsymbol{x}_{3}$ can hardly stand any perturbation. During the training stage of WBCP, $\sigma$ is first initialized as the normal distribution and then normalized by the standard deviation of sentence embeddings before generating noise. And we set the epochs to $500$ for most datasets. Actually, most perturbation models converge within less than $200$ steps, but we choose more epochs for the time cost is acceptable. However, IMDB’s settings differ because of the large training and testing set. Therefore, we set epochs to 300 for it. As for the optimizer, we select AdamW with a learning rate of 0.01. ### III-B Attention supervision We obtain the $\sigma$s, the perturbation magnitudes, by optimizing Eq. (2) on the pre-trained model. If a word embedding $\boldsymbol{x}_{i}$ can tolerate more noise without impacting the semantics of input text, $\sigma_{i}$ will be larger, which means the word $\boldsymbol{x}_{i}$ is less important. Conversely, small $\sigma_{i}$ indicates that slight perturbations of word embedding $\boldsymbol{x}_{i}$ will lead to semantic drift and may affect the classification result. We can therefore use the perturbation magnitude to compute a word-level importance distribution as attention supervision information, as shown below: $\displaystyle\alpha^{\prime}_{i}$ $\displaystyle=1-\frac{\sigma_{i}}{\text{max}_{j}\\{\sigma_{j}\\}}$ (3) $\displaystyle{\boldsymbol{\widetilde{\alpha}}}$ $\displaystyle=\text{Softmax}({\boldsymbol{\alpha^{\prime}}})$ It is worth noting that our method generates sample-specific attention supervision, where the weight of each word is quantified according to the perturbation magnitude, instead of using the same importance weight for all words [5, 6]. Also, the quantification occurs in the embedding space rather than replacing the token with a predefined value, thus avoiding the OOD problem caused by masking schemes. Input: training dataset $D$, attention-based model $f(\cdot,\theta)$, the number of iterations $T$. 
Pre-train model $f(\cdot,\theta)$ on $D$ and update $\theta$ using Adam. for _$t=1,...T$_ do Fix $\theta$, and minimize WBCP objective function by Eq. (2) using Adam. Obtain the perturbation amplitude $\sigma$ for each sample in $D$. Calculate the attention supervision $\widetilde{\alpha}$ by Eq. (3) for each sample in $D$. Re-train model on $D$ with the attention supervision $\widetilde{\alpha}$ by Eq. (4) and update $\theta$ using Adam. end for Algorithm 1 Perturbation-based self-supervised attention ### III-C Perturbation-based Self-supervised Attention We do not use $\boldsymbol{\widetilde{\alpha}}$ to generate a new attention distribution to replace the original one $\boldsymbol{{\alpha}}$. Rather, we use it as a supervision target for the attention weights. We want the attention supervision to make the model notice more words that have an influence on the output. In this way, some low-frequency context words with great importance that would normally be ignored can be discovered by attention learning. In this section, we describe how to exploit the supervision information $\widetilde{\alpha}$ to guide the learning of model attention strengths. TABLE I: Detailed dataset statistics. Task | Dataset | Class | AvgLen | Train | Test ---|---|---|---|---|--- Sentence Classification | SST2 [40] | 2 | 19 | 6,920 | 1821 TREC [41] | 6 | 10 | 5,452 | 500 MR [42] | 2 | 19 | 10,662 | – CR [43] | 2 | 19 | 3,775 | – SUBJ [44] | 2 | 23 | 10,000 | – MPQA [45] | 2 | 3 | 10,606 | – Document Categorization | IMDB [46] | 2 | 280 | 25,000 | 25,000 Aspect-based Sentiment Analyis | REST [47] | 3 | 16 | 3,591 | 1,121 LAPTOP [47] | 3 | 17 | 2,292 | 639 TWITTER [48] | 3 | 19 | 6,248 | 692 TABLE II: Setup for Att-BiLSTM and Memory Net Task | Dataset | Dimension of hidden states | Dimension of attention context ---|---|---|--- Sentence Classification | SST2 [40] | 150 | 100 TREC [41] | 150 | 50 MR [42] | 150 | 100 CR [43] | 150 | 50 SUBJ [44] | 150 | 100 MPQA [45] | 150 | 100 Document Categorization | IMDB [46] | 150 | 300 Aspect-based Sentiment Analyis | REST [47] | 300 | 300 LAPTOP [47] | 300 | 300 TWITTER [48] | 300 | 300 Our method is shown in Algorithm 1. We first pre-train an attention-based model $f(\cdot,\theta)$ based on the classification dataset $D$. We then fix the model parameters $\theta$ and minimize the WBCP objective using Eq. (2) to obtain the perturbation amplitude $\sigma$ for each sample, and used to compute the attention supervision $\widetilde{\alpha}$ using Eq. (3). We then retrain the model using $\widetilde{\alpha}$ to guide the attention distribution $\alpha$ produced by the model. The above process can iterate $T$ times to capture the important distribution more accurately. The training objective function with attention supervision $\widetilde{\alpha}$ is defined as follows: $\displaystyle\begin{split}\mathcal{L}_{cls}=\frac{1}{M}{\sum}^{M}_{m=1}\hat{y}_{m}\log y_{m}+\gamma\text{KL}(\widetilde{\alpha}_{m}||\alpha_{m}),\end{split}$ (4) where $M$ is the number of samples, $\gamma$ is a hyperparameter that controls the strength of attention supervision, $\hat{y}_{m}$ and $y_{m}$ are the ground-truth label and predicted output for the $m$-th sample respectively. The first term is the Cross-Entropy Loss for classification, and the second term is the Kullback–Leibler Divergence between the distributions of attention $\alpha_{m}$ produced by model and attention supervision information $\widetilde{\alpha}_{m}$ for the $m^{th}$ sample. 
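A minimal sketch of Eqs. (3) and (4), in the same PyTorch-style notation as above (function names are ours; the per-sample outer loop of Algorithm 1 is only summarised in the comments):

```python
# Turn the learned per-token sigma into an attention target (Eq. 3) and use it
# to supervise the model's own attention during re-training (Eq. 4).
import torch
import torch.nn.functional as F


def supervision_from_sigma(sigma):
    """Eq. (3): alpha'_i = 1 - sigma_i / max_j sigma_j, followed by a softmax."""
    alpha_prime = 1.0 - sigma / sigma.max()
    return F.softmax(alpha_prime, dim=-1)


def supervised_loss(logits, label, model_attention, attention_target, gamma=1.0):
    """Eq. (4): cross-entropy plus gamma * KL(attention_target || model_attention)."""
    ce = F.cross_entropy(logits.unsqueeze(0), label.view(1))
    kl = F.kl_div(model_attention.clamp_min(1e-12).log(),    # kl_div expects log-probabilities
                  attention_target, reduction="sum")
    return ce + gamma * kl


# Algorithm 1 in brief: pre-train f(., theta); for t = 1..T, (i) freeze theta and fit
# sigma per training sample with the WBCP objective, (ii) convert each sigma into an
# attention target with supervision_from_sigma, (iii) re-train f with supervised_loss.

if __name__ == "__main__":
    sigma = torch.tensor([0.9, 1.4, 0.2, 1.1])    # noise scales learned by WBCP for 4 tokens
    target = supervision_from_sigma(sigma)        # the token with the smallest sigma gets the most weight
    model_att = torch.full((4,), 0.25)            # the model's current (uniform) attention
    logits = torch.tensor([0.3, 1.2])
    label = torch.tensor(1)
    print(target)
    print(supervised_loss(logits, label, model_att, target, gamma=1.0))
```

In the toy example, the third token (smallest $\sigma$) receives the largest target weight, so the KL term pushes the model's attention towards it.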
It’s worth noting that our method requires extra computation, but the time cost is usually acceptable because nearly all of the procedure can be run in parallel. The analysis is given in Appendix A. ## IV Experiments We tried PBSA on several text classification tasks, including sentence classification, document categorization, and aspect-level sentiment analysis. Experimental results demonstrate that PBSA consistently enhances the performance and robustness of various attention-based baselines, and outperforms some strong models following self-supervised attention learning. Furthermore, a visualization analysis confirms that our model is capable of generating high-quality attention for target tasks. We aim to answer the following questions: * RQ1: Does PBSA improve model accuracy? * RQ2: Is PBSA more effective than other approaches? * RQ3: How do hyperparameters affect the results? * RQ4: How does PBSA work? ### IV-A Datasets and Baselines The statistics of the widely-studied datasets used by the different tasks are listed in Table I. These datasets cover different topics, such as movie reviews, customer reviews, social reviews, and question types. In particular, since there is no standard partition of MR, CR, SUBJ, and MPQA, we follow a 7:1:2 data-splitting protocol for them to obtain the training, validation, and test sets. For the aspect-level tasks, we remove the instances with conflicting sentiment labels in Laptop and Restaurant, as implemented in [49]. As for hyperparameters, we use a grid search to find the optimal values of $\gamma$ and $T$ for each dataset, from the sets $\gamma\in\\{0.05,0.1,1.0,2.0,10,100\\}$ and $T\in\\{1,2,3,4\\}$. We use the Adam optimizer with a learning rate of 0.001, and the batch size is set to 64. We use Att-BiLSTM, Memory Network, BERT, DEBERTA, ELECTRA, Att-BERT, BERTABSA, and Att-BERTABSA as baselines and explain the details about them in Appendix B. The setup of hyperparameters for Att-BiLSTM and Memory Net is listed in Table II. To make a fair comparison with other algorithms, we set our hyperparameters the same as theirs. TABLE III: The performance of PBSA on the document-level and sentence-level classification. 
Model | IMDB | SST2 | TREC | MR | CR | SUBJ | MPQA | Average ---|---|---|---|---|---|---|---|--- Att-BiLSTM | 87.21 | 83.42 | 90.60 | 77.04 | 76.82 | 89.82 | 70.59 | 82.20 Att-BiLSTM+PBSA | 89.14 | 85.72 | 92.20 | 79.05 | 77.64 | 90.53 | 71.31 | 83.65 Att-BERT(base) | 92.53 | 91.43 | 96.60 | 79.26 | 89.06 | 94.30 | 89.69 | 90.41 Att-BERT(base)+PBSA | 92.61 | 91.93 | 97.20 | 79.97 | 89.38 | 94.76 | 90.21 | 90.86 BERT(base) | 92.92 | 91.71 | 96.60 | 85.47 | 89.42 | 96.30 | 89.59 | 91.71 BERT(base)+PBSA | 93.48 | 92.20 | 97.80 | 86.08 | 90.21 | 97.50 | 90.57 | 92.55 DEBERTA(base) | 91.14 | 92.69 | 96.20 | 86.64 | 91.01 | 95.30 | 89.74 | 91.82 DEBERTA(base)+PBSA | 91.68 | 93.02 | 96.80 | 87.18 | 91.80 | 95.70 | 90.82 | 92.43 ELECTRA(base) | 93.48 | 94.67 | 96.8 | 89.27 | 92.2 | 96.75 | 90.9 | 93.44 ELECTRA(base)+PBSA | 93.87 | 95.43 | 97.40 | 89.88 | 92.99 | 97.30 | 91.55 | 94.06 TABLE IV: Experimental accuracy on the document-level and sentence-level classification compared with others Model | IMDB | SST2 | TREC | MR | CR | SUBJ | MPQA | Average ---|---|---|---|---|---|---|---|--- Att-BiLSTM | 87.21 | 83.42 | 90.60 | 77.04 | 76.82 | 89.82 | 70.59 | 82.20 +Gradient | 86.79 | 85.06 | 91.20 | 77.60 | 76.54 | 89.82 | 70.76 | 82.53 +SANA [6] | 88.03 | 84.35 | - | - | - | - | - | - +PBSA | 89.14 | 85.72 | 92.20 | 79.05 | 77.64 | 90.53 | 71.31 | 83.65 TABLE V: The performance of PBSA on the aspect-level classification. Models | REST | LAPTOP | TWITTER ---|---|---|--- | Accuracy | Macro-F1 | Accuracy | Macro-F1 | Accuracy | Macro-F1 MN [50] | 77.32 | 65.88 | 68.90 | 63.28 | 67.78 | 66.18 MN (Ours) | 79.89 | 65.89 | 72.68 | 61.97 | 68.34 | 66.23 +PBSA | 83.98 | 70.84 | 75.75 | 67.21 | 72.10 | 69.64 BERTABSA | 79.80 | 71.37 | 79.38 | 75.69 | 76.01 | 74.52 +PBSA | 79.89 | 71.59 | 79.51 | 75.87 | 76.11 | 74.69 Att-BERTABSA | 83.29 | 75.87 | 77.98 | 75.02 | 73.99 | 71.23 +PBSA | 83.41 | 76.70 | 78.65 | 75.53 | 74.45 | 72.88 TABLE VI: Experimental results on aspect-level tasks compared with others. Models | REST | LAPTOP | TWITTER ---|---|---|--- | Accuracy | Macro-F1 | Accuracy | Macro-F1 | Accuracy | Macro-F1 MN [50] | 77.32 | 65.88 | 68.90 | 63.28 | 67.78 | 66.18 MN (Ours) | 79.89 | 65.89 | 72.68 | 61.97 | 68.34 | 66.23 +Gradient [51] | 76.85 | 60.06 | 71.11 | 63.53 | 67.77 | 64.91 +AWAS [5] | 78.75 | 69.15 | 70.53 | 65.24 | 69.64 | 67.88 +Boosting [32] | 77.66 | 66.23 | 69.28 | 64.17 | 68.14 | 67.12 +Adaboost [32] | 76.77 | 62.29 | 67.88 | 60.52 | 66.96 | 65.09 +PGAS [32] | 78.98 | 69.42 | 70.84 | 65.58 | 69.78 | 67.80 +PBSA | 83.98 | 70.84 | 75.75 | 67.21 | 72.10 | 69.64 ### IV-B RQ1: Sentence-level and Document-level Classification Figure 3: The visualization result of several samples on SST2 test set. To verify that PBSA can improve the performance of the attention-based model, in this section, we use the classic Att-BiLSTM [52] and the pre-trained models BERT [28], DEBERTA [29], and ELECTRA [30] as the baselines. It is worth noting that Transformers use multiple-layer and multiple-head attention, so selecting the suitable head as the supervised target is difficult [32]. Hence, how to effectively combine its multiple-layer and multiple-head attention with our method is an exciting and valuable question. The previous researchers have yet to find a good way to apply their methods to Transformers, and we have made some explorations in this field, which is also one of our innovations. 
We explore two simple strategies to combine our approach with Transformers, 1) We first add a scaled dot-product attention layer to the output of BERT to derive a fixed-sized sentence representation for classification, and we call this model Att-BERT for short. 2) We also try a simple but effective way to combine the internal multi-head attention in Transformers with our method. Specifically, we average the multi-head attention of all the layers and compress the attention matrix to a vector to be guided by our mechanism. Table III reports the experimental results on the seven datasets of sentence classification and document categorization. We observe that our method consistently helps improve the accuracy of the baseline on all the datasets. The average accuracy of our approach on the five baselines across seven datasets are 83.65, 90.86, 92.55, 92.43, and 94.06, an improvement of 1.44%, 0.45%, 0.83%, 0.66%, and 0.66% over the baselines (82.21, 90.41, 91.71, 91.82, and 93.44). The results demonstrate that our approach delivers significant performance improvements over the baselines. It also indicates that the current model limits the potential of attention mechanisms when without any supervision information. However, PBSA can mine the potential important words and then guide the attention mechanism of the model to learn a good inductive bias. However, we find the improvements on pre-trained models are relatively marginal compared with smaller models like Att-BiLSTM. The phenomenon indicates that the pre-training on large corpora relieves the attention bias to some extent, which is further verified in Section IV-D. Moreover, we also find the size of the pre-trained model also impacts the performance of PBSA. We conduct the experiments on BERT-small and ELECTRA-small (shown in Table VII), and PBSA gains greater improvements under the same settings. To sum up, the attention bias may be more likely to appear in smaller-size models and smaller-scaled datasets. And the performance of PBSA will be more significant in these scenarios. TABLE VII: The performance of PBSA on small-size pre-trained models. Model | IMDB | SST2 | TREC | MR | CR | SUBJ | MPQA | Average ---|---|---|---|---|---|---|---|--- BERT(small) | 90.81 | 88.85 | 95.60 | 81.16 | 81.61 | 94.45 | 87.23 | 88.53 BERT(small)+PBSA | 91.73 | 90.17 | 97.40 | 82.33 | 83.07 | 96.20 | 88.54 | 89.92 ELECTRA(small) | 92.37 | 91.21 | 96.00 | 83.60 | 87.30 | 95.70 | 88.97 | 90.74 ELECTRA(small)+PBSA | 93.35 | 92.20 | 96.40 | 84.72 | 88.62 | 96.65 | 90.62 | 91.79 Figure 4: The chart of the fluctuations of accuracy when we change the value of the sample ratio. Each triangle point and circular point corresponds to the accuracy of BERT and BERT+PBSA under the current sample ratio, respectively. ### IV-C RQ1: Aspect-level Sentiment Analyis To further verify the effectiveness of our approach, we apply PBSA into MN [31, 50], BERTABSA [53], and Att-BERTABSA [32]. Both BERTABSA and Att-BERTABSA are typical and simple ways to apply BERT to aspect-level classification tasks. The difference is that BERTABSA directly uses the hidden states of the aspect words to classify, while Att-BERTABSA adds an attention layer to the output of BERT. To show that our method truly improves the results, we only use the most critical parts of the model without any other tricks or mechanisms (e.g. the gating mechanism). 
We conduct experiments on three benchmark datasets of aspect-based sentiment analysis and PBSA outperforms all the baselines on all datasets both in accuracy and Macro-F1. As shown in Table V, compared with other tasks, PBSA has a more significant improvement on these small-scale datasets, indicating that the original attention lacks a good inductive bias due to limited labeled data. With the help of PBSA, the robustness of the model can be improved effectively. ### IV-D RQ1: Performances under Different Sample Ratios To verify the performance of our approach on low-resource tasks, we conduct experiments on different values of sample ratio. We get sample sets from the original datasets with sample ratio $\in\\{0.001,0.005,0.01,0.05,0.1\\}$, and measure the performances of BERT and BERT+PBSA on these sample sets according to their accuracy. As shown in Figure 4, the performances of BERT and BERT+PBSA have the same trend. As the accuracy of BERT increases, the accuracy of BERT+PBSA increases and vice versa. As explained in Section III-C, the attention supervision information is obtained based on the pre-trained model, whose performance has a direct influence on the quality of the attention supervision information and further affects the results of re-training. That may explain the strong correlation between the performance of BERT and BERT+PBSA. The improvement is more prominent when the ratio is in the middle range (sample ratio$\in(0.005,0.05)$). As listed above, when the ratio is small, the pre-trained model has a bad performance, which results in meaningless attention supervision information and further limits the performance of PBSA. As the value of the sample ratio increases, the original model performs better, and the quality of attention supervision information is enhanced, and then PBSA improves the model even more. However, the improvement is not without limitation. As the value of the sample ratio exceeds a certain value, the phenomenon of attention bias is no longer evident, and the improvement reduces. It may be because BERT is pre-trained on a large-scale corpus, and when we fine-tune it, its attention fits well on these ’larger-scale’ sample sets, which makes the original model has scant room for improvement. To sum up, the distribution of the attention parameters is not stable enough when the data is limited or the model size is small, which can be refined by PBSA. And the performance and lifting area of PBSA are closely related to the performance of the baseline. ### IV-E RQ2: Comparison with other methods On the tasks listed above, we compare our method with other advanced self- supervised attention learning approaches. SANA [6] generates counterfactuals by a masking scheme and measures the difference in the softmax probability of the model between the counterfactual and original sample as an indicator of important words. AWAS [5] and PGAS [32] progressively mask the word with the largest attention weight or partial gradient. Most of these works don’t publish their critical code and do their experiment only on certain specific tasks, so we directly compare our algorithm with their best results published on different tasks respectively. To make a fair comparison, we use the same word embedding and the same settings of hidden size to reproduce their baselines, which is listed in Table II. 
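For reference, the masking scheme that these competing methods build on can be sketched as follows (a hedged illustration in the spirit of SANA's counterfactuals and the leave-one-out idea, not any of the compared implementations); here `model` is an assumed callable that maps token ids to class probabilities.

```python
import torch

def masking_importance(model, token_ids, mask_token_id, target_label):
    """Score each position by the drop in the target-class probability
    when that token is replaced with the mask token."""
    with torch.no_grad():
        base = model(token_ids)[0, target_label]
        scores = []
        for i in range(token_ids.size(1)):
            masked = token_ids.clone()
            masked[0, i] = mask_token_id
            scores.append((base - model(masked)[0, target_label]).item())
    return scores
```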
On the document-level and sentence-level tasks (Table IV), PBSA is superior to SANA by 1.11% and 1.37%, which verifies that the word-based concurrent perturbation mines the importance distribution of words more accurately than the masking scheme. On the aspect-level tasks (Table VI), our method improves the model more than AWAS and PGAS do. As mentioned in the Introduction (Section I), our method generates word-specific attention supervision, whereas the others treat all important words equally without discrimination. We speculate that this is one of the main reasons for our improvement.

Figure 5: Fluctuations of Macro-F1 as the hyperparameter values change.

### IV-F RQ2: Comparison with human intuition methods

From the perspective of human intuition, gradient-based methods and leave-one-out methods are usually used to improve the interpretability of a model. The current self-supervised attention learning methods are mostly based on word masking, which can be seen as a variation of the leave-one-out approach. We also try using the gradient-based method [51] to generate supervision information. As shown in Table IV and Table VI, the gradient-based method performs poorly on most of the datasets, especially the aspect-level ones. These results demonstrate that although the gradient-based method can improve the interpretability of the model, it does not necessarily improve its performance. Our method, by contrast, enhances interpretability while also improving performance.

### IV-G RQ3: Hyperparameter sensitivity

As shown in Figure 5, our method achieves the best results on REST and TWITTER at $T=2$ and $T=1$, respectively. As $T$ increases, the performance first improves and then declines due to over-fitting. Once the best result is reached, performance no longer changes sharply with further increases of $T$. In practice, we find that a single iteration already achieves promising results. The hyperparameter $\lambda$ controls the perturbation degree of WBCP; when $\lambda$ is too large, performance deteriorates because too much noise is injected. In all of our experiments, we set $\lambda$ to $0.1$. The hyperparameter $\gamma$ controls the strength of attention supervision; when $\gamma$ is too large, the alignment between the model attention and the perturbation attention is over-penalized, which may hurt the model's internal reasoning process. Compared with $\gamma$, small changes in $\lambda$ have less effect on the results, but the term $-{\sum}_{i=1}^{n}\log({\sigma_{i}})$ cannot be removed from our loss function. Without it, the model would try not to add any noise to $x$, and PBSA would obtain a meaningless supervision distribution that varies dramatically for the same sentence from run to run (the distribution is supposed to be essentially unchanged for the same sentence). The results are more sensitive to $\gamma$, which determines whether the models can reach their peak performance.

### IV-H RQ4: Visualization analysis

In this section, we select several attention visualizations from the SST2 test set to explain how PBSA works. As shown in Figure 3, PBSA makes the model pay more attention to important but low-frequency words, reduces the focus on high-frequency words that do not affect the results, increases the difference in weight between words with conflicting meanings, and increases sensitivity to adversative relations in sentences.
#### Pay more attention to important but low-frequency words Some words do have important effects on the results, but if they do not appear frequently enough then the traditional attention mechanism may not pay enough attention to them. As shown in Figure 3-(1), the word drowsy has an important influence on the emotional polarity of the film. However, it is a low- frequency word in the corpus, which makes the attention mechanisms do not allocate enough weights to it, resulting in a classification error. After being trained by PBSA, the model can assign enough weights to drowsy, which changes the result from false to correct. #### Reduce the focus on high-frequency words that do not affect the results In baseline, some high-frequency words which do not contain any emotional polarity usually get high weights, while some important words that should have been focused on are ignored. As Figure 3-(2) shows, romantic and doesn’t are words with strong emotional polarity. However, the baseline assigns greater weights to other high-frequency words (e.g., between) with no emotional polarity, and thus ignores the words romantic and doesn’t which results in misclassification. After being trained by PBSA, the model reduces the focus on between and the weights allocated to the significant words increase correspondingly, which turns the result. #### Increase the difference in weight between words with conflicting meanings As shown in Figure 3-(3), the baseline focuses on too many words: horror, revenge, perfect, relentless, torture, and so on. Maybe all of the words are important but the meanings of them are conflicting, which interferes with the classification task. The model feels confused because it does not know how to make a prediction according to so many emotional words. After being trained by PBSA, the difference in the weight of emotional words becomes larger, which makes it get the right result. It should be noted that the entropy of attention distribution may not decrease because PBSA keeps attention to important words while diluting the distribution of the other words. #### Be more sensitive to adversative relations in sentences If there are adversative conjunctions (e.g., but, however, and so on) in the sentence, it is likely to express two opposite emotions before and after the adversative conjunction. This is when the model needs to keenly feel the changes of emotional polarity in the sentence. From this aspect, the model is also supposed to assign higher weights to those adversative conjunctions. Judging from our results, it is unfortunate that the original attention mechanism tends to ignore these conjunctions for they seem to have no effect on results outwardly. As Figure 3-(4) and Figure 3-(5) show, the baseline ignores the word but and results in errors. After being trained by PBSA, the baseline pays more attention to but which makes both of the emotions before and after the adversative conjunction can be taken into consideration. ## V Conclusions and future work In this paper, we propose a novel self-supervised attention learning method based on word-based concurrent perturbation. The algorithm adds as much as noise to each word in the sentence under the premise of unchanged semantics to mine the supervision information to guide attention learning. Our experiments demonstrate that our method achieves significant performance improvements over the baselines on several text classification tasks. 
Moreover, we use several visualization samples to interpret how our method guides the internal reasoning process of models. It is worth to note that we combine our method with transformers, which is not implemented in most of the previous attention guiding methods. Our strategies may not be the best ways to apply our algorithm into transformers, but they still prove the effectiveness of the proposed method. We will try to find more appropriate and effective strategies and incorporate our algorithm into other NLP tasks in the future. ## References * [1] D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” _arXiv preprint arXiv:1409.0473_ , 2014\. * [2] M.-T. Luong, H. Pham, and C. D. Manning, “Effective approaches to attention-based neural machine translation,” in _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_ , 2015, pp. 1412–1421. * [3] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. u. Kaiser, and I. Polosukhin, “Attention is all you need,” in _Advances in Neural Information Processing Systems_ , I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, Eds., vol. 30, 2017. * [4] Z. Lin, M. Feng, C. N. d. Santos, M. Yu, B. Xiang, B. Zhou, and Y. Bengio, “A structured self-attentive sentence embedding,” _arXiv preprint arXiv:1703.03130_ , 2017. * [5] J. Tang, Z. Lu, J. Su, Y. Ge, L. Song, L. Sun, and J. Luo, “Progressive self-supervised attention learning for aspect-level sentiment analysis,” in _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , 2019, pp. 557–566. * [6] S. Choi, H. Park, J. Yeo, and S.-w. Hwang, “Less is more: Attention supervision with counterfactuals for text classification,” in _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , 2020, pp. 6695–6704. * [7] Z. Yang, D. Yang, C. Dyer, X. He, A. Smola, and E. Hovy, “Hierarchical attention networks for document classification,” in _Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies_ , 2016, pp. 1480–1489. * [8] Q. Chen, X. Zhu, Z.-H. Ling, S. Wei, H. Jiang, and D. Inkpen, “Enhanced lstm for natural language inference,” in _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , 2017, pp. 1657–1668. * [9] M. Barrett, J. Bingel, N. Hollenstein, M. Rei, and A. Søgaard, “Sequence classification with human attention,” in _Proceedings of the 22nd Conference on Computational Natural Language Learning_ , 2018, pp. 302–312. * [10] Y. Bao, S. Chang, M. Yu, and R. Barzilay, “Deriving machine attention from human rationales,” in _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , 2018, pp. 1903–1913. * [11] E. Wallace, T. Zhao, S. Feng, and S. Singh, “Concealed data poisoning attacks on NLP models,” in _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_. Online: Association for Computational Linguistics, Jun. 2021, pp. 139–150. * [12] H. Xu, S. Li, R. Hu, S. Li, and S. Gao, “From random to supervised: A novel dropout mechanism integrated with global information,” in _Proceedings of the 22nd Conference on Computational Natural Language Learning_ , 2018, pp. 573–582. * [13] X. Li, L. Bing, W. Lam, and B. 
Shi, “Transformation networks for target-oriented sentiment classification,” in _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , 2018, pp. 946–956. * [14] Y. Zhang, I. Marshall, and B. C. Wallace, “Rationale-augmented convolutional neural networks for text classification,” in _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_ , 2016, pp. 795–804. * [15] O.-M. Camburu, T. Rocktäschel, T. Lukasiewicz, and P. Blunsom, “e-snli: natural language inference with natural language explanations,” in _Proceedings of the 32nd International Conference on Neural Information Processing Systems_ , 2018, pp. 9560–9572. * [16] E. Sood, S. Tannert, P. Mueller, and A. Bulling, “Improving natural language processing tasks with human gaze-guided neural attention,” in _Advances in Neural Information Processing Systems_ , vol. 33, 2020, pp. 6327–6341. * [17] E. Sood, S. Tannert, D. Frassinelli, A. Bulling, and N. T. Vu, “Interpreting attention models with human visual attention in machine reading comprehension,” in _Proceedings of the 24th Conference on Computational Natural Language Learning_ , 2020, pp. 12–25. * [18] J. Malmaud, R. Levy, and Y. Berzak, “Bridging information-seeking human gaze and machine reading comprehension,” in _Proceedings of the 24th Conference on Computational Natural Language Learning_ , 2020, pp. 142–152. * [19] C. Sen, T. Hartvigsen, B. Yin, X. Kong, and E. Rundensteiner, “Human attention maps for text classification: Do humans and neural networks focus on the same words?” in _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , 2020, pp. 4596–4608. * [20] J. Li, W. Monroe, and D. Jurafsky, “Understanding neural networks through representation erasure,” _arXiv preprint arXiv:1612.08220_ , 2016. * [21] S. Choi, H. Park, and S.-w. Hwang, “Counterfactual attention supervision,” in _2019 IEEE International Conference on Data Mining (ICDM)_. IEEE, 2019, pp. 1006–1011. * [22] D. Hendrycks and K. Gimpel, “A baseline for detecting misclassified and out-of-distribution examples in neural networks,” _arXiv preprint arXiv:1610.02136_ , 2016. * [23] C.-H. Chang, E. Creager, A. Goldenberg, and D. Duvenaud, “Explaining image classifiers by counterfactual generation,” in _International Conference on Learning Representations_ , 2018. * [24] J. Yi, E. Kim, S. Kim, and S. Yoon, “Information-theoretic visual explanation for black-box classifiers,” _arXiv preprint arXiv:2009.11150_ , 2020. * [25] M. Wu, M. Wicker, W. Ruan, X. Huang, and M. Kwiatkowska, “A game-based approximate verification of deep neural networks with provable guarantees,” _Theoretical Computer Science_ , vol. 807, pp. 298–329, 2020. * [26] E. La Malfa, M. Wu, L. Laurenti, B. Wang, A. Hartshorn, and M. Kwiatkowska, “Assessing robustness of text classification through maximal safe radius computation,” in _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings_ , 2020, pp. 2949–2968. * [27] T.-W. Weng, H. Zhang, P.-Y. Chen, J. Yi, D. Su, Y. Gao, C.-J. Hsieh, and L. Daniel, “Evaluating the robustness of neural networks: An extreme value theory approach,” in _International Conference on Learning Representations_ , 2018. * [28] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” _arXiv preprint arXiv:1810.04805_ , 2018. * [29] P. He, X. Liu, J. Gao, and W. 
Chen, “Deberta: Decoding-enhanced bert with disentangled attention,” _arXiv preprint arXiv:2006.03654_ , 2020. * [30] K. Clark, M.-T. Luong, Q. V. Le, and C. D. Manning, “Electra: Pre-training text encoders as discriminators rather than generators,” _arXiv preprint arXiv:2003.10555_ , 2020. * [31] D. Tang, B. Qin, and T. Liu, “Aspect level sentiment classification with deep memory network,” in _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_ , 2016, pp. 214–224. * [32] J. Su, J. Tang, H. Jiang, Z. Lu, Y. Ge, L. Song, D. Xiong, L. Sun, and J. Luo, “Enhanced aspect-based sentiment analysis models with progressive self-supervised attention learning,” _Artificial Intelligence_ , vol. 296, p. 103477, 2021. * [33] A. Jacovi and Y. Goldberg, “Towards faithfully interpretable nlp systems: How should we define and evaluate faithfulness?” in _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , 2020, pp. 4198–4205. * [34] H. Kamigaito, K. Hayashi, T. Hirao, H. Takamura, M. Okumura, and M. Nagata, “Supervised attention for sequence-to-sequence constituency parsing,” in _Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)_ , 2017, pp. 7–12. * [35] Y. Zou, T. Gui, Q. Zhang, and X.-J. Huang, “A lexicon-based supervised attention model for neural sentiment analysis,” in _Proceedings of the 27th international conference on computational linguistics_ , 2018, pp. 868–877. * [36] M. Nguyen and T. H. Nguyen, “Who is killed by police: Introducing supervised attention for hierarchical lstms,” in _Proceedings of the 27th International Conference on Computational Linguistics_ , 2018, pp. 2277–2287. * [37] F. Zhao, Z. Wu, and X. Dai, “Attention transfer network for aspect-level sentiment classification,” in _Proceedings of the 28th International Conference on Computational Linguistics_ , 2020, pp. 811–821. * [38] X. Hu, X. Kong, and K. Tu, “A multi-grained self-interpretable symbolic-neural model for single/multi-labeled text classification,” _arXiv preprint arXiv:2303.02860_ , 2023. * [39] S. Jain and B. C. Wallace, “Attention is not explanation,” in _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , 2019, pp. 3543–3556. * [40] R. Socher, A. Perelygin, J. Wu, J. Chuang, C. D. Manning, A. Y. Ng, and C. Potts, “Recursive deep models for semantic compositionality over a sentiment treebank,” in _Proceedings of the 2013 conference on empirical methods in natural language processing_ , 2013, pp. 1631–1642. * [41] X. Li and D. Roth, “Learning question classifiers,” in _COLING 2002: The 19th International Conference on Computational Linguistics_ , 2002. * [42] B. Pang and L. Lee, “Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales,” _arXiv preprint cs/0506075_ , 2005. * [43] M. Hu and B. Liu, “Mining and summarizing customer reviews,” in _Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining_ , 2004, pp. 168–177. * [44] B. Pang and L. Lee, “A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts,” _arXiv preprint cs/0409058_ , 2004. * [45] J. Wiebe, T. Wilson, and C. Cardie, “Annotating expressions of opinions and emotions in language,” _Language resources and evaluation_ , vol. 39, no. 2, pp. 165–210, 2005. * [46] A. 
Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts, “Learning word vectors for sentiment analysis,” in _Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies_ , 2011, pp. 142–150. * [47] M. Pontiki, H. Papageorgiou, D. Galanis, I. Androutsopoulos, J. Pavlopoulos, and S. Manandhar, “Semeval-2014 task 4: Aspect based sentiment analysis,” _SemEval 2014_ , p. 27, 2014. * [48] L. Dong, F. Wei, C. Tan, D. Tang, M. Zhou, and K. Xu, “Adaptive recursive neural network for target-dependent twitter sentiment classification,” in _Proceedings of the 52nd annual meeting of the association for computational linguistics (volume 2: Short papers)_ , 2014, pp. 49–54. * [49] C. Peng, Z. Sun, L. Bing, and Y. Wei, “Recurrent attention network on memory for aspect sentiment analysis,” in _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , 2017. * [50] S. Wang, S. Mazumder, B. Liu, M. Zhou, and Y. Chang, “Target-sensitive memory networks for aspect sentiment classification,” in _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , 2018, pp. 957–967. * [51] S. Serrano and N. A. Smith, “Is attention interpretable?” in _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , 2019, pp. 2931–2951. * [52] P. Zhou, W. Shi, J. Tian, Z. Qi, B. Li, H. Hao, and B. Xu, “Attention-based bidirectional long short-term memory networks for relation classification,” in _Proceedings of the 54th annual meeting of the association for computational linguistics (volume 2: Short papers)_ , 2016, pp. 207–212. * [53] J. Dai, H. Yan, T. Sun, P. Liu, and X. Qiu, “Does syntax matter? A strong baseline for aspect-based sentiment analysis with roberta,” _CoRR_ , vol. abs/2104.04986, 2021. [Online]. Available: https://arxiv.org/abs/2104.04986 * [54] T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean, “Distributed representations of words and phrases and their compositionality arxiv : 1310 . 4546v1 [ cs . cl ] 16 oct 2013,” 2013. ## Appendix A Analysis of the Extra Computations The extra computations mainly come from the process of generating supervision information (Pre-training and re-training are the same as the common training method). The extra time required depends on the size of the model, the number of the samples and the epochs of training perturbation model. It is acceptable for most datasets because the whole process is parallel. All the sub- perturbation models have independent samples and training processes, and they just share the same pre-trained model whose parameters are fixed during the generating process. Therefore, the whole process can be handled concurrently if having enough GPU resources. For SST2, TREC, MR, CR, SUBJ, and MPQA, the generating process (batch-size=64) can be finished on 2 * GTX 3090 within less than 15 min. Some small datasets (e.g. SST2, TREC and CR) only need 8 min to generate supervison information. However, as for IMDB, the number of samples is enormous, and their average length is too long. Therefore, we must use several GPUs (2 * GTX 3090 and 4 * GTX 1080ti) to simultaneously deal with each part of the dataset to finish the task in a limited time. ## Appendix B Details of Baselines The details of our baselines are listed below. Figure 6: The illustration of Att-BiLSTM. Figure 7: The illustration of Memory Net. ### B-A Att-BiLSTM Figure 7 shows the structure of Att-BiLSTM. 
Att-BiLSTM first maps each word to a pre-trained skip-gram [54] word embedding and then uses a 1-layer BiLSTM with a scaled dot-product attention mechanism to obtain sentence-level hidden states, which are finally used for classification.

### B-B Memory Network

Figure 7 shows the structure of MN. Memory Network uses an iteratively updated vector $A$ (initialized as the aspect embedding) and the context embedding to generate the attention distribution, which is then used to select the important information from the context embedding and to iteratively update the vector $A$.

### B-C Att-BERT

Figure 8 shows the structure of Att-BERT. We add a scaled dot-product attention layer to the output of BERT and use the output of the attention layer for classification.

### B-D BERTABSA

Figure 9 shows the structure of BERTABSA. We input the whole sentence to obtain the contextual representation of the aspect words, which is used directly for classification. To verify that our method truly improves the results, we remove the gating mechanism and use bert-base-uncased instead of bert-large-uncased.

### B-E Att-BERTABSA

Figure 10 shows the structure of Att-BERTABSA. Its structure is similar to Att-BERT in that it adds a scaled dot-product attention layer after the output of BERT. However, unlike Att-BERT, the hidden states of the context words and the aspect words are regarded as $Q$ and $K$, respectively, and are fed into the attention layer separately. To verify the effectiveness of our method, we make the same modifications to Att-BERTABSA.

Figure 8: The illustration of Att-BERT. Figure 9: The illustration of BERTABSA. Figure 10: The illustration of Att-BERTABSA.
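As a concrete reading of the Att-BiLSTM described in Appendix B-A, the following sketch wires the pieces together; the hidden size, the learned query vector used for the scaled dot-product scores, and the binary label space are illustrative assumptions rather than the authors' exact configuration. The returned attention weights `alpha` are what PBSA supervises during re-training.

```python
import torch
import torch.nn as nn

class AttBiLSTM(nn.Module):
    """Skip-gram embeddings -> 1-layer BiLSTM -> scaled dot-product attention -> classifier."""
    def __init__(self, embedding_matrix, hidden_size=150, num_labels=2):
        super().__init__()
        self.embedding = nn.Embedding.from_pretrained(embedding_matrix, freeze=False)
        self.bilstm = nn.LSTM(embedding_matrix.size(1), hidden_size, num_layers=1,
                              batch_first=True, bidirectional=True)
        self.query = nn.Parameter(torch.randn(2 * hidden_size))
        self.classifier = nn.Linear(2 * hidden_size, num_labels)

    def forward(self, token_ids, mask):
        h, _ = self.bilstm(self.embedding(token_ids))            # (batch, seq, 2*hidden)
        scores = (h @ self.query) / h.size(-1) ** 0.5            # scaled dot-product scores
        scores = scores.masked_fill(mask == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)                    # (batch, seq) attention weights
        sentence = torch.einsum("bs,bsh->bh", alpha, h)          # attention-pooled sentence vector
        return self.classifier(sentence), alpha
```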
Accepted by the IEEE/CVF International Conference on Computer Vision 2023 (ICCV'23)

# SAFARI: Versatile and Efficient Evaluations for Robustness of Interpretability

Wei Huang1,2 Xingyu Zhao2,3 Gaojie Jin2,4 Xiaowei Huang2 1Purple Mountain Laboratories 2University of Liverpool 3WMG, University of Warwick 4Institute of Software, CAS <EMAIL_ADDRESS>

###### Abstract

The lack of interpretability of Deep Learning (DL) is a barrier to trustworthy AI. Despite great efforts made by the Explainable AI (XAI) community, explanations lack robustness: indistinguishable input perturbations may lead to different XAI results. Thus, it is vital to assess how robust DL interpretability is, given an XAI method. In this paper, we identify several challenges that the state-of-the-art is unable to cope with collectively: i) existing metrics are not comprehensive; ii) XAI techniques are highly heterogeneous; iii) misinterpretations are normally rare events. To tackle these challenges, we introduce two black-box evaluation methods, concerning the worst-case interpretation discrepancy and a probabilistic notion of how robust in general, respectively. A Genetic Algorithm (GA) with a bespoke fitness function is used to solve the constrained optimisation for efficient worst-case evaluation. Subset Simulation (SS), dedicated to estimating rare-event probabilities, is used for evaluating overall robustness. Experiments show that the accuracy, sensitivity, and efficiency of our methods outperform the state of the art. Finally, we demonstrate two applications of our methods: ranking robust XAI methods and selecting training schemes to improve both classification and interpretation robustness.

## 1 Introduction

A key impediment to the wide adoption of Deep Learning (DL) is its perceived lack of transparency. Explainable AI (XAI) is a research area that aims at providing visibility into how a DL model makes decisions, and thus enables the use of DL in vision-based safety-critical applications, such as autonomous driving [31] and medical image analysis [42]. Typically, XAI techniques visualise which input features are significant to the DL model's prediction via attribution maps [4, 20]. However, interpretations suffer from a lack of robustness (despite the subtle difference between interpretability and explainability, we use both terms interchangeably as attributes of DL models in this paper; following [30], we reserve the terms explanation/interpretation for individual predictions). Many works have shown that a small perturbation can manipulate the interpretation while keeping the model's prediction unchanged, e.g., [18, 24]. Moreover, there exists the misinterpretation of Adversarial Examples (AEs) [49]: adversarial inputs are misclassified by the DL model (without loss of generality, we assume throughout this paper that the DL model is a classifier unless stated otherwise), yet are interpreted highly similarly to their benign counterparts. Fig. 1 illustrates examples of these two types of misinterpretations. In this regard, it is vital to assess how robust the coupled DL model and XAI method are against input perturbations, which motivates this work.

Figure 1: Two types of misinterpretations after perturbation

To answer this question, the first challenge we recognise is the lack of diverse evaluation metrics in the state-of-the-art.
Most of the existing works focus on adversarial attack [19] and defence [15, 41] on explanations, which essentially answer the binary question of whether there exist any adversarial interpretation in some perturbation distance. On the other hand, evaluation methods mainly study worst-case metrics, e.g., the maximum change in the resulting explanations when perturbations are made [3] and local Lipschitz continuity as the sensitivity to perturbations [47]. However, for systematic evaluation, we also need a notion of how robust in general the model is whenever a misinterpretation can be found (in line with the insight gained from evaluating classification robustness [44]). We introduce two metrics concerning the worst-case interpretation discrepancy and a probabilistic metric to calculate the proportion of misinterpretations in the local norm-ball around the original input, that complement each other from different perspectives. Second, XAI techniques are so heterogeneous that no existing white-box evaluation methods are generic enough to be applicable to all common ones. That said, black-box methods, that only access inputs and outputs of the coupled DL model and XAI tool without requiring any internal information, are promising for all kinds of XAI techniques (including perturbation-based ones that are missing from current literature). Based on this insight, we design a Genetic Algorithm (GA) and a statistical Subset Simulation (SS) approach to estimate the aforementioned two robustness metrics, both of which are of black-box nature. The third challenge we identified is that misinterpretations are normally rare events in a local norm-ball. Without white-box information like gradients, black-box methods have to leverage auxiliary information to detect such rare events efficiently. To this end, we design bespoke fitness functions in the GA (when solving the optimisation) and retrofit the established SS (dedicated to estimating rare event probabilities [6]) for efficient evaluation. To the best of our knowledge, no state-of-the-art methods can collectively cope with the three pinpointed challenges like ours. To validate this claim, we conduct experiments to study the accuracy, sensitivity, and efficiency of our methods. Additionally, we develop two practical applications of our methods: i) We evaluate a wide range of XAI techniques and gain insights that no XAI technique is superior in terms of robustness to both types of adversarial attacks. ii) We discover a strong correlation between classification robustness and interpretation robustness through theoretical analysis (see Appx.) and empirical studies. We also identified the best training scheme to improve both aspects. In summary, the key contributions of this paper include: * • Two diverse metrics, worst-case interpretation discrepancy and probabilistic interpretation robustness, complement each other as a versatile approach, allowing for a holistic evaluation of interpretation robustness. * • We introduce new methods based on GA and SS to estimate these two metrics. These methods are black-box and thus applicable to diverse XAI tools, enabling robustness evaluation of perturbation-based XAI techniques for the first time. Despite the rare occurrence of misinterpretations, our GA and SS algorithms efficiently detect them. * • We demonstrate two practical applications of our methods: ranking robust XAI techniques and selecting training schemes to improve both classification and interpretation robustness. 
## 2 Related Work Evaluation of Interpretation Robustness: Existing evaluation metrics, proposed for interpretation robustness, only consider the misinterpretation when the prediction label of perturbed inputs remains unchanged [3]. [3] estimates the Local Lipschitz of interpretation, while [47] introduces the max-sensitivity and average-sensitivity of interpretation. Both of them use Simple Monte Carlo (SMC) sampling to estimate their metrics. [45] formally certify the robustness of gradient-based explanation by propagating a compact input or parameter set as symbolic intervals through the forwards and backwards computations of the neural network (NN). In [13], it defines the consistency as the probability that the inputs with the same interpretation have the same prediction label. However, their evaluation method is only applicable to tree ensemble models and tabular datasets, leaving the probabilistic estimation of misinterpretation for high-dimensional image datasets blank. Notably, toolsets/benchmarks [25, 1] for evaluating XAI techniques are emerging in the last two years. They are not specifically built for evaluating interpretation robustness, thus only concern the aforementioned metrics. That said, our metrics and their efficient estimators can be integrated into and complement those toolsets/benchmarks. Adversarial Attack and Defence on Interpretation: Ghorbani et al. first introduce the notion of adversarial perturbation to NN interpretation [18]. Afterwards, several works are dedicated to generating indistinguishable inputs which have the same prediction label but substantially different interpretations [19, 38]. The theoretical analysis has shown that the lack of interpretation robustness is related to geometrical properties of NNs [14]. In [49], a new class of attack is proposed to fool the NN’s prediction as well as the coupled XAI method. GA is introduced to manipulate SHAP in [8]. In [14], an upper bound on maximum changes of gradient-based interpretation is derived. The upper bound is proportional to the smooth parameter of the softplus activation function, which can be smoothed to improve the interpretation robustness. In [15], regularisation on training, like weight decay, and minimising the hessian of NNs are theoretically proved to be effective for training more robust NNs against interpretation manipulation. In [52], prior knowledge, e.g., from V&V evidence, is used in Bayesian surrogate models for more robust and consistent explanations. Specifically designed for perturbation-based XAI tools, [9] devises defenses against adversarial attack. ## 3 Preliminaries ### 3.1 Feature-Attribution based XAI While readers are referred to [4] for a review, we list common feature- attribution based XAI methods [50, 20] that are studied by this work. For gradient-based methods, we consider the Guided Backpropagation, Gradient $\times$ Input, Integrated Gradients, GradCAM, LRP and DeepLift. For perturbation-based methods, we study LIME and SHAP. Descriptions with greater detail of these XAI methods are presented in Appx. 8.1. ### 3.2 Local Robustness of Interpretation Analogous to the adversarial robustness of classification, interpretation can be fooled by adding perturbations to the input. The interpretation robustness is highly related to the robustness of classification, since the attribution map is produced based on some prediction class. 
Therefore, we first define the robustness of classification and then formalise the robustness of interpretation, using the following notations. Given an input seed $x$, we may find a norm ball $B(x,r)$ with the central point at $x$ and radius $r$ in $L_{p}$ norm. We denote the prediction output of the DL model as the vector $f(x)$ with size equal to the total number of labels. Classification robustness requires that DL model’s prediction output should be invariant to the human imperceptible noise, which can be expressed through the prediction loss around an input seed $x$ $\begin{split}J(f(x),f(x^{\prime}))=\max_{i\neq y}(f_{i}(x^{\prime})-f_{y}(x^{\prime}))\\\ y=\operatorname{arg\,max}_{i}f_{i}(x),\quad x^{\prime}\in B(x,r)\end{split}$ (1) where $f_{i}(x^{\prime})$ returns the probability of label $i$ after input $x^{\prime}$ being processed by the DL model $f$. Note, $J\geq 0$ implies that $x^{\prime}$ is an AE. We then define the following indicator function for misclassification within the norm ball $B(x,r)$ $I_{c}=\begin{cases}-1&\text{if $J(f(x),f(x^{\prime}))\geq 0$}\\\ 1&\text{if $J(f(x),f(x^{\prime}))<0$}\\\ \end{cases}$ (2) That is, $I_{c}=-1$ indicates misclassification, otherwise $1$. Previous works study two circumstances when small perturbation fools the interpretation $g(x)$, cf. Fig. 1 for examples. We use the interpretation discrepancy $\mathfrak{D}(g(x),g(x^{\prime}))$ (defined later) to quantify the difference between the new interpretation $g(x^{\prime})$ after perturbation and the reference $g(x)$, where $x^{\prime}\in B(x,r)$. We then introduce two constants as thresholds, $\alpha$ and $\beta$, such that $\mathfrak{D}<\alpha$ represents consistent interpretations, while $\mathfrak{D}>\beta$ represents inconsistent interpretations333When $\alpha\leq\mathfrak{D}\leq\beta$, it represents the case that we cannot clearly decide if the two interpretations are consistent or not.. Two misinterpretation regions within the norm ball $B(x,r)$ are then defined as $\widehat{F}=\\{\mathfrak{D}>\beta\land J<0\\},\quad\widetilde{F}=\\{\mathfrak{D}<\alpha\land J\geq 0\\}$ (3) $\widehat{F}$ represents preserved classification with different interpretation and $\widetilde{F}$ represents different classification with preserved interpretation, respectively. Note, $\alpha$ and $\beta$ are hyper- parameters that define the consistency notion of interpretations. They may vary case by case in the specific application context, representing the level of strictness required by the users on interpretation robustness. For example, if we use PCC (defined later) to quantify $\mathfrak{D}$, i.e. $\mathfrak{D}$=1/PCC, there is a rule of thumb [2] that $\text{PCC}<0.4$ ($\beta=1/0.4$) indicates inconsistent interpretations while $\text{PCC}>0.6$ ($\alpha=1/0.6$) represents consistent interpretations. ### 3.3 Interpretation Discrepancy Metrics In order to quantify the visual discrepancy between the XAI results (i.e., attribution maps), there are several commonly used metrics, including Mean Square Error (MSE), Pearson Correlation Coefficient (PCC), and Structural Similarity Index Measure (SSIM) [14]. PCC and SSIM have the absolute values in $[0,1]$. The smaller values indicate the larger discrepancy between two interpretations. MSE calculates the average squared differences, the value of which more close to 0 means higher similarity. 
Then, interpretation discrepancy $\mathfrak{D}$ can be expressed as $\mathfrak{D}=\frac{1}{\text{PCC}}\>\text{ or }\>\frac{1}{\text{SSIM}}\>\text{ or }\>\text{MSE}$ (4) ## 4 Worst Case Evaluation The conventional way to evaluate robustness of classification is based on the worst case loss under the perturbation [48]. This underlines the adversarial attack and motivates the adversarial training. Similarly, the worst case interpretation discrepancy between the original input and perturbed input may reflect the interpretation robustness. There are two types of misinterpretations after perturbation in a local region, cf. Eq. (3). Accordingly, two optimisations are formalised for the worst case interpretation discrepancy: $\begin{split}sol_{\widehat{F}}=&\max_{x^{\prime}\in B(x,r)}\;\mathfrak{D}(g(x),g(x^{\prime}))\\\ &s.t.\;J(f(x),f(x^{\prime}))<0\end{split}$ (5) $\begin{split}sol_{\widetilde{F}}=&\min_{x^{\prime}\in B(x,r)}\;\mathfrak{D}(g(x),g(x^{\prime}))\\\ &s.t.\;J(f(x),f(x^{\prime}))\geq 0\end{split}$ (6) That is, $sol_{\widehat{F}}$ corresponds to finding the largest interpretation discrepancy when perturbed input is still correctly classified. While $sol_{\widetilde{F}}$ is the minimum interpretation discrepancy between the AE $x^{\prime}$ and input seed $x$. Previous works adopt white-box methods to solve the above optimisations for adversarial explanations [49, 18], in which case the DL model $f(x)$ and XAI method $g(x)$ are required to be fully accessible to their internal information. In addition, many XAI methods $g(x)$ are non-differentiable, and the strong assumptions (like smoothing gradient of ReLU non-linearity) are made to enable derivative-based optimisation. In contrast, Genetic Algorithm (GA) is a derivative-free method for solving both constrained and unconstrained optimisations, and has been successfully applied to the evaluation of classification robustness [11]. That motivates us to develop a black-box evaluation method for interpretation robustness based on GA. GA consists of 5 steps: initialisation, selection, crossover, mutation, and termination, the middle three of which are repeated until the convergence of fitness function values. We refer readers to Appx. 8.3 for more details of GA. Initialisation: The population with $N$ samples is initialized. Diversity of initial population could promise approximate global optimal [26]. Normally, we use the Gaussian distribution with the mean at input seed $x$, or a uniform distribution to generate a set of diverse perturbed inputs within the norm ball $B(x,r)$. Selection: The core of GA is the design of fitness functions. Fitness function guides the selection of parents for latter operations. Considering the constrained optimization, we design the fitness function based on the superiority of feasible individuals to make distinction between feasible and infeasible solutions [33]. For the optimisation of Eq. (5), the constraint can be directly encoded as the indicator $I_{c}$ into the fitness function $\mathcal{F}(x^{\prime})=I_{c}\,\mathfrak{D}(g(x),g(x^{\prime}))$ (7) and $\mathfrak{D}(g(x),g(x^{\prime}))$ is always none negative. All feasible individuals satisfying the constraint $J(f(x),f(x^{\prime}))<0$ will have $I_{c}=1$, and $\mathcal{F}>0$. If the constraint is violated, then $I_{c}=-1$, and $\mathcal{F}<0$. In other words, the individuals violating the constraint will have smaller fitness values than the others and are suppressed during the evolution. For the optimisation of Eq. 
(6), we note $J>0$ is a rare event within the local region $B(x,r)$, as AEs are normally rare [44]. To accelerate the search in the feasible input space, we set two fitness functions $\mathcal{F}_{1}$ and $\mathcal{F}_{2}$. $\mathcal{F}_{1}$ increases the proportion of AEs in the population. On this basis, when over half amount of the population are AEs, $\mathcal{F}_{2}$ will guide the generation of adversarial explanations. $\mathcal{F}_{1}(x^{\prime})=J(f(x),f(x^{\prime}))\quad\mathcal{F}_{2}(x^{\prime})=-I_{c}/\mathfrak{D}(g(x),g(x^{\prime}))$ (8) In $\mathcal{F}_{2}$, $I_{c}$ also penalises the violation of constraints, which keeps the optimisation conditioned on AEs. Instead of directly selecting the best fitted individuals, we choose the fitness proportionate selection [27], which can maintain good diversity of population and avoid premature convergence. Then, the probability of selection $p_{i}$ for each individual $x^{\prime}_{i}$ is formulated as $p_{i}=\frac{\mathcal{F}(x^{\prime}_{i})}{\sum_{j=1}^{N}\mathcal{F}(x^{\prime}_{j})}$ (9) Crossover: The crossover operator will combine a pair of parents from last step to generate a pair of children, which share many of the characteristics from the parents. The half elements of parents are randomly exchanged. Mutation: Some elements of children are randomly altered to add variance in the evolution. It should be noticed that the mutated samples should still fall into the norm ball $B(x,r)$. Finally, the children and parents will be the individuals for the next iteration. Termination: GA terminates either when the allocated computation budget (maximum number of iterations) is depleted or the plateau is reached such that successive iterations no longer produce better results. ## 5 Probabilistic Evaluation ### 5.1 Probabilistic Metrics In addition to the worst case evaluation, probabilistic evaluation based on statistical approaches is of the same practical interest—a lesson learnt from evaluating classification robustness [44, 43] and DL reliability [51, 16]. Thus, we study the probability of misinterpretation within $B(x,r)$, regarding the two types of misinterpretations444Through out the paper, we use the shorthand notation $F$ for either $\widehat{F}$ or $\widetilde{F}$, according to the context. of the input image $x$ under study: $P_{F}(x)=\int_{x^{\prime}\in B(x,r)}\mathbbm{1}_{x^{\prime}\in F}\,q(x^{\prime})\,dx^{\prime},\quad F=\widehat{F}\,\text{ or }\,\widetilde{F}$ (10) where $x^{\prime}$ is a perturbed sample under the local distribution $q(x^{\prime})$ (precisely the “input model” used by [44], when studying local probabilistic metric) in $B(x,r)$. $\mathbbm{1}_{x^{\prime}\in F}$ is equal to $1$ when $x^{\prime}\in F$ is true, $0$ otherwise. Intuitively, Eq. (10) says, for the given input image $x$, if we generate an infinite set of perturbed samples locally (i.e., within a norm ball $B(x,r)$) according to the distribution $q$, then the proportion of those samples fall into the misinterpretation region $F$ is defined as the proposed probabilistic metric. ### 5.2 Estimation by Subset Simulation To estimate the two probabilistic metrics defined by Eq. (10), there are two challenges: i) misinterpretations represented by $\widetilde{F}$ and $\widehat{F}$ are arguably rare events (that confirmed empirically later in our experiments); ii) inputs of DL models are usually high dimensional data, like images. 
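To make the estimation target of Eq. (10) concrete before addressing these two challenges, the sketch below shows a naive Simple Monte Carlo estimator under a uniform $q(\cdot)$ on an $L_{\infty}$ ball. It is a hedged illustration only: `J` and `D` are assumed user-supplied wrappers computing the prediction loss of Eq. (1) and the interpretation discrepancy for a perturbed sample, and clipping to [0, 1] is an assumption about the input range. Its hit count stays near zero whenever the misinterpretation probability is small, which is precisely why a rare-event method is needed.

```python
import numpy as np

def smc_estimate(x, r, J, D, alpha, beta, region="F_hat", n_samples=100_000, seed=0):
    """Naive SMC estimate of Eq. (10) with a uniform q(.) on the L_inf ball B(x, r).

    J(x_prime): prediction loss of Eq. (1), >= 0 means misclassification.
    D(x_prime): interpretation discrepancy w.r.t. the seed, e.g. 1/PCC.
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_samples):
        x_prime = np.clip(x + rng.uniform(-r, r, size=x.shape), 0.0, 1.0)
        if region == "F_hat":    # preserved label, inconsistent interpretation
            hits += (J(x_prime) < 0) and (D(x_prime) > beta)
        else:                    # "F_tilde": misclassified, preserved interpretation
            hits += (J(x_prime) >= 0) and (D(x_prime) < alpha)
    return hits / n_samples      # stays at 0 for rare events unless n_samples is huge
```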
The first challenge requires sampling methods specifically designed for rare events rather than SMC (that is known to be inefficient for rare events). The second challenge rules out some commonly used advanced sampling methods, like importance sampling, as they may not be applicable to high dimensional data [5]. The well-established Subset Simulation (SS) can efficiently calculate the small failure probability in high dimensional space [6] and has been successfully applied to assessing classification robustness of DL models [44]. As a black-box method, it only involves the input and response of interest for calculation, thus generic to diverse XAI methods $g(x)$. The main idea of SS is introducing intermediate failure events so that the failure probability can be expressed as the product of larger conditional probabilities. Let $F=F_{m}\subset F_{m-1}\subset\cdots\subset F_{2}\subset F_{1}$ be a sequence of increasing events so that $F_{m}=\bigcap_{i=1}^{m}F_{i}$. By conditional probability, we get $P_{F}\\!:=\\!P(F_{m})\\!=\\!P(\bigcap_{i=1}^{m}\\!F_{i})\\!=\\!P(F_{1})\\!\prod_{i=2}^{m}P(F_{i}|F_{i-1})$ (11) The conditional probabilities of intermediate events involved in Eq. (11) can be chosen sufficiently large so that they can be efficiently estimated. For example, $P(F_{1})=1$, $P(F_{i}|F_{i-1})=0.1$, $i=2,3,4,5,6$, then $P_{F}\approx 10^{-5}$ which is too small for efficient estimation by SMC sampling. In this section, we adapt SS for our problem as what follows. #### 5.2.1 Design of Intermediate Events $\widehat{F}$ and $\widetilde{F}$ can be decomposed as the series of intermediate events through the expression of property functions $J$ and $\mathfrak{D}$. For $\widehat{F}$, $J<0$ is not rare for a well-trained DL model, representing the correctly classified input after perturbation. Thus, the intermediate events $\widehat{F}_{i-1}$ and $\widehat{F}_{i}$ can be chosen as $\begin{split}\widehat{F}_{i-1}=&\\{I_{c}\mathfrak{D}>\beta_{i-1}\\},\quad\widehat{F}_{i}=\\{I_{c}\mathfrak{D}>\beta_{i}\\}\\\ &\text{where}\quad\beta_{i-1}<\beta_{i}\leq\beta\end{split}$ (12) such that $\widehat{F}_{i}\subset\widehat{F}_{i-1}$. $I_{c}$ (in Eq. 2) encodes the constraint $J<0$ as the sign of $\mathfrak{D}$. In contrast, $J\geq 0$ in $\widetilde{F}$ represents the occurrence of AEs that are rare events, which cannot be directly expressed as the indicator $I_{c}$, since the random sampling within $B(x,r)$ cannot easily satisfy $J\geq 0$. Thus, for $\widetilde{F}$, $J\geq 0$ should be chosen as the critical intermediate event. $\widetilde{F}_{j}=\\{J\geq 0\\},\quad\text{where}\quad 1<j<m$ (13) For intermediate events $\widetilde{F}_{i-1}$ and $\widetilde{F}_{i}$, when $i\\!<j$, we set $\begin{split}\widetilde{F}_{i-1}&=\\{J>\gamma_{i-1}\\},\quad\widetilde{F}_{i}=\\{J>\gamma_{i}\\}\\\ &\text{where}\quad\gamma_{i-1}<\gamma_{i}<0\end{split}$ (14) such that $\widetilde{F}_{j}\subset\widetilde{F}_{i}\subset\widetilde{F}_{i-1}$. And for intermediate events $\widetilde{F}_{k-1}$ and $\widetilde{F}_{k}$, when $k-1>j$, we can set $\displaystyle\widetilde{F}_{k-1}=\\{-I_{c}/\mathfrak{D}$ $\displaystyle>1/\alpha_{k-1}\\},\quad\widetilde{F}_{k}=\\{-I_{c}/\mathfrak{D}>1/\alpha_{k}\\}$ $\displaystyle\text{where}\quad 0<\alpha\leq\alpha_{k}<\alpha_{k-1}$ (15) such that $\widetilde{F}_{k}\subset\widetilde{F}_{k-1}\subset\widetilde{F}_{j}$. 
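For clarity, the scalar level quantities implied by Eqs. (12)-(15) can be written as small helper functions; this is a hedged restatement of the definitions above rather than additional machinery, with `J_val` and `D_val` denoting the property values $J$ and $\mathfrak{D}$ computed for a perturbed sample.

```python
def level_F_hat(J_val, D_val):
    """Level for the F_hat chain (Eq. 12): I_c * D, with adaptive thresholds beta_i.
    I_c encodes the constraint J < 0 as a sign flip (Eq. 2)."""
    I_c = 1.0 if J_val < 0 else -1.0
    return I_c * D_val

def level_F_tilde_early(J_val):
    """Level for the F_tilde chain before the critical event {J >= 0} (Eqs. 13-14):
    the prediction loss itself, with adaptive thresholds gamma_i < 0."""
    return J_val

def level_F_tilde_late(J_val, D_val):
    """Level for the F_tilde chain after AEs dominate (Eq. 15): -I_c / D with
    thresholds 1/alpha_k; the sign keeps the chain conditioned on misclassification."""
    I_c = 1.0 if J_val < 0 else -1.0
    return -I_c / D_val
```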
#### 5.2.2 Estimating Conditional Probabilities Upon formally defined intermediate events, the question arises on how to set $\beta_{i}$, $\gamma_{i}$ and $\alpha_{i}$ to make the conditional probability $P(F_{i}|F_{i-1})$ sufficiently large for estimation by a few simulations. Also, simulating new samples from $F_{i}$ for estimating next conditional probability $P(F_{i+1}|F_{i})$ is difficult due to the rarity of $F_{i}$. Therefore, the Markov Chain Monte Carlo sampling based on the Metropolis–Hastings (MH) algorithm is adopted. For simplicity, the intermediate event threshold is generally denoted as $L_{i}=\\{\beta_{i},\gamma_{i},\alpha_{i}\\}$. #### 5.2.3 Choices of Intermediate Event Threshold Start from estimating $P(F_{1})$, $F_{1}$ is chosen as the common event such that $N$ samples are drawn from $q(\cdot)$ by SMC and all belong to $F_{1}$. A feasible way is setting the threshold of property function $L_{1}$ to $-\infty$, and $P(F_{1})=1$. For $i=2,\cdots,m$, $L_{i}$ affects the values of condition probabilities and hence the efficiency of SS. It is suggested that $L_{i}$ is set adaptively to make $P(F_{i}|F_{i-1})$ approximately equals to $\rho$, and $\rho$ is a hyper-parameter in SS (that takes a decimal less than 1 and normally $\rho=0.1$ yields good efficiency, although it can be empirically optimised), i.e., $P(F_{i}|F_{i-1})\approx\rho$. That is, at each iteration $i-1$ when we simulate $N$ samples, $\rho N$ samples should belong to $F_{i}$. #### 5.2.4 Simulating New Samples from $q(\cdot|F_{i})$ At iteration $i=2,\cdots,m-1$, we already have $\rho N$ samples belonging to $F_{i}$ and aim to simulate new samples to enlarge the set to $N$, so that the next conditional probability $P(F_{i+1}|F_{i})=\frac{1}{N}\sum_{k=1}^{N}\mathbbm{1}_{F_{i+1}}(x^{\prime}_{k})$ can be calculated. We can pick up an existing sample $x^{\prime}$ subject to the conditional distribution $q(\cdot|F_{i})$, denoted as $x^{\prime}\sim q(\cdot|F_{i})$, and use the Metropolis Hastings (MH) algorithm to construct a Markov Chain. By running $M$ steps of MH, the stationary distribution of the Markov Chain is $q(\cdot|F_{i})$. Then new data $x^{\prime\prime}\sim q(\cdot|F_{i})$ can be sampled from the Markov Chain and added into the set. More details of the MH algorithm for SS are presented in Appx. 8.4. #### 5.2.5 Termination Condition and Returned Estimation After the aforementioned steps, SS divides the problem of estimating a rare event probability into several simpler ones—a sequence of intermediate conditional probabilities as formulated in Eq. (11). The returned estimation $\widebar{P}_{F}$ and coefficient of variation (c.o.v.) $\widebar{\delta}$ (measuring the estimation error) are $\widebar{P}_{F}=\prod_{i=1}^{m}\frac{1}{N}\sum_{k=1}^{N}\mathbbm{1}_{F_{i}}(x^{\prime}_{k}),\quad\widebar{\delta}^{2}\approx\sum_{i=1}^{m}\frac{1-\widebar{P}_{F_{i}}}{\widebar{P}_{F_{i}}N}(1+\lambda_{i})$ (16) where $\lambda_{i}>0$ represents the efficiency of the estimator using dependent samples drawn from the Markov Chain. For simplicity, we can assume $\lambda_{i}\approx 0$ when the number of steps $M$ of MH is large [10]. Since each conditional probability $P(F_{i}|F_{i-1})$ approximately equals to $\rho$, then by Eq. (11), the returned estimation $\widebar{P}_{F}\approx\rho^{m-1}$. $m$ is the total number of intermediate event generated adaptively. The adaptive generation of intermediate events terminates when $\widebar{P}_{F}<P_{\text{min}}$, and $P_{\text{min}}$ is a given termination threshold. 
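Putting Secs. 5.2.2-5.2.5 together, a hedged skeleton of the adaptive SS loop might look as follows. `sample_q`, `level`, and `mh_move` are user-supplied callables (respectively: SMC sampling from $q(\cdot)$ inside $B(x,r)$, one of the level functions sketched earlier, and a single conditional Metropolis-Hastings move that keeps a sample above the current threshold); the default $\rho$, $N$, $M$, and $\ln P_{\text{min}}$ values mirror the settings reported in Sec. 6, and the c.o.v. tracking of Eq. (16) is omitted for brevity.

```python
import numpy as np

def subset_simulation(sample_q, level, mh_move, final_level,
                      rho=0.1, N=1000, M=250, log_p_min=-100.0):
    """Adaptive Subset Simulation sketch returning an estimate of P_F (Eq. 11)."""
    samples = sample_q(N)                              # stage F_1: plain SMC, P(F_1) = 1
    values = np.array([level(s) for s in samples])
    log_p = 0.0
    while True:
        L_i = np.quantile(values, 1.0 - rho)           # threshold exceeded by ~rho*N samples
        if L_i >= final_level:                         # final stage: use the true threshold
            return np.exp(log_p + np.log(np.mean(values >= final_level)))
        log_p += np.log(rho)                           # P(F_i | F_{i-1}) ~ rho
        if log_p < log_p_min:                          # termination: estimate below P_min
            return np.exp(log_p)
        seeds = [s for s, v in zip(samples, values) if v >= L_i]
        samples, new_values = [], []
        while len(samples) < N:                        # regrow the population via MH chains
            s = seeds[len(samples) % len(seeds)]
            for _ in range(M):
                s = mh_move(s, L_i)                    # stay inside the intermediate event F_i
            samples.append(s)
            new_values.append(level(s))
        values = np.array(new_values)
```

In this form each stage multiplies the running estimate by roughly $\rho$, so a probability of the order of $10^{-5}$ is reached after only a handful of stages rather than the millions of samples SMC would require.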
More details of statistical properties of the estimator, like error bound, efficiency are presented in Appx. 8.4. ## 6 Experiments ### 6.1 Experiment Setup We consider three public benchmark datasets , five XAI methods, and five training schemes in our experiments. The norm ball radius, deciding the oracle of robustness, is calculated with respect to the $r$ separation property [46]. That is, $r=0.3$ for MNIST, $r=0.03$ for CIFAR10, and $r=0.05$ for CelebA. More details of the DL models under study are presented in Appx. 8.6. For the probabilistic evaluation using SS, without loss of generality, we consider the uniform distribution as $q(x^{\prime})$ within each norm ball. We compare $\mathfrak{D}=$ MSE, 1/PCC, and 1/SSIM for measuring interpretation discrepancy in Appx. 8.7, and find PCC is better to quantify the interpretation difference in our cases. Based on sensitivity analysis, we choose hyperparameters PCC thresholds $1/\beta=0.4$, $1/\alpha=0.6$, MH steps $M=250$, $\rho=0.1$, $\ln P_{\text{min}}=-100$ for probabilistic evaluation, and population size $N=1000$, number of iteration $itr=500$ for the worst case evaluation by GA. Our tools and experiments are publicly available at https://github.com/havelhuang/Eval_XAI_Robustness. ### 6.2 Sensitivity to Hyper-Parameter Settings We first investigate the sensitivity of objective function $\mathfrak{D}$ and constraint $J$ (cf. Eq. (5) and (6)) to GA’s population size and iteration numbers, as shown in Fig. 2. We observe from the 1st row that interpretation discrepancy measured by PCC (the red curve) quickly converge after 300 iterations with the satisfaction of constraint $J$ (the blue curve), showing the effectiveness of our GA. From the 2nd row, we notice that the optimisation is not sensitive to population size, compared with the number of iterations, i.e., population size over 500 cannot make significant improvement to the optimisation. In addition, if the number of iterations is sufficiently large, the effect of population size on optimal solution is further diminished. We only present the results of one seed from CelebA, cf. Appx. 8.8 for more seeds from other datasets, while the general observation remains. Figure 2: Sensitivity of objective $\mathfrak{D}$ and constraint $J$ to GA’s population size and iteration numbers. Each column represents a type of misinterpretation. 1st row: quickly converged GA objectives satisfying the constraint, with fixed population size of 1000 and varying iterations. 2nd row: GA solutions, with fixed iteration numbers and varying population size. A test seed (representing a norm ball) from CelebA is used; interpretation discrepancy $\mathfrak{D}$ is measured by $1/$PCC; “Gradient$\times$Input” XAI method is studied. Next, we study the sensitivity of SS accuracy to the number of MH steps $M$, varying the PCC threshold that defines the rarity level of misinterpretation events. In Fig. 3, we can calculate the difference $\Delta\ln P_{F}$ between SS estimations and the approximated ground truth (by SMC estimations using a sufficiently large number of samples555We use $10^{8}$ samples (for the specific seed) which can accurately estimate a small probability in natural logarithm around $-17\\!\sim\\!-18$.). The 1st row shows the overlapping of SS and SMC estimations (two red curves) and the reducing running time (the blue curve) when decreasing the rarity levels of misinterpretations (by controlling the PCC threshold). 
From the 2nd row we observe that, with more MH steps $M$, the estimation accuracy of SS is significantly improved. In addition, the rarity of the misinterpretation events determines the choice of $M$. E.g., if $\ln P_{\widehat{F}}=-3.87$ with $\widehat{F}=\\{\textit{PCC}<0.4\land J<0\\}$, then $M=100$ already achieves high precision without additional sampling budget. Other parameters, e.g. the number of samples $n$ and the sample quantile $\rho$, which are discussed in Appx. 8.8, are in general less sensitive than the number of MH steps $M$. In summary, the sensitivity analysis provides the basis for setting the hyper-parameters in the later experiments: 500 iterations and a population size of 1000 for GA, and 250 MH steps for SS.

Figure 3: Each column represents a type of misinterpretation. 1st row: the probability of misinterpretation ($\ln P_{F}$) estimated by SS and the approximated ground truth by SMC, varying the rarity of misinterpretations. The overlap of the two red curves shows the high accuracy of SS. 2nd row: sensitivity of the SS accuracy $\Delta\ln P_{F}$ to MH steps $M$, varying the rarity level of misinterpretation controlled by the PCC threshold. A test seed from MNIST and the “Gradient$\times$Input” XAI method are used; results are averaged over 10 runs.

### 6.3 Accuracy and Efficiency of Evaluation

We study the accuracy of our GA-based evaluation by comparing it with the state of the art [3, 47], which defines the local Lipschitz ($\textit{SENS}_{\textit{LIPS}}$) and max-sensitivity ($\textit{SENS}_{\textit{MAX}}$) metrics for the maximum interpretation discrepancy and empirically estimates these metrics using SMC sampling. For a fair comparison, we first choose MSE as the interpretation discrepancy metric in the fitness functions of our GA, and then apply both GA and SMC to generate two populations of interpretations, in which we calculate the three robustness metrics respectively and summarise them in Table 1. We use $5\\!\times\\!10^{5}$ samples for both GA and SMC.

Table 1: Three worst-case robustness metrics estimated by our GA and by SMC, averaged over 100 test seeds. GA outperforms SMC (used by the state of the art) w.r.t. all 3 metrics.

| Dataset | GA: MSE ($sol_{\widehat{F}}$) | GA: $\textit{SENS}_{\textit{MAX}}$ | GA: $\textit{SENS}_{\textit{LIPS}}$ | SMC: MSE | SMC: $\textit{SENS}_{\textit{MAX}}$ | SMC: $\textit{SENS}_{\textit{LIPS}}$ |
|---|---|---|---|---|---|---|
| MNIST | 1.549 | 36.067 | 13.747 | 0.271 | 15.226 | 2.772 |
| CIFAR10 | 42.436 | 328.147 | 314.861 | 0.589 | 38.529 | 40.232 |
| CelebA | 3.204 | 192.203 | 65.635 | 0.013 | 11.298 | 3.563 |

As shown in Table 1, our GA-based estimator outperforms SMC on all three robustness metrics. Although the local Lipschitz and max-sensitivity metrics are not explicitly encoded as optimisation objectives in our GA, GA is still more effective and efficient at estimating those three extreme values than SMC. This is unsurprising, since all three metrics are compatible and essentially represent the same worst-case semantics. That said, our interpretation discrepancy metric complements $\textit{SENS}_{\textit{LIPS}}$ and $\textit{SENS}_{\textit{MAX}}$ (the former is based on the Lipschitz value, while the latter is defined only in the $L_{2}$ norm) and can be easily encoded in our GA.

In addition to the accuracy shown in Fig. 3, we compare the sample efficiency of SS and SMC by calculating the number of required simulations, $N_{\textit{SS}}$ and $N_{\textit{SMC}}$, for achieving the same estimation error (measured by the c.o.v. $\delta$). As shown in Table 2, SS requires far fewer samples, showing a great advantage over SMC; cf.
Appx. 8.4 for theoretical analysis.

Table 2: Sample efficiency of SS and SMC. In all six cases, SS requires fewer samples ($N_{\textit{SS}}<N_{\textit{SMC}}$) than SMC to achieve the same estimation error $\delta^{2}$. Each result is averaged over 10 seeds.

| Dataset | $F$ | $\ln P_{F}$ | $\delta^{2}$ | $N_{\textit{SS}}$ | $N_{\textit{SMC}}$ |
|---|---|---|---|---|---|
| MNIST | $\widehat{F}$ | -12.25 | 0.0184 | 15000 | $1.13\times 10^{7}$ |
| MNIST | $\widetilde{F}$ | -24.63 | 0.0374 | 27500 | $1.34\times 10^{12}$ |
| CIFAR10 | $\widehat{F}$ | -0.79 | 0.0004 | 2500 | $2500$ |
| CIFAR10 | $\widetilde{F}$ | -33.54 | 0.0511 | 40000 | $7.22\times 10^{15}$ |
| CelebA | $\widehat{F}$ | -31.43 | 0.0482 | 35000 | $9.29\times 10^{14}$ |
| CelebA | $\widetilde{F}$ | -70.71 | 0.1090 | 80000 | $4.68\times 10^{31}$ |

### 6.4 Evaluating XAI Methods

The first application of our methods is to draw insights on the robustness of common XAI techniques, from both the worst-case and the probabilistic perspectives. Thanks to the black-box nature of GA and SS, our methods are applicable to diverse XAI tools, and we consider six popular ones in this section. In Appx. 8.9, we evaluate other XAI tools and discuss how the number of perturbed samples and the image segmentation affect the evaluation results on LIME and SHAP (which are missing from the current literature).

Figure 4: Worst-case (1st row) and probabilistic (2nd row) robustness evaluations of five XAI methods based on 100 random seeds from MNIST. Each column represents a type of misinterpretation—$\widehat{F}$ left and $\widetilde{F}$ right. For the top-left plot, a higher value means more robust; for all other plots, a lower value means more robust.

We randomly sample 100 seeds from MNIST for the evaluations, and summarise the statistics as box-and-whisker plots in Fig. 4. Based on the empirical results of Fig. 4, we may conclude: i) perturbation-based XAI methods also suffer from a lack of robustness; ii) for misinterpretation $\widehat{F}$—correct classification ($J<0$) with inconsistent interpretation ($\textit{PCC}<0.4$)—DeepLift and Integrated Gradients outperform the others, while Guided Backprop and $\text{Gradient}\\!\times\\!\text{Input}$ are not robust from either the worst-case or the probabilistic perspective; iii) for misinterpretation $\widetilde{F}$—wrong classification ($J\geq 0$) with preserved interpretation ($\textit{PCC}>0.6$)—while all XAI methods perform similarly w.r.t. both metrics, LRP shows better robustness than the others. These empirical insights are as expected if we consider the mechanisms behind the XAI methods. For instance, considering $\widehat{F}$, DeepLift and Integrated Gradients are more robust, since they use a reference point to avoid the discontinuous gradients (large curvature) that mislead the attribution maps [37]. On the other hand, DeepLift and Integrated Gradients become vulnerable to $\widetilde{F}$. Because misclassification and misinterpretation are rare events, most perturbed inputs inside the norm ball have interpretations consistent with the seed. Consequently, the integration from the reference point, which averages the attribution map over several points, is prone to producing consistent interpretations. See Appx. 8.9 for more discussions and experiments on the CIFAR10 and CelebA datasets.

### 6.5 Evaluating Training Schemes

In this application, we study the effect of various training schemes on the interpretation robustness of DL models. In Appx. 8.2, we theoretically analyse the relation between classification robustness and interpretation robustness. Prop.
1 shows that the input hessian norm and the input gradient norm are related to the change of the classification loss and the interpretation discrepancy. Thus, we add input gradient and input hessian regularisation terms to the training loss, and also consider PGD-based adversarial training, which improves classification robustness by minimising the maximal prediction loss within norm balls [21, 22]. Table 3 records the results.

Table 3: Evaluating classification ($c$) and interpretation ($\widehat{F}$ and $\widetilde{F}$) robustness of DL models, trained with input gradient norm regularisation (Grad. Reg.), input hessian norm regularisation (Hess. Reg.), both of them (Grad. + Hess. Reg.) and adversarial training (Adv. Train.). Results are averaged over 100 random seeds. A higher $sol_{\widehat{F}}$ means more robust, while for the other metrics lower is better. The first three metric columns are the worst-case evaluation; the last three are the probabilistic evaluation.

| Dataset | Model | $sol_{c}$ (J) | $sol_{\widehat{F}}$ (PCC) | $sol_{\widetilde{F}}$ (PCC) | $\ln P_{c}$ | $\ln P_{\widehat{F}}$ | $\ln P_{\widetilde{F}}$ |
|---|---|---|---|---|---|---|---|
| MNIST | Org. | 22.43 | 0.06 | 0.93 | -24.28 | -3.87 | -31.47 |
| MNIST | Grad. Reg. | 11.37 | 0.10 | 0.92 | -31.51 | -15.69 | -44.96 |
| MNIST | Hess. Reg. | 10.59 | 0.17 | 0.90 | -33.36 | -21.27 | -43.85 |
| MNIST | Grad. + Hess. | 10.04 | 0.20 | 0.90 | -36.96 | -23.79 | -46.19 |
| MNIST | Adv. Train. | -0.16 | 0.21 | 0.59 | -84.15 | -28.67 | -89.09 |
| CIFAR10 | Org. | 42.58 | 0.02 | 0.85 | -31.55 | -18.63 | -71.46 |
| CIFAR10 | Grad. Reg. | 42.34 | 0.01 | 0.85 | -27.31 | -21.77 | -65.75 |
| CIFAR10 | Hess. Reg. | 8.99 | 0.08 | 0.81 | -76.29 | -99.20 | -91.89 |
| CIFAR10 | Grad. + Hess. | 8.47 | 0.06 | 0.81 | -71.65 | -98.49 | -92.39 |
| CIFAR10 | Adv. Train. | -0.67 | 0.25 | 0.80 | -92.57 | -100 | -95.97 |
| CelebA | Org. | 51.08 | 0.08 | 0.86 | -13.77 | -21.58 | -70.82 |
| CelebA | Grad. Reg. | 25.29 | 0.06 | 0.88 | -45.52 | -70.22 | -83.26 |
| CelebA | Hess. Reg. | 18.71 | 0.09 | 0.86 | -74.93 | -100 | -95.85 |
| CelebA | Grad. + Hess. | 25.41 | 0.06 | 0.88 | -65.95 | -100 | -94.13 |
| CelebA | Adv. Train. | -0.45 | 0.55 | 0.81 | -95.09 | -100 | -95.58 |

In addition to the known result that input hessian regularisation can defend against adversarial interpretation [15], we notice that it is significantly more effective than input gradient regularisation in improving both classification and interpretation robustness, confirming our Prop. 1. Moreover, we discover that adversarial training is surprisingly effective at improving interpretation robustness, but at the cost of reduced accuracy, cf. Appx. 8.6. This phenomenon reveals the strong correlation between classification and interpretation robustness. That is, improving classification robustness may also improve interpretation robustness.

## 7 Conclusion

This paper proposes two versatile and efficient evaluation methods for DL interpretation robustness. The versatility is twofold: (1) the proposed metrics characterise robustness from both worst-case and probabilistic perspectives; (2) GA and SS are black-box methods and are thus generic to heterogeneous XAI methods. Considering the rare-event nature of misinterpretations, GA and SS show high efficiency in detecting them, thanks to the bespoke design of the fitness functions in GA and the encoding of auxiliary information as intermediate events in SS.

##### Acknowledgements
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 956123.
It is also supported by the UK EPSRC through End-to-End Conceptual Guarding of Neural Architectures [EP/T026995/1], Department of Transport UK, Transport Canada and WMG center of HVM Catapult. ## References * [1] Chirag Agarwal, Eshika Saxena, Satyapriya Krishna, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, and Himabindu Lakkaraju. Openxai: Towards a transparent evaluation of model explanations. arXiv preprint arXiv:2206.11104, 2022. * [2] Haldun Akoglu. User’s guide to correlation coefficients. Turkish journal of emergency medicine, 18(3):91–93, 2018. * [3] David Alvarez-Melis and Tommi S Jaakkola. On the robustness of interpretability methods. arXiv preprint arXiv:1806.08049, 2018. * [4] Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador Garcia, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, and Francisco Herrera. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58:82–115, 2020. * [5] Siu-Kui Au and JL Beck. Important sampling in high dimensions. Structural safety, 25(2):139–163, 2003. * [6] Siu-Kui Au and James L Beck. Estimation of small failure probabilities in high dimensions by subset simulation. Probabilistic engineering mechanics, 16(4):263–277, 2001. * [7] Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7):e0130140, 2015. * [8] Hubert Baniecki and Przemyslaw Biecek. Manipulating shap via adversarial data perturbations. In Proc. of the AAAI Conference on Artificial Intelligence, volume 36, pages 12907–12908, 2022. * [9] Zachariah Carmichael and Walter J Scheirer. Unfooling perturbation-based post hoc explainers. In Proc. of the 37th AAAI Conference on Artificial Intelligence (AAAI’23), 2022. * [10] Frédéric Cérou, Pierre Del Moral, Teddy Furon, and Arnaud Guyader. Sequential monte carlo for rare event estimation. Statistics and computing, 22(3):795–808, 2012. * [11] Jinyin Chen, Mengmeng Su, Shijing Shen, Hui Xiong, and Haibin Zheng. Poba-ga: Perturbation optimized black-box adversarial attacks via genetic algorithm. Computers & Security, 85:89–106, 2019. * [12] Jinyin Chen, Mengmeng Su, Shijing Shen, Hui Xiong, and Haibin Zheng. POBA-GA: perturbation optimized black-box adversarial attacks via genetic algorithm. Comput. Secur., 85:89–106, 2019. * [13] Sanjoy Dasgupta, Nave Frost, and Michal Moshkovitz. Framework for evaluating faithfulness of local explanations. arXiv preprint arXiv:2202.00734, 2022. * [14] Ann-Kathrin Dombrowski, Maximillian Alber, Christopher Anders, Marcel Ackermann, Klaus-Robert Müller, and Pan Kessel. Explanations can be manipulated and geometry is to blame. Advances in Neural Information Processing Systems, 32, 2019. * [15] Ann-Kathrin Dombrowski, Christopher J Anders, Klaus-Robert Müller, and Pan Kessel. Towards robust explanations for deep neural networks. Pattern Recognition, 121:108194, 2022. * [16] Yi Dong, Wei Huang, Vibhav Bharti, Victoria Cox, Alec Banks, Sen Wang, Xingyu Zhao, Sven Schewe, and Xiaowei Huang. Reliability Assessment and Safety Arguments for Machine Learning Components in System Assurance. ACM Trans. Embedded Computing Systems, 2022. * [17] Andrew Gelman, Walter R Gilks, and Gareth O Roberts. Weak convergence and optimal scaling of random walk metropolis algorithms. 
The annals of applied probability, 7(1):110–120, 1997. * [18] Amirata Ghorbani, Abubakar Abid, and James Zou. Interpretation of Neural Networks Is Fragile. Proc. of the AAAI Conference on Artificial Intelligence, 33(01):3681–3688, 2019. * [19] Juyeon Heo, Sunghwan Joo, and Taesup Moon. Fooling neural network interpretations via adversarial model manipulation. Advances in Neural Information Processing Systems, 32, 2019. * [20] Xiaowei Huang, Daniel Kroening, Wenjie Ruan, James Sharp, Youcheng Sun, Emese Thamo, Min Wu, and Xinping Yi. A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability. Computer Science Review, 37:100270, 2020. * [21] Gaojie Jin, Xinping Yi, Wei Huang, Sven Schewe, and Xiaowei Huang. Enhancing adversarial training with second-order statistics of weights. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15273–15283, 2022. * [22] Gaojie Jin, Xinping Yi, Dengyu Wu, Ronghui Mu, and Xiaowei Huang. Randomized adversarial training via taylor expansion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16447–16457, 2023. * [23] Lambros S Katafygiotis and Konstantin M Zuev. Geometric insight into the challenges of solving high-dimensional reliability problems. Probabilistic Engineering Mechanics, 23(2-3):208–218, 2008. * [24] Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, and Been Kim. The (Un)reliability of Saliency Methods. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, pages 267–280. Springer, Cham, 2019. * [25] Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, and Orion Reblitz-Richardson. Captum: A unified and generic model interpretability library for pytorch, 2020. * [26] Abdullah Konak, David W Coit, and Alice E Smith. Multi-objective optimization using genetic algorithms: A tutorial. Reliability engineering & system safety, 91(9):992–1007, 2006\. * [27] Adam Lipowski and Dorota Lipowska. Roulette-wheel selection via stochastic acceptance. Physica A: Statistical Mechanics and its Applications, 391(6):2193–2196, 2012. * [28] Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. Advances in neural information processing systems, 30, 2017. * [29] Zbigniew Michalewicz and Marc Schoenauer. Evolutionary algorithms for constrained parameter optimization problems. Evolutionary computation, 4(1):1–32, 1996. * [30] Christoph Molnar. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. E-book on leanpub.com, 2020. * [31] Daniel Omeiza, Helena Webb, Marina Jirotka, and Lars Kunze. Explanations in autonomous driving: A survey. IEEE Transactions on Intelligent Transportation Systems, 23(8):10142–10162, 2021. * [32] Iason Papaioannou, Wolfgang Betz, Kilian Zwirglmaier, and Daniel Straub. Mcmc algorithms for subset simulation. Probabilistic Engineering Mechanics, 41:89–103, 2015. * [33] David Powell and Michael M. Skolnick. Using genetic algorithms in engineering design optimization with non-linear constraints. In Proceedings of the 5th International Conference on Genetic Algorithms, page 424–431, San Francisco, CA, USA, 1993. Morgan Kaufmann Publishers Inc. * [34] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 
“Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proc. of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, pages 1135–1144, New York, NY, USA, 2016. ACM. * [35] Gerhart Iwo Schueller, Helmuth J Pradlwarter, and Phaedon-Stelios Koutsourelakis. A critical appraisal of reliability estimation procedures for high dimensions. Probabilistic engineering mechanics, 19(4):463–474, 2004. * [36] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proc. of the IEEE Int. Conf. on Computer Vision, pages 618–626, 2017. * [37] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In International conference on machine learning, pages 3145–3153. PMLR, 2017. * [38] Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, and Himabindu Lakkaraju. Fooling lime and shap: Adversarial attacks on post hoc explanation methods. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pages 180–186, 2020. * [39] J Springenberg, Alexey Dosovitskiy, Thomas Brox, and M Riedmiller. Striving for simplicity: The all convolutional net. In ICLR (workshop track), 2015. * [40] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In International conference on machine learning, pages 3319–3328. PMLR, 2017. * [41] Ruixiang Tang, Ninghao Liu, Fan Yang, Na Zou, and Xia Hu. Defense against explanation manipulation. Frontiers in Big Data, 5, 2022. * [42] Bas HM Van der Velden, Hugo J Kuijf, Kenneth GA Gilhuijs, and Max A Viergever. Explainable artificial intelligence (xai) in deep learning-based medical image analysis. Medical Image Analysis, page 102470, 2022. * [43] Benjie Wang, Stefan Webb, and Tom Rainforth. Statistically robust neural network classification. In Proc. of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, volume 161 of UAI’21 of PMLR, pages 1735–1745, 2021. * [44] Stefan Webb, Tom Rainforth, Yee Whye Teh, and M. Pawan Kumar. A statistical approach to assessing neural network robustness. In 7th Int. Conf. on Learning Representations (ICLR’19). OpenReview.net, 2019. * [45] Matthew Wicker, Juyeon Heo, Luca Costabello, and Adrian Weller. Robust explanation constraints for neural networks. arXiv preprint arXiv:2212.08507, 2022. * [46] Yao-Yuan Yang, Cyrus Rashtchian, Hongyang Zhang, Russ R Salakhutdinov, and Kamalika Chaudhuri. A closer look at accuracy vs. robustness. Advances in neural information processing systems, 33:8588–8601, 2020. * [47] Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Suggala, David I Inouye, and Pradeep K Ravikumar. On the (in) fidelity and sensitivity of explanations. Advances in Neural Information Processing Systems, 32, 2019. * [48] Fuxun Yu, Zhuwei Qin, Chenchen Liu, Liang Zhao, Yanzhi Wang, and Xiang Chen. Interpreting and evaluating neural network robustness. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 4199–4205, 2019. * [49] Xinyang Zhang, Ningfei Wang, Hua Shen, Shouling Ji, Xiapu Luo, and Ting Wang. Interpretable Deep Learning under Fire. In 29th USENIX Security Symposium, USENIX Security 20\. USENIX Association, Aug. 2020. * [50] Yu Zhang, Peter Tiňo, Aleš Leonardis, and Ke Tang. A survey on neural network interpretability. IEEE Transactions on Emerging Topics in Computational Intelligence, 2021. 
* [51] Xingyu Zhao, Wei Huang, Alec Banks, Victoria Cox, David Flynn, Sven Schewe, and Xiaowei Huang. Assessing the Reliability of Deep Learning Classifiers Through Robustness Evaluation and Operational Profiles. In AISafety’21 Workshop at IJCAI’21, volume 2916. ceur-ws.org, 2021. * [52] Xingyu Zhao, Wei Huang, Xiaowei Huang, Valentin Robu, and David Flynn. BayLIME: Bayesian local interpretable model-agnostic explanations. In Proc. of the 37th Conference on Uncertainty in Artificial Intelligence, volume 161 of UAI’21, pages 887–896. PMLR, 2021.

## 8 Appendix

### 8.1 Feature-Attribution based XAI

##### Guided Backpropagation: It computes the gradient of the output with respect to the input, but only the non-negative components of the gradients are propagated, so as to highlight the important pixels in the image [39].

##### Gradient $\times$ Input: The map $g(x)=x\odot\frac{\partial f(x)}{\partial x}$ is preferable to the gradient alone, as it leverages the sign and strength of the input to improve the interpretation sharpness [37].

##### Integrated Gradients: Instead of calculating a single derivative, this approach integrates the gradients from some baseline to the current input value, $g(x)=(x-\bar{x})\int_{\alpha=0}^{1}\frac{\partial f(\bar{x}+\alpha(x-\bar{x}))}{\partial x}d\alpha$, addressing the saturation and thresholding problems [40].

##### GradCAM: Gradient-weighted Class Activation Mapping (Grad-CAM) generates visual explanations for convolutional neural networks, using the gradients flowing into the final convolutional layer to produce a coarse localization map that highlights the regions of the image relevant to the prediction [36].

##### Layer-wise Relevance Propagation (LRP): LRP operates by propagating the output $f(x)$ backwards, subject to a conservation rule [7]. Given neurons $j$ and $k$ in two consecutive layers, propagating the relevance score $R_{k}$ to the neurons $j$ in the lower layer can be expressed as $R_{j}=\sum_{k}\frac{z_{jk}}{\sum_{j}z_{jk}}R_{k}$ where the weight $z_{jk}=w_{jk}x_{k}$ is the weighted activation, representing the share of neuron $k$'s relevance that is propagated to neuron $j$.

##### DeepLift: It is an improved version of LRP that considers changes in the neuron activations from a reference point when propagating the relevance scores [37]. The rescale rule is used to assign contribution scores to each neuron.

##### Perturbation-based: LIME trains an interpretable local surrogate model, such as a linear regression model, by sampling points around the input sample, and uses the regression coefficients as the interpretation result [34]. SHAP calculates the attribution based on Shapley values from cooperative game theory [28]. It involves taking permutations of the input features and adding them one by one to the baseline; the output difference after adding an input feature corresponds to its attribution.

### 8.2 Classification and Interpretation Robustness

Suppose the gradient-based interpretation can be written as $g(x)=\nabla\ell(x)$, where $\ell$ can be the cross-entropy loss (or our defined prediction loss $J$). We leverage the Lipschitz continuous gradient to reveal the relation between classification robustness and interpretation robustness as follows.
A differentiable function $\ell(x)$ is called smooth within the local region $B(x,r)$ iff it has a Lipschitz continuous gradient, i.e., if $\exists K>0$ such that $||\nabla\ell(x^{\prime})-\nabla\ell(x)||\leq K||x^{\prime}-x||,\quad\forall x^{\prime}\in B(x,r).$ (17)

###### Proposition 1 A Lipschitz continuous gradient implies: $||\ell(x^{\prime})-\ell(x)||\leq||\nabla\ell(x)||r+\frac{K}{2}r^{2}$ (18)

Prop. 1 says that the change of the classification loss is bounded by the input gradient norm $||\nabla\ell(x)||$ together with $\frac{K}{2}$. $K$ can be chosen as the Frobenius norm of the input hessian, $||H(x)||_{F}$ [15]. Therefore, regularising the input gradient and the input hessian can affect both classification robustness and interpretation robustness.

##### Proof. We first show that, for $K>0$, $||\nabla\ell(x_{1})-\nabla\ell(x_{2})||\leq K||x_{1}-x_{2}||$ implies $\ell(x_{1})-\ell(x_{2})\leq\nabla\ell(x_{2})^{T}(x_{1}-x_{2})+\frac{K}{2}||x_{1}-x_{2}||^{2}$ Recall from integral calculus that $\ell(a)-\ell(b)=\int_{b}^{a}\nabla\ell(\theta)\,d\theta$, so $\displaystyle\ell(x_{1})-\ell(x_{2})=$ $\displaystyle\int_{0}^{1}\nabla\ell(x_{2}+\tau(x_{1}-x_{2}))^{T}(x_{1}-x_{2})\,d\tau=$ $\displaystyle\int_{0}^{1}(\nabla\ell(x_{2}+\tau(x_{1}-x_{2}))^{T}-\nabla\ell(x_{2})^{T}+\nabla\ell(x_{2})^{T})$ $\displaystyle(x_{1}-x_{2})\,d\tau$ As $\nabla\ell(x_{2})$ is independent of $\tau$, it can be taken out of the integral: $\displaystyle\ell(x_{1})-\ell(x_{2})=\nabla\ell(x_{2})^{T}(x_{1}-x_{2})+$ $\displaystyle\int_{0}^{1}(\nabla\ell(x_{2}+\tau(x_{1}-x_{2}))^{T}-\nabla\ell(x_{2})^{T})(x_{1}-x_{2})\,d\tau$ Moving $\nabla\ell(x_{2})^{T}(x_{1}-x_{2})$ to the left-hand side and taking the absolute value gives $\displaystyle|\ell(x_{1})-\ell(x_{2})-\nabla\ell(x_{2})^{T}(x_{1}-x_{2})|=$ $\displaystyle|\int_{0}^{1}(\nabla\ell(x_{2}+\tau(x_{1}-x_{2}))^{T}-\nabla\ell(x_{2})^{T})(x_{1}-x_{2})\,d\tau|\leq$ $\displaystyle\int_{0}^{1}|(\nabla\ell(x_{2}+\tau(x_{1}-x_{2}))^{T}-\nabla\ell(x_{2})^{T})(x_{1}-x_{2})|\,d\tau\leq_{c.s.}$ $\displaystyle\int_{0}^{1}||(\nabla\ell(x_{2}+\tau(x_{1}-x_{2}))-\nabla\ell(x_{2}))||||(x_{1}-x_{2})||\,d\tau$ where $\leq_{c.s.}$ denotes the Cauchy–Schwarz inequality. Applying the Lipschitz continuous gradient, we get $\displaystyle||(\nabla\ell(x_{2}+\tau(x_{1}-x_{2}))-\nabla\ell(x_{2}))||$ $\displaystyle\leq K||\tau(x_{1}-x_{2})||$ $\displaystyle\leq K\tau||x_{1}-x_{2}||$ Note that $\tau\geq 0$, so the absolute value of $\tau$ can be dropped. We then obtain $\displaystyle|\ell(x_{1})-\ell(x_{2})-\nabla\ell(x_{2})^{T}(x_{1}-x_{2})|\leq$ $\displaystyle\int_{0}^{1}K\tau||x_{1}-x_{2}||^{2}\,d\tau=\frac{K}{2}||x_{1}-x_{2}||^{2}$ Finally, taking the norm of both sides and applying the triangle inequality yields $\begin{split}||\ell(x^{\prime})-\ell(x)||&\leq||\nabla\ell(x)^{T}(x^{\prime}-x)+\frac{K}{2}||x^{\prime}-x||^{2}||\\\ &\leq||\nabla\ell(x)||||x^{\prime}-x||+\frac{K}{2}||x^{\prime}-x||^{2}\\\ &\leq||\nabla\ell(x)||r+\frac{K}{2}r^{2}\\\ \end{split}$ (19) QED

### 8.3 Genetic Algorithm based Optimisation

The Genetic Algorithm (GA) is a classic evolutionary algorithm for solving either constrained or unconstrained optimisation problems. It mimics biological evolution by selecting the fittest individuals in the population, which become the parents of the next generation. It consists of four steps—initialisation, selection, crossover, and mutation—the last three of which are repeated until the fitness values converge.

##### Initialisation
The initialisation of the population is crucial for quick convergence.
A diverse initial population helps the search approach the global optimum [26]. Normally, we use a Gaussian distribution centred at the input seed $x$, or a uniform distribution, to generate a set of diverse perturbed inputs within the norm ball $B(x,r)$.

##### Selection
A fitness function is defined to select fit individuals as parents for the later operations. We use fitness proportionate selection [27]. $p_{i}=\frac{\mathcal{F}_{i}}{\sum_{i=1}^{n}\mathcal{F}_{i}}$ (20) The fitness value is used to associate a selection probability $p_{i}$ with each individual, which maintains good diversity of the population and avoids premature convergence. The fitness function is the objective function to be optimised. For example, a previous work applies GA to perturbation optimisation to generate high-quality adversarial examples [12]. In this paper, the explanation discrepancy is optimised to find the worst-case adversarial explanations.

Figure 5: Illustration of crossover and mutation in GA

##### Crossover
The crossover operator combines a pair of parents from the previous step to generate a pair of children, which share many characteristics of the parents. Half of the elements of the parents are randomly exchanged.

##### Mutation
Some elements of the children are randomly altered to add variance to the evolution. Note that the mutated samples must still fall within the norm ball $B(x,r)$. Finally, the children and parents form the individuals of the next generation.

##### Termination
The termination condition of GA is that either the maximum number of iterations is reached or the best fitness value reaches a plateau, such that successive iterations no longer produce better results. In this paper, we fix the maximum number of iterations for simplicity. GA can be directly applied to unconstrained optimisation when the objective function equals the fitness function. Constrained optimisation is more challenging, and different strategies have been proposed to handle non-linear constraints in GA [29]. One popular approach is based on the superiority of feasible individuals, which distinguishes between feasible and infeasible solutions [33].

### 8.4 Subset Simulation

Subset Simulation (SS) is widely used in reliability engineering to compute small failure probabilities. The main idea of SS is to introduce intermediate failure events so that the failure probability can be expressed as a product of larger conditional failure probabilities [6]. Suppose the distribution of perturbed inputs within the norm ball is $q(x)$, and the failure event is denoted as $F$. Let $F=F_{m}\subset F_{m-1}\subset\cdots\subset F_{2}\subset F_{1}$ be a sequence of nested events so that $F_{m}=\bigcap_{i=1}^{m}F_{i}$. By the definition of conditional probability, we get $\begin{split}P_{F}&=P(F_{m})=P(\bigcap_{i=1}^{m}F_{i})\\\ &=P(F_{m}|\bigcap_{i=1}^{m-1}F_{i})P(\bigcap_{i=1}^{m-1}F_{i})\\\ &=P(F_{m}|F_{m-1})P(\bigcap_{i=1}^{m-1}F_{i})\\\ &=P(F_{m}|F_{m-1})\cdots P(F_{2}|F_{1})P(F_{1})\\\ &=P(F_{1})\prod_{i=2}^{m}P(F_{i}|F_{i-1})\end{split}$ (21) $F_{m}$ is usually a rare event, which means a large number of samples is required for a precise estimation by Simple Monte Carlo (SMC). SS decomposes the rare event into a series of intermediate events, which are more frequent. The conditional probabilities of the intermediate events involved in Eq. (11) can be chosen sufficiently large so that they can be efficiently estimated.
For example, with $P(F_{1})=1$ and $P(F_{i}|F_{i-1})=0.1$ for $i=2,3,4,5,6$, we have $P_{F}\approx 10^{-5}$, which is too small to estimate efficiently by SMC. The key point of SS is estimating $P(F_{1})$ and the conditional probabilities $P(F_{i}|F_{i-1})$. On the one hand, $F_{1}$ can be chosen as the common event such that, when $N$ perturbed inputs within the norm ball are simulated by SMC, $x^{\prime}_{k}\sim q(x^{\prime})$, all samples fall into $F_{1}$. On the other hand, computing the conditional probability $P(F_{i+1}|F_{i})=\frac{1}{N}\sum_{k=1}^{N}\mathbbm{1}_{F_{i+1}}(x^{\prime}_{k})\approx\rho$ (22) requires the simulation of $(1-\rho)N$ additional samples. For example, suppose we have $N$ samples belonging to $F_{i-1}$ with $i\geq 2$ and $P(F_{i}|F_{i-1})=\rho$, which indicates that $\rho N$ samples belong to $F_{i}$. To estimate the next conditional probability $P(F_{i+1}|F_{i})$, $(1-\rho)N$ additional samples lying in $F_{i}$ must be simulated to expand the population size back to $N$. Given the conditional distribution $q(x^{\prime}|F_{i})=q(x^{\prime})\mathbbm{1}_{F_{i}}(x^{\prime})/P(F_{i})$, on average $1/P(F_{i})$ samples are simulated before one such sample occurs. Markov Chain Monte Carlo based on the Metropolis–Hastings (MH) algorithm can be adopted to improve the efficiency. At intermediate iteration $i$, we have already obtained $\rho N$ samples lying in $F_{i}$, that is, $x^{\prime}\in F_{i}$. The target distribution is $q(\cdot|F_{i})$. We can use the MH algorithm to generate new samples $x^{\prime\prime}$ from the proposal distribution $g(x^{\prime\prime}|x^{\prime})$, which can be a normal or uniform distribution centred at $x^{\prime}$. The MH algorithm can be written as below:

#### 8.4.1 Initialisation
Pick a sample $x^{\prime}$ belonging to $F_{i}$. Set the step $t=0$ and let $x_{t}=x^{\prime}$.

#### 8.4.2 Iteration
At step $t$, generate a random candidate sample $x^{\prime\prime}$ according to $g(x^{\prime\prime}|x_{t})$. Calculate the acceptance probability $A(x^{\prime\prime},x_{t})=\min\\{1,\frac{q(x^{\prime\prime}|F_{i})}{q(x_{t}|F_{i})}\frac{g(x_{t}|x^{\prime\prime})}{g(x^{\prime\prime}|x_{t})}\\}$ (23) and accept the new sample $x^{\prime\prime}$ with probability $A(x^{\prime\prime},x_{t})$. Further check whether $x^{\prime\prime}\in F_{i}$, and reject $x^{\prime\prime}$ otherwise. In practice, we generate a uniform random number $u\in[0,1]$ and set $x_{t+1}$ as $x_{t+1}=\begin{cases}x^{\prime\prime}&\text{if $u\leq A(x^{\prime\prime},x_{t})$ and $x^{\prime\prime}\in F_{i}$}\\\ x_{t}&\text{Otherwise}\\\ \end{cases}$ (24) and increment $t=t+1$. We can run a large number of Markov chains simultaneously to enlarge the set of samples falling into $F_{i}$. However, as discussed in [23, 35], MH becomes inefficient for high-dimensional problems: the acceptance probability $A(x^{\prime\prime},x^{\prime})$ rapidly decreases with increasing dimension, which results in many repeated samples and highly correlated Markov chains. It is recommended to adapt the proposal distribution $g(x^{\prime\prime}|x^{\prime})$ after $M$ steps of MH [32], keeping the mean acceptance probability around 0.234 [17]. The whole process of SS can be summarized as follows. First, we simulate $N$ perturbed samples within the norm ball $B(x,r)$ (all belonging to $F_{1}$) and use SMC to estimate $P(F_{2}|F_{1})$. From these $N$ samples, we already obtain $\rho N$ samples distributed according to $q(\cdot|F_{2})$ (a minimal sketch of the accept/reject step above is given below).
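The sketch below illustrates one MH step, Eqs. (23)–(24), under the simplifying assumptions of a uniform target $q(\cdot|F_{i})$ inside an $L_{\infty}$ norm ball and a symmetric uniform proposal (so both ratios in Eq. (23) cancel); `in_F_i`, `step` and the ball radius `r` are illustrative placeholders, not the exact settings of our implementation.

```python
import numpy as np

def mh_step(x_t, x_seed, r, step, in_F_i, rng):
    """One Metropolis-Hastings step towards q(.|F_i), cf. Eqs. (23)-(24).

    x_t    : current sample, already in F_i
    x_seed : seed input defining the L_inf norm ball B(x_seed, r)
    step   : proposal width; the proposal g is uniform and symmetric,
             so g(x_t|x'')/g(x''|x_t) = 1 in Eq. (23)
    in_F_i : indicator callable for the current intermediate event F_i
    """
    proposal = x_t + rng.uniform(-step, step, size=x_t.shape)
    # With q uniform on the norm ball, q(x''|F_i)/q(x_t|F_i) is 1 whenever the
    # candidate stays inside the ball, so A(x'', x_t) reduces to a feasibility check.
    inside_ball = np.all(np.abs(proposal - x_seed) <= r)
    if inside_ball and in_F_i(proposal):
        return proposal          # accepted: first case of Eq. (24)
    return x_t                   # rejected: the chain stays put
```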
Starting from each of these $\rho N$ samples falling in $F_{2}$, we can create a Markov chain and run MH for $M$ steps to generate new samples distributed according to $q(\cdot|F_{2})$. In the initial SS method [6], $\rho N$ distinct Markov chains (with different starting points) are created, $1/\rho$ new samples are drawn from each chain, and the covariance between new samples in the same Markov chain has to be considered when evaluating the coefficient of variation (c.o.v.) of the final estimation of ${P}_{F}$. Ref. [10] modifies the algorithm by first enlarging the set to $N$ samples by resampling with replacement from the $\rho N$ samples; then $N$ Markov chains are constructed and only one sample is drawn from each chain. These newly generated samples can be utilised to estimate $P(F_{3}|F_{2})$. This process is repeated until the rare failure event of interest is reached. We obtain the final estimation of the failure event probability by “assembling” the conditional probabilities with Eq. (11).

#### 8.4.3 Statistical Property of SS Estimator

We present an analysis of the statistical properties of $P_{F_{i}}$ (shorthand for $P(F_{1})$ and $P(F_{i}|F_{i-1})$) and $P_{F}$. The analysis is based on the assumption that the Markov chain generated by the MH algorithm is ergodic, that is, the stationary distribution is unique and tends to the corresponding conditional probability distribution. Since we simulate samples from the Markov chain to estimate $P_{F_{i}}$ (cf. Eq. (22)), the coefficient of variation (c.o.v.) of $P_{F_{i}}$ is $\delta_{i}=\sqrt{\frac{1-P_{F_{i}}}{P_{F_{i}}N}(1+\lambda_{i})}$ (25) where $\lambda_{i}>0$ accounts for the dependence of the samples drawn from the Markov chain. This is in contrast to the case where SMC simulates independent samples from a known distribution ($\lambda_{i}=0$). As $N\rightarrow\infty$, the Central Limit Theorem (CLT) gives $\widebar{P}_{F_{1}}\rightarrow{P}(F_{1})$ and $\widebar{P}_{F_{i}}\rightarrow{P}(F_{i}|F_{i-1})$, so almost surely $\widebar{P}_{F}\rightarrow P(F_{1})\prod_{i=2}^{m}P(F_{i}|F_{i-1})=P_{F}$. It should be noted that $\widebar{P}_{F}$ is biased for finite $N$, but asymptotically unbiased, because the samples in $F_{i}$ used for computing $\widebar{P}_{F_{i}}$ are also used to start the Markov chains for computing $\widebar{P}_{F_{i+1}}$; this bias vanishes asymptotically as $N$ goes to infinity.

###### Proposition 2 $\widebar{P}_{F}$ is biased for finite $N$; the fractional bias is bounded by: $|E\left[\frac{\widebar{P}_{F}-P_{F}}{P_{F}}\right]|\leq\sum_{i>j}\delta_{i}\delta_{j}+o(1/N)=O(1/N)$ (26)

##### Proof. We define $Z_{i}=(\widebar{P}_{F_{i}}-P_{F_{i}})/\sigma_{i}$, so that $\widebar{P}_{F_{i}}=P_{F_{i}}+\sigma_{i}Z_{i}$. By the CLT, $E[Z_{i}]=0$ and $E[Z^{2}_{i}]=1$. $\displaystyle\frac{\widebar{P}_{F}-P_{F}}{P_{F}}$ $\displaystyle=\prod_{i=1}^{m}\widebar{P}_{F_{i}}/P_{F_{i}}-1$ $\displaystyle=\prod_{i=1}^{m}(1+\delta_{i}Z_{i})-1$ $\displaystyle=\prod_{i=1}^{m}\delta_{i}Z_{i}+\sum_{i=1}^{m}\delta_{i}Z_{i}+\sum_{i>j}\delta_{i}\delta_{j}Z_{i}Z_{j}+$ $\displaystyle\sum_{i>j>k}\delta_{i}\delta_{j}\delta_{k}Z_{i}Z_{j}Z_{k}+...$ Taking the expectation and using $E[Z_{i}]=0$, we further get $\displaystyle E\left[\frac{\widebar{P}_{F}-P_{F}}{P_{F}}\right]$ $\displaystyle=\left(\prod_{i=1}^{m}\delta_{i}\right)E\left[\prod_{i=1}^{m}Z_{i}\right]+\sum_{i>j}\delta_{i}\delta_{j}E[Z_{i}Z_{j}]$ $\displaystyle+\sum_{i>j>k}\delta_{i}\delta_{j}\delta_{k}E[Z_{i}Z_{j}Z_{k}]+...$ Since $\\{Z_{i}\\}$ are correlated, $E[Z_{i}Z_{j}]$, $E[Z_{i}Z_{j}Z_{k}]$, … are not zero, and $\widebar{P}_{F_{i}}$ is biased for every finite $N$.
By definition, $\delta_{i}$ is $O(1/\sqrt{N})$, which makes $\sum_{i>j}\delta_{i}\delta_{j}E[Z_{i}Z_{j}]$ of order $O(1/N)$, while the remaining terms with higher-order products of $\delta_{i}$ are $o(1/N)$. Taking the absolute value of both sides and using the Cauchy–Schwarz inequality to obtain $|E[Z_{i}Z_{j}]|\leq\sqrt{E[Z^{2}_{i}]E[Z^{2}_{j}]}=1$ completes the proof.

###### Proposition 3 $\widebar{P}_{F}$ is a consistent estimator and its c.o.v. $\delta$ is bounded by: $\delta^{2}=E\left[\frac{\widebar{P}_{F}-P_{F}}{P_{F}}\right]^{2}\leq\sum_{i,j=1}\delta_{i}\delta_{j}+o(1/N)=O(1/N)$ (27)

##### Proof. $\displaystyle E\left[\frac{\widebar{P}_{F}-P_{F}}{P_{F}}\right]^{2}$ $\displaystyle=E\left[\prod_{i=1}^{m}\delta_{i}Z_{i}+\sum_{i=1}^{m}\delta_{i}Z_{i}+\sum_{i>j}\delta_{i}\delta_{j}Z_{i}Z_{j}+...\right]^{2}$ $\displaystyle=\sum_{i,j=1}^{m}\delta_{i}\delta_{j}E[Z_{i}Z_{j}]+o(1/N)$ $\displaystyle\leq\sum_{i,j=1}^{m}\delta_{i}\delta_{j}+o(1/N)=O(1/N)$ since $\delta_{i}=O(1/\sqrt{N})$ and $E[Z_{i}Z_{j}]\leq 1$.

Note that the bias is accounted for because the c.o.v. $\delta$ is defined as the deviation about $P_{F}$, instead of $E[\widebar{P}_{F}]$. The upper bound corresponds to the case where the conditional probabilities $\\{P_{F_{i}}\\}$ are all correlated. Although $\\{P_{F_{i}}\\}$ are generally correlated, $\delta^{2}$ can be well approximated by $\sum_{i=1}^{m}\delta^{2}_{i}$. For simplicity, we can also assume that enough MH steps are taken to eliminate the dependence of the samples simulated by MCMC ($\lambda_{i}=0$) [10]. Then, using the sample mean $\widebar{P}_{F_{i}}$ to approximate $P_{F_{i}}$, we finally get $\widebar{\delta}^{2}\approx\sum_{i=1}^{m}\delta^{2}_{i}=\sum_{i=1}^{m}\frac{1-\widebar{P}_{F_{i}}}{\widebar{P}_{F_{i}}N}(1+\lambda_{i})\approx\sum_{i=1}^{m}\frac{1-\widebar{P}_{F_{i}}}{\widebar{P}_{F_{i}}N}$ (28) To get an idea of how many samples SS requires to achieve a given estimation accuracy for $P_{F}$, we assume the c.o.v. $\delta$, $\lambda_{i}=\lambda$ and $P(F_{i}|F_{i-1})=\rho$ are fixed, so that $m=\log P_{F}/\log\rho+1$ and $\delta^{2}=(m-1)\frac{1-\rho}{\rho N}(1+\lambda)$. The number of simulated samples in SS is then $N_{SS}\approx mN=\left(\frac{|\log P_{F}|^{2}}{|\log\rho|^{2}}+\frac{|\log P_{F}|}{|\log\rho|}\right)\frac{(1-\rho)(1+\lambda)}{\rho\delta^{2}}$ Thus, for fixed $\delta$ and $\rho$, $N_{SS}\propto(|\log P_{F}|^{2}+|\log\rho||\log P_{F}|)$. In comparison, SMC requires $N_{SMC}\propto 1/P_{F}$ samples. This indicates that SS is substantially more efficient at estimating small failure probabilities.

### 8.5 Complexity Analysis of Genetic Algorithm and Subset Simulation Applied on XAI Methods

Although the proposed evaluation methods can be applied to all kinds of feature-attribution based XAI techniques, the time complexity is extremely high for perturbation-based XAI methods, such as LIME and SHAP, which rely on random perturbations of the input features to yield explanations. The complexity of GA is $O(t\cdot N\cdot(c(fitness)+c(crossover)+c(mutation)))$, where $t$ and $N$ are the number of evolution iterations and the population size, respectively. When different XAI methods are chosen, the evaluation time of the fitness values, $c(fitness)$, changes correspondingly. The complexity of SS is related to the number of sub-events $m$, the number of MH steps $M$ and the number of simulated samples $N$. For estimating the conditional probability of each sub-event, $M$ MH steps are taken, and running each MH step requires evaluating the property function for $N$ samples.
Thus, the complexity of SS is approximately $O(m\cdot M\cdot N\cdot c(property))$. When different XAI methods are chosen, the evaluation time of the property function, $c(property)$, changes correspondingly.

Table 4: Time counts of $N\cdot c(cal\\_attr\\_dis)$ in seconds across different datasets ($N=1000$). Results are averaged over 10 runs.

| Dataset | Gradient x Input | Integrated Gradients | GradCAM | DeepLift | LIME | SHAP |
|---|---|---|---|---|---|---|
| MNIST | 0.0202 | 0.0512 | 0.0342 | 0.0382 | 99.21 | 25.80 |
| CIFAR-10 | 0.0909 | 0.3329 | 0.1222 | 0.1307 | 293.72 | 255.95 |
| CelebA | 0.0620 | 0.2759 | 0.0887 | 0.1029 | 739.59 | 692.75 |

From the definitions of the fitness function in GA and the property function in SS, both $c(fitness)$ and $c(property)$ can be approximated by the cost of computing the interpretation discrepancy, $c(cal\\_attr\\_dis)$. In practice, we can compute the interpretation discrepancy in a batch, e.g., $N$ samples can be run simultaneously to generate the explanations. Therefore, we measure the running time of $N\cdot c(cal\\_attr\\_dis)$ across different datasets and different XAI methods on an Nvidia A100. Results are presented in Table 4. LIME and SHAP take much more time than gradient-based XAI methods for the batch computation of the interpretation discrepancy. This is amplified by the iteration number $t$ in GA, or by the number of sub-events times the number of MH steps, $m\cdot M$, in SS, for a single evaluation of interpretation robustness.

### 8.6 Details of DL models

The information on the DL models under evaluation is presented in Table 5. All experiments were run on a machine with Ubuntu 18.04.5 LTS x86_64, an Nvidia A100 GPU and 40G RAM. The source code, DL models, datasets and all experiment results are available in the Supplementary Material, and will be publicly accessible at GitHub after the double-blind review process.

Table 5: Details of the datasets and DL models under evaluation. Accuracies are reported for the original (Org.), gradient-regularised (Grad. Reg.), hessian-regularised (Hess. Reg.) and adversarially trained (Adv. Train.) models.

| Dataset | Image Size | $r$ | DL Model | Org. Train | Org. Test | Grad. Reg. Train | Grad. Reg. Test | Hess. Reg. Train | Hess. Reg. Test | Adv. Train. Train | Adv. Train. Test |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MNIST | $1\times 32\times 32$ | 0.1 | LeNet5 | $1.000$ | $0.991$ | $0.993$ | $0.989$ | $0.993$ | $0.989$ | $0.994$ | $0.989$ |
| CIFAR-10 | $3\times 32\times 32$ | 0.03 | ResNet20 | $0.927$ | $0.878$ | $0.910$ | $0.876$ | $0.786$ | $0.779$ | $0.715$ | $0.703$ |
| CelebA | $3\times 64\times 64$ | 0.05 | MobileNetV1 | $0.934$ | $0.917$ | $0.918$ | $0.912$ | $0.908$ | $0.904$ | $0.769$ | $0.789$ |

### 8.7 Experiment on Interpretation Discrepancy Measures

We study the quality of three widely used metrics, i.e., Mean Square Error (MSE), Pearson Correlation Coefficient (PCC), and Structural Similarity Index Measure (SSIM) [14], for quantifying the visual discrepancy between two attribution maps. The proposed evaluation methods can produce adversarial interpretations under the guidance of different metrics. As shown in Fig. 6, the first row displays three seed inputs and the corresponding attribution maps. The following groups, separated by lines, show the adversarial interpretations of perturbed inputs found using the different metrics. The value of PCC appears to be relatively more accurate in reflecting the visual difference between the original interpretation of the seed input and the adversarial interpretations. A smaller PCC represents a larger visual difference between two attribution maps. In addition, the value range of PCC is 0$\sim$1, with 0$\sim$0.3 indicating weak association and 0.5$\sim$1.0 indicating strong association.
Therefore, it provides a uniform measurement across different seed inputs and datasets. In contrast, MSE can also precisely measure the visual difference, but it varies greatly with respect to seed inputs and image size. SSIM exhibits the worst performance in measuring the difference between attribution maps.

Figure 6: Comparison between PCC, SSIM and MSE as metrics of the interpretation discrepancy between the original interpretation and the adversarial interpretation generated by GA and SS. A smaller PCC, a smaller SSIM, and a larger MSE indicate greater difference. In this set of experiments, PCC is relatively the best at quantifying the visual difference between attribution maps.

### 8.8 Experiment on Parameter Sensitivity

Additional experiments on the hyper-parameter settings in GA and SS are presented in Fig. 7 and Fig. 8. The objective function, the interpretation discrepancy $\mathfrak{D}$ measured by PCC, is optimised and converges with an increasing number of iterations, while the prediction loss $J$, as the constraint, is gradually satisfied. The number of iterations in GA is more important than the population size. For the hyper-parameters in SS, apart from the sensitivity to MH steps, we also discuss the impact of the population size $n$ and the quantile $\rho$ for the conditional probability. As expected, increasing the population size improves the estimation precision, using SMC results with $10^{8}$ samples as the ground truth. However, there is no definitive answer as to which $\rho$ is best. In most cases, we find that $\rho=0.5$ can reduce the estimation error, but it takes more time per estimation: a larger $\rho$ means the problem is decomposed into more sub-events, and the additional estimation of conditional probabilities obviously costs more time. Fortunately, we find that the SS estimation accuracy is more sensitive to the number of MH steps $M$ and the population size $n$ than to $\rho$. Therefore, setting $\rho=0.1$ while increasing the MH steps and population size yields sufficiently accurate results. Finally, the rarity of the failure events can determine the setting of these hyper-parameters: the estimation accuracy for rarer events, e.g. $\text{PCC}<0.2$, is more sensitive to these parameters.

### 8.9 Experiments on Evaluating XAI methods

#### 8.9.1 Evaluation for Gradient-based XAI Methods

We evaluate the robustness of more XAI methods on the CIFAR10 and CelebA datasets, including “Deconvolution”, “Guided Backpropagation”, “Gradient$\times$Input”, “Integrated Gradients”, “GradCAM”, and “DeepLift”. Results are presented in Fig. 9. In terms of misinterpretation with preserved classification, Integrated Gradients is the most robust XAI method, owing to its integration of the gradient of the model's output with respect to the input: the integral averages the gradient-based attribution maps over several perturbed images instead of a single-point explanation. DeepLift has a similar smoothing mechanism, comparing the neuron activations with a reference point. Therefore, single-point explanations like Deconvolution and GradCAM are vulnerable to this type of misinterpretation when the DL model's loss surface is highly curved, leading to large changes in the gradients. Gradient$\times$Input is slightly better, as it leverages the sign and strength of the input. These XAI methods generally show similar robustness against misinterpretation conditioned on misclassification, although we find that single-point explanations are a little better than explanations averaged over several points under this circumstance.
We conjecture that the rarity of misclassification and misinterpretation makes it difficult to find a perturbed input whose attribution map differs from that of the input seed. Therefore, the averaged interpretation of a perturbed input tends to be consistent with the original interpretation.

#### 8.9.2 Evaluation for Perturbation-based XAI Methods

We also consider the robustness of interpretation for LIME and SHAP, the most popular perturbation-based XAI methods. In contrast to gradient-based XAI methods, whose robustness problem has been thoroughly studied, perturbation-based XAI methods are difficult to attack with adversarial noise because of their model-agnostic setting. As far as we know, the only adversarial attack on LIME/SHAP [38] requires scaffolding a biased DL model. That is conceptually different from the interpretation robustness considered in this paper, for which the internal structure of the DL model should not be maliciously modified. Thanks to the black-box nature of our evaluation approaches, we can assess the robustness of LIME/SHAP. As is known, image feature segmentation is an important procedure in LIME/SHAP: LIME/SHAP will produce inconsistent interpretations at each run when the number of samples is smaller than the number of image segments [52]. Therefore, we record the evaluation results when using different numbers of samples. For simplicity, we use quickshift to segment the images into around 40 super-pixels, which is the default setting of the LIME/SHAP tools.

Table 6: Robustness evaluation of perturbation-based XAI methods. The first two metric columns are the worst-case evaluation; the last two are the probabilistic evaluation.

| Dataset | XAI Method + Num_Samples | $sol_{\widehat{F}}$ (PCC) | $sol_{\widetilde{F}}$ (PCC) | $\ln P_{\widehat{F}}$ | $\ln P_{\widetilde{F}}$ |
|---|---|---|---|---|---|
| MNIST | LIME+50 | 0.0002 | 0.9886 | -0.46 | -12.96 |
| MNIST | LIME+200 | 6.88e-05 | 0.9350 | -0.37 | -14.59 |
| MNIST | LIME+500 | 8.59e-06 | 0.8360 | -0.31 | -16.98 |
| MNIST | SHAP+50 | 4.11e-05 | 0.9648 | -0.36 | -14.78 |
| MNIST | SHAP+200 | 0.0011 | 0.9708 | -0.39 | -14.44 |
| MNIST | SHAP+500 | 0.0005 | 0.9851 | -0.34 | -14.41 |
| CIFAR-10 | LIME+50 | 0.0002 | 0.9940 | -3.58 | -28.96 |
| CIFAR-10 | LIME+200 | 0.0001 | 0.9986 | -3.78 | -30.28 |
| CIFAR-10 | LIME+500 | 0.0001 | 0.9965 | -4.29 | -40.06 |
| CIFAR-10 | SHAP+50 | 0.0014 | 0.9973 | -3.75 | -48.56 |
| CIFAR-10 | SHAP+200 | 0.0016 | 0.9950 | -3.94 | -47.87 |
| CIFAR-10 | SHAP+500 | 0.0001 | 0.9982 | -3.84 | -46.24 |
| CelebA | LIME+50 | 0.0004 | 0.9571 | -1.17 | -39.63 |
| CelebA | LIME+200 | 1.23e-05 | 0.9824 | -4.06 | -41.41 |
| CelebA | LIME+500 | 0.0001 | 0.9739 | -5.53 | -48.55 |
| CelebA | SHAP+50 | 0.0008 | 0.9568 | -4.24 | -49.21 |
| CelebA | SHAP+200 | 0.0006 | 0.9520 | -4.97 | -50.69 |
| CelebA | SHAP+500 | 0.0002 | 0.9543 | -4.41 | -58.18 |

The initial results in Table 6 give the hint that perturbation-based XAI methods also suffer from a lack of interpretation robustness, especially when the classification is preserved but the interpretation differs. In addition, increasing the number of perturbed samples does not significantly improve interpretation robustness. In other words, even if we use enough perturbed samples for LIME/SHAP to produce precise interpretation results, they are still easily fooled by adversarial noise. In the second experiment, we further explore the influence of image segmentation on interpretation robustness. By assuming that the image segmentation is either fixed or not fixed after adding adversarial noise, we can check whether the adversarial noise changes the image segmentation and thereby indirectly affects the interpretation robustness of perturbation-based XAI methods (a minimal sketch of this check is given below).
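As an illustration, the segmentation-stability check can be sketched as follows: segment the seed and the perturbed input with the same quickshift settings and compare the resulting super-pixel masks. The kernel-size, max-dist and ratio values below are illustrative LIME-style defaults rather than the exact ones used for Table 7, the input is assumed to be an RGB array of shape (H, W, 3), and the pixel-wise label comparison is only a coarse proxy since labels are defined up to relabelling.

```python
import numpy as np
from skimage.segmentation import quickshift

def segmentation_changed(seed_img, perturbed_img, kernel_size=4, max_dist=200, ratio=0.2):
    """Check whether pixel-level noise alters the quickshift super-pixel masks
    that LIME/SHAP use as interpretable features (cf. Table 7)."""
    seg_seed = quickshift(seed_img, kernel_size=kernel_size,
                          max_dist=max_dist, ratio=ratio)
    seg_pert = quickshift(perturbed_img, kernel_size=kernel_size,
                          max_dist=max_dist, ratio=ratio)
    # Fraction of pixels whose super-pixel label differs between the two masks.
    return np.mean(seg_seed != seg_pert)
```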
The results in Table 7 show that the image segmentation currently used by LIME/SHAP is sensitive to pixel-level adversarial noise and produces different feature masks, which may affect the interpretation robustness. Nevertheless, fixing the image segmentation is not effective at defending against the second type of misinterpretation, i.e., wrong classification with preserved interpretation.

Table 7: Sensitivity of the image segmentation to adversarial noise when evaluating the interpretation robustness of LIME+200.

| Dataset | Image Segmentation | $sol_{\widehat{F}}$ (PCC) | $sol_{\widetilde{F}}$ (PCC) | $\ln P_{\widehat{F}}$ | $\ln P_{\widetilde{F}}$ |
|---|---|---|---|---|---|
| MNIST | Not Fixed | 6.88e-05 | 0.9350 | -0.37 | -14.59 |
| MNIST | Fixed | 0.3632 | 0.8892 | -34.22 | -17.38 |
| CIFAR-10 | Not Fixed | 0.0001 | 0.9986 | -3.78 | -30.28 |
| CIFAR-10 | Fixed | 0.0004 | 1.0000 | -100 | -41.33 |
| CelebA | Not Fixed | 1.23e-05 | 0.9824 | -4.06 | -41.41 |
| CelebA | Fixed | 0.3547 | 0.8289 | -100 | -38.72 |

The above observations align with the insight that interpretation robustness is attributed to the geometric properties of the DL model (i.e., the large curvature of the loss function), and not to the XAI methods. Therefore, the most effective way to address the problem is to train a DL model that is more robustly interpretable.

Table 8: Robustness evaluation of XAI methods on different neural network architectures for the CIFAR-10 dataset.

| Model Architecture | Eval Metrics | Gradient x Input | Integrated Gradients | GradCAM | DeepLift |
|---|---|---|---|---|---|
| ResNet20 | $sol_{\widehat{F}}$ | 0.0166 | 0.0375 | 0.0044 | 0.0212 |
| ResNet20 | $sol_{\widetilde{F}}$ | 0.8562 | 0.8308 | 0.8079 | 0.8551 |
| ResNet20 | $\ln P_{\widehat{F}}$ | -20.32 | -45.05 | -35.93 | -21.22 |
| ResNet20 | $\ln P_{\widetilde{F}}$ | -80.73 | -87.64 | -68.27 | -81.81 |
| MobileNetV2 | $sol_{\widehat{F}}$ | 0.0552 | 0.1167 | 0.0523 | 0.0712 |
| MobileNetV2 | $sol_{\widetilde{F}}$ | 0.7689 | 0.7885 | 0.7085 | 0.7707 |
| MobileNetV2 | $\ln P_{\widehat{F}}$ | -12.75 | -34.99 | -16.01 | -8.70 |
| MobileNetV2 | $\ln P_{\widetilde{F}}$ | -70.32 | -62.19 | -82.17 | -68.38 |
| VGG16 | $sol_{\widehat{F}}$ | 0.0767 | 0.1227 | 0.1133 | 0.0206 |
| VGG16 | $sol_{\widetilde{F}}$ | 0.7813 | 0.8240 | 0.8637 | 0.8358 |
| VGG16 | $\ln P_{\widehat{F}}$ | -14.42 | -53.48 | -47.52 | -44.25 |
| VGG16 | $\ln P_{\widetilde{F}}$ | -59.74 | -54.155 | -49.90 | -66.02 |
| DLA | $sol_{\widehat{F}}$ | 0.0737 | 0.0953 | 0.0078 | 0.0930 |
| DLA | $sol_{\widetilde{F}}$ | 0.7919 | 0.8111 | 0.2113 | 0.7983 |
| DLA | $\ln P_{\widehat{F}}$ | -8.48 | -28.69 | -4.31 | -9.77 |
| DLA | $\ln P_{\widetilde{F}}$ | -39.57 | -37.74 | -77.57 | -36.40 |

#### 8.9.3 Evaluation on Different NN Architectures

Apart from the evaluation on different datasets, we run experiments on different neural network architectures for the CIFAR10 dataset. The results in Table 8 show that Integrated Gradients remains the most robust XAI method against misinterpretation with preserved classification, regardless of the neural network architecture. However, the robustness against misinterpretation conditioned on misclassification varies with the internal structure of the neural network; GradCAM appears to be robust in most cases.

#### 8.9.4 Evaluation for Real-world Models

Table 9: Robustness evaluation for the Wide ResNet-50-2 model trained on the ImageNet dataset. Results are averaged over 20 samples.
| XAI Methods | $sol_{\widehat{F}}$ (PCC) | $sol_{\widetilde{F}}$ (PCC) | $\ln P_{\widehat{F}}$ | $\ln P_{\widetilde{F}}$ |
|---|---|---|---|---|
| Gradient x Input | 0.159 | 0.463 | -4.595 | -100 |
| Integrated Gradients | 0.191 | 0.515 | -39.235 | -100 |
| GradCAM | 0.233 | 0.944 | -98.725 | -76.688 |
| FullGrad | 0.315 | 0.799 | -100 | -75.716 |
| Extremal Perturbations | 0.126 | 0.957 | -4.321 | -32.612 |

Model accuracy: Top-1: 81.60%; Top-5: 95.76%. The first two metric columns are the worst-case evaluation; the last two are the probabilistic evaluation.

We add further experiments on the wide_ResNet50_2 model trained on the ImageNet-1K dataset in Table 9. We observe that FullGrad aggregates layer-wise gradient maps and thus combines the advantages of Gradient x Input and GradCAM. Extremal Perturbations seeks to find the region of an input image that maximally excites a certain output, and it is not robust to adversarial perturbation.

Figure 7: GA applied to test seeds (norm balls) from the MNIST and CIFAR10 datasets to find the worst-case interpretation discrepancy, measured by PCC. First row: fixed population size of 1000 and varied iterations; second row: fixed iterations and varied population size. The “Gradient$\times$Input” interpretation method is considered.

Figure 8: SS for estimating the probability of misinterpretation ($\ln P_{F}$) within a norm ball from the MNIST and CIFAR10 datasets, compared with SMC using $10^{8}$ samples (22 minutes per estimate for MNIST; 154 minutes per estimate for CIFAR10). Results are averaged over 10 runs. The “Gradient$\times$Input” interpretation method is considered.

Figure 9: Robustness evaluation of different interpretation methods based on 100 randomly selected samples from the CIFAR10 and CelebA test sets. From top to bottom: first row (worst-case evaluation) and second row (probabilistic evaluation). From left to right: first column (misinterpretation $\widehat{F}$) and second column (misinterpretation $\widetilde{F}$).
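For reference, the Gradient$\times$Input attribution used throughout Figs. 7–9 (and defined in Sec. 8.1 as $g(x)=x\odot\frac{\partial f(x)}{\partial x}$) can be computed directly with autograd. The sketch below assumes a generic PyTorch classifier with a batched logit output; it is a minimal illustration and not the exact attribution code of our tool.

```python
import torch

def gradient_x_input(model, x, target):
    """Gradient x Input attribution g(x) = x * df(x)/dx (cf. Sec. 8.1).

    model  : a PyTorch classifier returning class logits
    x      : input tensor of shape (1, C, H, W)
    target : integer index of the class whose logit is explained
    """
    x = x.clone().detach().requires_grad_(True)
    logit = model(x)[0, target]            # scalar logit of the target class
    grad, = torch.autograd.grad(logit, x)  # df(x)/dx
    return (x * grad).detach()             # element-wise product as the attribution map
```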
# Kondo destruction and fixed-point annihilation in a Bose-Fermi Kondo model

Haoyu Hu<EMAIL_ADDRESS>Department of Physics & Astronomy, Rice Center for Quantum Materials, Rice University, Houston, Texas 77005, USA

Qimiao Si <EMAIL_ADDRESS>Department of Physics & Astronomy, Rice Center for Quantum Materials, Rice University, Houston, Texas 77005, USA

###### Abstract

Quantum criticality that goes beyond the Landau framework of order-parameter fluctuations is playing a central role in elucidating the behavior of strange metals. A prominent case appears in Kondo lattice systems, which have been extensively analysed in terms of an effective Bose-Fermi Kondo model. In this model, a spin is simultaneously coupled to conduction electron bands and to gapless vector bosons that represent magnetic fluctuations. The Bose-Fermi Kondo model features interacting fixed points of Kondo destruction with such properties as dynamical Planckian ($\hbar\omega/k_{\rm B}T$) scaling and loss of quasiparticles. Here we carry out a renormalization-group analysis of the model with spin isotropy and identify pair-wise annihilations of the fixed points as the spectrum of the bosonic bath evolves. Our analysis not only provides an essentially complete understanding of the previous numerical results for an SU(2)-symmetric model, but also reveals a surprising feature of sequential fixed-point annihilation. Our results lay the foundation for the understanding of quantum criticality in spin-isotropic heavy-fermion metals as well as in doped Mott-Hubbard systems.

Introduction: Quantum criticality appears in a variety of strongly correlated metallic systems Special issue (2013); Coleman and Schofield (2005); Paschen and Si (2021); Kirchner _et al._ (2020); Sachdev (1999), with the antiferromagnetic (AF) case being prototypical. It provides a setting for strange metals with a complete loss of quasiparticles and with a dynamical Planckian ($\hbar\omega/k_{\rm B}T$) scaling Si _et al._ (2001); Coleman _et al._ (2001); Senthil _et al._ (2004). Such properties are to be contrasted with the Landau description of metallic quantum criticality. In the Landau framework, phases of matter are identified by order parameters, which specify the spontaneous breaking of global symmetries, and quantum criticality is described by the spatial and temporal fluctuations of the order parameter: a continuous AF-to-paramagnetic phase transition at $T=0$ is characterized by the slow fluctuations of the staggered magnetization Hertz (1976); Millis (1993); at such a spin-density-wave quantum critical point (QCP), quasiparticles remain and the underlying fixed point is Gaussian, leading to a violation of dynamical Planckian scaling. In AF heavy-fermion metals, the beyond-Landau description Si _et al._ (2001); Coleman _et al._ (2001); Senthil _et al._ (2004) features the added critical ingredient in the form of Kondo destruction. Experiments on many heavy-fermion strange metals strongly support the beyond-Landau nature of the quantum critical points Paschen and Si (2021); Kirchner _et al._ (2020); Paschen _et al._ (2004); Gegenwart _et al._ (2007); Friedemann _et al._ (2010); Shishido _et al._ (2005); Park _et al._ (2006); Knebel _et al._ (2008); Custers _et al._ (2012); Martelli _et al._ (2019); Schröder _et al._ (2000); Prochaska _et al._ (2020); Chen _et al._ (2022).
Given the broad current interest in strange-metal behavior Keimer and Moore (2017); Paschen and Si (2021); Phillips _et al._ (2022), it is opportune to elucidate its physics by expanding our understanding of Kondo-destruction quantum criticality. The Bose-Fermi Kondo (BFK) model has been extensively recognized as an important case study in this context, given that it is an effective description Si _et al._ (2014) of the Kondo lattice Hamiltonian in the extended dynamical mean-field theory (EDMFT) Si and Smith (1996a); Smith and Si (2000); Chitra and Kotliar (2000). The model has been treated by the renormalization group (RG) method within an $\epsilon$-expansion (see below for the definition of $\epsilon$) Si and Smith (1996b); Smith and Si (1999); Sengupta (2000); Si _et al._ (2001, 2003); Zhu and Si (2002); Zaránd and Demler (2002). Various numerical means, ranging from Monte Carlo to numerical renormalization group methods Grempel and Si (2003); Zhu _et al._ (2003); Glossop and Ingersent (2007); Zhu _et al._ (2007), have been used to analyse the Ising-anisotropic model. For the SU(2)-symmetric BFK model, numerical studies have only recently become possible: Ref. Cai and Si, 2019 developed a continuous-time Monte Carlo method for this case and identified an unexpectedly large number (eight in total) of fixed points for small (but positive) $\epsilon$, which are pair-wise annihilated as $\epsilon$ is increased. Understanding these surprising results is important not only for beyond-Landau quantum criticality and strange-metal physics but also for the decoherence physics of spin qubits.

In this work, we carry out analytical calculations to identify all the fixed points of the SU(2)-symmetric BFK model seen in the numerical simulations. Our calculation utilizes an RG method that is based on a large-$S$ expansion (with $S$ denoting the spin size) Cuomo _et al._ (2022). Our results not only capture the phenomenon of pair-wise fixed-point annihilation as a function of increasing $\epsilon$, as suggested by the numerical results, but also uncover a hitherto unsuspected result that such annihilations develop in sequence. The BFK model can be understood as an effective $(0+1)$-dimensional nonlinear $\sigma$ model, with not only a topological term but also an extra Kondo coupling to gapless fermions. As we show later, the Kondo coupling affects the topological term. The interplay among the Kondo interaction, the topological term, and the bosonic coupling leads to a rich phase diagram with interacting fixed points that annihilate in sequence. Our results are important for the understanding of Kondo-destruction quantum criticality in correlated systems such as heavy-fermion metals and doped Mott-Hubbard systems.

Model: We focus on the following Hamiltonian for the BFK model:

$\displaystyle H$ $\displaystyle=$ $\displaystyle J_{K}\sum_{\mu,a}S^{\mu}s^{\mu}_{c,a}(\bm{r_{0}})+g\sum_{\mu}S^{\mu}\phi^{\mu}(\bm{r_{0}})$ (1) $\displaystyle+$ $\displaystyle\sum_{\bm{k},\sigma,a}\epsilon_{\bm{k}}c_{\bm{k},\sigma,a}^{\dagger}c_{\bm{k},\sigma,a}+\frac{1}{2}\sum_{\bm{p},\mu}E_{\bm{p}}\phi_{-\bm{p}}^{\mu}\phi_{\bm{p}}^{\mu}\,.$

Here, $S^{\mu}(\mu\in\\{x,y,z\\})$ denotes a spin-$S$ impurity at site $\bm{r_{0}}=\bm{0}$. $c_{\bm{k},\sigma,a}^{\dagger}$ creates a conduction electron with momentum $\bm{k}$, spin $\sigma$, orbital $a$ (where $a\in\\{1,2,\ldots,K\\}$) and dispersion $\epsilon_{\bm{k}}$.
We consider a generic filling of the conduction electrons with a Fermi surface in the non-interacting limit, which gives a constant conduction electron density of states near the Fermi energy: $\rho_{c}(\epsilon)=\sum_{\bm{k}}\delta(\epsilon-\epsilon_{\bm{k}})\sim\rho_{0}$. The spin operator of the conduction electron with orbital $a$ at site $\bm{r_{0}}$ is defined as $s_{c,a}^{\mu}(\bm{r_{0}})=c_{\bm{r_{0}},a}^{\dagger}\frac{\sigma^{\mu}}{2}c_{\bm{r_{0}},a}$, where $\sigma^{x,y,z}$ are the Pauli matrices. It couples to the impurity at the same site via the Kondo coupling $J_{K}$. $\phi_{\bm{p}}^{\mu}$ describes a free $O(3)$ bosonic field with momentum $\bm{p}$ and kinetic energy $E_{\bm{p}}=\bm{p}^{2}$. The spectral function of the $\phi$ fields is taken to be $\rho_{b}(\omega)=\sum_{\bm{p}}\delta(\omega-E_{\bm{p}})\propto|\omega|^{1-\epsilon}$. $g$ is the coupling strength between the bosonic field at site $\bm{r}_{0}$ and the impurity spin. To perform a large-$S$ expansion, we also rescale the fields and coupling constants as $\phi^{\mu}\rightarrow\sqrt{S}\phi^{\mu}$, $g\rightarrow S^{-1/2}g$, $J_{K}\rightarrow 2S^{-1}J_{K}$. All the terms of the Hamiltonian are then of order $S$. Throughout the work, we focus on the case of perfect screening with $\kappa=K/S=2$.

RG analysis and beta functions: We now carry out an RG analysis of the model defined in Eq. 1. To do so, we first calculate the propagators of the fermions and bosons using the large-$S$ expansion. We note that these propagators can be expressed as functionals of the $n$-point correlation functions of $S^{\mu}$ [see Supplementary Material (SM) A]. Thus, it is sufficient to consider the effective theory of $S^{\mu}$ and its correlation functions. The effective action of $S^{\mu}$ reduces to

$\displaystyle S_{eff}$ $\displaystyle=$ $\displaystyle-iSS_{B}-\frac{g^{2}}{2}\int_{\tau,\tau^{\prime}}\sum_{\mu}S^{\mu}(\tau)G_{\phi}(\tau-\tau^{\prime})S^{\mu}(\tau^{\prime})$ $\displaystyle-K\text{Tr}\log\bigg{[}\delta_{\tau,\tau^{\prime}}\bigg{(}\delta_{k,k^{\prime}}(\partial_{\tau^{\prime}}+\epsilon_{\bm{k}})+J_{K}\sum_{\mu}S^{\mu}(\tau)\frac{\sigma^{\mu}}{2}\bigg{)}\bigg{]}\,,$

where $S_{B}$ describes the Berry phase for the spin, the kernel $G_{\phi}(\tau-\tau^{\prime})\propto|\tau-\tau^{\prime}|^{-(2-\epsilon)}$ is obtained by integrating out the bosonic fields, and the last line represents the contribution from integrating out the conduction electrons. We then introduce the spinor representation of the impurity spin $S^{\mu}=Sz_{a}^{\dagger}\frac{\sigma_{ab}^{\mu}}{2}z_{b}$ with the constraint $|z_{1}|^{2}+|z_{2}|^{2}=2$ Auerbach (2012). We expand the action around the saddle-point solution $z=z_{0}=\text{const}$ and integrate out zero modes to enforce the $SU(2)$ symmetry Zinn-Justin (2007). Without loss of generality, we consider $z=z_{0}+\delta z$ with

$\displaystyle z_{0}=\sqrt{2}\begin{bmatrix}1&0\end{bmatrix}\,,\qquad\delta z=\begin{bmatrix}-\frac{|\chi|^{2}}{\sqrt{2}S}&\frac{i\chi^{\dagger}}{\sqrt{S}}\end{bmatrix}\,.$ (3)

Here, the $\chi$ fields parametrize the $1/S^{0}$ fluctuations around the saddle-point solution $z_{0}$ Auerbach (2012).
The effective action at order $S^{0}$ is

$\displaystyle S_{\chi}=\int G^{-1}_{\chi}(i\Omega)|\chi(i\Omega)|^{2}\frac{d\Omega}{2\pi}\,,$ (4)

with the propagator (see SM A):

$\displaystyle G_{\chi}^{-1}(i\Omega)$ $\displaystyle=$ $\displaystyle i\Omega\bigg{[}1+p(J|\Omega|^{-r})\bigg{]}+|\Omega|\bigg{[}q(J|\Omega|^{-r})+h|\Omega|^{-\epsilon}\bigg{]}\,.$

Here, the functions $q(x)$ and $p(x)$ are defined as

$\displaystyle q(x)=2\kappa x^{2}\,/\left[\pi\,(1+x^{2})^{2}\right]\,,$ $\displaystyle p(x)=(\kappa/\pi)[x-x^{3}-(1+x^{2})^{2}\arctan(x)]/(1+x^{2})^{2}\,,$

where $p(x)\sim-8\kappa x^{3}/(3\pi)$ at small $x$, and $p(x\rightarrow\infty)\rightarrow-\kappa/2$. In the derivation, we regularize the conduction electron density of states to be $\rho_{c}(\epsilon)=\rho_{0}|\epsilon|^{-r}$. The exponent $r$ is introduced to perform a minimal-subtraction RG study and will be set to zero at the final step of the calculation Zhu and Si (2002). The other constants in the expression for $G_{\chi}^{-1}$ are $h\propto g^{2}$ and $J=\pi J_{K}\rho_{0}$. With the effective theory of $\chi$, which describes the $S^{0}$ fluctuations, we are able to calculate the correlation functions of the fermions and bosons (see SM A). We then require that the poles in these correlation functions are minimally removed Zhu and Si (2002); Zinn-Justin (2007), which gives the following beta functions (see SM A):

$\displaystyle\beta_{J}$ $\displaystyle=$ $\displaystyle\frac{J-J^{3}}{2S\pi(1+J^{2})}F(J)+\frac{J^{2}}{S\pi(1+J^{2})}H(J)$ $\displaystyle+\bigg{[}\frac{-(1-J^{2})Jh+2iJ^{2}h}{2S(1+J^{2})\pi c(J)(c(J)+h)}+\text{h.c.}\bigg{]}+o(\frac{1}{S})\,,$ $\displaystyle\beta_{h}$ $\displaystyle=$ $\displaystyle-\frac{1}{S}\tilde{\epsilon}h+\frac{h}{S\pi}F(J)-\frac{1}{S\pi}\frac{(c(J)+c(J)^{*})h}{|c(J)|^{2}}$ (6) $\displaystyle-\frac{1}{S\pi}\frac{-2h^{2}-(c(J)+c(J)^{*})h}{(c(J)+h)(c(J)^{*}+h)}+o(\frac{1}{S})\,.$

Here,

$\displaystyle F(j)$ $\displaystyle=$ $\displaystyle{2q(j)}/{\left[(1+p(j))^{2}+q(j)^{2}\right]}$ $\displaystyle H(j)$ $\displaystyle=$ $\displaystyle{-2(1+p(j))}/{\left[(1+p(j))^{2}+q(j)^{2}\right]}$ $\displaystyle c(J)$ $\displaystyle=$ $\displaystyle i(1+p(J))+q(J),$ (7)

and we rescale $\epsilon$ as $\epsilon=\tilde{\epsilon}/S$. All terms in the beta functions at order $1/S$ have been captured. By treating $1/S$ as the only small parameter in the beta functions, we are able to determine the phase diagram even for large $J$ and $h$.

Figure 1: Schematic renormalization group flows for $\tilde{\epsilon}<\tilde{\epsilon}_{1}$ (a), $\tilde{\epsilon}_{1}<\tilde{\epsilon}<\tilde{\epsilon}_{2}$ (b) and $\tilde{\epsilon}_{2}<\tilde{\epsilon}$ (c). The squares label stable fixed points, the circles mark unstable fixed points, and the wide line denotes a line of stable fixed points at $h\rightarrow\infty$. See SM C for the RG flow from explicit numerical evaluations of the beta functions.

RG flow: We are now in a position to analyze the RG flow. Note that there is always a trivial unstable fixed point at $J=h=0$, which we will not discuss further. In addition, at $J=0$, the model becomes a Bose-Kondo model and our results are consistent with previous works Cuomo _et al._ (2022); Nahum (2022). After numerical evaluations of Eq.
6 at perfect screening $\kappa=2$, we identify three different types of phase diagrams corresponding to three regions of $\tilde{\epsilon}$: $\tilde{\epsilon}<\tilde{\epsilon}_{1}$, $\tilde{\epsilon}_{1}<\tilde{\epsilon}<\tilde{\epsilon}_{2}$, and $\tilde{\epsilon}_{2}<\tilde{\epsilon}$, with $\tilde{\epsilon}_{1}=1/\pi$ and $\tilde{\epsilon}_{2}\simeq 0.36$. The schematic RG flows are shown in Fig. 1. There are in total seven types of non-trivial fixed points, with their properties – including their labels – listed in Tab. 1.

We first analyze the RG flow at $\tilde{\epsilon}<\tilde{\epsilon}_{1}$, as shown in Fig. 1 (a). At $h=0$, the model is equivalent to a standard Kondo impurity model and the behavior of $\beta_{J}$ at small $J$ (see SM C) is consistent with what has been found via perturbation theory Affleck (1995). Numerically, we find $J$ is always relevant and the system flows to a strong-coupling fixed point at $J=\infty$. We call it the Kondo, or K, fixed point; the bosonic coupling $h$ is irrelevant at this fixed point. For $J=0$, there are two stable fixed points at $h=(1-\sqrt{1-\tilde{\epsilon}^{2}})/{\tilde{\epsilon}},\infty$ and one critical point at $h=(1+\sqrt{1-\tilde{\epsilon}^{2}})/{\tilde{\epsilon}}$. Due to their local-moment nature, we label the two stable fixed points as the L fixed point (at small $h$) and the L′ fixed point (at infinite $h$). We denote the critical point that controls the quantum phase transition between L and L′ by LC. After introducing the Kondo coupling, we find $J$ is irrelevant at both L and LC, but marginal at L′. The marginal behavior extends to finite $J$, which yields a line of fixed points at $h=\infty$ (see SM C).

| Fixed point | Position | Stable? | Region | Color |
|---|---|---|---|---|
| L | $J=0,h=\frac{1-\sqrt{1-\tilde{\epsilon}^{2}}}{\tilde{\epsilon}}$ | Stable | $\tilde{\epsilon}<\tilde{\epsilon}_{1}$ | Grey |
| L′ | $h=\infty$ | Stable | | Yellow |
| K | $J=\infty,h=0$ | Stable | | Purple |
| LC | $J=0,h=\frac{1+\sqrt{1-\tilde{\epsilon}^{2}}}{\tilde{\epsilon}}$ | Unstable | $\tilde{\epsilon}<\tilde{\epsilon}_{1}$ | Green |
| KD | $J\neq 0,h\neq 0$ | Unstable | $\tilde{\epsilon}<\tilde{\epsilon}_{2}$ | Red |
| KD′ | $J=\infty,h=\frac{2}{\tilde{\epsilon}\pi}$ | Unstable | | Blue |
| C | $J\neq 0,h\neq 0$ | Unstable | $\tilde{\epsilon}<\tilde{\epsilon}_{2}$ | Brown |

Table 1: List of fixed points with their positions in the phase diagram, stability, region of existence and color in Fig. 1.

Away from the $J=0$, $h=0$ and $h=\infty$ lines, there are three unstable fixed points. We label them as KD, KD′, and C. Both KD and KD′ describe the Kondo destruction quantum phase transition, with one relevant direction and one irrelevant direction. A third fixed point, C, with all directions relevant, separates KD and KD′ in the RG flow. KD is located at smaller $J,h$ and controls the Kondo destruction quantum phase transition between the local-moment phase L and the Kondo-screened phase K. The quantum phase transition between L′ and K is described by the fixed point KD′ at $(h_{c}=\frac{2}{\tilde{\epsilon}\pi},J=\infty)$ (see SM C). The topology of the RG flow and the existence of fixed points change as we increase the value of $\tilde{\epsilon}$. Remarkably, we find a two-step sequence of fixed-point annihilation. The first annihilation occurs between L and LC at $\tilde{\epsilon}=\tilde{\epsilon}_{1}=1/\pi$ and the second fixed-point annihilation occurs between C and KD at $\tilde{\epsilon}=\tilde{\epsilon}_{2}\simeq 0.36$. We show the RG flow after the first annihilation in Fig.
1 (b) and the flow after the second annihilation in Fig. 1 (c).

Figure 2: Zeros of $\beta_{J}$ and $\beta_{h}$ at $\tilde{\epsilon}<\tilde{\epsilon}_{1}$, $\tilde{\epsilon}=\tilde{\epsilon}_{1}$, $\tilde{\epsilon}_{1}<\tilde{\epsilon}<\tilde{\epsilon}_{2}$, $\tilde{\epsilon}=\tilde{\epsilon}_{2}$ and $\tilde{\epsilon}>\tilde{\epsilon}_{2}$. The $\beta_{h}=0$ line has two crossing points with $J=0$ at $\tilde{\epsilon}<\tilde{\epsilon}_{1}$, which describe the L and LC fixed points, respectively. The two crossing points merge into one at $\tilde{\epsilon}=\tilde{\epsilon}_{1}$ and disappear at $\tilde{\epsilon}>\tilde{\epsilon}_{1}$. The $\beta_{h}=0$ and $\beta_{J}=0$ lines have two crossing points at $\tilde{\epsilon}<\tilde{\epsilon}_{2}$, which represent the KD and C fixed points, respectively. They merge into one at $\tilde{\epsilon}=\tilde{\epsilon}_{2}$ and disappear at $\tilde{\epsilon}>\tilde{\epsilon}_{2}$.

Fixed-point annihilation: We next turn to a detailed analysis of the fixed-point annihilation. The annihilation of two fixed points with $n$ and $n+1$ relevant directions, respectively, is generically allowed by the topology of the RG flow. In the BFK model, both the $n=0$ and $n=1$ types of annihilation happen. To be specific, we consider the zeros of the beta functions. The shapes of the $\beta_{J}=0$ and $\beta_{h}=0$ lines at various $\tilde{\epsilon}$ are shown in Fig. 2. At $\tilde{\epsilon}<\tilde{\epsilon}_{1}$, the $\beta_{h}=0$ line has two crossing points with the $J=0$ axis, which mark the positions of L (at smaller $h$) and LC (at larger $h$). With increasing $\tilde{\epsilon}$, the crossing points get closer and merge into one at $\tilde{\epsilon}=\tilde{\epsilon}_{1}$, signaling the first fixed-point annihilation between L and LC. The two crossing points between the $\beta_{J}=0$ and $\beta_{h}=0$ lines represent the KD (at smaller $h$) and C (at larger $h$) fixed points, respectively. Their annihilation occurs at $\tilde{\epsilon}=\tilde{\epsilon}_{2}$, where the two curves have a single touching point. Upon further increasing $\tilde{\epsilon}$, these fixed points disappear.

Role of Berry phase: We now discuss the role of the Berry phase and its interplay with the Kondo coupling. From the expression for $G_{\chi}^{-1}(i\Omega)$ above, we see that the Kondo coupling contributes an $i\Omega p(J)$ term to $G_{\chi}^{-1}$ in the $r=0$ limit. It reduces the Berry phase term, $i\Omega$, at finite $J$ and fully removes it at $J=\infty$ and $K=2S$. The exact cancellation reveals the nature of the Kondo effect, which can also be seen by directly solving the problem at $J=\infty$, as shown in SM D. It also suggests that, at $J=\infty$, the system should always stay in the Kondo-screened phase. Thus, we expect the critical point KD′, located at $J=\infty,h=2/(\tilde{\epsilon}\pi)$, to be moved to a finite $J$ when the next-order corrections in $1/S$ are included in the RG analysis.

Figure 3: RG flow based on numerical simulations at $S=1/2$ (adapted from Ref. Cai and Si (2019)), where $\tilde{\epsilon}^{*}\simeq 0.265$.

Comparison with numerical results: The BFK model is equivalent to the numerically studied Bose-Fermi Anderson model with a power-law bosonic-bath spectral function $\rho_{b}(\omega)=|\omega|^{1-\epsilon}$ when the Hubbard interaction $U$ is sufficiently large Cai and Si (2019). At $S=1/2$, two types of RG flow have been observed in the numerical simulations, as shown in Fig. 3 (adapted from Ref. Cai and Si (2019)).
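These numerically observed flows can be checked against a direct evaluation of the analytical beta functions. The minimal sketch below (an illustrative Python reimplementation under our conventions, not the simulation code of Ref. Cai and Si (2019) nor the evaluation behind Fig. S1) sweeps $\tilde{\epsilon}$ and reports the zeros of $\beta_{h}$ along the $J=0$ axis, where Eq. 6 reduces to the Bose-Kondo form quoted in SM C; the full $(J,h)$ flow can be scanned in the same way.

```python
import numpy as np
from scipy.optimize import brentq

S = 1.0  # the overall 1/S factor rescales the flow but does not move its zeros

def beta_h_J0(h, eps_t):
    """beta_h along the J = 0 axis (Bose-Kondo limit of Eq. 6; cf. SM C)."""
    return -eps_t * h / S + 2.0 * h**2 / (S * np.pi * (1.0 + h**2))

def nontrivial_zeros(eps_t, h_max=50.0, n=20000):
    """Bracket and refine the nontrivial zeros of beta_h at J = 0
    (the L and LC fixed points, when they exist)."""
    h = np.linspace(1e-4, h_max, n)
    f = beta_h_J0(h, eps_t)
    roots = []
    for i in range(n - 1):
        if f[i] * f[i + 1] < 0.0:  # a sign change brackets a zero
            roots.append(brentq(beta_h_J0, h[i], h[i + 1], args=(eps_t,)))
    return roots

# Sweep eps_t: two zeros (L, LC) exist at small eps_t and merge as eps_t grows.
for eps_t in np.linspace(0.25, 0.40, 16):
    print(f"eps_t = {eps_t:.3f}: zeros of beta_h at J=0 -> {np.round(nontrivial_zeros(eps_t), 3)}")
```

For small $\tilde{\epsilon}$ the sweep returns two zeros, which approach each other and then disappear as $\tilde{\epsilon}$ grows, in line with the first fixed-point annihilation discussed above.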
The first type of flow, at $\tilde{\epsilon}<\tilde{\epsilon}^{*}\simeq 0.265$, is comparable with our analytical results at $\tilde{\epsilon}<\tilde{\epsilon}_{1}$. The other, at $\tilde{\epsilon}>\tilde{\epsilon}^{*}$, is consistent with our analytical RG flow at $\tilde{\epsilon}>\tilde{\epsilon}_{2}$. We note that our analytical calculations have identified all the fixed points observed numerically, and the values of $\tilde{\epsilon}_{1,2}$ that we have derived turn out to be comparable to the numerical results. However, there are several distinctions in the finer details of our analytical results. First, from the analytical calculations the KD′ fixed point is located at infinite $J$, while it is at a finite $J$ numerically. As we explained before, the next-order corrections (in $1/S$) are expected to move the position of KD′ to a finite $J$, and we leave their analysis for a future study. Second, the analytical calculations find a line of fixed points at infinite $h$ instead of the single point suggested by the numerical simulations. In practice, it is hard to conclude numerically whether the L′ phase corresponds to a single fixed point or a line of fixed points, and it is also possible that the next-order (in $1/S$) corrections would shrink the line of fixed points to a single fixed point in our analytical approach. We leave the precise reconciliation of this issue for future studies. Finally, the numerical calculations fail to find RG flows corresponding to $\tilde{\epsilon}_{1}<\tilde{\epsilon}<\tilde{\epsilon}_{2}$. The likely reason for this discrepancy is that the numerical calculations have so far focused on only a small number of $\tilde{\epsilon}$ values and may have missed this narrow region of $\tilde{\epsilon}$.

Discussion: Several remarks are in order. First, our results have some general connections with the dimensional hierarchy of nonlinear $\sigma$ models with topological terms Nahum _et al._ (2015); Ma and Wang (2020); Abanov and Wiegmann (2000). An important distinction of our model is the additional coupling to gapless fermions, which renormalizes the effect of the topological term and leads to a richer phase diagram. Second, our model and results are in general relevant to various strongly correlated systems, including heavy-fermion metals and doped Mott-Hubbard systems. In these systems, the electrons tend to be localized and form local moments due to the strong Coulomb interactions that they experience. The local moments then interact with the remaining itinerant electrons and also with collective bosonic excitations, the effects of which are captured by the BFK model. Importantly, the fixed point KD′ has a local spin susceptibility $1/|\tau|^{\epsilon}$ Cai and Si (2019), which is essential for realizing the EDMFT solutions for the beyond-Landau quantum criticality Si _et al._ (2014). Thus, the understanding we have reached for the Kondo destruction physics of the SU(2) BFK model is crucially important for understanding the quantum criticality of magnetic heavy-fermion metals Special issue (2013); Coleman and Schofield (2005); Paschen and Si (2021); Kirchner _et al._ (2020). Generalizations that include additional local degrees of freedom, such as multipoles Martelli _et al._ (2019); Liu _et al._ (2021); Han _et al._ (2022), may also be considered. Finally, the BFK model also represents an effective description of doped Mott systems in the form of $t$-$J$-$U$ Hubbard models Smith and Si (2000).
As such, our results may also play an important role in the understanding of such systems Fang _et al._ (2022); Badoux _et al._ (2016); Chowdhury _et al._ (2021).

Conclusions: We have carried out a renormalization group analysis of the spin-isotropic Bose-Fermi Kondo model. Our RG analysis determines a set of interacting fixed points with dynamical Planckian scaling and loss of quasiparticles. Our results not only provide an understanding of the numerical results on the systematic evolution of the fixed points, but also reveal an unexpected phenomenon of sequential fixed-point annihilation. Our results lay the foundation for the understanding of quantum criticality in both spin-isotropic heavy-fermion metals and doped Mott-Hubbard systems.

We thank A. Cai and S. Paschen for useful discussions. This work has primarily been supported by the NSF Grant No. DMR-2220603 and by the Robert A. Welch Foundation Grant No. C-1411. The work of Q.S. was performed in part at the Aspen Center for Physics, which is supported by the NSF grant No. PHY-1607611.

## References

* Special issue (2013) Special issue, Phys. Status Solidi 250, 417 (2013). * Coleman and Schofield (2005) P. Coleman and A. J. Schofield, Nature 433, 226 (2005). * Paschen and Si (2021) S. Paschen and Q. Si, Nat. Rev. Phys. 3, 9 (2021). * Kirchner _et al._ (2020) S. Kirchner, S. Paschen, Q. Chen, S. Wirth, D. Feng, J. D. Thompson, and Q. Si, Rev. Mod. Phys. 92, 011002 (2020). * Sachdev (1999) S. Sachdev, _Quantum Phase Transitions_ (Cambridge University Press, Cambridge, 1999). * Si _et al._ (2001) Q. Si, S. Rabello, K. Ingersent, and J. L. Smith, Nature 413, 804 (2001). * Coleman _et al._ (2001) P. Coleman, C. Pépin, Q. Si, and R. Ramazashvili, J. Phys. Cond. Matt. 13, R723 (2001). * Senthil _et al._ (2004) T. Senthil, M. Vojta, and S. Sachdev, Phys. Rev. B 69, 035111 (2004). * Hertz (1976) J. Hertz, Phys. Rev. B 14, 1165 (1976). * Millis (1993) A. J. Millis, Phys. Rev. B 48, 7183 (1993). * Paschen _et al._ (2004) S. Paschen, T. Lühmann, S. Wirth, P. Gegenwart, O. Trovarelli, C. Geibel, F. Steglich, P. Coleman, and Q. Si, Nature 432, 881 (2004). * Gegenwart _et al._ (2007) P. Gegenwart, T. Westerkamp, C. Krellner, Y. Tokiwa, S. Paschen, C. Geibel, F. Steglich, E. Abrahams, and Q. Si, Science 315, 1049 (2007). * Friedemann _et al._ (2010) S. Friedemann, N. Oeschler, S. Wirth, C. Krellner, C. Geibel, F. Steglich, S. Paschen, S. Kirchner, and Q. Si, Proceedings of the National Academy of Sciences 107, 14547 (2010). * Shishido _et al._ (2005) H. Shishido, R. Settai, H. Harima, and Y. Ōnuki, J. Phys. Soc. Jpn. 74, 1103 (2005). * Park _et al._ (2006) T. Park, F. Ronning, H. Q. Yuan, M. B. Salamon, R. Movshovich, J. L. Sarrao, and J. D. Thompson, Nature 440, 65 (2006). * Knebel _et al._ (2008) G. Knebel, D. Aoki, J.-P. Brison, and J. Flouquet, J. Phys. Soc. Jpn. 77, 114704 (2008). * Custers _et al._ (2012) J. Custers, K. A. Lorenzer, M. Müller, A. Prokofiev, A. Sidorenko, H. Winkler, A. M. Strydom, Y. Shimura, T. Sakakibara, R. Yu, Q. Si, and S. Paschen, Nat. Mater. 11, 189 (2012). * Martelli _et al._ (2019) V. Martelli, A. Cai, E. M. Nica, M. Taupin, A. Prokofiev, C.-C. Liu, H.-H. Lai, R. Yu, K. Ingersent, R. Küchler, A. M. Strydom, D. Geiger, J. Haenel, J. Larrea, Q. Si, and S. Paschen, Proc. Natl. Acad. Sci. U.S.A. 116, 17701 (2019). * Schröder _et al._ (2000) A. Schröder, G. Aeppli, R. Coldea, M. Adams, O. Stockert, H. v. Löhneysen, E. Bucher, R. Ramazashvili, and P. Coleman, Nature 407, 351 (2000). * Prochaska _et al._ (2020) L.
Prochaska, X. Li, D. C. MacFarland, A. M. Andrews, M. Bonta, E. F. Bianco, S. Yazdi, W. Schrenk, H. Detz, A. Limbeck, Q. Si, E. Ringe, G. Strasser, J. Kono, and S. Paschen, Science 367, 285 (2020). * Chen _et al._ (2022) L. Chen, D. T. Lowder, E. Bakali, A. Andrews, W. Schrenk, M. Waas, R. Svagera, G. Eguchi, L. Prochaska, Q. Si, _et al._ , arXiv preprint arXiv:2206.00673 (2022). * Keimer and Moore (2017) B. Keimer and J. E. Moore, Nat. Phys. 13, 1045 (2017). * Phillips _et al._ (2022) P. W. Phillips, N. E. Hussey, and P. Abbamonte, arXiv preprint arXiv:2205.12979 (2022). * Si _et al._ (2014) Q. Si, J. H. Pixley, E. Nica, S. J. Yamamoto, P. Goswami, R. Yu, and S. Kirchner, Journal of the Physical Society of Japan 83, 061005 (2014). * Si and Smith (1996a) Q. Si and J. L. Smith, Phys. Rev. Lett. 77, 3391 (1996a). * Smith and Si (2000) J. L. Smith and Q. Si, Phys. Rev. B 61, 5184 (2000). * Chitra and Kotliar (2000) R. Chitra and G. Kotliar, Phys. Rev. Lett. 84, 3678 (2000). * Si and Smith (1996b) Q. Si and J. L. Smith, Phys. Rev. Lett. 77, 3391 (1996b). * Smith and Si (1999) J. L. Smith and Q. Si, Europhysics Letters (EPL) 45, 228 (1999). * Sengupta (2000) A. M. Sengupta, Phys. Rev. B 61, 4041 (2000). * Si _et al._ (2003) Q. Si, S. Rabello, K. Ingersent, and J. L. Smith, Phys. Rev. B 68, 115103 (2003). * Zhu and Si (2002) L. Zhu and Q. Si, Phys. Rev. B 66, 024426 (2002). * Zaránd and Demler (2002) G. Zaránd and E. Demler, Phys. Rev. B 66, 024427 (2002). * Grempel and Si (2003) D. Grempel and Q. Si, Phys. Rev. Lett. 91, 026401 (2003). * Zhu _et al._ (2003) J. Zhu, D. Grempel, and Q. Si, Phys. Rev. Lett. 91, 156404 (2003). * Glossop and Ingersent (2007) M. Glossop and K. Ingersent, Phys. Rev. Lett. 99, 227203 (2007). * Zhu _et al._ (2007) J.-X. Zhu, S. Kirchner, R. Bulla, and Q. Si, Phys. Rev. Lett. 99, 227204 (2007). * Cai and Si (2019) A. Cai and Q. Si, Phys. Rev. B 100, 014439 (2019). * Cuomo _et al._ (2022) G. Cuomo, Z. Komargodski, M. Mezei, and A. Raviv-Moshe, arXiv preprint arXiv:2202.00040 (2022). * (40) Supplementary Materials . * Auerbach (2012) A. Auerbach, _Interacting electrons and quantum magnetism_ (Springer Science & Business Media, 2012). * Zinn-Justin (2007) J. Zinn-Justin, _Phase transitions and renormalization group_ (Oxford University Press on Demand, 2007). * Nahum (2022) A. Nahum, arXiv preprint arXiv:2202.08431 (2022). * Affleck (1995) I. Affleck, arXiv preprint cond-mat/9512099 (1995). * Nahum _et al._ (2015) A. Nahum, J. T. Chalker, P. Serna, M. Ortuño, and A. M. Somoza, Phys. Rev. X 5, 041048 (2015). * Ma and Wang (2020) R. Ma and C. Wang, Phys. Rev. B 102, 020407 (2020). * Abanov and Wiegmann (2000) A. Abanov and P. Wiegmann, Nuclear Physics B 570, 685 (2000). * Liu _et al._ (2021) C.-C. Liu, S. Paschen, and Q. Si, arXiv preprint arXiv:2101.01807 (2021). * Han _et al._ (2022) S. Han, D. J. Schultz, and Y. B. Kim, arXiv preprint arXiv:2206.02808 (2022). * Fang _et al._ (2022) Y. Fang, G. Grissonnanche, A. Legros, S. Verret, F. Laliberté, C. Collignon, A. Ataei, M. Dion, J. Zhou, D. Graf, M. J. Lawler, P. A. Goddard, L. Taillefer, and B. J. Ramshaw, Nature Physics 18, 558 (2022). * Badoux _et al._ (2016) S. Badoux, W. Tabis, F. Laliberté, G. Grissonnanche, B. Vignolle, D. Vignolles, J. Béard, D. A. Bonn, W. N. Hardy, R. Liang, N. Doiron-Leyraud, L. Taillefer, and C. Proust, Nature 531, 210 (2016). * Chowdhury _et al._ (2021) D. Chowdhury, A. Georges, O. Parcollet, and S. Sachdev, arXiv preprint arXiv:2109.05037 (2021). ## Supplementary Material ### .1 A. 
Renormalization group analysis We begin by reiterating that our RG procedure is carried out for the Hamiltonian defined in Eq. 1. The effective actions $S$ and $S_{\chi}$ are introduced to describe the next-order fluctuations and calculate the correlations functions. The relevant correlation functions are: $\displaystyle\langle\sum_{\mu}[\phi^{\mu}(\bm{r},\tau=0)]^{2}\rangle_{imp}\,,$ $\displaystyle\sum_{\sigma,a}\langle c_{\sigma,a}(\bm{r_{0}},\tau)c_{\sigma,a}^{\dagger}(\bm{r_{0}},\tau)\rangle_{imp}\,.$ (S1) Here, $\langle\cdot\rangle_{imp}$ means we subtract the free-bulk contributions, and the bulk system has spacetime dimension $d=4-\epsilon$. Before performing calculations, we note that the exact formula of above correlations functions are $\displaystyle\langle\sum_{\mu}\phi^{\mu}(\bm{r},\tau=0)^{2}\rangle_{imp}=\int_{\tau,\tau^{\prime}}\frac{A_{\phi}}{2\pi}\frac{h}{(\bm{r}^{2}+\tau^{2})^{(2-\epsilon)/2}(\bm{r}^{2}+\tau^{\prime 2})^{(2-\epsilon)/2}}\langle\sum_{\mu}S^{\mu}(\tau)S^{\mu}(\tau^{\prime})\rangle_{imp}$ $\displaystyle-\sum_{\sigma,a}\langle c_{\sigma,a}(\bm{r_{0}},\tau)c_{\sigma,a}^{\dagger}(\bm{r_{0}},\tau)\rangle_{imp}=K\bigg{\langle}\bigg{[}\delta_{\tau,\tau^{\prime}}\delta_{k,k^{\prime}}(\partial_{\tau^{\prime}}+\epsilon_{\bm{k}})+\delta_{\tau,\tau^{\prime}}J_{K}\sum_{\mu}S^{\mu}(\tau)\sigma^{\mu}\bigg{)}\bigg{]}\bigg{\rangle}_{imp}-G_{0}(\bm{r}_{0},\tau),$ (S2) where $\displaystyle h=-4A_{\phi}g^{2}\Gamma(-1+\epsilon)\sin(\frac{\epsilon\pi}{2})\approx[2\pi A_{\phi}+O(\epsilon)]g^{2}.$ (S3) $A_{\phi}$ is the normalization factor of boson propagator and $G_{0}(\bm{r}_{0},\tau)$ is the free fermion propagator. Above equations suggest the fermion and boson correlation functions can be represented as functional of $n$-point correlation functions of $S^{\mu}$ fields. Then, it’s sufficient to evaluate the correlation functions of $S^{\mu}$. To do so, we first derive the effective action of $S^{\mu}$ by integrating out fermions and bosons: $\displaystyle S_{eff}=-iSS_{B}-K\text{Tr}\log\bigg{(}\delta_{\tau,\tau^{\prime}}\delta_{k,k^{\prime}}(\partial_{\tau}+\epsilon_{\bm{k}})-\delta_{\tau,\tau^{\prime}}J_{K}\sum_{\mu}S^{\mu}(\tau)\frac{\sigma^{\mu}}{2}\bigg{)}.$ (S4) Using $S_{eff}$ and Eq. S2, we’re able to calculate fermion and boson correlation functions. The detailed calculations are shown in Supplementary Material (SM) B and the results are $\displaystyle\langle\sum_{\mu}[\phi^{\mu}(\bm{r},\tau=0)]^{2}\rangle_{imp}$ $\displaystyle\propto$ $\displaystyle\frac{1}{|\bm{r}|^{2}}h\bigg{[}1-\frac{1}{S\pi r}\int_{0}^{J}\frac{F(j)}{j}dj+\frac{1}{S\pi\epsilon}[\frac{\log(1+h/c(J)}{c(J)}+\frac{\log(1+h/c(J)^{*})}{c(J)^{*}}]\bigg{]}$ $\displaystyle\sum_{\sigma,a}\langle c_{\sigma,a}(\bm{r_{0}},\tau)c_{\sigma,a}^{\dagger}(\bm{r_{0}},\tau)\rangle_{imp}$ $\displaystyle\propto$ $\displaystyle\frac{J^{2}\text{sgn}(\tau)}{(1+J^{2})|\tau|}\bigg{[}1+\frac{-1-J^{2}}{(1+J^{2})^{2}\pi Sr}\int_{0}^{J}\frac{F(j)}{j}dj+\frac{1}{(1+J^{2})\pi Sr}\int_{0}^{J}\frac{2jF(j)}{1+j^{2}}dj$ $\displaystyle-\frac{2}{(1+J^{2})r\pi S}\int_{0}^{J}\frac{H(j)}{1+j^{2}}dj-\frac{-1+J^{2}}{(1+J^{2})^{2}\pi S\epsilon}\bigg{(}\frac{\log(1+h/c(J))}{c(J)}+\frac{\log(1+h/c(J)^{*})}{c(J)^{*}}\bigg{)}$ $\displaystyle+\frac{2iJ}{(1+J^{2})^{2}\pi\epsilon S}\bigg{(}-\frac{\log(1+h/c(J))}{c(J)}+\frac{\log(1+h/c(J)^{*})}{c(J)^{*}}\bigg{)}\bigg{]}.$ We are now in the position to carry out renormalization group study. 
We introduce the following relation between renormalized couplings $J,h$ and bare couplings $J_{B},h_{B}$: $\displaystyle h_{B}=Z_{h}\mu^{\epsilon}h$ $\displaystyle J_{B}=Z_{J}\mu^{r}J$ (S6) where $Z_{J},Z_{h}$ are coupling renormalization factor, and $\mu$ is a renormalization energy scale. Without loss of generality, we let $\displaystyle Z_{h}=1+\frac{1}{S}x_{h}$ $\displaystyle Z_{J}=1+\frac{1}{S}x_{J}.$ We determine $x_{J}$ and $x_{h}$ by requiring all the poles as a function of $r,\epsilon$ in Eq. LABEL:eq:corr are minimally removed. This leads to $\displaystyle x_{h}$ $\displaystyle=$ $\displaystyle\frac{1}{\pi r}\int_{0}^{J}\frac{F(j)}{j}dj-\frac{1}{\pi\epsilon}\bigg{[}\frac{\log(1+h/c(J))}{c(J)}+\frac{\log(1+h/c(J)^{*})}{c(J)^{*}}\bigg{]}+O(\frac{1}{S})$ $\displaystyle x_{J}$ $\displaystyle=$ $\displaystyle\frac{1}{2\pi r}\int_{0}^{J}\frac{F(j)}{j}dj-\frac{1}{\pi r}\int_{0}^{J}\frac{jF(j)}{1+j^{2}}dj+\frac{1}{r\pi}\int_{0}^{J}\frac{H(j)}{1+j^{2}}dj+\frac{-1+J^{2}}{2(1+J^{2})\pi\epsilon}\bigg{(}\frac{\log(1+h/c(J))}{c(J)}+\frac{\log(1+h/c(J)^{*})}{c(J)^{*}}\bigg{)}$ $\displaystyle-\frac{iJ}{(1+J^{2})\pi\epsilon}\bigg{(}-\frac{\log(1+h/c(J))}{c(J)}+\frac{\log(1+h/c(J)^{*})}{c(J)^{*}}\bigg{)}+O(\frac{1}{S})$ Finally, we take $\mu$ derivative of Eq. S6 at fixed $h_{B}$ and $J_{B}$ and obtain the following beta functions: $\displaystyle\beta_{J}$ $\displaystyle=$ $\displaystyle\frac{J-J^{3}}{2S\pi(1+J^{2})}F(J)+\frac{J^{2}}{S\pi(1+J^{2})}H(J)+\bigg{[}\frac{-(1-J^{2})Jh+2iJ^{2}h}{2S(1+J^{2})\pi c(J)(c(J)+h)}+\text{h.c.}\bigg{]}+o(\frac{1}{S})\,,$ $\displaystyle\beta_{h}$ $\displaystyle=$ $\displaystyle-\frac{1}{S}\tilde{\epsilon}h+\frac{h}{S\pi}F(J)-\frac{1}{S\pi}\frac{(c(J)+c(J)^{*})h}{|c(J)|^{2}}-\frac{1}{S\pi}\frac{-2h^{2}-(c(J)+c(J)^{*})h}{(c(J)+h)(c(J)^{*}+h)}+o(\frac{1}{S})\,.$ (S8) where we also set $r=0$ at the last step. Note that an alternative way to perform renormalization group study is to start from the action in Eq. LABEL:eq:action and then expand in powers of $\chi$ fields Nahum (2022). Combining the quadratic order term $S_{\chi}$ and fourth-order term $S_{4}$, we could find an interacting field theory of $\chi$: $S_{\chi}+S_{4}$. Then we are able to perform RG analysis based on this model. The resulting beta functions are expected to be consistent with current approach. We reserve such an analysis for a future study. ### .2 B. Calculations of correlation functions Here we present the detailed calculation of Eq. S2 using the effective action $S_{eff}$. We first introduce the spinor representation of $S^{\mu}$ with $z^{\dagger}\frac{\sigma^{\mu}}{2}z$ and let $z=z_{0}+\delta z$ (see Eq. 3). This gives us $\displaystyle S^{\mu}$ $\displaystyle=$ $\displaystyle S\delta_{\mu,z}+\delta S^{\mu}$ $\displaystyle\delta S^{x}$ $\displaystyle=$ $\displaystyle\frac{i\sqrt{S}}{\sqrt{2}}(\chi^{\dagger}-\chi)+O(\frac{1}{S^{1/2}})$ $\displaystyle\delta S^{y}$ $\displaystyle=$ $\displaystyle\frac{\sqrt{S}}{\sqrt{2}}(\chi^{\dagger}+\chi)+O(\frac{1}{S^{1/2}})$ $\displaystyle\delta S^{z}$ $\displaystyle=$ $\displaystyle-|\chi|^{2}+O(\frac{1}{S}).$ (S9) We expand the action in Eq. 
LABEL:eq:action to the quadratic order of $\chi$ fields: $\displaystyle S_{eff}$ $\displaystyle=$ $\displaystyle S_{0}+S_{\chi}$ $\displaystyle S_{0}$ $\displaystyle=$ $\displaystyle-\frac{Sg^{2}}{4}\int_{\tau,\tau^{\prime}}\frac{\sum_{\mu}(z_{0}^{\dagger}\sigma^{\mu}z_{0})^{2}}{2}G_{\phi}(\tau-\tau^{\prime})-K\text{Tr}\log\bigg{(}\delta_{\tau,\tau^{\prime}}\delta_{k,k^{\prime}}(\partial_{\tau^{\prime}}+\epsilon_{\bm{k}})+\delta_{\tau,\tau^{\prime}}J_{K}\sum_{\mu}(z_{0}^{\dagger}\sigma^{\mu}z_{0})\sigma^{\mu}\bigg{)},$ (S10) with $S_{0}$ the saddle point contribution at order of $S$ and $S_{\chi}$ describing fluctuations at order $S^{0}$. We now derive the formula of $S_{\chi}$. It has contribution from three part: the Berry phase, bosonic coupling and Kondo interaction. The Berry phase term gives $\displaystyle\int i\Omega|\chi(\Omega)|^{2}\frac{d\Omega}{2\pi}\,,$ (S11) while the bosonic coupling yields $\displaystyle-\frac{g^{2}}{2}\int_{\tau,\tau^{\prime}}G_{\phi}(\tau-\tau^{\prime})[\chi^{\dagger}(\tau)\chi(\tau^{\prime})+\chi(\tau)\chi^{\dagger}(\tau^{\prime})-|\chi(\tau)|^{2}-|\chi(\tau^{\prime})|^{2}]$ (S12) $\displaystyle=$ $\displaystyle\int h|\Omega|^{1-\epsilon}|\chi(i\Omega)|^{2}\frac{d\Omega}{2\pi}\,,$ and, finally, the Kondo coupling gives (by expanding the fermion determinant) $\displaystyle\frac{\kappa J_{K}^{2}}{4}\int_{\tau,\tau^{\prime}}\bigg{[}G_{s}(\tau-\tau^{\prime})G_{s}(\tau^{\prime}-\tau)-G_{a}(\tau-\tau^{\prime})G_{a}(\tau^{\prime}-\tau)\bigg{]}\bigg{[}\chi^{\dagger}(\tau)\chi(\tau^{\prime})+\chi^{\dagger}(\tau^{\prime})\chi(\tau)-|\chi(\tau)|^{2}-|\chi(\tau^{\prime})|^{2}\bigg{]}$ (S13) $\displaystyle+\frac{\kappa J_{K}^{2}}{2}\int_{\tau,\tau^{\prime}}\bigg{[}G_{s}(\tau-\tau^{\prime})G_{a}(\tau^{\prime}-\tau)\bigg{]}\bigg{[}-\chi^{\dagger}(\tau)\chi(\tau^{\prime})+\chi(\tau)\chi^{\dagger}(\tau^{\prime})\bigg{]}$ $\displaystyle=$ $\displaystyle\int\frac{2\kappa J^{2}|\Omega|^{1-2r}}{\pi(1+J^{2}|\Omega|^{-2r})^{2}}|\chi(i\Omega)|^{2}\frac{d\Omega}{2\pi}+\int\frac{i\Omega\kappa}{\pi}\bigg{[}\frac{J|\Omega|^{-r}-J^{3}|\Omega|^{-3r}}{(1+J^{2}|\Omega|^{-2r})^{2}}-\arctan(J|\Omega|^{-r})\bigg{]}|\chi(i\Omega)|^{2}\frac{d\Omega}{2\pi},$ where $G_{s}(\tau)=\sum_{\sigma}G_{\sigma}(\tau)$ and $G_{a}(\tau)=\sum_{\sigma}\sigma G_{\sigma}(\tau)$ and $G_{\sigma}(\tau)$ denotes the conduction electron Green’s function at leading order $(S^{1})$. The explicit formula of these functions are $\displaystyle G_{s}(i\omega)$ $\displaystyle=\frac{-2i\rho_{0}\pi|\omega|^{-r}}{1+\rho_{0}^{2}\pi^{2}J_{K}^{2}|\omega|^{-2r}}\text{sgn}(\omega)$ $\displaystyle G_{a}(i\omega)$ $\displaystyle=\frac{-2\rho_{0}^{2}\pi^{2}J_{K}|\omega|^{-2r}}{1+\rho_{0}^{2}\pi^{2}J_{K}^{2}|\omega|^{-2r}}.$ (S14) In addition, the coefficient before each $|\Omega|^{-r}$ should be a function of $r$. However, we only keep the leading order term ($r^{0}$) since we would set $r=0$ in the final step. Combining Eqs. 
S11, S12 and S13, we have the quadratic theory of the $\chi$ fields: $\displaystyle S_{\chi}=\int G^{-1}_{\chi}(i\Omega)|\chi(i\Omega)|^{2}\frac{d\Omega}{2\pi}$ $\displaystyle G_{\chi}^{-1}(i\Omega)=i\Omega\bigg{[}1+p(J|\Omega|^{-r})\bigg{]}+|\Omega|\bigg{[}q(J|\Omega|^{-r})+h|\Omega|^{-\epsilon}\bigg{]}.$ (S15) For future reference, we introduce the series expansion of $G_{\chi}(i\Omega)$ $\displaystyle G_{\chi}(i\Omega)$ $\displaystyle=$ $\displaystyle\frac{1}{|\Omega|}\frac{1}{i(\text{sgn}(\Omega)+p(J|\Omega|^{-r}))+q(J|\Omega|^{-r})}+\frac{1}{|\Omega|}\sum_{m=1}^{\infty}\frac{(-h)^{m}|\Omega|^{-m\epsilon}}{[i(\text{sgn}(\Omega)+p(J))+q(J)]^{m+1}}$ (S16) Since we set $r=0$ in the last step, we drop the $r$ dependence in the summation by setting $r=0$. Including this contribution won’t change the beta functions at the leading order. We can further expand the real and imaginary part of $G_{\chi}(i\Omega)$ as the following $\displaystyle G_{\chi}(i\Omega)+G_{\chi}(-i\Omega)=\sum_{n=0}^{\infty}a_{n}J^{n}|\Omega|^{-1-nr}+\sum_{m=1}^{\infty}(-h)^{m}\bigg{(}\frac{1}{(c(J)^{*})^{m+1}}+\frac{1}{c(J)^{m+1}}\bigg{)}|\Omega|^{-1-m\epsilon}$ $\displaystyle G_{\chi}(i\Omega)-G_{\chi}(i\Omega)=\sum_{n=0}^{\infty}ib_{n}J^{n}|\Omega|^{1-nr}+\sum_{m=1}^{\infty}(-h)^{m}\bigg{(}-\frac{1}{(c(J)^{*})^{m+1}}+\frac{1}{c(J)^{m+1}}\bigg{)}|\Omega|^{1-m\epsilon}$ (S17) where $a_{n},b_{n}$ are the Taylor coefficients of $F(j)$ and $H(j)$ at $j=0$ and $\displaystyle F(j)$ $\displaystyle=$ $\displaystyle\frac{2q(j)}{(1+p(j))^{2}+q(j)^{2}}$ $\displaystyle H(j)$ $\displaystyle=$ $\displaystyle\frac{-2(1+p(j))}{(1+p(j))^{2}+q(j)^{2}}$ $\displaystyle c(J)$ $\displaystyle=$ $\displaystyle i(1+p(J))+q(J).$ Now, we combine Eqs. S2, S9, S10 and S15 and calculate the correlation functions of fermions and bosons. We first expand Eq. S2 using Eq. S9. For bosons, we have $\displaystyle\langle\sum_{\mu}\phi^{\mu}(\bm{r},\tau=0)^{2}\rangle_{imp}$ $\displaystyle=$ $\displaystyle\int_{\tau,\tau^{\prime}}\frac{A_{\phi}}{2\pi}\frac{h}{(\bm{r}^{2}+\tau^{2})^{(2-\epsilon)/2}(\bm{r}^{2}+\tau^{\prime 2})^{(2-\epsilon)/2}}S^{2}$ $\displaystyle+\int_{\tau,\tau^{\prime}}\frac{A_{\phi}}{2\pi}\frac{h}{(\bm{r}^{2}+\tau^{2})^{(2-\epsilon)/2}(\bm{r}^{2}+\tau^{\prime 2})^{(2-\epsilon)/2}}\langle-S\delta S^{z}(\tau)-S\delta S^{z}(\tau^{\prime})+\delta S^{x}(\tau)\delta S^{x}(\tau^{\prime})+\delta S^{y}(\tau)\delta S^{y}(\tau^{\prime})\rangle_{S_{\chi}}+O(S^{0})$ where the last line denotes the contribution of the next-order fluctuations and are evaluated with respect to $S_{\chi}$. For the fermionic fields, it gives $\displaystyle-\langle\sum_{a,\sigma}c_{\bm{r_{0}},a,\sigma}(\tau)c_{\bm{r_{0}},a,\sigma}^{\dagger}(0)\rangle$ (S19) $\displaystyle=$ $\displaystyle K\bigg{[}G_{s}(\tau)-G_{0}(\tau)$ $\displaystyle+\int_{\tau_{1}}\sum_{\mu}\frac{J_{K}\text{Tr}[G(\tau-\tau_{1})\sigma^{\mu}G(\tau_{1})]}{S}\langle\delta S^{\mu}(\tau_{1})\rangle_{S_{\chi}}+\int_{\tau_{1},\tau_{2}}\sum_{\mu,\nu}\frac{J^{2}_{K}\text{Tr}[G(\tau-\tau_{1})\sigma^{\mu}G(\tau_{1}-\tau_{2})\sigma^{\nu}G(\tau_{2})]}{S^{2}}\langle\delta S^{\mu}(\tau_{1})\delta S^{\nu}(\tau_{2})\rangle_{S_{\chi}}\bigg{]}$ $\displaystyle+O(S^{0})$ where the third line denotes the contribution of the next-order fluctuations and are evaluated with respect to $S_{\chi}$ and $G(\tau)=\text{diag}[\frac{G_{s}(\tau)+G_{a}(\tau)}{2},\frac{G_{s}(\tau)-G_{a}(\tau)}{2}]$. 
To get the final results, we only need to calculate the correlation functions of $\delta S^{\mu}$ (or $\chi$) that appearing in the above equations. The boson correlation function becomes $\displaystyle\langle\sum_{\mu}\phi^{\mu}(\bm{r},\tau=0)^{2}\rangle_{imp}$ (S20) $\displaystyle=$ $\displaystyle\int_{\tau,\tau^{\prime}}\frac{A_{\phi}}{2\pi}\frac{h}{(\bm{r}^{2}+\tau^{2})^{(2-\epsilon)/2}(\bm{r}^{2}+\tau^{\prime 2})^{(2-\epsilon)/2}}S^{2}$ $\displaystyle\bigg{[}1+\frac{1}{S}\langle\chi^{\dagger}(\tau)\chi(\tau^{\prime})+\chi^{\dagger}(\tau^{\prime})\chi(\tau^{\prime})-|\chi(\tau)|^{2}-|\chi(\tau^{\prime})|^{2}\rangle\bigg{]}+O(S^{0})$ $\displaystyle=$ $\displaystyle\frac{A_{\phi}\pi S^{2}}{2|\bm{r}|^{2}}h+\frac{A_{\phi}Sh}{2\pi}\int_{0}^{\infty}2(G_{\chi}(\Omega)+G_{\chi}(-\Omega))\bigg{(}2\pi x^{-(1-\epsilon)}B[\frac{\epsilon-1}{2},x]^{2}|\Omega|^{2-2\epsilon}-\frac{\pi^{2}}{|\bm{r}|^{2}}\bigg{)}\frac{d\Omega}{2\pi}+O(S^{0})$ $\displaystyle=$ $\displaystyle\frac{A_{\phi}\pi S^{2}}{2|\bm{r}|^{2}}h+\frac{A_{\phi}Sh}{2\pi}\int_{0}^{\infty}2\bigg{[}\sum_{n=0}^{\infty}a_{n}J^{n}|\Omega|^{1-nr}+\sum_{m=1}^{\infty}(-h)^{m}\bigg{(}\frac{1}{(c(J)^{*})^{m+1}}+\frac{1}{c(J)^{m+1}}\bigg{)}|\Omega|^{-1-m\epsilon}\bigg{]}$ $\displaystyle\frac{1}{|\bm{r}|^{2}}\bigg{(}2\pi x^{1-\epsilon}B[\frac{\epsilon-1}{2},x]^{2}-\pi 2^{-\epsilon}\Gamma((1-\epsilon)/2)^{2}\bigg{)}\frac{d\Omega}{2\pi}+O(S^{0})$ $\displaystyle=$ $\displaystyle\frac{A_{\phi}\pi S^{2}}{2|\bm{r}|^{2}}h+\frac{A_{\phi}Sh}{\pi|\bm{r}|^{2}}\bigg{[}\frac{1}{r}\sum_{n=0}^{\infty}\frac{-a_{n}J^{n}\pi}{2n}+\frac{1}{\epsilon}\sum_{m=1}(-h)^{m}\bigg{(}\frac{1}{(c(J)^{*})^{m+1}}+\frac{1}{c(J)^{m+1}}\bigg{)}\frac{-\pi}{2n\epsilon}+O(r^{0},\epsilon^{0})\bigg{]}+O(S^{0})$ $\displaystyle=$ $\displaystyle\frac{A_{\phi}\pi S^{2}}{2|\bm{r}|^{2}}h\bigg{\\{}1-\frac{1}{S}\bigg{[}\frac{1}{\pi r}\int_{0}^{J}\frac{F(j)}{j}dj+\frac{1}{\pi\epsilon}[\frac{\log(1+h/c(J)}{c(J)}+\frac{\log(1+h/c(J)^{*})}{c(J)^{*}}]+O(\epsilon^{0},r^{0})\bigg{]}+o(\frac{1}{S})\bigg{\\}}$ where $x=|\Omega||\bm{r}|$ and $B(a,x)$ is the modified Bessel functions of the second kind. In the above expression, we only keep track of the poles as a function of $\epsilon$ and $r$. We also drop the $\frac{r}{\epsilon}$ poles which goes to zero as we set $r=0$. 
For the fermion correlation function, we first perform Fourier transformation and separate the next-order contributions into four parts $I_{1},I_{2},I_{3},I_{4}$: $\displaystyle G_{f}(i\omega)=-\int_{\tau}\langle\sum_{a,\sigma}c_{\bm{r_{0}},a,\sigma}(\tau)c_{\bm{r_{0}},a,\sigma}^{\dagger}(0)\rangle_{imp}e^{i\omega\tau}=K\bigg{[}\frac{2i\pi\rho_{0}J^{2}}{1+J^{2}}\text{sgn}(\omega)+I_{1}+I_{2}+I_{3}+I_{4}+o(\frac{1}{S})\bigg{]}$ (S21) Each part is shown below: $\displaystyle I_{1}$ $\displaystyle=$ $\displaystyle\frac{J_{K}^{2}}{4S}(G_{s}(i\omega)^{2}+G_{a}(i\omega)^{2})\int G_{\chi}(i\Omega)(G_{s}(i\omega+i\Omega)+G_{s}(i\omega-i\Omega)-G_{s}(i\omega)-G_{s}(i\omega))\frac{d\Omega}{2\pi}$ $\displaystyle=$ $\displaystyle\frac{-2i\rho_{0}\pi J^{2}(-1+J^{2})}{(1+J^{2})^{2}S}\int_{0}^{\infty}\bigg{[}\sum_{n=0}^{\infty}a_{n}J^{n}|\Omega|^{-1-nr}+\sum_{m=1}^{\infty}(-h)^{m}\bigg{(}\frac{1}{(c(J)^{*})^{m+1}}+\frac{1}{c(J)^{m+1}}\bigg{)}|\Omega|^{-1-m\epsilon}\bigg{]}$ $\displaystyle\sum_{p=0}^{\infty}(-J^{2})^{p}\bigg{[}\text{sgn}(\omega+\Omega)|\omega+\Omega|^{-2pr-r}+\text{sgn}(\omega-\Omega)|\omega-\Omega|^{-2pr-r}-2\text{sgn}(\omega)|\omega|^{-2pr-r}\bigg{]}\frac{d\Omega}{2\pi}+O(r^{0})$ $\displaystyle=$ $\displaystyle\frac{-i\rho_{0}\pi J^{2}(-1+J^{2})}{(1+J^{2})^{2}\pi S}\bigg{[}\sum_{n=0}^{\infty}a_{n}J^{n}\sum_{p=0}^{\infty}(-J^{2})^{P}\frac{-2}{nr}+\sum_{m=1}^{\infty}(-h)^{m}\bigg{(}\frac{1}{(c(J)^{*})^{m+1}}+\frac{1}{c(J)^{m+1}}\bigg{)}\frac{-2}{m\epsilon}\bigg{]}\text{sgn}(\omega)$ $\displaystyle=$ $\displaystyle\frac{2i\rho_{0}\pi J^{2}(-1+J^{2})}{(1+J^{2})^{3}\pi S}\bigg{[}\frac{1}{r}\int_{0}^{J}\frac{F(j)}{j}dj-\frac{1}{\epsilon}\bigg{(}\frac{\log(1+h/c(J))}{c(J)}+\frac{\log(1+h/c(J)^{*})}{c(J)^{*}}+O(r^{0},\epsilon^{0})\bigg{)}\bigg{]}\text{sgn}(\omega).$ $\displaystyle I_{2}$ $\displaystyle=$ $\displaystyle-\frac{J_{K}^{2}}{2S}G_{s}(i\omega)G_{a}(i\omega)\int G_{\chi}(i\Omega)(G_{a}(i\omega+i\Omega)+G_{a}(i\omega-i\Omega)-G_{a}(i\omega)-G_{a}(i\omega))\frac{d\Omega}{2\pi}$ $\displaystyle=$ $\displaystyle-\frac{2iJ^{3}(-2J\rho_{0}\pi)}{(1+J^{2})^{2}S}\text{sgn}(\omega)\int_{0}^{\infty}\bigg{[}\sum_{n=0}^{\infty}a_{n}J^{n}|\Omega|^{-1-nr}+\sum_{m=1}^{\infty}(-h)^{m}\bigg{(}\frac{1}{(c(J)^{*})^{m+1}}+\frac{1}{c(J)^{m+1}}\bigg{)}|\Omega|^{-1-m\epsilon}\bigg{]}$ $\displaystyle\sum_{p=0}^{\infty}(-J^{2})^{p}\bigg{[}|\omega+\Omega|^{-2pr-2r}+|\omega-\Omega|^{-2pr-2r}-2|\omega|^{-2pr-2r}\bigg{]}\frac{d\Omega}{2\pi}$ $\displaystyle=$ $\displaystyle-\frac{2iJ^{3}(-2J\rho_{0}\pi)}{2\pi(1+J^{2})^{2}S}\text{sgn}(\omega)\bigg{[}\sum_{n=0}^{\infty}a_{n}\sum_{p=0}^{\infty}(-J^{2})^{p}J^{n}(\frac{2}{2pr+2r+nr}-\frac{2}{nr})+O(r^{0},\epsilon^{0})\bigg{]}$ $\displaystyle=$ $\displaystyle\frac{2iJ^{4}\rho_{0}\pi}{\pi(1+J^{2})^{2}S}\text{sgn}(\omega)\bigg{[}\frac{1}{rJ^{2}}\int_{0}^{J}\frac{2jF(j)}{1+j^{2}}dj-\frac{2}{r(1+J^{2})}\int_{0}^{J}\frac{F(j)}{j}dj+O(r^{0},\epsilon^{0})\bigg{]}.$ $\displaystyle I_{3}$ $\displaystyle=$ $\displaystyle\frac{J_{K}^{2}}{4S}\bigg{[}G_{a}(i\omega)^{2}+G_{s}(i\omega)^{2}\bigg{]}\int G_{\chi}(i\Omega)[G_{a}(i\Omega+i\omega)-G_{a}(i\omega-i\Omega)]\frac{d\Omega}{2\pi}$ $\displaystyle=$ $\displaystyle\frac{J^{2}(-1+J^{2})}{2\pi(1+J^{2})^{2}S}\int_{0}^{\infty}\bigg{[}\sum_{n=0}ib_{n}J^{n}|\Omega|^{1-nr}+\sum_{m=1}(-h)^{m}\bigg{(}-\frac{1}{(c(J)^{*})^{m+1}}+\frac{1}{c(J)^{m+1}}\bigg{)}|\Omega|^{-1-m\epsilon}\bigg{]}$ $\displaystyle\bigg{[}-2J\rho_{0}\pi\sum_{p=0}^{\infty}(-J^{2})^{p}\bigg{(}|\Omega+\omega|^{-2r-2pr}-|-\Omega+\omega|^{-2r-2pr}\bigg{)}\bigg{]}d\Omega$ 
$\displaystyle=$ $\displaystyle\frac{1}{S}[0+O(r^{0},\epsilon^{0})].$ $\displaystyle I_{4}$ $\displaystyle=$ $\displaystyle\frac{-J_{K}^{2}}{2}G_{a}(i\omega)G_{s}(i\omega)\int G_{\chi}(i\Omega)\bigg{(}G_{s}(i\Omega+i\omega)-G_{s}(i\omega-i\Omega)\bigg{)}\frac{d\Omega}{2\pi}$ $\displaystyle=$ $\displaystyle\frac{-2J^{3}\rho_{0}\pi}{S(1+J^{2})^{2}}\int_{0}^{\infty}\bigg{[}\sum_{n=0}ib_{n}J^{n}|\Omega|^{1-nr}+\sum_{m=1}(-h)^{m}\bigg{(}-\frac{1}{(c(J)^{*})^{m+1}}+\frac{1}{c(J)^{m+1}}\bigg{)}|\Omega|^{-1-m\epsilon}\bigg{]}$ $\displaystyle\sum_{p=0}(-J^{2})^{p}\bigg{[}\text{sgn}(\omega+\Omega)|\omega+\Omega|^{-2pr-r}-\text{sgn}(\omega-\Omega)|\omega-\Omega|^{-2pr-r}\bigg{]}\frac{d\Omega}{\pi}$ $\displaystyle=$ $\displaystyle\frac{-2J^{3}\rho_{0}\pi}{S(1+J^{2})^{2}}\bigg{[}\sum_{n=0}ib_{n}J^{n}\sum_{p=0}(-J^{2})^{p}\frac{2}{\pi(nr+2pr+r)}+\sum_{m=1}(-h)^{m}\frac{1}{1+J^{2}}\bigg{(}-\frac{1}{(c(J)^{*})^{m+1}}+\frac{1}{c(J)^{m+1}}\bigg{)}\frac{2}{\pi m\epsilon}\bigg{]}\text{sgn}(\omega)$ $\displaystyle=$ $\displaystyle\frac{-2J^{3}\rho_{0}\pi}{(1+J^{2})^{2}}\bigg{[}\frac{2}{r\pi J}\int_{0}^{J}\frac{iH(j)}{(1+j^{2})}dj+\frac{2}{\pi\epsilon(1+J^{2})}\bigg{(}-\frac{\log(1+h/c(J))}{c(J)}+\frac{\log(1+h/c(J)^{*})}{c(J)^{*}}\bigg{)}+O(r^{0},\epsilon^{0})\bigg{]}\text{sgn}(\omega).$ In the above derivation, we only keep terms that diverge as $r\rightarrow 0$ and $\epsilon\rightarrow 0$, with $r$ going to zero first. Summing over all the contributions, we have $\displaystyle G_{f}(i\omega)$ $\displaystyle=$ $\displaystyle 2iK\pi\rho_{0}\text{sgn}(\omega)\frac{J^{2}}{1+J^{2}}\bigg{[}1+\frac{-1+J^{2}}{(1+J^{2})^{2}\pi Sr}\int_{0}^{J}\frac{F(j)}{j}dj+\frac{1}{(1+J^{2})\pi Sr}\int_{0}^{J}\frac{2jF(j)}{1+j^{2}}dj$ $\displaystyle-\frac{2J^{2}}{(1+J^{2})^{2}\pi Sr}\int_{0}^{J}\frac{F(j)}{j}dj-\frac{2}{(1+J^{2})r\pi S}\int_{0}^{J}\frac{H(j)}{1+j^{2}}dj$ $\displaystyle-\frac{-1+J^{2}}{(1+J^{2})^{2}\pi S\epsilon}\bigg{(}\frac{\log(1+h/c(J))}{c(J)}+\frac{\log(1+h/c(J)^{*})}{c(J)^{*}}\bigg{)}+\frac{2iJ}{(1+J^{2})^{2}\pi\epsilon S}\bigg{(}-\frac{\log(1+h/c(J))}{c(J)}+\frac{\log(1+h/c(J)^{*})}{c(J)^{*}}\bigg{)}\bigg{]}$ In summary, we have $\displaystyle\langle\sum_{\mu}[\phi^{\mu}(\bm{r},\tau=0)]^{2}\rangle_{imp}$ $\displaystyle\propto$ $\displaystyle\frac{1}{|\bm{r}|^{2}}h\bigg{[}1-\frac{1}{S\pi r}\int_{0}^{J}\frac{F(j)}{j}dj+\frac{1}{S\pi\epsilon}[\frac{\log(1+h/c(J)}{c(J)}+\frac{\log(1+h/c(J)^{*})}{c(J)^{*}}]\bigg{]}$ $\displaystyle\sum_{\sigma,a}\langle c_{\sigma,a}(\bm{r_{0}},\tau)c_{\sigma,a}^{\dagger}(\bm{r_{0}},\tau)\rangle_{imp}$ $\displaystyle\propto$ $\displaystyle\frac{J^{2}\text{sgn}(\tau)}{(1+J^{2})|\tau|}\bigg{[}1+\frac{-1-J^{2}}{(1+J^{2})^{2}\pi Sr}\int_{0}^{J}\frac{F(j)}{j}dj+\frac{1}{(1+J^{2})\pi Sr}\int_{0}^{J}\frac{2jF(j)}{1+j^{2}}dj$ $\displaystyle-\frac{2}{(1+J^{2})r\pi S}\int_{0}^{J}\frac{H(j)}{1+j^{2}}dj-\frac{-1+J^{2}}{(1+J^{2})^{2}\pi S\epsilon}\bigg{(}\frac{\log(1+h/c(J))}{c(J)}+\frac{\log(1+h/c(J)^{*})}{c(J)^{*}}\bigg{)}$ $\displaystyle+\frac{2iJ}{(1+J^{2})^{2}\pi\epsilon S}\bigg{(}-\frac{\log(1+h/c(J))}{c(J)}+\frac{\log(1+h/c(J)^{*})}{c(J)^{*}}\bigg{)}\bigg{]}.$ ### .3 C. Beta function analysis We numerically evaluate the beta functions and the resulting RG flows are shown in Fig. S1. Besides numerical evaluations, we analyze the beta equations in certain extreme cases. At $h=0$, the model becomes a standard Kondo impurity model. 
The beta function of $J$ is

$\displaystyle\beta_{J}\bigg{|}_{h=0}$ $\displaystyle=$ $\displaystyle\frac{J-J^{3}}{2S\pi(1+J^{2})}F(J)+\frac{J^{2}}{S\pi(1+J^{2})}H(J)$ (S23) $\displaystyle=$ $\displaystyle-\frac{2J^{2}}{S\pi}+\frac{2\kappa J^{3}}{S\pi^{2}}+O(J^{4}/S).$

Numerically, we find $J$ is always relevant at $\kappa=2$ and the system flows to the strong-coupling fixed point K at $J=\infty$. The first two terms in the second line are also consistent with the result obtained from perturbation theory, as shown in Ref. Affleck (1995) (note that we have rescaled the Kondo coupling). At $J=0$, the model becomes a Bose-Kondo model with the following beta function of $h$:

$\displaystyle\beta_{h}\bigg{|}_{J=0}=-\frac{1}{S}\tilde{\epsilon}h+\frac{2h^{2}}{S\pi(1+h^{2})}.$ (S24)

This is consistent with previous results obtained in Refs. Cuomo _et al._ (2022); Nahum (2022). It gives two stable fixed points at $h=(1-\sqrt{1-\tilde{\epsilon}^{2}})/{\tilde{\epsilon}},\infty$ (L and L′) and one critical point at $h=(1+\sqrt{1-\tilde{\epsilon}^{2}})/{\tilde{\epsilon}}$ (LC). After introducing the Kondo coupling, we find $J$ is irrelevant at both L and LC, but marginal at L′. At $h=\infty$, we have a line of fixed points (L′) where $J$ is always marginal. The beta function at large $h$ is:

$\displaystyle\beta_{J}=\frac{J-J^{3}}{\pi S(1+J^{2})}\frac{1}{h}+O(\frac{1}{h^{2}S}).$ (S25)

Clearly, $\beta_{J}\rightarrow 0$ as $h\rightarrow\infty$. At $J=\infty$, besides the K fixed point, there is an unstable fixed point KD′ at $(h_{c}=\frac{2}{\tilde{\epsilon}\pi},J=\infty)$. To see the existence of such a critical point, we give the beta function of $h$ at large $J$:

$\displaystyle\beta_{h}=-\frac{\tilde{\epsilon}h}{S}+\frac{2}{S\pi}+O(\frac{1}{SJ}).$ (S26)

It suggests that $h$ flows to $0$ ($\infty$) when $h<(>)h_{c}$ at $J\rightarrow\infty$.

Figure S1: Renormalization group flow at $\tilde{\epsilon}<\tilde{\epsilon}_{1}$ (a), $\tilde{\epsilon}_{1}<\tilde{\epsilon}<\tilde{\epsilon}_{2}$ (b) and $\tilde{\epsilon}_{2}<\tilde{\epsilon}$ (c), obtained from numerical evaluation. For the purpose of illustration, we normalize all the vectors to have the same length.

### .4 D. Topological contribution from the Kondo coupling

In this section, we will show that the Kondo coupling contributes an additional topological term to the impurity spin at sufficiently large values of $J_{K}$. For the purpose of discussion, we consider the pure Kondo model, in other words, the BFK model at $g=0$ (equivalently, $h=0$).
We then integrate out the fermionic fields, which gives the following action $\displaystyle S_{K}=-iSS_{B}-K\text{Tr}\log\bigg{(}\delta_{\tau,\tau^{\prime}}\delta_{k,k^{\prime}}(\partial_{\tau}+\epsilon_{\bm{k}})-\delta_{\tau,\tau^{\prime}}J_{K}\sum_{\mu}S^{\mu}(\tau)\frac{\sigma^{\mu}}{2}\bigg{)}.$ (S27) We now evaluate the fermion determinant contribution at sufficiently large $J_{K}$ via a Trotter decomposition $\displaystyle\exp\bigg{(}K\text{Tr}\log\bigg{(}\delta_{\tau,\tau^{\prime}}\delta_{k,k^{\prime}}(\partial_{\tau}+\epsilon_{\bm{k}})-\delta_{\tau,\tau^{\prime}}J_{K}\sum_{\mu}S^{\mu}(\tau)\frac{\sigma^{\mu}}{2}\bigg{)}\bigg{)}=\bigg{[}\lim_{N\rightarrow\infty}\text{Tr}[\prod^{N}_{i}\exp[-H(\tau_{i})d\tau]]\bigg{]}^{K}$ (S28) where $\displaystyle H(\tau_{i})=J_{K}\sum_{\mu}c_{\bm{r}_{0}}\frac{\sigma^{\mu}}{2}c_{\bm{r}_{0}}S^{\mu}(\tau_{i})+\sum_{\bm{k}\sigma}\epsilon_{\bm{k}}c_{\bm{k},\sigma}^{\dagger}c_{\bm{k},\sigma}.$ (S29) Since the $K$ orbitals are degenerate, it’s sufficient to consider one orbital and multiply the contribution by $K$. When $J_{K}$ is sufficiently large, the hopping between site $\bm{r}_{0}$ and the other sites is a high order effect ($\frac{t^{2}}{J_{K}}$). Here, we focus on the leading order contribution, and it’s sufficient to take $H(\tau_{i})=J_{K}/2\sum_{\mu}c_{\bm{r}_{0}}\sigma^{\mu}c_{\bm{r}_{0}}S^{\mu}(\tau_{i})$. The eigenstates and eigenvalues of $H(\tau_{i})$ are $\displaystyle|v_{+}(\tau_{i})\rangle=\cos(\frac{\theta(\tau_{i})}{2})c_{\bm{r_{0}},\uparrow}^{\dagger}|0\rangle+\sin(\frac{\theta(\tau_{i})}{2})e^{i\phi(\tau_{i})}c_{\bm{r_{0}},\downarrow}^{\dagger}|0\rangle$ $\displaystyle|v_{-}(\tau_{i})\rangle=-\sin(\frac{\theta(\tau_{i})}{2})c_{\bm{r_{0}},\uparrow}^{\dagger}|0\rangle+\cos(\frac{\theta(\tau_{i})}{2})e^{i\phi(\tau_{i})}c_{\bm{r_{0}},\downarrow}^{\dagger}|0\rangle$ $\displaystyle|v_{0}(\tau_{i})\rangle=|0\rangle$ $\displaystyle|v_{d}(\tau_{i})\rangle=|\uparrow\downarrow\rangle$ $\displaystyle E_{\pm}(\tau_{i})=\pm\frac{J_{K}S}{2},\quad E_{0}(\tau_{i})=0,\quad E_{d}(\tau_{i})=0$ (S30) where we let $\bm{S}(\tau_{i})=S(\sin(\theta(\tau_{i}))\cos(\phi(\tau_{i})),\sin(\theta(\tau_{i}))\sin(\phi(\tau_{i})),\cos(\theta(\tau_{i})))$. 
We have the following decomposition via the eigenvalues and eigenstates $\displaystyle e^{-H(\tau_{i})d\tau}$ $\displaystyle=$ $\displaystyle e^{-E_{+}(\tau_{i})d\tau}|v_{+}(\tau_{i})\rangle\langle v_{+}(\tau_{i})|+e^{-E_{-}(\tau_{i})d\tau}|v_{-}(\tau_{i})\rangle\langle v_{-}(\tau_{i})|$ (S31) $\displaystyle+e^{-E_{0}(\tau_{i})d\tau}|v_{0}(\tau_{i})\rangle\langle v_{0}(\tau_{i})|+e^{-E_{d}(\tau_{i})d\tau}|v_{d}(\tau_{i})\rangle\langle v_{d}(\tau_{i})|.$ Combining the above identity with a Trotter decomposition of the fermion determinant, we obtain the following leading order contribution $\displaystyle\exp(K\text{Tr}\log\bigg{(}\delta_{\tau,\tau^{\prime}}\delta_{k,k^{\prime}}(\partial_{\tau}+\epsilon_{\bm{k}})-\delta_{\tau,\tau^{\prime}}J_{K}\sum_{\mu}S^{\mu}(\tau)\frac{\sigma^{\mu}}{2}\bigg{)})$ (S32) $\displaystyle\propto$ $\displaystyle\exp\bigg{(}K\int_{\tau}\langle\partial_{\tau}v_{-}(\tau)|v_{-}(\tau)\rangle d\tau+O(\frac{1}{J_{K}})\bigg{)}$ $\displaystyle\propto$ $\displaystyle\exp\bigg{(}i\frac{K}{2}\int_{\tau}(1-\cos(\theta))\partial_{\tau}\phi d\tau+O(\frac{1}{J_{K}})\bigg{)}$ $\displaystyle=$ $\displaystyle\exp\bigg{(}-i\frac{K}{2}S_{B}+O(\frac{1}{J_{K}})\bigg{)}.$ The effective action then becomes $\displaystyle S_{K}=-i(S-\frac{K}{2})S_{B}+O(\frac{1}{J_{K}})$ (S33) which indicates the non-trivial topological contribution from the coupling to the gapless fermions. Qualitatively, the spin has been Kondo-screened, with the effective spin becoming $S-\frac{K}{2}$. For the perfect screening case that we consider, $S-\frac{K}{2}=0$, and we obtain a Kondo singlet.
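The solid-angle form of this contribution can be checked numerically. The short sketch below is ours (not part of the original derivation; the spin path is an arbitrary illustrative choice): it accumulates the discrete Berry phase of the filled level $|v_{-}(\tau)\rangle$ of Eq. (S30) along a closed path of $\bm{S}(\tau)$ and compares it, modulo $2\pi$, with $\frac{1}{2}\oint(1-\cos\theta)\,\partial_{\tau}\phi\,d\tau$, the phase entering Eq. (S32).

```python
import numpy as np

# Minimal numerical check (illustrative): the Berry phase picked up by the filled level
# |v_-(tau)> of Eq. (S30) along a closed spin path equals (1/2) * oint (1 - cos(theta)) dphi
# modulo 2*pi, i.e. half the enclosed solid angle.
def v_minus(theta, phi):
    # lower eigenstate of H(tau_i) in Eq. (S30), written in the (up, down) basis
    return np.array([-np.sin(theta / 2), np.cos(theta / 2) * np.exp(1j * phi)])

tau = np.linspace(0.0, 1.0, 4001)
theta = 1.0 + 0.3 * np.sin(2 * np.pi * tau)   # arbitrary smooth closed path on the sphere
phi = 2 * np.pi * tau                          # one winding in the azimuthal angle

states = np.array([v_minus(t, p) for t, p in zip(theta, phi)])
steps = np.einsum('ij,ij->i', states[:-1].conj(), states[1:])   # overlaps <v(i)|v(i+1)>
berry = -np.sum(np.angle(steps))                                # discrete Berry phase i * oint <v|dv>

half_solid_angle = 0.5 * np.trapz((1 - np.cos(theta)) * np.gradient(phi, tau), tau)
print(berry % (2 * np.pi), half_solid_angle % (2 * np.pi))      # agree up to discretization error
```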
# Line Spectral Estimation via Unlimited Sampling ###### Abstract Line spectral estimation (LSE) is a fundamental problem in signal processing due to its wide applications. For signals that are orders of magnitude larger than the dynamic range of the analog-to-digital (ADC) threshold, conventional ADC will clip or saturate, leading to significant information loss. The Unlimited Sensing Framework (USF) was introduced to avoid saturation through sampling of the signal modulo. Motivated by the USF, we study the LSE from modulo samples. By exploiting oversampling and the bounded spectral property, the US-LSE is proposed to recover the folding instants and perform LSE. Our numerical simulations show that for oversampling factor $\gamma\geq 10$, the US-LSE is more stable in a lower signal to noise scenario, in the range of $20\sim 30$ dB, compared to the existing algorithm. Besides, we process the real data generated by AWR1642, and show that US-LSE estimates the ranges of two corners with SNRs of $12$ dB and $23$ dB for oversampling factor $\gamma=10$, and the normalized dynamic range ${\rm DR}=\beta_{g}/\lambda\approx 3$, where $\beta_{g}$ is the infinity norm of the signal. keywords: Unlimited sampling, orthogonal matching pursuit, line spectral estimation, modulo samples ## 1 Introduction Line spectral estimation (LSE) aims to estimate a superposition of several complex exponentials, which arises in several areas of science and engineering fields. Many methods such as the fast Fourier transform (FFT), the subspace based method and the compressed sensing based method have been proposed to do so [1, 2, 3]. These techniques rely on the assumption that the signals’ dynamic range is within that of the analog-to-digital converters (ADCs). In practice, however, a signal’s amplitude may be unknown and much larger than the dynamic range of the ADC. This results in clipping and leads to a new form of information loss. In [4], the Unlimited Sampling Framework (USF) was proposed for signal sensing and recovery. By applying a modulo operation to the signal before the ADC, the saturation problem was avoided. However, the amplitudes were folded with values in the range $[-\lambda,\lambda]$, where $\lambda>0$ is the ADC threshold. It has been proven that when the sampling rate is just slightly above the Nyquist rate, a finite energy bandlimited function is identifiable by its modulo samples. In addition, a sufficient condition in which the oversampling rate is greater than $2\pi e$ was provided for recovery. The proposed recovery algorithm is sensitive to noise due to the use of higher- order differences. The USF was then further extended to different problems and signal models such as LSE, DOA estimation, wavelets, mmWave FMCW radar, finite-rate-of-innovation signals, multi-dimensional signals, quantized measurements, sparse vector recovery, computed tomography, and graph signals, etc.[5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]. In [21], a periodic function reconstruction was studied. Instead of using higher-order differences in the time domain for recovery [4], a first-order difference was adopted and a new Fourier domain recovery algorithm was proposed. This approach makes the reconstruction more stable and its performance was validated both in theory and via extensive experiments on their prototype ADC. In [22], exploiting the fact that bandlimited signals are time-limited, a robust algorithm was proposed to recover the signal from the modulo samples in the frequency domain. 
It was numerically shown that the proposed method has lower error compared to existing approaches for a given sampling rate, noise level, and dynamic range of the ADC. LSE in the noiseless scenario was studied in [5]. Utilizing the general recovery method [4] for bandlimited signals, the original signal was recovered, and exponential fitting methods were used to perform LSE. In contrast, we utilize the fact that both a first-order difference of the simple function [21] and the line spectra in the frequency domain are sparse. By exploiting oversampling, the first-order difference of the modulo samples has spectral beyond bandwidth, which we used to recover the first-order difference of the simple function via the orthogonal matching pursuit (OMP). In this work, we address the LSE estimation via USF. By restricting the frequency on the grid, the measurement model relating to the amplitudes of frequencies, the modulo samples and the simple function is established. OMP is then employed to recover the first-order difference of the simple function, which is further used to recover the spectrum. Numerical simulations involving real data acquired by mmWave FMCW radar are conducted to demonstrate the performance of the proposed algorithm, compared with the existing approach. ## 2 Problem Setup and Recovery Algorithm Consider the continuous line spectral estimation problem $\displaystyle g(t)=x(t)+w(t),$ (1) where $x(t)=\sum\limits_{k=1}^{K}a_{k}{\rm e}^{{\rm j}2\pi f_{k}t}$, $f_{k}$ denotes the frequency of the $k$th signal and $0\leq f_{k}\leq f_{\rm max}$, $t\in{\mathbb{R}}$, $w(t)$ is the noise. Obviously, $f_{\rm max}$ is the Nyquist frequency, i.e., $f_{\rm Nyq}=f_{\rm max}$. Let $\gamma$ denote the oversampling factor with respect to the Nyquist frequency $f_{\rm Nyq}$. The sampling frequency $f_{\rm s}$ is $\displaystyle f_{\rm s}=\gamma f_{\rm Nyq},$ (2) and the sampling interval is $T_{\rm s}=1/f_{\rm s}$. Uniform unlimited sampling of $g(t)$ yields $\displaystyle y_{n}\triangleq\mathscr{M}_{\lambda}\left(\Re\left\\{g(nT_{\rm s})\right\\}\right)+{\rm j}\mathscr{M}_{\lambda}\left(\Im\left\\{g(nT_{\rm s})\right\\}\right),$ (3) $n=0,1,\cdots,N-1$, where $\mathscr{M}_{\lambda}\left(t\right)$ is the module mapping defined as $\displaystyle\mathscr{M}_{\lambda}\left(t\right)\triangleq 2\lambda\left(\left\llbracket\frac{t}{2\lambda}+\frac{1}{2}\right\rrbracket-\frac{1}{2}\right),$ (4) $\llbracket t\rrbracket\stackrel{{\scriptstyle\text{ def }}}{{=}}t-\lfloor t\rfloor$ denotes the fractional part of $t$, $\lambda$ denotes the ADC threshold. Note that the number of samples is $N$. $g_{n}=g(nT_{\rm s})$ can be written as $\displaystyle g_{n}=\frac{1}{\sqrt{N}}\sum\limits_{k=1}^{K}c_{k}{\rm e}^{{\rm j}\omega_{k}n}+w_{n},$ (5) where $w_{n}\triangleq w(nT_{\rm s})$ denotes the noise sequence, $\omega_{k}$ is $\displaystyle\omega_{k}=\frac{2\pi}{\gamma}\times\frac{f_{k}}{f_{\rm max}}\in[0,\frac{1}{\gamma}\times 2\pi].$ (6) LSE aims to recover $\\{(c_{k},\omega_{k})\\}_{k=1}^{K}$ from ${\mathbf{y}}=[y_{0},\cdots,y_{N-1}]^{\rm{T}}$. The proposed US-LSE algorithm is motivated by several facts and are presented below: * • Fact 1 [4]: The original samples can be decomposed as the sum of module samples and a simple function, i.e., $g_{n}=y_{n}+\epsilon_{g,n}$, and the first-order finite difference samples also applies. * • Fact 2 [21]: The first-order finite difference of samples of the simple function are sparse. Besides, their values lie on the grid $2\lambda{\mathbb{Z}}$. 
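As a concrete illustration of the sampling model (3)-(6) and of Facts 1-2, the following short sketch is ours (not the authors' code), using simulation-like parameters $N=256$, $K=3$, $\gamma=10$, $\lambda=0.06$: it generates modulo samples of an on-grid, noiseless line-spectral signal and verifies that the residual $\epsilon_{g,n}=g_{n}-y_{n}$ indeed takes values on the grid $2\lambda\mathbb{Z}$.

```python
import numpy as np

# Illustrative sketch (not the authors' code): the centered modulo map of Eq. (4) and
# modulo sampling of an on-grid, noiseless line-spectral signal as in Eqs. (3)-(6).
def modulo(t, lam):
    # M_lambda(t) = 2*lam*(frac(t/(2*lam) + 1/2) - 1/2), applied elementwise
    return 2 * lam * (np.mod(t / (2 * lam) + 0.5, 1.0) - 0.5)

rng = np.random.default_rng(0)
N, K, gamma, lam = 256, 3, 10, 0.06                  # samples, tones, oversampling factor, ADC threshold
idx = rng.choice(np.arange(1, N // gamma), size=K, replace=False)
omega = 2 * np.pi * idx / N                           # on-grid frequencies, cf. Eq. (6)
c = np.exp(1j * 2 * np.pi * rng.random(K))            # unit amplitudes, random phases

n = np.arange(N)
g = (np.exp(1j * np.outer(n, omega)) @ c) / np.sqrt(N)      # noiseless samples g_n, Eq. (5)
y = modulo(g.real, lam) + 1j * modulo(g.imag, lam)           # modulo samples y_n, Eq. (3)
eps_g = g - y                                                # "simple function" of Fact 1
print(np.allclose(np.round(eps_g.real / (2 * lam)) * 2 * lam, eps_g.real))   # True: values on 2*lam*Z
```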
We make an assumption that the frequencies lie on the grid exactly, i.e., $\omega_{k}\in\\{0,\frac{2\pi}{N},\frac{2\pi}{N}\times 2,\cdots,\frac{2\pi}{N}\times(\lfloor\frac{N}{\gamma}\rfloor-1)\\}$. Define the discrete Fourier transform (DFT) matrix ${\mathbf{F}}\in{\mathbb{C}}^{N\times N}$ with the $(n_{1},n_{2})$ elements being $\frac{1}{\sqrt{N}}{\rm e}^{-{\rm j}\frac{2\pi(n_{1}-1)(n_{2}-1)}{N}}$, $n_{1}=1,2,\cdots,N$ and $n_{2}=1,2,\cdots,N$. Obviously, ${\mathbf{F}}{\mathbf{F}}^{\rm H}={\mathbf{F}}^{\rm H}{\mathbf{F}}={\mathbf{I}}_{N}$. Consequently, $\mathbf{g}$ (5) can be formulated as $\displaystyle{\mathbf{g}}={\mathbf{F}}^{\rm H}\bar{\mathbf{c}}+{\mathbf{w}},$ (7) where $\bar{\mathbf{c}}\in{\mathbb{C}}^{N}$, $\bar{c}_{i}=0$, $i=\lfloor N/\gamma\rfloor,\lfloor N/\gamma\rfloor+1,\cdots,N$, $\|\bar{\mathbf{c}}\|_{0}=K$. This implies that the support set of $\bar{\mathbf{c}}$ is a subset of set $\\{0,1,2,\cdots,\lfloor N/{\gamma}\rfloor-1\\}$. Note that $\displaystyle{\mathbf{g}}={\mathbf{y}}+{\bm{\epsilon}}_{g}.$ (8) Define the cyclic first-order finite difference $\Delta{\mathbf{g}}\in{\mathbb{C}}^{N}$ with the $k$th element being $\displaystyle\Delta g[k]$ $\displaystyle=g_{k+1}-g_{k},k=0,1,\cdots,N-2,$ (9) $\displaystyle\Delta g[N-1]$ $\displaystyle=g_{0}-g_{N-1},$ (10) or in matrix form $\displaystyle\Delta{\mathbf{g}}=({\mathbf{J}}-{\mathbf{I}})\mathbf{g}=({\mathbf{J}}-{\mathbf{I}})({\mathbf{F}}^{\rm H}\bar{\mathbf{c}}+\mathbf{w}),$ (11) where ${\mathbf{J}}\in{\mathbb{C}}^{N\times N}$ is a selection matrix which shifts the row of the identity matrix ${\mathbf{I}}_{N}$ up in one dimension cyclicly. Applying cyclic first-order finite difference to (8) yields $\displaystyle\Delta{\mathbf{g}}=\Delta{\mathbf{y}}+\Delta{\bm{\epsilon}}_{g}.$ (12) Performing DFT on (12) yields $\displaystyle{\mathbf{F}}\Delta{\mathbf{g}}={\mathbf{F}}\Delta{\mathbf{y}}+{\mathbf{F}}\Delta{\bm{\epsilon}}_{g}.$ (13) According to ${\mathbf{F}}\Delta{\mathbf{g}}={\mathbf{F}}({\mathbf{J}}-{\mathbf{I}})({\mathbf{F}}^{\rm H}\bar{\mathbf{c}}+{\mathbf{w}})={\mathbf{d}}\odot(\bar{\mathbf{c}}+\bar{\mathbf{w}})$, where ${\mathbf{d}}=[1,{\rm e}^{{\rm j}\frac{2\pi}{N}},\cdots,{\rm e}^{{\rm j}\frac{2\pi(N-1)}{N}}]^{\rm T}-{\mathbf{1}}$, and $\bar{\mathbf{w}}=\mathbf{F}\mathbf{w}$, one has $\displaystyle{\mathbf{d}}\odot(\bar{\mathbf{c}}+\bar{\mathbf{w}})=\Delta\tilde{\mathbf{y}}+{\mathbf{F}}\Delta{\bm{\epsilon}}_{g},$ (14) where $\Delta\tilde{\mathbf{y}}\triangleq{\mathbf{F}}\Delta{\mathbf{y}}$. According to Fact $2$, $\Delta{\bm{\epsilon}}_{g}$ is a $M$ sparse vector and $\bar{\mathbf{c}}$ is a $K$ sparse vector. We use least squares to formulate (14) as the following optimization problem $\displaystyle\underset{\bar{\mathbf{c}},\Delta{\bm{\epsilon}}_{g}}{\operatorname{minimize}}~{}\|\Delta\tilde{\mathbf{y}}+{\mathbf{F}}\Delta{\bm{\epsilon}}_{g}-\mathbf{d}\odot\bar{\mathbf{c}}\|_{2}^{2},$ (15a) $\displaystyle{\rm subject~{}to}~{}\|\bar{\mathbf{c}}\|_{0}\leq K,\|\Delta{\bm{\epsilon}}_{g}\|_{0}\leq M,\Delta{\bm{\epsilon}}_{g}\in 2\lambda{\mathbb{Z}}.$ (15b) Note that $\bar{\mathbf{c}}_{i}=0$, for $i=M+1,M+2,M+3,\cdots,N$. Denote $\Delta\tilde{\mathbf{y}}_{\rm sub}$ as a subvector of $\Delta\tilde{\mathbf{y}}$ with index chosen from $M+1$ to $N$, ${\mathbf{F}}_{\rm sub}$ denotes a submatrix chosen from the rows of ${\mathbf{F}}$ with index chosen from $M+1$ to $N$. 
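The key algebraic step behind Eq. (14), namely that the cyclic first-order difference is diagonalized by the DFT, ${\mathbf{F}}({\mathbf{J}}-{\mathbf{I}}){\mathbf{F}}^{\rm H}={\rm diag}({\mathbf{d}})$, is easy to verify numerically. The few lines below are an illustrative check of ours, using a small $N$ for speed.

```python
import numpy as np

# Quick check (illustrative) that F (J - I) F^H = diag(d), the step behind Eq. (14),
# with d_k = exp(j*2*pi*k/N) - 1 and J the cyclic upward row shift.
N = 16
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)   # DFT matrix as defined in the text
J = np.roll(np.eye(N), -1, axis=0)                           # (J g)[k] = g[k+1], cyclically
d = np.exp(2j * np.pi * n / N) - 1.0
print(np.allclose(F @ (J - np.eye(N)) @ F.conj().T, np.diag(d)))   # True
```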
We then solve the subproblem $\displaystyle\underset{\Delta{\bm{\epsilon}}_{g}}{\operatorname{minimize}}~{}\|\Delta\tilde{\mathbf{y}}_{\rm sub}+{\mathbf{F}}_{\rm sub}\Delta{\bm{\epsilon}}_{g}\|_{2}^{2},$ (16a) $\displaystyle{\rm subject~{}to}~{}\|\Delta{\bm{\epsilon}}_{g}\|_{0}\leq P$ (16b) via OMP. A sufficient condition for recovering $\Delta{\bm{\epsilon}}_{g}$ is that ${\rm spark}({\mathbf{F}}_{\rm sub})>2P$ [23, Corollary 1], where ${\rm spark}({\mathbf{A}})$ denotes the smallest number of columns of $\mathbf{A}$ that are linearly dependent. Obviously, ${\rm spark}({\mathbf{F}}_{\rm sub})\in[2,N-M]$. Therefore, $N-M=N(1-\frac{1}{\gamma})\geq 2P$ should be satisfied. This implies that the oversampling factor should satisfy $\displaystyle{\gamma}\geq\frac{1}{1-2P/N}.$ (17) Once $\Delta\bm{\epsilon}_{g}$ is recovered, $\bar{\mathbf{c}}$ can be estimated via (15). Overall, the US-LSE is summarized in Algorithm 1. Algorithm 1 LSE via Unlimited Sampling (US-LSE) 1: Input: Modulo samples ${\mathbf{y}}\in{\mathbb{C}}^{N}$, number of sinusoids $K$, and the number of folding instants $P$. 2: Compute $\Delta{\mathbf{y}}$ and $\Delta\tilde{\mathbf{y}}\triangleq{\mathbf{F}}\Delta{\mathbf{y}}$. 3: Solve (16) to obtain $\widehat{\Delta{\bm{\epsilon}}}_{g}$. 4: Round $\widehat{\Delta{\bm{\epsilon}}}_{g}$ to the nearest grid $2\lambda{\mathbb{Z}}$. 5: Solve (15) and return $\widehat{\bar{\mathbf{c}}}$. ## 3 Numerical Simulation In this section, numerical experiments are conducted to verify the performance of the proposed US-LSE. The parameters are set as follows: $N=256$, the number of frequencies is $K=3$, the frequencies are randomly generated from $\\{\frac{2\pi}{N},2\frac{2\pi}{N},\cdots,\lfloor\frac{N}{\gamma}\rfloor\frac{2\pi}{N}\\}$, $\lambda=0.06$, the amplitudes are identical, and the phases are randomly drawn from $[0,2\pi)$. The SNR is defined as $\rm{SNR}=\|\mathbf{x}\|^{2}_{2}/\mathbb{E}[\|\mathbf{w}\|^{2}_{2}]$, where $\mathbf{x}$ is the noise-free signal and $\mathbf{w}$ is the noise. The normalized mean square error (NMSE) is $\rm{NMSE}(\widehat{\mathbf{x}})=\|\widehat{\mathbf{x}}-{\mathbf{x}}\|^{2}_{2}/\|{\mathbf{x}}\|^{2}_{2}$, where $\widehat{\mathbf{x}}$ denotes the estimate of the algorithm. The US-LSE is compared with the existing method, named USE, that recovers the spectrum from modulo-folded samples [5, 4]. We declare that an algorithm has successfully recovered the true signal $\mathbf{x}$ when $\rm{NMSE}(\widehat{\mathbf{x}})<-15$ dB. All results are averaged over $300$ Monte Carlo (MC) trials. The probabilities of successful recovery of the USE and US-LSE algorithms versus the SNR and the oversampling factor $\gamma$ are shown in Fig. 1. For a fixed oversampling factor $\gamma\leq 10$, the probability of successful recovery of USE is lower than that of US-LSE when $\rm{SNR}\leq 25$ dB, and higher when $\rm{SNR}\geq 30$ dB. This demonstrates that in the low-oversampling regime ($\gamma\leq 10$), US-LSE is more stable at lower SNR ($\rm{SNR}\leq 25$ dB), while USE performs better at higher SNR ($\rm{SNR}\geq 30$ dB). The poor performance of USE at low SNR is due to its higher-order difference operator, which amplifies the noise. For $\gamma\geq 12$, the behavior at low SNR is similar to that for $\gamma\leq 10$, while both algorithms successfully recover the true signal at high SNR. US-LSE thus benefits more from oversampling, since oversampling increases the number of effective observations in $\Delta\tilde{\mathbf{y}}_{\rm sub}$, so that $\Delta{\bm{\epsilon}}_{g}$ can be recovered more easily. Fig. 1: The probability of successful recovery versus SNR and oversampling factor $\gamma$. 
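Before turning to the real-data experiments, a compact rendering of Algorithm 1 may be useful. The code below is our illustrative sketch, not the authors' implementation: it uses a plain OMP routine for the subproblem (16), excludes the DC bin (which the cyclic difference annihilates, consistent with the simulation grid above), and all variable names are our own choices.

```python
import numpy as np

# Minimal sketch of Algorithm 1 (US-LSE) under the on-grid assumptions of the text.
# Inputs: modulo samples y, number of tones K, number of folding instants P,
# ADC threshold lam, oversampling factor gamma.
def omp(A, b, sparsity):
    # plain orthogonal matching pursuit for  min ||b - A x||_2  s.t.  ||x||_0 <= sparsity
    residual, support = b.copy(), []
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        sol, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ sol
    x[support] = sol
    return x

def us_lse(y, K, P, lam, gamma):
    N = len(y)
    n = np.arange(N)
    F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
    d = np.exp(2j * np.pi * n / N) - 1.0
    dy_tilde = F @ (np.roll(y, -1) - y)              # step 2: DFT of the cyclic difference of y
    out_band = np.arange(N // gamma, N)              # rows where the spectrum of g vanishes
    # step 3: recover Delta eps_g from Eq. (16) via OMP (fit F_sub * Delta eps_g ~ -Delta y_tilde_sub)
    d_eps = omp(F[out_band, :], -dy_tilde[out_band], P)
    # step 4: round to the grid 2*lam*Z, real and imaginary parts separately
    d_eps = 2 * lam * (np.round(d_eps.real / (2 * lam)) + 1j * np.round(d_eps.imag / (2 * lam)))
    # step 5: with Delta eps_g fixed, Eq. (15) gives the in-band bins; keep the K largest
    in_band = np.arange(1, N // gamma)               # DC excluded (annihilated by the difference)
    ratios = (dy_tilde + F @ d_eps)[in_band] / d[in_band]
    keep = np.argsort(np.abs(ratios))[-K:]
    c_bar = np.zeros(N, dtype=complex)
    c_bar[in_band[keep]] = ratios[keep]
    return c_bar, d_eps
```

Keeping only the $K$ largest in-band bins in the last step mirrors the $K$-sparsity constraint in (15); the DC bin cannot be recovered from the differenced data and is therefore left out of this sketch.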
## 4 Empirical Experiments Empirical experiments are conducted to demonstrate the performance of US-LSE by using an AWR1642 radar. The sampling frequency is $f_{s}=5$ MHz. The chirp rate is $\kappa=15\times 10^{12}$ Hz/s. The maximum distance is $r_{\rm{max}}=cf_{s}/(2\kappa)=50$ m. Provided that the distances of the targets are less than $5$ m, which is the case in our experiments, the range measurement model can be formulated as an oversampled line spectral estimation model with $\gamma=10$. The USF measurements are generated in software from the original measurements. The number of measurements is $N=256$, the number of targets is $K=2$, and $M=25$. The maximum amplitude of the inphase and quadrature components is about $240$, and the threshold is $\lambda=80$. We conduct two experiments, as shown in Fig. 2. We run the Newtonized OMP (NOMP) on the original measurements and use its range estimates as the ground truth [3]. Note that US-LSE is an on-grid approach and the grid spacing is $r_{\rm{max}}/N\approx 0.195$ m. For scenario 1, the radial distances of the two corners, named corner 1 and corner 2, are $1.421$ m and $2.171$ m, respectively. The grid points closest to corner 1 and corner 2 are $1.367$ m and $2.148$ m, and the off-grid distances are $0.054$ m and $0.023$ m. For scenario 2, the radial distances of corner 1 and corner 2 are $1.421$ m and $2.419$ m, respectively. The grid points closest to corner 1 and corner 2 are $1.367$ m and $2.344$ m, and the off-grid distances are $0.054$ m and $0.075$ m. We estimate the SNRs of the two targets from the spectrum of the original data: they are $11.5$ dB and $24.3$ dB for corner 1 and corner 2 in scenario 1, and $14.0$ dB and $22.0$ dB for corner 1 and corner 2 in scenario 2. For scenario 1, we apply US-LSE to perform LSE. Note that the number of folds of the modulo non-linearity is $P=85$, which satisfies (17) since $1/(1-2P/N)\approx 2.98<\gamma=10$. Fig. 3 shows the recovery results of US-LSE and USE. USE fails to recover the inphase component but recovers the quadrature component, while US-LSE successfully recovers both the inphase and quadrature components. For scenario 2, Fig. 4 shows that USE fails to recover both the inphase and quadrature components, while US-LSE still recovers the signal. Once $\Delta\bm{\epsilon}_{g}$ is estimated, we solve (15) to obtain $\tilde{\bar{\mathbf{c}}}$. The results are shown in Fig. 5. For scenario 1, the recovery results of US-LSE match the ground truth quite well. For scenario 2, the amplitude of corner 1 is consistent with the ground truth, while the amplitudes of the two bins closest to the distance of corner 2 are large, due to the significant off-grid effect of corner 2. Fig. 2: Field experiments setup. (a) The radial distances of corner $1$ and corner $2$ are $1.42$ m and $2.17$ m, respectively. (b) The radial distances of corner $1$ and corner $2$ are $1.42$ m and $2.42$ m, respectively. Fig. 3: The recovery results of scenario $1$. (a) The inphase component, (b) the recovered inphase component, (c) the quadrature component, (d) the recovered quadrature component. Fig. 4: The recovery results of scenario $2$. (a) The inphase component, (b) the recovered inphase component, (c) the quadrature component, (d) the recovered quadrature component. Fig. 
5: The estimated $\tilde{\bar{\mathbf{c}}}$ for scenario $1$ and scenario $2$. ## 5 Conclusion In this paper, the LSE is studied in the context of USF. The US-LSE is proposed, which exploits the oversampling and the bandlimited property of the line spectrum to recover the folding instants and the line spectral. The robustness of US-LSE compared to the existing approach is demonstrated through numerical experiments. Besides, we also evaluate the performance of US-LSE by processing the data acquired by mmWave radar. It is shown that the US-LSE successfully recovers the LSE with the oversampling factor $\gamma=10$. Several relevant future concerns are worth noting. For example, we recover the first-order difference of the simple functions by solving a relaxed problem. It may further improve the robustness and performance of the proposed approach if one could jointly estimate the frequencies and the first-order difference of the simple functions. ## References * [1] P. Stoica, R. L. Moses, et al., Spectral analysis of signals, vol. 452. Pearson Prentice Hall Upper Saddle River, NJ, 2005. * [2] T. L. Hansen, B. H. Fleury, and B. D. Rao, “Superfast line spectral estimation,” IEEE Transactions on Signal Processing, vol. 66, no. 10, pp. 2511–2526, 2018. * [3] B. Mamandipoor, D. Ramasamy, and U. Madhow, “Newtonized orthogonal matching pursuit: Frequency estimation over the continuum,” IEEE Transactions on Signal Processing, vol. 64, no. 19, pp. 5066–5081, 2016. * [4] A. Bhandari, F. Krahmer, and R. Raskar, “On unlimited sampling and reconstruction,” IEEE Transactions on Signal Processing, vol. 69, pp. 3827–3839, 2020. * [5] A. Bhandari, F. Krahmer, and R. Raskar, “Unlimited sampling of sparse sinusoidal mixtures,” in 2018 IEEE International Symposium on Information Theory (ISIT), pp. 336–340, IEEE, 2018. * [6] S. Fernández-Menduina, F. Krahmer, G. Leus, and A. Bhandari, “Doa estimation via unlimited sensing,” in 2020 28th European Signal Processing Conference (EUSIPCO), pp. 1866–1870, IEEE, 2021. * [7] S. Fernández-Menduiña, F. Krahmer, G. Leus, and A. Bhandari, “Computational array signal processing via modulo non-linearities,” IEEE Transactions on Signal Processing, vol. 70, pp. 2168–2179, 2022. * [8] S. Rudresh, A. Adiga, B. A. Shenoy, and C. S. Seelamantula, “Wavelet-based reconstruction for unlimited sampling,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4584–4588, IEEE, 2018. * [9] T. Feuillen, M. Alaee-Kerahroodi, A. Bhandari, B. Ottersten, et al., “Unlimited sampling for fmcw radars: A proof of concept,” in 2022 IEEE Radar Conference (RadarConf22), pp. 1–5, IEEE, 2022. * [10] V. Bouis, F. Krahmer, and A. Bhandari, “Multidimensional unlimited sampling: A geometrical perspective,” in 2020 28th European Signal Processing Conference (EUSIPCO), pp. 2314–2318, IEEE, 2021. * [11] O. Graf, A. Bhandari, and F. Krahmer, “One-bit unlimited sampling,” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5102–5106, IEEE, 2019. * [12] A. Bhandari, O. Graf, F. Krahmer, and A. I. Zayed, “One-bit sampling in fractional fourier domain,” in ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 9140–9144, IEEE, 2020. * [13] N. I. Bernardo, J. Zhu, Y. C. Eldar, and J. Evans, “Design and analysis of hardware-limited non-uniform task-based quantizers,” arXiv preprint arXiv:2208.07525, 2022. * [14] A. 
Bhandari, “Back in the us-sr: Unlimited sampling and sparse super-resolution with its hardware validation,” IEEE Signal Processing Letters, vol. 29, pp. 1047–1051, 2022. * [15] A. Bhandari and F. Krahmer, “Hdr imaging from quantization noise,” in 2020 IEEE International Conference on Image Processing (ICIP), pp. 101–105, IEEE, 2020. * [16] D. Florescu, F. Krahmer, and A. Bhandari, “The surprising benefits of hysteresis in unlimited sampling: Theory, algorithms and experiments,” IEEE Transactions on Signal Processing, vol. 70, pp. 616–630, 2022. * [17] D. Florescu and A. Bhandari, “Unlimited sampling with local averages,” in ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5742–5746, IEEE, 2022. * [18] D. Florescu and A. Bhandari, “Unlimited sampling via generalized thresholding,” in 2022 IEEE International Symposium on Information Theory (ISIT), pp. 1606–1611, IEEE, 2022. * [19] O. Musa, P. Jung, and N. Goertz, “Generalized approximate message passing for unlimited sampling of sparse signals,” in 2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP), pp. 336–340, IEEE, 2018. * [20] G. Lu and H. Liu, “High dynamic range sensing using multi-channel modulo samplers,” in 2020 IEEE 11th Sensor Array and Multichannel Signal Processing Workshop (SAM), pp. 1–5, IEEE, 2020. * [21] A. Bhandari, F. Krahmer, and T. Poskitt, “Unlimited sampling from theory to practice: Fourier-prony recovery and prototype adc,” IEEE Transactions on Signal Processing, vol. 70, pp. 1131–1141, 2021. * [22] E. Azar, S. Mulleti, and Y. C. Eldar, “Residual recovery algorithm for modulo sampling,” in ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5722–5726, IEEE, 2022\. * [23] D. L. Donoho and M. Elad, “Optimally sparse representation in general nonorthogonal dictionaries via l1 minimization,” Proceedings of the National Academy of Sciences, vol. 100, no. 5, pp. 2197–2202, 2003.
# Kinetic theory of acoustic-like modes in nonextensive pair plasmas E. Saberian11affiliation: Visiting Researcher in Department of Physics, Faculty of Basic Sciences, Azarbaijan Shahid Madani University, P.O.Box: 53714-161, Tabriz, Iran<EMAIL_ADDRESS>Department of Physics, Faculty of Basic Sciences, University of Neyshabur, P.O.Box: 91136-599, Neyshabur, Iran A. Esfandyari-Kalejahi Department of Physics, Faculty of Basic Sciences, Azarbaijan Shahid Madani University, P.O.Box: 53714-161, Tabriz, Iran<EMAIL_ADDRESS> ###### Abstract The low-frequency acoustic-like modes in a pair plasma (electron-positron or pair-ion) is studied by employing a kinetic theory model based on the Vlasov and Poisson’s equation with emphasizing the Tsallis’s nonextensive statistics. The possibility of the acoustic-like modes and their properties in both fully symmetric and temperature-asymmetric cases are examined by studying the dispersion relation, Landau damping and instability of modes. The resultant dispersion relation in this study is compatible with the acoustic branch of the experimental data [W. Oohara, D. Date, and R. Hatakeyama, Phys. Rev. Lett. 95, 175003 (2005)], in which the electrostatic waves have been examined in a pure pair-ion plasma. Particularly, our study reveals that the occurrence of growing or damped acoustic-like modes depends strongly on the nonextensivity of the system as a measure for describing the long-range Coulombic interactions and correlations in the plasma. The mechanism that leads to the unstable modes lies in the heart of the nonextensive formalism yet, the mechanism of damping is the same developed by Landau. Furthermore, the solutions of acoustic-like waves in an equilibrium Maxwellian pair plasma are recovered in the extensive limit ($q\rightarrow 1$), where the acoustic modes have only the Landau damping and no growth. Pair plasmas: Kinetic theory of plasma waves: Waves, oscillations, and instabilities in plasmas ## 1 Introduction Studying the pair plasmas has been an important challenge for many plasma physicists in two past decades. As we know, the difference between the electron and ion masses in an ordinary electron-ion plasma (in general, multi- component plasma with both light and heavy particles) gives rise to different time-space scales which are used to simplify the analysis of low- and high- frequency modes. Such time-space parity disappears when studying a pure pair plasma which consisting of only positive- and negative-charged particles with an equal mass, because the mobility of the particles in the electromagnetic fields is the same. Pair plasmas consisting of electrons and positrons have attracted an especial of interest because of their significant applications in astrophysics. In fact, electron-positron plasmas play an important role in the physics of a number of astrophysical situations such as active galactic nuclei (Begelman et al., 1984; Miller & Witta, 1987), pulsar and neutron star magnetosphere (Goldreich & Julian, 1969; Max & Perkins, 1972; Michel, 1982), solar atmosphere (Tandberg-Hansen & Emslie, 1988), accretion disk (Orsoz et al., 1997), black holes (Daniel & Tajima, 1998), the early universe (Misner et al., 1973; Gibbons et al., 1983) and many others. For example, the detection of circularly polarized radio emission from the jets of the archtypal quasar 3C297, indicates that electron-positron pairs are an important component of the jet plasma (Wardle et al., 1998). 
Similar detections in other radio sources suggest that, in general, extragalactic radio jets are composed mainly of an electron-positron plasma (Wardle et al., 1998). Furthermore, it has been suggested that the creation of electron-positron plasma in pulsars is essentially by energetic collisions between particles which are accelerated as a result of electric and magnetic fields in such systems (Sturrock, 1971; Michel, 1982, 1991). On the other hand, the successful achievements for creation of the electron-positron plasmas in laboratories have been frequently reported in the scientific literatures (Gibson et al., 1960; Gahn et al., 2000; Pedersen et al., 2003; Helander & Ward, 2003; Amoretti et al., 2003; Pedersen et al., 2004; Chen et al., 2009). In this regard, many authors have concentrated on the relativistic electron-positron plasmas (Berezhiani et al., 1993; Verheest, 1996; Gedalin et al., 1998; Keston et al., 2003; Muoz, 2004; Laing & Diver, 2006) because of its occurrence in astrophysics and encountering with positron as an antimatter in high-energy physics. However, there are many experiments that confirm the possibility of nonrelativistic electron-positron plasmas in laboratory (Trivelpiece, 1972; Boehmer et al., 1995). It has been observed that the annihilation time of electron-positron pairs in typical experiments is often long compared with typical confinement times (Surko & Murphy, 1990), showing that the lifetime of electron-positron pairs in the plasma is much longer than the characteristic time scales of typical oscillations. The long lifetime of electron-positron pairs against pair annihilation indicates that many collective modes can occur and propagate in an electron-positron plasma. Although pair plasmas consisting of electrons and positrons have been experimentally produced, however, because of fast annihilation and the formation of positronium atoms and also low densities in typical electron- positron experiments, the identification of collective modes in such experiments is practically very difficult. To resolve this problem, one may experimentally deal with a pure pair-ion plasma instead of a pure electron- positron plasma for identification of the collective modes. An appropriate experimental method has been developed by Oohara and Hatakeyama (Oohara & Hatakeyama, 2003) for the generation of pure pair-ion plasmas consisting only positive and negative ions with equal masses by using fullerenes $\mathrm{C}_{60}^{-}$ and $\mathrm{C}_{60}^{+}$. The fullerenes are molecules containing 60 carbon atoms in a very regular geometric arrangement, and so a fullerene pair plasma is physically akin to the an electron-positron plasma, without having to worry about fast annihilation. By drastically improving the pair-ion plasma source in order to excite effectively the collective modes, Oohara _et al._ (Oohara et al., 2005) have experimentally examined the electrostatic modes propagating along the magnetic-field lines in a fullerene pair plasma. In exploring the electrostatic modes in a pair plasma, most of authors have merely studied the high frequency Langmuir-type oscillation in a pure electron-positron plasma (Tsytovich & Wharton, 1978; Iwamoto, 1993; Zank & Greaves, 1995; Verheest, 2005) or in a pure pair-ion plasma (Vranjes & Poedts, 2005) via the theoretical studies. 
Particularly, Iwamato (Iwamoto, 1993) and Veranjes and Poedts (Vranjes & Poedts, 2005) have studied the longitudinal modes in a pair plasma only in the case in which the phase velocity of the wave is much larger than the thermal velocity of the particles which leads to the Langmuir-type waves. However, in the experiment of Oohara _et al._ (Oohara et al., 2005), three kinds of electrostatic modes have been observed from the obtained dispersion curves: a relatively low-frequency band with nearly constant group velocity (the acoustic waves), an intermediate-frequency backward-like mode (to our knowledge, with the lack of a satisfactory theoretical explanation), and the Langmuir-type waves in a relatively high- frequency band. This experiment indicates that in a pure pair plasma, besides of the Langmuir-type waves, the acoustic-like modes are possible in practice. There, Oohara _et al._ (Oohara et al., 2005) have briefly discussed some aspects of their experimental results by using a theoretical two-fluid model. Here, our goal is to investigate the possibility and the properties of the intriguing acoustic-like modes in a pair plasma (in both symmetric and asymmetric cases) by using a kinetic theory model and to argue some properties of these modes in a subtler manner. It is to be noted that the only asymmetry in a pure pair plasma may arise from a difference in temperatures of species. Physically, the temperature-asymmetry in a pair plasma may arise from the typical experimental procedure in which a pair plasma is produced in the laboratory. For example, an effective technique for creating an electron- positron plasma in the laboratory is as follows: at first, we may obtain a positron plasma through scattering from a buffer gas into a panning trap (Surko et al., 1989; Greaves et al., 1994); using this technique, the positrons can be stored at densities of order $10^{7}cm^{-3}$ and lifetime of order $10^{3}sec$ in the recent experiments. Then, an electron-positron plasma with sufficient stability can be produced by injecting a low-energy electron beam into the positrons (Greaves & Surko, 1995; Liang et al., 1998; Greaves & Surko, 2001; Surko & Greaves, 2004). For our purpose, we assume that the phase speed of the acoustic-like modes lies in the vicinity of the thermal velocities of the species (in fact, between the thermal velocities of the two species). The situation is somewhat similar to the case in which the possibility and properties of the electron-acoustic waves in a two-temperature (cold and hot) electron plasma is examined (Defler & Simonen, 1969; Watanabe & Taniuti, 1977; Dubouloz et al., 1991; Kakad et al., 2007; Amour et al., 2012). It is often observed that the physical distribution of particles in space plasmas as well as in laboratory plasmas are not exactly Maxwellian and particles show deviations from the thermal distribution (Huang & Driscoll, 1994; Liu et al., 1994). Presence of nonthermal particles in space plasmas has been widely confirmed by many spacecraft measurements (Montgomery et al., 1968; Feldman et al., 1975; Maksimovic et al., 1997; Zouganelis, 2008). In many cases, the velocity distributions show non-Maxwellian tails decreasing as a power-law distribution in particle speed. 
Several models for phase space plasma distributions with superthermal wings or other deviations from purely Maxwellian behavior have become rather popular in recent years, like the so- called kappa ($\kappa$) distribution which was introduced initially by Vasyliunas in 1968 (Vasyliunas, 1968) for describing plasmas out of the thermal equilibrium such as the magnetosphere environments and the Solar winds (Maksimovic et al., 1997), or the nonthermal model advanced by Cairns _et al._ in 1995 (Cairns et al., 1995) which was introduced at first for an explanation of the solitary electrostatic structures involving density depletions that have been observed in the upper ionosphere in the auroral zone by the Freja satellite (Dovner et al., 1994), and also the nonextensive model which go under the name of Tsallis. In the following we want to briefly review the formalism of the Tsallis model and to argue why it is preferred, rather than that of the Cairns and kappa model. From a statistical point of view, there are numerous studies indicating the breakdown of the Boltzmann-Gibbs (BG) statistics for description of many systems with long-range interactions, long-time memories and fractal space- time structures (see, e.g., Landsberg (1984); Tsallis et al. (1995); Tsallis (1995, 1999)). Generally, the standard BG extensive thermo-statistics constitutes a powerful tool when microscopic interactions and memories are short ranged and the environment is an Euclidean space-time, a continuous and differentiable manifold. Basically, systems subject to the long-range interactions and correlations and long-time memories are related to the nonextensive statistics where the standard BG statistics and its Maxwellian distribution do not apply. The plasma environments in the astrophysical and laboratorial systems are obviously subject to spatial and temporal long-range interactions evolving in a non-Euclidean space-time that make their behavior nonextensive. A suitable generalization of the Boltzmann-Gibbs-Shannon (BGS) entropy for statistical equilibrium was first proposed by Reyni (Reyni, 1955) and subsequently by Tsallis (Tsallis, 1988, 1994), preserving the usual properties of positivity, equiprobability and irreversibility, but suitably extending the standard extensivity or additivity of the entropy to nonextensivity. The nonextensive generalization of the BGS entropy which proposed by Tsallis in 1988 (Tsallis, 1988, 1994) is given by the following expression: $S_{q}=k_{B}\frac{1-\sum_{i}p_{i}^{q}}{q-1},$ (1) where $k_{B}$ is the standard Boltzmann constant, $\\{p_{i}\\}$ denotes the probabilities of the microstate configurations and $q$ is a real parameter quantifying the degree of nonextensivity. The most distinctive feature of $S_{q}$ is its pseudoadditivity. Given a composite system $A+B$, constituted by two subsystems $A$ and $B$, which are independent in the sense of factorizability of the joint microstate probabilities, the Tsallis entropy of the composite system $A+B$ satisfies $S_{q}(A+B)=S_{q}(A)+S_{q}(B)+(1-q)S_{q}(A)S_{q}(B)$. In the limit of $q\rightarrow 1$, $S_{q}$ reduces to the celebrated logarithmic Boltzmann- Gibbs entropy $S=-k_{B}\sum_{i}p_{i}\ln p_{i}$, and the usual additivity of entropy is recovered. Hence, $|1-q|$ is a measure of the lack of extensivity of the system. 
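The pseudoadditivity property follows directly from Eq. (1); the following is a small numerical check of ours, with $k_{B}=1$ and arbitrary factorizing probabilities.

```python
import numpy as np

# Numerical check (k_B = 1) of the pseudoadditivity of the Tsallis entropy, Eq. (1),
# for two independent subsystems whose joint probabilities factorize.
def tsallis_entropy(p, q):
    return (1 - np.sum(p ** q)) / (q - 1)

rng = np.random.default_rng(1)
pA = rng.random(4); pA /= pA.sum()
pB = rng.random(3); pB /= pB.sum()
pAB = np.outer(pA, pB).ravel()           # factorized joint probabilities of A + B

q = 1.7
lhs = tsallis_entropy(pAB, q)
rhs = (tsallis_entropy(pA, q) + tsallis_entropy(pB, q)
       + (1 - q) * tsallis_entropy(pA, q) * tsallis_entropy(pB, q))
print(np.isclose(lhs, rhs))              # True: S_q(A+B) = S_q(A) + S_q(B) + (1-q) S_q(A) S_q(B)
```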
There are numerous evidences exhibiting that the nonextensive statistics, arising from $S_{q}$, is a better framework for describing many physical systems such as the galaxy clusters (Lavagno et al., 1998), the plasmas (Boghosian, 1996; Tsallis & de Souza, 1997), the turbulent systems (Arimitsu & Arimitsu, 2000; Beck, 2001; Beck et al., 2001), and so on, in which the system shows a nonextensive behavior as a result of long-range interactions and correlations. The experimental results in such systems display a non-Maxwellian velocity distribution for the particles (Huang & Driscoll, 1994; Liu et al., 1994). The functional form of the velocity distribution in the Tsallis formalism may be derived through a nonextensive generalization of the Maxwell ansatz (Silva et al., 1998), or through the maximizing Tsallis’ entropy under the constraints imposed by normalization and the energy mean value (Curado, 1999; Abe, 1999). Furthermore, from a nonextensive generalization of the “molecular chaos hypothesis”, it is shown that the equilibrium $q$-nonextensive distribution is a natural consequence of the _H_ theorem (Lima et al., 2001). It is to be noted that the empirically derived kappa distribution function in space plasmas is equivalent to the $q$-distribution function in Tsallis nonextensive formalism, in the sense that the spectrum of the velocity distribution function in both models show the similar behavior and, in fact, both the kappa distribution and the Tsallis $q$-nonextensive distribution describe deviations from the thermal distribution. Particularly, Leubner in 2002 (Leubner, 2002) showed that the distributions very close to the kappa distributions are a consequence of the generalized entropy favored by the nonextensive statistics, and proposed a link between the Tsallis nonextensive formalism and the kappa distribution functions. In fact, relating the parameter $q$ to $\kappa$ by formal transformation $\kappa=1/(1-q)$ (Leubner, 2002) provides the missing link between the $q$-nonextensive distribution and the $\kappa$-distribution function favored in space plasma physics, leading to a required theoretical justification for the use of $\kappa$-distributions from fundamental physics. Furthermore, Livadiotis and McComas in 2009 (Livadiotis & McComas, 2009) examined how kappa distributions arise naturally from the Tsallis statistical mechanics. On the other hand, the nonthermal distribution function introduced by Cairns et al. (Cairns et al., 1995) is a proposal function to model an electron distribution with a population of energetic particles. It is especially appropriate for describing the nonlinear propagation of large amplitude electrostatic excitations such as solitary waves and double layers which are very common in the magnetosphere. However, the lack of a statistical foundation behind this proposal function is clearly seen, leading to less attention to it rather than the kappa function and the Tsallis distribution. Anyway, the $q$-nonextensive formalism, with a powerful thermo-statistics foundation and numerous experimental evidences, may cover many features of the other nonthermal models and provide a good justification for its preference over the other models. It has considerably extended both statistical mechanics formalism and its range of applicability. The interested reader may refer to the Refs. 
(Plastino, 2004; Abe & Okamato, 2001; Gell-Mann & Tsallis, 2004; Tsallis, 2009) where the significance, historical background, physical motivations, foundations and applications of the nonextensive thermo- statistics have been discussed in detail. The problem of waves, Landau damping and instabilities in typical plasmas have been investigated by some authors in the framework of the Tsallis nonextensive statistics (Lima et al., 2000; Silva et al., 2005; Valentini, 2005; Liyan & Jiulin, 2008; Saberian & Esfandyari-Kalejahi, 2013). Particularly, it is to be noted that the physical state described by the $q$-nonextensive distribution in the Tsallis’s statistics is not exactly the thermodynamic equilibrium (Liyan & Jiulin, 2008). In fact, the deviation of $q$ from unity quantifies the degree of inhomogeneity of the temperature $T$ via the formula $k_{B}\nabla T+(1-q)Q_{\alpha}\nabla\phi=0$ (Du, 2004), where $Q_{\alpha}$ denotes the electric charge of specie $\alpha$, and $\phi$ is the electrostatic potential. In other words, the nonextensive statistics describes a system that have been evolved from a nonequilibrium stationary state with inhomogeneous temperature which contains a number of nonthermal particles. In the present work, we attempt to investigate the possibility of the acoustic-like modes in a field-free and collisionless pair plasma (electron- positron or pair-ion) and to discuss the damping and instability of modes in the context of the Tsallis’ nonextensive statistics. In Sec. 2, a kinetic theory model based on the linearized Vlasov and Poisson’s equations is applied for deriving the dielectric function ($D(k,\omega)$) for longitudinal waves in an unmagnetized pair plasma. We then find the solutions of $D(k,\omega)=0$ for the acoustic-like waves with the constraint of weak damping or growth by considering a $q$-nonextensive distribution for stationary state of the plasma, as demonstrated in Sec. 3. The dispersion relation, Landau damping and instability of the acoustic-like modes are discussed in Sec. 4. Finally, a summary of our results is given in Sec. 5. ## 2 The model equations In this section, we present a brief review of kinetic equations for describing the electrostatic collective modes specialized to a pair plasma (electron- positron or pair-ion) with the constraint of weak damping or growth. We consider a spatially uniform field-free pair plasma at the equilibrium state. If at a given time $t=0$ a small amount of charge is displaced in the plasma, the initial perturbation may be described by $f_{\alpha}(t=0)=f_{0,\alpha}(\vec{v})+f_{1,\alpha}(\vec{x},\vec{v},t=0),\ \ \ \ f_{1,\alpha}\ll f_{0,\alpha},$ where $f_{0,\alpha}$ corresponds to the unperturbed and time-independent stationary distribution and $f_{1,\alpha}$ is the corresponding perturbation about the equilibrium state. Here, $\alpha$ stands for electrons and positrons ($\alpha=e^{\pm}$) or fullerene pairs ($\alpha=\mathrm{C}_{60}^{\pm}$). We assume that the perturbation is electrostatic and the displacement of charge gives rise to a perturbed electric but no magnetic field. 
With this assumption, the time development of $f_{1,\alpha}(\vec{x},\vec{v},t)$ is given by the solution of the linearized Vlasov and Poisson’s equations as follows (Landau, 1946; Krall & Trivelpiece, 1973): $\frac{\partial f_{1,-}}{\partial t}+\vec{v}\cdot\frac{\partial f_{1,-}}{\partial\vec{x}}+\frac{e}{m}\nabla\phi_{1}\cdot\frac{\partial f_{0,-}}{\partial\vec{v}}\;=\;0,$ (2) $\frac{\partial f_{1,+}}{\partial t}+\vec{v}\cdot\frac{\partial f_{1,+}}{\partial\vec{x}}-\frac{e}{m}\nabla\phi_{1}\cdot\frac{\partial f_{0,+}}{\partial\vec{v}}\;=\;0,$ (3) $\nabla^{2}\phi_{1}=4\pi n_{0}e\int(f_{1,-}-f_{1,+})\,\mathrm{d}\vec{v},$ (4) where $e$, $m$ and $n$ denote, respectively, the absolute charge, mass and number density of the pairs and $\phi_{1}$ is the electrostatic potential produced by the perturbation. Here, we have labeled the distribution function of negative and positive pairs with the subscripts $\pm$. This set of linearized equations for perturbed quantities may be solved simultaneously to investigate the plasma properties for the time intervals shorter than the binary collision times. Specially, we can study the properties of the plasma waves whose oscillations period are much less than a binary collision time. The standard technique for simultaneously solving the differential equations (2)-(4) is the method of integral transforms, as developed for the first time by Landau in the case of an ordinary electron-ion plasma (Landau, 1946; Krall & Trivelpiece, 1973). Another simplified method of solving the Vlasov- Poisson’s equations for the longitudinal waves, with the frequency $\omega$ and the wave vector $\vec{k}$, is to assume that the solution has the form $\begin{array}[]{l}f_{1,\alpha}(\vec{x},\vec{v},t)=f_{1,\alpha}(\vec{v})e^{i(\vec{k}\cdot\vec{x}-\omega t)},\ \ \ \alpha=e^{\pm}\ \ \mathrm{or}\ \ \mathrm{C}_{60}^{\pm},\\\ \phi_{1}(\vec{x},t)=\phi_{1}e^{i(\vec{k}\cdot\vec{x}-\omega t)}.\end{array}$ (5) Without loss of the generality, we consider the $x$-axis to be along the direction of the wave vector $\vec{k}$, and let $v_{x}=u$. Then, by applying the Eq. (5) and solving the Eqs. (2)-(4) we find the dispersion relation for longitudinal waves in a pair plasma as follows $D(k,\omega)=1-\frac{4\pi n_{0}e^{2}}{mk^{2}}\int\frac{\frac{\partial}{\partial u}(f_{0,-}(u)+f_{0,+}(u))}{u-\frac{\omega}{k}}\,\mathrm{d}u=0,$ (6) where $D(k,\omega)$ is the dielectric function of a field-free pair plasma for the longitudinal oscillations. We then can investigate the response of the pair plasma to an arbitrary perturbation via the response dielectric function $D(k,\omega)$. In general, the frequency $\omega$ which satisfies the dispersion relation $D(k,\omega)=0$ is complex, i.e., $\omega=\omega_{r}+i\omega_{i}$. However, in many cases $Re[\omega(k)]\gg Im[\omega(k)]$, and the plasma responds to the perturbation a long time after the initial disturbance with oscillations at a range of the well-defined frequencies. These are the normal modes of the plasma, in the sense that they are the nontransient response of the plasma to an initial perturbation. We can determine the normal modes of the plasma via the dispersion relation $D[k,\omega(k)]=0$, which gives the frequency of the plasma waves as a function of the wave number $k$ or vice versa. 
It should be further mentioned that when we solve the Vlasov and Poisson’s equations as an initial valve problem, here via $f_{0,-}+f_{0,+}$, it is possible to obtain the solutions with negative or positive values of $\omega_{i}$, corresponding to the damped or growing waves, respectively. This can be explicitly seen from the electrostatic potential associated with the wave number $k$ of the excitation as follows: $\phi_{1}(x,t)=\phi_{1}e^{i(kx-\omega_{r}t)}e^{\omega_{i}t},$ (7) where a solution with negative $\omega_{i}$ displays a damped wave, while the solution with positive one corresponds to an unstable mode. When the damping or growth is weak we can expand the velocity integral in Eq. (6) around $\omega=\omega_{i}$ to find the zeros of $D(k,\omega)$. The dielectric function $D(k,\omega)$ is in general a complex function and thus the dispersion relation can be written as follows: $D(k,\omega_{r}+i\omega_{i})=D_{r}(k,\omega_{r}+i\omega_{i})+iD_{i}(k,\omega_{r}+i\omega_{i})=0,$ (8) where $D_{r}$ and $D_{i}$ are the real and imaginary parts of the dielectric function. Since we want to consider the weakly damped or growing waves, i.e., $\omega_{i}\ll\omega_{r}$, the Eq. (8) can be Taylor expanded in the small quantity $\omega_{i}$ as follows: $D_{r}(k,\omega_{r})+i\omega_{i}\frac{\partial D_{r}(k,\omega_{r})}{\partial\omega_{r}}+i[D_{i}(k,\omega_{r})+i\omega_{i}\frac{\partial D_{i}(k,\omega_{r})}{\partial\omega_{r}}],$ (9) where $D_{r}$ and $D_{i}$ read $D_{r}(k,\omega_{r})=1-\frac{4\pi n_{0}e^{2}}{mk^{2}}P.V.\int\frac{\frac{\partial}{\partial u}(f_{0,-}(u)+f_{0,+}(u))}{u-\frac{\omega_{r}}{k}}\,\mathrm{d}u,$ (10) $D_{i}(k,\omega_{r})=-\pi(\frac{4\pi n_{0}e^{2}}{mk^{2}})[\frac{\partial}{\partial u}(f_{0,-}(u)+f_{0,+}(u))]_{u=\frac{\omega_{r}}{k}}.$ (11) Here, we have made the analytic continuation of the velocity integral of the Eq. (6) over $u$, along the real axis, which passes under the pole at $u=\frac{\omega}{k}$ with the constraint of weakly damped waves, where $P.V.\int$ denotes the Cauchy principal value. With the assumption $\omega_{i}\ll\omega_{r}$, by balancing the real and imaginary parts of the Eq.(9) and neglecting the terms of order $(\frac{\omega_{i}}{\omega_{r}})^{2}$, we find that $\omega_{r}$ and $\omega_{i}$ can be computed, respectively, from the relations $\displaystyle D_{r}(k,\omega_{r})=0,$ (12a) $\displaystyle\omega_{i}=-\frac{D_{i}(k,\omega_{r})}{{\partial D_{r}(k,\omega_{r})}/{\partial\omega_{r}}}.$ (12b) ## 3 Acoustic modes with nonextensive stationary state Now, we want to obtain the formalism and some features of the acoustic-like modes in a pair (electron-positron or pair-ion) plasma in the context of the Tsallis nonextensive statistics. For this purpose we assume that the stationary state of the plasma obeys the $q$-nonextensive distribution function, instead of a Maxwellian one, which merely describes a fully equilibrium plasma. The $q$-nonextensive distribution function of stationary state for species $\alpha$ in one-dimension is given by (Silva et al., 1998; Curado, 1999; Abe, 1999; Lima et al., 2001) $f_{0\alpha}(u)=A_{\alpha,q}[1-(q-1)\frac{m_{\alpha}u^{2}}{2k_{B}T_{\alpha}}]^{\frac{1}{q-1}},$ (13) where $m_{\alpha}$ and $T_{\alpha}$ are, respectively, the mass and temperature of species $\alpha$ ($\alpha=e^{\pm}\ \mathrm{or}\ \mathrm{C}_{60}^{\pm}$) and $k_{B}$ is the standard Boltzmann constant. 
The normalization constant $A_{\alpha,q}$ can be written as $A_{\alpha,q}=L_{q}\sqrt{\frac{m_{\alpha}}{2\pi k_{B}T_{\alpha}}},$ (14) where the dimensionless $q$-dependent coefficient $L_{q}$ reeds $\displaystyle L_{q}=\frac{\Gamma(\frac{1}{1-q})}{\Gamma(\frac{1}{1-q}-\frac{1}{2})}\sqrt{1-q},\ \ \ \ \mathrm{for}\ \ -1<q\leq 1$ (15a) $\displaystyle L_{q}=(\frac{1+q}{2})\frac{\Gamma(\frac{1}{2}+\frac{1}{q-1})}{\Gamma(\frac{1}{q-1})}\sqrt{q-1}.\ \ \ \ \mathrm{for}\ \ q\geq 1$ (15b) One may examine that for $q>1$ , the $q$-distribution function (13) exhibits a thermal cutoff, which limits the velocity of particles to the values $u<u_{max}$, where $u_{max}=\sqrt{\frac{2k_{B}T_{\alpha}}{m_{\alpha}(q-1)}}$. For these values of the parameter $q$ we have $S_{q>1}(A+B)<S(A)+S(B)$ referred to the _subextensivity_. This thermal cutoff is absent when $q<1$ , that is, the velocity of particles is unbounded for these values of the parameter $q$. In this case, we have $S_{q<1}(A+B)>S(A)+S(B)$ referred to the _superextensivity_. Moreover, the $q$-nonextensive distribution (13) is unnormalizable for the values of the $q<-1$. Furthermore, the parameter $q$ may be further restricted by the other physical requirements, such as finite total number of particles and consideration of the energy equipartition for contribution of the total mean energy of the system. Interestingly, in the extensive limit $q\rightarrow 1$ where $S(A+B)=S(A)+S(B)$, and by using the formula $lim_{\mid z\mid\rightarrow\infty}z^{-a}[\frac{\Gamma(a+z)}{\Gamma(z)}]=1$ (Abramowitz & Stegun, 1972), the distribution function (13) reduces to the standard Maxwell- Boltzmann distribution $f_{0\alpha}(u)=\sqrt{\frac{m_{\alpha}}{2\pi k_{B}T_{\alpha}}}e^{-\frac{m_{\alpha}u^{2}}{2k_{B}T_{\alpha}}}$. In Fig. 1, we have depicted schematically the nonthermal behavior of the distribution function (13) for some values of the spectral index $q$ in which the velocity $u$ and the distribution function $f(u)$ have normalized by the standard thermal speed $v_{th}=\sqrt{\frac{2k_{B}T}{m}}$ and $\sqrt{\frac{m}{2\pi k_{B}T}}$, respectively. We can see that in the case of a superextensive distribution with $q<1$ [Fig. 1(a)], comparing with the Maxwellian limit (solid curve), there are more particles with the velocities faster than the thermal speed $v_{th}$. These are the so-called superthermal particles and we can see that the $q$-distribution with $q<1$ behave like the $\kappa$ distribution, the same as that introduced for the first time by Vasyliunas in 1968 to describe the space plasmas far from the thermal equilibrium (Vasyliunas, 1968). In fact, in a superthermal plasma modeled by a $\kappa$-like distribution (here, the cases in which $q<1$), the particles have distributed in a wider spectrum of the velocities, in comparison with a Maxwellian plasma. In other words, the low values of the spectral index $q$ correspond to a large fraction of superthermal particle populations in the plasma. On the other hand, in the case of a subextensive distribution with $q>1$ [Fig. 1(b)], comparing with the Maxwellian limit (solid curve), there is a large fraction of particles with the velocities slower than the thermal speed $v_{th}$. Moreover, for these values of the parameter $q$, we can explicitly see the mentioned thermal cutoff which limits the velocity of particles. In fact, the $q$-nonextensive distributions with $q>1$ are suitable for describing the systems containing a large number of low speed particles. 
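For concreteness, the distribution (13)-(15) is easy to tabulate numerically. The sketch below is ours, with velocities in units of $v_{th}=\sqrt{2k_{B}T/m}$ and $f$ in units of $\sqrt{m/2\pi k_{B}T}$, as in Fig. 1: it evaluates $f_{0\alpha}$ for a superextensive and a subextensive value of $q$, checks the normalization (which becomes $\int\tilde{f}\,d\tilde{u}=\sqrt{\pi}$ in these units), and checks the second moment against the $q$-generalized equipartition value $1/(3q-1)$ quoted further below.

```python
import numpy as np
from math import gamma as Gamma

# Illustrative sketch (ours): the 1-D q-nonextensive distribution of Eqs. (13)-(15) in the
# normalized units of Fig. 1 (u in units of v_th = sqrt(2 k_B T / m), f in units of
# sqrt(m / (2 pi k_B T))). Valid for -1 < q, with q > 1/3 required for a finite second moment.
def L_q(q):
    if q < 1:
        return np.sqrt(1 - q) * Gamma(1 / (1 - q)) / Gamma(1 / (1 - q) - 0.5)
    return 0.5 * (1 + q) * np.sqrt(q - 1) * Gamma(0.5 + 1 / (q - 1)) / Gamma(1 / (q - 1))

def f0(u, q):
    base = np.maximum(1.0 - (q - 1.0) * u ** 2, 0.0)    # thermal cutoff enforced for q > 1
    return L_q(q) * base ** (1.0 / (q - 1.0))

u = np.linspace(-20, 20, 20001)
for q in (0.7, 1.5):                                     # superextensive and subextensive cases
    f = f0(u, q)
    norm = np.trapz(f, u)                                # ~sqrt(pi) = 1.7725 in these units
    u2 = np.trapz(u ** 2 * f, u) / norm                  # ~1/(3q - 1)
    print(q, round(norm, 4), round(u2, 4), round(1 / (3 * q - 1), 4))
```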
The phase velocity of the acoustic modes in a pair plasma lies between the thermal velocities of the pairs. Here, we assume that $T_{+}<T_{-}$ and therefore the phase velocity of the acoustic waves lies in the frequency band $v_{th,+}<v_{\phi}<v_{th,-}$, where $v_{\phi}=\frac{\omega_{r}}{k}$ and $v_{th,\pm}=(\frac{k_{B}T_{\pm}}{m})^{\frac{1}{2}}$, respectively, denote the phase velocity of the wave and thermal speed of the pairs. It is to be noted that because of the symmetry involved in a pair plasma, the other case in which $T_{-}<T_{+}$ is physically identical to our assumption here. Moreover, it is reminded that because of the same dynamics of the species in a pure pair plasma, we do not make a considerable difference in temperatures of the pairs, but we assume that it is finite and small. As we mentioned earlier, we may postulate physically that this finite temperature-asymmetry in a pair plasma may arise from the typical experimental procedure in which the pair plasma is produced in the laboratory (Greaves & Surko, 1995; Liang et al., 1998; Greaves & Surko, 2001; Surko & Greaves, 2004). With $v_{th,+}<v_{\phi}<v_{th,-}$, the Cauchy principal value of Eq. (10) for the terms that are involving $f_{0,-}$ and $f_{0,+}$ may be evaluated by an expanding in $u$ as follows: $\displaystyle\int^{+u_{max}}_{-u_{max}}\frac{\frac{\partial}{\partial u}f_{0,-}(u)}{u-\frac{\omega_{r}}{k}}\,\mathrm{d}u=\int^{+u_{max}}_{-u_{max}}\frac{\partial f_{0,-}(u)}{\partial u}(\frac{1}{u}+\frac{1}{u^{2}}\frac{\omega_{r}}{k}+\frac{1}{u^{3}}\frac{\omega_{r}^{2}}{k^{2}}+...)\,\mathrm{d}u,$ (16a) $\displaystyle\int^{+u_{max}}_{-u_{max}}\frac{\frac{\partial}{\partial u}f_{0,+}(u)}{u-\frac{\omega_{r}}{k}}\,\mathrm{d}u=-\frac{k}{\omega_{r}}\int^{+u_{max}}_{-u_{max}}\frac{\partial f_{0,+}(u)}{\partial u}(1+\frac{k}{\omega_{r}}u+\frac{k^{2}}{\omega_{r}^{2}}u^{2}+\frac{k^{3}}{\omega_{r}^{3}}u^{3}+...)\,\mathrm{d}u.$ (16b) Here, in order to include both cases $q<1$ (superextensivity) and $q>1$ (subextensivity), we have denoted the integration limits in Eq. (16) by $\pm u_{max}$. In fact, as discussed earlier, the integration limits are unbounded, i.e., $\pm u_{max}=\pm\infty$, when $q<1$, and they are given by the $q$-dependent thermal cutoff $\pm u_{max}=\pm\sqrt{\frac{2k_{B}T_{\alpha}}{m_{\alpha}(q-1)}}$ when $q>1$. With the $q$-nonextensive distribution given in Eq. (13), noting that $f_{0\alpha}(u)$ is an even function with argument $u$ and $\frac{\partial f_{0\alpha}}{\partial u}$ is an odd function, we may calculate the real part of the dielectric function in Eq. (10) as follows: $D_{r}(k,\omega_{r})=1+\frac{4\pi n_{0}e^{2}}{mk^{2}}\frac{1}{v_{th,-}^{2}}(\frac{1+q}{2})-\frac{4\pi n_{0}e^{2}}{m\omega_{r}^{2}}[1+3(\frac{2}{3q-1})\frac{k^{2}}{\omega_{r}^{2}}v_{th,+}^{2}].$ (17) The integrals in Eq. (16) are computed by parts and there, we have calculated the average values of $u^{2}$ as follows: $<u^{2}>=\int^{+u_{max}}_{-u_{max}}u^{2}f_{\alpha 0}(u)\,\mathrm{d}u=\frac{2}{3q-1}\frac{k_{B}T_{\alpha}}{m_{\alpha}},$ (18) which requires that the parameter $q$ must restrict to the values of $q>\frac{1}{3}$. Note that for the values of $q$ equal or lower than the critical value $q_{c}=\frac{1}{3}$, the mean value of $u^{2}$ diverges. Therefore, we see that the parameter $q$ for the case $q<1$ is further restricted to the values $\frac{1}{3}<q<1$, in order that the physical requirement of energy equipartition is preserved. 
We emphasize that our results here are valid both for the case $\frac{1}{3}<q<1$, where $u_{max}$ is unbounded, and for the case $q>1$, in which $u_{max}$ is given by the thermal cutoff $u_{max}=\sqrt{\frac{2k_{B}T_{\alpha}}{m_{\alpha}(q-1)}}$. Note that in both cases the integrals in Eq. (16) are evaluated over limits that are symmetric about the origin. The interested reader may easily check the validity of Eqs. (17) and (18) for all allowed values of $q$. Furthermore, in the extensive limit $q\rightarrow 1$, Eq. (18) reduces to the familiar energy equipartition theorem for each degree of freedom in the BG statistics, $\langle\frac{1}{2}m_{\alpha}u^{2}\rangle=\frac{1}{2}k_{B}T_{\alpha}$. It is to be noted that the $q$-distribution given in Eq. (13) describes the stationary state of the species $\alpha$ in the framework of the Tsallis nonextensive formalism. The spectral index $q$ determines the slope of the energy spectrum of the nonthermal particles and measures the deviation from the standard thermal distribution (which is recovered in the limit $q\rightarrow 1$). Its value is determined by the long-range interactions and correlations of the whole system. Therefore, whether or not a distinction in $q$ is made between the pairs depends on the physics of the system under consideration. Here, following El-Tantawy _et al._ (El-Tantawy et al., 2012), we make no distinction between the pairs in $q$. The solution of the equation $D_{r}(k,\omega_{r})=0$ yields the dispersion relation for the acoustic modes in a nonextensive pair plasma as follows: $\omega_{r}^{2}=k^{2}c_{s}^{2}[\frac{1}{(k\lambda_{D})^{2}(1+\frac{1}{\sigma})+(\frac{1+q}{2})}+3(\frac{2}{3q-1})\sigma],$ (19) where we have defined the sound speed of the acoustic-like modes as $c_{s}={(\frac{k_{B}T_{-}}{m})}^{\frac{1}{2}}$. Here, $\sigma=\frac{T_{+}}{T_{-}}$ is the temperature ratio of the positive to the negative species, and $\lambda_{D}$ is the Debye screening length, given in a charge-neutral pair plasma by $\lambda_{D}^{-2}=\frac{4\pi n_{0}e^{2}}{k_{B}}(\frac{1}{T_{-}}+\frac{1}{T_{+}}).$ (20) Defining the natural oscillation frequency of a charge-neutral pair plasma as $\omega_{p}=(\frac{8\pi n_{0}e^{2}}{m})^{\frac{1}{2}}$ (Saberian & Esfandyari-Kalejahi, 2013), it is convenient to rewrite the linear dispersion relation, for later reference, as $(\frac{\omega_{r}}{\omega_{p}})^{2}=(k\lambda_{D})^{2}[\frac{\frac{1}{2}(1+\frac{1}{\sigma})}{(k\lambda_{D})^{2}(1+\frac{1}{\sigma})+(\frac{1+q}{2})}+3(\frac{1}{3q-1})(1+\sigma)].$ (21) On the other hand, by using Eq. (11) and applying the $q$-nonextensive distribution function (13), it is straightforward to obtain the imaginary part of the dielectric function as follows: $D_{i}(k,\omega_{r})=L_{q}\frac{\sqrt{\pi}}{k^{3}\lambda_{D}^{3}(1+\frac{1}{\sigma})^{\frac{3}{2}}}\frac{\omega_{r}}{\omega_{p}}\\{[1-(q-1)\frac{\omega_{r}^{2}}{k^{2}\lambda_{D}^{2}\omega_{p}^{2}(1+\frac{1}{\sigma})}]^{\frac{2-q}{q-1}}+\frac{1}{\sigma^{\frac{3}{2}}}[1-(q-1)\frac{\omega_{r}^{2}}{k^{2}\lambda_{D}^{2}\omega_{p}^{2}(1+\sigma)}]^{\frac{2-q}{q-1}}\\}.$ (22) With $D_{r}(k,\omega_{r})$ and $D_{i}(k,\omega_{r})$ given in Eqs. (17) and (22), we may obtain the explicit solution for the imaginary part of the frequency by using relation (12b), noting that both $k\lambda_{D}$ and $\frac{\omega_{i}}{\omega_{r}}$ are assumed to be small.
The result is as follows: $\displaystyle\omega_{i}=-\sqrt{\frac{\pi}{8}}\omega_{r}L_{q}(\frac{1}{(k\lambda_{D})^{2}(1+\frac{1}{\sigma})+(\frac{1+q}{2})}+3(\frac{2}{3q-1})\sigma)^{\frac{3}{2}}\times$ $\displaystyle\\{[1-(q-1)(\frac{\frac{1}{2}}{(k\lambda_{D})^{2}(1+\frac{1}{\sigma})+(\frac{1+q}{2})}+\frac{3}{2}(\frac{2}{3q-1})\sigma)]^{\frac{2-q}{q-1}}+$ $\displaystyle\frac{1}{\sigma^{\frac{3}{2}}}[1-(q-1)(\frac{\frac{1}{2\sigma}}{(k\lambda_{D})^{2}(1+\frac{1}{\sigma})+(\frac{1+q}{2})}+\frac{3}{2}(\frac{2}{3q-1}))]^{\frac{2-q}{q-1}}\\},$ (23) where $L_{q}$ is the coefficient given in Eq. (15). Note that in deriving the solutions (19) and (23) for the acoustic-like modes in a pair plasma, we have assumed the condition $k\lambda_{D}\ll 1$, which corresponds to the region of weak damping or growth (the long-wavelength limit). Moreover, the parameter $\sigma$ (the temperature ratio of the species) must be taken close to unity, in order to remain compatible with the physical circumstances described above. In the extensive limit $q\rightarrow 1$, our results reduce to the solutions for the acoustic-like modes in a Maxwellian pair plasma as follows: $\omega_{r}^{2}=k^{2}c_{s}^{2}[\frac{1}{k^{2}\lambda_{D}^{2}(1+\frac{1}{\sigma})+1}+3\sigma]$ (24) $\frac{\omega_{i}}{\omega_{r}}=-\sqrt{\frac{\pi}{8}}(\frac{1}{k^{2}\lambda_{D}^{2}(1+\frac{1}{\sigma})+1}+3\sigma)^{\frac{3}{2}}\\{e^{-(\frac{\frac{1}{2}}{k^{2}\lambda_{D}^{2}(1+\frac{1}{\sigma})+1}+\frac{3}{2}\sigma)}+\frac{1}{\sigma^{\frac{3}{2}}}e^{-(\frac{\frac{1}{2\sigma}}{k^{2}\lambda_{D}^{2}(1+\frac{1}{\sigma})+1}+\frac{3}{2})}\\}$ (25) Note that in the extensive limit, the acoustic waves exhibit only (Landau) damping and no growth, because the imaginary part of the frequency given by Eq. (25) is negative. Furthermore, in the symmetric case $\sigma\rightarrow 1$, the dispersion relation of the acoustic waves in pair or pair-ion plasmas given in Eq. (24) reduces to Eq. (12) of Ref. (Kaladze et al., 2012). One basic feature of our work is the inclusion of the nonextensivity of the system, which is essentially a result of the long-range Coulombic interactions of the charged particles in the plasma. The nonextensivity of the system is determined by the spectral index $q$ and may lead to positive or negative $\omega_{i}$ in Eq. (23). Therefore, depending on the nonextensivity of the plasma, both damped and growing acoustic modes may occur in a pair plasma.
## 4 Discussion
### 4.1 Dispersion relation
The solutions (21) and (23) describe the acoustic-like modes in a nonextensive electron-positron plasma or pair-ion plasma in the long-wavelength limit $k\lambda_{D}\ll 1$. In Fig. 2(a) we have plotted the dispersion relation of the acoustic modes for some values of the nonextensivity index $q$. In this graph, the solid curve corresponds to the extensive limit $q=1$ and the other curves show deviations from the Maxwellian limit. It is seen that for a given wavelength, the phase velocity of the acoustic modes increases as the value of $q$ decreases (a short numerical illustration of this dependence is given below).
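As an aside (not part of the original analysis), the $q$-dependence just described is easy to spot-check by evaluating Eq. (21) directly. A minimal Python sketch, using the representative values $k\lambda_{D}=0.1$ and $\sigma=0.9$ assumed in the figures, shows the normalized frequency, and hence the phase velocity, rising as $q$ decreases toward $1/3$:

```python
import math

def omega_r_over_omega_p(q, k_lambda_D=0.1, sigma=0.9):
    """Normalized real frequency of the acoustic branch, Eq. (21)."""
    denom = (k_lambda_D ** 2) * (1.0 + 1.0 / sigma) + 0.5 * (1.0 + q)
    bracket = 0.5 * (1.0 + 1.0 / sigma) / denom + 3.0 / (3.0 * q - 1.0) * (1.0 + sigma)
    return k_lambda_D * math.sqrt(bracket)

k_lambda_D = 0.1
for q in (0.4, 0.6, 0.8, 1.0, 1.5, 2.0):
    w = omega_r_over_omega_p(q, k_lambda_D)
    # Normalized phase velocity: v_phi / (omega_p lambda_D) = (omega_r / omega_p) / (k lambda_D)
    print(f"q = {q:4.2f}   omega_r/omega_p = {w:.4f}   v_phi/(omega_p lambda_D) = {w / k_lambda_D:.3f}")
```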
The physical description can be discussed in the context of nonextensive statistics as follows. As mentioned earlier, the $q$-distribution function with $q<1$, compared with the Maxwellian one ($q=1$), describes systems with more superthermal particles, i.e., particles with speeds faster than the thermal speed $v_{th}=\sqrt{\frac{2k_{B}T}{m}}$ (superextensivity). On the other hand, the $q$-distribution with $q>1$ is suitable for describing systems containing a large number of low-speed particles (subextensivity). However, because of the long-range nature of Coulombic interactions in plasma environments and the presence of many superthermal particles in such systems, confirmed by many astrophysical measurements (Montgomery et al., 1968; Feldman et al., 1975; Maksimovic et al., 1997; Zouganelis, 2008), a $q$-distribution with $q<1$ is strongly suggested for real plasma systems, i.e., superthermal plasmas. It is clear that in a plasma with more superthermal particles ($q<1$) the phase velocity of the acoustic-like modes should be larger than in the case lacking superthermal particles ($q>1$), in agreement with our results here. In addition, we have illustrated the effect of temperature asymmetry, via $\sigma$, on the dispersion relation of the acoustic modes in a pair plasma in Fig. 2(b). There, the solid curve indicates the case in which the whole plasma is in a common thermal state with $T_{-}=T_{+}$, signifying a temperature-symmetric pair plasma, and the other curves show deviations from this symmetric case. We see that the temperature asymmetry reduces the phase velocity of the acoustic modes in a pair plasma. However, our kinetic model confirms that the acoustic-like modes are possible in both symmetric and asymmetric pair plasmas, apart from a small shift in phase velocity. We recall that in this work we have specialized our study to the low-frequency band in which $v_{th,+}<v_{\phi}<v_{th,-}$. The Cauchy principal value of Eq. (10) is then evaluated by an expansion in velocity of the form of Eq. (16). Thus, our calculations in this frequency band lead to the acoustic modes and not to the Langmuir waves. On the other hand, considering a high-frequency band in which the phase velocity of the wave is much larger than the thermal velocity of the particles ($v_{\phi}\gg v_{th}$) leads to the Langmuir-type waves, as studied in Ref. (Saberian & Esfandyari-Kalejahi, 2013). There, the dispersion relation for the Langmuir waves is given by $\omega_{r}^{2}=\omega_{p}^{2}[1+3(k\lambda_{D})^{2}\frac{2}{3q-1}].$ (26) For comparison of the acoustic modes and the Langmuir waves in a pair plasma with $T_{+}=T_{-}$, we have depicted both branches in Fig. 3. From this graph, we see explicitly that the acoustic waves belong to a low-frequency band whose frequency tends to zero in the limit $k\rightarrow 0$, while the Langmuir waves occur at high frequencies above $\omega_{p}$. Moreover, the experimental data presented by Oohara _et al._ (Oohara et al., 2005) confirm the possibility of acoustic-like modes in a pair plasma, which is compatible with our results here.
### 4.2 Landau damping and unstable modes
In Fig. 4 we have plotted the ratio $\omega_{i}/\omega_{r}$ with respect to the nonextensivity index $q$ for all allowed values of $q<1$ (superextensivity) in the long-wavelength limit (e.g., $k\lambda_{D}=0.1$). It is seen that both damped ($\omega_{i}<0$) and growing ($\omega_{i}>0$) acoustic-like modes are predicted in a nonextensive pair plasma with $q<1$. Our numerical analysis shows that in the $q$-region $0.34\lesssim q\lesssim 0.6$ the acoustic modes are unstable, since the frequencies have positive imaginary parts and the associated modes therefore grow in time (cf. Eq. (7)).
The mechanism which leads to this instability may be explained as follows. As we expressed earlier, the $q$-nonextensive distribution with $q<1$ describes a system with a large number of superthermal particles. Thus, our solution of the Vlasov and Poisson equations with small values of $q<1$ describes an evolution that starts from a stationary state with a large fraction of superthermal particles. The acoustic-like waves may gain energy from these superthermal particles, resulting in waves that grow in time. In other words, this instability arises from a stationary state which describes a superthermal plasma; in fact, we have obtained a solution for acoustic-like modes in which the stationary state of the plasma is a non-equilibrium distribution. However, our results have the flexibility to reduce to the equilibrium solutions in the limiting case $q\rightarrow 1$, which corresponds to a Maxwellian distribution. Furthermore, the acoustic-like modes undergo Landau damping in the $q$-region $0.6\lesssim q\lesssim 0.71$, because the frequencies have negative imaginary parts for these degrees of nonextensivity (see Fig. 4). Landau damping is a resonance phenomenon between the plasma particles and the wave, involving particles moving with nearly the phase velocity of the wave (Landau, 1946; Krall & Trivelpiece, 1973). Since the $q$-distribution is a decreasing function of $u$, there are more particles moving slightly slower than the wave than particles moving slightly faster; as the slower particles are accelerated by the wave, they drain energy from it, and the wave damps. Our analysis also shows that beyond $q=0.71$, the curve in Fig. 4 rises to positive values over a small interval of $q$ and then returns to negative values. The fluctuation of $\omega_{i}$ between positive and negative values continues, increasingly rapidly, up to the limiting case $q\rightarrow 1$. In fact, the curve in Fig. 4 does not show smooth behavior for $0.71<q<1$ and the analysis breaks down there, up to the extensive limit $q\rightarrow 1$, where our solutions reduce smoothly to those of a Maxwellian pair plasma given in Eqs. (24) and (25). This non-smooth behavior is due to the terms $\Gamma(\frac{1}{1-q})$ and $\Gamma(\frac{1}{1-q}-\frac{1}{2})$ entering our formalism through $L_{q}$. Indeed, this behavior is a mathematical consequence and has no physical justification. Therefore, we have analyzed the problem in a well-defined interval of $q$, i.e., $1/3<q<0.71$, as shown in Fig. 4. We can also investigate the resonance between the plasma particles and the acoustic modes for values of $q>1$ (subextensivity). In Fig. 5, the ratio $\omega_{i}/\omega_{r}$ is plotted as a function of the nonextensivity index $q$ for values of $q>1$ at a typical long wavelength ($k\lambda_{D}=0.1$). From this graph, it is seen that the acoustic-like modes exhibit only (Landau) damping and no growth for these degrees of nonextensivity. Furthermore, the damping rate is relatively weak in this $q$-region, in comparison with the case of a superthermal plasma ($q<1$). The reason is that the number of particles participating in the resonance with the wave is small for a stationary state with $q>1$. Strictly speaking, the slope of the velocity $q$-distribution function $f_{0\alpha}(u)$ given in Eq. (13) increases with $q$, and there is even a thermal cutoff in the case of $q>1$ [see Fig. 1(b)]. This corresponds to a weaker resonance with the wave, in comparison with the case $q<1$.
Our analysis reveals that the acoustic-like modes are unstable in the $q$-region $0.34\lesssim q\lesssim 0.6$ (the highly superthermal $q$-region), heavily damped in the $q$-region $0.6\lesssim q\lesssim 0.71$ (the less superthermal $q$-region) and, finally, relatively weakly damped for values of $q>1$ (the subextensive region). In Fig. 6, the damping and growth rates are plotted as functions of the wave number for some values of the nonextensive index $q$, for the three cases of heavily damped modes [Fig. 6(a)], weakly damped modes [Fig. 6(b)] and growing unstable modes [Fig. 6(c)]. We see that for waves with longer wavelengths the rate of damping (or growth) becomes weaker. Moreover, our numerical analysis shows that in a pair plasma the acoustic-like modes have maximum damping in the vicinity of $q=0.69$ [see Fig. 6(a)], and maximum growth when the nonextensivity is in the vicinity of $q=0.55$ [see Fig. 6(c)]. In addition, we have included the Maxwellian limit ($q=1$) in Fig. 6(b), which emphasizes that the acoustic-like modes in an equilibrium pair plasma are merely Landau-damped waves. To complete our discussion, we have examined the effect of temperature asymmetry, controlled by $\sigma$, on the Landau damping of the acoustic-like modes in a pair plasma, as plotted in Fig. 7. It is observed that the temperature asymmetry in a pure pair plasma decreases the Landau damping. In other words, for a fixed value of $q$ and at a given wavelength, the Landau damping of the acoustic waves is maximum when full symmetry in the temperatures of the species is established, i.e., when $T_{-}=T_{+}$.
## 5 Conclusions
In this paper, we have studied the acoustic-like modes in a collisionless and magnetic-field-free pair plasma on the basis of nonextensive statistics. We have used a kinetic-theory model, employing the Vlasov and Poisson equations, to obtain the dielectric response function of the pair plasma for electrostatic waves. Using the dielectric function, we have investigated the acoustic-like modes whose phase speed lies between the thermal velocities of the species. The resulting dispersion relation is compatible with the acoustic branch of the experimental data presented by Oohara _et al._ (Oohara et al., 2005), in which the electrostatic waves were examined in a pure pair-ion plasma. It has been shown that the phase velocity of the acoustic modes increases as the nonextensivity index $q$ decreases, indicating a plasma with a large population of superthermal particles. Our kinetic model confirms the possibility of acoustic modes in both temperature-asymmetric and temperature-symmetric pair plasmas. However, it is found that the temperature asymmetry in a pair plasma reduces the phase velocity of the acoustic modes. Furthermore, depending on the degree of nonextensivity of the plasma, both damped and unstable acoustic modes are predicted in a collisionless pair plasma, arising from a resonance phenomenon between the wave and the nonthermal particles of the plasma. In the case of a superthermal plasma with $q<1$ (superextensivity), heavily damped and growing unstable modes are predicted, while in the case $q>1$ (subextensivity) the acoustic-like modes exhibit only damping and no growth. The mechanism that leads to the damping is the same as that presented by Landau (Landau, 1946), arising from a decreasing velocity distribution function, while the mechanism of instability lies at the heart of the nonextensive formalism.
We have postulated that this instability can be associated with the presence of superthermal particles (in the case $q<1$), in the sense that during the resonance they can give energy to the wave, resulting in waves that grow in time. This instability disappears in the case $q>1$, which describes a plasma with plenty of low-speed particles. Additionally, the damping rate is relatively weak in the case $q>1$, in comparison with the case of a superthermal plasma ($q<1$) with heavily damped modes. The reason is that the number of particles participating in the resonance with the wave is small for a stationary state with $q>1$. Moreover, our analysis indicates that the temperature asymmetry in a pure pair plasma decreases the Landau damping of the acoustic-like modes. We emphasize that in the present work we have considered an inhomogeneous plasma in a nonequilibrium thermal state by adopting the $q$-nonextensive distribution for the stationary state of the plasma. However, our solutions reduce to the ones for a homogeneous and equilibrium pair plasma in the extensive limit $q\rightarrow 1$, in which the Boltzmann-Gibbs statistics describe the plasma state. It is hoped that the present study will be useful for the explanation of the intriguing low-frequency modes in a pure pair plasma, which are outside the scope of plasma fluid theory and Boltzmann-Gibbs statistics.
## References
* Abe (1999) Abe, S. 1999, Physica (Amsterdam), 269A, 403 * Abe & Okamato (2001) Abe, S., & Okamato, Y. 2001, _Nonextensive Statistical Mechanics and Its Applications, Series Lecture Notes in Physics_ , Springer-Verlag, Heidelberg * Abramowitz & Stegun (1972) Abramowitz, M. & Stegun, I.A. 1972, _Handbook of Mathematical Functions_ , Dover, New York, 257 * Amour et al. (2012) Amour, R., Tribeche, M., & Shukla, P.K. 2012, Ap&SS, 338, 287 * Amoretti et al. (2003) Amoretti, M., et al. 2003, Phys. Rev. Lett., 91, 055001 * Arimitsu & Arimitsu (2000) Arimitsu, T. & Arimitsu, N. 2000, Phys. Rev. E, 61, 3237 * Boghosian (1996) Boghosian, B.M. 1996, Phys. Rev. E, 53, 4754 * Beck (2001) Beck, C. 2001, Phys. Rev. Lett., 87, 180601 * Beck et al. (2001) Beck, C., Lewis, G.S., Swinney, H.L. 2001, Phys. Rev. E, 63, 035303 * Begelman et al. (1984) Begelman, M.C., Blandford, R.D., & Rees, M.D. 1984, Rev. Mod. Phys., 56, 255 * Berezhiani et al. (1993) Berezhiani, V.I., Skarka, V., & Mahajan, S. 1993, Phys. Rev. E, 48, R3252 * Boehmer et al. (1995) Boehmer, H., Adams, M., & Rynn, N. 1995, Phys. Plasmas, 2, 4369 * Cairns et al. (1995) Cairns, R.A., Mamum, A.A., Bingham, R., Bostrom, R., Dendy, R.O., Nairn, C.M.C., & Shukla, P.K. 1995, Geophys. Res. Lett., 22, 2709 * Chen et al. (2009) Chen, H., et al. 2009, Phys. Rev. Lett., 102, 105001 * Curado (1999) Curado, E.M.F. 1999, Braz. J. Phys., 29, 36 * Daniel & Tajima (1998) Daniel, J., & Tajima, T. 1998, ApJ, 498, 296 * Defler & Simonen (1969) Defler, H., & Simonen, T.C. 1969, Phys. Fluids, 12, 260 * Dovner et al. (1994) Dovner, P.O., Eriksson, A.I., Bostrom, R., & Holback, B. 1994, Geophys. Res. Lett., 21, 1827 * Du (2004) Du, J.L. 2004, Phys. Lett. A, 329, 262 * Dubouloz et al. (1991) Dubouloz, N., Pottelette, R., Malingre, M. & Treumann, R.A. 1991, Geophys. Res. Lett., 18, 155 * El-Tantawy et al. (2012) El-Tantawy, S.A., Tribeche, M., & Moslem, W.M. 2012, Phys. Plasmas, 19, 032104 * Feldman et al. (1975) Feldman, W.C., Asbridge, J.R., Bame, S.J., Montgomery, M.D., & Gary, S.P. 1975, J. Geophys. Res., 80, 4181 * Gahn et al. (2000) Gahn, C., et al. 2000, Appl. Phys.
Lett., 77, 2662 * Gedalin et al. (1998) Gedalin, M., Melrose, D.B., & Gruman, E. 1998, Phys. Rev. E, 57, 3399 * Gibson et al. (1960) Gibson, G., Jordan, W.C., & Lauer, E.J. 1960, Phys. Rev. Lett., 5, 141 * Gell-Mann & Tsallis (2004) Gell-Mann, M., & Tsallis, C. 2004, _Nonextensive Entropy - Interdisciplinary Applications_ , Oxford University Press, New York * Gibbons et al. (1983) Gibbons,G.W., Hawking, S.W., & Siklos, S. 1983, _The Very Early Universe_ , Cambridge University Press, Cambridge, UK * Goldreich & Julian (1969) Goldreich, P., & Julian, W.H. 1969, ApJ, 157, 869 * Greaves et al. (1994) Greaves, R.G., Tinkle, M.D. & Surko, C.M. 1994, Phys. Plasmas, 1, 1439 * Greaves & Surko (1995) Greaves, R.G., & Surko, C.M. 1995, Phys. Rev. Lett., 72, 3846 * Greaves & Surko (2001) Greaves, R.G., Surko, C.M. 2001, AIP Conf. Proc., 606, 10-23; doi: http://dx.doi.org/10.1063/1.1454263. * Huang & Driscoll (1994) Huang, X.-P. & Driscoll, C.F. 1994, Phys. Rev. Lett., 72, 2187 * Helander & Ward (2003) Helander, P., & Ward, D.J. 2003. Phys. Rev. Lett., 90, 135004 * Iwamoto (1993) Iwamoto, N. 1993, 47, 604 * Kakad et al. (2007) Kakad, A.P., Singh, S.V., Reddy, R.V., Lakhina, G.S., Tagare, S.G., & Verheest, F. 2007, Phys. Plasmas, 14, 052305 * Kaladze et al. (2012) Kaladze, T., Mahmood, S., & Ur-Rehman, H. 2012 Phys. Scripta, 86, 035506 * Keston et al. (2003) Keston, D.A., Laing, E.W., & Diver, D.A. 2003 Phys. Rev. E, 67, 036403 * Krall & Trivelpiece (1973) Krall, N.A., & Trivelpiece A.W. 1973, _Principles of Plasma Physics_ , McGraw-Hill, Kogakusha * Laing & Diver (2006) Laing, E.W., & Diver, D.A. 2006, Phys. Plasmas, 13, 092115 * Landau (1946) Landau, L.D. 1946, J. Phys. USSR, 10, 25 * Landsberg (1984) Landsberg, P.T. 1984, J. Stat. Phys., 35, 159 * Lavagno et al. (1998) Lavagno, A., Kaniadakis, G., Rego-Monteiro, M., Quarati, P., & Tsallis, C. 1998, Astrophys. Lett. Commun., 35/6, 449 * Leubner (2002) Leubner, M.P. 2002, Ap&SS, 282, 573 * Liang et al. (1998) Liang, E.P., Wilks, S.C., Tabak, M. 1998, Phys. Rev. Lett., 81, 4887 * Lima et al. (2000) Lima, J.A.S., Silva, R., & Janilo Santos 2000, Phys. Rev. E, 61, 3260 * Lima et al. (2001) Lima, J.A.S., Silva, R., & Plastino, A.R. 2001, Phys. Rev. Lett., 86, 2938 * Liu et al. (1994) Liu, J.M., De Groot, J.S., Matte, J.P., Johnston, T.W. & Drake, R.P. 1994, Phys. Rev. Lett., 72, 2717 * Livadiotis & McComas (2009) Livadiotis, G., & McComas, D.J. 2009, J. Geophys. Res., 114, A11105 * Liyan & Jiulin (2008) Liyan, L., & Jiulin, D. 2008, Physica A, 387, 4821 * Maksimovic et al. (1997) Maksimovic, M., Pierrard, V., & Riley, P. 1997, Geophys. Res. Lett., 24, 1151 * Maksimovic et al. (1997) Maksimovic, M., Pierrard, V., & Lemaire, J.F. 1997, A&A, 324, 725 * Max & Perkins (1972) Max, C., & Perkins, F.W. 1972, Phys. Rev. Lett., 29, 1731 * Michel (1982) Michel, F.C. 1982, Rev. Mod. Phys. 54, 1-66 * Michel (1991) Michel, F.C. 1991, _Theory of Neutron Star Magnetospheres_ , University of Chicago Press, Chicago * Miller & Witta (1987) Miller, H.R., & Witta, P.J. 1987, _in Active Galactic Nuclei_ , Springer, Berlin, 202 * Misner et al. (1973) Misner, W., Thorne, K.S., & Wheeler, J.A. 1973, _Gravitation_ , Freeman, San Francisco, 763 * Montgomery et al. (1968) Montgomery, M.D., Bame, S.J., & Hundhausen, A.J. 1968, J. Geophys. Res., 73, 4999 * Muoz (2004) Muoz, V. 2004, Phys. Plasmas, 11, 3497 * Oohara & Hatakeyama (2003) Oohara, W., & Hatakeyama, R. 2003, Phys. Rev. Lett., 91, 205005 * Oohara et al. (2005) Oohara, W., Date, D., & Hatakeyama, R. 2005, Phys. Rev. 
Lett., 95, 175003 * Orsoz et al. (1997) Orsoz, J.R., Remillard, R.A., Bailyn, C.D., & McClintock, J. E. 1997, ApJ, 478, L83 * Pedersen et al. (2003) Pedersen, T.S., Boozer, A.H., Dorland, W., Kremer, J.P., & Schmitt, R. 2003, J. Phys. B: At. Mol. Opt. Phys., 36, 1029 * Pedersen et al. (2004) Pedersen, T.S., Boozer, A.H., Kremer, J.P., & Lefrancois, R. 2004, Phys. Plasmas, 11, 2377 * Plastino (2004) Plastino, A. 2004, Physica A, 344, 608 * Reyni (1955) Reyni, A. 1955, Acta Math. Hungaria, 6, 285 * Saberian & Esfandyari-Kalejahi (2013) Saberian, E., & Esfandyari-Kalejahi, A. 2013, Phys. Rev. E, 87, 053112 * Silva et al. (1998) Silva Jr., R., Plastino, A.R., & Lima, J.A.S. 1998, Phys. Lett. A, 249, 401 * Silva et al. (2005) Silva, R., Alcaniz, J.S., & Lima, J.A.S. 2005, Physica A, 356, 509 * Sturrock (1971) Sturrock, P. A. 1971, ApJ, 164, 529 * Surko et al. (1989) Surko, C.M., Leventhal, M., & Passner, A. 1989, Phys. Rev. Lett., 62, 901 * Surko & Murphy (1990) Surko, C.M., & Murphy, T. 1990, Phys. Fluids B, 2, 1372 * Surko & Greaves (2004) Surko, C.M., & Greaves, R.G. 2004, Phys. Plasmas, 11, 2333 * Tandberg-Hansen & Emslie (1988) Tandberg-Hansen, E., & Emslie, A.G. 1988, _The physics of solar flares_ , Cambridge University Press, Cambridge, 124 * Trivelpiece (1972) Trivelpiece, A.W. 1972, Comments Plasma Phys. Controlled Fusion, 1, 57 * Tsallis (1988) Tsallis, C. 1988, J. Stat. Phys., 52, 479 * Tsallis (1994) Tsallis, C. 1994, Phys. Lett. A, 195, 329 * Tsallis & de Souza (1997) Tsallis, C., & de Souza, A.M.C. 1997, Phys. Lett. A, 235, 444 * Tsallis et al. (1995) Tsallis, C., Sa Barreto, F.C., & Loh, E.D. 1995, Phys. Rev. E, 52, 1447 * Tsallis (1995) Tsallis, C. 1995, Chaos. Soliton. Fract., 6, 539 * Tsallis (1999) Tsallis, C. 1999, Braz. J. Phys., 29, 1 * Tsallis (2009) Tsallis, C. 2009, _Introduction to Nonextensive Statistical Mechanics - Approaching a Complex World_ , Springer, New York * Tsytovich & Wharton (1978) Tsytovich, V. & Wharton, C.B. 1978, Comments Plasma Phys. Controlled Fusion, 4, 91 * Valentini (2005) Valentini, F. 2005, Phys. Plasmas, 12, 072106 * Vasyliunas (1968) Vasyliunas, V.M. 1968, J. Geophys. Res., 73, 2839 * Verheest (1996) Verheest, F. 1996, Phys. Lett. A, 213, 177 * Verheest (2005) Verheest, F. 2005, Nonlinear Processes Geophys., 12, 569 * Vranjes & Poedts (2005) Vranjes, J., & Poedts, S. 2005, Plasma Sources Sci. Technol., 14, 485 * Wardle et al. (1998) Wardle, J.F.C., Homan, D.C., Ojha, R., & Roberts, D.H. 1998, Nature, 395, 457 * Watanabe & Taniuti (1977) Watanabe, K., & Taniuti, T. 1977, J. Phys. Soc. Jpn., 43, 1819 * Zank & Greaves (1995) Zank, G.P. & Greaves, R.G. 1995, Phys. Rev. E, 51, 6079 * Zouganelis (2008) Zouganelis, I. 2008, J. Geophys. Res., 113, A08111
Figure 1: The nonthermal behavior of the $q$-nonextensive distribution function and its comparison with the Maxwellian one (solid curve): (a) Superextensive distribution with $q<1$, which behaves like the $\kappa$-distributions for superthermal plasmas. In this case, the particles are distributed over a wider spectrum of velocities than in a Maxwellian distribution. (b) Subextensive distribution with $q>1$, which is suitable for describing systems containing a large number of low-speed particles. In this case, there is a thermal cutoff which limits the velocity of the particles.
Figure 2: The linear dispersion relation of acoustic-like modes in a pair plasma.
(a) The nonextensivity effect on the dispersion relation with $\sigma=0.9$, where the solid curve corresponds to the extensive limit ($q=1$) and the other curves show deviations from a Maxwellian pair plasma. (b) The effect of temperature asymmetry on the dispersion relation with $q=0.7$, where the solid curve corresponds to a temperature-symmetric pair plasma.
Figure 3: Comparison of the acoustic-like modes and the Langmuir waves in a pair plasma with $T_{+}=T_{-}$. The acoustic waves belong to a low-frequency band which tends to zero in the limit $k\rightarrow 0$, while the Langmuir waves occur at high frequencies above $\omega_{p}$.
Figure 4: The imaginary part of the frequency with respect to the nonextensivity index for $q<1$, showing the $q$-regions of the growing and heavily damped acoustic-like modes.
Figure 5: The imaginary part of the frequency with respect to the nonextensivity parameter for $q>1$. For these values of the nonextensivity index $q$, the acoustic-like modes have only damping and no growth.
Figure 6: The damping (growth) rate with respect to the wave number for (a) the heavily damped modes in the $q$-region $0.6\lesssim q\lesssim 0.71$, (b) the relatively weakly damped modes in the region $q>1$, and (c) the growing acoustic modes in the $q$-region $0.34\lesssim q\lesssim 0.6$, with $\sigma=0.9$. We have included the Maxwellian limit ($q=1$) in our results, which emphasizes that the acoustic-like modes in an equilibrium pair plasma are merely Landau-damped waves.
Figure 7: The effect of temperature asymmetry on the Landau damping of the acoustic-like modes, indicating that the temperature asymmetry in a pure pair plasma decreases the Landau damping rate.
# SaiNet: Stereo aware inpainting behind objects with generative networks
Violeta Menéndez González1,2 <EMAIL_ADDRESS>Andrew Gilbert1 <EMAIL_ADDRESS>Graeme Phillipson2 <EMAIL_ADDRESS>Stephen Jolly2 <EMAIL_ADDRESS>Simon Hadfield1 <EMAIL_ADDRESS>1CVSSP, University of Surrey 2BBC R&D
###### Abstract
In this work, we present an end-to-end network for stereo-consistent image inpainting with the objective of inpainting large missing regions behind objects. The proposed model consists of an edge-guided UNet-like network using Partial Convolutions. We enforce multi-view stereo consistency by introducing a disparity loss. More importantly, we develop a training scheme where the model is learned from realistic stereo masks representing object occlusions, instead of the more common random masks. The technique is trained in a supervised way. Our evaluation shows competitive results compared to previous state-of-the-art techniques.
Figure 1: SaiNet: Inpainting behind objects using geometrically meaningful masks.
## 1 Introduction
Image inpainting is the task of filling in the missing regions of an image with perceptually plausible content. It has many vital applications in computer vision and image processing: the removal of unwanted objects (e.g. superimposed text), image and film restoration (e.g. scratches or cracks), image completion (e.g. dis-occlusions), cinema post-production, among others. Our work focuses on the under-explored problem of stereo-inpainting. This lends itself to applications that need to see behind objects to generate reasonable image fillers, for example in novel view synthesis of scenes, object removal in stereoscopic video, 3D animation of still images, and dis-occlusion in virtual reality environments. This paper focuses on applications of inpainting which may improve view synthesis in media production; this requires an approach that can take advantage of multiple cameras, but does not necessarily have the computational capacity of other novel-view approaches that reconstruct a whole 3D scene representation. In addition, we want an approach that can generalise well to unseen scenes and generate creative content without human input; we therefore apply CNNs that can plausibly propagate structures and textures on their own. In previous works, traditional monocular techniques tried to achieve inpainting by propagating local image structures and textures or copying patches from the known areas of the image. This worked well for small or narrow regions, but it was prone to generating visual inconsistencies in larger gaps. Early stereo techniques similarly attempted to generate consistent output by mechanically warping the available data from the other views [30], or by completing the disparity images [19, 20], and then proceeding as in the monocular inpainting approaches. However, in recent years, Deep Learning (DL) techniques have taken advantage of large-scale training data to create more semantically meaningful inpainting outputs. Some works focused on learning embeddings of the images [22, 10], while others developed different types of convolutional layers to be able to handle more realistic irregular holes [15, 33]. However, the only DL techniques that address the stereo inpainting problem [4, 17, 16] to date have focused on artificial or unrealistic inpainting regions, or do not enforce multi-camera consistency.
In contrast, our approach focuses on inpainting one target image on geometrically meaningful masks while using the information available from the other viewpoint. Our network architecture is inspired by the 3D photography generation work of Shih _et al_. [27] using a Partial Convolution [15] architecture, which optimises the use of irregular masks at random locations. Furthermore, we improve the inpainting task by adding colour edge information following the idea by Nazeri _et al_. [21] in their work with EdgeConnect. More importantly, we propose a novel stereo inpainting training mechanism. Instead of using random image masks, which usually represent the physical damage a picture can suffer, we use meaningful and geometrically-consistent object masks that are not necessarily bounded within the image. We extend the 2D context/synthesis region approach proposed by [27] to use a bank of geometrically-consistent 3D object masks. Ground-truth training examples are generated from random virtual 3D objects placed at random locations in the foreground of the scene, allowing us to have a fully self-supervised stereo training approach. This data augmentation process addresses both the significance of masked regions and the stereo data scarcity problem. Furthermore, the resulting model is computationally efficient and able to generalise to previously unknown scenes and occluding objects. In summary, the contributions of this paper are: * • A novel stereo-aware structure-guided inpainting model suitable for efficient novel-view synthesis and free viewpoint VR applications. * • First inpainting work to take full advantage of stereo-context with geometrically-consistent object masks. * • A novel stereo consistency loss attempting to ensure that inpainting results are consistent with disoccluded information present in other views. ## 2 Background #### Learnable inpainting With the advancements of Deep Learning and the availability of large-scale training data, deep Convolutional Neural Networks (CNNs) became a popular tool for image prediction. Initial CNN models attempted to perform image inpainting by using feature learning with Denoising Autoencoders [32], translation variant interpolation [23], or exploiting the shape of the masks [14]. Yet all these methods were only applicable to tiny and thin masks and lacked semantic understanding of the scenes. With the addition of Generative Adversarial Networks (GANs) [8], CNN architectures were able to extract meaningful semantics from images and generate novel content. Pathak _et al_. [22] used an encoder-decoder architecture to create a latent feature representation of the image, which captured both the semantics and appearance of structures, but struggled to maintain global consistency. Iizuka _et al_. [10] proposed using both local and global context discriminators, which helped the local consistency of generated patches and still held image coherence in the whole. Yu _et al_. [34] added a contextual attention layer to aid the modelling of long-term correlations between the distant information and the hole regions. Traditional vanilla convolutions depend on the hole initialisation values, which usually leads to visual artefacts. Liu _et al_. [15] proposed the use of Partial Convolutions: masked and re-normalised convolutional filters conditioned only on valid pixels. Yu _et al_. [33] extended this idea with Gated Convolutions by generalising to a learnable dynamic features selection mechanism. 
Previous works focused on centred rectangular holes, which may cause methods to overfit to this kind of mask. Masked convolutions allowed models to handle more realistic irregular holes. Liu _et al_. [15] studied the effects when the holes are in contact with the image border and created a large benchmark of irregular masks with varying sizes and locations. Many of these methods still fail to reconstruct reasonable structures and usually over-smooth surfaces. Some approaches [29, 21, 24] tackle this problem by trying first to recover structural information to guide the inpainting of fine details and textures. With a two-stage adversarial model, EdgeConnect [21] first recovers colour edges, while StructureFlow [24] chooses edge-preserved smooth images as the global structure information.
#### Stereo Consistent Inpainting
There is little research on stereoscopic image inpainting in the framework of deep learning. Following a similar trajectory to monocular approaches, traditional patch-based methods [19, 20, 30] find example patches from the available parts of the image and fill the holes while applying consistency constraints. Wang _et al_. [30] simultaneously inpaint colour and depth images using a greedy segmentation-based approach, first inpainting partial occlusions using warping, and then total occlusions with a depth-assisted texture-synthesis technique. Morse _et al_. [19] extend PatchMatch [1] to cross-image searching and matching without explicit warping, using a completed disparity map to guide the colour inpainting. Multi-view inpainting techniques such as Gilbert _et al_. [7] create a dictionary of patches from multiple available viewpoints that are then coherently selected and combined to recover the missing region.
Figure 2: Model overview: Edge guidance, stereo context and disparity loss.
The first stereo inpainting approach using deep learning was proposed by Luo _et al_. [16]. They use a double reprojection technique to generate image occlusion masks from several new viewpoints, then apply Partial Convolutions [15] to inpaint the holes, and aggregate the results in a layered depth image. This technique shows good visual results on their own Keystone B&W dataset, but it does not take multi-view consistency into account and relies on depth maps to reconstruct the image. Other techniques, like Chen _et al_. [4], take advantage of both left and right views. Using an extension of Context Encoder [22], they inpaint the left and right views simultaneously, encoding both views and aggregating them at the feature level. In addition, they introduced a local consistency loss which helps preserve the inpainting consistency at a pixel level. They applied this model to inpainting regular holes at the centre of the image. Ma _et al_. [17] use a similar architecture for two different tasks: reconstructing missing objects in one view that are available in the other view, and coherently inpainting the same holes in both views, similarly to [4]. To do this, they use two different stereo consistency losses, a warping-based consistency loss and a stereo-matching, PSMNet-based [3] disparity-reconstruction loss. However, because of a lack of ground-truth data for object removal, they only train their model on corruption-restoration data. In contrast, our approach uses realistic and geometrically consistent foreground object masks to explore inpainting behind objects in stereo scenes.
## 3 Approach
### 3.1 Model overview
An overall visualisation of our proposed model can be seen in Figure 2.
It consists of a deep neural network that follows a UNet-like architecture [25] with partial convolutions [15]. The network takes as input the context and synthesis areas of an object, where the context area is the background surrounding the object and the synthesis area is the region behind the object (the hole) that the network will inpaint. In addition, the colour edges are fed into the network for structural guidance. Finally, to enrich the inpainting and make the network aware of the stereo view, a stereo-context image is added to the input.
### 3.2 Object occlusion regions
An essential part of image inpainting is specifying the type of missing regions that the model needs to handle. Most previous approaches to inpainting have focused on randomly shaped inpainting masks of limited complexity. This is reasonable when dealing with image degradations such as scratches or removing regions containing nuisance objects in 2D. However, for stereo inpainting, where we wish to maintain crisp object boundaries, this approach no longer makes sense: we need inpainting masks that represent real image occlusions. Therefore we propose a self-supervised approach where stereoscopic scenes are augmented with geometrically valid inpainting masks, based on a virtual 3D occluding object. This object is hallucinated in a stereo-consistent way over both images, which allows us to collect “behind the object” ground-truth data. As such, the network learns to fill in geometrically-meaningful holes with background information, which can then be applied to actual object occlusions in novel view synthesis applications.
Figure 3: Data generation process. A collection of context/synthesis regions is created by extracting them from object boundaries in images from the COCO dataset. These are then randomly sampled, warped, and pasted onto different images, forming the training dataset of ground-truth context/synthesis regions.
We generate these geometrically-valid masks from an unrelated dataset of natural scenes with either ground-truth depth or object segmentations. As summarised in Algorithm 1, we first detect depth discontinuities [27] along object boundaries and generate context and synthesis regions by propagating this boundary towards the background (context) and towards the foreground object area (synthesis) (see Fig. 3 for a visualisation of these areas). This way, we create a geometrically-meaningful bank of masks for inpainting.
Algorithm 1: Generation of geometrically-valid masks
Input: $\mathcal{N}=\\{\mathbf{I}:\mathbf{I}\text{ is a natural image}\\}$
Output: $\mathcal{M}=\\{(\mathbf{C}^{obj},\mathbf{S}^{obj})\mid\forall obj\in\mathbf{I},\forall\mathbf{I}\in\mathcal{N}\\}$
for $\mathbf{I}$ in $\mathcal{N}$ do
  Find the set of discontinuities $d_{\mathbf{I}}\equiv\\{d^{obj}_{\mathbf{I}}\mid obj\text{ is an object in image }\mathbf{I}\\}$;
  for $d^{obj}_{\mathbf{I}}$ in $d_{\mathbf{I}}$ do
    Propagate the background around $d^{obj}_{\mathbf{I}}$ to generate the context mask $\mathbf{C}^{obj}$;
    Propagate the foreground around $d^{obj}_{\mathbf{I}}$ to generate the synthesis mask $\mathbf{S}^{obj}$;
  end for
end for
These masks are varied and irregular, preventing our model from overfitting to one type of mask. Furthermore, as opposed to most methods, our model does not use the whole image as context for the inpainting process, but only the region close to the object boundaries. Although this reduces the available information that the network can learn from, it allows the network to narrow its attention to the most relevant and meaningful area. However, this poses a more challenging problem, as the context-to-synthesis area ratio is smaller, and the masked regions are not necessarily bounded by context on all sides.
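To make the mask-generation step concrete, the sketch below shows one plausible reading of Algorithm 1 for a single object, assuming a binary foreground mask is already available (e.g. from a depth discontinuity or a segmentation). The helper name and the fixed band width are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def context_synthesis_regions(object_mask: np.ndarray, context_width: int = 16):
    """Split the neighbourhood of an occluding object into a synthesis region
    (the hole behind the object to be inpainted) and a context region
    (a band of background pixels hugging the object boundary).

    object_mask: HxW boolean array, True on the foreground object.
    """
    # Propagate the object boundary outwards into the background: the context band.
    dilated = binary_dilation(object_mask, iterations=context_width)
    context = dilated & ~object_mask
    # Propagate towards the foreground object area: here simply the object footprint.
    synthesis = object_mask.copy()
    return context, synthesis

# Toy usage: a square "object" in a 128x128 frame.
mask = np.zeros((128, 128), dtype=bool)
mask[40:90, 50:100] = True
ctx, syn = context_synthesis_regions(mask)
print("context px:", int(ctx.sum()), " synthesis px:", int(syn.sum()))
```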
### 3.3 Stereo awareness
Our approach aims to make inpainting consistent across views in two different ways. One is by enriching the network with extra available information. The other is by enforcing a disparity loss on the output of the network (as explained in Section 3.4). The main advantage of having two (or more) cameras is the additional information we can extract to make the task of inpainting “unknown” areas easier. For example, some colours or textures may be completely occluded by the object in one view, but still be partially visible from the other view. In this case, the additional input can provide strong cues for the network to inpaint the occluded region. We make our system stereo-aware by providing this extra information as input to the network: the context mask of each object is warped into the additionally available view based on its estimated disparity value (see Algorithm 2). We use PSMNet [3] to estimate this disparity and select the closest depth value to ensure the object is situated in front of all other objects in the scene. In other words, we extract contextual information around the boundary of the occluding object in both views. We then feed this extra context into the network, which learns to use it when filling in the synthesis area. In this way, we aid the inpainting process by enriching the available texture and colour information.
Algorithm 2: Stereo-aware training set generator
Input: $\mathcal{D}=\\{(\mathbf{I}_{L},\mathbf{I}_{R}):\text{stereo pair of images}\\}$, $\mathcal{M}=\\{(\mathbf{C}^{obj},\mathbf{S}^{obj})\mid\forall\text{ object }obj\\}$
Output: $\mathrm{Training\\_set}=\\{(\mathbf{CC}_{L},\mathbf{CS}_{L},\mathbf{E}_{L},\mathbf{CC}_{R})\mid\forall(\mathbf{I}_{L},\mathbf{I}_{R})\in\mathcal{D}\\}$
for $(\mathbf{I}_{L},\mathbf{I}_{R})$ in $\mathcal{D}$ do
  1. Select random context and synthesis masks $\mathbf{C}^{obj},\mathbf{S}^{obj}$ from the $\mathrm{Mask\\_Bank}$;
  2. Select a random position $x,y$ at which to place the object in $\mathbf{I}_{L}$;
  3. Crop image $\mathbf{I}_{L}$ at $x,y$ with mask $\mathbf{C}^{obj}$ to generate the colour context region $\mathbf{CC}_{L}$;
  4. Crop image $\mathbf{I}_{L}$ at $x,y$ with mask $\mathbf{S}^{obj}$ to generate the colour synthesis region $\mathbf{CS}_{L}$;
  5. Generate the edge map $\mathbf{E}_{L}=\operatorname{Canny}(\mathbf{CC}_{L}+\mathbf{CS}_{L})$;
  6. Estimate the depth map $\mathbf{D}_{L}=\operatorname{PSMNet}(\mathbf{I}_{L},\mathbf{I}_{R})$;
  7. Crop image $\mathbf{D}_{L}$ at $x,y$ with mask $\mathbf{S}^{obj}$ to generate the depth synthesis region $\mathbf{DS}_{L}$;
  8. $disp=\max(\mathbf{DS}_{L})$;
  9. Reproject $\mathbf{C}^{obj}$ using the $disp$ value onto $\mathbf{I}_{R}$ and crop to generate the stereo colour context region $\mathbf{CC}_{R}$;
end for
Several methods [21, 24, 29] have shown that structure-guided inpainting performs better at reconstructing high-frequency information accurately. Since image structure is well-represented in its edge mask, superior results can be obtained by conditioning an image inpainting network on edges in the missing regions. For this reason, we feed edge maps generated using the Canny edge detector [2], along with the colour information, as a bias to our network, following a process similar to Nazeri _et al_. [21]. At test time, we estimate the edges using a pre-trained EdgeConnect [21] model.
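The stereo-aware part of the data generation (steps 6-9 of Algorithm 2) amounts to shifting the left-view context mask horizontally by the foreground disparity before cropping the right view. A hedged sketch, assuming rectified views and a precomputed left-view disparity map (for example from PSMNet); the sign convention and helper name are illustrative:

```python
import numpy as np

def stereo_context_crop(right_img, context_mask, synthesis_mask, disparity_left):
    """Reproject the left-view context mask into the right view and crop it there.

    right_img:      HxWx3 right view
    context_mask:   HxW bool, context region C in the left view
    synthesis_mask: HxW bool, synthesis region S (object footprint) in the left view
    disparity_left: HxW float, left-view disparity map
    """
    # Take the largest disparity over the synthesis area so the virtual object
    # sits in front of everything else in the scene (step 8 of Algorithm 2).
    disp = int(round(float(disparity_left[synthesis_mask].max())))
    # For a rectified pair with the left image as reference, a pixel at column x
    # in the left view maps to column x - disp in the right view.
    shifted = np.zeros_like(context_mask)
    if 0 < disp < context_mask.shape[1]:
        shifted[:, :context_mask.shape[1] - disp] = context_mask[:, disp:]
    elif disp <= 0:
        shifted = context_mask.copy()
    stereo_context = right_img * shifted[..., None]
    return stereo_context, shifted
```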
### 3.4 Stereo consistency loss
Inspired by the work of Chen _et al_. [4], we propose a local consistency loss which measures the consistency between the inpainted area in one view and the ground truth in the other view. In this way, we encourage the system to use the stereo context; inpainting not just any perceptually acceptable background, but specifically the one consistent with any partial observations. The loss is illustrated in Fig. 2. We compare a patch $P\left(i\right)$ around every pixel $i$ in the inpainted area $\mathbf{S}\odot\mathbf{I}$ against a patch centred on the corresponding pixel in the other view. $\mathbf{S}$ is the binary mask indicating the synthesis region, $\mathbf{I}$ is the inpainted image, and $\odot$ denotes the Hadamard product. $\displaystyle\mathcal{L}_{disp}$ $\displaystyle=\frac{1}{\left|\mathbf{S}\right|}\sum_{i\in\mathbf{S}\odot\mathbf{I}}\overleftarrow{cost}\left(i\right),$ (1) $\displaystyle\overleftarrow{cost}\left(i\right)$ $\displaystyle=1-\Phi\left(P\left(i\right),P\left(\overleftarrow{W}\left(i\right)\right)\right)$ (2) where $\overleftarrow{W}$ is the warping function corresponding to a change from source to target view, using the disparity estimated by PSMNet [3]. We use the Normalised Cross-Correlation (NCC) as our stereo matching cost ($\Phi$), which works well with back-propagation. $\Phi(X,Y)=\dfrac{\left\|X\odot Y\right\|_{1,1}}{\left\|X\right\|_{F}\left\|Y\right\|_{F}}$ (3) where $\left\|\cdot\right\|_{1,1}$ and $\left\|\cdot\right\|_{F}$ are the entrywise $\ell_{1}$ and Frobenius matrix norms, respectively.
### 3.5 Inpainting losses
In addition to the disparity loss, other per-pixel similarity losses and losses based on deep features are used to enforce perceptually realistic results. First, two per-pixel reconstruction losses are defined over the synthesis and context regions; these losses help guide the inpainting of the missing areas, as well as ensuring that context and synthesis areas are recovered consistently and with smooth boundaries. $\displaystyle\mathcal{L}_{synthesis}$ $\displaystyle=\frac{1}{N_{\mathbf{I}_{gt}}}\left\|\mathbf{S}\odot\left(\mathbf{I}-\mathbf{I}_{gt}\right)\right\|_{1},$ (4) $\displaystyle\mathcal{L}_{context}$ $\displaystyle=\frac{1}{N_{\mathbf{I}_{gt}}}\left\|\mathbf{C}\odot\left(\mathbf{I}-\mathbf{I}_{gt}\right)\right\|_{1}$ (5) where $\mathbf{S}$ and $\mathbf{C}$ are the binary masks indicating synthesis and context regions respectively, $N_{\mathbf{I}_{gt}}$ is the total number of pixels, $\mathbf{I}$ is the inpainted result, and $\mathbf{I}_{gt}$ is the ground truth image. In addition, we include two deep feature losses from Johnson _et al_. [12], based on VGG-16 [28] embeddings, that measure high-level perceptual and semantic differences. Firstly, $\displaystyle\mathcal{L}_{perceptual}=\sum_{p=0}^{P-1}\frac{\left\|\Psi_{p}\left(\mathbf{I}\right)-\Psi_{p}\left(\mathbf{I}_{gt}\right)\right\|_{1}}{N_{\Psi_{p}}}$ (6) where $\Psi_{p}(\cdot)$ is the output of the $p$'th layer of VGG-16 [28], and $N_{\Psi_{p}}$ is the total number of elements in the layer.
Secondly, the style loss is defined as $\displaystyle\mathcal{L}_{style}=\sum_{p=0}^{P-1}\frac{1}{C_{p}C_{p}}\left\|K_{p}\left[\left(\Psi^{\mathbf{I}}_{p}\right)^{\intercal}\Psi^{\mathbf{I}}_{p}-\left(\Psi^{\mathbf{I}_{gt}}_{p}\right)^{\intercal}\Psi^{\mathbf{I}_{gt}}_{p}\right]\right\|_{1}$ (7) where $K_{p}=\frac{1}{C_{p}H_{p}W_{p}}$ is a normalisation factor, and $C_{p},H_{p},W_{p}$ are the number of channels, height, and width of the output $\Psi_{p}(\cdot)$. These perceptual losses encourage the network to create images with similar content and similar feature representations. The style loss ensures that the style of the output images resembles the input in colour, texture, etc. Finally, a total variation loss is used as a smooth regularization: $\mathcal{L}_{tv}=\sum_{\left(i,j\right)\in\mathbf{S}}\frac{\left\|\mathbf{I}(i,j+1)-\mathbf{I}(i,j)\right\|_{1}}{N_{\mathbf{I}_{gt}}}+\sum_{\left(i,j\right)\in\mathbf{S}}\frac{\left\|\mathbf{I}(i+1,j)-\mathbf{I}(i,j)\right\|_{1}}{N_{\mathbf{I}_{gt}}}$ (8) where $\mathbf{S}$ denotes the pixels in the synthesis region.
Figure 4: Real dataset evaluation over Middlebury [11] using the Canny edge detector. The zoomed-in crop of the yellow area is visualised as “Ground Truth”.
Similar to Liu _et al_. [15], we use the following weights to combine all these losses to yield the final training objective: $\lambda_{synthesis}=6$, $\lambda_{context}=1$, $\lambda_{perceptual}=0.05$, $\lambda_{style}=120$, $\lambda_{tv}=0.1$, $\lambda_{disparity}=0.1$. The same parameters are used for all evaluations.
### 3.6 Datasets
Good quality, natural stereo datasets are very hard to come by. This is a problem for training deep neural networks, which usually require a large number of images to extract meaningful statistical information. Our approach to data collection intrinsically performs data augmentation, as the random sampling of context-synthesis areas makes it possible to use different samplings of the same image without overfitting. For training we have used three different datasets: FlyingThings3D and Driving from SceneFlow [18], and Middlebury [26]. FlyingThings3D consists of 21,818 frames from 2,247 scenes, containing everyday objects flying around in a randomised way. This is ideal for training CNNs due to the large amount of data and variety of objects. Driving is a more naturalistic-looking dynamic street scene resembling the KITTI dataset [6]. It contains 4400 images from one scene. On the other hand, the Middlebury dataset consists of only 33 pairs of stereo images of natural scenes. Even though this dataset is not big enough to train a Deep Learning model, we are able to perform transfer learning and generate pleasing results on real-world data (see Fig. 4). These datasets contain ground-truth disparity maps, but for our model we have included a disparity estimation step using PSMNet so we do not rely on existing ground-truth data. This makes comparison to other models that use a similar approach fairer, and is more relevant to our application in media production, where we may have several views of the same scene but no depth information.
Figure 5: Qualitative inpainting results for FlyingThings3D. Baseline is Shih _et al_. [27]. The zoomed-in crop of the yellow area is visualised in the “Ground Truth” column.
Table 1: Quantitative results. Image quality & stereo consistency of different models. Bold is best. $\star$ values are from their paper.
Dataset | Model | PSNR$\uparrow$ | SSIM$\uparrow$ | LPIPS$\downarrow$ | DispE (%)$\downarrow$
---|---|---|---|---|---
FlyingThings3D | Shih _et al_. [27] | 28.32 | 0.8589 | 0.0707 | 7.96
 | Ours | 30.50 | 0.8643 | 0.0556 | 7.67
Driving | Shih _et al_. [27] | 30.46 | 0.969 | 0.1141 | 9.94
 | Chen _et al_. [4]$\star$ | 22.38 | 0.959 | - | 7.79
 | Ma _et al_. [17]$\star$ | 23.20 | 0.964 | - | 4.72
 | Ours | 34.94 | 0.977 | 0.0628 | 8.01
Table 2: Ablation study. Comparison of the accuracy of different stages of the model over all regions. ‘Baseline’ is the monocular inpainting model, ‘Stereo’ is the model + stereo context, ‘Disp’ is the model + disparity loss, and ‘Full’ is the model with both stereo context and disparity loss. Bold is best result. Values in parentheses (blue in the original) are results in synthesis regions only.
Model | PSNR$\uparrow$ | SSIM$\uparrow$ | LPIPS$\downarrow$ | DispE (%)$\downarrow$
---|---|---|---|---
Baseline | 28.32 (22.26) | 0.8589 (0.5619) | 0.0707 (0.0676) | 7.96
Ours (Stereo) | 29.41 (23.61) | 0.8604 (0.5597) | 0.0625 (0.0582) | 7.68
Ours (Disp) | 29.79 (24.00) | 0.8619 (0.5684) | 0.0570 (0.0569) | 7.71
Ours (Full) | 30.50 (24.70) | 0.8643 (0.5771) | 0.0556 (0.0539) | 7.67
### 3.7 Experiment setup
The network is trained using a batch size of 8 and $256\times 256$ images. The model is optimised using the Adam optimiser [13] and a learning rate of 0.1. A model has been trained for each dataset. As FlyingThings3D is 3 to 5 times bigger than the other datasets, a transfer-learning approach has been followed where the model is trained on FlyingThings3D first and then fine-tuned on Driving and Middlebury. For fair comparison to the results of Chen _et al_. [4] and Ma _et al_. [17], we have trained our Driving model using $128\times 128$ square context masks and $64\times 64$ centred synthesis masks. Our baseline model is the 3D photography colour inpainting network of Shih _et al_. [27], which has been trained in the same fashion as our model, and conditioned on depth edges instead of colour edges, as per their original pipeline. For training, we generate edge maps using the Canny edge detector [2], following the EdgeConnect [21] approach. At test time, we apply pre-trained EdgeConnect models to generate the synthesis-area edges, using the model pre-trained on Places2 [36] for FlyingThings3D and Middlebury, and the model pre-trained on Paris StreetView [5] for Driving.
## 4 Results and Discussion
In this section we show different evaluations and comparisons that demonstrate the value of our work. We train our model on three different datasets as explained in Section 3.7, and we compare its accuracy and consistency against state-of-the-art methods. We also perform an ablation study to demonstrate the benefits of the different contributions of our model.
### 4.1 Evaluation of Accuracy
There is no perfect numerical metric to evaluate image inpainting outputs given the variety of possible valid results. For the purpose of quantifying how well our model performs, we make use of several popular metrics that measure different characteristics of an image. To measure image quality, we use the Peak Signal-To-Noise Ratio (PSNR) [9] and the Structural SIMilarity (SSIM) index [31]. PSNR shows the overall pixel consistency, while SSIM measures the coherence of local structures. These metrics assume pixel-wise independence, which may assign favourable scores to perceptually inaccurate results.
For this reason, we also include the Learned Perceptual Image Patch Similarity (LPIPS) [35] metric, which aims to capture human perception using deep features. The stereo consistency is quantified using the disparity error metric from [17], which counts the erroneous pixels of the PSMNet-estimated disparity map of the inpainted image, compared against the ground truth (the definition in [17] contains a typo where the absolute error $\left|d^{i}_{est}-d^{i}_{gt}\right|$ is replaced by $d^{i}_{est}$). Given the inpainted image $\mathbf{I}$, for every pixel $i$ we consider its estimated disparity $d^{i}_{est}$ to be erroneous iff the absolute error against the equivalent pixel in the ground truth disparity image $d^{i}_{gt}$ is greater than $p_{1}$ and its relative error is greater than $p_{2}$ (we use $p_{1}=3$ and $p_{2}=0.05$). This is described in equation 10, where $N$ is the total number of pixels, and $[\;]$ is the Iverson bracket; a minimal reference sketch of this metric is given at the end of this section.

$\displaystyle\mathit{DispE}=\frac{1}{N}\sum_{i\in\mathbf{I}}\Bigl{[}\bigl{(}\left|d^{i}_{est}-d^{i}_{gt}\right|>p_{1}\bigr{)}\And\bigl{(}\frac{\left|d^{i}_{est}-d^{i}_{gt}\right|}{d^{i}_{gt}}>p_{2}\bigr{)}\Bigr{]}$ (10)

### 4.2 Inpainting comparison

We perform a quantitative comparison of our inpainting model against other state-of-the-art methods [27, 4, 17], following the experiment setup described in Section 3.7. Results can be seen in Table 1. We can observe that our model performs better across all metrics compared with the baseline model of Shih _et al_. [27]. Our model also performs competitively against other stereo inpainting models [4, 17], showing superior image inpainting quality with an improvement in PSNR values of 50%, and some improvement to SSIM. Due to the nature of our mask generation process, our stereo context information is quite narrow, limiting the visible area that our network can learn from. Despite this, our model achieves stereo consistency similar to that of Chen _et al_. [4]. The image quality of the inpainting is superior on the Driving dataset, which was trained using square centred masks to match the experimental setup of [4, 17]. Meanwhile, the object-like occlusion masks used on the FlyingThings3D dataset, which are not fully bounded, are much more challenging. A qualitative example is shown in Figure 5 (further results and analysis are provided in the supplementary material). Despite having access to depth edges, Shih _et al_. struggles to produce sharp object boundaries in the inpainted region. Meanwhile, SaiNet is able to use stereo context to inpaint sharp boundaries using colour edge information. This is evidenced by Shih _et al_.'s success in recovering the green bar in the first example, but failure on the colour edge of the second example. However, as shown in the 4th example, our technique still struggles to inpaint especially intricate structures which are not visible through stereo context. Nevertheless, it produces sharper and more visually pleasing results.

### 4.3 Ablation Study

In the interest of proving the contribution of every stage to the accuracy of the model, we have studied its performance when removing the key contributions. Results presented in Table 2 show that every part of the model performs better than the baseline, with the combination of all modules having the best performance across all metrics. It is interesting to note that the use of a disparity loss provides the largest individual benefit in terms of stereo consistency.
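For reference, the disparity error of equation 10 can be computed as in the following minimal NumPy sketch. This is our own illustration rather than the released evaluation code; the small epsilon guarding the division is an added assumption.

```python
import numpy as np

def disparity_error(d_est: np.ndarray, d_gt: np.ndarray,
                    p1: float = 3.0, p2: float = 0.05) -> float:
    """Disparity error of equation 10, returned as a percentage as in Tables 1 and 2.

    A pixel is erroneous iff its absolute error exceeds p1 AND its relative
    error exceeds p2, following the thresholds stated in the text.
    """
    abs_err = np.abs(d_est - d_gt)
    rel_err = abs_err / np.maximum(np.abs(d_gt), 1e-6)  # guard against division by zero
    erroneous = (abs_err > p1) & (rel_err > p2)
    return float(erroneous.mean()) * 100.0
```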
## 5 Conclusion We introduced a new stereo-aware learned inpainting model that enforces stereo consistency on its output, trained in a self-supervision fashion over geometrically meaningful masks representing object occlusions. This technique improved over state-of-the-art models by up to 50% PSNR, and we demonstrated its performance over several diverse datasets. As future work, it would be helpful to explore how we could extend similar techniques to cope with the challenges that wide-baseline non-parallel cameras would provide. #### Acknowledgements. This work was partially supported by the British Broadcasting Corporation (BBC) and the Engineering and Physical Sciences Research Council’s (EPSRC) industrial CASE project “Generating virtual camera views with generative networks” (voucher number 19000033). ## References * [1] Connelly Barnes, Eli Shechtman, Adam Finkelstein, and Dan B Goldman. PatchMatch: A Randomized Correspondence Algorithm for Structural Image Editing. ACM Transactions on Graphics, Aug. 2009. * [2] John Canny. A Computational Approach to Edge Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, Nov. 1986. * [3] Jia-Ren Chang and Yong-Sheng Chen. Pyramid Stereo Matching Network. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 2018. * [4] Shen Chen, Wei Ma, and Yue Qin. CNN-Based Stereoscopic Image Inpainting. In Int. Conf. on Image and Graphics (ICIG), Nov. 2019. * [5] Carl Doersch, Saurabh Singh, Abhinav Gupta, Josef Sivic, and Alexei A. Efros. What Makes Paris Look Like Paris? ACM Transactions on Graphics (SIGGRAPH), 2012. * [6] A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? The KITTI vision benchmark suite. In IEEE Conference on Computer Vision and Pattern Recognition, June 2012. * [7] Andrew Gilbert, Matt Trumble, Adrian Hilton, and John Collomosse. Inpainting of Wide-Baseline Multiple Viewpoint Video. IEEE Transactions on Visualization and Computer Graphics, Dec. 2018\. * [8] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, Dec. 2014. * [9] Q. Huynh-Thu and M. Ghanbari. Scope of validity of PSNR in image/video quality assessment. Electronics Letters, 2008. * [10] Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. Globally and Locally Consistent Image Completion. ACM Transactions on Graphics, 2017. * [11] R. Jensen, A. Dahl, G. Vogiatzis, E. Tola, and H. Aanæs. Large Scale Multi-view Stereopsis Evaluation. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, June 2014. * [12] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual Losses for Real-Time Style Transfer and Super-Resolution. In European Conference on Computer Vision, Oct. 2016. * [13] Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. International Conference on Learning Representations, 2014. * [14] Rolf Köhler, Christian Schuler, Bernhard Schölkopf, and Stefan Harmeling. Mask-Specific Inpainting with Deep Neural Networks. In Pattern Recognition - 36th German Conference, GCPR. Springer International Publishing, 2014. * [15] Guilin Liu, Fitsum A. Reda, Kevin J. Shih, Ting-Chun Wang, Andrew Tao, and Bryan Catanzaro. Image Inpainting for Irregular Holes Using Partial Convolutions. Proceedings of the European Conference on Computer Vision (ECCV), Apr. 2018. 
* [16] Xuan Luo, Yanmeng Kong, Jason Lawrence, Ricardo Martin-Brualla, and Steven M. Seitz. KeystoneDepth: History in 3D. In 2020 International Conference on 3D Vision (3DV), Nov. 2020. * [17] Wei Ma, Mana Zheng, Wenguang Ma, Shibiao Xu, and Xiaopeng Zhang. Learning across views for stereo image completion. IET Computer Vision, 2020. * [18] Nikolaus Mayer, Eddy Ilg, Philip Häusser, Philipp Fischer, Daniel Cremers, Alexey Dosovitskiy, and Thomas Brox. A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016. * [19] B. Morse, J. Howard, S. Cohen, and B. Price. PatchMatch-Based Content Completion of Stereo Image Pairs. In 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT), Oct. 2012. * [20] Tai-Jiang Mu, Ju-Hong Wang, Song-Pei Du, and Shi-Min Hu. Stereoscopic image completion and depth recovery. The Visual Computer: International Journal of Computer Graphics, June 2014. * [21] Kamyar Nazeri, Eric Ng, Tony Joseph, Faisal Qureshi, and Mehran Ebrahimi. EdgeConnect: Structure Guided Image Inpainting using Edge Prediction. In IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), Oct. 2019. * [22] Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A. Efros. Context Encoders: Feature Learning by Inpainting. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Apr. 2016. * [23] Jimmy SJ. Ren, Li Xu, Qiong Yan, and Wenxiu Sun. Shepard convolutional neural networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, Dec. 2015. * [24] Yurui Ren, Xiaoming Yu, Ruonan Zhang, Thomas H. Li, Shan Liu, and Ge Li. StructureFlow: Image Inpainting via Structure-Aware Appearance Flow. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019. * [25] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention – MICCAI, 2015. * [26] D. Scharstein, H. Hirschmüller, York Kitajima, Greg Krathwohl, Nera Nesic, Xi Wang, and P. Westling. High-Resolution Stereo Datasets with Subpixel-Accurate Ground Truth. In GCPR, 2014. * [27] Meng-Li Shih, Shih-Yang Su, Johannes Kopf, and Jia-Bin Huang. 3D Photography using Context-aware Layered Depth Inpainting. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Apr. 2020. * [28] Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. International Conference on Learning Representations, Apr. 2015\. * [29] Yuhang Song, Chao Yang, Yeji Shen, Peng Wang, Qin Huang, and C. C. Jay Kuo. SPG-Net: Segmentation Prediction and Guidance Network for Image Inpainting. British Machine Vision Conference (BMVC), May 2018. * [30] L. Wang, H. Jin, R. Yang, and M. Gong. Stereoscopic inpainting: Joint color and depth completion from stereo images. In IEEE Conference on Computer Vision and Pattern Recognition, June 2008. * [31] Wang, Zhou, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, Apr. 2004. * [32] Junyuan Xie, Linli Xu, and Enhong Chen. Image Denoising and Inpainting with Deep Neural Networks. Advances in Neural Information Processing Systems, Jan. 2012. 
* [33] Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, and Thomas Huang. Free-Form Image Inpainting With Gated Convolution. In IEEE/CVF International Conference on Computer Vision (ICCV), Oct. 2019. * [34] Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, and Thomas S. Huang. Generative Image Inpainting with Contextual Attention. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Jan. 2018. * [35] Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 2018. * [36] Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. Places: A 10 Million Image Database for Scene Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, June 2018.
11institutetext: Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Firenze, Italy 22institutetext: Department of Information Engineering (DINFO), University of Firenze, Italy 33institutetext: Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Milano-Bicocca, Italy 44institutetext: Department of Physics, University of Milano-Bicocca, Italy 55institutetext: European Organization for Nuclear Research (CERN), Switzerland 66institutetext: Department of Physics and Astronomy, University of Manchester, United Kingdom 77institutetext: Affiliated with an institute covered by a cooperation agreement with CERN 88institutetext: Department of Physics, University of Ferrara, Italy 99institutetext: Laboratoire de Physique de Clermont (LPC), Università Clermont Auvergne, France # The LHCb ultra-fast simulation option, Lamarr design and validation Lucio Anderlini 11 Matteo Barbetti 1122<EMAIL_ADDRESS>Simone Capelli 3344 Gloria Corti 55 Adam Davis 66 Denis Derkach 77 Nikita Kazeev 77 Artem Maevskiy 77 Maurizio Martinelli 3344 Sergei Mokonenko 77 Benedetto G. Siddi 88 Zehua Xu on behalf of the LHCb Simulation Project 99 ###### Abstract Detailed detector simulation is the major consumer of CPU resources at LHCb, having used more than 90% of the total computing budget during Run 2 of the Large Hadron Collider at CERN. As data is collected by the upgraded LHCb detector during Run 3 of the LHC, larger requests for simulated data samples are necessary, and will far exceed the pledged resources of the experiment, even with existing fast simulation options. An evolution of technologies and techniques to produce simulated samples is mandatory to meet the upcoming needs of analysis to interpret signal versus background and measure efficiencies. In this context, we propose Lamarr, a Gaudi-based framework designed to offer the fastest solution for the simulation of the LHCb detector. Lamarr consists of a pipeline of modules parameterizing both the detector response and the reconstruction algorithms of the LHCb experiment. Most of the parameterizations are made of Deep Generative Models and Gradient Boosted Decision Trees trained on simulated samples or alternatively, where possible, on real data. Embedding Lamarr in the general LHCb Gauss Simulation framework allows combining its execution with any of the available generators in a seamless way. Lamarr has been validated by comparing key reconstructed quantities with Detailed Simulation. Good agreement of the simulated distributions is obtained with two-order-of-magnitude speed-up of the simulation phase. ## 1 Introduction The LHCb experiment LHCb:2008vvz has been originally designed to study rare decays of particles containing $b$ and $c$ quarks produced at the Large Hadron Collider (LHC). The LHCb detector is a single-arm forward spectrometer covering the pseudorapidity range of $2<\eta<5$, that includes a Tracking system and a Particle Identification (PID) system LHCb:2014set . The Tracking system provides high-precision measurements of the momentum $p$ of charged particles and the position of primary vertices. Different types of charged hadrons are separated using the response of two ring-imaging Cherenkov (RICH) detectors. Photons, electrons and hadrons are identified by the calorimeter system relying on an electromagnetic calorimeter (ECAL) and a hadron calorimeter (HCAL). Finally, a dedicated system named MUON identifies muons alternating layers of iron and multi-wire proportional chambers. 
The RICH, calorimeters and MUON detectors are part of the PID system. Interpreting signal, rejecting background contributions and performing efficiency studies requires a full understanding of the data sample, from the high-energy collisions to the set of physics processes responsible for the detector's high-level response. These kinds of studies greatly benefit from the use of simulated samples. At LHCb, the simulation production mainly relies on the Gauss framework Clemencic:2011zza , which implements the generation and simulation phases and is based on the Gaudi processing framework Barrand:2001ny . The high-energy collisions and all the physics processes that produce the set of particles (e.g., muons, pions, kaons or protons) able to traverse the LHCb spectrometer are simulated during the _generation phase_ using software like Pythia8 Sjostrand:2007gs and EvtGen Lange:2001uf . The radiation-matter interactions between the detector materials and the traversing particles are reproduced during the _simulation phase_ , which aims to compute the energy deposited in the active volumes and relies on the Geant4 toolkit Allison:2006ve . Then, a separate application converts the energy deposits into raw data compatible with the real data collected by LHCb. The simulation of all the physics events occurring within the detector is the major consumer of CPU resources at LHCb, having used more than 90% of the total computing budget during LHC Run 2. The upgraded version of the experiment is designed to collect one-order-of-magnitude larger data samples during Run 3. Meeting the upcoming and future requests for simulated samples is not sustainable by relying only on the traditional _detailed simulation_. For this reason, the LHCb Collaboration is investing great effort in modernizing the simulation software stack through the novel experiment-independent framework Gaussino Mazurek:2021abc ; Mazurek:2022tlu (visit https://gaussino.docs.cern.ch for additional details), on which a newer version of Gauss will be built, and in developing faster simulation options, some of which are also powered by machine learning algorithms Chekalina:2018hxi ; Maevskiy:2019vwj ; Anderlini:2022ofl ; Barbetti:2023bvi .

## 2 Fast simulation vs. ultra-fast simulation

Simulating all the physics processes of interest for LHCb is extremely expensive in terms of computing resources, especially the Geant4-based step, which is the major CPU consumer. Speeding up the computation of the energy deposits or, more generally, the detector response is mandatory to satisfy the demand for simulations expected for Run 3 and those that will follow. This is a problem shared across the High Energy Physics (HEP) community, which is facing it collectively, including by exploiting the latest achievements in Computer Science and adapting _deep generative models_ to parameterize the low-level response of the various experiments Paganini:2017dwg ; Krause:2021ilc ; Amram:2023onf . The literature refers to this kind of strategy as _fast simulation_. Fast simulations share their data processing scheme and the reconstruction step with the detailed simulation (as depicted in Figure 1), and are proven capable of reducing the computation cost of a simulated sample by up to a factor of 20.

Figure 1: Schematic representation of the data processing flow in the _detailed_ (top), _fast_ (center) and _ultra-fast_ (bottom) simulation paradigms.
To meet the upcoming and future requests for simulated samples, the LHCb Collaboration is also considering a more radical approach based on the so- called _ultra-fast simulation_ paradigm. In this case, the aim is to directly reproduce the high-level response of the detector relying on a set of parameterizations developed to transform generator-level particles information into reconstructed physics objects as schematically represented in Figure 1 (bottom). Such parameterizations can still be built using generative models, like _Generative Adversarial Networks_ (GAN), proven to succeed in reproducing the high-level response of the LHCb detector Ratnikov:2023wof and offering reliable synthetic simulated samples Anderlini:2022ckd . Following pioneering studies on the ultra-fast simulation of the electromagnetic calorimeter based on GANs Musella:2018rdi , the CMS Collaboration has recently started developing a full-scope ultra-fast simulation based on _Normalizing Flow_ , named FlashSim Vaselli:2858890 . ## 3 Lamarr: the LHCb ultra-fast simulation framework Lamarr Anderlini:2022ofl ; Barbetti:2023bvi is the official ultra-fast simulation framework for LHCb, able to offer the fastest options for simulation. Originating from the attempt of an LHCb customized version of Delphes deFavereau:2013fsa ; Siddi:2019abc , Lamarr is an independent project retaining only the inspiration of its modular layout from Delphes. In particular, the Lamarr framework consists of a pipeline of modular parameterizations, most of which based on machine learning algorithms, designed to take as input the particles generated by the physics generators and provide as output the high-level response of the various LHCb sub- detectors. The Lamarr pipeline can be logically split in two separated chains according to the charge of the generated particles. We expect that charged particles leave a mark in the Tracking system that Lamarr characterizes in terms of acceptance, efficiency and resolution as described in Section 3.1. The reconstructed tracking variables are then used to compute the response of the PID system for a set of traversing charged particles (muons, pions, kaons or protons) as detailed in Section 3.2. In case of neutral particles (e.g., photons), the calorimeters play a key role and, since multiple photons can concur to the energy of a single calorimetric cluster, parameterizing particle-to-particle correlation effects is of major relevance. The solutions under investigation are reported in Section 3.3. The Lamarr pipelines described above are shown in Figure 2. ### 3.1 Tracking system One of the aims of the LHCb Tracking system is to measure the momentum $p$ of charged particles (i.e., electrons, muons, pions, kaons and protons), exploiting the deflection of their trajectories due to the dipole magnet located in between the tracking detectors. Hence, the first step of the _charged chain_ reported in Figure 2 is the propagation through the magnetic field of the particles provided by the physics generators. Lamarr parameterizes the particle trajectories as two rectilinear segments with a single deflection point (inversely proportional to the transverse momentum $p_{T}$), implementing the so-called _single $p_{T}$ kick_ approximation. Figure 2: Scheme of the Lamarr modular pipeline. 
According to the charge of the particle provided by the physics generator, two sets of parameterizations are defined: the charged particles are passed through the Tracking and PID models, while the neutral ones follow a different path where the calorimeter modeling plays a key role. The next step requires to select the subset of tracks that fall within the LHCb geometrical acceptance and that have any chance to be reconstructed. To this end, Lamarr uses _Gradient Boosted Decision Trees_ (GBDT) trained to learn the fraction of candidates that are in the acceptance as a function of the kinematic information provided by the physics generators. Given a generated track in acceptance, we ask whether the latter will be reconstructed and, in case of positive answer, which tracking detectors are involved in the reconstruction procedure. Lamarr statistically infers such information, namely the tracking efficiency, relying on _neural networks_ trained to perform a multi-class classification according to the track kinematics. A major effort is ongoing to improve the performance of the efficiency model on the basis of the type of tracks and particle species (i.e., electrons, muons or hadrons). At this point, Lamarr disposes of the subset of the generated particles that can be considered as reconstructed tracks, but their kinematics and geometry are still identical to those provided by the physics generators. The smearing of these features, mimicking the effect of the reconstruction, is achieved using GANs. Driven by a _binary cross-entropy_ loss function and powered by _skip connections_ , GANs succeed in describing the resolution effects due to, for example, multiple scattering phenomena, only relying on track kinematic information at generator-level as input conditions. A similar GAN-based architecture is used to provide the correlation matrix obtained from the Kalman filter adopted in the reconstruction algorithm to define the position, slope and curvature of each track. Stacking the parameterizations described above, Lamarr is able to provide the high-level response of the LHCb Tracking system. The resulting reconstructed quantities can be further processed using the LHCb analysis software to combine the parameterized tracks into decay candidates as depicted by the green slot in Figure 1 (bottom). ### 3.2 Particle identification system To accomplish the LHCb physics program, disposing of a high-performance PID system is crucial since it allows for discriminating the various particle species that traverse the detector. Lamarr provides parameterizations for the majority of the charged particles for which the PID detectors are relevant (i.e., muons, pions, kaons or protons). Specialized parameterizations for the electrons, encoding the multiple scattering and Bremsstrahlung emission contributions in the interaction with the detector materials, is planned as future development. Identifying these subset particles involves mainly the RICH and MUON detectors, while the role played by the calorimeters is minor. In general, we expect that the response of the PID system depends only on the specie of the traversing particle, its kinematics, and the detector occupancy. According to these dependencies, Lamarr provides the high-level response for both the detectors using GAN-based models properly conditioned Maevskiy:2019vwj ; Anderlini:2022ckd . 
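To give a concrete idea of what such a conditional parameterization looks like in code, the sketch below builds a toy conditional generator and discriminator in Keras (one of the libraries used to train the Lamarr models, as mentioned later in the text). The feature counts, layer sizes and the plain binary-classification output are illustrative assumptions; they do not reproduce the actual Lamarr architectures, which also employ skip connections and, for the PID case, Wasserstein-type losses with a Lipschitz-constrained discriminator.

```python
# Toy sketch of a conditional GAN parameterization: feature counts, layer
# sizes and the sigmoid output are assumptions made for illustration only.
from tensorflow.keras import layers, Model

LATENT_DIM = 64
N_COND = 3   # assumed conditions: momentum, pseudorapidity, detector occupancy
N_OUT = 4    # assumed number of high-level response variables to parameterize

def build_generator() -> Model:
    noise = layers.Input(shape=(LATENT_DIM,), name="noise")
    cond = layers.Input(shape=(N_COND,), name="conditions")
    x = layers.Concatenate()([noise, cond])
    for units in (128, 128):
        x = layers.Dense(units, activation="relu")(x)
    out = layers.Dense(N_OUT, name="synthetic_response")(x)
    return Model([noise, cond], out, name="generator")

def build_discriminator() -> Model:
    response = layers.Input(shape=(N_OUT,), name="response")
    cond = layers.Input(shape=(N_COND,), name="conditions")
    x = layers.Concatenate()([response, cond])
    for units in (128, 128):
        x = layers.Dense(units, activation="relu")(x)
    out = layers.Dense(1, activation="sigmoid")(x)  # real vs. generated
    return Model([response, cond], out, name="discriminator")

generator = build_generator()
discriminator = build_discriminator()
```

In the actual framework, one such conditional model is trained per detector response, and it is chained with the tracking parameterizations that provide its kinematic conditions.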
In practice, the particle species is taken from the physics generators, its kinematic information results from the Lamarr Tracking modules, and the detector occupancy is described by the total number of tracks traversing the detector. In real data, the combination of the responses from the RICH detectors, the calorimeters, the MUON system and a binary muon-identification criterion implemented via FPGA and named isMuon allows computing the higher-level response of the PID system, referred to as the GlobalPID variables. The parameterization of the GlobalPID variables still relies on conditioned GANs, adding as input what results from the RichGAN and MuonGAN models. The binary output of a neural-network-based implementation of isMuon is used as an additional input feature, while no explicit calorimeter contribution is defined, leaving the missing-information problem to the generator _latent space_. GAN-based models, driven by a _Wasserstein distance_ loss function and trained using a Lipschitz-constrained discriminator Terjek:2020 , succeed in describing the high-level response of the RICH and MUON systems. Chaining together different GANs, Lamarr is also able to provide the higher-level response of the LHCb PID system, injecting an implicit contribution from the calorimeters.

### 3.3 Electromagnetic calorimeter

Providing a parameterization for the electrons requires describing the response of the LHCb ECAL detector to Bremsstrahlung photons. Since it involves a multitude of secondary particles, the detailed simulation of the calorimeter system is the most computationally expensive step in the simulation pipeline. This is a problem shared across the HEP community, which is investing great effort in tuning deep generative models to properly parameterize the energy deposited in the calorimeter cells Chekalina:2018hxi ; Paganini:2017dwg ; Krause:2021ilc ; Amram:2023onf . Such studies belong to the fast-simulation paradigm, which aims to reduce the use of Geant4 by providing models for the low-level response of the various experiments. The current version of Lamarr provides a simplified parameterization for the LHCb calorimeter, designed for detector studies and based on a fast-simulation approach. Working with information at the calorimeter-cell level requires running reconstruction algorithms to obtain analysis-level quantities, which may become rather CPU-expensive for high-multiplicity events. In addition, since non-physical strategies are used to simulate the energy deposits (as is the case for GANs), there is no certainty that the reconstruction software stack can correctly reproduce the expected distributions for the high-level variables Rogachev:2022hjg . Hence, the Lamarr project is actively working to provide an ultra-fast solution for the ECAL detector. Reproducing the calorimeter high-level response is a non-trivial task since traditional generative models rely on the hypothesis that an unambiguous relation between the generated particle and the reconstructed object exists (to a first approximation, the responses of the Tracking and PID systems satisfy this condition). Instead, the presence of merged $\pi^{0}$ and Bremsstrahlung photons may lead to having $n$ generated particles responsible for $m$ reconstructed objects (in general with $n\neq m$). A strategy to face this particle-to-particle correlation problem can be built using techniques designed in the context of Language Modeling, describing the calorimeter simulation as a _translation problem_.
To this end, _Graph Neural Network_ (GNN) Scarselli:2009abc and _Transformer_ Vaswani:2017abc models are currently under investigation. Both models are designed to process a sequence of $n$ generated photons and infer the kinematics of a sequence of $m$ reconstructed clusters. The non-trivial correlations between any particles of the source sequence (photons) and the target one (clusters) rely on the _attention mechanism_ Vaswani:2017abc ; Brody:2022abc . To improve the quality of the resulting parameterizations, the training of both the GNN and Transformer-based models is driven by an adversarial procedure (similarly to what occurs for GANs). The discriminator is currently implemented through a _Deep Sets_ model Zaheer:2017abc , while further studies are ongoing to replace it with a second Transformer Lee:2022abc . Considering the complexity of the problem, the preliminary results are promising, as depicted in Figure 3, where the joint action of the Transformer and Deep Sets succeeds in deriving the energy distribution on the ECAL face. The center of the calorimeter has no active material since it is used to host the LHC beam pipe. It should be pointed out that no constraints are applied to the model output to reproduce such conditions, and that the empty space shown in Figure 3 (right) is the result of the adversarial training procedure.

Figure 3: Distribution of the $(x,y)$-position of the reconstructed clusters on the LHCb ECAL face for a $2000\times 1500\ \rm{mm}^{2}$ frame placed around the center. The geometrical information is combined with the energy signature by properly weighting each bin entry. The result obtained from detailed simulation is reported on the left, while the prediction of an adversarially trained Transformer model is shown on the right. The corresponding LHCB-FIGURE is in preparation.

## 4 Validation campaign and timing performance

The ultra-fast philosophy at the base of the Lamarr framework is being validated by comparing the distributions obtained from machine-learnt models trained on detailed simulation with the ones resulting from standard simulation strategies. In particular, we briefly discuss the validation studies performed for the charged-particle pipeline using simulated $\Lambda_{b}^{0}\to\Lambda_{c}^{+}\mu^{-}\bar{\nu}_{\mu}$ decays with $\Lambda_{c}^{+}\to pK^{-}\pi^{+}$. The semileptonic nature of the $\Lambda^{0}_{b}$ decay requires an interface with dedicated generators, in this case EvtGen. Deeply studied by LHCb, this decay channel includes in its final state the four charged particle species parameterized in the current version of Lamarr, namely muons, pions, kaons and protons. The validation of the Lamarr Tracking modules is depicted in Figure 4 (left), where the agreement between the $\Lambda^{+}_{c}$ invariant mass distribution resulting from the ultra-fast paradigm and the one obtained from detailed simulation proves that the decay dynamics is well reproduced and the resolution effects are correctly parameterized. To show the good performance of the Lamarr PID models, a comparison between the selection efficiencies for a tight requirement on a multivariate proton classifier is shown in Figure 4 (right).

Figure 4: Validation plots for $\Lambda_{b}^{0}\to\Lambda_{c}^{+}\mu^{-}\bar{\nu}_{\mu}$ decays with $\Lambda_{c}^{+}\to pK^{-}\pi^{+}$ simulated with Pythia8, EvtGen and Lamarr (orange markers) and compared with detailed simulation samples relying on Pythia8, EvtGen and Geant4 (cyan shaded histogram).
Reproduced from LHCB-FIGURE-2022-014.

Comparing the CPU time spent per event by the Geant4-based production of $\Lambda_{b}^{0}\to\Lambda_{c}^{+}\mu^{-}\bar{\nu}_{\mu}$ samples and the one needed by Lamarr, we estimate a two-order-of-magnitude CPU reduction for the simulation phase alone. Interestingly, since the generation of $b$-baryons is exceptionally expensive, Pythia8 becomes the major consumer of CPU resources in the ultra-fast paradigm. A further speed-up can be reached by reducing the cost of generation, for example using a _Particle Gun_ that directly simulates the signal particles without going through the high-energy collisions, which are not needed since Lamarr parameterizes the detector occupancy. Even in these physics-simplified settings, the ultra-fast philosophy succeeds in reproducing the distributions obtained from detailed simulation Anderlini:2022ofl .

## 5 Integration with the LHCb simulation framework

To be integrated within the LHCb software stack, the parameterizations provided by Lamarr need to be queried from a C++ application running in the Gaudi framework. Traditional deployment strategies were found to lead to unacceptably large overheads due to the presence of different multi-threading schedulers and context-switching issues. Hence, a custom deployment strategy was preferred: models trained with scikit-learn and Keras are converted into compatible C code using the scikinC toolkit Anderlini:2022ltm , and then distributed through the LHCb Computing Grid via the CERN VM file-system (cvmfs) Buncic:2010zz . The modular layout of Lamarr enables a variety of studies and developments on the individual parameterizations, providing a unique and shared infrastructure for validation and performance measurements. While crucial for applications within LHCb, the integration with Gaudi and Gauss makes the adoption of Lamarr unappealing for researchers outside of the LHCb community. The SQLamarr package (visit https://lamarrsim.github.io/SQLamarr for additional details) aims to mitigate this problem, providing a stand-alone ultra-fast simulation framework with minimal dependencies. Based on SQLite3, SQLamarr provides a set of classes and functions for loading data from physics generators and defining pipelines from compiled models. An integration between SQLamarr and Gaussino is currently under investigation with the aim of providing ultra-fast parameterizations following the experiment-independent philosophy of the newest LHCb simulation framework, named Gauss-on-Gaussino Mazurek:2021abc ; Mazurek:2022tlu (visit https://lhcb-gauss.docs.cern.ch/Futurev5 for additional details).

## 6 Conclusion

An evolution of the LHCb software stack and of the simulation techniques is mandatory to meet the upcoming and future demand for simulated samples expected for Run 3 and those that will follow. Ultra-fast solutions will play a key role in reducing the pressure on pledged CPU resources, without unreasonably compromising the description of the uncertainties introduced in the detection and reconstruction phases. Such techniques, powered by deep generative models, are provided to LHCb via the novel Lamarr framework. Well integrated with the physics generators within the Gauss framework, Lamarr delivers two pipelines according to the charge of the generated particle. The statistical models for the Tracking and the charged PID systems have been deployed and validated with satisfactory results on $\Lambda_{b}^{0}\to\Lambda_{c}^{+}\mu^{-}\bar{\nu}_{\mu}$ decays.
Several models are currently under investigation for the neutral pipeline, where the translation-problem approach offers a viable solution to face the particle-to-particle correlation problem. Further development of the integration between Lamarr and the LHCb simulation framework is one of the major ongoing activities to put the former in production and make its parameterizations available to the HEP community.

## Acknowledgements

This work is partially supported by ICSC – Centro Nazionale di Ricerca in High Performance Computing, Big Data and Quantum Computing, funded by European Union – NextGenerationEU.

## References

* (1) A.A. Alves, Jr. et al. (LHCb), JINST 3, S08005 (2008)
* (2) R. Aaij et al. (LHCb), Int. J. Mod. Phys. A 30, 1530022 (2015), 1412.6352
* (3) M. Clemencic et al. (LHCb), J. Phys. Conf. Ser. 331, 032023 (2011)
* (4) G. Barrand et al., Comput. Phys. Commun. 140, 45 (2001)
* (5) T. Sjostrand, S. Mrenna, P.Z. Skands, Comput. Phys. Commun. 178, 852 (2008), 0710.3820
* (6) D.J. Lange, Nucl. Instrum. Meth. A 462, 152 (2001)
* (7) J. Allison et al., IEEE Trans. Nucl. Sci. 53, 270 (2006)
* (8) M. Mazurek, G. Corti, D. Müller, Comput. Inform. 40, 815–832 (2021), 2112.04789
* (9) M. Mazurek, M. Clemencic, G. Corti, PoS ICHEP2022, 225 (2023)
* (10) V. Chekalina et al., EPJ Web Conf. 214, 02034 (2019), 1812.01319
* (11) A. Maevskiy et al. (LHCb), J. Phys. Conf. Ser. 1525, 012097 (2020), 1905.11825
* (12) L. Anderlini et al., PoS ICHEP2022, 233 (2023)
* (13) M. Barbetti (2023), ACAT’22, 2303.11428
* (14) M. Paganini, L. de Oliveira, B. Nachman, Phys. Rev. D 97, 014021 (2018), 1712.10321
* (15) C. Krause, D. Shih, Phys. Rev. D 107, 113003 (2023), 2106.05285
* (16) O. Amram, K. Pedro (2023), 2308.03876
* (17) F. Ratnikov et al., Nucl. Instrum. Meth. A 1046, 167591 (2023)
* (18) L. Anderlini et al. (LHCb), J. Phys. Conf. Ser. 2438, 012130 (2023), 2204.09947
* (19) P. Musella, F. Pandolfi, Comput. Softw. Big Sci. 2, 8 (2018), 1805.00850
* (20) F. Vaselli et al. (CMS), Tech. rep. (2023), https://cds.cern.ch/record/2858890
* (21) J. de Favereau et al. (DELPHES 3), JHEP 02, 057 (2014), 1307.6346
* (22) B.G. Siddi, EPJ Web Conf. 214, 02024 (2019)
* (23) D. Terjék (2020), ICLR’20, 1907.05681
* (24) A. Rogachev, F. Ratnikov, J. Phys. Conf. Ser. 2438, 012086 (2023), 2207.06329
* (25) F. Scarselli et al., IEEE Trans. Neural. Netw. 20, 61 (2009)
* (26) A. Vaswani et al. (2017), NeurIPS’17, 1706.03762
* (27) S. Brody, U. Alon, E. Yahav (2022), ICLR’22, 2105.14491
* (28) M. Zaheer et al. (2017), NeurIPS’17, 1703.06114
* (29) K. Lee et al. (2022), ICLR’22, 2107.04589
* (30) L. Anderlini, M. Barbetti, PoS CompTools2021, 034 (2022)
* (31) P. Buncic et al., J. Phys. Conf. Ser. 219, 042003 (2010)
# Root Cause Analysis In Microservice Using Neural Granger Causal Discovery Cheng-Ming Lin, Ching Chang, Wei-Yao Wang, Kuang-Da Wang, Wen-Chih Peng ###### Abstract In recent years, microservices have gained widespread adoption in IT operations due to their scalability, maintenance, and flexibility. However, it becomes challenging for site reliability engineers (SREs) to pinpoint the root cause due to the complex relationships in microservices when facing system malfunctions. Previous research employed structured learning methods (e.g., PC-algorithm) to establish causal relationships and derive root causes from causal graphs. Nevertheless, they ignored the temporal order of time series data and failed to leverage the rich information inherent in the temporal relationships. For instance, in cases where there is a sudden spike in CPU utilization, it can lead to an increase in latency for other microservices. However, in this scenario, the anomaly in CPU utilization occurs before the latency increase, rather than simultaneously. As a result, the PC-algorithm fails to capture such characteristics. To address these challenges, we propose RUN, a novel approach for root cause analysis using neural Granger causal discovery with contrastive learning. RUN enhances the backbone encoder by integrating contextual information from time series, and leverages a time series forecasting model to conduct neural Granger causal discovery. In addition, RUN incorporates Pagerank with a personalization vector to efficiently recommend the top-k root causes. Extensive experiments conducted on the synthetic and real-world microservice-based datasets demonstrate that RUN noticeably outperforms the state-of-the-art root cause analysis methods. Moreover, we provide an analysis scenario for the sock-shop case to showcase the practicality and efficacy of RUN in microservice-based applications. Our code is publicly available at https://github.com/zmlin1998/RUN. ## 1 Introduction Root cause analysis plays a crucial role in numerous domains such as cloud system operations (Soldani, Forti, and Brogi 2022), manufacturing processes (Oliveira, Miguéis, and de Moura Borges 2023), or telecommunications networks (Chen et al. 2022). For instance, during the manufacturing process of wafers, if the thickness of a wafer is found to deviate from the standard, root cause analysis can be employed to identify the specific manufacturing step that caused the abnormal thickness. By applying root cause analysis to sensor data with complex relationships, these situations can effectively pinpoint the underlying causes when a system malfunction occurs. In recent years, as the companies experience continued growth, the system operations have scaled up and grown increasingly complex. Therefore, these organizations have opted to migrate from the so-called monolithic architecture to microservice architecture (Liu et al. 2021). Microservice architecture offers benefits such as better scalability, easier maintenance, and greater flexibility. Each service can be independently scaled based on demand, enabling efficient resource utilization. Developers can focus on individual services, making it simpler to debug, test, and deploy changes. Despite the numerous benefits of microservices, when an anomaly arises in one service, the interdependencies among services can create a domino effect, resulting in subsequent issues and ultimately leading to system failure (Zhang et al. 2022). 
In such scenarios, in-depth analysis becomes imperative to identify the culprit of the anomaly and mitigate the problem effectively. Figure 1: An example of causal structure discovery-based techniques for RCA. However, in microservice monitoring systems, only the operational values of the system are recorded, without documenting the relationships between them. Hence, researchers have recently employed causal structure discovery-based techniques for Root Cause Analysis (RCA) in cloud applications (Wang et al. 2023), aiming to identify the underlying causes of anomalies. Figure 1 illustrates a flow of RCA using the causal structure discovery-based approach. When the anomaly detection alarm is triggered by an anomalous Key Performance Indicator (KPI), engineers initially designate that particular KPI as the trigger point. Subsequently, they aim to identify the underlying root cause of this trigger point. To achieve this, they construct a causal graph that establishes relationships between different KPIs, allowing them to precisely pinpoint the culprit of the anomaly using the insights provided by the causal graph. On the other hand, the Granger causality (Granger 1969) analysis is another widely recognized approach used to assess whether a set of time series $x$ is a causal factor for another set of time series $y$. Granger causality has gained attention and is widely acknowledged for its advantages of interpretability and compatibility with the emergence of deep learning (Tank et al. 2022). Previous works (Nauta, Bucur, and Seifert 2019) utilized time series forecasting models and interpreted model parameters as the relationships between variables to establish causal relationships. However, prior approaches in neural Granger causal discovery have not effectively leveraged the contextual information inherent in temporal data. The previous approach (Yue et al. 2022) generated positive and negative samples at instance-wise and temporal dimensions for generating fine-grained representations for any granularity. However, we notice that real-world time series often exhibit multiple-periodicity. For instance, the temperature tends to reach its peak at noon every day, there is an influx of people in the city during weekends, and electricity consumption significantly increases during the summer every year. Therefore, this approach might inadvertently treat timestamps with the same periodicity as negative pairs, even though they actually hold similar significance. To tackle these challenges, we propose a novel framework RUN, which employs a self-supervised neural Granger causal discovery approach for conducting root cause analysis. To capture contextual information in time series, we employ a self-supervised learning scheme that ensures timestamps with diverse contexts, but the same timestamps are learned to exhibit proximity, where we only treat the identical timestamps as positive pairs for contrastive learning. This way aims to prevent the inclusion of erroneous information stemming from negative pairs. After enhancing the backbone encoder to acquire improved representations, we utilize neural Granger causal discovery to explore the causal relationships among variables. Subsequently, we proceed to identify the root causes of the trigger point within the acquired causal graph using Pagerank with a personalized vector. 
Comprehensive experiments conducted on both synthetic and real-world microservice-based datasets conclusively demonstrate that our proposed framework outperforms state-of-the-art root cause analysis methods. Our main contributions are summarized as follows: * • We propose a self-supervised neural Granger causal discovery-based root cause analysis framework RUN, which captures contextual information in temporal data, and then leverages the time series forecasting model to construct a causal graph between multivariate time series. RUN employs Pagerank on the derived causal graph to identify the root cause of the trigger point. * • We introduce an innovative self-supervised learning method for time series data that exclusively treats identical timestamps with distinct contextual information as positive pairs. This approach mitigates the problem of misidentifying timestamps with similar periodicity as negative pairs, thereby preventing the separation of representations for timestamps sharing the same periodicity. * • RUN outperforms existing state-of-the-art methods on synthetic datasets and data generated from a real-world application. We further conduct an ablation study to validate the effectiveness of our contrastive learning method. ## 2 Related Work Figure 2: Illustrations of contextual information from the same timestamps but with different contexts. The red window and green window respectively represent two distinct types of contextual information. The same timestamp with different contexts should be close. Figure 3: An illustrated issue of negative pair selection. Figure 4: Overview of our proposed framework, RUN, consisting of three stages: 1) Maximizing the positive pair to capture the contextual information; 2) Neural Granger causal discovery to derive the causal graph from multivariate time series; and 3) The diagnosis stage infers the root cause from the obtained causal graph. ### 2.1 Root Cause Analysis in Microservices Within large organizations, the adoption of microservices has become the prevailing architecture due to its advantages in terms of enhanced scalability, simplified maintenance, and increased flexibility. When the anomaly detection alarm is activated by an anomalous KPI, engineers often have to invest a significant amount of time to pinpoint the underlying cause behind this anomaly. Consequently, they engage in the development of automated root cause analysis systems to mitigate the time spent on the investigation. $\epsilon$-diagnosis (Shan et al. 2019) considers the time series of KPIs and computes the similarity between them during normal periods and abnormal periods, which serves as an anomaly indicator. AutoMAP (Ma et al. 2020) utilizes the PC-algorithm to construct a causality graph based on the KPIs. It further conducts a random walk to determine the candidate’s root causes. RCD (Ikram et al. 2022) employs the hierarchical and localized learning algorithm. It deems the failures as an intervention, and adopts a divide-and-conquer scheme to divide the variables into multiple subsets for conducting $\Psi$-PC to find interventional targets from each subset. The candidate root causes from all subsets are then combined, and the same process is iteratively applied until further splitting of candidate root causes is not possible. Nonetheless, these methods rely on similarity or the PC-algorithm, both of which overlook the essential role of temporal dependency in root cause analysis. 
For instance, when a sudden spike in CPU utilization occurs, it can lead to increased latency in other microservices, effectively encompassing the temporal dimension of the information. Therefore, we aim to develop a neural Granger causal discovery method that can leverage the temporal dependency to identify the root causes more accurately. ### 2.2 Neural Granger Causal Discovery As deep Neural Networks (NNs) continue to advance rapidly, researchers have started using Recurrent Neural Networks or other Temporal Convolutional Networks to infer nonlinear Granger causality. MSNGC (Fan et al. 2023) extracts diverse causal information from the data, considering various delay ranges, and effectively integrates them using learned attention weights. Tank et al. (2022) applies structured component-wise multilayer perceptrons (MLPs) combined with sparsity-inducing penalties on the weights. CUTS (Cheng et al. 2023) utilizes EM-Style joint causal graph learning and missing data imputation for irregular temporal data. Nevertheless, previous works have not effectively leveraged contextual information in time series data. As a result, we present an innovative approach for time series contrastive learning that enables to the capture of contextual information from identical timestamps but with different contexts, as depicted in Figure 2. ### 2.3 Contrastive Learning Contrastive learning has been widely used in various domains, such as natural language processing (NLP) (Gao, Yao, and Chen 2021), computer vision (CV) (Chen and He 2021), and time series (Yue et al. 2022). TS2Vec (Yue et al. 2022) samples positive pairs which are the same timestamp with different contexts, and negative pairs which are different timestamps. However, we observe that real-world time series frequently demonstrate multiple- periodicity. With the advancement of contrastive learning, (Grill et al. 2020; Chen and He 2021) argue that it can perform well even without negative pairs because they aim to prevent the inclusion of incorrect negative pairs. Hence, we propose an innovative approach that exclusively leverages identical timestamps as positive pairs for self-supervised learning. This scheme is designed to prevent the incorporation of information from negative pairs. In Figure 3, we can observe that the time series data exhibits periodic patterns. Therefore, if we were to consider the two circled red points on the graph as negative pairs, it would lead to incorporating incorrect information from them. ## 3 Problem Formulation The monitoring system will monitor the operational status of the microservice system through KPIs, and in typical scenarios, we pinpoint the root cause when an anomalous KPI triggers the anomaly detection system. We term the anomalous KPI as a trigger point and subsequently identify the underlying root causes behind the trigger point. These KPIs belong to multivariate time series data. In the microservice architecture, identifying the underlying causes for the anomaly can be formalized as follows. Multivariate time series X consists of $N$ features, $\textbf{X}=[X_{1},X_{2},...,X_{N}]$. We collect the corresponding time series with a specified time period $T$, each time series can be denoted as $X_{i}=[x^{1}_{i},x^{2}_{i},\ldots,x^{T}_{i}]$, $\textbf{X}\in\mathbb{R}^{N\times T}$. We utilize the neural Granger causal discovery method to derive the causal graph $\hat{\textbf{G}}=\\{V,E\\}$, where nodes $V$ denotes each feature $X_{i}$ and edges $E$ denotes that feature $X_{i}$ Granger causes feature $X_{j}$. 
Our goal is to identify the root cause $X_{culprit}$ which leads to the trigger point within the causal graph $\hat{\textbf{G}}$.

## 4 Methodology

Our proposed framework is shown in Figure 4, which consists of three stages: the pre-training stage, the neural Granger causal discovery stage, and the diagnosis stage. In the pre-training stage, we enhance the backbone encoder to generate informative time series representations that incorporate contextual information. This enhancement is achieved by maximizing the agreement among instances with the same timestamp but with different contexts. In the neural Granger causal discovery stage, we utilize the time series forecasting model to derive the causal graph and prune spurious edges to get the final Directed Acyclic Graph (DAG). In the diagnosis stage, we apply a random walk algorithm, namely Pagerank with a personalization vector, which recommends the most likely root causes based on the causal graph.

### 4.1 Pre-training Stage

We utilize DLinear (Zeng et al. 2023) as our backbone encoder for time series forecasting. In order to enhance our backbone encoder, we design contrastive learning without relying on negative pairs. We consider that negative pairs are not suitable for time series because they exhibit periodic patterns, as depicted in Figure 3. When selecting different timestamps as negative pairs, it is possible that they belong to the same periodicity. If we choose incorrect negative pairs, it may lead to separating embeddings that should be similar. Therefore, we learn the contextual information by considering instances with the same timestamp but with different contexts as positive pairs. First, we utilize random cropping following (Yue et al. 2022) by randomly sampling two overlapping time segments $y_{1}$ and $y_{2}$,

$y_{1}=[a_{1},b_{1}],\ y_{2}=[a_{2},b_{2}],\quad 0<a_{1}\leq a_{2}\leq b_{1}\leq b_{2}\leq T.$ (1)

The two views $y_{1}$ and $y_{2}$ are processed by our backbone encoder $f$ and a projector $g$. To learn the contextual information without negative pairs, we utilize the same scheme as SimSiam (Chen and He 2021). We denote the two representations as $z_{1}=g(f(y_{1}))$ and $p_{2}=f(y_{2})$, and maximize the similarity between both sides. Here, we define our loss function as follows:

$L_{con}=-\frac{1}{2}\left(\cos(z_{1},\mathrm{stopgrad}(p_{2}))+\cos(\mathrm{stopgrad}(p_{1}),z_{2})\right).$ (2)
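A minimal PyTorch sketch of this objective is shown below. It is an illustration only, not the released implementation: the encoder $f$ and projector $g$ are placeholders, their outputs are assumed to share the same feature dimension, and the two crops are assumed to have already been restricted to their overlapping timestamps so that position $t$ in both views refers to the same original timestamp.

```python
# Illustrative sketch of the positive-pair-only loss in Eq. (2); assumptions
# (shared output dimension of f and g, pre-aligned overlapping crops) as noted above.
import torch.nn.functional as F

def contrastive_loss(f, g, y1, y2):
    """f: backbone encoder, g: projector; y1, y2: aligned overlapping crops
    of shape (batch, time, features)."""
    p1, p2 = f(y1), f(y2)      # encoder outputs
    z1, z2 = g(p1), g(p2)      # projector outputs
    # Stop-gradient on the encoder branch, as in SimSiam; no negative pairs are used.
    return -0.5 * (
        F.cosine_similarity(z1, p2.detach(), dim=-1).mean()
        + F.cosine_similarity(z2, p1.detach(), dim=-1).mean()
    )
```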
### 4.2 Neural Granger Causal Discovery Stage

RUN represents the causal graph as a Causal Attention Matrix $\textbf{G}=\{0\leq\alpha_{ij}\leq 1,\ \forall i,j\in[1,2,...,N]\}$, where the element $\alpha_{ij}$ denotes the attention score from $X_{i}$ to $X_{j}$. Each element $\alpha_{ij}$ serves as a trainable parameter, contributing to identifying causal relationships among time series. We define the final causal graph $\bm{\widetilde{G}}=\{g_{ij}\in\{0,1\},\ \forall i,j\in[1,2,...,N]\}$. If the attention score $\alpha_{ij}$ is beyond a certain threshold, $g_{ij}$ will be 1, indicating that time series $X_{i}$ Granger causes $X_{j}$. The proposed neural Granger causal discovery comprises two components: time series forecasting and causal graph discovery.

#### Time Series Forecasting

In time series forecasting, we propose $N$ independent neural networks $f_{\theta_{i}}$, one for each time series $X_{i}$, to fit its data generation function (Cheng et al. 2023). We define the input of our model by applying a sliding window of size $w$ across all the historical time series X and multiplying by the corresponding attention scores $\alpha_{ni}$; the forecast $\hat{x}_{i}$ at timestamp $t$ is then the output of the neural network $f_{\theta_{i}}$:

$\displaystyle\hat{x}^{t}_{i}=f_{\theta_{i}}(\textbf{X}\odot G)=f_{\theta_{i}}(x^{t-w:t-1}_{1}\odot\alpha_{1i},...,x^{t-w:t-1}_{N}\odot\alpha_{Ni}),\quad i=1,2,...,N.$ (3)

An overview of time series forecasting is illustrated in Figure 5. Each independent neural network has the same architecture but a different target time series. For a neural network $f_{\theta_{i}}$, the goal is to forecast the corresponding time series $X_{i}$ by minimizing the following loss function:

$L_{MSE}=\frac{1}{T-w}\sum_{t=w+1}^{T}(x^{t}_{i}-\hat{x}^{t}_{i})^{2}$ (4)

Figure 5: Overview of time series forecasting. There are $N$ independent neural networks, one for each time series $i$, to predict their causal relationships.

#### Causal Graph Discovery

After training the networks, the learnable Causal Attention Matrix $G$ can be used to interpret the causal relationships among time series. When $\alpha_{ij}$ exceeds a certain threshold $H$, we can infer that time series $X_{i}$ Granger causes $X_{j}$ and add an edge from $X_{i}$ to $X_{j}$ in the causal graph $\bm{\widetilde{G}}$, that is

$\bm{\widetilde{G}}_{ij}=\begin{cases}1,&\text{if}\ \alpha_{ij}>H\\ 0,&\text{else}\end{cases}$ (5)

Based on the definition of a causal graph, cycles are not allowed in the graph. Therefore, we need to incorporate a pruning strategy to remove spurious edges. We calculate the similarity of each edge, which involves computing the Pearson correlation between the two connected nodes. Subsequently, we iteratively eliminate the edge with the lowest similarity until $\bm{\widetilde{G}}$ transforms into the final causal graph $\hat{\textbf{G}}$.

### 4.3 Diagnosis Stage

In the diagnosis stage, we follow GrootRank (Wang et al. 2021) and apply PageRank with node-weight personalization to calculate the root cause ranking. Wang et al. (2021) observes that dangling nodes are more likely to be the root cause. Hence, we customize the personalization vector with $P_{d}$ and $P_{n}$, where $P_{d}$ is the personalization score for dangling nodes and $P_{n}$ is the score for the remaining nodes. In the case of tied rankings, we calculate the access distance from the trigger point to resolve the tie. We calculate the access distance (AD) differently from GrootRank:

$\bm{AD}=\begin{cases}D,&\text{distance from the trigger point to the variable}\\ 0,&\text{if any ``access'' is not reachable}\end{cases}$ (6)

In GrootRank, the authors set the distance of unreachable nodes as infinity and consider that a shorter access distance more likely indicates the root cause. However, this may contradict the observed phenomena, which suggest that dangling nodes are more likely to be the root cause. Hence, we set the distance of an unreachable node as $0$ and consider that a bigger access distance more likely indicates the root cause. Finally, RUN outputs the top-k root causes based on the root cause ranking.
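A compact sketch of this diagnosis step is given below, using networkx. It is only an illustration under stated assumptions: the causal graph is taken to be a networkx DiGraph, dangling nodes are identified by a zero out-degree, the access-distance tie-breaking is omitted, and the function and parameter names are our own.

```python
# Illustrative sketch of PageRank with a personalization vector for root cause
# ranking; the graph representation, dangling-node test and names are assumptions.
import networkx as nx

def rank_root_causes(causal_graph: nx.DiGraph, p_dangling: float = 1.0,
                     p_normal: float = 0.5) -> list:
    # Dangling nodes (no outgoing edges) receive a larger personalization score,
    # reflecting the observation that they are more likely to be root causes.
    personalization = {
        node: (p_dangling if causal_graph.out_degree(node) == 0 else p_normal)
        for node in causal_graph.nodes
    }
    scores = nx.pagerank(causal_graph, personalization=personalization)
    # Higher score first: the top-k entries are the recommended root causes.
    return sorted(scores, key=scores.get, reverse=True)
```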
## 5 Experiments

### 5.1 Experiment Settings

#### Dataset

As no publicly available real-world dataset for root cause analysis is accessible due to data confidentiality, we test on a synthetic dataset and a test bed utilizing an actual microservice-based application.

* • Synthetic data: To generate synthetic data, we follow RCD (Ikram et al. 2022) to randomly generate a DAG and generate conditional probability tables (CPT) for the normal-period dataset. Then, we inject failures by randomly choosing one node as the root cause and regenerating its CPT. Hence, we obtain the anomalous-period dataset and combine the two periods into one case. Finally, we also utilize an anomaly detection system to identify the trigger point. The data are generated with node counts of 10, 20, 30, 40, and 50, spanning over 2,000 timestamps.
* • Sock-shop (Daniel Holbach 2022): The sock-shop framework encompasses a total of 13 microservices, each developed using distinct technologies. These microservices are deployed on individual virtual machines or containers, and communication among them occurs through HTTP-based API requests. Additionally, these microservices offer a substantial volume of statistical data in the form of various metrics such as CPU and memory utilization. Within the sock-shop dataset, ten distinct categories of root causes are present, each comprising five instances.

#### Baselines

We compare our performance with the following baseline approaches:

* • $\epsilon$-Diagnosis (Shan et al. 2019): $\epsilon$-Diagnosis analyzes the time series of KPIs and calculates the similarity between them during normal and abnormal periods, which serves as an indicator for anomalies. If the similarity of a particular KPI between the normal and anomalous periods falls below a certain confidence threshold, indicating significant changes in that KPI before and after the anomaly, it is considered a candidate root cause.
* • AutoMAP (Ma et al. 2020): AutoMAP creates a weighted causal graph utilizing an adapted PC algorithm. In this graph, the weight of an edge between two microservices reflects the degree of dependence of a performance metric. The process concludes by navigating nodes through a random walk algorithm and identifying the root cause via the correlation metric from the weighted causal graph.
* • RCD (Ikram et al. 2022): RCD treats failures as interventions on the nodes representing root causes, and utilizes a localized hierarchical learning algorithm to identify these root causes of failure. It also employs a divide-and-conquer scheme to decrease the time required for inferring the root cause from the entire graph.
* • $\Psi$-PC: $\Psi$-PC is a specific instance of RCD. $\Psi$-PC acquires the entire causal graph, which might not be essential for root cause identification; additionally, its localized learning demands extra time.
* • CausalRCA (Xin, Chen, and Zhao 2023): CausalRCA uses a gradient-based causal structure learning method to generate weighted causal graphs and a root cause inference method to localize root cause metrics.

Dataset | # of Time Series | # of Timestamps
---|---|---
Synthetic data | 10, 20, 30, 40, 50 | 2000
Sock-shop | 38 | 600

Table 1: Statistics of the synthetic and sock-shop datasets.

CPU hog:

Metric | Service | $\epsilon$-Diag. | AMAP | $\Psi$-PC* | RCD* | CausalRCA | RUN
---|---|---|---|---|---|---|---
HR@1 | Carts | 0.20 | 0.05 | 0.02 | 0.00 | 0.00 | 0.20
HR@1 | Catalogue | 0.20 | 0.22 | 0.02 | 0.08 | 0.00 | 0.40
HR@1 | Orders | 0.20 | 0.01 | 0.06 | 0.12 | 0.80 | 0.00
HR@1 | Payment | 0.60 | 0.07 | 0.06 | 0.04 | 0.00 | 0.40
HR@1 | User | 0.20 | 0.29 | 0.06 | 0.12 | 0.20 | 0.20
HR@1 | Avg. | 0.28 | 0.13 | 0.04 | 0.07 | 0.04 | 0.40
HR@3 | Carts | 0.40 | 0.23 | 0.04 | 0.04 | 0.00 | 0.40
HR@3 | Catalogue | 0.20 | 0.22 | 0.04 | 0.12 | 0.00 | 0.60
HR@3 | Orders | 0.20 | 0.36 | 0.06 | 0.30 | 0.00 | 0.80
HR@3 | Payment | 0.60 | 0.07 | 0.06 | 0.12 | 0.00 | 0.40
HR@3 | User | 0.20 | 0.53 | 0.04 | 0.12 | 0.40 | 0.20
HR@3 | Avg. | 0.32 | 0.28 | 0.05 | 0.16 | 0.08 | 0.48
HR@5 | Carts | 0.40 | 0.24 | 0.08 | 0.22 | 0.20 | 0.40
HR@5 | Catalogue | 0.20 | 0.23 | 0.06 | 0.08 | 0.40 | 0.80
HR@5 | Orders | 0.20 | 0.51 | 0.06 | 0.36 | 0.00 | 0.80
HR@5 | Payment | 0.60 | 0.07 | 0.06 | 0.06 | 0.00 | 0.40
HR@5 | User | 0.20 | 0.55 | 0.06 | 0.34 | 0.40 | 0.20
HR@5 | Avg. | 0.32 | 0.32 | 0.06 | 0.21 | 0.20 | 0.52

Memory leak:

Metric | Service | $\epsilon$-Diag. | AMAP | $\Psi$-PC* | RCD* | CausalRCA | RUN
---|---|---|---|---|---|---|---
HR@1 | Carts | 0.00 | 0.14 | 0.02 | 0.02 | 0.00 | 0.00
HR@1 | Catalogue | 0.40 | 0.00 | 0.10 | 0.06 | 0.00 | 0.00
HR@1 | Orders | 0.00 | 0.26 | 0.02 | 0.04 | 0.00 | 0.20
HR@1 | Payment | 0.20 | 0.01 | 0.02 | 0.02 | 0.00 | 0.00
HR@1 | User | 0.40 | 0.08 | 0.20 | 0.14 | 0.00 | 0.40
HR@1 | Avg. | 0.20 | 0.10 | 0.07 | 0.06 | 0.00 | 0.12
HR@3 | Carts | 0.00 | 0.20 | 0.02 | 0.06 | 0.00 | 0.20
HR@3 | Catalogue | 0.40 | 0.00 | 0.06 | 0.12 | 0.00 | 0.40
HR@3 | Orders | 0.00 | 0.37 | 0.06 | 0.14 | 0.00 | 0.20
HR@3 | Payment | 0.20 | 0.05 | 0.02 | 0.16 | 0.00 | 0.40
HR@3 | User | 0.40 | 0.27 | 0.16 | 0.28 | 0.20 | 0.40
HR@3 | Avg. | 0.20 | 0.14 | 0.06 | 0.15 | 0.04 | 0.32
HR@5 | Carts | 0.00 | 0.25 | 0.02 | 0.04 | 0.20 | 0.20
HR@5 | Catalogue | 0.40 | 0.00 | 0.08 | 0.12 | 0.00 | 0.60
HR@5 | Orders | 0.00 | 0.49 | 0.08 | 0.08 | 0.00 | 0.20
HR@5 | Payment | 0.20 | 0.11 | 0.04 | 0.10 | 0.20 | 0.40
HR@5 | User | 0.40 | 0.30 | 0.20 | 0.28 | 0.20 | 0.40
HR@5 | Avg. | 0.20 | 0.18 | 0.08 | 0.12 | 0.12 | 0.36

Table 2: HR@$k$ on sock-shop data for the CPU hog and Memory leak fault types. We note that $\Psi$-PC* and RCD* indicate different results compared with the original paper, obtained by directly rerunning the experiments from the official code.

#### Evaluation Metrics

We evaluate our solution by top-$k$ hit ratio (HR@$k$) and mean reciprocal rank (MRR).

* • HR@$k$ represents the probability of finding the correct root cause among the top-$k$ outputs.
* • MRR sums up the reciprocal of the rank of the root cause. If the root cause is not included in the output, its rank is deemed infinite, resulting in a score of zero.

#### Implementation Details

We implement our method on a machine with an AMD EPYC 7302 16-core CPU and NVIDIA RTX A5000 graphics cards. In the time series forecasting stage, the window size $w$ is set to $32$. We use the Adam (Kingma and Ba 2015) optimizer and set the learning rate to $0.001$ and the batch size to $128$. In the causal graph discovery stage, the threshold $H$ is set to $0.5$. The training epochs of the pre-training and fine-tuning stages are set to 50. In the diagnosis stage, we set the personalization vector values $P_{d}$ to 1 and $P_{n}$ to 0.5, similar to (Wang et al. 2021). The $k$ in HR@$k$ is set to 1, 3, and 5.

Service | CPU hog: $\epsilon$-Diag. | CPU hog: RUN | Memory leak: $\epsilon$-Diag. | Memory leak: RUN
---|---|---|---|---
Carts | 0.273 | 0.299 | 0.036 | 0.158
Catalogue | 0.203 | 0.537 | 0.412 | 0.207
Orders | 0.212 | 0.825 | 0.017 | 0.267
Payment | 0.613 | 0.494 | 0.437 | 0.164
User | 0.221 | 0.264 | 0.231 | 0.443
Avg. | 0.304 | 0.484 | 0.227 | 0.248

Table 3: MRR on sock-shop data.

### 5.2 Overall Performance

#### Sock-shop

Table 2 reports the HR@$k$ of RUN and the baselines, and shows that our proposed model achieves at least a 63% improvement for CPU hog and an 80% improvement for Memory leak compared with the best-performing baseline. Among the baselines, $\epsilon$-diagnosis performs best because the root cause in the sock-shop data exhibits distinct behaviors during normal and anomaly periods. AutoMAP's performance is suboptimal in some cases because it constructs the causal graph with the PC algorithm, which overlooks temporal dependency. Although we adhere to the settings used in RCD, $\Psi$-PC and RCD yield varying root causes in each run and perform worse on the sock-shop dataset. CausalRCA neglects temporal information within time series data, leading to suboptimal performance on sock-shop. The comparison of RUN and causal discovery-based approaches reveals the significance of temporal dependency.
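For reference, HR@$k$ and MRR as defined above can be computed from a ranked candidate list per failure case as in the following sketch (pure Python; all names and values are illustrative, not taken from the datasets).

```python
def hit_ratio_at_k(ranked_cases, true_roots, k):
    """HR@k: fraction of cases whose true root cause appears among the top-k candidates."""
    hits = sum(1 for ranked, truth in zip(ranked_cases, true_roots) if truth in ranked[:k])
    return hits / len(ranked_cases)

def mean_reciprocal_rank(ranked_cases, true_roots):
    """MRR: average of 1/rank of the true root cause; 0 if it is absent (rank = infinity)."""
    total = 0.0
    for ranked, truth in zip(ranked_cases, true_roots):
        total += 1.0 / (ranked.index(truth) + 1) if truth in ranked else 0.0
    return total / len(ranked_cases)

# Toy example: three failure cases, each with a ranked candidate list and a known root cause.
ranked = [["cpu_user", "mem_carts", "lat_orders"],
          ["mem_carts", "cpu_user", "lat_orders"],
          ["lat_orders", "cpu_user", "mem_carts"]]
truth = ["cpu_user", "cpu_user", "mem_carts"]
print(hit_ratio_at_k(ranked, truth, 1), mean_reciprocal_rank(ranked, truth))
```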
RUN leverages positive pairs to capture contextual information and applies neural Granger causal discovery to account for the temporal dimension. This underscores the importance of considering temporal dependency, particularly for time series data within microservice systems. As $\Psi$-PC and RCD cannot output a ranking of root causes, they are omitted from the MRR comparison. Table 3 compares the MRR of RUN and $\epsilon$-diagnosis. It shows that even though RUN is only second best in HR@1 for Memory leak, the true root cause still maintains a relatively high ranking. Furthermore, RUN outperforms $\epsilon$-diagnosis significantly, underscoring its strong performance in identifying CPU hog incidents.

Figure 6: HR@$k$ on the synthetic dataset; panels (a)–(c) show HR@1, HR@3 and HR@5.

Figure 7: Ablation study removing the pre-training stage and incorporating negative pairs in contrastive learning.

#### Synthetic Data

Figure 6 shows the HR@$k$ of RUN and the baselines on the synthetic data. $\epsilon$-diagnosis falls short because the synthetic data lack the distinctive behaviors during normal and anomaly periods that are characteristic of microservice-based data when anomalies occur. Since the synthetic dataset is generated using the generateCPT function in PyAgrum, it might not fully exhibit the characteristics of time series data. Both $\Psi$-PC and RCD, on the other hand, learn causal graphs using the PC algorithm from these synthetic datasets. As a result, they exhibit better performance on synthetic data than on the sock-shop data. Our model, in contrast, is specifically designed for root cause analysis within microservice systems containing abundant time series data, making it less suited to the synthetic dataset. Nonetheless, our approach still demonstrates competitive performance with both RCD and $\Psi$-PC.

### 5.3 Ablation Study

To dissect the contributions of the different design choices in RUN, we conducted an ablation study on the sock-shop data (given its microservice-based architecture) by omitting the pre-training stage and by adding negative pairs to the contrastive learning. The results are presented in Figure 7. As expected, removing the pre-training stage results in a notable reduction in the performance of RUN. This indicates that, when performing root cause analysis via neural Granger causal discovery, incorporating contextual information can effectively elevate performance. Nevertheless, incorporating negative pairs into the contrastive learning does not lead to a substantial decrease in performance. We theorize that this finding can be ascribed to the limited temporal extent of the sock-shop data, potentially constraining the capacity to identify multi-periodicity within the temporal domain.

### 5.4 Case Study: Neural Granger Causal Discovery

To investigate the efficacy of neural Granger causal discovery, we sample a causal graph in which each node has a pathway to the root cause. In Figure 8, the yellow node represents the trigger point and the red node represents the root cause. Figure 8 shows causal relationships associated with temporal order, as captured by Granger causality. To identify the root cause, each edge is reversed from the original causal graph for PageRank analysis.
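A minimal sketch of this diagnosis step, using networkx, is shown below. The personalization values follow the setting reported above ($P_{d}=1$ for dangling nodes, $P_{n}=0.5$ otherwise); the graph, node names and the interpretation of dangling nodes as nodes with no outgoing edges in the reversed graph are illustrative assumptions, not the authors' code.

```python
import networkx as nx

def rank_root_causes(causal_graph, trigger, p_d=1.0, p_n=0.5, top_k=5):
    """Personalized PageRank on the edge-reversed causal graph.

    Dangling nodes of the reversed graph (no outgoing edges, i.e. candidate root
    causes) receive personalization p_d, all other nodes p_n. Ties are broken by
    the access distance from the trigger point; unreachable nodes get 0 and a
    larger distance ranks higher, as described in the diagnosis stage.
    """
    rev = causal_graph.reverse(copy=True)                        # walk from effects toward causes
    person = {n: (p_d if rev.out_degree(n) == 0 else p_n) for n in rev.nodes}
    scores = nx.pagerank(rev, personalization=person)

    dist = nx.single_source_shortest_path_length(rev, trigger)   # access distance from the trigger
    access = {n: dist.get(n, 0) for n in rev.nodes}              # 0 if not reachable

    ranked = sorted(rev.nodes, key=lambda n: (scores[n], access[n]), reverse=True)
    return ranked[:top_k]

# Toy causal graph: root_cause -> service_a -> trigger, service_b -> trigger
g = nx.DiGraph([("root_cause", "service_a"), ("service_a", "trigger"), ("service_b", "trigger")])
print(rank_root_causes(g, "trigger", top_k=3))
```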
According to the observation from Wang et al. (2021), the root cause is a dangling node and impacts the trigger point. Our approach validates the observation and successfully identifies the root cause from the causal graph. It also illustrates the neural Granger causal discovery-based approach in explaining how to identify the root cause. Figure 8: Construct the causal graph from the multivariate time series. ## 6 Conclusion In this paper, we propose RUN for identifying the root causes in microservices. Existing works apply PC-algorithm on time series data which leads to overlooking the temporal dimension of data. Distinct from these works, our proposed method is able to capture the temporal dependency. Based on neural Granger causal discovery architecture, our model learns the graph structure using a time series forecasting model. However, previous works on neural Granger causal discovery did not leverage the inherent contextual information in time series data. Hence, contrastive learning without negative pairs is proposed to leverage the context of time series to enhance representations of time series. The quantitative analysis carried out on both synthetic and sock-shop datasets showcases the effectiveness of our proposed approach compared to state-of-the-art baselines. For our future work, we intend to enhance the scalability of our model to accommodate datasets of varying sizes. This improvement is crucial as it will allow us to not only build upon the experiments conducted on the sock-shop dataset but also extend our research to larger-scale and more representative datasets. These larger datasets will provide a more comprehensive understanding of our model’s performance in real-world scenarios, further enriching the depth and applicability of our findings. ## References * Chen et al. (2022) Chen, B.; Yu, L.; Luo, W.; Wu, C.; Li, M.; Tan, H.; Huang, J.; and Wan, Z. 2022\. Hybrid tree model for root cause analysis of wireless network fault localization. _Web Intell._ , 20(3): 213–223. * Chen and He (2021) Chen, X.; and He, K. 2021. Exploring Simple Siamese Representation Learning. In _CVPR_ , 15750–15758. Computer Vision Foundation / IEEE. * Cheng et al. (2023) Cheng, Y.; Yang, R.; Xiao, T.; Li, Z.; Suo, J.; He, K.; and Dai, Q. 2023. CUTS: Neural Causal Discovery from Irregular Time-Series Data. In _ICLR_. OpenReview.net. * Daniel Holbach (2022) Daniel Holbach. 2022. Sock-shop, a microservice demo application. GitHub repository. * Fan et al. (2023) Fan, C.; Wang, Y.; Zhang, Y.; and Ouyang, W. 2023. Interpretable Multi-Scale Neural Network for Granger Causality Discovery. In _ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , 1–5. IEEE. * Gao, Yao, and Chen (2021) Gao, T.; Yao, X.; and Chen, D. 2021. SimCSE: Simple Contrastive Learning of Sentence Embeddings. In _EMNLP (1)_ , 6894–6910. Association for Computational Linguistics. * Granger (1969) Granger, C. W. 1969. Investigating causal relations by econometric models and cross-spectral methods. _Econometrica: journal of the Econometric Society_ , 424–438. * Grill et al. (2020) Grill, J.; Strub, F.; Altché, F.; Tallec, C.; Richemond, P. H.; Buchatskaya, E.; Doersch, C.; Pires, B. Á.; Guo, Z.; Azar, M. G.; Piot, B.; Kavukcuoglu, K.; Munos, R.; and Valko, M. 2020. Bootstrap Your Own Latent - A New Approach to Self-Supervised Learning. In _NeurIPS_. * Ikram et al. (2022) Ikram, A.; Chakraborty, S.; Mitra, S.; Saini, S. K.; Bagchi, S.; and Kocaoglu, M. 2022. 
Root Cause Analysis of Failures in Microservices through Causal Discovery. In _NeurIPS_. * Kingma and Ba (2015) Kingma, D. P.; and Ba, J. 2015. Adam: A Method for Stochastic Optimization. In _ICLR (Poster)_. * Liu et al. (2021) Liu, D.; He, C.; Peng, X.; Lin, F.; Zhang, C.; Gong, S.; Li, Z.; Ou, J.; and Wu, Z. 2021. MicroHECL: High-Efficient Root Cause Localization in Large-Scale Microservice Systems. In _ICSE (SEIP)_, 338–347. IEEE. * Ma et al. (2020) Ma, M.; Xu, J.; Wang, Y.; Chen, P.; Zhang, Z.; and Wang, P. 2020. AutoMAP: Diagnose Your Microservice-based Web Applications Automatically. In _WWW_, 246–258. ACM / IW3C2. * Nauta, Bucur, and Seifert (2019) Nauta, M.; Bucur, D.; and Seifert, C. 2019. Causal Discovery with Attention-Based Convolutional Neural Networks. _Mach. Learn. Knowl. Extr._, 1(1): 312–340. * Oliveira, Miguéis, and de Moura Borges (2023) Oliveira, E. E.; Miguéis, V. L.; and de Moura Borges, J. L. C. 2023. Automatic root cause analysis in manufacturing: an overview & conceptualization. _J. Intell. Manuf._, 34(5): 2061–2078. * Shan et al. (2019) Shan, H.; Chen, Y.; Liu, H.; Zhang, Y.; Xiao, X.; He, X.; Li, M.; and Ding, W. 2019. $\epsilon$-Diagnosis: Unsupervised and Real-time Diagnosis of Small-window Long-tail Latency in Large-scale Microservice Platforms. In _WWW_, 3215–3222. ACM. * Soldani, Forti, and Brogi (2022) Soldani, J.; Forti, S.; and Brogi, A. 2022. Failure Root Cause Analysis for Microservices, Explained. In _DAIS_, volume 13272 of _Lecture Notes in Computer Science_, 74–91. Springer. * Tank et al. (2022) Tank, A.; Covert, I.; Foti, N. J.; Shojaie, A.; and Fox, E. B. 2022. Neural Granger Causality. _IEEE Trans. Pattern Anal. Mach. Intell._, 44(8): 4267–4279. * Wang et al. (2023) Wang, D.; Chen, Z.; Ni, J.; Tong, L.; Wang, Z.; Fu, Y.; and Chen, H. 2023. Hierarchical Graph Neural Networks for Causal Discovery and Root Cause Localization. _CoRR_, abs/2302.01987. * Wang et al. (2021) Wang, H.; Wu, Z.; Jiang, H.; Huang, Y.; Wang, J.; Köprü, S.; and Xie, T. 2021. Groot: An Event-graph-based Approach for Root Cause Analysis in Industrial Settings. In _ASE_, 419–429. IEEE. * Xin, Chen, and Zhao (2023) Xin, R.; Chen, P.; and Zhao, Z. 2023. CausalRCA: Causal inference based precise fine-grained root cause localization for microservice applications. _J. Syst. Softw._, 203: 111724. * Yue et al. (2022) Yue, Z.; Wang, Y.; Duan, J.; Yang, T.; Huang, C.; Tong, Y.; and Xu, B. 2022. TS2Vec: Towards Universal Representation of Time Series. In _AAAI_, 8980–8987. AAAI Press. * Zeng et al. (2023) Zeng, A.; Chen, M.; Zhang, L.; and Xu, Q. 2023. Are Transformers Effective for Time Series Forecasting? In _AAAI_, 11121–11128. AAAI Press. * Zhang et al. (2022) Zhang, C.; Peng, X.; Sha, C.; Zhang, K.; Fu, Z.; Wu, X.; Lin, Q.; and Zhang, D. 2022. DeepTraLog: Trace-Log Combined Microservice Anomaly Detection through Graph-based Deep Learning. In _ICSE_, 623–634. ACM.
# Multiwavelength investigation of the candidate Galactic PeVatron MGRO J1908+06 S. Crestan1,2, A. Giuliani2, S. Mereghetti2, L. Sidoli2, F. Pintore3, N. La Palombara2 1 Università degli Studi dell’Insubria, Via Valleggio 11, 22100 Como, Italy 2 INAF – IASF Milano, Via A. Corti 12, 20133 Milano, Italy 3 INAF – IASF Palermo, Via U. La Malfa 153, 90146 Palermo, Italy E-mail<EMAIL_ADDRESS> (Accepted XXX. Received YYY; in original form ZZZ) ###### Abstract The candidate PeVatron MGRO J1908+06, which shows a hard spectrum beyond 100 TeV, is one of the most peculiar $\gamma$-ray sources in the Galactic plane. Its complex morphology and some possible counterparts spatially related with the VHE emission region, preclude to distinguish between a hadronic or leptonic nature of the $\gamma$-ray emission. In this paper we illustrate a new multiwavelength analysis of MGRO J1908+06, with the aim to shed light on its nature and the origin of its ultra high-energy emission. We performed an analysis of the 12CO and 13CO molecular line emission demonstrating the presence of dense molecular clouds spatially correlated with the source region. We also analyzed 12-years of Fermi-LAT data between 10 GeV and 1 TeV finding a counterpart with a hard spectrum ($\Gamma\sim 1.6$). Our reanalysis of XMM-Newton data allowed us to put a more stringent constraint on the X-ray flux from this source. We demonstrate that a single accelerator cannot explain the whole set of multiwavelength data, regardless of whether it accelerates protons or electrons, but a 2-zone model is needed to explain the emission from MGRO J1908+06. The VHE emission seems most likely the superposition of a TeV PWN powered by PSR J1907+0602, in the southern part, and of the interaction between the supernova remnant G40.5-0.5 and the molecular clouds towards the northern region. ###### keywords: ISM: individual object: MGRO J1908+06 – ISM: clouds – ISM: cosmic rays – ISM: supernova remnants ††pubyear: 2020††pagerange: Multiwavelength investigation of the candidate Galactic PeVatron MGRO J1908+06–References ## 1 Introduction Cosmic rays (CRs) are high-energy atomic nuclei (90% protons) moving through space at nearly the speed of light. The local CR proton spectrum is well described by a power-law up to the so-called ‘knee’ around 1 PeV ($=10^{15}$ eV), indicating the existence of powerful proton accelerators (‘PeVatrons’) residing in our Galaxy. However, despite decades of efforts, no specific Galactic source has been securely identified as a proton PeVatron, with the possible exception of the Galactic Centre (HESS Collaboration et al., 2016; H. E. S. S. Collaboration et al., 2018; MAGIC Collaboration et al., 2020). The association of a $\gamma$-ray source with a PeVatron can be indicated by the absence of an exponential cut-off in the $\gamma$-ray spectrum below 100 TeV, as expected when CRs are accelerated up to PeV energies and produce $\gamma$-rays interacting with the ambient material. The source MGRO J1908+06 is one of the best Galactic PeVatron candidates, thanks to its hard spectrum reaching energies above 100 TeV as observed by HAWC (Abeysekara et al., 2020), and with no evidence of a cut-off. It was discovered by the MILAGRO collaboration (Abdo et al., 2007) and later confirmed with the HESS atmospheric Cherenkov telescope (Aharonian et al., 2009). These authors reported the source detection above 300 GeV, with a large angular size ($\sigma=0.34$°) and a hard spectrum with a photon index of $2.1$. 
MGRO J1908+06 was observed at very high energy also with VERITAS (Aliu et al., 2014), which revealed an extended morphology ($\sigma=0.44$°) with three peaks of emission and a spectrum with photon index of $2.2$. Observations of this sky region with HAWC revealed emission up to 100 TeV spatially consistent with the VERITAS error box of MGRO J1908+06, indicating the presence of very energetic particles (Abeysekara et al., 2020). Due to its complex spatial structure, difficult to study with the limited angular resolution of current instruments, the origin of the $\gamma$-ray emission from MGRO J1908+06 is still uncertain. As discussed below, a few counterparts (the supernova remnant (SNR) G40.5-0.5, dense molecular clouds illuminated by cosmic rays from the nearby SNR, and the pulsar PSR J1907+0602) are compatible with the $\gamma$-ray source error box ($\sim 0.5$°), preventing a secure identification of this extreme accelerator and making it difficult to distinguish between hadronic or leptonic interpretations of its emission. This source has been studied also at radio wavelength by Duvidovich et al. (2020). These authors reported the presence of molecular material toward the SNR. In this work we investigate the molecular cloud environment of MGRO J1908+06 using data from molecular line radio surveys, modelling its multiwavelength spectral energy distribution using published TeV spectra and our re-analysis of Fermi-LAT and XMM-Newton data. Figure 1: VLA Galactic Plane Survey at 1.4 GHz (Stil et al., 2006) map covering the entire source MGRO J1908+06. The white solid lines are the VHE contours taken from VERITAS (Aliu et al., 2014), where the significance level ranges from 3 to 5.2. The yellow and cyan crosses indicate the positions of PSR J1907+0631 and PSR J1907+0602, respectively. The thin dashed magenta line indicates the SNR G40.5-0.5 shell, while the red, green and orange dashed lines mark the VERITAS emission lobes to which we refer in the following. ## 2 Possible counterparts of MGRO J1908+06 Several possible counterparts are present in the region of MGRO J1908+06: in Fig. 1 we show the continuum emission at 1.4 GHz obtained in the VLA Galactic Plane Survey (VGPS - (Stil et al., 2006)) with superimposed the TeV contours obtained by VERITAS (Aliu et al., 2014). The radio data show the shell-like SNR G40.05-0.5, which is much brighter in the northern region. The positions of the two pulsars PSR J1907+0602 and PSR J1907+0631 are also indicated. G40.05-0.5 is a middle aged supernova remnant, with estimated age between 20 and 40 kyr (Downes et al., 1980) and uncertain distance. The $\Sigma-D$ relation gives a distance between 5.5 and 8.5 kpc (Downes et al., 1980). A more accurate estimate can be obtained from the HI absorption spectrum (see Ranasinghe & Leahy 2017 for details) but, as reported by Duvidovich et al. (2020), for SNR G40.5-0.5 this method resulted in a very noisy spectrum, probably due to the fact that the neutral gas is patchy. The relatively young and energetic pulsar PSR J1907+0631 (characteristic age $\tau$=11 kyr, spin- down luminosity $\sim$5$\times 10^{35}$ erg s-1) lies close to the centre of the SNR (Lyne et al., 2017). Its dispersion measure (DM) implies a distance of 7.9 kpc (Cordes & Lazio, 2002), compatible with the range estimated for G40.05-0.5 and suggesting an association between these two objects. Duvidovich et al. 
(2020) showed that, in principle, PSR J1907+0631 could power the whole TeV source, since this would require a conversion efficiency from rotational energy to $\gamma$-rays of about 3%, in line with the efficiency $\leq 10\%$ of other known TeV sources associated to pulsar wind nebulae (Gallant, 2007). However, this hypothesis is disfavoured by the pulsar position slightly outside the TeV contours and significantly offset from the centroid of the $\gamma$-ray emission. The $\gamma$-ray loud pulsar PSR J1907+0602, discovered with Fermi-LAT (Abdo et al., 2010), is located in the southern part of MGRO J1908+06, slightly offset from the peak of the $\gamma$-ray excess counts. This pulsar has a characteristic age of 19.5 kyr and a spin-down luminosity of $\sim$3$\times 10^{36}$ erg s-1. The source distance was estimated to be 3.2 kpc (Abdo et al., 2010), as derived from the DM of $\sim 82$ pc cm-3 with the electron distribution model of Cordes & Lazio (2002). Another pulsar (PSR J1905+0600, not marked in Fig. 1) lies in this region, but its distance of $\sim$18 kpc and large characteristic age of 6 Mys (Hobbs et al., 2004) exclude it as a possible counterpart. In conclusion, the two main candidates for the $\gamma$-ray emission from MGRO J1908+06 are the SNR G40.5-0.5, near the northern border of the TeV source, and PSR J1907+0602, lying in the southern region. ## 3 Data analysis and results ### 3.1 CO Analysis We have investigated the distribution of the CO gas in the environment of MGRO J1908+06 in order to identify molecular material in spatial correlation with the SNR and the $\gamma$-ray emission. Any such association would be relevant for hadronic models to explain the TeV emission and it would also provide some information on the source distance. We used the molecular line emission extracted from the FOREST Unbiased Galactic Plane Imaging (FUGIN) survey111Available at http://jvo.nao.ac.jp/portal/. This project aims at investigating the distribution, kinematics, and physical properties of both diffuse gas and dense molecular clouds in the Galaxy by observing simultaneously the 12CO, 13CO, and 18CO J=1-0 lines. This survey achieves the highest angular resolution to date ($\sim$20″) for the Galactic plane, making it possible to find dense clumps located at farther distances than those seen in previous surveys. Figure 2: The 12CO (top) and 13CO (bottom) summed spectra in the region of MGRO J1908+06. The velocity interval between the two dashed lines (58–78 km s-1) represents the bulk of the emission, while the red zone marks the velocity range between 58 and 62 km s-1 (shown in Fig. 3) that is the velocity range considered for the molecular cloud analysis (section 3.1). We recovered the spectra in brightness temperature $T_{B}$ as a function of the local standard of rest velocity ($V_{LSR}$) for the whole region corresponding to the $3\sigma$ contours of the TeV emission, both in 12CO and 13CO. As shown in Fig. 2, the bulk of the emission is concentrated between 50 and 80 km s-1. We plot in Fig. 3 the 12CO and 13CO molecular line emission integrated from 58 to 62 km s-1. The contours presented in the figure are those of the VERITAS TeV emission (Aliu et al., 2014) and of the SNR G40.5-0.5 at 1.4 GHz from the VGPS. We denote the three maxima of $\gamma$-ray emission as lobes A, B, and C (see Fig. 1). The maps of Fig. 3 show that lobe A overlaps with CO emission, lobe B partially overlaps with CO emissions, while no obvious molecular clouds association is seen for lobe C. 
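As an illustration of how such velocity-integrated maps can be produced from the survey cubes, the following is a minimal sketch using the spectral-cube Python package; the FUGIN file name and output name are placeholders, and the cube is assumed to carry (or be convertible to) a velocity axis.

```python
from astropy import units as u
from spectral_cube import SpectralCube

# Load a 12CO(J=1-0) data cube (file name is a placeholder for the FUGIN FITS cube)
cube = SpectralCube.read("FUGIN_12CO_l40.fits")
# Convert the spectral axis to LSR velocity (assumes a rest frequency in the header)
cube = cube.with_spectral_unit(u.km / u.s, velocity_convention="radio")

# Select the 58-62 km/s interval containing the clouds of interest
slab = cube.spectral_slab(58 * u.km / u.s, 62 * u.km / u.s)

# Integrated-intensity (moment-0) map, e.g. for comparison with the TeV and radio contours
mom0 = slab.moment(order=0)
mom0.write("12CO_mom0_58_62.fits", overwrite=True)
```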
Figure 3: Maps of 12CO (left) and 13CO (right) emission in the MGRO J1908+06 region integrated between 58–62 km s-1. The white solid lines are the same as in Fig. 1, while the green contours are the continuum emission from SNR G40.5-0.5 at 1.4 GHz. We concentrate on the molecular cloud in the 58–62 km s-1 velocity interval, as it overlaps both the A-B lobes and the southern border of the SNR. We obtain the distance of the cloud using the Galaxy rotation curve from Clemens (1985), with $R_{\sun}$= 8.5 kpc and $v_{\sun}=220$ km s-1. The first Galactic quadrant presents distance ambiguity for positive radial velocities, so adopting 60 km s-1, we obtain near and far distances of 3.0 and 9.4 kpc, respectively. To study the properties of the molecular gas, and in particular to estimate their density, we use the dendrogram technique (Rosolowsky et al., 2008). A dendrogram is a topological representation of the significant local maxima in N-dimensional intensity data and the way these local maxima are connected along contours (or isosurfaces) of constant intensity. A local maximum, by definition, has a small region around it containing no data greater than its value and, hence, a distinct isosurface containing only that local maximum can be drawn. The local maxima determines the top level of the dendrogram, which we refer to as the “leaves” , defined as the set of isosurfaces that contain a single local maximum. We identify and characterize molecular clouds in the CO data cube between 58–62 km s-1 using astrodendro222Available at http://www.dendrograms.org/. This python algorithm efficiently constructs a dendrogram representation of all the emission in the selected region. The minimum value to consider (any value lower than this will not be considered in the dendrogram) is set as the “detection level”, namely 5$~{}\sigma_{T}$, where $\sigma_{T}$ is the median RMS noise level in the dataset, so that only significant values are included in the dendrogram (T${}_{\rm min}=3$ K). Another consideration is about how significant a leaf has to be in order to be considered an independent entity. The significance of a leaf is measured from the difference between its peak flux and the value at which it is being merged into the tree. This parameter is set to 1 $\sigma_{T}$, which means that any leaf that is locally less than 1 $\sigma_{T}$ high is combined with its neighboring leaf (or branch) and is no longer considered as a separate entity. Once an index of structures in the data has been produced by the algorithm, it can be used to catalog the properties of each structure, such as integrated intensity, centroid position, spatial position angle, spatial extent, and spectral line-width. We estimate the luminosity based on the zeroth moment, i.e., the sum of the intensity, and then translate the moments into estimates of physical quantities. For these calculations, we consider the pixels in a cloud mask $\mathcal{M}$, i.e. only the pixels belonging to a single cloud identified by the segmentation algorithm. We measure the luminosity of each cloud as: $L_{\rm CO}=A_{\rm pix}\Delta v\sum_{i}T_{i}$ (1) where $A_{\rm pix}$ is the projected physical area of a cube pixel in pc2, $\Delta v$ = 4 km s-1 is the channel width, and $T_{i}$ is the brightness of the cube pixels measured in K in the cloud mask $\mathcal{M}$. We convert from luminosity to mass, scaling the extrapolated luminosity through the CO-to-H2 conversion factor, $\alpha_{\rm CO}$. 
$M_{\rm CO}=L_{\rm CO}\alpha_{\rm CO}$ (2) where we take $\alpha_{\rm CO}$= 4.35 $M_{\sun}$ pc-2 (km s-1 K)-1 at solar metallicity (Bolatto et al., 2013). To measure cloud radii we convert from the deconvolved major and minor sizes, $\sigma_{\rm maj}$ and $\sigma_{\rm min}$, to a cloud radius measurement using: $R=\eta\sqrt{\sigma_{\rm maj}\sigma_{\rm min}}$ (3) The factor $\eta$ depends on the light or mass distribution within the cloud. We adopt $\eta$=1.91 following Rosolowsky & Leroy (2006). Our model approximates the cloud as a spherically symmetric object so that R also characterizes the object in three dimensions. Therefore, we do not apply any inclination corrections to R. The resulting mean cloud density is $\sim 180$ particles cm-3 assuming a distance of 3 kpc, while it is $\sim 60$ particles cm-3 assuming 9 kpc. ### 3.2 Fermi-LAT data analysis We analyzed 12 years of Fermi-LAT data, obtained from 2008-09-01 to 2020-12-16, exploiting the Pass 8 data processing (P8R3) with the public fermitools (v2.0.0) and fermipy packages (v1.0.0). We selected the Pass 8 ‘source’ class and ‘front+back’ type events coming from zenith angles smaller than 90° and from a circular region of interest (ROI) with radius of 10° centered at R.A. = 286.97° and Dec. = 6.03° (J2000). The instrument response function version P8R3-SOURCE-V3 was used. We selected only the events in the 10 GeV–1 TeV energy range, to avoid the contribution from PSR J1907+0602 (see fig. 4 of Abdo et al. 2010). We included in the background model all the sources from the 4FGL catalog within the ROI, as well as the Galactic (gll- iem-v07.fits) and the isotropic (P8R3-SOURCE-V3-v1) diffuse components. We performed a binned analysis with five bins per energy decade and spatial pixel size of 0.05° . In the maximum likelihood fitting, the normalization parameter of all the sources within 3° of the ROI centre, as well as the diffuse emission components, were left free to vary. Instead the parameters of all the other sources at more than 3° were fixed to the values given in the 4FGL catalog (Abdollahi et al., 2020). To describe the spatial morphology of MGRO J1908+06, we used the VERITAS emission region at 3$\sigma$ level (i.e. the outermost contour in Fig. 1 ), while for the spectral model we assumed a power law with photon index $\Gamma=1.6$. This leads to a detection significance $\sqrt{TS}\sim 6$ in the energy band considered. The $\gamma$-ray flux was obtained by binning the $\gamma$-ray data in the range from 10 to 1000 GeV into four energy intervals, and performing a binned likelihood analysis in each energy bin. The resulting Fermi-LAT spectral energy distribution is plotted in Fig. 4. ### 3.3 X-ray Analysis To study the X-ray emission in the vicinity of PSR J1907+0602 we used a 52 ks long observation carried out on 2010 April 26 with the XMM-Newton satellite. We analyzed the data of the EPIC-MOS instrument that was operated in full frame imaging mode and with the medium thickness optical filter. We excluded time intervals with high background, resulting in net exposure times of 36 and 38 ks for the two MOS cameras. Using the Extended Source Analysis Software (ESAS333https://heasarc.gsfc.nasa.gov/docs/xmm/esas/cookbook/xmm-esas.html), we extracted the spectra from a circular region of 5 arcmin radius centered at the position of PSR J1907+0602 (excluding a circle of 30″radius around the source) and from a concentric annular region with radii 5 and 12.5 arcmin. 
The latter was used to estimate the X-ray background (which in this sky region is dominated by the Galactic Ridge diffuse emission). Comparison of the two spectra showed no evidence for diffuse emission associated with PSR J1907+0602, with an upper limit (at 95% c.l.) of 1.2$\times 10^{-15}$ erg cm-2 s-1 arcmin-2 on the surface brightness in the 1–10 keV energy range. ## 4 Origin of the $\gamma$-ray emission Emission at TeV energies indicates the presence of ultra-relativistic particles which, in principle, can produce it through Inverse Compton (IC) scattering of the CMB, IR and/or starlight seed photons by electrons, or through the decay of neutral pions resulting from proton-proton (and/or other nuclei) interactions. In this Section, we first explore the possibility that a single mechanism is responsible for the emission from the whole trilobed region, in either the leptonic or the hadronic scenario. We then consider the possibility of a two-zone model, in which both components (hadronic and leptonic) are present. This scenario can arise from a single source able to efficiently accelerate both protons and electrons, or from two sources lying in the same sky region. In the following, we use the Fermi-LAT and XMM-Newton results derived as described above and the TeV spectra obtained with VERITAS (Aliu et al., 2014) and MILAGRO (Abdo et al., 2007). We have not used the HAWC data reported in Abeysekara et al. (2020) for our fit, because they are inconsistent with both the VERITAS and HESS data in an energy range (1–10 TeV) where these instruments have proven reliable. HESS data were also not considered in the fitting procedure, as they are fully compatible with the VERITAS ones. This discrepancy is explained in Abeysekara et al. (2020) as a consequence of the larger source extent observed by HAWC. However, we note that our model reproduces quite well the slope of the HAWC spectrum at energies of around 100 TeV, hence in the range where there is consistency between the results reported in Abeysekara et al. (2020) and Abeysekara et al. (2017). The X-ray upper limit, derived for a region of about 75 arcmin2, was rescaled for the larger area enclosed in the 3$\sigma$ contours of the TeV emission ($\sim$ 1400 arcmin2). This is a conservative assumption, because any diffuse X-ray emission produced by particles accelerated by the pulsar would likely decrease in intensity at larger distances. ### 4.1 1-zone leptonic hypothesis In the leptonic scenario, we assume that the whole emission from MGRO J1908+06 is due to a population of relativistic electrons interacting, through IC, with the ISRF photons. These electrons can be supplied by the energetic pulsar PSR J1907+0602 or by the SNR G40.5-0.5. In order to infer the properties of the parent particle distribution, we fit the multiwavelength SED through a Markov Chain Monte Carlo (MCMC) procedure using the naima python package (see Zabalza 2015 for a detailed description of the fitting procedure and of the IC radiative model). We modeled the three dominant photon fields with energy densities fixed at $\epsilon_{\rm CMB}$ = 0.261 eV cm-3 for the Cosmic Microwave Background, $\epsilon_{\rm FIR}$ = 0.5 eV cm-3 for the far-infrared dust emission, and $\epsilon_{\rm NIR}$ = 1.0 eV cm-3 for the near-infrared stellar emission. The electron distribution was modeled as a broken power law with an exponential cut-off, with all the parameters free to vary during the fitting procedure. We obtained a best fit (see Fig.
4) with the parameters for the electron distribution shown in Table 1. The high-energy electrons responsible for the TeV emission interact also with the ambient magnetic field producing synchrotron radiation. We found that the electron population obtained in our best fit can be reconciled with the XMM- Newton upper limit in the X-ray band only for ambient magnetic fields (B) smaller than 1.2 $\mu$G. This limit is rather small compared to the typical value of the Galactic magnetic field of 5 $\mu$G (Haverkorn, 2015). If we assume a magnetic field of 5 $\mu$G and the same spectral slope of the best fit, the normalization of the electrons population must be reduced by a factor $\sim$15 to be consistent with the X-ray upper limit (see Fig. 5). This results shows that a 1-zone leptonic model alone cannot explain the whole multiwavelength set of data. Figure 4: Multiwavelength SED of MGRO J1908+06 fitted with the leptonic scenario (Inverse Compton + synchrotron) with B=1.2 $\mu$G. The black line shows the total IC emission, whose components are indicated by the thin solid, dashed and dotted black line for the CMB, FIR and NIR, respectively. The red solid line is for the synchrotron emission. The XMM-Newton upper limit (black arrow) as well as the Fermi-LAT data (red squares) are from this work. The other data are taken from Aliu et al. (2014) (VERITAS - green diamonds) and Abdo et al. (2007) (MILAGRO - pink hexagons). The blue and gray butterflies are the HAWC models from Abeysekara et al. (2020) and from Abeysekara et al. (2017), plotted for comparison, but not used in the fit. Figure 5: Same as Fig. 4, but with B=5 $\mu$G and the electron population normalization reduced by a factor 15. . ### 4.2 1-zone hadronic hypothesis In the hadronic scenario we assume that a population of relativistic protons interacts with dense interstellar material and produces TeV photons via pion decay. A good candidate for the acceleration of these protons is the SNR G40.5-0.5. In fact, our analysis of the molecular gas around the SNR demonstrated the presence of molecular clouds in good spatial correlation with the SNR shell. We used naima to fit the $\gamma$-ray SED, assuming a proton distribution described by a broken power law with exponential cut-off and pion decay as radiative model. The parameters of the best fit, shown in Fig. 6, are given in Table 1. With the average densities of the clouds derived in section 3.1, $180$ or $60$ particles cm-3 depending on the considered distance, the total proton energy required by the fit is $7\times 10^{47}$ erg or $2\times 10^{49}$ erg, respectively. Figure 6: MGRO J1908+06 hadronic emission model. The black line shows the Pion Decay best fit model. Data are the same as in Fig. 4 ### 4.3 2-zone model Considering the complex spatial distribution of the VHE emission, it can not be excluded that both a TeV PWN powered by PSR J1907+0602 and hadronic processes associated to the SNR G40.5-0.5 contribute to the $\gamma$-ray emission observed from this sky region. Therefore, we have also explored a hybrid emission model in which the TeV emission is due to the superposition of leptonic and hadronic components from these two sources. Of course, in this scenario, MGRO J1908+06 might consist of two physically separated sources, not necessarily at the same distance. Figure 7: MGRO J1908+06 2-zone emission model. The black line shows the total emission while the gray lines and the orange lines are for the IC and pion decay emission respectively. Other colors are the same as in Fig. 
4. The magnetic field is B=4 $\mu$G. To fit the spectral energy distribution we assume that the steep spectrum at GeV energies has a leptonic origin, while the hadronic emission is responsible for the softer part at TeV energies. We used naima to recover the radiative models from a particle distribution. We used an exponential cut-off broken power law both for electrons and protons. The resulting emission model is plotted in Fig. 7, while the assumed parameters are reported in Table 1. The recovered total proton energy for distances of 3 and 9 kpc is $4\times 10^{47}$ erg and $10^{49}$ erg, respectively, while the electron energy is $9\times 10^{46}$ erg.

Model | Component | d (kpc) | $\Gamma_{1}$ | $\Gamma_{2}$ | $W$ (erg) | $E_{0}$ (TeV) | $E_{b}$ (TeV) | $E_{c}$ (PeV)
---|---|---|---|---|---|---|---|---
1-zone | Leptonic | 3 | 1.0 $\pm$ 0.4 | 2.6 $\pm$ 0.1 | 2$\times 10^{47}$ | 10 | 2.7 $\pm$ 0.7 | 7.1 $\pm$ 6.0
1-zone | Hadronic | 3 | 1.0 $\pm$ 0.1 | 2.1 $\pm$ 0.1 | 7$\times 10^{47}$ | 30 | 2.8 $\pm$ 0.8 | 3.0 $\pm$ 0.9
1-zone | Hadronic | 9 | 1.1 $\pm$ 0.2 | 2.1 $\pm$ 0.1 | 2$\times 10^{49}$ | 30 | 3.4 $\pm$ 1.2 | 1.9 $\pm$ 0.5
2-component | Leptonic | 3 | 1.2 | 1.2 | 9$\times 10^{46}$ | 10 | 0.2 | 0.011
2-component | Hadronic | 3 | 1.6 | 2.0 | 4$\times 10^{47}$ | 30 | 200 | >1
2-component | Hadronic | 9 | 1.6 | 2.0 | 1$\times 10^{49}$ | 30 | 200 | >1

Table 1: Parameters for all emission models considered. $\Gamma_{1}$ and $\Gamma_{2}$ are the indices before and after the break ($E_{b}$), $W$ is the total particle energy, $E_{0}$ is the reference energy, and $E_{c}$ is the cut-off energy.

## 5 Discussion The association of MGRO J1908+06 with an offset relic PWN driven by PSR J1907+0602 was initially considered the most likely origin of the VHE emission (Abdo et al., 2010). Our results show that, for reasonable values of the ambient magnetic field, a leptonic emission model fitting the $\gamma$-ray spectrum from the whole source would produce a synchrotron X-ray flux incompatible with the upper limit in the few keV region. The leptonic interpretation is also disfavored by the spatial shape of the TeV emission, which extends far from the pulsar position and shows no evident sign of spectral softening with distance from the pulsar, as would be expected from electron cooling (Aliu et al., 2014). We further note that, due to the Klein-Nishina suppression of the IC cross section at high energies, a rather large value of the electron maximum energy is required to fit the $\gamma$-ray spectrum. Our analysis of the molecular gas around SNR G40.5-0.5 demonstrates the presence of molecular clouds in good spatial correlation with the SNR (see also Duvidovich et al. 2020) and motivates the exploration of a hadronic scenario. The cloud densities required by the best fit proton distribution used to reproduce the observed $\gamma$-ray spectrum are consistent with the ones that we derived from an analysis of the CO data, independent of whether the near (3 kpc) or far (9 kpc) source distance is adopted. However, the VHE emission of MGRO J1908+06 extends beyond the spatial distribution of the target material, with no obvious molecular cloud counterparts in the southern region. The very hard photon index $\Gamma_{1}$ required to fit the Fermi-LAT data (not seen in other TeV sources associated with SNRs) also disfavors a fully hadronic model for MGRO J1908+06.
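To illustrate how such a two-component model can be assembled, the following is a minimal sketch with the naima package used for the fits above. The spectral indices, break and cut-off energies follow the 2-component rows of Table 1; the particle amplitudes, seed-field temperatures and the electron/proton energy ranges are illustrative assumptions (the table quotes total energies rather than normalizations).

```python
import numpy as np
import astropy.units as u
from naima.models import ExponentialCutoffBrokenPowerLaw, InverseCompton, PionDecay

# Leptonic component (electrons); amplitude is illustrative and would be set by W_e
electrons = ExponentialCutoffBrokenPowerLaw(
    amplitude=1e33 / u.eV, e_0=10 * u.TeV, e_break=0.2 * u.TeV,
    alpha_1=1.2, alpha_2=1.2, e_cutoff=11 * u.TeV)
ic = InverseCompton(
    electrons,
    seed_photon_fields=["CMB",
                        ["FIR", 26.5 * u.K, 0.5 * u.eV / u.cm**3],   # temperature assumed
                        ["NIR", 2800 * u.K, 1.0 * u.eV / u.cm**3]])  # temperature assumed

# Hadronic component (protons) interacting with the clouds (n ~ 180 cm^-3 at 3 kpc)
protons = ExponentialCutoffBrokenPowerLaw(
    amplitude=1e34 / u.eV, e_0=30 * u.TeV, e_break=200 * u.TeV,
    alpha_1=1.6, alpha_2=2.0, e_cutoff=1000 * u.TeV)
pp = PionDecay(protons, nh=180 / u.cm**3)

# Total SED of the two-zone model at 3 kpc
energies = np.logspace(-2, 3, 100) * u.TeV
sed_total = ic.sed(energies, distance=3 * u.kpc) + pp.sed(energies, distance=3 * u.kpc)
```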
The difficulties discussed above for single zone models are easily solved assuming that the MGRO J1908+06 spectrum is a sum of two components (hadronic and leptonic). The available data do not allow us to perform a spatially- resolved spectral analysis. Thus we cannot exclude that these two components arise from different zones of the source (for example, a TeV PWN powered by PSR J1907+0602 could be responsible for the southern lobe, and the interaction between the SNR G40.5-0.5 and the molecular clouds for the northern part). An important result of this analysis is that for both the 1-zone hadronic model and the 2-component model the maximum energy of the emitting protons required to fit the gamma-ray spectrum is greater than 1 PeV. ## 6 Conclusions Our multiwavelength modeling of MGRO J1908+06 confirms that this source is one of the best Galactic PeVatron candidates. We found that single-zone models, although in principle justified by the presence of plausible counterparts in both the leptonic and hadronic scenarios (a pulsar wind nebula powered by PSR J1907+0602 or SNR G40.05-0.5 interacting with molecular clouds, respectively), run into problems to explain the multiwavelength and spatial morphology properties of MGRO J1908+06. Therefore a 2-zone model is preferred to describe the emission from this source. Spatially resolved data, as those that will be provided by the next generation of Cherenkov telescopes such as the upcoming ASTRI Mini-array and the Cherenkov Telescope Array (CTA), are needed to separate the emission components of this source. ## Acknowledgements This work made use of data from FUGIN, FOREST Unbiased Galactic Plane Imaging survey with the Nobeyama 45-m telescope, a legacy project in the Nobeyama 45-m radio telescope. ## DATA Availability The data underlying this article will be shared on reasonable request to the corresponding author. ## References * Abdo et al. (2007) Abdo A. A., et al., 2007, ApJ, 664, L91 * Abdo et al. (2010) Abdo A. A., et al., 2010, ApJ, 711, 64 * Abdollahi et al. (2020) Abdollahi S., et al., 2020, ApJS, 247, 33 * Abeysekara et al. (2017) Abeysekara A. U., et al., 2017, The Astrophysical Journal, 843, 40 * Abeysekara et al. (2020) Abeysekara A. U., et al., 2020, Phys. Rev. Lett., 124, 021102 * Aharonian et al. (2009) Aharonian F., et al., 2009, A&A, 499, 723 * Aliu et al. (2014) Aliu E., et al., 2014, ApJ, 787, 166 * Bolatto et al. (2013) Bolatto A. D., Wolfire M., Leroy A. K., 2013, ARA&A, 51, 207 * Clemens (1985) Clemens D. P., 1985, ApJ, 295, 422 * Cordes & Lazio (2002) Cordes J. M., Lazio T. J. W., 2002, arXiv e-prints, pp astro–ph/0207156 * Downes et al. (1980) Downes A. J. B., Pauls T., Salter C. J., 1980, A&A, 92, 47 * Duvidovich et al. (2020) Duvidovich L., Petriella A., Giacani E., 2020, MNRAS, 491, 5732 * Gallant (2007) Gallant Y. A., 2007, Astrophysics and Space Science, 309, 197–202 * H. E. S. S. Collaboration et al. (2018) H. E. S. S. Collaboration et al., 2018, A&A, 612, A9 * HESS Collaboration et al. (2016) HESS Collaboration et al., 2016, Nature, 531, 476 * Haverkorn (2015) Haverkorn M., 2015, in Lazarian A., de Gouveia Dal Pino E. M., Melioli C., eds, Vol. 407, Magnetic Fields in Diffuse Media. p. 483 (arXiv:1406.0283), doi:10.1007/978-3-662-44625-6_17 * Hobbs et al. (2004) Hobbs G., et al., 2004, MNRAS, 352, 1439 * Lyne et al. (2017) Lyne A. G., et al., 2017, ApJ, 834, 137 * MAGIC Collaboration et al. (2020) MAGIC Collaboration et al., 2020, A&A, 642, A190 * Ranasinghe & Leahy (2017) Ranasinghe S., Leahy D. 
A., 2017, ApJ, 843, 119 * Rosolowsky & Leroy (2006) Rosolowsky E., Leroy A., 2006, PASP, 118, 590 * Rosolowsky et al. (2008) Rosolowsky E. W., Pineda J. E., Kauffmann J., Goodman A. A., 2008, ApJ, 679, 1338 * Stil et al. (2006) Stil J. M., et al., 2006, The Astronomical Journal, 132, 1158–1176 * Zabalza (2015) Zabalza V., 2015, in 34th International Cosmic Ray Conference (ICRC2015). p. 922 (arXiv:1509.03319)
2022

# Skeleton Ground Truth Extraction: Methodology, Annotation Tool and Benchmarks

Cong Yang1 <EMAIL_ADDRESS>, Bipin Indurkhya, John See, Bo Gao, Yan Ke, Zeyd Boukhers, Zhenyu Yang, Marcin Grzegorzek

Affiliations: 1 Soochow University, Suzhou, China; 2 Jagiellonian University, Cracow, Poland; 3 Heriot-Watt University (Malaysia), Putrajaya, Malaysia; 4 Clobotics, Shanghai, China; 5 Fraunhofer FIT, Sankt Augustin, Germany; 6 Southeast University, Nanjing, China; 7 University of Lübeck, Lübeck, Germany; 8 University Hospital of Cologne, Cologne, Germany

###### Abstract

Skeleton Ground Truth (GT) is critical to the success of supervised skeleton extraction methods, especially with the popularity of deep learning techniques. Furthermore, we see skeleton GTs used not only for training skeleton detectors with Convolutional Neural Networks (CNN) but also for evaluating skeleton-related pruning and matching algorithms. However, most existing shape and image datasets suffer from the lack of skeleton GT and inconsistency of GT standards. As a result, it is difficult to evaluate and reproduce CNN-based skeleton detectors and algorithms on a fair basis. In this paper, we present a heuristic strategy for object skeleton GT extraction in binary shapes and natural images. Our strategy is built on an extended theory of diagnosticity hypothesis, which enables encoding human-in-the-loop GT extraction based on clues from the target’s context, simplicity, and completeness. Using this strategy, we developed a tool, SkeView, to generate skeleton GT of 17 existing shape and image datasets. The GTs are then structurally evaluated with representative methods to build viable baselines for fair comparisons. Experiments demonstrate that GTs generated by our strategy yield promising quality with respect to standard consistency, and also provide a balance between simplicity and completeness.

## 1 Introduction

Figure 1: Definition of skeletons and components at a higher level: (a) in a binary shape, (b) in a natural image, (c) endpoints (orange) and junction points (green), (d) skeleton branches in different colours.

Skeleton Ground Truth (GT) is critical to the success of supervised skeleton extraction in binary shapes Panichev2019 and natural images Wang2019DFS (hereafter referred to as “shape” and “image”, respectively, see Fig. 1 (a) and (b)). A number of modern skeleton detectors, _e.g._ AdaLSN Liu2020ALS and SkeletonNetV2 Nathan2021SAD , are based on Convolutional Neural Networks (CNN), which are trained using skeleton GTs from image and shape datasets, respectively. Moreover, skeleton GT is important to facilitate skeleton-related algorithms such as pruning Bai2007SPB , matching Bai2008PSS , and classification Bai2009ICA . In addition to skeletonization with morphological and geometrical operations Giesen2009TSA ; Liu2011EGT ; Telea2002AAF ; Jalba2015AUM ; Zhang1984AFP ; Ge1996OTG , skeleton GT extraction should also meet the eye-level view assumption Chaz2014PTT of skeleton simplicity and completeness in different domains. For clarity in terminology, commonly used skeleton components Bai2008PSS ; Bai2007SPB ; Cornea2007CPA and expressions Shen2013SPA are defined (see Fig. 1 (c) and (d)):

* • Endpoint: a skeleton point with only one adjacent point.
* • Junction point: a skeleton point with three or more adjacent points.
* • Connection point: a skeleton point that is neither an endpoint nor a junction point.
* • Skeleton branch: a sequence of connection points between two directly connected skeleton points.
* • Skeleton simplicity: higher skeleton simplicity means a simpler skeleton structure, e.g. a minimal number of branches.
* • Skeleton completeness: higher completeness means a finer-grained representation of object features, e.g. small branches correlated to shape boundary perturbations.

Figure 2: Object skeletons (from simple to complex) in various applications: (a) farmland ridge detection for agricultural robot navigation Li2018LSE ; Shokouh2021RDB , (b) character recognition Zhang2015STL ; Bag2011ROB , and (c) plant analysis Bucksch2014API ; Sharma2021PCO .

As presented in Fig. 2, the required complexity differs across real-world applications Saha2016ASO . For instance, in the scenario of farmland ridge detection for agricultural robot navigation, the ridge skeletons are relatively simple and close to curves. In contrast, plant root skeletons are primarily complex, thereby preserving root hairs and other details. To properly encode such requirements, skeleton GT extraction is normally addressed in a human-in-the-loop fashion Ilke2019SDA . In particular, an optimal skeleton GT requires a trade-off between its simplicity and completeness. Thus, following the convention in Chaz2014PTT ; Lowet2018SSS ; Cong2016IOS ; Bai2007SPB ; Shen2013SPA , a skeleton GT is a compromise on branch simplicity between domain requirements and human perception. An intuitive explanation of this trade-off is that a skeleton GT should satisfy the simplicity requirement of a given domain, while including a proper number of desirable branches (aka. completeness) to preserve object geometrical features. Otherwise, for instance, a skeleton with over-detailed branches could lead to a higher computational cost and to over-fitting problems in matching Bai2008PSS . However, one crucial limitation of existing skeleton GTs lies in the lack of clarity and inconsistency of standards.

Table 1: Comparison of skeleton GT in actively used shape and image datasets. S&I: Shape and Image. $\surd$ (Yes) and $\times$ (No) denote whether skeleton GT of the full dataset is publicly available. The Size column gives the number of images in each dataset.

Dataset | Type | Size | GT | Dataset | Type | Size | GT
---|---|---|---|---|---|---|---
Animal2000 Bai2009ICA | Shape | 2000 | $\times$ | ArticulatedShapes Haibin2007SCU | Shape | 40 | $\times$
SkelNetOn Ilke2019SDA | Shape | 1725 | $\surd$ | Kimia99 Sebastian2004ROS | Shape | 99 | $\times$
Kimia216 Sebastian2004ROS | Shape | 216 | $\times$ | MPEG7 Latecki2000SDF | Shape | 1400 | $\times$
MPEG400 Cong2014SBO | Shape | 400 | $\times$ | SwedishLeaves Soderkvist2001CVC | Shape | 1125 | $\times$
Tari56 Asian2005AAR | Shape | 56 | $\times$ | Tetrapod120 Cong2016OMW | Shape | 120 | $\times$
SK506 Shen2016OSE | Image | 506 | $\surd$ | SK1491 Shen2017DLM | Image | 1491 | $\surd$
SYMMAX300 Tsogkas2012LSD | Image | 300 | $\surd$ | SymPASCAL Ke2017SSR | Image | 1435 | $\surd$
EM200 Yang2014SCO | S&I | 200 | $\times$ | SmithsonianLeaves Haibin2007SCU | S&I | 343 | $\times$
WH-SYMMAX Shen2016MIS | S&I | 328 | $\surd$ | Our | S&I | All | $\surd$

Figure 3: Skeleton GTs in shape (SkelNetOn Ilke2019SDA and Kimia216 Sebastian2004ROS ) and image (WH-SYMMAX Shen2016MIS , SYMMAX300 Tsogkas2012LSD and SK506 Shen2016OSE ) datasets. $\star$: since there is no publicly available GT for Kimia216, we take the optimal pruning result of a horse shape presented in Bai2007SPB for comparison.

Lack of clarity: Skeleton GTs of most existing shape datasets are unclear.
As presented in Table 1, only two (SkelNetOn Ilke2019SDA and WH-SYMMAX Shen2016MIS ) of thirteen actively used shape datasets have publicly available skeleton GTs, though SkelNetOn is only accessible to the registered participates of the SkelNetOn Challenge Ilke2019SDA . For image datasets, skeleton GTs are semi-automatically extracted by object segmentation and skeletonization approaches Durix2019TPS ; Shen2011SGA . However, it is unclear whether humans have a similar and stable perception on simplicity and completeness, especially under different contexts from object foreground, background and shape. Context usually refers to the source of contextual associations to be exploited by the visual system Oliva2007TRO . A natural way of representing the context of an object is in terms of its relationship to other objects. In our case, object context is defined as an object’s foreground, background, and shape, which are primarily associated with the object skeleton. Theoretically, shape is part of the information in the foreground, while we can easily extract shape by binarizing and filling an object’s foreground. Here, we denote shape as an independent context since ten datasets contain only shapes without foreground (see Table 1). In short, there are two uncertainties: (1) it is unclear whether humans have a similar and stable perception of skeleton simplicity and completeness, and (2) it is unclear whether such a perception could be influenced by object foreground and background. Such uncertainties were not structurally studied in existing literature. They can have a tremendous impact on training CNN-based skeleton detectors, making it difficult to compare different skeleton-related algorithms. Inconsistency of standards: We observe glaring inconsistencies among various existing GTs: (1) GT skeletons among existing shape datasets are not always the same. For example, in Fig. 3 (a), the main skeleton branches are shortened whereas some spurious skeleton branches remain in the mouth, neck and hind leg regions. In contrast, in another dataset shown in Fig. 3 (b), only the main branches (not shortened ones) are preserved. (2) GT skeletons from existing image datasets are not consistent. We can clearly see that the GT skeletons in Fig. 3 (d) are in discrete segments, rather than a single connected medial- axis as in Fig. 3 (c). Skeleton GT in Fig. 3 (e) is not accurate. (3) GT skeletons of the shape and the image datasets are not always consistent (Fig. 3 (b) and (c)). Although the main skeleton branches are preserved in both horses, skeleton branches in (c) are shortened. Typically, the shortening of branches may cause blurring between the branches of significant visual parts and branches resulting from noise Bai2007SPB . To sum up, the standards on GT structure (simplicity, completeness, connectivities to branch and boundary) are not consistent. As a result, evaluating skeleton-related pruning, matching and classification approaches with inconsistent GT is an ill-posed problem. In this paper, we introduce an annotation tool, SkeView, for skeleton GT extraction in image and shape datasets. To do so, we first report an empirical study of human perception on skeleton structure based on the theory of diagnosticity hypothesis Tversky1977FOS . Diagnosticity hypothesis aims to capture the effect of context on target similarity from the perspective of human perception. 
In our case, we explore human perception of skeleton structure by varying the object context (foreground, background, and shape), the time, and the participants. Based on these studies, we introduce a general strategy for extracting skeleton GT in image and shape datasets. Our proposed strategy encodes human-in-the-loop GT extraction based on clues from the target context, simplicity and completeness. Using this strategy, SkeView is designed and developed to generate skeleton GTs for existing datasets, including those in Table 1. Our generated GTs have consistent standards, and properly represent object geometrical and topological features. These aspects provide a reliable benchmark for assessment. Thus, we can systematically evaluate representative skeleton detectors and skeleton-based algorithms using our GTs, and generate viable baselines for the community. It should be emphasized that introducing a new skeletonization method is not the focus of this paper, though SkeView can be extended for this purpose. This is because our proposed strategy is applied semi-automatically, and therefore is not suitable for real-time (or quasi real-time) skeleton extraction in various applications. Moreover, desirable properties of skeletons have been well-defined (in 2D at least) via the Blum transform Blum1967ATF , discontinuities of the distance transform Ge1996OTG , and many other equivalent definitions from Ogniewicz Ogniewicz1992VST , Telea Telea2002AAF , Latecki Latecki2000SDF , Bai Bai2007SPB and Cornea Cornea2007CPA , etc. Therefore, in this paper, we underscore the suitability of SkeView for extracting skeleton GTs as training and testing data, especially in this era of deep learning. Moreover, skeletons and GTs can be defined in general terms at a higher level, whereas a general definition at a lower level is not possible, particularly across various applications. This is because different applications may have different requirements on skeleton properties (e.g., 2D, 3D, and simplicity). Thus, we also underscore the generalization of our GTs (see Section 4.2) to the specific vision tasks of the original datasets, such as skeleton detector training, skeleton matching and shape retrieval, etc.

Succinctly, the main contribution is that we introduce a general strategy to extract skeleton GTs in shape and image datasets. Our strategy meaningfully considers human perception of skeleton simplicity and completeness to accommodate the varying requirements of real-world applications. We present a tool, SkeView, which utilises the proposed methodology to generate skeleton GTs in image and shape datasets. This contributes towards facilitating practical applications and proper benchmarking in the future. We also generate skeleton GTs for the 17 actively used datasets in Table 1 to build new baselines in a consistent and standardized manner. Our comprehensive evaluation demonstrates the efficacy of SkeView, highlighting the need for a new perspective for CNN-based skeleton detectors to become practically relevant and feasible.

## 2 Related Works

We present here a brief overview of several existing methods that were proposed for extracting skeleton GTs. For a more thorough treatment of skeletonization methods, the compilations by Saha Saha2016ASO , Telea Tagliasacchi2016SAS and Liu Liu2011EGT offer sufficiently good reviews.

### 2.1 GT in Shape Datasets

Figure 4: Skeleton GT extraction in shape datasets: (a) automatically with a fixed pruning power Bai2007SPB .
(b) semi-automatically with a manually optimized pruning power in SkeView. (c) purely manually via shape tapping Chaz2014PTT .

Fig. 4 presents existing approaches that could be applied for skeleton GT extraction in shape datasets. As mentioned in Section 1, these methods are normally applied semi-automatically to meet human perception of complexity. Otherwise, the extracted GTs are too simple or contain redundant small branches. For instance, Bai2007SPB requires a stop parameter $k$ to control the simplicity of skeleton structures. If $k$ is fixed without manual calibration, redundant small branches are not removed completely in simple shapes, e.g. the GT in Fig. 4 (a) with a fixed $k=30$. In contrast, the GT in Fig. 4 (b) is extracted based on an optimized $k$ in SkeView. We can clearly see that it is more perception-friendly in terms of balancing skeleton simplicity and completeness. In contrast to semi-automatic approaches, purely manual GT extraction is conducted with more user interaction, typically using a variety of tools. As shown in Fig. 4 (c), Firestone et al. Chaz2014PTT developed an application for a touch-sensitive tablet computer to display single closed geometric shapes, thereby collecting touch data from the participants. Each participant could tap on the displayed shapes anywhere they wished. The collection of their tapped locations provides a global representation of the crowd-sourced perception of major skeletons (aka. GTs). Instead of generating skeletons from scratch, Yang et al. Cong2016IOS generated a set of GT candidates with different levels of complexity, and then applied a voting scheme based on questionnaires. Each participant was provided with three candidates in a questionnaire, and was asked to select the most promising one, or to draw a new one. Though both of these manual approaches can properly capture crowd-sourced perceptions of skeleton complexity, they are not efficient enough for datasets with a massive number of shapes. Unlike the purely manual approaches, our proposed strategy is more efficient as it generates GT via SkeView semi-automatically and in parallel.

### 2.2 GT in Image Datasets

In practice, GTs in image datasets are extracted semi-automatically via two steps: segmentation and skeletonisation. The segmentation step is mostly applied manually. For instance, in the SYMMAX300 Tsogkas2012LSD dataset, each image was accompanied by 5-7 human segmentations. Thus, multiple binary objects can be obtained for the subsequent skeletonisation and integration. Although purely manual segmentation can properly ensure the integrity of objects while reducing boundary noise, it is not efficient enough to be applied in practice, particularly when preparing massive numbers of skeleton GTs for training scenarios. In terms of the skeletonisation step, some existing shape skeleton extraction approaches Bai2007SPB ; Telea2002AAF ; Shen2011SGA are applied semi-automatically on the shapes of segmented objects. As shown in Fig. 3, these skeleton extraction approaches have different preferences regarding skeleton geometry and topology. Moreover, it is not clear whether humans have a similar and stable perception of skeleton complexity under different contexts. As a result, skeleton GTs in the existing image datasets are not very consistent (see Fig. 3 (c) (d) (e)). In contrast, our proposed method is better in terms of efficiency and consistency. Specifically, our strategy is more general and standardized, as it is built on a structured study of human perception of skeleton GT.
Besides, SkeView has an easy-to-use user interface, and a set of convenient functions to improve the efficiency of GT extraction in both shapes and images. ## 3 Methodology Here, we first present a study of human perception of skeleton structure based on the theory of diagnosticity hypothesis Tversky1977FOS . Based on these observations, we introduce a strategy for skeleton GT extraction in the shape and the image datasets. ### 3.1 Diagnosticity Hypothesis The diagnosticity hypothesis is a classic framework to explore the relation between similarity and context (or grouping) in the domain of cognitive science Skov1986IPD . Specifically, the diagnosticity hypothesis implies that the change in context, induced by the substitution of an odd element, will change the similarities in a predictable manner. An example is shown in Fig. 5: consider two sets of four countries, which differ in only one of their elements (p and q). The four countries of each set were presented to participants, who were instructed to select the country most similar to Austria (a). Note that this experiment was done in the 1970s, so one has to remember the political map of Europe at that time. The final statistical results are shown in percentages. It is interesting to observe that the selection results in Set 1 and Set 2 are different (Austria (a) is grouped with Sweden (b) in Set 1, and with Hungary (c) in Set 2) by changing only one element (p to q), though both (p) and (q) are not the final results. The diagnosticity hypothesis example in Fig. 5 demonstrates that human perception of selection (a country most similar to Austria) could be influenced by a change of context (from Poland to Norway). In our case, human perception of selection (a branch to prune) could be affected by the shift in object contexts, such as shape, foreground, and background. Figure 5: An example of diagnosticity hypothesis Tversky1977FOS . The percentage of participants who selected each country (as most similar to Austria) is presented below the name. Figure 6: Interfaces of our APP for skeleton selection (best viewed in color). Accordingly, our study was conducted by evaluating the robustness of human perception on skeletons spatially and temporally. In other words, (1) perception of an object skeleton in the context of object shape, segmented foreground and full image, (2) perception of an object skeleton in different time slots, and (3) perception of an object skeleton by different volunteers. Thus, our study is an extension of diagnosticity hypothesis: verifying whether a skeleton GT could be robust for different people, at different times, and in different contexts. Due to the limitations of face-to-face surveying during the global pandemic Fanelli2020AAF , we developed a phone application (APP) to collect perceptions from different participants, as presented in Fig. 6. Our APP contained four major components: a counter showing processed/remaining images (top right), a setting panel for the boundary, colour and transparency (top left), a selection area for the skeletons (middle), and buttons for page navigation and submission (bottom). In total, 90 volunteers (45 females, 45 males) participated in the study (January to March, 2021), most of whom were students and teachers from Northeast Normal University (NENU), China. Table 2: Statistics of total endpoint numbers from the most voted skeletons. 
| Group1 | | Group2 | | Group3 ---|---|---|---|---|--- Date | Shape | Object | Image | | Shape | Object | Image | | Shape | Object | Image Jan 21 | 378 | - | - | | - | 330 | - | | - | - | 315 Feb 04 | - | 326 | - | | - | - | 320 | | 380 | - | - Feb 18 | - | - | 314 | | 373 | - | - | | - | 322 | - Mar 04 | 378 | - | - | | - | 330 | - | | - | - | 315 We randomly selected 30 images from the existing datasets in Table 1, and applied manual segmentation and semi-automatic skeletonization with the method introduced in Bai2007SPB . For some images with complex backgrounds, we intentionally generate two segmented samples, a promising one and a noisy one, for comparison. We generated six skeleton candidates for each shape with different levels of complexity, resulting in a total of $40\times 6$ skeletons. To reduce the influence of context from different formats, we organized our volunteers into three groups (30 in each group) and presented object shapes, foregrounds and full images to each group independently. To facilitate the study (see Table 2), we repeated the survey every two weeks so that the effect of context memorisation could be reduced. For each trial, the skeleton format was changed in each group so that the three formats could be fully surveyed from all groups. We also conducted an additional survey seven weeks later, using the format of the first survey, to measure the stability of results with respect to time passing. Figure 7: Comparison of each participant in Group2. The participant IDs are shown on the horizontal axis (1 to 30) (best viewed in color). For quantitative analysis, the number of endpoints (more branches implies more endpoints) is used in our study. It should be noted that skeleton simplicity (see Eq. 5 in Section 4.2.4) can also be used for the quantitative analysis. Particularly, it has higher discriminative power than the number of endpoints. Here, we employed the number of endpoints in Table 2 for two reasons: (1) The differences between manually voted skeletons from Shape, Object, and Image are distinct, e.g., 378, 326, and 314, respectively. Thus, endpoint statistics are already enough to tell the difference at the coarse-grained level. (2) It is easier to count and visually recheck, particularly in our user study scenario using the questionnaire in APP. Based on the statistics shown in Table 2, we found that the number of endpoints in shapes, foregrounds and full images (“shape”, “object”, “image”) are within $[373,380]$, $[322,330]$ and $[314,320]$, respectively. In other words, each group has a rather consistent perception on skeleton structure, with differences of only about $2\%$. However, as shown in Fig. 7, individual perception are varied, ranging from 365 to 385 for shapes, 323 to 338 for objects and 310 to 332 for images. For instance, ID 27 prefers concise skeletons while the perception of IDs 11 and 28 are erratic. We believe the idea of group integration Tsogkas2012LSD ; Cong2016IOS produces a more consistent performance than the individual scheme in Ke2017SSR ; Shen2016MIS ; Shen2016OSE ; Shen2017DLM . As the endpoint numbers on January 21 and March 04 were almost the same, we can assume that the human perception of skeleton structure is stable over time. Considering the mean values of shape (377) vs. object (326), we find that the foreground context has a considerable influence on human perception, with about 13.5% reduction from shape to object formats. 
However, the difference between object (326) and image (316) is less obvious, with only about a 3.1% reduction.

Figure 8: Most selected skeletons of a sheep in the full image, two segmentations, and the corresponding shapes, using our APP (best viewed in color).

To better understand these results, the most voted skeletons of a sheep image are presented in Fig. 8 (a), together with its two segmentations (noisy (b) and good (d)) and their corresponding shapes. We intentionally eliminated the fore- and background of the object in (c) and (e) to reduce their context influence. The only difference is that Shape 1 contains noise in the top-left region (head and neck). We find that the skeletons in (a), (b) and (d) are almost the same. This is understandable, as illusions from the background and the boundary noise can be easily filtered out by human inspection. However, as presented in (c) and (e), most volunteers tended to use more skeleton branches to fill their perceptual gaps on shapes (where there is less context information). In cognitive science, the perceptual gap Teichmann2021RVM refers to cognitive biases arising from information gaps, such as occlusion (internal and external) and misunderstanding, etc. In our case, a perceptual gap occurs because it is difficult to identify the original object (a sheep or something else) from the noisy Shape 1 in Fig. 8 (c), particularly at the head and neck regions; volunteers therefore tend to use more skeleton branches to fill this gap. As a result, the skeleton in (c) is erroneously more extensive than the ones in (b), (d) and (e). Overall, our observations can be summarized as follows:

* • O1: Perception is robust to the time and volunteer groups.
* • O2: Perception is robust to segmented objects and images.
* • O3: Perception of shapes is not robust and is easily influenced by deformations from noise and occlusion.
* • O4: People tend to use more skeleton branches when perceptual gaps exist on shapes, and vice versa.

These four observations are used to design the strategy and Graphical User Interface (GUI) of the annotation tool for extracting the skeleton GT in the image and the shape datasets.

### 3.2 Strategy

Given an image I, let M and $\widehat{\textbf{M}}$ denote a segmented object and its shape, respectively. Let the final GT skeleton be S. In brief, our GT extraction strategy is composed of two steps: preprocessing and pruning. The preprocessing step includes target object segmentation (for image datasets) and initial GT extraction at a coarse level. Then, a heuristic pruning process is conducted semi-automatically based on the above observations (O1-O4) and the human perception of simplicity and completeness.

Figure 9: Pipeline of our proposed skeleton GT generation strategy in the image scenario. $p(\cdot)$ and $f(\cdot)$ denote the satisfaction of human perception and the shape reconstruction error, respectively (best viewed in color).

Such a coarse-to-fine strategy inherently improves the efficiency of GT extraction, as the most time-consuming operations are applied automatically in the first step. Specifically, based on the segmented $\widehat{\textbf{M}}$ (Fig. 9 (b) and (c)) obtained with He2017MRC , the skeletonization approach Shen2013SPA is employed for extracting the initial skeleton (a minimal code sketch of this preprocessing step is given below). This process effectively reduces the workload of the manual pruning that follows, as most of the redundant branches are removed in the initial skeleton.
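The preprocessing step can be prototyped with off-the-shelf components. The sketch below is a minimal illustration, not the exact SkeView implementation: it assumes torchvision's pre-trained Mask R-CNN as a stand-in for the segmentation model He2017MRC and skimage's skeletonization as a stand-in for the initializer Shen2013SPA ; the function and variable names are ours.

```python
# Minimal sketch of the preprocessing step (not the exact SkeView code):
# segment the target object, binarize its mask into a shape, and extract a
# coarse initial skeleton. Uses torchvision's pre-trained Mask R-CNN and
# skimage's skeletonize as stand-ins for the components cited in the text.
import numpy as np
import torch
import torchvision
from skimage.morphology import skeletonize

def initial_skeleton(image_rgb, score_thresh=0.7):
    """image_rgb: HxWx3 uint8 array; returns (shape_mask, skeleton) as boolean arrays."""
    # In newer torchvision the argument is `weights=...` instead of `pretrained=True`.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
    model.eval()
    x = torch.from_numpy(image_rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = model([x])[0]
    keep = out["scores"] >= score_thresh
    if keep.sum() == 0:
        raise ValueError("no confident detection")
    best = int(out["scores"][keep].argmax())                 # single-target case: best instance
    shape_mask = out["masks"][keep][best, 0].numpy() > 0.5   # segmented foreground -> shape
    skeleton = skeletonize(shape_mask)                       # coarse initial skeleton S'
    return shape_mask, skeleton
```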
To bring more flexibility, we intentionally preserve more branches than the optimal ones from the automatic approach, to generate a set of candidates with different levels of complexity (Fig. 9 (d)). $\textbf{S}^{\prime}$ denotes the initial skeleton selected from the candidates. The second step, skeleton pruning, is a heuristic and semi-automatic process to identify the skeleton GT: that is, maximizing the (human) perceptual simplicity while keeping the skeleton as complete as possible. For simplicity, Fig. 9 (d) and (e) depict how the skeletons appear during the selection and pruning processes. This is motivated by our observations in O2 and O3. For completeness, inspired by Shen2013SPA , we introduce a shape reconstruction error to represent the skeleton completeness: that is, keeping the reconstruction error of S with respect to $\widehat{\textbf{M}}$ as small as possible (Fig. 9 (e)). Then, the skeleton GT S is extracted by:

$\small\textbf{S}_{image}=\mathtt{max}(p(\mathtt{min}(f(\textbf{S}^{\prime},\widehat{\textbf{M}})),\textbf{M}))\quad.$ (1)

where $p(\cdot)$ and $f(\cdot)$, respectively, represent the (human) perceptual satisfaction and the shape reconstruction error. Thus, Eq. 1 is a semi-automatic annotation method: the term $f(\cdot)$ is computed automatically, while $p(\cdot)$ is determined by a human during manual pruning (i.e. manual selection of branch candidates for pruning). That is, $f(\cdot)$ is calculated to guide the human in trading off skeleton simplicity (the domain requirement) against completeness (the $f(\cdot)$ value). As a result, $p(\cdot)$ and $f(\cdot)$ are inherently maximized and minimized, respectively. The rationale behind Eq. 1 is that, as O4 suggests, people tend to use fewer branches (a simpler skeleton) on I and M. This applies the diagnosticity hypothesis, whereby factors from other contexts (i.e. the reconstruction error) could potentially influence human perception. In practice, $p(\cdot)$ is maximized by dynamically selecting and pruning branches based on the eye-level view assumption of skeleton simplicity, and on hints from the reconstruction error $f(\textbf{S}^{\prime},\widehat{\textbf{M}})$:

$\small f(\textbf{S}^{\prime},\widehat{\textbf{M}})=\frac{|\Lambda(\widehat{\textbf{M}})-\Lambda(R(\textbf{S}^{\prime}))|}{\Lambda(\widehat{\textbf{M}})}\quad.$ (2)

where $\Lambda(\cdot)$ denotes the area in terms of pixels, and $R(\textbf{S}^{\prime})$ is the shape reconstructed from $\textbf{S}^{\prime}$:

$\small R(\textbf{S}^{\prime})=\bigcup_{s\in\textbf{S}^{\prime}}B(s,r(s))\quad.$ (3)

where $r(s)$ is the radius of the maximal disc $B(s,r(s))$ centered at a point $s\in\textbf{S}^{\prime}$. In practice, $r(s)$ is approximated with the value of the distance transform at $s$. Motivated by observation O1, we suggest that annotations according to Eq. 1 be performed by at least three participants, and we heuristically take $\textbf{S}_{image}$ to be the one with the maximum votes (when two skeletons are the same) or the median reconstruction error (when all three skeletons are different). To promote the efficiency of this human-in-the-loop approach, we introduce a new tool, SkeView, in Section 4, with various functions for segmentation, initialization, pruning, and integration.

Figure 10: Pipeline of our proposed skeleton GT generation strategy in the shape scenario. Best viewed in color.

For the shape scenario, as presented in Fig. 10, our strategy is similar to the workflow from Fig. 9 (c) to (f).
As there is no M displayed below the skeletons, only the shape contour and the skeleton are fused in the illustration in Fig. 10 (c). Thus, the shape skeleton GT is generated by:

$\small\textbf{S}_{shape}=\mathtt{max}(p(\mathtt{min}(f(\textbf{S}^{\prime},\widehat{\textbf{M}})),\widehat{\textbf{M}}))\quad.$ (4)

where $p(\cdot)$ and $f(\cdot)$ are the same as in Eq. 1. An intuitive example is presented in Fig. 11. We can clearly observe the changes in simplicity (SS) and reconstruction error (RE) during the pruning process. With the hints from RE and SS, most volunteers tend to select the third one (marked by the rectangle) since it strikes the best balance, being structurally complete and relatively simple. As shown in Fig. 9 (f) and Fig. 10 (d), skeleton GTs generated by our strategies are perception-friendly, while at the same time properly balancing the skeleton simplicity and the shape reconstruction error.

Figure 11: The changes in simplicity (SS) and reconstruction error (RE) during pruning.

## 4 Annotation Tool and Ground Truth

In this section, we first introduce the design of an annotation tool, SkeView, based on our proposed strategy. Then, using SkeView, we generate GTs for the 17 existing datasets shown in Table 1.

### 4.1 SkeView

To facilitate the strategies in Eqs. 1 and 4, we developed a tool, SkeView, for extracting skeleton GTs in shape and image datasets. The user interface contains five major panels (see Fig. 12): (a) Source. SkeView supports five source data types including shape, image, object (segmented foreground) and skeleton (only for pruning-related operations).
12 (e), users can finely prune redundant branches by selecting a target branch (marked in yellow) and clicking the “Prune” button (or the “Delete” key). (c) Exports. Each export format is a structure with multiple elements: “Skeleton” (skeleton binary matrix, list of endpoints and junction points), “Object” (segmented foreground, shape and boundary matrices) and “Thumb” (pure skeleton and preview images, as shown in (e)). SkeView also preserves the pruning parameters and the correlated skeletons for future domain mapping and learning. (d) Reconstruction error. In this panel, current and historic reconstruction errors (Eq. 2) of each target are displayed during skeleton initialization and pruning. To facilitate comparison between the current and the previously pruned skeleton, the current reconstruction error is presented in bold font at the top right corner, and also plotted dynamically (as blue points) on the graph. Moreover, users can easily click a point to load the previous pruning result for visualization and reconsideration. (e) Preview and branch selection. Users can preview images, segmented objects and initial skeletons in this panel. Similar to the APP in Fig. 6, the background transparency, skeleton colour and boundary visibility can be adjusted here. During the fine-grained pruning process, users can select multiple branches by clicking on the target while pressing the “Shift” key. Figure 13: User interface of the Integration Function. Tsogkas et al. Tsogkas2016MRF have introduced a tool with a user interface for annotating skeletons by manually drawing poly-lines. Besides being less efficient due to its purely manual operation, it also cannot ensure the symmetry of poly-lines according the 2D object contour. SkeView is advantageous in both these aspects. As SkeView is developed for individual users, we also provide a tool for skeleton integration and selection from a group of users (Fig. 12 (bottom left)). As presented in Fig. 13, skeletons from multiple users are presented together for final determination of the acceptable annotation. The tool can automatically count the duplicated skeletons (“Hints”) and calculate reconstruction error (“Error”). By default, the final skeleton is automatically selected according to the maximum “Hints” and median “Error”. For groups with fewer than three volunteers, SkeView integrates branches from the candidates to extract a new skeleton candidate. To evaluate the efficiency, we compare SkeView with the method in Shen2016OSE on SK506 dataset. Our statistics show that the time cost per image is reduced from 86.4 to 27.2 seconds. This suggests that SkeView is suitable for medium- scale datasets which are mostly those listed in Table 1. For large-scale datasets (e.g. more than 10,000 shapes), an efficient way is to generate initial skeletons using the SkeView semi-automatic method with big pruning power (e.g. $k=50$). The pruning process is conducted via online labelling tools (such as LabelMe Russell2008LAD ) by drawing bounding boxes on the endpoints that are intended to preserve. The pruned skeleton is generated by mapping skeleton paths between the preserved endpoints to a zero matrix. Compared with the branch-based pruning in Fig. 12 (c), the box-based pruning only offers a slight consideration of the context arising from dense endpoints. However, it is more efficient for the purpose of group collaboration with its range of rich online annotation tools Dasiopoulou2011ASO . 
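The default selection rule of the integration function can be summarized in a few lines. The following sketch is a simplified illustration, assuming each annotator's proposal is available as a binary skeleton array together with its reconstruction error (Eq. 2); the helper names are ours, not part of SkeView.

```python
# Simplified sketch of the integration rule: prefer the skeleton proposed by the
# most annotators ("Hints"); if all proposals differ, fall back to the proposal
# whose reconstruction error ("Error", Eq. 2) is the median of the group.
import numpy as np

def integrate(skeletons, errors):
    """skeletons: list of HxW boolean arrays from different annotators;
    errors: list of reconstruction errors, one per skeleton."""
    n = len(skeletons)
    votes = [sum(np.array_equal(skeletons[i], skeletons[j]) for j in range(n))
             for i in range(n)]
    if max(votes) >= 2:                          # duplicated annotations exist
        return skeletons[int(np.argmax(votes))]
    order = np.argsort(errors)                   # all proposals differ
    return skeletons[int(order[n // 2])]         # median reconstruction error
```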
SkeView is developed with Matlab R2015b with GUIDE for user interface. The toolbox SkeView is compiled into executable applications in both Windows and Linux. The source codes and datasets are publicly available in this repository111https://github.com/cong-yang/skeview. ### 4.2 Ground Truth To ensure the quality of annotation, each GT was generated by four participants (two males and two females) from NENU. To meet different requirements in image and shape datasets, image GTs included segmented foregrounds, binary shapes, skeletons, lists of endpoints and junction points. Shape GTs included skeletons, and the list of endpoints and junction points. All skeleton branches in our GTs are one pixel wide, and are connected to shape boundaries: this meets the quality requirements of most skeleton extraction and matching algorithms. Users can intentionally dilate and dilute a GT skeleton point depending on algorithms Wang2019DFS ; Atienza2019PUF . In practice, there are two strategies to ensure the application requirements on GT properties: * • Annotation documents: written by domain experts, detail the annotation and quality requirements, including annotation examples and corner cases. * • Annotation training: annotators (volunteers in our case) study the annotation documents, followed by a trial-checking process using some samples. Built on that, annotators not only follow their perception of skeleton simplicity and reconstruction error, but also consider the requirements of different domains. Besides, such strategies can ensure the quality and consistency of GTs. In our case, the extracted shape skeletons on the existing ten datasets are general enough for CNN-based skeleton detector training and skeleton matching. This is because these datasets were typically collected for the shape retrieval scenario. In terms of the four image datasets, the datasets are used for general object detection and analysis. Thus, our GTs not only respect their original setting and domain requirements, but also have better quality, clarity, and consistency. #### 4.2.1 Image Datasets Figure 14: Comparison of the original (yellow) GTs with the ones generated with our SkeView (red) in SK506 Shen2016OSE and SK1491 Shen2017DLM datasets. Figure 15: Comparison of the original (yellow) GTs with the ones generated with our SkeView (red) in SYMMAX300 Tsogkas2012LSD (top two rows) and SymPASCAL Ke2017SSR (bottom two rows). Fig. 14 and 15 present comparisons between the original GTs and our GTs generated by SkeView among four image datasets: SK506 Shen2016OSE (also known as SK-SMALL) was selected from the MS COCO Lin2014MCC dataset, with 506 natural images (300 for training and 206 for testing) and 16 object classes including humans, animals and artifacts. For each image, there is only one target for the skeleton GT generation. Due to inaccurate segmentation and unstable individual perception, the quality of the original GT is not promising. For instance, as shown in Fig. 14 (top), we observe the following issues: shortened branches of the elephant, the asymmetric branches in the airplane, an overly-simplified skeleton for the bird, and noisy branches in the hydrant. In contrast, GTs generated by SkeView are better in terms of various qualitative properties: consistency, perception friendliness, and the representation of object geometrical features. SK1491 Shen2017DLM (also known as SK-LARGE) is an extension of the SK506 by selecting more images from the MS COCO dataset. 
It includes 1,491 images (746 for training and 745 for testing). Similar to SK506, there is one target in each image and the GT skeletons are annotated in the same way. SYMMAX300 Tsogkas2012LSD is adapted from the Berkeley Segmentation Dataset (BSDS300) Martin2001ADO with 300 images (200 for training and 100 for testing). There are multiple targets in most images. This dataset is used for local reflection symmetry detection, which is a low-level image feature, without paying attention to the concept of ‘object’. Most branches are disconnected, and the original GTs do not encode information about the connectivity of skeleton branches. Hence, it is ill-suited to evaluating object skeleton extraction methods, as a large number of symmetries occur in non-object parts (see the bear, rhinoceros and lion images in Fig. 15 (top)). For this, we regenerated GTs only on target objects, as it is more meaningful to use object symmetry (foreground) instead of whole-image symmetry. As suggested in Tsogkas2016MRF , we ignore images without specific target objects. SymPASCAL Ke2017SSR was selected from the PASCAL-VOC dataset Everingham2010TPV , with 1,435 images (648 for training and 787 for testing). Most images contain multiple targets, partial visibility and complex backgrounds. However, the original GTs still contain noisy symmetries from the background, incomplete skeleton graphs and shortened skeleton branches. In contrast, GTs from SkeView focus only on the foregrounds, maintaining the same quality as for the other three image datasets. In Fig. 15, we clearly observe that our GTs in SYMMAX300 and SymPASCAL have the same quality as SK506, and that the skeleton branches for each object are well-connected. Such features ensure a reliable evaluation of both skeleton extraction and matching algorithms Bai2008PSS . It should be noted that our annotation mainly captures the 2D contours, and partly loses the 3D symmetry awareness for some objects in images. However, our labelling is superior to the original GTs of the four image datasets, especially with respect to consistent standards, branch connectivity and distinguishable skeleton graphs. As a result, our GTs are more applicable for training and testing CNN-based skeleton detectors, as well as for benchmarking skeleton-related pruning, matching and classification algorithms. In the future, we plan to extend SkeView to 3D object and symmetry annotation Tagliasacchi2016SAS based on our strategy.

#### 4.2.2 Image and Shape Datasets

Figure 16: From left to right, skeleton GTs of EM200 Yang2014SCO , SmithsonianLeaves Haibin2007SCU and WH-SYMMAX Shen2016MIS generated by SkeView.

There are three datasets with both images and corresponding foreground shapes (Fig. 16). For these, we extracted initial skeletons using the shapes and applied pruning using the images in SkeView. EM200 Yang2014SCO contains 200 microscopic images (10 classes) of environmental microorganisms (EM). There are two types of segmented foregrounds provided by the original dataset: those generated manually, and those generated semi-automatically with the methods introduced in Li2013AMA . This dataset is challenging due to colourless, transparent and spindly regions (e.g. flagella). To ensure the quality of the GTs, we employed the manually segmented foregrounds for initial skeleton generation. Efficient pruning in SkeView then preserves the skeleton branches in those spindly regions for fine-grained EM matching and classification. SmithsonianLeaves Haibin2007SCU contains 343 leaves (187 for training and 156 for testing) from 93 different species of plants.
Each leaf was photographed on a plain background. K-means clustering was employed to estimate the foreground based on colour, followed by morphological operations to fill in small holes. Thus, this dataset is relatively less challenging with respect to occlusion and complex backgrounds, but has richer geometrical characteristics. Our GTs can be used by botanists to compute leaf similarity in the digital archives of the specimen types. WH-SYMMAX Shen2016MIS contains 328 cropped images (228 for training and 100 for testing) from the Weizmann Horse dataset Borenstein2002CTS . Each image contains one manually segmented target. The original skeleton annotations are not only inconsistent concerning completeness across different horse shapes but also contain shortened branches. On the other hand, our GTs yield better quality with respect to consistency and completeness. #### 4.2.3 Shape Datasets Figure 17: Skeleton GTs of (a) Animal2000 Bai2009ICA , (b) ArticulatedShapes Haibin2007SCU , (c) SkelNetOn Ilke2019SDA , (d) Kimia99 Sebastian2004ROS , (e) MPEG7 Latecki2000SDF , (f) Kimia216 Sebastian2004ROS , (g) MPEG400 Cong2014SBO , (h) Tetrapod120 Cong2016OMW , (i) SwedishLeaves Soderkvist2001CVC , (j) Tari56 Asian2005AAR datasets generated by SkeView. Fig. 17 presents samples of ten shape datasets and their GTs generated by SkeView: Animal2000 Bai2009ICA contains 2,000 shapes (20 categories, 100 shapes each) ranging from poultry and domestic pets to insects and wild animals. Each class is characterised by large intra-class shape variations. Due to occlusion, some parts of certain objects (e.g. legs) are missing. There are also holes and boundary noises in some shapes due to incorrectly segmented foregrounds and backgrounds. This dataset is actively used in shape matching, classification and skeleton-based shape retrieval. As the shape category can be easily identified by human perception, critical parts of an object (e.g. legs, head, tentacles) are all preserved by the skeleton branches of our GTs. ArticulatedShapes Haibin2007SCU contains 40 images from eight different objects. This challenging dataset consists of various tools including scissors with holes. To preserve the original topology, our GTs at such regions are closed branches (Fig. 17 (b) (top)). Most existing matching algorithms Bai2008PSS cannot properly deal with skeleton graph structures with cycles, however we could provide skeleton GTs after filling the holes (Fig. 17 (b) (bottom)). SkelNetOn Ilke2019SDA contains 1,725 shapes (1,218 for training, 241 for validation and 266 for testing) represented as pixels. All shapes are of high quality with the holes and isolated pixels having removed by morphological operations (dilation and erosion) and manual adjustments. However, skeleton branches in this dataset are shortened and suffer from imbalance in simplicity, i.e. the original GTs in some shapes are extremely simple while others are overly complex. As such, it is difficult to conduct a fair comparison on skeleton-related algorithms such as extraction and matching. Moreover, this dataset is available only to registered participants in the SkelNetOn Challenge Ilke2019SDA . Our GTs are only for the purpose of skeleton quality analysis as shown in Table 3. Kimia99 Sebastian2004ROS contains 99 shapes (9 categories, 11 shapes each) assembled from a variety of sources such as tools and hands, etc. Challenges in each category come from occlusion, and articulation of missing parts. 
To avoid topology violation of shapes, branches of extrinsic regions (e.g. Fig. 17 (d) (top)) are preserved in GTs. MPEG7 Latecki2000SDF contains 1,400 (70 categories, 20 shapes each) shapes defined by their outer closed contours. It poses challenges with respect to deformation (e.g. change of view points and non-rigid object motion) and noises (e.g. quantisation and segmentation noise). This dataset is actively used for benchmarking shape representation, matching and retrieval algorithms Cong2016OMW ; Yang2020TAS . Similar to Kimia99, our GTs respect the topology of original shapes and properly preserve the challenges posed in each category. Kimia216 Sebastian2004ROS contains 216 shapes (18 categories, 12 shapes each) selected from the MPEG7 dataset. It is actively used in skeleton extraction, pruning, matching and shape retrieval scenarios. Our GTs in this dataset form a subset of MPEG7. MPEG400 Cong2014SBO contains 400 shapes selected from the MPEG7 dataset (20 categories, 20 shapes each). Instead of directly using the original shapes, boundary noises of these shapes were manually removed for ablation study. Thus, our GTs are slightly different from the corresponding ones in the MPEG7 dataset. Tetrapod120 Cong2016OMW contains 120 tetrapod animal shapes from six classes. As shapes of some species are visually similar, this dataset is normally employed to evaluate shape matching and fine-grained classification algorithms. An advantage of SkeView is that branches of major regions are preserved. However, our GTs are not recommended for evaluating fine-grained classification algorithms as some animal species can only be distinguished via branches in small regions (e.g. floppy vs. pointy ears). SwedishLeaves Soderkvist2001CVC contains 1,125 leaf shapes from 15 different Swedish tree species, with 75 leaves per species (25 for training, 50 for testing). This dataset is challenging as some species are quite similar. Past works Soderkvist2001CVC ; Haibin2007SCU have shown that it is not possible to distinguish them based on shape features alone. We do not intend to perform the same task using our GT skeletons. Instead, our GTs can be used for a wider scope of tasks – evaluating general skeleton extraction, pruning and matching algorithms. Tari56 Asian2005AAR contains 56 shapes (14 categories, 4 shapes each) for evaluating matching performance under visual transformations. Shapes of the same category show variations in orientation, scale, articulation and small boundary details. Motivated by this, our GT skeletons are useful for evaluating various skeleton-based shape matching algorithms. This is because our GTs contain branches with respect to the major and contextual shape regions. Moreover, our skeleton GTs are inherently robust to orientation and scale. #### 4.2.4 Properties We discuss two measured properties of skeleton GTs: the mean Reconstruction Error (RE) and Skeleton Simplicity (SS). RE is already calculated by Eq. 2. Here, SS is calculated by: $\small s(\textbf{S})=\mathtt{exp}(-\mathtt{log}(\Gamma(S)+1))\quad.$ (5) where $\Gamma(S)$ denotes the normalized curve length of skeleton $S$. Since the GT skeletons are one pixel wide, $\Gamma(S)$ can be calculated simply from the number of skeleton points, normalized by the average path length of the skeleton. A constant value of 1 is added to ensure that the value from log function is positive. Eq. 5 is motivated by the intuitive understanding that shorter skeletons have simpler structures. 
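Both properties can be computed directly from a binary shape and its one-pixel-wide skeleton. The sketch below is a minimal illustration, assuming that the maximal-disc radius $r(s)$ in Eq. 3 is read from the Euclidean distance transform of the shape and that the normalizer of $\Gamma(S)$ (the average path length in the paper) is supplied by the caller; a simple endpoint counter is included for comparison with the coarser statistic discussed next.

```python
# Sketch of the two measured GT properties: reconstruction error RE (Eqs. 2-3)
# and skeleton simplicity SS (Eq. 5), plus a simple endpoint counter.
# Assumptions: shape_mask and skeleton are HxW boolean arrays, the skeleton is
# one pixel wide, and r(s) is read from the Euclidean distance transform.
import numpy as np
from scipy import ndimage

def reconstruction_error(skeleton, shape_mask):
    r = ndimage.distance_transform_edt(shape_mask)             # r(s) at every pixel
    yy, xx = np.mgrid[0:shape_mask.shape[0], 0:shape_mask.shape[1]]
    recon = np.zeros_like(shape_mask, dtype=bool)
    for y, x in zip(*np.nonzero(skeleton)):                    # naive union of discs B(s, r(s))
        recon |= (yy - y) ** 2 + (xx - x) ** 2 <= r[y, x] ** 2
    return abs(float(shape_mask.sum()) - float(recon.sum())) / float(shape_mask.sum())

def simplicity(skeleton, norm_length):
    """norm_length: normalizer for the curve length (the average path length in the paper)."""
    gamma = skeleton.sum() / float(norm_length)                # Gamma(S)
    return float(np.exp(-np.log(gamma + 1.0)))                 # Eq. 5, i.e. 1 / (Gamma(S) + 1)

def count_endpoints(skeleton):
    # An endpoint of a one-pixel-wide skeleton has exactly one 8-connected neighbour.
    neighbours = ndimage.convolve(skeleton.astype(int), np.ones((3, 3)),
                                  mode="constant") - skeleton
    return int(np.sum(skeleton & (neighbours == 1)))
```

The disc-painting loop is deliberately naive for clarity; a vectorized or distance-transform-based reconstruction is preferable when processing whole datasets.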
Here, SS is used for quantitative analysis since the differences between GTs are primarily at the fine-grained level. We note that another quantitative way to measure the simplicity of S is to use the number of junction and endpoints. However, experiments in Yang2020TAS show that endpoint statistics (both mean and standard deviation values) from different methods are similar to each other. In contrast, $\Gamma(S)$ is more distinguishable as it is sensitive to slight changes in the skeleton structure. In other words, SS has higher discriminative power than the number of endpoints, particularly at the fine-grained level. Table 3: Mean reconstruction error (RE) and skeleton simplicity (SS) of our GTs. Skeletons from the automatic approaches DCE Bai2007SPB (fixed $k=10$), AutoSke Shen2013SPA and Grafting Yang2020TAS are also detailed for reference. In each dataset, the lowest RE (towards complete skeleton structure) and the highest SS (towards simple skeleton structure) values are in boldface. | SK1491 | EM200 | Kimia216 | MPEG7 | Animal2000 | SkelNetOn ---|---|---|---|---|---|--- | RE | SS | RE | SS | RE | SS | RE | SS | RE | SS | RE | SS DCE | 0.90 | 0.09 | 0.90 | 0.07 | 0.81 | 0.09 | 0.92 | 0.08 | 0.86 | 0.09 | 0.87 | 0.07 AutoSke | 0.89 | 0.08 | 0.91 | 0.15 | 0.82 | 0.11 | 0.92 | 0.11 | 0.86 | 0.09 | 0.85 | 0.09 Grafting | 0.89 | 0.07 | 0.90 | 0.13 | 0.82 | 0.11 | 0.92 | 0.11 | 0.86 | 0.08 | 0.85 | 0.09 GTs | 0.88 | 0.05 | 0.90 | 0.07 | 0.81 | 0.08 | 0.92 | 0.08 | 0.85 | 0.07 | 0.82 | 0.07 Table 3 presents the RE and SS of our GTs. For comparison, skeletons generated by three automatic approaches DCE Bai2007SPB (with the fixed stop parameter $k=10$ recommended in Cong2016IOS ), AutoSke Shen2013SPA and Grafting Yang2020TAS are also presented. We first report their statistical distribution of RE and SS values. Taking the Kimia216 dataset as an example, the statistics of 216 shapes are twofold: statistics within the same method and between different methods: * • With our proposed SkeView, the values are close to each other. Notably, the statistical distribution of RE is between 0.80-0.82, and SS is between 0.08-0.09. In other words, our GTs strike a stable balance, being structurally complete and relatively simple. * • With different approaches (DCE, AutoSke, Grafting), the distributions are more varied. RE and SS of DCE: 0.73-0.97, 0.05-0.09; RE and SS of AutoSke: 0.67-0.94, 0.06-0.14; RE and SS of Grafting: 0.73-0.91, 0.06-0.14. Though the mean values of RE and SS are close to SkeView, skeleton structures are unstable. In other words, some skeletons are either too simple, or too complex. This phenomenon is inherently similar to our observations in Section 3.1, such as O1 (Perception is robust to the time and volunteer groups) and O2 (Perception is robust to segmented objects and images). Figure 18: Sample GTs generated by (a) AutoSke Shen2013SPA , (b) Grafting Yang2020TAS , (c) DCE Bai2007SPB and (d) SkeView in MPEG7 dataset. We also observe that our GTs have the lowest RE, while the structures are more complex (smaller SS means more complex structure). For instance, RE in the MPEG7 datasets are the same (0.92), while SS of AutoSke Shen2013SPA and Grafting Yang2020TAS are the smallest (0.11). However, as shown in Fig. 18 (a) and (b), their skeleton completeness are not perceptually promising. 
Though skeletons generated by the DCE method Bai2007SPB are the simplest in both the SK1491 and Animal2000 datasets, their RE values are relatively high (the first row in Table 3) and their skeleton structures are not visually promising (Fig. 18 (c)). Overall, our GT skeletons strike the best balance, being perceptually friendly, structurally complete (in most cases), and relatively simple (the median SS is only 0.03 lower than AutoSke).

## 5 Benchmarks

In this section, we present a benchmark evaluation of skeleton detectors (mostly CNN-based methods) and skeleton-based matching methods using our GTs. For fairness, all settings follow their original papers unless stated otherwise.

### 5.1 Skeleton Detectors in Shapes

Table 4: Average error pixel (AEP) of shape skeletons from different methods and datasets. Ani2000: Animal2000. SL1: SmithsonianLeaves. SL2: SwedishLeaves. AS: ArticulatedShapes. EM: EM200. The smallest AEP in each dataset is shown in boldface.

| Kimia216 | Ani2000 | SL1 | SL2 | Tari56 | MPEG7 | AS | EM
---|---|---|---|---|---|---|---|---
DCE Bai2007SPB | 1.04 | 0.97 | 8.05 | 5.84 | 0.91 | 3.90 | 0.61 | 6.18
AutoSke Shen2013SPA | 0.80 | 1.01 | 3.67 | 3.19 | 0.51 | 3.03 | 0.39 | 4.10
Physics Krinidis2009ASF | 1.29 | 1.18 | 10.15 | 7.09 | 1.09 | 4.73 | 0.67 | 7.47
BPR Shen2011SGA | 0.88 | 1.14 | 4.13 | 3.66 | 0.56 | 3.33 | 0.44 | 4.55
U-Net Panichev2019 | 1.41 | 1.32 | 11.15 | 7.87 | 1.32 | 5.65 | 0.81 | 8.32

To quantitatively evaluate the performance of different skeleton detectors, we employed the average error pixel (AEP) proposed in Krinidis2009ASF as the error measure. Specifically, it measures the error $e(\widehat{\textbf{S}},\textbf{S})$ between a detected skeleton $\widehat{\textbf{S}}$ and a GT S as the mean nearest-point distance between their skeleton points:

$\small e(\widehat{\textbf{S}},\textbf{S})=\frac{1}{N}\sum_{i=1}^{N}\sqrt{(\widehat{\textbf{S}}_{x}(i)-\textbf{S}_{x}(i))^{2}+(\widehat{\textbf{S}}_{y}(i)-\textbf{S}_{y}(i))^{2}}\quad.$ (6)

where $(\widehat{\textbf{S}}_{x}(i),\widehat{\textbf{S}}_{y}(i))$ are the coordinates of a skeleton point in $\widehat{\textbf{S}}$, $N$ is the total number of points in $\widehat{\textbf{S}}$, and $(\textbf{S}_{x}(i),\textbf{S}_{y}(i))$ is the closest point in S to the point $(\widehat{\textbf{S}}_{x}(i),\widehat{\textbf{S}}_{y}(i))$. Table 4 details the evaluation results of five representative methods on eight shape datasets. The Physics method Krinidis2009ASF generates skeleton points iteratively, starting from a boundary point set, based on a physics-based deformable model. Though it can be used to obtain stable skeletons with a fixed parameter setting, the results are not symmetric to the boundary and are sensitive to noise. The BPR Shen2011SGA method prunes skeletons based on the context (modelled by the bending potential ratio) of the boundary segment that corresponds to each branch. The U-Net Panichev2019 is a typical CNN-based method, which employs a modified U-Net architecture for direct skeleton regression. We can clearly see that most of the skeletons generated by AutoSke Shen2013SPA have the lowest AEP and thus are closest to the GTs. Though the DCE Bai2007SPB method achieves the best result on Animal2000, it is still close to the result generated by AutoSke (only around 0.04 lower). Among all the methods, the CNN-based U-Net has the lowest performance, with around 0.31 and 2.62 higher AEP than the AutoSke method on Animal2000 and MPEG7, respectively.
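For reference, the AEP of Eq. 6 reduces to a nearest-neighbour query from each detected skeleton point to the GT skeleton. A minimal sketch, assuming both skeletons are given as binary maps:

```python
# Sketch of the average error pixel (AEP, Eq. 6): the mean Euclidean distance
# from each detected skeleton point to its closest ground-truth skeleton point.
import numpy as np
from scipy.spatial import cKDTree

def aep(detected, gt):
    """detected, gt: HxW boolean skeleton maps; returns the AEP in pixels."""
    det_pts = np.argwhere(detected)                  # N x 2 (row, col) coordinates
    gt_pts = np.argwhere(gt)
    if len(det_pts) == 0 or len(gt_pts) == 0:
        raise ValueError("empty skeleton")
    dists, _ = cKDTree(gt_pts).query(det_pts)        # closest GT point per detected point
    return float(dists.mean())
```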
This is because skeletons generated by CNN-based methods normally yields low- quality branches Yang2022blumnet . To verify it, we visualize skeletons from existing five CNN-based methods trained using our GTs (see Fig. 19). We can clearly observe the noisy, disjointed, and incomplete skeleton branches. Figure 19: Shape skeleton extraction using CNN-based methods: FHN Jiang2019FHN , U-Net Panichev2019 , DISCO Song2021DUB , SDE Tang2021DAE , and SkeletonNetV2 Nathan2021SAD . ### 5.2 Skeleton Detectors in Images Table 5: F1 scores of skeleton detectors in images. SYMM: SYMMAX300. SymP: SymPASCAL. SL: SmithsonianLeaves. WHS: WH-SYMMAX. | SK506 | SK1491 | SYMM | SymP | EM200 | SL | WHS ---|---|---|---|---|---|---|--- HED Xie2015HED | 0.552 | 0.494 | 0.431 | 0.370 | 0.298 | 0.580 | 0.741 SRN Ke2017SSR | 0.652 | 0.677 | 0.447 | 0.443 | 0.303 | 0.593 | 0.780 Hi-Fi Zhao2018HHF | 0.693 | 0.727 | 0.460 | 0.458 | 0.311 | 0.620 | 0.822 DeepFlux Wang2019DFS | 0.715 | 0.752 | 0.494 | 0.520 | 0.315 | 0.625 | 0.849 Ada-LSN Liu2020ALS | 0.748 | 0.798 | 0.497 | 0.504 | 0.319 | 0.672 | 0.883 Figure 20: Image skeleton detection results from HED Xie2015HED , SRN Ke2017SSR , Hi-Fi Zhao2018HHF , DeepFlux Wang2019DFS , Ada-LSN Liu2020ALS , and BlumNet Yang2022blumnet methods, trained using our GTs. Skeletons in image are usually represented by binary maps after applying non- maximal suppression (NMS) and thresholding. The binary maps between the generated and the GT skeletons are matched pixel-wise to calculate the precision and recall values of the skeleton points. In practice, some small localization errors are allowed. Here, we used the F1 score (i.e. 2$\times$(precision$\times$recall)/(precision+recall)) to evaluate the performance of skeleton detectors on image datasets. Particularly, five CNN- based methods (HED Xie2015HED , SRN Ke2017SSR , Hi-Fi Zhao2018HHF , DeepFlux Wang2019DFS and Ada-LSN Liu2020ALS ) are trained and tested using our GTs. To address a fair evaluation, we used their original settings on the backbone, image input size, and loss function. The initial network weights were initialized by Xavier. We optimized the loss functions using AdamW Loshchilov2018DWD with a mini-batch size of 1, an initial learning rate of 2e-4, and a weight decay of 1e-4. Regarding data augmentation, we applied random transformations to the image, including rotation, flipping, resizing and color jittering. For each method, we generated a precision-recall curve by varying the threshold value. The optimal threshold was selected as the one that produces the highest F1 score along the curve. Their best results are presented in Table 5. We can see that Ada-LSN Liu2020ALS achieves the highest score on almost all datasets. Fig. 20 presents some sample results. It can be observed that the predictions are generally convergent towards our GTs, though their performances are clearly different. Particularly, most methods are not feasible to generate high- quality skeleton graphs. For instance, skeletons from HED (Holistically-Nested Edge Detection), SRN (Side-output Residual Network), and Hi-Fi (Hierarchical Feature integration) contains lots of noise and limited smoothness. Deep-Flux and AdaLSN (Adaptive Linear Span Network) output clearer and slimmer results, while there remain some false positive points, disjointed segments, and incomplete branches. The main reason is that most CNN-based methods output noisy, disjointed, and incomplete skeleton branches in heat maps (also called skeleton maps). 
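For reference, the tolerance-based pixel matching behind the F1 scores in Table 5 can be sketched as follows. This is a simplified illustration: the tolerance radius is an assumption (each method's original protocol defines its own matching rule), and both maps are assumed to be binary after NMS and thresholding.

```python
# Sketch of tolerance-based precision/recall/F1 between a binarized skeleton map
# and the GT: a predicted pixel counts as correct if a GT pixel lies within
# `tol` pixels, and symmetrically for recall. The tolerance value is an assumption.
import numpy as np
from scipy import ndimage

def skeleton_f1(pred, gt, tol=3):
    """pred, gt: HxW boolean maps (pred already NMS'ed and thresholded)."""
    d_to_gt = ndimage.distance_transform_edt(~gt)      # distance to nearest GT pixel
    d_to_pred = ndimage.distance_transform_edt(~pred)  # distance to nearest predicted pixel
    precision = np.sum(pred & (d_to_gt <= tol)) / max(int(pred.sum()), 1)
    recall = np.sum(gt & (d_to_pred <= tol)) / max(int(gt.sum()), 1)
    return 2 * precision * recall / max(precision + recall, 1e-8)
```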
Some networks cannot guarantee the topological and geometrical features in the representation. In practice, skeleton heat maps from existing CNN-based methods usually require heavy and semi-automatic processes to extract slim skeletons (one pixel wide). Nevertheless, the geometrical and topological features of the processed skeletons are still not ensured. We can also observe that F1 scores on the EM200 are the lowest among all the evaluated datasets. A major reason for this occurrence is that the training data is very limited (only 10 images), resulting in under-fitted models. It is still possible to extract one-pixel-wide skeleton graphs. As presented in Fig. 20 (rightmost), we introduced a novel framework, BlumNet, for object skeleton extraction from shapes and images Yang2022blumnet . Unlike skeleton heat map regression with existing CNN-based methods, BlumNet decomposes a skeleton graph into structured components and simplifies the skeleton extraction problem into graph component detection and assembling tasks. Consequently, the quality of extracted skeletons is dramatically improved since BlumNet directly outputs slim, low-noise, and identified skeleton graphs. It should be noted that BlumNet was trained using our GTs from SkeView. Figure 21: Visual comparison of HED Xie2015HED , SRN Ke2017SSR , Hi-Fi Zhao2018HHF , DeepFlux Wang2019DFS , and Ada-LSN Liu2020ALS skeletons, trained using our GTs. Table 6: Skeleton detection performance (F1 scores) comparison of 6 methods trained and tested using GTs from Original and SkeView on SK1491. | HED Xie2015HED | SRN Ke2017SSR | Hi-Fi Zhao2018HHF | DeepFlux Wang2019DFS | Ada-LSN Liu2020ALS ---|---|---|---|---|--- GT (Original) | 0.497 | 0.678 | 0.724 | 0.732 | 0.786 GT (SkeView) | 0.494 | 0.677 | 0.727 | 0.752 | 0.798 Fig. 21 visually compares the detected skeletons on two sample images from SK1491 and SmithsonianLeaves. Specifically, skeletons from DeepFlux Wang2019DFS and Ada-LSN Liu2020ALS are slimmer and these two methods yield better performances in terms of continuity and completeness. Skeletons produced by HED are not well-integrated and are prone to noise. Though SRN and Hi-Fi yield clearer skeletons, they are not smooth and contain many false positive points. It is also interesting to find that most methods yield a better F1 score using the training and testing data from our GTs. For instance, the Fl score of DeepFlux Wang2019DFS method improved from 0.732 to 0.752 on the SK1491 dataset (Table 6). This is because, comparing to the original GTs in the SK1491, our GTs are more consistent and possess better completeness in representing objects’ geometrical features (see Fig. 14). As a result, these CNN-based models converge faster and generalize easier. ### 5.3 Skeleton Matching In practice, the similarity of a shape pair can be calculated by matching their skeleton graphs. For instance, the Path Similarity (PS) method Bai2008PSS aims to match skeleton endpoints using the similarity between their corresponding skeleton paths (Fig. 22 (left)). The final shape similarity is calculated by summing up the similarity of corresponded endpoints. Based on the idea of shape context Belongie2002SMA , the skeleton context (SC) method Kamani2016SMU employs log-polar histograms to describe sample skeleton points along the paths for matching (Fig. 22 (right)). The final shape similarity is computed by adding the distances between the matched skeleton points. 
We employ these methods to build baselines using our GTs on the Kimia216, MPEG400, Tetrapod120 and SwedishLeaves datasets (as they are actively used in this scenario). Specifically, we use each shape as a query and retrieve ten most similar shapes from the whole dataset according to their similarities. As shown in Table 7, the final value in each position (columns) is the total number of occurrences that matches query class at that position based on all shapes within a dataset. For example, the third position of the PS method on Kimia216 dataset shows that from 216 retrieved results in this position, 197 shapes have the same class as their query. We can see that the PS Bai2008PSS method achieves the best result among all the datasets, across all positions. This is because the PS method employs geodesic paths for matching endpoints, which makes it robust to scaling, rotation, occlusion, and also to same-class-objects of different topological structures. Figure 22: Skeleton-based shape matching algorithms in our evaluation. Left: Path Similarity (PS) Bai2008PSS . Right: Skeleton Context (SC) Kamani2016SMU . Table 7: Comparison of two skeleton-based shape retrieval methods using our GTs. PS: path similarity. SC: skeleton context. Kimia216 | 1st | 2nd | 3rd | 4th | 5th | 6th | 7th | 8th | 9th | 10th ---|---|---|---|---|---|---|---|---|---|--- PS Bai2008PSS | 216 | 206 | 197 | 185 | 173 | 169 | 154 | 150 | 140 | 119 SC Kamani2016SMU | 196 | 91 | 80 | 82 | 77 | 71 | 70 | 72 | 61 | 57 MPEG400 | 1st | 2nd | 3rd | 4th | 5th | 6th | 7th | 8th | 9th | 10th PS Bai2008PSS | 400 | 384 | 380 | 367 | 363 | 351 | 339 | 338 | 327 | 318 SC Kamani2016SMU | 382 | 131 | 148 | 164 | 162 | 150 | 144 | 142 | 117 | 123 Tetrapod120 | 1st | 2nd | 3rd | 4th | 5th | 6th | 7th | 8th | 9th | 10th PS Bai2008PSS | 120 | 110 | 90 | 86 | 73 | 73 | 72 | 55 | 55 | 51 SC Kamani2016SMU | 113 | 48 | 33 | 24 | 34 | 32 | 18 | 31 | 23 | 21 SwedishLeaves | 1st | 2nd | 3rd | 4th | 5th | 6th | 7th | 8th | 9th | 10th PS Bai2008PSS | 1125 | 974 | 914 | 881 | 868 | 833 | 813 | 790 | 785 | 767 SC Kamani2016SMU | 1057 | 220 | 207 | 203 | 198 | 190 | 204 | 198 | 169 | 184 For the MPEG7 and Animal2000 datasets, the Bulls-eye Scores (BES) Latecki2000SDF are normally computed for quantitative evaluation. BES is calculated as a ratio between the correctly matched shapes to the total number of possible matches. For instance, as there are 1,400 and 2,000 queries in MPEG7 (20 in each class) and Animal2000 (100 in each class) datasets, the total number of possible matches are $1400\times 20$ and $2000\times 100$, respectively. Accordingly, we employ GTs for skeleton-based shape retrieval. In addition to the PS and SC algorithms, the High-order (HO) matching method proposed in Yang2020TAS is also used in the evaluation. The HO method fuses similarities between the skeleton graphs with their geometrical relations characterized by multiple skeleton endpoints. Motivated by Yang2020TAS ; Bai2012COT ; Kontschieder2010BPS , experiments on both datasets are clustered into two groups: (1) pairwise matching similar to the experiments in Table 7, and (2) context-based matching by increasing the discrimination between different classes within the shape manifold. For this, the Mutual $k$NN Graph (MG) Kontschieder2010BPS and Co-Transduction (CT) Bai2012COT methods are employed (Table 8). 
Table 8: Bulls-eye scores (BES, %) of three skeleton matching methods (PS: Path Similarity Bai2008PSS, SC: Skeleton Context Kamani2016SMU, and HO: High-order Matching Yang2020TAS) based on MPEG7 (M7) and Animal2000 (A2) GTs. MG (Mutual $k$NN Graph Kontschieder2010BPS) and CT (Co-Transduction Bai2012COT) are their context-based extensions. The best scores are in boldface.

| | PS | PS+MG | PS+CT | SC | SC+MG | SC+CT | HO | HO+MG | HO+CT |
|---|---|---|---|---|---|---|---|---|---|
| M7 | 62.96 | 75.46 | 80.98 | 13.67 | 9.40 | 13.20 | 78.74 | 83.22 | **87.28** |
| A2 | 24.26 | 29.52 | 34.27 | 8.88 | 6.54 | 8.32 | 34.14 | 37.95 | **40.19** |

For the pairwise experiments, we can clearly see that the HO method yields the best performance on both datasets. For the context experiments, we find that the BES improves after applying the MG and CT methods to the matching results of PS and HO. However, we find that both methods are ineffective on SC, with a decline in BES. The main reason is that the similarity values between skeletons calculated by the SC method are close to each other, which already results in poor shape retrieval performance: 13.67% and 8.88% on MPEG7 and Animal2000, respectively. As a consequence, the similarities within a class are easily confused with those across classes.

## 6 Discussion and Conclusion

We present a brief overview of the challenges revealed by the GT baselines and possible directions for future research.

### 6.1 Analysis of Challenges

Skeleton Extraction: We find that most shape skeleton extraction approaches cannot properly handle shapes with long and narrow (lathy) regions, for instance, the needle-like axopodia of actinophryids (EM200), the petioles of leaves (SwedishLeaves), and the antennae of insects (MPEG7). One possible solution is to generate their skeletons regionally, followed by integration and post-pruning steps. It is also interesting to note that most approaches are not guaranteed to preserve the topology of shapes containing holes. A simple way to resolve this issue is to fill the holes during the pre-processing step (a minimal example is sketched below, after Fig. 23). Connected shapes with this kind of complexity occur frequently in the real world but are rarely studied, e.g., class No. 10 in the SwedishLeaves dataset. Inspired by the theorems in Bai2007SPB, we suggest incorporating the boundary curves of both the shapes and their holes for skeleton extraction. For image skeleton extraction, the influence of the quality of the training data is evident. As a direct result of the consistent quality of our GTs, the F1 scores in Table 5 are generally higher than the originally reported results Xie2015HED; Ke2017SSR; Zhao2018HHF; Wang2019DFS; Liu2020ALS. However, it is desirable for the community to introduce higher-quality and larger-scale datasets. Our GTs also capture richer dynamics that cannot be learned from existing datasets. For instance, we find that all CNN-based methods in Table 5 are sensitive to image rotation, scaling, and flipping, although robustness to such transformations is a fundamental requirement for skeleton extraction in the real world. However, some issues remain inadequately addressed. As shown in Fig. 23, even for Ada-LSN Liu2020ALS trained with data augmentation (rotation, scaling and flipping) on the SmithsonianLeaves dataset, we still find that some major skeleton branches are shortened, disconnected, or erased. To address this, junction points, endpoints, and the skeleton graph could be encoded to constrain skeleton regression during training.

Figure 23: Evaluation of Ada-LSN Liu2020ALS on rotation, scaling and flipping.
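For the hole-filling pre-processing suggested above, a minimal SciPy sketch is shown below; whether filling is appropriate depends on the application, since it discards the hole boundaries that the Bai2007SPB-inspired alternative would retain.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def fill_shape_holes(mask: np.ndarray) -> np.ndarray:
    """Fill interior holes of a binary shape mask before skeletonization, so that
    hole boundaries do not alter the topology of the extracted skeleton."""
    return binary_fill_holes(mask.astype(bool))

# Example: a ring (a shape with one hole) becomes a solid disk.
yy, xx = np.mgrid[:64, :64]
r = np.hypot(yy - 32, xx - 32)
ring = (r < 20) & (r > 10)
print(ring.sum(), fill_shape_holes(ring).sum())   # the filled mask has more foreground pixels
```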
Figure 24: Poor detection samples from the HED Xie2015HED, SRN Ke2017SSR, Hi-Fi Zhao2018HHF, DeepFlux Wang2019DFS and Ada-LSN Liu2020ALS methods.

Skeleton Matching: We further evaluate the pairwise matching algorithms of Table 8 using the ArticulatedShapes dataset, because its GTs contain closed branches (Fig. 17 (b) (top)) arising from holes in tools such as scissors. We found that neither the PS Bai2008PSS nor the HO Yang2020TAS algorithm can properly deal with skeleton graphs containing cycles. Though the SC Kamani2016SMU algorithm can be applied to such skeletons, it yields poor performance, with a BES of only 13.67% and 8.88% on MPEG7 and Animal2000, respectively. Therefore, we propose to improve the existing matching algorithms to support skeletons with closed branches. In Fig. 24, we see that most skeletons predicted in images are discontinuous, have varying widths, and contain false positive points. In such cases, it is difficult to apply the existing algorithms for matching, classification, and retrieval; in particular, these algorithms have been designed for one-pixel-wide skeletons. To facilitate the use of image skeletons in practice, we propose to explore post-processing algorithms that bridge the gap between image skeletons and the existing matching algorithms. Thus, a significant amount of future research is necessary before image skeletons can become practically robust for many real-world objects.

### 6.2 Conclusion

We introduced a heuristic strategy for skeleton GT extraction in shape and image datasets. Our strategy is grounded in both theory and an empirical investigation of human perception of skeleton complexity. To facilitate this, we developed a tool, SkeView, for skeleton GT extraction and used it on 17 existing image and shape datasets. We also systematically evaluated the existing skeleton extraction and matching algorithms to generate valid baselines using our GTs. Experiments demonstrate that our GT is consistent and properly balances the trade-off between skeleton simplicity and completeness. We expect that the release of SkeView and the GTs to the community will benefit future research, particularly in addressing practical real-world challenges in CNN-based skeleton detectors and matching algorithms.

Acknowledgments

Research activities leading to this work have been supported by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (Grant Number: 22KJB520008) and the Research Fund of Clobotics (Grant Number: KB1801ZW201609-03). We would like to thank Zixuan Chen from Darmstadt University of Technology (Germany) for his help in assembling the first version of SkeView.

## References

* (1) Panichev, O., et al.: U-net based convolutional neural network for skeleton extraction. In: IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 1–4 (2019) * (2) Wang, Y., Xu, Y., Tsogkas, S., Bai, X., Dickinson, S., Siddiqi, K.: Deepflux for skeletons in the wild. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 5287–5296 (2019) * (3) Liu, C., Tian, Y., Chen, Z., Jiao, J., Ye, Q.: Adaptive linear span network for object skeleton detection. IEEE Transactions on Image Processing 30, 5096–5108 (2021) * (4) Nathan, S., Kansal, P.: Skeletonnetv2: A dense channel attention blocks for skeleton extraction. In: IEEE International Conference on Computer Vision Workshops, pp. 2142–2149 (2021) * (5) Bai, X., Latecki, L.J., Liu, W.: Skeleton pruning by contour partitioning with discrete curve evolution.
IEEE Transactions on Pattern Analysis and Machine Intelligence 29(3), 449–462 (2007) * (6) Bai, X., Latecki, L.J.: Path similarity skeleton graph matching. IEEE Transactions on Pattern Analysis and Machine Intelligence 30(7), 1282–1292 (2008) * (7) Bai, X., Liu, W., Tu, Z.: Integrating contour and skeleton for shape classification. In: IEEE International Conference on Computer Vision Workshops, pp. 360–367 (2009) * (8) Giesen, J., Miklos, B., Pauly, M., Wormser, C.: The scale axis transform. In: Proceedings of the Twenty-fifth Annual Symposium on Computational Geometry, pp. 106–115 (2009) * (9) Liu, L., Chambers, E.W., Letscher, D., Ju, T.: Extended grassfire transform on medial axes of 2d shapes. Computer-Aided Design 43(11), 1496–1505 (2011) * (10) Telea, A., Wijk, J.J.v.: An augmented fast marching method for computing skeletons and centerlines. In: Proceedings of VisSym, pp. 251–258 (2002) * (11) Jalba, A.C., Sobiecki, A., Telea, A.C.: An unified multiscale framework for planar, surface, and curve skeletonization. IEEE Transactions on Pattern Analysis and Machine Intelligence 38(1), 30–45 (2015) * (12) Zhang, T.Y., Suen, C.Y.: A fast parallel algorithm for thinning digital patterns. Communications of the ACM 27(3), 236–239 (1984) * (13) Ge, Y., Fitzpatrick, J.M.: On the generation of skeletons from discrete euclidean distance maps. IEEE Transactions on Pattern Analysis and Machine Intelligence 18(11), 1055–1066 (1996) * (14) Firestone, C., Scholl, B.J.: Please tap the shape, anywhere you like: Shape skeletons in human vision revealed by an exceedingly simple measure. Psychological Science 25(2), 377–386 (2014) * (15) Cornea, N.D., Silver, D., Min, P.: Curve-skeleton properties, applications and algorithms. IEEE Transactions on Visualization and Computer Graphics 13(3), 530–548 (2007) * (16) Shen, W., Bai, X., Yang, X., Latecki, L.J.: Skeleton pruning as trade-off between skeleton simplicity and reconstruction error. Science China Information Sciences 56(4), 1–14 (2013) * (17) Li, Y., Qu, H.: Lsd and skeleton extraction combined with farmland ridge detection. In: International Conference on Intelligent and Interactive Systems and Applications, pp. 446–453 (2018) * (18) Shokouh, G.-S., Magnier, B., Xu, B., Montesinos, P.: Ridge detection by image filtering techniques: a review and an objective analysis. Pattern Recognition and Image Analysis 31(3), 551–570 (2021) * (19) Zhang, Z., Shen, W., Yao, C., Bai, X.: Symmetry-based text line detection in natural scenes. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 2558–2567 (2015) * (20) Bag, S., Bhowmick, P., Harit, G.: Recognition of bengali handwritten characters using skeletal convexity and dynamic programming. In: International Conference on Emerging Applications of Information Technology, pp. 265–268 (2011) * (21) Bucksch, A.: A practical introduction to skeletons for the plant sciences. Applications in Plant Sciences 2(8), 1400005 (2014) * (22) Sharma, V., Jääskö, K., Yiannacou, K., Koivikko, A., Lampinen, V., Sariola, V.: Performance comparison of fast, transparent and biotic heaters based on leaf skeletons. Advanced Engineering Materials, 1–11 (2021) * (23) Saha, P.K., Borgefors, G., di Baja, G.S.: A survey on skeletonization algorithms and their applications. Pattern Recognition Letters 76, 3–12 (2016) * (24) Ilke, D., et al.: Skelneton 2019: Dataset and challenge on deep learning for geometric shape understanding. In: IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 
1–9 (2019) * (25) Lowet, A.S., Firestone, C., Scholl, B.J.: Seeing structure: Shape skeletons modulate perceived similarity. Attention, Perception, & Psychophysics 80(5), 1278–1289 (2018) * (26) Yang, C., Tiebe, O., Grzegorzek, M., Indurkhya, B.: Investigations on skeleton completeness for skeleton-based shape matching. In: Signal Processing: Algorithms, Architectures, Arrangements, and Applications, pp. 113–118 (2016) * (27) Ling, H., Jacobs, D.W.: Shape classification using the inner-distance. IEEE Transactions on Pattern Analysis and Machine Intelligence 29(2), 286–299 (2007) * (28) Sebastian, T.B., Klein, P.N., Kimia, B.B.: Recognition of shapes by editing their shock graphs. IEEE Transactions on Pattern Analysis and Machine Intelligence 26(5), 550–571 (2004) * (29) Latecki, L.J., Lakamper, R., Eckhardt, T.: Shape descriptors for non-rigid shapes with a single closed contour. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 424–429 (2000) * (30) Yang, C., Tiebe, O., Pietsch, P., Feinen, C., Kelter, U., Grzegorzek, M.: Shape-based object retrieval by contour segment matching. In: IEEE International Conference on Image Processing, pp. 2202–2206 (2014) * (31) Söderkvist, O.: Computer vision classification of leaves from swedish trees. In: Master Thesis, Linköping University, pp. 1–74 (2001) * (32) Asian, C., Tari, S.: An axis-based representation for recognition. In: IEEE International Conference on Computer Vision, vol. 2, pp. 1339–1346 (2005) * (33) Yang, C., Tiebe, O., Shirahama, K., Grzegorzek, M.: Object matching with hierarchical skeletons. Pattern Recognition 55, 183–197 (2016) * (34) Shen, W., Zhao, K., Jiang, Y., Wang, Y., Zhang, Z., Bai, X.: Object skeleton extraction in natural images by fusing scale-associated deep side outputs. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 222–230 (2016) * (35) Shen, W., Zhao, K., Jiang, Y., Wang, Y., Bai, X., Yuille, A.: Deepskeleton: Learning multi-task scale-associated deep side outputs for object skeleton extraction in natural images. IEEE Transactions on Image Processing 26(11), 5298–5311 (2017) * (36) Tsogkas, S., Kokkinos, I.: Learning-based symmetry detection in natural images. In: European Conference on Computer Vision, pp. 41–54 (2012) * (37) Ke, W., Chen, J., Jiao, J., Zhao, G., Ye, Q.: Srn: Side-output residual network for object symmetry detection in the wild. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1068–1076 (2017) * (38) Yang, C., Li, C., Tiebe, O., Shirahama, K., Grzegorzek, M.: Shape-based classification of environmental microorganisms. In: International Conference on Pattern Recognition, pp. 3374–3379 (2014) * (39) Shen, W., Bai, X., Hu, Z., Zhang, Z.: Multiple instance subspace learning via partial random projection tree for local reflection symmetry in natural images. Pattern Recognition 52, 306–316 (2016) * (40) Durix, B., Chambon, S., Leonard, K., Mari, J.-L., Morin, G.: The propagated skeleton: A robust detail-preserving approach. In: International Conference on Discrete Geometry for Computer Imagery, pp. 343–354 (2019) * (41) Shen, W., Bai, X., Hu, R., Wang, H., Latecki, L.J.: Skeleton growing and pruning with bending potential ratio. Pattern Recognition 44(2), 196–209 (2011) * (42) Oliva, A., Torralba, A.: The role of context in object recognition. Trends in Cognitive Sciences 11(12), 520–527 (2007) * (43) Tversky, A.: Features of similarity. 
Psychological Review 84(4), 327–352 (1977) * (44) Blum, H.: A transformation for extracting new descriptors of shape. Models for Perception of Speech and Visual Forms, 362–380 (1967) * (45) Ogniewicz, R., Ilg, M.: Voronoi skeletons: theory and applications. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 63–69 (1992) * (46) Tagliasacchi, A., Delame, T., Spagnuolo, M., Amenta, N., Telea, A.: 3d skeletons: A state-of-the-art report. In: Computer Graphics Forum, vol. 35, pp. 573–597 (2016) * (47) Skov, R.B., Sherman, S.J.: Information-gathering processes: Diagnosticity, hypothesis-confirmatory strategies, and perceived hypothesis confirmation. Journal of Experimental Social Psychology 22(2), 93–121 (1986) * (48) Fanelli, D., Piazza, F.: Analysis and forecast of covid-19 spreading in china, italy and france. Chaos, Solitons & Fractals 134, 109761 (2020) * (49) Teichmann, L., Edwards, G., Baker, C.I.: Resolving visual motion through perceptual gaps. Trends in Cognitive Sciences 25(11), 978–991 (2021) * (50) He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: IEEE International Conference on Computer Vision, pp. 2961–2969 (2017) * (51) Lin, T.-Y., et al.: Microsoft coco: Common objects in context. In: European Conference on Computer Vision, pp. 740–755 (2014) * (52) Tsogkas, S.: Mid-level representations for modeling objects. PhD thesis, Université Paris Saclay (COmUE) (2016) * (53) Russell, B.C., Torralba, A., Murphy, K.P., Freeman, W.T.: Labelme: a database and web-based tool for image annotation. International Journal of Computer Vision 77(1-3), 157–173 (2008) * (54) Dasiopoulou, S., Giannakidou, E., Litos, G., Malasioti, P., Kompatsiaris, Y.: A survey of semantic image and video annotation tools. In: Knowledge-driven Multimedia Information Extraction and Ontology Evolution, pp. 196–239 (2011) * (55) Atienza, R., et al.: Pyramid u-network for skeleton extraction from shape points. In: IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 1–4 (2019) * (56) Martin, D., Fowlkes, C., Tal, D., Malik, J.: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: IEEE International Conference on Computer Vision, vol. 2, pp. 416–423 (2001) * (57) Everingham, M., Van Gool, L., Williams, C.K., Winn, J., Zisserman, A.: The pascal visual object classes (voc) challenge. International Journal of Computer Vision 88(2), 303–338 (2010) * (58) Li, C., Shirahama, K., Czajkowska, J., Grzegorzek, M., Ma, F., Zhou, B.: A multi-stage approach for automatic classification of environmental microorganisms. In: International Conference on Image Processing, Computer Vision, and Pattern Recognition, p. 1 (2013) * (59) Borenstein, E., Ullman, S.: Class-specific, top-down segmentation. In: European Conference on Computer Vision, pp. 109–122 (2002) * (60) Yang, C., Indurkhya, B., See, J., Grzegorzek, M.: Towards automatic skeleton extraction with skeleton grafting. IEEE Transactions on Visualization and Computer Graphics, 1–1 (2020) * (61) Krinidis, S., Chatzis, V.: A skeleton family generator via physics-based deformable models. IEEE Transactions on Image Processing 18(1), 1–11 (2009) * (62) Zhang, Y., Sang, L., Grzegorzek, M., See, J., Yang, C.: Blumnet: Graph component detection for object skeleton extraction. In: ACM International Conference on Multimedia, pp. 5527–5536 (2022) * (63) Jiang, N., et al.: Feature hourglass network for skeleton detection. 
In: IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 1–5 (2019) * (64) Song, S., Bae, H., Park, J.: Disco - u-net based autoencoder architecture with dual input streams for skeleton image drawing. In: IEEE International Conference on Computer Vision Workshops, pp. 2128–2135 (2021) * (65) Tang, X., Zheng, R., Wang, Y.: Distance and edge transform for skeleton extraction. In: IEEE International Conference on Computer Vision Workshops, pp. 2136–2141 (2021) * (66) Xie, S., Tu, Z.: Holistically-nested edge detection. In: IEEE International Conference on Computer Vision, pp. 1395–1403 (2015) * (67) Zhao, K., Shen, W., Gao, S., Li, D., Cheng, M.-M.: Hi-fi: hierarchical feature integration for skeleton detection. In: International Joint Conference on Artificial Intelligence, pp. 1191–1197 (2018) * (68) Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: International Conference on Learning Representations, pp. 1–19 (2018) * (69) Belongie, S., Malik, J., Puzicha, J.: Shape matching and object recognition using shape contexts. IEEE Transactions on Pattern Analysis and Machine Intelligence 24(4), 509–522 (2002) * (70) Kamani, M.M., Farhat, F., Wistar, S., Wang, J.Z.: Shape matching using skeleton context for automated bow echo detection. In: IEEE International Conference on Big Data, pp. 901–908 (2016) * (71) Bai, X., et al.: Co-transduction for shape retrieval. IEEE Transactions on Image Processing 21(5), 2747–2757 (2012) * (72) Kontschieder, P., et al.: Beyond pairwise shape similarity analysis. In: Asian Conference on Computer Vision, pp. 655–666 (2010)
# Scalable unsupervised alignment of general metric and non-metric structures Sanketh Vedula Technion – Israel Institute of Technology Institute of Science and Technology, Austria Valentino Maiorca Institute of Science and Technology, Austria Sapienza University of Rome Lorenzo Basile Institute of Science and Technology, Austria University of Trieste Francesco Locatello Institute of Science and Technology, Austria Equal advising. Alex Bronstein Technion – Israel Institute of Technology Institute of Science and Technology, Austria Equal advising. ###### Abstract Aligning data from different domains is a fundamental problem in machine learning with broad applications across very different areas, most notably aligning experimental readouts in single-cell multiomics. Mathematically, this problem can be formulated as the minimization of disagreement of pair-wise quantities such as distances and is related to the Gromov-Hausdorff and Gromov-Wasserstein distances. Computationally, it is a quadratic assignment problem (QAP) that is known to be NP-hard. Prior works attempted to solve the QAP directly with entropic or low-rank regularization on the permutation, which is computationally tractable only for modestly-sized inputs, and encode only limited inductive bias related to the domains being aligned. We consider the alignment of metric structures formulated as a discrete Gromov-Wasserstein problem and instead of solving the QAP directly, we propose to learn a related well-scalable linear assignment problem (LAP) whose solution is also a minimizer of the QAP. We also show a flexible extension of the proposed framework to general non-metric dissimilarities through differentiable ranks. We extensively evaluate our approach on synthetic and real datasets from single-cell multiomics and neural latent spaces, achieving state-of-the-art performance while being conceptually and computationally simple. ## 1 Introduction Unsupervised alignment of data that are related, yet not directly comparable, is a fundamental problem in machine learning. This problem is ubiquitous across a multitude of tasks such as non-rigid shape correspondence in computer vision [8, 32], unlabeled sensing in signal processing [59, 23], and latent space communication in representation learning [46, 43]. From an application perspective, we are particularly interested in single-cell biology. In fact, the development of single-cell sequencing technologies has led to the profiling of different molecular aspects within the cell at an unparalleled resolution. Profiling techniques have been developed to assay gene expression [38], chromatin accessibility and 3D conformation [29, 19], DNA methylation [28], and histone modifications [50]. The analysis of genome [47, 64], transcriptome [57, 31], and DNA methylation [54, 31] profiles has led to enhanced understanding of the heterogeneity across cell populations. The development of high-throughput sequencing [42, 35, 63], and spatial transcriptomics [52] technologies further enabled molecular profiling of cells at a high temporal and spatial resolution. One of the central problems within single-cell multiomics is integrating data from different molecular profiles, which is crucial in understanding joint regulatory mechanisms within the cell. Most single-cell sequencing techniques are invasive; thus, carrying out multiple assays on the same cell is rarely possible. 
While experimental co-assaying techniques are an active area of research [12, 39], they currently lack the high throughput of their single-assay counterparts. Computationally integrating data from different experimental modalities is, therefore, an important problem, and is the focus of the current paper. Using the formalism of the Gromov-Hausdorff (GH) [30] and Gromov-Wasserstein (GW) [45] distances, unsupervised alignment can be formulated as the minimization of the disagreement in pair-wise distances. Given two point clouds, both the GH and GW problems aim to find an assignment that is invariant to distance-preserving transformations (isometries) of the point clouds. GH seeks an exact point-wise assignment and can be shown to be a quadratic assignment problem (QAP), which is known to be NP-hard [9] and, thus, computationally intractable. GW relaxes the GH problem to find a soft assignment, and it is more tractable in practice. The most common approach to solving QAP relaxations like GW is by solving a sequence of linear assignment problems (LAPs) [26] or entropy-regularized optimal transport ($\epsilon$-OT) [16] problems. This approach, coupled with the idea of kernel matching and, specifically, simulated annealing of kernel matrices, has proven very successful in shape analysis, practically rendering non-rigid shape correspondence a solved problem [61, 44]. For more general, less structured, and higher-dimensional data, recent works have aimed to accelerate the GW solver by (i) reducing the problem size by applying recursive clustering [6] or through the quantization of the input dissimilarities [14]; and (ii) imposing low-rank constraints on the pairwise distance matrices and the assignment matrix within the internal $\epsilon$-OT solver [53]. Specifically on the problem of unsupervised alignment of single-cell multiomic data, GW solvers have already shown promise. Nitzan _et al._ [48] showed that they could map spatial coordinates in 2D tissues that were obtained with fluorescence in situ hybridization (FISH) to gene expression data. More recently, Demetci _et al._ [18] demonstrated that GW solvers outperform other unsupervised alignment approaches on real data generated by the SNARE-seq assay [10], which links chromatin accessibility to gene expression. Unfortunately, existing solvers have several limitations, including poor scalability to very large ($N\sim 10^{4}$) datasets, convergence to local minima, and lack of inductivity, in the sense that the solver has to be run anew once new data are obtained. This paper proposes remedies to these shortcomings.

Contributions. In this work, we introduce a new framework for solving GW-like problems. The core idea of our approach is to learn the cost of an OT problem (essentially, a LAP) whose solution is also the minimizer of the GW problem (essentially, a QAP). Instead of explicitly learning the cost matrix for the given set of samples, we propose to implicitly parametrize the cost as a ground cost measured on neural network embeddings of the points that are being aligned. In order to learn the neural networks parametrizing the cost, we render the entropy-regularized OT problem as an implicitly differentiable layer using the methodology proposed in [22], and demand that the soft assignment produced by $\epsilon$-OT minimizes the GW objective. This framework offers unique advantages over the standard approach of solving GW as a sequence of LAPs. Firstly, our method is inductive.
Since we implicitly parametrize the cost with neural networks, when we encounter new pairs of unaligned samples at inference, we simply need to solve an $\epsilon$-OT problem on the embeddings produced by our trained network. This is in contrast to all the other GW solvers that, to the best of our knowledge, are transductive and would need to solve the GW problem anew by augmenting the test points. Secondly, our framework is scalable requiring to only solve a point-wise $\epsilon$-OT problem at inference. Compared to GW, $\epsilon$-OT is far simpler, and efficient solvers can be employed to solve this problem at scale [16, 25]. Thirdly, our framework is gradient descent-based and is, therefore, more expressive and general, as it is straightforward to induce additional domain knowledge into the problem or impose additional regularization on the minimizer. Furthermore, it is straightforward to extend our method to the semi-supervised setting where a partial correspondence is known, and to the fused GW [60] setting where a shared attribute is provided in both domains. Leveraging the advantages of the proposed framework, we propose several novel extensions. Firstly, we demonstrate, for the first time, solving arbitrary non-metric assignment problems. To this end, we propose a new objective that matches distance ranks instead of the absolute distances themselves and demonstrate that it is more effective in single-cell multiomic alignment. The standard GW solvers rely on the linearization of QAPs, and it is unclear how they can be extended to handle more complex objectives such as those involving ranking. Secondly, inspired by techniques in geometric matrix completion [34, 7], by interpreting the learned cost as a signal on the product manifold of both domains, we impose a regularization that demands that the cost is smooth on its domain. This is intuitive because similar samples in one domain incur a similar cost with respect to the samples from the other domain, and vice- versa. Thirdly, in order to robustify training through $\epsilon$-OT solvers, we propose a simulated-annealing–based approach allowing tuning of the regularization coefficient in the Sinkhorn algorithm during the training process. We evaluate our method both in inductive and transductive settings, on synthetic and real data. We demonstrated in an inductive setting our solver generalizes and scales to large sample sizes. We demonstrate that it outperforms the entropic GW solver on the SNARE-seq data from [11] and on human bonemarrow scATAC vs. scRNA mapping task proposed in Luecken et al. [41]. ## 2 Background and closely related works in Optimal Transport Given two sets of points $\\{\mathbf{x}_{1},\mathbf{x}_{2},\ldots\mathbf{x}_{N}\\}\in\mathcal{X}$ and $\\{\mathbf{y}_{1},\ldots\mathbf{y}_{N}\\}\in\mathcal{Y}$, the goal of unpaired alignment is to find a point-wise correspondence $\mathbf{P}\in\mathcal{P}^{N}$ such that each point in $\mathcal{X}$ is mapped to a point in $\mathcal{Y}$, and vice-versa, where $\mathcal{P}^{N}$ is the space of permutations. The central theme of metric-based alignment approaches (GH and GW) is to compare the sets of points as metric spaces. $\mathcal{X}$ and $\mathcal{Y}$ are considered similar if the metrics between corresponding points, as defined by $\mathbf{P}$, are similar as measured in $\mathcal{X}$ and in $\mathcal{Y}$. 
Denote by $d_{\mathcal{X}}$ and $d_{\mathcal{Y}}$ the metrics associated to $\mathcal{X}$ and $\mathcal{Y}$, and by $\mathbf{D}_{\mathcal{X}}\in\mathbb{R}^{N\times N}$ and $\mathbf{D}_{\mathcal{Y}}\in\mathbb{R}^{N\times N}$ the corresponding pairwise distance matrices computed over the points from $\mathcal{X}$ and $\mathcal{Y}$, respectively. Let further $\mu$ and $\nu$ be the associated discrete probability measures on $\mathcal{X}$ and $\mathcal{Y}$, respectively. Depending on what the spaces represent, these can be uniform measures or incorporate discrete volume elements. ### Gromov-Hausdorff distance. The distortion induced by a correspondence $\mathbf{P}$ between $(\mathcal{X},d_{\mathcal{X}})$ and $(\mathcal{Y},d_{\mathcal{Y}})$ is defined as $\text{dis}(\mathbf{P})=\|\mathbf{D}_{\mathcal{X}}-\mathbf{P}\mathbf{D}_{\mathcal{Y}}\mathbf{P}{{}^{\top}}\|_{\infty}.$ This measures how well the distances between the matched points are preserved. The Gromov-Hausdorff (GH) distance [30] is then defined as $d_{\text{GH}}((\mathcal{X},d_{\mathcal{X}}),(\mathcal{Y},d_{\mathcal{Y}}))=\min_{\mathbf{P}\in\mathcal{P}^{N}}\text{dis}(\mathbf{P}).$ (1) The optimization problem in Eq. 1 results in an integer linear program and is an NP-hard problem [9]. Therefore, it is computationally intractable. ### Gromov-Wasserstein distance. Mémoli [45] proposed relaxing the constraint on $\mathbf{P}$ from an exact assignment defined over $\mathcal{P}^{N}$ to a probabilistic (soft) assignment, i.e., to the space of couplings with marginals $\mu$ and $\nu$ denoted by $U(\mu,\nu):=\\{\mathbf{\Pi}\in\mathbb{R}_{+}^{N\times N}\,|\,\mathbf{\Pi}\vec{1}_{N}=\bm{\mu},\mathbf{\Pi}^{{}^{\top}}\vec{1}_{N}=\bm{\nu}\\}$. Using this relaxation, the squared Gromov-Wasserstein distance between discrete metric spaces is defined as $d^{2}_{\text{GW}}=\min_{\mathbf{\Pi}\in U(\mu,\nu)}\sum_{i,j,i^{\prime},j^{\prime}}\left(d_{\mathcal{X}}(\mathbf{x}_{i},\mathbf{x}_{i^{\prime}})-d_{\mathcal{Y}}(\mathbf{y}_{i},\mathbf{y}_{i^{\prime}})\right)^{2}\pi_{ij}\pi_{i^{\prime}j^{\prime}}=\min_{\mathbf{\Pi}\in U(\mu,\nu)}\|\mathbf{D}_{\mathcal{X}}-\mathbf{\Pi}\mathbf{D}_{\mathcal{Y}}\mathbf{\Pi}{{}^{\top}}\|_{\mathrm{F}}^{2}.$ (2) To avoid confusion, we reserve the notation $\mathbf{P}$ to the true permutation matrix, while denoting the “soft" assignment by $\mathbf{\Pi}$. Notice that the definition of the GW distance results in a quadratic function in $\mathbf{\Pi}$; thus, it is referred to as the quadratic assignment problem. Alternative relaxations to the GH problem exist based on semi- definite programming (SDP) [62], but due to the poor scalability of SDP problems, they do not apply to the scales discussed in this paper. ### Optimal transport. Aligning data that lie within the same space is a linear optimal transport (OT) problem [51]. Given two sets of points $\\{\mathbf{x}_{i}\\}_{i=1}^{N}$ and $\\{\mathbf{x}^{\prime}_{j}\\}_{i=1}^{N}$ in the same space $\mathcal{X}$ with two discrete measures $\mu$ and $\nu$, respectively, the OT problem is defined as the minimization of $\sum_{i,j}\pi_{i,j}c(\mathbf{x}_{i},\mathbf{x}^{\prime}_{j})$, such that $\mathbf{\Pi}$ satisfies marginal constraints $U(\mu,\nu)$ and $c$ defines transport cost (often, $c(\bm{x},\bm{x}^{\prime})=d_{\mathcal{X}}(\bm{x},\bm{x}^{\prime})$). Note that the objective is linear in $\mathbf{\Pi}$, in contrast to GW (Eq. 2), where it is quadratic. 
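To make the quadratic nature of Eq. 2 concrete, the following is a minimal NumPy sketch that evaluates the discrete GW objective for a given coupling, using the standard expansion of the four-index sum into matrix products; the toy isometry check at the end is purely illustrative.

```python
import numpy as np

def gw_energy(Dx: np.ndarray, Dy: np.ndarray, P: np.ndarray,
              mu: np.ndarray, nu: np.ndarray) -> float:
    """Discrete GW objective of Eq. 2 for a coupling P with marginals (mu, nu):
    sum_{i,j,i',j'} (Dx[i,i'] - Dy[j,j'])^2 P[i,j] P[i',j'], computed via its
    expansion into a marginal-dependent constant and the cross term <P, Dx P Dy>."""
    const = mu @ (Dx ** 2) @ mu + nu @ (Dy ** 2) @ nu   # depends only on the marginals
    cross = np.sum(P * (Dx @ P @ Dy))                   # <P, Dx P Dy>
    return float(const - 2.0 * cross)

# Sanity check: a permutation relabeling an isometric copy attains zero energy.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
perm = rng.permutation(50)
Dx = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Dy = Dx[np.ix_(perm, perm)]                             # distance matrix of the permuted cloud
P = np.zeros((50, 50)); P[perm, np.arange(50)] = 1.0 / 50
u = np.full(50, 1.0 / 50)
print(abs(gw_energy(Dx, Dy, P, u, u)) < 1e-9)           # True
```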
Entropy-regularized OT ($\epsilon$-OT) introduces an entropic regularization term, $\epsilon\langle\mathbf{\Pi},\log\mathbf{\Pi}\rangle$, that can be very efficiently solved using the Sinkhorn algorithm [16] (see Appendix for details). More recently, Eisenberger et al. [22] introduced differentiable Sinkhorn layers that uses implicit-differentiation [3] to cast the Sinkhorn algorithm as a differentiable block within larger auto-differentiation pipelines. They calculate the Jacobian of the resulting assignment matrix with respect to both the primal and dual variables of the entropic-regularized OT problem. While $\epsilon$-OT solvers (minimizing a point-wise loss) cannot directly solve the GW problem with its pair-wise loss, it is a crucial building block in the most efficient GW solver existing today, which is described below. ### Entropic Gromov-Wasserstein. In a similar spirit to $\epsilon$-OT, Solomon et al. [55] proposed to solve an entropy-regularized version of GW problem (Eq. 2). Peyré et al. [51] introduced a mirror-descent-based algorithm that iteratively linearizes the objective in Eq. 2 and then performs a projection onto $U(\mu,\nu)$ by solving an $\epsilon$-OT problem to obtain an assignment (see Appendix for details). This procedure is repeated for a number of iterations. Since each outer iteration involves solving an OT problem in the projection step, this quickly becomes expensive and intractable even in moderate sample sizes. In our experiments, we observed that entropic GW solvers result in out-of-memory for $N>25000$ even when running on optimized implementation from ott-jax [17] on a high-end GPU, whereas the implementation in POT [24], since it is CPU-based, is intractable already for $N>8000$. Scetbon et al. [53] proposed low-rank GW that imposes low-rank constraints both on the cost and assignment matrices as an alternative to entropic GW and demonstrated that it could provide speed-up compared to entropic counterpart. We observed that if the data violates the low-rank assumptions, as is generally true for distance matrices and was specifically the case in our real data experiments, the benefits from this approach become void. Explicitly imposing low-rank constraints led to a severe degradation in the quality of the estimated assignment. ## 3 Our approach to the GW problem Figure 1: Entropic Gromov-Wasserstein solver (left) solves a sequence of regularized optimal transport ($\epsilon$-OT) problems using the Sinkhorn algorithm. In contrast, the proposed approach learns, via a pair of embeddings, $f_{\theta}$ and $g_{\phi}$, the transport cost that directly produces the sought alignment $\bm{\Pi}^{\ast}$ by solving a single $\epsilon$-OT problem. While the learning of the embeddings still requires multiple calls to the $\epsilon$-OT solver, their cost is amortized at inference time. In order to scale GW solvers to large sample sizes, we start with the following question: can we find an entropic OT problem whose solution coincides with that of the entropic GW problem (Eq. 2)? The rationale is that, given an unpaired set of samples, if we determine an equivalent entropic OT problem, we can employ fast entropic OT solvers to calculate the assignment. One obvious problem that fits this criterion, by construction, is the entropic OT problem that is solved in the last iterate of the entropic GW solver. However, computing this problem would require iterating through the GW solver, and it is thus impractical. 
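For reference, the following is a minimal NumPy sketch of this baseline: the Sinkhorn projection for $\epsilon$-OT and the outer loop of the entropic GW solver that it is nested in (Fig. 1, left). It omits convergence checks and log-domain stabilization; the rescaling of the cost only changes the effective $\epsilon$, and optimized implementations are available in POT and ott-jax.

```python
import numpy as np

def sinkhorn(C: np.ndarray, mu: np.ndarray, nu: np.ndarray,
             eps: float = 0.05, n_iter: int = 500) -> np.ndarray:
    """Entropy-regularized OT via Sinkhorn scaling; returns a coupling in U(mu, nu)."""
    C = (C - C.min()) / max(C.max() - C.min(), 1e-16)    # rescale the cost; only rescales the effective eps
    K = np.exp(-C / eps)                                  # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)                                # alternating marginal-matching updates
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]

def entropic_gw(Dx: np.ndarray, Dy: np.ndarray, mu: np.ndarray, nu: np.ndarray,
                eps: float = 0.05, n_outer: int = 50) -> np.ndarray:
    """Entropic GW: repeatedly linearize the quadratic objective at the current coupling
    and project back onto U(mu, nu) with Sinkhorn."""
    P = np.outer(mu, nu)                                  # product coupling as initialization
    for _ in range(n_outer):
        C = -2.0 * Dx @ P @ Dy                            # gradient of the GW objective, up to terms
                                                          # that are constant under the marginal constraints
        P = sinkhorn(C, mu, nu, eps)
    return P
```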
By phrasing this question as an optimization problem, we get the following:

$\displaystyle\vec{\Pi}^{*}=$ $\displaystyle\operatorname*{arg\,min}_{\mathbf{C}}\,\left\|\mathbf{D}_{\mathcal{X}}-\vec{\Pi}(\mathbf{C})\mathbf{D}_{\mathcal{Y}}\vec{\Pi}^{{}^{\top}}(\mathbf{C})\right\|_{\mathrm{F}}^{2}$ (3) $\displaystyle\,\,\text{ s.t. }\vec{\Pi}(\mathbf{C})=\operatorname*{arg\,min}_{\vec{\Pi}\in U(\mu,\nu)}\langle\mathbf{\Pi},\mathbf{C}\rangle.$

It is a bilevel optimization problem: the inner problem is linear OT and produces an assignment that is optimal with respect to the cost $\mathbf{C}$, while the outer problem demands that the resulting $\mathbf{\Pi}(\mathbf{C})$ is GW-optimal, i.e., that it aligns the metrics $\mathbf{D}_{\mathcal{X}}$ and $\mathbf{D}_{\mathcal{Y}}$. While seemingly elegant, Equation (3) has two major problems: (i) because $\mathbf{C}$ is unbounded, this objective is very unstable and difficult to optimize; (ii) more practically, Eq. 3 results in a transductive approach: given a new set of unpaired samples, this problem needs to be solved anew, which is not scalable. To mitigate this, instead of optimizing the cost matrix $\mathbf{C}$ in Eq. 3, we propose to implicitly parametrize it as a pairwise cost measured on learned embeddings of the pointwise features $\mathbf{X}$ and $\mathbf{Y}$. This leads us to the following modified objective,

$\displaystyle\vec{\Pi}^{*}=$ $\displaystyle\operatorname*{arg\,min}_{\theta,\phi}\,\left\|\mathbf{D}_{\mathcal{X}}-\vec{\Pi}(\theta,\phi)\mathbf{D}_{\mathcal{Y}}\vec{\Pi}^{{}^{\top}}(\theta,\phi)\right\|_{\mathrm{F}}^{2}$ (4) $\displaystyle\,\,\text{ s.t. }\vec{\Pi}(\theta,\phi)=\operatorname*{arg\,min}_{\vec{\Pi}\in U(\mu,\nu)}\langle\mathbf{\Pi},{c}(f_{\theta}(\bm{\mathrm{X}}),g_{\phi}(\bm{\mathrm{Y}}))\rangle,$

where $f,g$ are learnable functions, modeled via neural networks, embedding $\mathbf{X}$ and $\mathbf{Y}$, respectively. It is important to emphasize that the cost is realized through the embeddings, while the function $c$ is fixed to the simple Euclidean ($c(\bm{z},\bm{z}^{\prime})=\|\bm{z}-\bm{z}^{\prime}\|^{2}$) or cosine ($c(\bm{z},\bm{z}^{\prime})=\bm{z}{{}^{\top}}\bm{z}^{\prime}$) form. We solve the above problem via gradient descent. In order to backpropagate gradients to $f$ and $g$, we first relax the inner problem to an $\epsilon$-OT problem, and then employ implicit differentiation [3] to calculate $\frac{\partial{\mathbf{\Pi}}}{\partial{c}}$ [22], which is backpropagated to update the weights of $f$ and $g$. From a geometric perspective, we are embedding the samples from $\mathcal{X}$ and $\mathcal{Y}$ into a common domain $\mathcal{Z}$, where the samples are OT-aligned with the same assignment that makes the metric spaces $(\mathcal{X},d_{\mathcal{X}})$ and $(\mathcal{Y},d_{\mathcal{Y}})$ GW-aligned. From a computational point of view, our framework can be viewed as an amortized entropic GW solver. Figure 1 presents the parallels between our solver and the entropic GW solver [55]. The ground cost measured on the embeddings, $c(f_{\theta}(\mathbf{X}),g_{\phi}(\mathbf{Y}))$, can be interpreted as the cost matrix $\mathbf{C}_{k+1}=\mathbf{D}_{\mathcal{X}}\vec{\Pi}_{k}\mathbf{D}_{\mathcal{Y}}$ (as depicted in Fig. 1) produced by running the entropic GW solver for $k$ iterations. Post training, the neural networks can be viewed as amortizing the GW iterations, in a similar spirit to recent amortized optimization techniques proposed for fast calculation of convex conjugates [4, 2].
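As an illustration of how Eq. 4 can be optimized in practice, below is a minimal JAX sketch. It differs from our actual implementation in several respects and should be read as a sketch under assumptions: it assumes uniform marginals and equally sized sample sets, uses a squared-Euclidean ground cost, backpropagates through unrolled log-domain Sinkhorn iterations rather than through the implicitly differentiated Sinkhorn layer of [22], rescales the coupling by $N$ so that the Frobenius objective of Eq. 4 is meaningful for a soft assignment, and uses illustrative network sizes and step sizes.

```python
import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp

def init_mlp(key, sizes):
    """Random initialization of a small MLP; `sizes` lists layer widths, e.g. [d_in, 128, 64]."""
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def mlp(params, x):
    for W, b in params[:-1]:
        x = jax.nn.relu(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def sinkhorn_coupling(C, eps, n_iter=50):
    """Log-domain Sinkhorn for uniform marginals. Gradients flow through the unrolled
    iterations (our implementation instead differentiates the layer implicitly)."""
    n, m = C.shape
    f, g = jnp.zeros(n), jnp.zeros(m)
    for _ in range(n_iter):
        f = eps * (-jnp.log(n) - logsumexp((g[None, :] - C) / eps, axis=1))
        g = eps * (-jnp.log(m) - logsumexp((f[:, None] - C) / eps, axis=0))
    return jnp.exp((f[:, None] + g[None, :] - C) / eps)

def gw_loss(params, X, Y, Dx, Dy, eps):
    """Outer objective of Eq. 4: the coupling induced by the learned cost should align Dx and Dy."""
    Zx, Zy = mlp(params["f"], X), mlp(params["g"], Y)
    C = jnp.sum((Zx[:, None, :] - Zy[None, :, :]) ** 2, axis=-1)   # squared-Euclidean ground cost
    P = X.shape[0] * sinkhorn_coupling(C, eps)                     # rescaled to a soft permutation
    return jnp.mean((Dx - P @ Dy @ P.T) ** 2)

@jax.jit
def train_step(params, X, Y, Dx, Dy, eps, lr=1e-3):
    loss, grads = jax.value_and_grad(gw_loss)(params, X, Y, Dx, Dy, eps)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads), loss

# Usage: params = {"f": init_mlp(kf, [dx, 128, 64]), "g": init_mlp(kg, [dy, 128, 64])};
# call train_step repeatedly on the unaligned features X, Y and distance matrices Dx, Dy.
```

In practice, one would replace the plain gradient step with an adaptive optimizer and anneal $\epsilon$ during training, as discussed in the sequel.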
From a practical standpoint, this results in an inductive GW solver. At inference, when a new set of unpaired samples from $\mathcal{X}$ and $\mathcal{Y}$ are encountered, we simply need to solve an entropic OT problem that is highly scalable. Moreover, since our solver is gradient-descent-based, it allows the flexibility to induce domain knowledge, additional regularization, and inductive biases on the assignment, on the cost, and in the neural networks $f,g$, respectively. We will discuss a few such examples in the sequel. Finally, while our approach may resemble the inverse OT (iOT) problem [21, 13] in the sense that it involves the learning of the transport cost, it greatly differs in the minimized objective. While iOT targets finding a cost realizing a given assignment (hence, requiring coupled data), our learning problem does not assume a known target permutation; instead, it tries to find one minimizing the pairwise distance disagreement on unaligned data. From this perspective, the proposed approach can be seen as a variational analog of the iOT problem. Figure 2: The proposed solver generalizes to unseen samples and scales to large-sample sizes post-training. In both top and bottom experiments, $\mathcal{X}$ and $\mathcal{Y}$ are ViT embeddings. The entropic GW solver can only operate in the transductive regime and runs out of memory for $N>25000$. Figure 3: Simulated annealing of $\epsilon$ and spectral geometric regularization are effective in stabilizing the solver and improving the accuracy of the assignment. Left: simulated annealing schedule used. Middle: distribution of the alignment error (measured as FOSCTTM) over $20$ runs with and without $\epsilon$-annealing. Right: distribution of the alignment error with and without the spectral geometric regularization of the transport cost. ## 4 Extensions While the objective function in Eq. 4 is easy to evaluate, the resulting optimization problem is still an NP-hard QAP. In practice, it is challenging to reach good local minima consistently without imposing further inductive biases. This is also true for the entropic GW solver [51] – while sequential linearization and projection via Sinkhorn algorithm works reasonably in practice, there exist no guarantees on its global convergence. In fact, there exist many scenarios where it fails to recover a meaningful local minimum. Here we introduce several regularization techniques in the problem in Eq. 4 to remediate poor convergence: (i) simulated annealing of the entropic regularization strength $\epsilon$, and (ii) spectral-geometric regularization of the OT cost. We also propose a new objective that matches distance ranks instead of distances themselves that can be employed as an alternative Eq. 4. ### Simulated annealing of $\epsilon$. While evaluating our solver on the scSNARE-seq data [10], where the goal is to align transcriptomic readouts against those of chromatin accessibility and the ground-truth is available thanks to a co-assaying technique developed by [10], we observed that our solver, while it is accurate on average, it is sensitive to the initialization of the neural networks $f$ and $g$. As a result of symmetries in the metric spaces of these data, we observed that the assignment sometimes consistently mismapped the cell line of GM12878 to H1, and vice- versa. The right panel of Fig. 3 depicts the distribution of alignment errors (lower is better) obtained by solving Eq. 4 with multiple random initializations of the embedding parameters $\theta$ and $\phi$. 
In this panel, the largest mode indeed corresponds to an accurate assignment, whereas the two other modes with larger errors represent the aforementioned symmetry-induced cell-line mismappings. We mitigate this problem by performing simulated annealing on the regularization strength $\epsilon$ of the entropic OT problem within the Sinkhorn layer. We propose a schedule for $\epsilon$ that starts high and is gradually decayed (see Fig. 3, left). Our rationale is that this results in a coarse-to-fine refinement of the learned cost (implicitly parametrized via $f$ and $g$) during training, and it is similar in spirit to the idea of a multi-scale version of kernel matching in shape correspondence problems [61, 44, 33]. When $\epsilon$ is high, the entropic regularization is strong, and the resulting assignment is “softer”. By scheduling $\epsilon$ from a large value to a small one, we demand that the learned cost matrix, and consequently the resulting assignment, be progressively refined during training. In practice, we observe that the proposed $\epsilon$-scheduling works remarkably well; it practically reduces the variance across seeds to zero and is effective in breaking the symmetries in the metric spaces that lead to bad local minima, making the solver more reliable (Fig. 3, middle).

### Spectral representation on graphs.

Before introducing our proposed spectral-geometric regularization, we provide a brief background on graphs. Readers familiar with this material may skip to the following paragraph. Let $\mathcal{G}=(V,E,\mathbf{\Omega})$ be a weighted graph with the vertex set $V$, edge set $E$, and weighted adjacency matrix $\mathbf{\Omega}$. The combinatorial graph Laplacian is defined as $\mathbf{L}=\mathbf{D}-\mathbf{\Omega}$, where $\mathbf{D}=\textrm{diag}(\mathbf{\Omega}\bm{1})$ is the degree matrix. Given a scalar-valued function $\mathbf{z}\in\mathbb{R}^{|V|}$ on the graph $\mathcal{G}$, the Dirichlet energy is defined to be $\mathbf{z}^{{}^{\top}}\mathbf{L}\mathbf{z}$, and it measures the smoothness of $\mathbf{z}$ on $\mathcal{G}$ [56]. Given two graphs $\mathcal{G}_{1}=(V_{1},E_{1},\mathbf{\Omega}_{1})$ and $\mathcal{G}_{2}=(V_{2},E_{2},\mathbf{\Omega}_{2})$, the Cartesian product of $\mathcal{G}_{1}$ and $\mathcal{G}_{2}$, denoted by $\mathcal{G}_{1}\,\Box\,\mathcal{G}_{2}$, is defined as the graph with the vertex set $V_{1}\times V_{2}$, in which two nodes $(u,v),(u^{\prime},v^{\prime})$ are adjacent if either $u=u^{\prime}$ and $(v,v^{\prime})\in E_{2}$, or $v=v^{\prime}$ and $(u,u^{\prime})\in E_{1}$. The Laplacian of $\mathcal{G}_{1}\,\Box\,\mathcal{G}_{2}$ is the tensor sum of the Laplacians $\mathbf{L}_{1}$ and $\mathbf{L}_{2}$, i.e., $\mathbf{L}_{\mathcal{G}_{1}\Box\mathcal{G}_{2}}=\mathbf{L}_{1}\oplus\mathbf{L}_{2}=\mathbf{L}_{1}\otimes\mathbf{I}+\mathbf{I}\otimes\mathbf{L}_{2}$. Denote the spectral decompositions of the Laplacians by $\mathbf{L}_{1}=\bm{\Phi}\bm{\Lambda}_{1}\bm{\Phi}^{{}^{\top}}$ and $\mathbf{L}_{2}=\bm{\Psi}\vec{\Lambda}_{2}\vec{\Psi}{{}^{\top}}$. A signal $\mathbf{Z}$ on the product graph $\mathcal{G}_{1}\,\Box\,\mathcal{G}_{2}$ can be represented in the bases of the individual Laplacians as $\mathbf{Z}=\vec{\Phi}\mathbf{F}\vec{\Psi}^{{}^{\top}}$, with the matrix of spectral coefficients $\mathbf{F}=\vec{\Phi}^{{}^{\top}}\mathbf{Z}\vec{\Psi}$.

### Spectral-geometric regularization of the OT cost.

We propose a spectral-geometric regularization on the learned OT cost that demands that “similar” items in $\mathcal{X}$ incur “similar” costs with respect to all items in $\mathcal{Y}$, and vice-versa.
To formally represent this notion, let $\mathcal{G}_{\mathcal{X}}=(\mathcal{X},\mathcal{E}_{\mathcal{X}},\mathbf{\Omega}_{\mathcal{X}})$ and $\mathcal{G}_{\mathcal{Y}}=(\mathcal{Y},\mathcal{E}_{\mathcal{Y}},\mathbf{\Omega}_{\mathcal{Y}})$ be two graphs inferred on $\mathcal{X}$ and $\mathcal{Y}$, respectively, and let $\mathbf{L}_{\mathcal{X}}$ and $\mathbf{L}_{\mathcal{Y}}$ be their corresponding graph Laplacians. We interpret the learned OT cost $\mathbf{C}=c(f_{\theta}(\mathbf{X}),g_{\phi}(\mathbf{Y}))$ from Eq. 4 as a signal on the product graph $\mathcal{G}_{\mathcal{X}}\,\Box\,\mathcal{G}_{\mathcal{Y}}$, and demand that $\mathbf{C}$ is smooth on $\mathcal{G}_{\mathcal{X}}\,\Box\,\mathcal{G}_{\mathcal{Y}}$. The latter smoothness can be expressed as the Dirichlet energy of $\mathbf{C}$ measured on $\mathcal{G}_{\mathcal{X}}\,\Box\,\mathcal{G}_{\mathcal{Y}}$,

$\mathcal{E}_{\text{sm}}=\mathrm{vec}(\mathbf{C})^{{}^{\top}}\left(\mathbf{L}_{\mathcal{X}}\otimes\mathbf{I}+\mathbf{I}\otimes\mathbf{L}_{\mathcal{Y}}\right)\mathrm{vec}(\mathbf{C})=\mathrm{trace}\left(\mathbf{C}^{{}^{\top}}\mathbf{L}_{\mathcal{X}}\mathbf{C}+\mathbf{C}\mathbf{L}_{\mathcal{Y}}\mathbf{C}^{{}^{\top}}\right),$ (5)

which is added to Eq. 4 as an additional regularization term. Figure 3 (right) demonstrates the effectiveness of the proposed spectral regularization on the task of aligning embeddings from neural latent spaces. From the spectral perspective, when the OT cost $\mathbf{C}$ is interpreted as a signal on $\mathcal{G}_{\mathcal{X}}\,\Box\,\mathcal{G}_{\mathcal{Y}}$, learning $\mathbf{C}$ from the pointwise features of the domains $\mathcal{X}$ and $\mathcal{Y}$ is equivalent to directly learning its functional-map representation. This makes our work intimately related to works that learn functional maps [40, 32, 61, 7, 34] in the shape correspondence and geometric matrix completion literature.

### Matching ranks instead of distances.

The choice of the comparison criterion for the pairwise distances crucially influences the usability of the GW problem in real applications. Consider, for example, two point clouds that differ only by a scale factor; since distances are not scale-invariant, solving Eq. 4 to match distances would produce meaningless results. As a remedy, we propose to match the ranks of the pairwise distances instead of their absolute values. Ranks preserve the order and are insensitive to scale or, more generally, to monotone transformations. This departs from the standard framework of GH and GW, which align metric spaces, and generalizes it to the broader problem of unpaired alignment by matching non-metric quantities. Since ranking is an inherently non-differentiable operation, we use the differentiable soft ranking operators introduced by Blondel et al. [5] in order to differentiate the objective with respect to the ranks. We optimize the following modified objective:

$\displaystyle\vec{\Pi}^{*}=$ $\displaystyle\operatorname*{arg\,min}_{\theta,\phi}\,\left\|\mathcal{R}_{\delta}\left(\mathbf{D}_{\mathcal{X}}\right)-\mathcal{R}_{\delta}\left(\vec{\Pi}(\theta,\phi)\mathbf{D}_{\mathcal{Y}}\vec{\Pi}^{{}^{\top}}(\theta,\phi)\right)\right\|_{\mathrm{F}}^{2}$ (6) $\displaystyle\,\,\text{ s.t. }\vec{\Pi}(\theta,\phi)=\operatorname*{arg\,min}_{\vec{\Pi}\in U(\mu,\nu)}\langle\mathbf{\Pi},{c}(f_{\theta}(\bm{\mathrm{X}}),g_{\phi}(\bm{\mathrm{Y}}))\rangle,$
where $\mathcal{R}_{\delta}$ is a soft-ranking operator applied separately to each row of the matrix, and $\delta$ controls the level of “softness” of the ranks. Because ranking is a nonlinear operation, the resulting problem is no longer quadratic in $\mathbf{\Pi}$; it is unclear how standard GW solvers could be adapted to such settings, which highlights the benefit of having a gradient-descent-based solver. Applying ranking to other groups of distances effectively results in a different GW-like distance. We defer the systematic exploration of this new family of distances to future work.

Further extensions. Although we do not explore it within this work, it is easy to see that (i) the proposed framework can be extended to a fused GW [60] setting by adding a linear objective to Eqs. 4 and 6; (ii) the rank of the OT cost can be controlled by modifying the dimension of the embeddings output by $f$ and $g$; and (iii) when partial supervision is available on the assignment (“semi-supervised” alignment), it can be incorporated into the loss as a data term.

## 5 Experiments

We split this experimental section into three parts. Firstly, we demonstrate that our solver works in the inductive setting and that it is much more scalable to large sample sizes in this setting. Secondly, we showcase experiments that demonstrate the effects of (i) simulated annealing of $\epsilon$, (ii) spectral-geometric regularization, and (iii) the ranking-based formulation. Thirdly, we demonstrate that the proposed solver, in the transductive setting, outperforms the entropic GW solver on two single-cell multiomics benchmarks. We use both real and synthetic data wherever appropriate.

Inductivity and scale. In order to evaluate the inductivity of the method and to benchmark it against the entropic GW solver, we consider two experiments: (i) when $\mathcal{X}$ and $\mathcal{Y}$ are isometric, and (ii) when $\mathcal{X}$ and $\mathcal{Y}$ are not exactly isometric. For the first experiment, we consider $\mathcal{X}$ to be CIFAR100 encodings obtained from a vision transformer [20]. We apply an orthogonal transformation to each element of $\mathcal{X}$ to generate $\mathcal{Y}$. We parametrize our encoders $f$ and $g$ as $3$-layer multi-layer perceptrons (MLPs), and optimize Eq. 4 with respect to their parameters on $200$ unaligned samples for $500$ iterations ($12$ seconds). Then, we evaluate our method in an inductive setting with an increasing number of unaligned samples available at inference, up to $N=45000$. We benchmark it against the GPU-accelerated entropic GW solver available from ott-jax [17]. The results are presented in the top panels of Figure 2. We measure accuracy as the fraction of samples whose predicted correspondence has the correct class label. We observe that both solvers recover the orthogonal transformation perfectly. Further, we can observe that the inductive solver, because it solves only an $\epsilon$-OT problem at inference, is much faster and more memory-efficient. The entropic GW solver, on the other hand, runs out of memory for $N>25000$. Note that the reported times do not include the time required to compute a geodesic distance matrix for both $\mathcal{X}$ and $\mathcal{Y}$, which is significantly time-consuming at large sample sizes (>10 mins for $N=20000$).
In contrast, using our solver does not require computing $\mathbf{D}_{\mathcal{X}}$ and $\mathbf{D}_{\mathcal{Y}}$ at inference. For the second experiment, we use the data from [43] and choose $\mathcal{X}$ to be ViT embeddings as in the previous experiment, while $\mathcal{Y}$ is set to be ViT embeddings generated from rescaled images. We train $f$ and $g$ for $1000$ iterations ($\sim 2$ minutes), using $1000$ unpaired samples during training. The results are presented in the bottom two panels of Fig. 2. They suggest, again, that our solver both generalizes well and scales gracefully with the sample size, whereas the entropic GW solver produced inferior results in this setting. These experiments corroborate our claim that our solver both attains high-quality solutions and scales well in the inductive regime.

Spectral-geometric regularization. For this experiment, we consider the above setting, where $\mathcal{X}$ and $\mathcal{Y}$ are two unaligned sets of embeddings obtained from a pre-trained vision transformer [20]. We set $f$ and $g$ to be 3-layer MLPs and solve Eq. 4 with and without the $\mathcal{E}_{\text{sm}}$ regularization (Eq. 5). We solve this problem on $20$ unaligned datasets drawn from $\mathcal{X}$ and $\mathcal{Y}$, each of size $N=1000$. Figure 3 (right panel) presents the accuracy of the assignment, measured as whether the predicted corresponding point belongs to the same class as the ground-truth correspondence. Notice that the geometric regularization improves the accuracy of the assignment (+20% in terms of mean accuracy over trials). Moreover, it also reduces the variance, thereby inducing a meaningful inductive bias into the solver.

Simulated annealing of $\epsilon$. As discussed in Section 4, we consider the scSNARE-seq data [11], which is a co-assay of transcriptome and chromatin accessibility measurements performed on $N=1047$ cells. We run our experiment with and without the proposed simulated annealing of $\epsilon$ for 20 random initializations of $f$ and $g$; the results are presented in Figure 3. We observe that using this schedule stabilizes the training process significantly. We used it as the default choice across all real-data experiments.

Ranking-based GW. Figure 5 (right panel) depicts the assignment produced by the ranking-based GW solver in an inductive setting. We observe that ranking-based GW outperforms the distance-based counterpart in the setting of single-cell multiomic alignment. Consequently, the results that we present in the sequel (Figures 4 and 6) use the ranking-based loss, and they outperform both the entropic GW solver and the distance-based variant of our solver.

Single-cell multiomic alignment. We consider two real-world datasets: (i) the scSNARE-seq data, which contains gene expression (RNA) and chromatin accessibility (ATAC) profiles from 1047 individual cells of four cell lines: H1, BJ, K562, and GM12878, with known ground truth thanks to the co-assaying technique developed by [11]. We obtained the processed RNA and ATAC features from Demetci et al. [18], whose method uses entropic GW to align these two modalities and serves as the baseline we evaluate against. (ii) The human bone marrow single-cell dataset released by Luecken et al. [41], which contains paired single-cell RNA-seq and ATAC-seq measurements. We obtained the processed data from moscot [36].
In the RNA space, we used a PCA embedding of 50 dimensions, and in the ATAC space, we used an LSI (latent semantic indexing) embedding followed by $L_{2}$ normalization. In the scSNARE-seq experiment, we used the entropic GW solver with the same hyperparameters used by [18] as the baseline; it was shown by [18] to outperform the other baselines for unpaired alignment on this data. In the bone marrow single-cell experiment, we compared to the entropic GW solver with the Euclidean metric and with the geodesic distance metric. To establish a fair baseline, following the methodology of Demetci et al. [18], for both baselines we perform a grid search over the $\epsilon$ used in the Sinkhorn iterations of the solver and over the $k$ of the $k$-NN graph constructed for geodesic computation (for the latter setting), and pick the hyperparameters with the lowest GW loss. For both the bone marrow data and the scSNARE-seq data, we observe that our ranking-based solver produces the best FOSCTTM score (see Appendix). On the bone marrow data, especially, our solver improves on the entropic GW solvers by a significant margin. The results of the scSNARE-seq alignment are presented in Figure 6 in the Appendix. In scSNARE-seq, the margin of improvement is smaller; this could be attributed to the limited diversity of cell lines and the small sample size of scSNARE-seq compared to the bone marrow data.

Figure 4: Qualitative and quantitative results on the human bone marrow single-cell dataset. Top plots depict the UMAP of the translated cells colored by domain (left) and by cell type (right). Bottom plots report the FOSCTTM metrics for $\mathcal{Y}$ projected onto $\mathcal{X}$ (left) and $\mathcal{X}$ projected onto $\mathcal{Y}$ (right).

## 6 Conclusion

In this paper, we presented a new scalable approach to the Gromov-Wasserstein problem. The GW loss is pair-wise and thus hard to minimize directly, yet simple to evaluate. The OT loss, on the other hand, is point-wise and thus simple to minimize efficiently. We showed practical approaches to learning data embeddings such that the solution of the corresponding OT problem minimizes the GW loss. Unlike existing GW solvers that optimize the assignment matrix or the corresponding dual variables directly, our optimization variables are the parameters of the embedding functions. In addition to better scalability in the transductive regime, the proposed approach is also inductive, as the computed embeddings can be applied to new data previously unseen in training. We further proposed regularization techniques demonstrating consistently better convergence. We emphasize that GW is an NP-hard problem, and no existing polynomial-time algorithm (including ours) is guaranteed to find its global minimum. However, we showed in many synthetic and real data experiments that the proposed solver is significantly more accurate and scalable. We also introduced a new distance between metric-measure spaces in which distance ranks are matched instead of the distances themselves, which is more appropriate for metric structures coming from distinct modalities that do not necessarily agree quantitatively. Being oblivious to any monotone transformation of the metric structure, this new distance can be applied to general non-metric dissimilarities in the spirit of non-metric multidimensional scaling (MDS) [15]. We defer to future studies the exploration of its geometric and topological properties.

### Limitations. 
Our current approach focuses on the discrete GW problem in which the correspondence is found explicitly. Future work should study the continuous setting, with the correspondence represented, e.g., in the form of a functional map [49] – an operator mapping functions on to $\mathcal{X}$ to functions on $\mathcal{Y}$ which can be represented efficiently using truncated bases of the product graph constructed on $\mathcal{X}\times\mathcal{Y}$. Another limitation is the use of full batches for the minimization of the GW loss, which restricts scalability in the transductive regime. Future studies should consider extending the proposed approach to the mini-batch setting, in the spirit of mini-batch optimal flow- matching [58], [37]. ## References * Alvarez-Melis and Jaakkola [2018] D. Alvarez-Melis and T. S. Jaakkola. Gromov-wasserstein alignment of word embedding spaces. _arXiv preprint arXiv:1809.00013_ , 2018. * Amos [2022] B. Amos. On amortizing convex conjugates for optimal transport. _arXiv preprint arXiv:2210.12153_ , 2022. * Amos and Kolter [2017] B. Amos and J. Z. Kolter. OptNet: Differentiable optimization as a layer in neural networks. In D. Precup and Y. W. Teh, editors, _Proceedings of the 34th International Conference on Machine Learning_ , volume 70 of _Proceedings of Machine Learning Research_ , pages 136–145. PMLR, 06–11 Aug 2017. URL https://proceedings.mlr.press/v70/amos17a.html. * Amos et al. [2023] B. Amos et al. Tutorial on amortized optimization. _Foundations and Trends® in Machine Learning_ , 16(5):592–732, 2023. * Blondel et al. [2020] M. Blondel, O. Teboul, Q. Berthet, and J. Djolonga. Fast differentiable sorting and ranking. In _International Conference on Machine Learning_ , pages 950–959. PMLR, 2020. * Blumberg et al. [2020] A. J. Blumberg, M. Carriere, M. A. Mandell, R. Rabadan, and S. Villar. Mrec: a fast and versatile framework for aligning and matching point clouds with applications to single cell molecular data. _arXiv preprint arXiv:2001.01666_ , 2020. * Boyarski et al. [2022] A. Boyarski, S. Vedula, and A. Bronstein. Spectral geometric matrix completion. In _Mathematical and Scientific Machine Learning_ , pages 172–196. PMLR, 2022. * Bronstein et al. [2006] A. M. Bronstein, M. M. Bronstein, and R. Kimmel. Generalized multidimensional scaling: a framework for isometry-invariant partial surface matching. _Proceedings of the National Academy of Sciences_ , 103(5):1168–1172, 2006. * Burkard et al. [1998] R. E. Burkard, E. Cela, P. M. Pardalos, and L. S. Pitsoulis. _The quadratic assignment problem_. Springer, 1998. * Chen et al. [2019a] S. Chen, B. B. Lake, and K. Zhang. High-throughput sequencing of the transcriptome and chromatin accessibility in the same cell. _Nature biotechnology_ , 37(12):1452–1457, 2019a. * Chen et al. [2019b] S. Chen, B. B. Lake, and K. Zhang. High-throughput sequencing of the transcriptome and chromatin accessibility in the same cell. _Nature biotechnology_ , 37(12):1452–1457, 2019b. * Cheow et al. [2016] L. F. Cheow, E. T. Courtois, Y. Tan, R. Viswanathan, Q. Xing, R. Z. Tan, D. S. Tan, P. Robson, Y.-H. Loh, S. R. Quake, et al. Single-cell multimodal profiling reveals cellular epigenetic heterogeneity. _Nature methods_ , 13(10):833–836, 2016. * Chiu et al. [2022] W.-T. Chiu, P. Wang, and P. Shafto. Discrete probabilistic inverse optimal transport. In _International Conference on Machine Learning_ , pages 3925–3946. PMLR, 2022. * Chowdhury et al. [2021] S. Chowdhury, D. Miller, and T. Needham. Quantized gromov-wasserstein. 
In _Machine Learning and Knowledge Discovery in Databases. Research Track: European Conference, ECML PKDD 2021, Bilbao, Spain, September 13–17, 2021, Proceedings, Part III 21_ , pages 811–827. Springer, 2021. * Cox and Cox [2000] T. F. Cox and M. Cox. _Multidimensional scaling_. CRC Press, 2000. * Cuturi [2013] M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. _Advances in neural information processing systems_ , 26, 2013. * Cuturi et al. [2022] M. Cuturi, L. Meng-Papaxanthos, Y. Tian, C. Bunne, G. Davis, and O. Teboul. Optimal transport tools (ott): A jax toolbox for all things wasserstein. _arXiv preprint arXiv:2201.12324_ , 2022. * Demetci et al. [2022] P. Demetci, R. Santorella, B. Sandstede, W. S. Noble, and R. Singh. Scot: single-cell multi-omics alignment with optimal transport. _Journal of computational biology_ , 29(1):3–18, 2022. * Deshpande et al. [2022] A. S. Deshpande, N. Ulahannan, M. Pendleton, X. Dai, L. Ly, J. M. Behr, S. Schwenk, W. Liao, M. A. Augello, C. Tyer, et al. Identifying synergistic high-order 3d chromatin conformations from genome-scale nanopore concatemer sequencing. _Nature Biotechnology_ , 40(10):1488–1499, 2022. * Dosovitskiy et al. [2020] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. _arXiv preprint arXiv:2010.11929_ , 2020. * Dupuy et al. [2016] A. Dupuy, A. Galichon, and Y. Sun. Estimating matching affinity matrix under low-rank constraints. _arXiv preprint arXiv:1612.09585_ , 2016. * Eisenberger et al. [2022] M. Eisenberger, A. Toker, L. Leal-Taixé, F. Bernard, and D. Cremers. A unified framework for implicit sinkhorn differentiation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 509–518, 2022. * Emiya et al. [2014] V. Emiya, A. Bonnefoy, L. Daudet, and R. Gribonval. Compressed sensing with unknown sensor permutation. In _2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , pages 1040–1044. IEEE, 2014. * Flamary et al. [2021] R. Flamary, N. Courty, A. Gramfort, M. Z. Alaya, A. Boisbunon, S. Chambon, L. Chapel, A. Corenflos, K. Fatras, N. Fournier, L. Gautheron, N. T. Gayraud, H. Janati, A. Rakotomamonjy, I. Redko, A. Rolet, A. Schutz, V. Seguy, D. J. Sutherland, R. Tavenard, A. Tong, and T. Vayer. Pot: Python optimal transport. _Journal of Machine Learning Research_ , 22(78):1–8, 2021. URL http://jmlr.org/papers/v22/20-451.html. * Genevay et al. [2016] A. Genevay, M. Cuturi, G. Peyré, and F. Bach. Stochastic optimization for large-scale optimal transport. _Advances in neural information processing systems_ , 29, 2016. * Gold and Rangarajan [1995] S. Gold and A. Rangarajan. Softassign versus softmax: Benchmarks in combinatorial optimization. _Advances in neural information processing systems_ , 8, 1995. * Gold and Rangarajan [1996] S. Gold and A. Rangarajan. A graduated assignment algorithm for graph matching. _IEEE Trans. Pattern Analysis and Machine Intelligence_ , 18(4):377––388, 1996. * Gouil and Keniry [2019] Q. Gouil and A. Keniry. Latest techniques to study dna methylation. _Essays in biochemistry_ , 63(6):639–648, 2019. * Grandi et al. [2022] F. C. Grandi, H. Modi, L. Kampman, and M. R. Corces. Chromatin accessibility profiling by atac-seq. _Nature protocols_ , 17(6):1518–1552, 2022. * Gromov et al. [1999] M. Gromov, M. Katz, P. Pansu, and S. Semmes. 
_Metric structures for Riemannian and non-Riemannian spaces_ , volume 152. Springer, 1999. * Guo et al. [2013] H. Guo, P. Zhu, X. Wu, X. Li, L. Wen, and F. Tang. Single-cell methylome landscapes of mouse embryonic stem cells and early embryos analyzed using reduced representation bisulfite sequencing. _Genome research_ , 23(12):2126–2135, 2013. * Halimi et al. [2019] O. Halimi, O. Litany, E. Rodola, A. M. Bronstein, and R. Kimmel. Unsupervised learning of dense shape correspondence. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 4370–4379, 2019. * Holzschuh et al. [2020] B. Holzschuh, Z. Lähner, and D. Cremers. Simulated annealing for 3d shape correspondence. In _2020 International Conference on 3D Vision (3DV)_ , pages 252–260. IEEE, 2020. * Kalofolias et al. [2014] V. Kalofolias, X. Bresson, M. Bronstein, and P. Vandergheynst. Matrix completion on graphs. _arXiv preprint arXiv:1408.1717_ , 2014. * Klein et al. [2015] A. M. Klein, L. Mazutis, I. Akartuna, N. Tallapragada, A. Veres, V. Li, L. Peshkin, D. A. Weitz, and M. W. Kirschner. Droplet barcoding for single-cell transcriptomics applied to embryonic stem cells. _Cell_ , 161(5):1187–1201, 2015. * Klein et al. [2023a] D. Klein, G. Palla, M. Lange, M. Klein, Z. Piran, M. Gander, L. Meng-Papaxanthos, M. Sterr, A. Bastidas-Ponce, M. Tarquis-Medina, H. Lickert, M. Bakhti, M. Nitzan, M. Cuturi, and F. J. Theis. Mapping cells through time and space with moscot. _bioRxiv_ , 2023a. doi: 10.1101/2023.05.11.540374. URL https://www.biorxiv.org/content/early/2023/05/11/2023.05.11.540374. * Klein et al. [2023b] D. Klein, T. Uscidda, F. Theis, and M. Cuturi. Generative entropic neural optimal transport to map within and across spaces. _arXiv preprint arXiv:2310.09254_ , 2023b. * Kukurba and Montgomery [2015] K. R. Kukurba and S. B. Montgomery. Rna sequencing and analysis. _Cold Spring Harbor Protocols_ , 2015(11):pdb–top084970, 2015. * Lee et al. [2020] J. Lee, D. Y. Hyeon, and D. Hwang. Single-cell multiomics: technologies and data analysis methods. _Experimental & Molecular Medicine_, 52(9):1428–1442, 2020. * Litany et al. [2017] O. Litany, T. Remez, E. Rodola, A. Bronstein, and M. Bronstein. Deep functional maps: Structured prediction for dense shape correspondence. In _Proceedings of the IEEE international conference on computer vision_ , pages 5659–5667, 2017. * Luecken et al. [2021] M. Luecken, D. Burkhardt, R. Cannoodt, C. Lance, A. Agrawal, H. Aliee, A. Chen, L. Deconinck, A. Detweiler, A. Granados, S. Huynh, L. Isacco, Y. Kim, D. Klein, B. DE KUMAR, S. Kuppasani, H. Lickert, A. McGeever, J. Melgarejo, H. Mekonen, M. Morri, M. Müller, N. Neff, S. Paul, B. Rieck, K. Schneider, S. Steelman, M. Sterr, D. Treacy, A. Tong, A.-C. Villani, G. Wang, J. Yan, C. Zhang, A. Pisco, S. Krishnaswamy, F. Theis, and J. M. Bloom. A sandbox for prediction and integration of dna, rna, and proteins in single cells. In J. Vanschoren and S. Yeung, editors, _Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks_ , volume 1, 2021. URL https://datasets-benchmarks-proceedings.neurips.cc/paper_files/paper/2021/file/158f3069a435b314a80bdcb024f8e422-Paper-round2.pdf. * Macosko et al. [2015] E. Z. Macosko, A. Basu, R. Satija, J. Nemesh, K. Shekhar, M. Goldman, I. Tirosh, A. R. Bialas, N. Kamitaki, E. M. Martersteck, et al. Highly parallel genome-wide expression profiling of individual cells using nanoliter droplets. _Cell_ , 161(5):1202–1214, 2015. * Maiorca et al. [2024] V. Maiorca, L. 
Moschella, A. Norelli, M. Fumero, F. Locatello, and E. Rodolà. Latent space translation via semantic alignment. _Advances in Neural Information Processing Systems_ , 36, 2024. * Melzi et al. [2019] S. Melzi, J. Ren, E. Rodola, A. Sharma, P. Wonka, and M. Ovsjanikov. Zoomout: Spectral upsampling for efficient shape correspondence. _arXiv preprint arXiv:1904.07865_ , 2019. * Mémoli [2011] F. Mémoli. Gromov–wasserstein distances and the metric approach to object matching. _Foundations of computational mathematics_ , 11:417–487, 2011. * Moschella et al. [2022] L. Moschella, V. Maiorca, M. Fumero, A. Norelli, F. Locatello, and E. Rodolà. Relative representations enable zero-shot latent space communication. _arXiv preprint arXiv:2209.15430_ , 2022. * Navin et al. [2011] N. Navin, J. Kendall, J. Troge, P. Andrews, L. Rodgers, J. McIndoo, K. Cook, A. Stepansky, D. Levy, D. Esposito, et al. Tumour evolution inferred by single-cell sequencing. _Nature_ , 472(7341):90–94, 2011. * Nitzan et al. [2019] M. Nitzan, N. Karaiskos, N. Friedman, and N. Rajewsky. Gene expression cartography. _Nature_ , 576(7785):132–137, 2019. * Ovsjanikov et al. [2012] M. Ovsjanikov, M. Ben-Chen, J. Solomon, A. Butscher, and L. Guibas. Functional maps: a flexible representation of maps between shapes. _ACM Transactions on Graphics (ToG)_ , 31(4):1–11, 2012. * O’Geen et al. [2011] H. O’Geen, L. Echipare, and P. J. Farnham. Using chip-seq technology to generate high-resolution profiles of histone modifications. _Epigenetics Protocols_ , pages 265–286, 2011. * Peyré et al. [2019] G. Peyré, M. Cuturi, et al. Computational optimal transport: With applications to data science. _Foundations and Trends® in Machine Learning_ , 11(5-6):355–607, 2019. * Rao et al. [2021] A. Rao, D. Barkley, G. S. França, and I. Yanai. Exploring tissue architecture using spatial transcriptomics. _Nature_ , 596(7871):211–220, 2021. * Scetbon et al. [2022] M. Scetbon, G. Peyré, and M. Cuturi. Linear-time gromov wasserstein distances using low rank couplings and costs. In _International Conference on Machine Learning_ , pages 19347–19365. PMLR, 2022. * Smallwood et al. [2014] S. A. Smallwood, H. J. Lee, C. Angermueller, F. Krueger, H. Saadeh, J. Peat, S. R. Andrews, O. Stegle, W. Reik, and G. Kelsey. Single-cell genome-wide bisulfite sequencing for assessing epigenetic heterogeneity. _Nature methods_ , 11(8):817–820, 2014. * Solomon et al. [2016] J. Solomon, G. Peyré, V. G. Kim, and S. Sra. Entropic metric alignment for correspondence problems. _ACM Transactions on Graphics (ToG)_ , 35(4):1–13, 2016. * Spielman [2012] D. Spielman. Spectral graph theory. _Combinatorial scientific computing_ , 18:18, 2012. * Tang et al. [2010] F. Tang, C. Barbacioru, S. Bao, C. Lee, E. Nordman, X. Wang, K. Lao, and M. A. Surani. Tracing the derivation of embryonic stem cells from the inner cell mass by single-cell rna-seq analysis. _Cell stem cell_ , 6(5):468–478, 2010. * Tong et al. [2023] A. Tong, N. Malkin, G. Huguet, Y. Zhang, J. Rector-Brooks, K. Fatras, G. Wolf, and Y. Bengio. Improving and generalizing flow-based generative models with minibatch optimal transport. _arXiv preprint arXiv:2302.00482_ , 2023. * Unnikrishnan et al. [2018] J. Unnikrishnan, S. Haghighatshoar, and M. Vetterli. Unlabeled sensing with random linear measurements. _IEEE Transactions on Information Theory_ , 64(5):3237–3253, 2018. * Vayer et al. [2020] T. Vayer, L. Chapel, R. Flamary, R. Tavenard, and N. Courty. Fused gromov-wasserstein distance for structured objects. 
_Algorithms_ , 13(9):212, 2020. * Vestner et al. [2017] M. Vestner, Z. Lähner, A. Boyarski, O. Litany, R. Slossberg, T. Remez, E. Rodola, A. Bronstein, M. Bronstein, R. Kimmel, et al. Efficient deformable shape correspondence via kernel matching. In _2017 international conference on 3D vision (3DV)_ , pages 517–526. IEEE, 2017. * Villar et al. [2016] S. Villar, A. S. Bandeira, A. J. Blumberg, and R. Ward. A polynomial-time relaxation of the gromov-hausdorff distance. _arXiv preprint arXiv:1610.05214_ , 2016. * Zheng et al. [2017] G. X. Zheng, J. M. Terry, P. Belgrader, P. Ryvkin, Z. W. Bent, R. Wilson, S. B. Ziraldo, T. D. Wheeler, G. P. McDermott, J. Zhu, et al. Massively parallel digital transcriptional profiling of single cells. _Nature communications_ , 8(1):14049, 2017. * Zong et al. [2012] C. Zong, S. Lu, A. R. Chapman, and X. S. Xie. Genome-wide detection of single-nucleotide and copy-number variations of a single human cell. _Science_ , 338(6114):1622–1626, 2012. ## Appendix A Appendix ### Sinkhorn algorithm. The Sinkhorn algorithm allows efficient solution of the entropy-regularized linear OT problem of the form $\min_{\mathbf{\Pi}\in U(\mu,\nu)}\langle\mathbf{C},\mathbf{\Pi}\rangle+\epsilon\langle\mathbf{\Pi},\log\mathbf{\Pi}\rangle.$ Defining the kernel matrix $\mathbf{K}=e^{-\mathbf{C}/\epsilon}$ and initializing $\mathbf{u}_{1}=\mathbf{v}_{1}=\mathbf{1}$, the algorithm proceeds with iterating $\mathbf{u}_{k+1}=\frac{\mu}{\mathbf{K}\mathbf{v}_{k}};\hskip 28.45274pt\mathbf{v}_{k+1}=\frac{\nu}{\mathbf{K}{{}^{\top}}\mathbf{u}_{k+1}},$ from which the assignment matrix $\mathbf{\Pi}_{k+1}=\mathrm{diag}(\mathbf{u}_{k+1})\,\mathbf{K}\,\mathrm{diag}(\mathbf{v}_{k+1})$. Here $\mathrm{diag}(\mathbf{u})$ denotes a diagonal matrix with the entries of the vector $\mathbf{u}$ on the diagonal, and exponentiation and division are performed element-wise. The iterations are usually stopped when the change $\|\mathbf{\Pi}_{k+1}-\mathbf{\Pi}_{k}\|$ falls below a pre-defined threshold. ### Entropic GW solver. The entropic GW solver aims at solving the entropy-regularized GW problem $\min_{\mathbf{\Pi}\in U(\mu,\nu)}\|\mathbf{D}_{\mathcal{X}}-\mathbf{\Pi}\mathbf{D}_{\mathcal{Y}}\mathbf{\Pi}{{}^{\top}}\|_{\mathrm{F}}^{2}+\epsilon\langle\mathbf{\Pi},\log\mathbf{\Pi}\rangle.$ Without the entropy term, the problem is a linearly constrained quadratic program, which Gold and Rangarajan [27] proposed to solve as a sequence of linear programs. Applied here, this idea leads to a sequence of entropy- regularized linear OT problems of the form $\mathbf{\Pi}_{k+1}=\mathrm{arg}\min_{\mathbf{\Pi}\in U(\mu,\nu)}\langle\mathbf{C}_{k+1},\mathbf{\Pi}\rangle+\epsilon\langle\mathbf{\Pi},\log\mathbf{\Pi}\rangle,$ with the cost $\mathbf{C}_{k+1}=\mathbf{D}_{\mathcal{X}}\mathbf{\Pi}_{k}\mathbf{D}_{\mathcal{Y}}$ defined using the previous iteration. Each such problem is solved using Sinkhorn inner iterations. ### Barycentric projection. For visualization and comparison purposes, it is often convenient to represent the points from $\mathcal{X}$ and $\mathcal{Y}$ in the same space. Let $\mathbf{X}=(\mathbf{x}_{1},\dots,\mathbf{x}_{N})$ and $\mathbf{Y}=(\mathbf{y}_{1},\dots,\mathbf{y}_{N})$ denote the coordinates of the points in $\mathcal{X}$ and $\mathcal{Y}$, respectively. 
Given the “soft” assignment $\mathbf{\Pi}$ and using $\mathcal{Y}$ as the representation space, we can represent $\mathbf{X}$ in the form of the weighted sum, $\hat{\mathbf{X}}=\mathbf{Y}\mathbf{\Pi}$, so that the representation of a point $\mathbf{x}_{i}$ in $\mathbf{Y}$ becomes [1] $\hat{\mathbf{x}}_{i}=\sum_{j}\pi_{ij}\mathbf{y}_{j}.$ We recall that $\mathbf{\Pi}$ is by definition a stochastic matrix, implying that the weights in the above sum are non-negative and sum to $1$. ### FOSCTTM score. The _fraction of samples closer than the true match_ (FOSCTTM) measures the alignment quality of two equally-sized sets with known ground-truth correspondence. Let $U=\\{\mathbf{u}_{i}\\}$ and $V=\\{\mathbf{v}_{i}\\}$ be two sets of points in a common metric space $\mathcal{Z}$ ordered, without loss of generality, in trivial correspondence order (i.e., every $\mathbf{u}_{i}$ corresponds to $\mathbf{v}_{i}$). Given a point $\mathbf{u}_{i}$, we define the fraction of points in $V$ that are closer to it than the true match $\mathbf{v}_{i}$, $p_{i}=\frac{1}{N}\,\left|\\{j:d_{\mathcal{Z}}(\mathbf{u}_{i},\mathbf{v}_{j})<d_{\mathcal{Z}}(\mathbf{u}_{i},\mathbf{v}_{i})\\}\right|.$ Similarly, we define the fraction of points in $U$ that are closer to $\mathbf{v}_{i}$ than the true match $\mathbf{u}_{i}$, $q_{i}=\frac{1}{N}\,\left|\\{j:d_{\mathcal{Z}}(\mathbf{v}_{i},\mathbf{u}_{j})<d_{\mathcal{Z}}(\mathbf{v}_{i},\mathbf{u}_{i})\\}\right|.$ The FOSCTTM score is defined as $\mathrm{FOSCTTM}=\frac{1}{2N}\sum_{i=1}^{N}(p_{i}+q_{i}).$ The score is normalized to the range $[0,1]$, with perfect alignment having $\mathrm{FOSCTTM}=0$. Figure 5: Qualitative evaluation of the proposed GW solver in the inductive setting. The plot depicts the assignment produced by our distance-based GW solver (Eq. 4) on a new set of samples. Figure 6: Qualitative and quantitative results on the scSNARE-seq dataset. Left and middle: Aligned samples from ATAC and RNA, colored by the domains (ATAC: black, RNA: red) and cell types, respectively. Right: the sorted FOSCTTM plot, a quantitative metric measuring the quality of the assignment.
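For completeness, here is a direct NumPy transcription of the FOSCTTM score defined above. It is a small sketch that assumes the two sets are already ordered so that row $i$ of `U` corresponds to row $i$ of `V`, and that the common metric $d_{\mathcal{Z}}$ is Euclidean.

```python
import numpy as np

def foscttm(U, V):
    """Fraction of samples closer than the true match (0 = perfect alignment).

    U, V: (N, d) arrays in a common space, with U[i] matched to V[i].
    """
    N = U.shape[0]
    D = np.linalg.norm(U[:, None, :] - V[None, :, :], axis=-1)  # D[i, j] = d(u_i, v_j)
    true = np.diag(D)                                           # d(u_i, v_i)
    p = (D < true[:, None]).sum(axis=1) / N   # points of V closer to u_i than v_i
    q = (D < true[None, :]).sum(axis=0) / N   # points of U closer to v_i than u_i
    return 0.5 * (p.mean() + q.mean())
```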
The linear and nonlinear evolutions of the tearing instability in a collisionless plasma with a strong guide field are analyzed on the basis of a two-field Hamiltonian gyrofluid model. The model is valid for a low ion temperature and a finite $\bee$. The finite $\bee$ effect implies a magnetic perturbation along the guide field direction and electron finite Larmor radius effects. A Hamiltonian derivation of the model is presented. A new dispersion relation of the tearing instability is derived for the case $\bee=0$ and tested against numerical simulations. For $\beta_e \ll 1$ the equilibrium electron temperature is seen to enhance the linear growth rate, whereas we observe a stabilizing role when electron finite Larmor radius effects become more relevant. In the nonlinear phase, a double "faster-than-exponential" growth is observed, similarly to what occurs in the presence of ion finite Larmor radius effects. Energy transfers are analyzed and the conservation laws associated with the Casimir invariants of the model are also discussed. Numerical simulations seem to indicate that finite $\beta_e$ effects do not produce qualitative modifications in the structures of the Lagrangian invariants associated with Casimirs of the model.

§ INTRODUCTION

Magnetic reconnection plays a crucial role in a broad range of plasma environments, from laboratory plasma experiments to astrophysical plasmas. It is a fundamental energy conversion process, as a result of which magnetic field energy is converted into kinetic energy and heat. In a reconnection event, the tearing instability is believed to play an important role as an onset mechanism of the process. Considerable progress in the understanding of this mechanism has been achieved through the fluid description of plasmas. The fluid framework is less costly in terms of computational resources, and physically more intuitive, when compared to the kinetic framework. Fluid models, in general, are also more suitable for analytical treatment. In the non-collisional case, some reduced fluid models were designed to retain two-fluid effects (e.g. [Schep et al., 1994, Grasso et al., 1999, Grasso & Tassi, 2015, Del Sarto et al., 2006, Fitzpatrick & Porcelli, 2007]), such as, for instance, electron inertia, which is known to develop a thin current layer where modifications of the topology of the magnetic field lines can occur. These fluid models, on the other hand, neglect the effects of the electron Larmor radius, which makes it impossible to describe phenomena taking place at a microscopic scale comparable to that of the electron thermal gyro-radius. Gyrofluid models are effective tools to fill this gap. Indeed, although obtained by truncating the infinite hierarchy of equations evolving the moments of the gyrokinetic equations, gyrofluid models, unlike fluid models, retain finite Larmor radius (FLR) effects and are thus valid on thermal Larmor radius scales. Also, most of the reduced fluid models neglect the perturbations of the magnetic field along the direction of a guide field, the latter typically corresponding to the mean magnetic field in astrophysical plasmas (e.g. [Schekochihin et al., 2009]) or to an imposed external field in laboratory plasmas. However, even in the case of a strong guide field, such perturbations can be relevant in some nearly collisionless environments such as the solar wind, which motivates their inclusion in an analysis of collisionless reconnection. 
In this work, we make use of a gyrofluid model to study the linear and nonlinear evolution of the tearing instability in a collisionless plasma with strong guide field. This study is based on a two-field gyrofluid model that has been derived from gyrokinetic equations in [Tassi et al., 2020], assuming a quasi-static closure. With respect to the above mentioned reduced fluid models, such gyrofluid model accounts for both finite electron Larmor radius effects and perturbations parallel to the direction of the guide field. The model is taken within the asymptotic cold ion limit, although we present a small set of simulations performed in the limit of hot ions to reflect the differences and possible consequences of this limit. A more in-depth study of the hot ion limit could be done in a subsequent work. Our gyrofluid model is valid for finite $\bee$ values, where $\bee$ is the ratio between the electron pressure and the magnetic pressure based on the guide field. We remark that finite $\beta_e$ effects were taken into account also in the model by [Fitzpatrick & Porcelli, 2004, Fitzpatrick & Porcelli, 2007]. However, in that model, electron FLR effects were neglected. The study of reconnection for a finite $\bee$ can be relevant especially for astrophysical plasmas with large temperatures, such as in the Earth magnetosheath, where some $\beta >1$ values are observed, in the presence of a guide field, during reconnection events ([Man et al., 2020, Eastwood et al., 2018]). We consider magnetic reconnection taking place in a two dimensional $(2D)$ plane, perpendicular to the guide field component. Reconnection is mediated by electron inertia and by electron FLR, which makes the process non-dissipative, unlike reconnection driven by electrical collisional resistivity. As many dissipationless fluid and gyrofluid models, also the gyrofluid model under consideration possesses a Hamiltonian structure, which reveals the presence of two Lagrangian invariants and gives the expression of the conserved total energy of the system. With this we can obtain further information about how $\bee$ can influence the distribution of the different components of the total energy. In the limit $\bee \rightarrow 0$ (in the following also referred to as the "fluid" limit), the model corresponds to the two-field fluid model of [Schep et al., 1994]. This fluid model has long been used to study the tearing instability, and a relevant dispersion relation for the collisionless tearing mode, applicable to this model, has been derived in [Porcelli, 1991]. We present in this article a new analytical formula, valid assuming the constant-psi approximation ([Furth et al., 1963]), which differs from the relation of [Porcelli, 1991], taken in the limit where the tearing stability parameter $\Delta '$ is small, by the presence of a small corrective term. These two formulas are tested against numerical simulations and, in its regime of validity, our new relation shows a better agreement with the numerical growth rate. We studied numerically the effect of a finite $\bee$ in the linear and nonlinear phase of the tearing instability. For the linear phase, we first isolate the effect of varying $\beta_e$ by keeping fixed all the other parameters of the system. In this setting we observe a stabilizing role of the $\beta_e$ parameter. The stabilizing effect is then seen to be reduced when increasing the normalized electron skin depth $d_e$. 
A partial justification of this behavior can be given analytically considering the small FLR limit of the model. We remark that varying $\beta_e$ with fixed $d_e$ and $\rho_s$ amounts to varying the normalized thermal electron Larmor radius $\rho_e$ at fixed $\rho_s$. Subsequently, we consider the effect of varying $\beta_e$ while keeping a fixed mass ratio. The previously mentioned stabilizing role of $\beta_e$ is then concomitant with the destabilizing role of the normalized sonic Larmor radius $\rho_s$. The growth rate is thus evaluated for different values of the parameters $d_e$, $\rs$ and $\rho_e$. These parameters are associated with different physical scales and are absent in the usual reduced magnetohydrodynamics (MHD) description. The results we find turn out to be in agreement with those of [Numata et al., 2011] and of [Numata & Loureiro, 2015], which were obtained with a gyrokinetic model. In the nonlinear phase, we find the explosive growth rate ([Aydemir, 1992]) which has been obtained as well in the gyrofluid study of [Biancalani & Scott, 2012] that was considering low $\bee$ and ion FLR but no electron FLR effects. We investigate how the effects of $\bee$ affects this faster than exponential growth. The reconnection process described by Hamiltonian reduced fluid and gyrofluid models has been analyzed in terms of Lagrangian invariants in several cases in the past ([Cafaro et al., 1998, Grasso et al., 2001, Grasso et al., 2010, Comisso et al., 2013, Grasso & Tassi, 2015]). The effect of both electron FLR effects and parallel magnetic perturbations on the structure of such invariants has not been studied so far, though. In this paper we present the behavior of the two topological invariants of the system. They extend the Lagrangian invariants of simpler models that do not account for $\bee$ effects and behave similarly. The paper is organized as follows. In Sec. <ref> we derive the gyrofluid model adopted for the analysis. The procedure we follow for the derivation automatically provides the Hamiltonian structure of the model. Section <ref> contains a review of the linear theory and a new dispersion relation for the case $\bee=0$. We also present the results of numerical simulations in the linear phase, for arbitrary $\bee$. In Sec. <ref> the results obtained in the non-linear phase are presented and the gyrofluid version is compared to the fluid version. In this Section, we also study the impact of a finite $\bee$ on the evolution of the different energy components. Section <ref> presents the conservation laws and the evolution of the Lagrangian invariants of the model. In the Appendix we present the derivation of the new dispersion relation, which is based on the asymptotic matching theory. 
§ THE GYROFLUID MODEL We begin by considering the model given by the evolution equations \begin{align} & \frac{\pa N_i}{\pa t}+[\gamui \phi + \taupi \rspe^2 2 \gamdi \bpar , N_i]-[\gamui \apar , U_i]=0, \label{conti}\\ & \frac{\pa}{\pa t}(U_i + \gamui \apar) + [\gamui \phi + \taupi \rspe^2 2 \gamdi \bpar , U_i + \gamui \apar]-\frac{\taupi \rspe^2}{\Theta_i} [ \gamui \apar , N_i]=0, \label{momi}\\ &\frac{\pa N_e}{\pa t}+[\gamue \phi - \rspe^2 2 \gamde \bpar , N_e]- [\gamue \apar , U_e]=0, \label{conte}\\ &\frac{\pa}{\pa t}(\gamue \apar - d_e^2 U_e)+[\gamue \phi - \rspe^2 2 \gamde \bpar , \gamue \apar - d_e^2 U_e]+\frac{\rspe^2}{\Theta_e}[\gamue \apar ,N_e]=0, \label{mome} \end{align} complemented by the static relations \begin{align} & \gamui N_i - \gamue N_e + (1-\Theta_i)\gammzi \frac{\phi}{\taupi \rspe^2} + (1-\Theta_e)\gammze \frac{\phi}{ \rspe^2} + (\Theta_i\gamui ^2 -1)\frac{\phi}{\taupi \rspe^2}\nno \\ & + (\Theta_e\gamue^2 -1) \frac{\phi}{\rspe^2} + (\Theta_i\gamui 2 \gamdi -\Theta_e\gamue 2 \gamde)\bpar \nno \\ & + ((1- \Theta_i) (\gammzi - \gammui) - (1 - \Theta_e) (\gammze - \gammue))\bpar =0, \label{qn4f}\\ & \lapp \apar = \left( \left(1 - \frac{1}{\Theta_e}\right)(1 - \gammze)\frac{1}{d_e^2} + \left(1 - \frac{1}{\Theta_i}\right)(1 - \gammzi)\frac{1}{d_i^2} \right)\apar \nno \\ & + \gamue U_e - \gamui U_i, \label{amppar4f}\\ & \bpar =-\frac{\bepe}{2}\left(\taupi 2 \gamdi N_i + 2 \gamde N_e + (1-\Theta_i)( \gammzi- \gammui) \frac{\phi}{ \rspe^2} \right. \nno \\ & - (1-\Theta_e)( \gammze- \gammue) \frac{\phi}{ \rspe^2} + \Theta_i \gamui 2 \gamdi \frac{\phi}{\rspe^2} - \Theta_e \gamue 2 \gamde \frac{\phi}{\rspe^2} + \Theta_i \taupi 4 \gamdi^2 \bpar \nno \\ & \left. + \Theta_e 4 \gamde^2 \bpar + \taupi 2(1- \Theta_i)( \gammzi- \gammui)\bpar + 2(1- \Theta_e)( \gammze- \gammue)\bpar \right) \label{ampperp4f} \end{align} Equations (<ref>) and (<ref>) correspond to the ion and electron gyrocenter continuity equations, respectively, whereas Eqs. (<ref>) and (<ref>) refer to the ion and electron momentum conservation laws, along the guide field direction. The static relations (<ref>), (<ref>) and (<ref>) descend from quasi-neutrality and from the projections of Ampère's law along directions parallel and perpendicular to the guide field, respectively. The system (<ref>)-(<ref>), although written with a different normalization, consists to the Hamiltonian four-field model derived by [Tassi et al., 2020], taken in the 2D limit (assuming that all the independent variables do not vary along the direction of the guide field). This model has been derived by considering a quasi-static closure which fixes all the moments, except for the gyrocenter density and velocity parallel to the guide field, for both species. Strictly speaking, the derivation of the quasi-static closure, followed by [Tassi et al., 2020], does not hold in the purely 2D case, which we consider then as a limit of the 3D case as the component of the wave-vector of the perturbation along the guide field, goes to zero. We recall that the quasi-static closure adopted by [Tassi et al., 2020] is valid in 3D, when, for each particle species, the phase velocity of the fluctuations along the guide field direction is much less than the thermal speed based on the parallel equilibrium temperature of the corresponding species. The model is formulated in a slab geometry adopting a Cartesian coordinate system $(x,y,z)$. 
We indicated with $N_s$ and $U_s$ the fluctuations of the gyrocenter densities and velocities parallel to the guide field, respectively, for the species $s$, with $s=e$ for electrons and $s=i$ for ions. The symbols $\apar, \bpar$ and $\phi$, on the other hand, corresponds to the fluctuations of the $z$ component of the magnetic vector potential, to the parallel magnetic perturbations and to the fluctuations of the electrostatic potential, respectively. The fields $N_{e,i}, U_{e,i}, \apar, \bpar$ and $\phi$ depend on the time variable $t$ and on the spatial coordinates $x$ and $y$, which belong to the domain $\mathcal{D}=\{-L_x \leq x \leq L_x \, , \, -L_y \leq y \leq L_y \}$, with $L_x$ and $L_y$ positive constants. Periodic boundary conditions are imposed on the domain $\mathcal{D}$. The operator $[ \, , \, ]$ is the canonical Poisson bracket and is defined by $[f,g]=\partial_x f \partial_y g - \partial_y f \partial_x g$, for two functions $f$ and $g$. We write the normalized magnetic field in the form \begin{equation} \label{magfield} \mathbf{B}(x,y,z,t) \approx \hat{z}+ \frac{\hat{d}_i}{L}\bpar(x,y,z,t)\mathbf{z} + \nabla \apar (x,y,z,t)\times \hat{z}, \end{equation} with $\hat{z}$ indicating the unit vector along the $z$ direction, with $L$ a characteristic equilibrium scale length, and with $\hat{d}_i=c\sqrt{m_i/(4 \pi e^2 n_0)}$ the ion skin depth. We denote by $m_i$ the ion mass, by $e$ the proton charge, by $c$ the speed of light and $n_0$ the equilibrium density (equal for ions and electrons). The first term on the right-hand side of (<ref>) accounts for the strong guide field. In Eq. (<ref>) only up to the first order terms in the fluctuations are shown, and the higher-order contributions, which guarantee $\nabla \cdot \mathbf{B}=0$, are neglected. The normalization of the variables used in Eqs. (<ref>)-(<ref>) is the following: \begin{align} & t=\frac{v_A}{L}\hat{t}, \qquad x=\frac{\hat{x}}{L}, \qquad y=\frac{\hat{y}}{L}, \\ & N_{e,i}=\frac{L}{\hat{d}_i}\frac{\hat{N}_{e,i}}{n_0}, \qquad U_{e,i}=\frac{L}{\hat{d}_i}\frac{\hat{U}_{e,i}}{v_A},\\ & \apar=\frac{\hat{A}_\parallel}{L B_0}, \qquad \bpar=\frac{L}{\hat{d}_i}\frac{\hat{B}_\parallel}{B_0}, \qquad \phi=\frac{c}{v_A} \frac{\hat{\phi}}{L B_0}, \end{align} where the hat indicates dimensional quantities, $B_0$ is the amplitude of the guide field and $v_A=B_0/\sqrt{4 \pi m_i n_0}$ is the Alfvén speed. Independent parameters in the model are $\bepe$, $\taupi$, $\rspe$, $\Theta_e$, $\Theta_i$ and $d_e$, corresponding to the ratio between equilibrium electron pressure and magnetic guide field pressure, to the ratio between equilibrium perpendicular ion and electron temperatures, to the normalized sonic Larmor radius, to the ratio between the equilibrium perpendicular and parallel temperature for electrons and ions and to the normalized perpendicular electron skin depth, respectively. These parameters are defined as \begin{align} & \bepe=8 \pi \frac{n_0 \Tpee}{B_0^2}, \qquad \taupi=\frac{\Tpei}{\Tpee}, \qquad \rspe=\frac{1}{L}\sqrt{\frac{\Tpee}{m_i}}\frac{m_i c}{e B_0}, \\ & \Theta_e= \frac{\Tpee}{\Tpae},\qquad \Theta_i= \frac{\Tpei}{\Tpai}, \qquad d_e=\frac{1}{L}c \sqrt{\frac{m_e}{4 \pi e^2 n_0}}, \end{align} where $\Tpea$ and $\Tpa$ are the perpendicular and parallel equilibrium temperatures for the species $s$, respectively, and $m_e$ is the electron mass. Note that $\rspe/\sqrt{\bepe / 2}=d_i$, where $d_i=\hat{d}_i /L$ is the normalized ion skin depth. 
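As a small worked example of how these normalized parameters are tied together (the input values below are arbitrary and chosen only for illustration, not taken from the simulations discussed later), one can obtain the derived quantities from a chosen triplet $(\bee, d_e, \rspe)$:

```python
import numpy as np

# Illustrative values only (not from the runs presented in this paper).
beta_e = 0.2    # ratio of electron pressure to guide-field magnetic pressure
d_e    = 0.1    # normalized perpendicular electron skin depth
rho_s  = 0.3    # normalized sonic Larmor radius

d_i   = rho_s / np.sqrt(beta_e / 2.0)   # normalized ion skin depth, since rho_s / sqrt(beta_e/2) = d_i
rho_e = d_e * np.sqrt(beta_e / 2.0)     # normalized thermal electron Larmor radius
mass_ratio = (d_e / d_i) ** 2           # m_e / m_i implied by the chosen d_e and d_i

print(f"d_i = {d_i:.3f}, rho_e = {rho_e:.4f}, m_e/m_i = {mass_ratio:.5f}")
```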
Electron and ion gyroaverage operators are associated with corresponding Fourier multipliers in the following way: \begin{align} &\gamue=2\gamde \rightarrow \mathrm{e}^{-\kpe \frac{\bepe}{4}d_e^2}, \label{op1}\\ &\gamui=2\gamdi \rightarrow \mathrm{e}^{-\kpe \frac{\taupi}{2}\rspe^2}, \label{op2} \end{align} while the operators $\gammze$, $\gammue$, $\gammzi$ and $\gammui$ correspond to the Fourier multipliers \begin{align} &\gammze \rightarrow I_0\left(\kpe \frac{\bepe}{2}d_e^2\right) \mathrm{e}^{-\kpe \frac{\bepe}{2}d_e^2} , \qquad \gammue \rightarrow I_1\left(\kpe \frac{\bepe}{2}d_e^2\right) \mathrm{e}^{-\kpe \frac{\bepe}{2}d_e^2}, \nonumber\\ &\gammzi \rightarrow I_0\left(\kpe \taupi \rspe^2 \right) \mathrm{e}^{-\kpe \taupi \rspe^2} , \qquad \gammui \rightarrow I_1\left(\kpe \taupi \rspe^2\right) \mathrm{e}^{-\kpe \taupi \rspe^2}, \end{align} where $I_n$ are the modified Bessel functions of order $n$ and $ \kpe = \sqrt{k_{x}^{2}+ k_{y}^{2}} $ is the perpendicular wave number. For the range of parameters adopted in our analysis, the gyroaverage operators $\gamue$ and $\gamui$, corresponding to those introduced by [Brizard, 1992], are shown to be adequate. Nevertheless, different gyroaverage operators, described in the papers [Dorland & Hammett, 1993], [Mandell et al., 2018], have proven to provide a very good agreement with the linear kinetic theory for a wider range of scales and are widespread in gyrofluid numerical codes. We define the dynamical variables \begin{align} A_i=\gamui \apar + d_i^2 U_i, \qquad A_e=\gamue \apar - d_e^2 U_e. \end{align} The fields $A_i$ and $A_e$ are proportional to the parallel canonical fluid momenta, based on gyroaveraged magnetic potentials. The two static relations (<ref>) and (<ref>) can be seen, in Fourier space, as an inhomogeneous linear system with the Fourier coefficients of $\phi$ and $\bpar$ as unknowns, for given $N_{i,e}$. From the solution of this system, one can express the fields $\phi$ and $\bpar$ in terms of $N_i$ and $N_e$, by means of relations of the form \begin{align} \bpar=\call_B (N_i, N_e), \qquad \phi=\call_\phi(N_i , N_e), \end{align} where $\call_B$ and $\call_\phi$ are linear operators, the explicit form of which can easily be provided in Fourier space. Similarly, using the relations (<ref>) and (<ref>), one can express $U_e$ and $U_i$ in the form \begin{align} U_e=\lue (A_i , A_e), \qquad U_i=\lui (A_i , A_e), \end{align} where $\lue$ and $\lui$ are also linear operators. The model (<ref>)-(<ref>) can be formulated as an infinite dimensional Hamiltonian system, adopting as dynamical variables the four fields $N_i$, $N_e$, $A_i$ and $A_e$ [Tassi et al., 2020]. The corresponding Hamiltonian structure consists of the Hamiltonian functional \begin{align} &H(N_i , N_e , A_i ,A_e )=\frac{1}{2} \int d^2 x \, \left( \frac{\taupi \rspe^2}{\Theta_i} N_i^2 + \frac{ \rspe^2}{\Theta_e} N_e^2 + A_i \lui (A_i , A_e)\right. \nno\\ & \left. - A_e \lue (A_i , A_e) + N_i (\gamui \call_\phi (N_i , N_e) + \taupi \rspe^2 2 \gamdi \call_B (N_i , N_e) ) \right. \nno\\ & \left. - N_e ( \gamue \call_\phi (N_i , N_e) - \rspe^2 2 \gamde \call_B (N_i , N_e))\right), \label{ham4f} \end{align} and of the Poisson bracket \begin{align} &\{ F , G\}= -\int d^2 x \left( N_i \left([F_{N_i} , G_{N_i}]+\taupi \frac{2}{\bepe}\frac{\rspe^4}{\Theta_i} [F_{A_i}, G_{A_i}]\right) \right. \nno\\ &\left.+ A_i \left([F_{A_i} , G_{N_i}]+[F_{N_i} , G_{A_i}]\right) -N_e([F_{N_e} , G_{N_e}] + d_e^2 \frac{\rspe^2}{\Theta_e} [F_{A_e} , G_{A_e}]) \right. \nno\\ & \left.-A_e([F_{A_e} , G_{N_e}] + [F_{N_e} , G_{A_e}])\right), \label{pb4f} \end{align} where subscripts on functionals indicate functional derivatives, so that, for instance, $F_{N_i}=\delta F / \delta N_i$. Using the Hamiltonian (<ref>) and the Poisson bracket (<ref>), the four equations (<ref>)-(<ref>) can be obtained from the Hamiltonian form [Morrison, 1998] \begin{equation} \frac{\pa \chi}{\pa t}=\{\chi, H \}, \end{equation} replacing $\chi$ with $N_i$, $N_e$, $A_i$ and $A_e$. 
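Since the gyroaverage operators introduced above act as simple Fourier multipliers, applying them in the doubly periodic domain $\mathcal{D}$ amounts to a multiplication in $k$-space. The snippet below is an illustrative sketch (not the solver used in this work); it assumes the standard multiplier $\mathrm{e}^{-k_\perp^2\rho_e^2/2}$ for the electron gyroaverage, with $\rho_e^2=(\bepe/2)\,d_e^2$; the ion operator is obtained analogously with $\rho_e^2$ replaced by $\taupi\rspe^2$.

```python
import numpy as np

def gyroaverage_e(f, Lx, Ly, beta_e, d_e):
    """Apply the electron gyroaverage as a Fourier multiplier on a periodic grid.

    f is sampled on the box -Lx <= x <= Lx, -Ly <= y <= Ly (shape nx x ny).
    Assumes the multiplier exp(-k_perp^2 rho_e^2 / 2) with rho_e^2 = (beta_e/2) d_e^2.
    """
    nx, ny = f.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=2.0 * Lx / nx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=2.0 * Ly / ny)
    kperp2 = kx[:, None] ** 2 + ky[None, :] ** 2
    rho_e2 = 0.5 * beta_e * d_e ** 2
    return np.real(np.fft.ifft2(np.exp(-0.5 * kperp2 * rho_e2) * np.fft.fft2(f)))
```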
This Hamiltonian four-field gyrofluid model, although greatly simplified with respect to the original gyrokinetic system, is still amenable to a further reduction, concerning in particular the ion dynamics which, for the analysis of reconnection of interest here, was shown not to be crucially relevant ([Comisso et al., 2013], [Numata et al., 2011]). Also, we carry out most of the analysis in the isotropic cold-ion limit, a simplifying assumption which is also helpful for the comparison with previous works. Nevertheless, some comments will be provided also with regard to the opposite limit of equilibrium ion temperature much larger than the electron one. On the other hand, in carrying out the reduction procedure, we find it important to preserve a Hamiltonian structure, which avoids the introduction of uncontrolled dissipation in the system and also allows for a more direct comparison with previous Hamiltonian models for reconnection, in particular with the two-field model considered by [Cafaro et al., 1998], [Grasso et al., 2001], [Del Sarto et al., 2006], [Del Sarto et al., 2003]. In particular, we intend to obtain a Hamiltonian reduced version of the four-field model (<ref>)-(<ref>), in which the gyrocenter ion density fluctuations $N_i$ and ion gyrocenter parallel velocity fluctuations $U_i$ are neglected, the ion equilibrium temperature is isotropic, and ions are taken to be cold. The latter four conditions amount to imposing \begin{equation} N_i=0, \qquad U_i=0, \qquad \Theta_i=1, \end{equation} and to taking the limit \begin{equation} \taupi \rightarrow 0. \end{equation} Because we want to perform this reduction while preserving a Hamiltonian structure, we apply the conditions (<ref>) and (<ref>) at the level of the Hamiltonian structure, instead of applying them directly to the equations of motion. The latter procedure would indeed produce no information about the Hamiltonian structure of the resulting model. As a first step, we impose the conditions (<ref>)-(<ref>) in the static relations (<ref>)-(<ref>), which leads to \begin{align} &\left( \frac{(1-\Theta_e)}{\rspe^2}\gammze+ \frac{(\Theta_e\gamue^2-1)}{\rspe^2} + \lapp\right) \phi \nno \\ & -\left(\Theta_e \gamue 2 \gamde-1+(1 -\Theta_e) (\gammze - \gammue) \right)\bpar =\gamue N_e, \label{qncond}\\ &\left(\left(1-\frac{1}{\Theta_e}\right)\frac{(\gammze-1)}{d_e^2} + \lapp\right)\apar= \gamue U_e, \label{ampparcond}\\ &\left(\Theta_e \gamue 2 \gamde+ (1-\Theta_e)(\gammze-\gammue)-1\right)\frac{\phi}{\rspe^2} \nno \\ &-\left(\frac{2}{\bepe}+ 2(1-\Theta_e)(\gammze-\gammue)+ 4\Theta_e \gamde^2\right)\bpar=2 \gamde N_e. \label{ampperpcond} \end{align} The three relations (<ref>)-(<ref>), together with the definition of $A_e$ in Eq. (<ref>), make it possible to express $\bpar$, $\phi$ and $U_e$, in terms of the two dynamical variables $N_e$ and $A_e$, according to \begin{align} \bpar=\calbo N_e , \qquad \phi=\calphio N_e , \qquad U_e=\calueo A_e, \end{align} where $\calbo$, $\calphio$ and $\calueo$ are linear symmetric operators. As the next step, we impose the conditions (<ref>)-(<ref>) on the Hamiltonian (<ref>), which reduces the Hamiltonian to the following functional of the only two dynamical variables $N_e$ and $A_e$: \begin{align} &H(N_e ,A_e )=\frac{1}{2} \int d^2 x \, \left( \frac{\rspe^2}{\Theta_e} N_e^2 - A_e \calueo A_e - N_e ( \gamue \calphio N_e - \rspe^2 2 \gamde \calbo N_e)\right). \label{ham2f} \end{align} With regard to the Poisson bracket (<ref>), we can consider its limit as $\taupi \rightarrow 0$, given that the bilinear form (<ref>) is a valid Poisson bracket for any value of $\taupi$. 
On the other hand, in general, we cannot impose directly the conditions (<ref>) in the bracket, as this operation does not guarantee that the resulting bilinear form satisfies the Jacobi identity. However, we remark that the set of functionals of the two dynamical variables $N_e$ and $A_e$, which the reduced Hamiltonian (<ref>) belongs to, forms a sub-algebra of the algebra of functionals of $N_i$, $N_e$, $A_i$ and $A_e$, with respect to the Poisson bracket (<ref>). Indeed, if $F$ and $G$ are two functionals of $N_e$ and $A_e$ only, $\{F , G \}$ is again a functional of $N_e$ and $A_e$ only. One can in particular restrict to the part of the bracket (<ref>) involving functional derivatives only with respect to $N_e$ and $A_e$, the other terms yielding vanishing contributions when evaluated on functionals of $N_e$ and $A_e$ only. The resulting Poisson bracket therefore reads \begin{align} &\{ F , G\}= \int d^2 x \left( N_e([F_{N_e} , G_{N_e}] + d_e^2\frac{\rspe^2}{\Theta_e} [F_{A_e} , G_{A_e}]) +A_e([F_{A_e} , G_{N_e}] + [F_{N_e} , G_{A_e}])\right). \label{pb2f} \end{align} We remark that the Poisson bracket (<ref>) has the same form as that of the model investigated by [Cafaro et al., 1998] and by Grasso et al., 2001. The resulting reduced two-field model, accounting for the conditions (<ref>)-(<ref>), can then be obtained from the Hamiltonian (<ref>) and the Poisson bracket (<ref>). The corresponding evolution equations read \begin{align} &\frac{\pa N_e}{\pa t}+[\gamue \phi - \rspe^2 2 \gamde \bpar , N_e]- [\gamue \apar , U_e]=0, \label{conte2}\\ &\frac{\pa A_e}{\pa t}+[\gamue \phi - \rspe^2 2 \gamde \bpar , A_e]+\frac{\rspe^2}{\Theta_e}[\gamue \apar ,N_e]=0, \label{mome2} \end{align} where $\bpar$, $\phi$ and $U_e$ are related to $N_e$ and $A_e$ by means of Eqs. (<ref>) and (<ref>)-(<ref>). We impose now electron temperature isotropy (i.e. setting $T_{0 \perp e} =T_{0 \parallel e}=T_{0e}$, corresponding to $\Theta_e=1$) and the evolution equations are reduced to \begin{align} &\frac{\pa N_e}{\pa t}+[\gamue \phi - \rs^2 2 \gamde \bpar , N_e]- [\gamue \apar , U_e]=0, \label{conteiso}\\ &\frac{\pa A_e}{\pa t}+[\gamue \phi - \rs^2 2 \gamde \bpar , A_e]+\rs^2[\gamue \apar ,N_e]=0, \label{momeiso} \end{align} complemented by the equations \begin{align} &\left( \frac{ \gamue^2 -1}{\rs^2}+\lapp\right) \phi-\left( \gamue 2 \gamde -1\right)\bpar =\gamue N_e, \label{qncondiso}\\ &\lapp\apar=\gamue U_e, \label{ampparcondiso}\\ &\left( \gamue 2 \gamde-1\right)\frac{\phi}{\rs^2} -\left(\frac{2}{\bee}+ 4 \gamde^2\right)\bpar=2 \gamde N_e. \label{ampperpcondiso} \end{align} Eqs. (<ref>), (<ref>) and (<ref>)-(<ref>) correspond to the gyrofluid model adopted for the subsequent analysis of magnetic reconnection. § LINEAR PHASE §.§ Linear theory for $\bee \rightarrow 0$ In this Subsection we focus on the regime for which the electron FLR effects and the parallel magnetic perturbations are negligible. The limit of vanishing thermal electron Larmor radius, i.e. $\rho_e=d_e \sqrt{\bee /2} \rightarrow 0$, is adopted by considering $\bee \rightarrow 0$ and a fixed $d_e$. This limit enables to reduce the gyrofluid model (<ref>)-(<ref>) to the fluid model of [Schep et al., 1994, Cafaro et al., 1998], for which the analytical study of the tearing instability has been extensively studied in the past ([Porcelli, 1991, Grasso et al., 2001, Grasso et al., 1999]). 
When assuming $\bee \rightarrow 0$ for a fixed $d_e$, the gyroaverage operators can be approximated in Fourier space in the following way: \begin{align} &\gamue f(x,y) = \left(1 +\frac{\rho_e^2}{2} \lapp \right) f(x,y) + O(\rho_e^4), \\ &\gamde f(x,y) = \frac{1}{2}\left( 1 + \frac{\rho_e^2}{2} \lapp \right) f(x,y) + O(\rho_e^4). \end{align} Using this expansion in Eqs. (<ref>)-(<ref>) and neglecting the first order correction, we obtain the evolution equations ([Schep et al., 1994]) \begin{equation} \label{fluid1} \frac{\partial \lapp \phi}{\partial t} + [\phi, \lapp \phi] - [ \apar, \lapp \apar] = 0, \end{equation} \begin{equation} \label{fluid2} \begin{split} \frac{\partial}{\partial t} \left( \apar - d_e^2 \lapp \apar\right) + \left[\phi , \apar - d_e^2 \lapp \apar\right] - \rho_s^2[\lapp \phi, \apar] =0. \end{split} \end{equation} We assume an equilibrium given by \begin{equation} \phi^{(0)}(x) = 0, \qquad \apar^{(0)} (x)= \frac{\lambda}{\cosh^2 ( x/\lambda )}, \end{equation} where $\lambda$ is a parameter that stretches the equilibrium scale length and modifies the equilibrium amplitude. We consider the perturbations \begin{align} &\apar^{(1)} (x,y,t) = \tilde{A}(x)\, \mathrm{e}^{\gamma t +i k_y y} + \bar{\tilde{A}}(x)\, \mathrm{e}^{\gamma t -i k_y y}, \\ &\phi^{(1)}(x,y,t) = \tilde{\phi}(x)\, \mathrm{e}^{\gamma t +i k_y y} + \bar{\tilde{\phi}}(x)\, \mathrm{e}^{\gamma t -i k_y y}, \end{align} where $\gamma$ is the growth rate of the instability, $k_y = \pi m/ L_y$ is the wave number, with $m \in \mathbb{N}$, and the overbar refers to the complex conjugate. The collisionless tearing mode has been studied in [Porcelli, 1991] for the $m=1$ mode in toroidal geometry and the results can be adapted to the model (<ref>)-(<ref>). In particular, a dispersion relation has been obtained analytically and is valid for small and large values of the tearing stability parameter $\Delta'$, with \begin{equation} \label{Delta'} \Delta' = \lim_{x \rightarrow 0^{+}} \frac{\wapar_{out}'}{\wapar_{out}} - \lim_{x \rightarrow 0^{-}} \frac{ \wapar_{out}'}{\wapar_{out}}, \end{equation} where $\tilde{A}_{out}$ is the solution for $\tilde{A}$ of the linearized system in the outer region (see also the Appendix). The tearing index, $\Delta'$, is a common measure of the discontinuity of the logarithmic derivative of $\wapar_{out}$ at the resonant surface. The dispersion relation is given by ([Porcelli, 1991], [Fitzpatrick, 2010]) \begin{equation} \frac{\pi}{2}\left(\frac{\gamma }{2 k_y}\right)^2 = - \rho_s \frac{\pi}{\Delta'} + \rho_s^2 d_e \frac{2 k_y}{\gamma}. \label{dispPorcelli} \end{equation} In the limit $d_e^{2/3} \rho_s^{1/3} \Delta'\ll 1$, the relation (<ref>) is reduced to \begin{equation} \gamma= \frac{2 k_y d_e \rho_s}{\pi} \Delta'. \end{equation} In the Appendix of this paper, we present the derivation of a new dispersion relation valid in the limit $ (\gamma d_e/ (k_y \rho_s)) \Delta' \ll 1$. In the appropriate regime of validity, the new dispersion relation includes a corrective term to Eq. (<ref>). We derived this dispersion relation using an asymptotic matching method and various assumptions, slightly different from those adopted by [Porcelli, 1991].

Table 1: Summary of the assumptions used in the derivation.
No. 1: Slow time variation of the perturbation, $\frac{\gamma}{k_y}\ll 1$.
No. 2: Smallness of the inner scales, $\frac{\gamma d_e}{k_y \rho_s} \ll \rho_s \ll 1$.
No. 3: Use of the constant-$\psi$ approximation, $\frac{\gamma d_e}{k_y \rho_s}\Delta' \ll 1$.
No. 4: Neglect of FLR effects in the inner region, $ \rho_e \ll \frac{\gamma d_e}{k_y \rho_s}$.

Table 1 gives a review of the assumptions that were adopted on the parameters during our analysis. Assumption No. 1 indicates a slow time variation of the perturbation. No. 2 is the assumption on the scales of the inner region, where electron inertia becomes important and allows the breaking of the frozen flux condition. 
Assumption No. 3 allows the use of the so-called constant-$\psi$ approximation, implying that the dispersion relation is valid for large wave numbers ([Furth et al., 1963]). Condition No. 4, imposed to neglect electron FLR effects, is satisfied for a low-$\bee$ plasma. From a technical point of view, our new dispersion relation is obtained by solving the equations in the inner layer in real space, unlike in [Porcelli, 1991], where the corresponding equations are transformed and solved in Fourier space. The result of our linear theory, which is described in more detail in the Appendix, is given by the dispersion relation \begin{equation} \gamma= \frac{2 k_y d_e \rho_s}{\pi\lambda} \Delta' + \gamma^2 \frac{\pi \lambda d_e}{4 k_y \rho_s^2}. \end{equation} The first term on the right-hand side of (<ref>) is exactly that of the formula (<ref>), for $\lambda=1$. In the parameter regime indicated by Table 1, the second term in (<ref>) is a small term that provides a correction to the formula (<ref>). A solution of the dispersion relation (<ref>), considered in the regime identified by the assumptions of Table 1, is \begin{equation} \gamma_u = 2 k_y \left( \frac{\rho_s ^2}{\pi d_e \lambda}-\frac{\rho_s^{3/2}\sqrt{\rho_s - 2 d_e^2 \Delta'}}{\pi d_e \lambda} \right), \end{equation} and is real for $ \rho_s > 2 d_e^2 \Delta'$. This new dispersion relation is tested against numerical simulations and compared to the expression (<ref>). The numerical solver is pseudo-spectral and is based on a third-order Adams-Bashforth scheme. The scheme uses numerical filters acting on typical length scales much smaller than the physical scales of the system ([Lele, 1992]). The instability is triggered by perturbing the equilibrium with a disturbance of the parallel electron gyrocenter velocity field. Because of the requirement of periodic boundary conditions, the equilibrium (<ref>) is approximated by \begin{equation} \apar^{(0)} (x)=\sum^{30}_{n=-30} a_n \mathrm{e}^{i n x}, \end{equation} where $a_n$ are the Fourier coefficients of the function $f(x)= \lambda/\cosh \left(\frac{x}{\lambda} \right)^2$ ([Grasso et al., 2006]). The numerical growth rate is determined by the formula \begin{equation} \gamma_N = \frac{d}{dt} \log\left| \apar^{(1)} \left(\frac{\pi}{2},0,t \right) \right|, \end{equation} so that $A_\parallel^{(1)}$ is evaluated at the $X$-point, where reconnection takes place.

Figure 1: Comparison between the analytical growth rate $\gamma_u$ obtained from the new formula (<ref>) (dashed line), the analytical growth rate obtained from the formula (<ref>) (solid line) and the numerical growth rate $\gamma_N$ defined in Eq. (<ref>) (circles). The parameters are $d_e=0.1$, $\lambda=1$, $\Delta'=0.72$, $m=1$. The box size is given by $- 10\pi < x < 10\pi$, $- 0.48\pi< y < 0.48\pi$. The values of the parameters lie in the regime of validity of the new formula (<ref>). One can see that, for different values of $\rho_s$, the correction present in Eq. (<ref>) yields a better agreement with the numerical values.

Figure 2: Additional tests, analogous to those of Fig. 1, but with the Harris sheet equilibrium $\apar^{(0)} (x)= - \lambda \ln \cosh (x/\lambda)$ and $\phi^{(0)}(x) = 0$, for which $\Delta' = \frac{2}{\lambda}\left( \frac{1}{k_y \lambda} - k_y \lambda\right)$, using the mode $m=1$. The parameters are $d_e=0.2$ and $\lambda=3$. The box size is $- 10\pi < x < 10\pi$, $- 4\pi< y < 4\pi$. For this case, $\Delta'=0.38$. For this equilibrium the dispersion relation determining $\gamma_u$ corresponds to Eq. (<ref>) with the right-hand side multiplied by a factor $1/2$. Symbols are the same as in Fig. 1. Also in this case, the new formula (<ref>) yields a better agreement with the numerical values.
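As a concrete illustration of the size of the correction, the short script below evaluates the small-$\Delta'$ formula (<ref>) and the corrected growth rate $\gamma_u$ of (<ref>) side by side, using the parameters quoted in the caption of Fig. 1; the values of $\rho_s$ in the loop are illustrative choices, not necessarily the exact ones used in the runs.

```python
import numpy as np

# Parameters from the Fig. 1 setup: d_e, lambda, Delta', mode number m, box half-length L_y.
d_e, lam, Delta_p, m = 0.1, 1.0, 0.72, 1
L_y = 0.48 * np.pi
k_y = np.pi * m / L_y                      # wave number k_y = pi m / L_y

for rho_s in (0.1, 0.2, 0.3, 0.4):         # illustrative rho_s values
    # Small-Delta' limit of the Porcelli relation (lambda = 1 here).
    gamma_small = 2.0 * k_y * d_e * rho_s * Delta_p / np.pi
    # Corrected growth rate gamma_u from the new dispersion relation.
    gamma_u = 2.0 * k_y * (rho_s**2 - rho_s**1.5 * np.sqrt(rho_s - 2.0 * d_e**2 * Delta_p)) \
              / (np.pi * d_e * lam)
    print(f"rho_s = {rho_s:.2f}:  gamma (small Delta') = {gamma_small:.4f},  gamma_u = {gamma_u:.4f}")
```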
As shown in Figs. <ref> and <ref>, the agreement between the theoretical and the numerical values appears to be improved by this new formula, when the latter is applied in its regime of validity. We also performed additional tests on a different equilibrium (the Harris sheet), as shown in Fig. <ref>. Also in this case, we observe that our new dispersion relation provides a better agreement with the numerical values. Consequently, (<ref>) can be seen as an upgrade of the formula (<ref>) in the regime of parameters indicated by Table 1. Figure <ref> gives a comparison between the theoretical growth rates predicted by Eqs. (<ref>), (<ref>) and (<ref>), and the numerical growth rate $\gamma_N$ as a function of the wave number $k_y$. According to these tests, $\gamma_u$ seems to give a very good prediction for wave numbers $k_y > 1.1$. The discrepancy observed for lower values of $k_y$ comes from the fact that the condition allowing the use of the constant-$\psi$ approximation, $(\gamma d_e/ (k_y \rho_s)) \Delta' \ll 1$, is no longer satisfied for a small wave number, and for $\Delta' > \rs/(2 d_e^2)$, the solution is no longer real.

Figure: Comparison between the theoretical growth rate predicted by Eqs. (<ref>), (<ref>) and (<ref>), and the numerical growth rate $\gamma_N$. The parameters are $d_e=0.03$, $\rho_s=0.03$, $\lambda=1$. The runs were done with the modes $1 \leq m \leq 4$. The box along $y$ is $-1.789 \pi < y < 1.789 \pi$. The corresponding values of the tearing stability parameter lie in the interval $0.005 \leq \Delta' \leq 47.86$.

§.§ Numerical results for $\bee \ne 0$

We now proceed to a numerical study of the model (<ref>) and (<ref>), complemented by (<ref>), (<ref>) and (<ref>). This allows us to take into account the effects of finite $\beta_e$. The numerical set-ups are the same as those presented in the previous Section, relative to the equilibrium (<ref>), but the code now accounts for finite $\beta_e$ effects. The gyroaverage operators are introduced as they are defined in Fourier space by Eqs. (<ref>) and (<ref>). For the linear tests we focus on a weakly unstable regime for which $0<\Delta'<1$. The strongly unstable case shows interesting behaviors in the nonlinear phase and will be studied in the next Section. For all the tests, we will use $\lambda=1$. In order to isolate the contribution coming from purely varying $\beta_e$, we first scan $ \bee $ from $ 10^{-3}$ to $1$ while $\rho_s$ and $d_e$ remain fixed, which is equivalent to considering a different mass ratio for each $\bee$ value. We recall that the parameters are indeed linked by the relation \begin{align} \label{rhoerel} & \rho_e = \rho_s \sqrt{\frac{m_e}{m_i}} = d_e \sqrt{\frac{\bee}{2}}. \end{align} Then we repeat the scan for different values of $d_e$. The results are presented in Fig. <ref> and show that the single effect of $\bee$ in the model equations is to stabilize the tearing mode. This is consistent with the results obtained in the gyrokinetic, non-collisional study of [Numata et al., 2011], where $\bee$ and the mass ratio are also varied. Figure <ref> also shows the competition between the destabilizing effect of the electron inertia and the stabilizing effect of $\bee$. For this set of parameters, the influence of $\bee$ on the weakly unstable regimes is almost negligible until $\bee=1$. For relatively low values of $\bee$, the highest growth rate corresponds to that for which the parameter $d_e$ is the largest. 
We recall in fact, from Section 3.1, that, for $\beta_e \ll 1$, the formulas (<ref>) and (<ref>) hold. Such formulas, for $d_e \ll 1$, predict that the growth rate increases linearly with $d_e$. Conversely, when $\bee$ becomes large enough (for $\bee > 0.15$ in this scan), the growth rate of the case with the largest $d_e$ decreases drastically under the effect of the finite $\rho_e$ and of the parallel magnetic perturbations induced by $\beta_e$.

[Figure caption:] Numerical growth rates of the collisionless tearing mode as a function of $\beta_e$, for three different values of $d_e$. The box length along $y$ is such that $-0.45\pi<y<0.45 \pi$, yielding a value of the tearing instability parameter of $\Delta'=0.067$ for the largest mode in the system.

We are in a very small $\Delta'$ regime, close to marginal stability when $\bee < 0.1$. One sees that for higher values of $\bee$, and depending on the value of $d_e$, the mode is stabilized. Some information about the stabilizing role of $\bee$ can be inferred by taking the small FLR limit of the equation (<ref>), which consists in considering the regime of parameters
\begin{align}
\label{reg1}
&d_e \ll 1 , \qquad \rs \ll 1 , \qquad \frac{d_e}{\rs} \ll 1, \qquad \bee = O(1),
\end{align}
and assuming
\begin{equation}
\label{reg2}
\lapp = O(1).
\end{equation}
If we retain the first-order FLR corrections as $d_e ,\rs \rightarrow 0$, the resulting Ohm's law reads
\begin{align}
&\frac{\pa}{\pa t}\left( \apar +\left(\frac{\bee}{4}-1\right) d_e^2 \lapp \apar \right)+\left[ \phi , \apar +\left(\frac{\bee}{4}-1\right) d_e^2 \lapp \apar \right] \nno \\
&+\left( \frac{\bee}{4} d_e^2 + \rs^2\left(\frac{\bee}{2 + \bee} -1\right)\right)[\lapp \phi , \apar]=0. \label{mom1flr}
\end{align}
The new contributions in Eq. (<ref>) are those due to finite $\bee$ and are not present in the usual two-field model by [Schep et al., 1994]. In particular, the contributions proportional to $(\bee/4)d_e^2$ come from electron FLR effects and the contribution proportional to $\bee\rs^2/(2+\bee)$ is due to the presence of the finite $\bpar$. In Eq. (<ref>), comparing with Eqs. (<ref>)-(<ref>), it is possible to identify an effective electron skin depth $d_e'$ and an effective sonic Larmor radius $\rs'$, given by
\begin{equation}
d_e' = \sqrt{1 - \frac{\bee}{4}}\, d_e, \qquad \rs' = \sqrt{ \rs^2 \left(1 - \frac{\bee}{\bee+2}\right) - \frac{\bee d_e^2}{4}},
\end{equation}
respectively. Therefore, considering, as a first approximation, the relation (<ref>) with $d_e'$ and $\rho_s'$ replacing $d_e$ and $\rho_s$, respectively, one can identify some of the stabilizing effects of $\beta_e$, given that $d_e' < d_e$ and $\rs' < \rs$. However, the small FLR limit (<ref>)-(<ref>) only gives us a limited insight, as it neglects higher-order derivative contributions coming from the gyroaverage operators, which can become important around the resonant surface. On the other hand, this insight is arguably easier to obtain with the gyrofluid model than with the gyrokinetic model. A further analysis we carried out consists of investigating the effect of $\beta_e$ on the linear growth rate, but at a fixed mass ratio. Physically, this might be interpreted as investigating the effect of the variation of the equilibrium electron temperature $T_{0e}$ or of the background density $n_0$ of the plasma, on the stability of the tearing mode. In order to keep a constant mass ratio during the scan in $\bee$, we carried out a study with $\bee$ ranging from $10^{-3}$ to $2$ with $\rho_s$ varying simultaneously.
We fix the relation $d_e = \sqrt{m_e/m_i}$ (implying $\rho_s=\sqrt{\beta_e / 2}$) and we evaluate the cases $d_e = 0.07$, $d_e = 0.15$ and $d_e = 0.1$. Figure <ref> shows that, when $\bee$ and $\rho_s$ are increased simultaneously, there seems to be a competition between the destabilizing effect of $\rho_s$ and the stabilizing effect of $\bee$. Also in this case, the behavior at small $\beta_e$ can be interpreted on the basis of the formulas (<ref>) and (<ref>), predicting an increase of the growth rate with increasing $\rho_s$. When electron FLR effects come into play at larger $\beta_e$, the growth rates decrease. The values chosen for the mass ratio are not realistic, but they make it possible to reduce the number of grid points needed. In the case of the artificial value of $d_e= \sqrt{m_e/m_i} = 0.15$, the stabilizing effect overtakes the destabilizing effect of $\rho_s$ even for $\beta_e <1$. However, for the case $\sqrt{m_e/m_i} = 0.07$, much closer to a real mass ratio, the effect of $\rho_s$ appears to be dominant. Indeed, decreasing $d_e$ at a fixed $\beta_e$ amounts to decreasing $\rho_e$. Thus, for $d_e =0.07$ the stabilizing effect of the electron FLR terms gets weakened, with respect to the other values of $d_e$, even at large $\beta_e$.

[Figure caption:] Numerical growth rates of the collisionless tearing mode as a function of $\beta_e$ and $\rs$, for different values of $d_e= \sqrt{m_e/m_i}$. The box size is $- \pi < x < \pi$, $- 0.47\pi< y < 0.47\pi$, which leads to $\Delta'=0.59$.

Figure <ref> shows the variation of the growth rate of the tearing instability as a function of $\bee$, for a fixed value of $\rho_s = 10 \rho_e = 0.3$. The results confirm the scaling of the growth rate as $\beta_e^{-1/2}$ (or, equivalently, as $d_e$) that was determined in the gyrokinetic study of [Numata & Loureiro, 2015]. This shows the capability of the gyrofluid model to reproduce, in a quantitative way, gyrokinetic results ([Numata et al., 2011, Numata & Loureiro, 2015]) and the fluid theory of [Fitzpatrick & Porcelli, 2007].

[Figure caption:] The value of $d_e$ for each run increases as $d_e=\sqrt{2 m_e /(\bee m_i)}\rs$. The box size is $- \pi < x <\pi$, $- 0.47\pi< y < 0.47\pi$. The numerical values (triangles) are compared with the curve $\gamma=\beta_e^{-1/2}$ (dotted line), which is the scaling predicted by [Fitzpatrick & Porcelli, 2007] on the basis of a fluid model, and confirmed by gyrokinetic simulations by [Numata et al., 2011]. The comparison shows that our gyrofluid model also confirms such scaling.

§.§.§ Hot ion limit, $\tau_i \rightarrow +\infty$

In this article we have focused, so far, on the cold-ion limit, but in this Subsection we temporarily deviate from the cold-ion case to consider the opposite limit, in which $\tau_i=\tau_{\perp i} \rightarrow +\infty$. The sole purpose of this Subsection is to provide a consistent and concise comparison of the two regimes; therefore we will only study the linear behavior of the hot-ion limit and leave the study of its non-linear evolution for future work. The hot-ion limit can actually be of greater interest for space plasmas such as the solar wind. The ion gyrocenter density fluctuation and the ion gyrocenter parallel velocity are still neglected, and therefore the evolution equations remain unchanged. Only the assumption (<ref>) is taken in the opposite limit, which has an impact on the development of the ion gyroaverage operators.
The static relations (<ref>) and (<ref>) are thus changed to \begin{align} & \phi = \frac{\rs^2 N_e}{\left(1 - \frac{\beta_e}{2}\right)G_{10e}-G_{10e}^{-1} }, \label{hotion1} \\ & \bpar = \frac{\bee}{2 \rs^2} \phi. \label{hotion2} \end{align} The linear results obtained in the hot-ion limit are compared to the results obtained in the cold-ion regime on Figure <ref>. The parameters are $d_e=0.1$, $\rho_s=0.1$. Our results seem to indicate that, for $\bee> 0.5$, the growth rate is very insensitive to the temperature of the ions, which is in agreement with the results obtained by [Numata et al., 2011]. Studies have been carried out with arbitrary ratio between the equilibrium ion and electron temperature in the low-$\beta$ limit, by [Porcelli, 1991, Grasso et al., 1999], and predict that the growth rate is significantly higher when the temperature of the ion background temperature is higher than that of the electrons. This is indeed what we observe for $\bee<10^{-2}$. Comparison between the linear growth rate obtained in the cold-ion regime and the hot-ion regime. The box size is $- \pi < x <\pi$, $- 0.47\pi< y < 0.47\pi$, which leads to $\Delta'=0.59$. § NONLINEAR PHASE Plot of the effective growth rate $\frac{d}{dt} \log\left| \apar^{(1)} \left(\frac{\pi}{2},0,t \right) \right|$, as a function of time. The corresponding values of $\beta_e$ are shown in the table. The value of the electron skin depth is kept fixed to $d_e=0.08$, whereas $\rho_s$ is varied (and ranges from $0.17$ to $0.69$) so to keep the mass ratio fixed to $m_e/m_i=0.01$. All the growth rates, except for the case $\beta_e=1.5$ exhibit the same behavior, characterized by linear, faster than exponential and saturation phase. The case $\beta_e=1.5$ exhibits also a slow-down phase. To study the impact of a finite $\bee$ on the non-linear evolution of the magnetic island, we focus on the strongly unstable case, $\Delta'=14.31$, resulting from a box length along $y$ given by $- \pi< y < \pi$. In this case, the mode $m=2$ has a positive tearing parameter $\Delta'_{2} = 1.23$. The higher harmonics are linearly unstable. The box along $x$ is chosen to be $-1.5 \pi < x < 1.5 \pi$ and allows to reach a large island without incurring in boundary effects. We make use of a resolution up to $2880\times2880$ grid points. The mass ratio will be taken as $m_e/m_i = 0.01$ for the following tests. The first tests are carried out by making a scan in $\bee$ from $\bee = 0.1$ to $\bee=1.5$ while keeping $d_e = 0.08$ and varying $\rho_s$ as $\rho_s=0.8 \sqrt{\bee} /\sqrt{2}$. Increasing $\bee$ and $\rs$ simultaneously in this way, as stated in Sec. 3.2, amounts to varying the electron background temperature $T_{0e}$. Figure <ref> shows the evolution in time of the effective growth rate, given by Eq. (<ref>), for each simulation. In all these cases, with the exception of $\bee=1.5$, we identify three phases; (1) a linear phase during which the perturbation evolution scales as $\mbox{exp}(\gamma t)$, (2) a faster than exponential phase, which is delayed in the case $\bee=0.1$, given that the linear growth rate is smaller, with respect to the case $\bee=0.8$ for which the instability reaches the nonlinear phase faster (3) a saturation during which the growth rate drops to $0$. We point out that, the fact that the linear growth rate increases with increasing $\bee$ is related to the fact that $\rho_s$ is also increased for each run. 
As discussed in the previous Section, the isolated effect of an increasing $\bee$ in the equations actually implies a stabilization of the linear growth rate. For the case $\bee = 1.5$, we observe an intermediate phase, during which the growth of the island is slowed down. It is also visible for the case $\bee = 0.8 $ that the growth rate shows a slowing down at $ t=38$ when it seemed to have already entered the explosive phase. Similar evolution and double faster than exponential phase have been studied in [Comisso et al., 2013], where a finite ion Larmor radius is considered. On the left: plot of the effective growth rate $\frac{d}{dt} \log\left| \apar^{(1)} \left(\frac{\pi}{2},0,t \right) \right|$, as a function of time. The parameters are $\bee =0.8$, implying $\rho_e=\sqrt{0.4}d_e$ and $\rho_s =10 \sqrt{0.4}d_e$. On the right: Evolution of half-width of the magnetic island until saturation. The simulations correspond to those in the left panel. We focus now on the case $\bee=0.8$. We scan the values of $d_e$ from $0.06$ to $0.1$, and $\rho_s = 10\rho_e = 10 \sqrt{0.4} d_e \approx 6.32 d_e$. The results are shown on Fig. <ref>. These curves are compared for a fixed time unit (fixed $v_A$), while keeping $\bee$ and the mass ratio constant, which corresponds to varying $B_0 \sim n_0^{1/2}$ while keeping the electron temperature $T_{0e}$ fixed. For the case of $d_e= 0.1$, which corresponds to $\rho_s \sim 0.63$, we observe that the slowdown at the end of the linear phase. On the other hand, in the case of $d_e= 0.06$, for which $\rho_s = 0.37$, it appears at an earlier stage of the evolution process, when the nonlinear phase is already entered and it is followed by an explosive growth. The "double faster than exponential" behavior, which is observed in the cases $ d_e> 0.08 $, is similar to that observed in [Comisso et al., 2013] for large ion Larmor radius values. The evolution of the width of the magnetic island for these five runs is shown on the right plot of Fig. <ref>. The last point for each run corresponds to the width of the island when $\gamma_{max}$ is reached, just before the saturation phase. In conclusion, the growth of the island simply seems to be delayed, but the maximum width before saturation is identical for each case since the amount of initial magnetic energy is the same for each simulation. On the left: plot of the effective growth rate $\frac{d}{dt} \log\left| \apar^{(1)} \left(\frac{\pi}{2},0,t \right) \right|$, for the cases $\bee=0$ (black curve) and $\bee=1.5$ (purple curve). The other parameters are $\rho_s = 0.519$ and $d_e=0.06$. On the right: Log of the time evolution of the reconnected flux at the X-point and the first 6 modes, from the simulation with $\bee=1.5$. The last test consists in studying an extreme case for which the slowing down phase is accentuated, which corresponds to the case of $d_e= 0.06$, $\rho_s = 0.519$, $\bee = 1.5$. We also perform the simulation for $\bee=0$, using a code that solves the fluid equations (<ref>) - (<ref>). Figure <ref> shows the overplot of the evolution of the growth rate for both simulation as a function of time. The slowing down phase is followed by an oscillation of the non-linear growth rate. This oscillation was obtained in other tests for which $\bee = 1.5$ and is due to a slight displacement of the X-point during the reconnection. §.§ Energy considerations The time variations of the different components of the energy for the cases $\bee=0$ and $\bee=1.5$, whose rate of growth is shown in Fig. 
<ref>, are shown in Fig. <ref>. The variations are defined as $(1/2)\int dx^2( \xi(x,y,t) - \xi(x,y,0)) / H(0)$ where the function $\xi$ can be replaced by the different contributions of the Hamiltonian (<ref>). In terms of the gyrofluid variables and in the presence of FLR effects, it is not obvious to identify the physical meaning of all the contributions to the energy. Therefore we use the terminology adopted in [Tassi et al., 2018] and which refers to the fluid limit $\beta_e=0$. The different contributions are, the magnetic energy, $E_{mag}$, for which $\xi= - U_e \gamue \apar $ (reduced to $|\nabla_{\perp} \apar|^2$ in the fluid case), the parallel electron kinetic energy, $E_{ke}$, for which $\xi = d_e^2 U_e^2$ (reduced to $d_e^2 ( \lapp \apar)^2$ in the fluid case), the energy due to the fluctuation of the electron density, $E_{pe}$, for which $\xi = \rho_s^2 N_e^2$ (reduced to $\rho_s^2 (\nabla_\perp^2 \phi)^2$ in the fluid case) and the perpendicular electrostatic energy of the electrons combined with the energy of the parallel magnetic perturbations, $E_{kp}$, for which $\xi = - ( \gamue \phi - \rho_s^2 2 \gde \bpar) N_e$ (reduced to $|\nabla_{\perp} \phi|^2$ in the fluid case). We consider the simulation as being reliable until the time at which the percentage of the total energy that gets dissipated numerically (black curve) reaches $1 \%$. By comparing the two simulations, one can see that there appears to be a comparable amount of magnetic energy being converted. The remarkable difference is the evolution of the component that combines the electrostatic energy and the energy of the parallel magnetic perturbations, $E_{kp}$, which, in the case $\bee=1.5$, also seems to be converted into electron thermal energy ($E_{pe}$), resulting in an increase in this component. This decrease of the electrostatic energy has been observed only in the case $\bee=1.5$. In the cases $\bee =0.8$, it appears that this component stays rather close to its initial value. We also carried out the test with $\bee=1.5$ by artificially removing the parallel magnetic perturbation $\bpar$ from the code, and consequently it was not appearing in the expression of $E_{kp}$. It appeared first that the presence of $\bpar$ has a stabilizing effect on the tearing mode (which is consistent with the linear results discussed in Sec. 3.2), and secondly, the energy component $E_{kp}$ was slightly increasing instead of decreasing. This allows us to conclude that the energy related to the parallel magnetic perturbations is in fact the decreasing component that seems to be converted into electron thermal energy $E_{pe}$. Time evolution of the energy variations for the cases $\bee=0$ (plot at the top) and $\bee=1.5$ (plot at the bottom). The parameters are $d_e=0.06$, $\rs=0.519$ and their corresponding growth rate is shown in Fig. <ref>. § CONSERVATION LAWS OF THE MODEL In this Section we discuss the conservation laws of the gyrofluid model and its Lagrangian invariants. Equations (<ref>)-(<ref>) can be recast in the form \begin{align} &A_\pm = \gamue \apar - d_e^2 U_e \pm d_e \rs N_e, \label{apm}\\ &\mathbf{v}_\pm = \hat{z} \times \nabla \left(\gamue \phi - \rs^2 2 \gamde \bpar \pm \frac{\rs}{d_e} \gamue \apar\right). \label{vpm} \end{align} We define by \begin{align} &\phi_\pm = \gamue \phi - \rs^2 2 \gamde \bpar \pm \frac{\rs}{d_e} \gamue \apar, \label{phipm} \end{align} the stream functions of the velocity fields $\mathbf{v}_\pm = \hat{z} \times \nabla \phi_\pm$. 
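Before turning to the consequences of this formulation, a minimal numerical sketch may help fix ideas. It evaluates $A_\pm$ and $\phi_\pm$ on a doubly periodic grid in the simple $\beta_e \rightarrow 0$ limit, in which the gyroaverage operators reduce to the identity and $\bpar$ drops out; the identifications $U_e \simeq \lapp \apar$ and $N_e \simeq \lapp \phi$ used below are the standard fluid-limit ones and are our assumption here (sign conventions may differ from those of the model equations). The code is illustrative only and is not the diagnostic used for the figures.

```python
import numpy as np

def fluid_limit_invariants(phi, a_par, d_e, rho_s, Lx, Ly):
    """Evaluate A_pm and phi_pm in the beta_e -> 0 limit on a periodic grid.
    Assumes U_e ~ lap(a_par) and N_e ~ lap(phi) (fluid-limit identifications);
    perpendicular Laplacians are computed spectrally."""
    ny, nx = phi.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=Ly / ny)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2

    def lap(f):
        return np.real(np.fft.ifft2(-k2 * np.fft.fft2(f)))

    U_e, N_e = lap(a_par), lap(phi)
    A_p = a_par - d_e**2 * U_e + d_e * rho_s * N_e   # A_+
    A_m = a_par - d_e**2 * U_e - d_e * rho_s * N_e   # A_-
    phi_p = phi + (rho_s / d_e) * a_par              # phi_+
    phi_m = phi - (rho_s / d_e) * a_par              # phi_-
    # By construction, N_e is recovered as (A_+ - A_-)/(2 d_e rho_s).
    return (A_p, A_m), (phi_p, phi_m)
```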
The formulation (<ref>) makes it evident the presence of Lagrangian invariants, corresponding to the fields $A_\pm$, in the model. Such Lagrangian invariants are advected by the incompressible velocity fields $\mathbf{v}_\pm$. The presence of such Lagrangian invariants is a feature common to many 2D Hamiltonian reduced gyrofluid models [Waelbroeck et al., 2009, Waelbroeck & Tassi, 2012, Keramidas Charidakos et al., 2015, Tassi, 2019, Tassi, 2017, Passot et al., 2018, Grasso et al., 2010, Grasso & Tassi, 2015] and is related to the existence of infinite families of Casimir invariants of the Poisson bracket. For Eqs. (<ref>)-(<ref>) , such invariants correspond to the two families C_+=∫d^2 x 𝒞_+ (A_+), C_-=∫d^2 x 𝒞_- (A_-), where $\mathcal{C}_\pm$ are arbitrary functions. Equations (<ref>) imply that contour lines of the fields $A_\pm$ cannot reconnect, as the corresponding vector fields $\mathbf{B}_\pm=\nabla A_\pm \times \hat{z}$ are frozen in the velocity fields $\mathbf{v}_\pm$. On the other hand, the same model allows magnetic field lines to reconnect. In particular, it is useful to illustrate the mechanisms breaking the frozen-in condition in this model. This can be done by inspection of Eq. (<ref>), governing the evolution of $\apar$, and consequently, of the magnetic field in the plane perpendicular to the guide field, which is given by $\mathbf{B}_\perp=\nabla \apar \times \hat{z}$. Equation (<ref>) can be rewritten in the following way: \begin{align} & \frac{\pa \apar}{\pa t} + \bu \cdot \nabla \apar \nno \\ &= -\frac{\mathcal{D}}{\mathcal{D} t} \left(\left( \frac{\bee}{4} -1\right) d_e^2 \lapp \apar + \sum_{n=2}^{+\infty} \left( \frac{\bee}{4n}-(-1)^{n-1}\right)\left(\frac{\bee}{4}\right)^{n-1}\frac{(d_e^2 \lapp)^n}{(n-1) !} \apar \right) \label{ohm} \\ &-\rs^2 \sum_{n=1}^{+\infty} \frac{1}{n!}\left( \frac{\bee}{4} d_e^2\right)^n [ {(\lapp)}^n \apar , N_e], \nno \end{align} =ẑ×∇(ϕ- ^2 2 - ^2 N_e), and where the operator $\mathcal{D}/\mathcal{D} t$ is defined by 𝒟f/𝒟 t=f/t+[ϕ- ^2 2 ,f] for a function $f$. In Eq. (<ref>) we also used the formal expansions =∑_n=0^+∞1/n! (/4 d_e^2 )^n, ^-1=∑_n=0^+∞(-1)^n/n! (/4 d_e^2 )^n. The right-hand side of Eq. (<ref>) contains all the terms that break the frozen-in condition. Indeed, if the right-hand side of Eq. (<ref>) vanishes, the perpendicular magnetic field is frozen in the velocity field $\bu$. From Eq. (<ref>) one thus sees that the frozen-in condition can be violated by electron inertia (associated with the parameter $d_e$) and by electron FLR effects (associated with the combination $(\bee /4) d_e^2$). In the limit $\bee=0$ only electron inertia remains to break the frozen-in condition. On the other hand, because electron FLR terms are associated with the product between $\bee/4$ and $d_e^2$, in the limit $d_e=0$ both electron inertia and electron FLR terms disappear and the right-hand side of Eq. (<ref>) vanishes, thus restoring the frozen-in condition. We remark that the presence of a finite $\bee$ is also responsible for finite parallel magnetic perturbations $\bpar$. However, these do not violate the frozen-in condition for the perpendicular magnetic field, as they only contribute to modify the advecting velocity field $\bu$ (the parallel magnetic field lines, on the other hand, might undergo reconnection). We consider here the qualitative structures of the contour plots of the Lagrangian invariants $A_\pm$ referring to the choice of parameters already adopted for Fig. <ref>. 
From comparing the contour plots of $A_{-}$ in the case $\bee=0$ (left panel of Fig. <ref>) and $\bee=1.5$ (middle panel of Fig. <ref>), the structures look qualitatively similar.

[Figure caption:] Contour plot of the Lagrangian invariant $A_{-}$. Left panel: $\bee=0$, right panel: $\bee=1.5$. The parameters are $d_e=0.06$, $\rs=0.519$. The dashed lines are the separatrices. The contour plots refer to the normalized time $\gamma t= 5.18$.

The contour lines of $A_{-}$ are advected by the velocity field associated with $\phi_{-}$ and undergo a phase mixing (the field $A_{+}$ winds up identically in the opposite direction, advected by $\phi_{+}$). The durations of the transient and linear phases are not identical; consequently, we compared the fields at the normalized time $\gamma t= 5.18$, which makes it possible to compare the fields when the islands are of comparable size, so that they have reached the same stage of evolution. The separatrices are displayed on each plot by dashed lines. We observe a different shape of the island in the two cases, which reflects the different distribution of the spectral power of the magnetic field. The effect of $\bee$ is to make the island more elongated along $y$ and thinner along $x$. If we take $\bee>1$ and keep a low enough mass ratio, we are forced into a regime with $\rs/d_e$ much greater than $1$. The ratio considered in this simulation is $\rho_s/d_e=8.65$. In this case $A_\pm$ is advected by a velocity field which can be approximated by $\mathbf{v}_\pm = \pm\hat{z} \times \nabla \left( \frac{\rs}{d_e} \gamue \apar\right)$, since $\phi_\pm$ tends to coincide with $\pm \frac{\rs}{d_e} \gamue \apar$. Other tests (whose results are not shown here), performed with $d_e \sim \rho_s$, $\beta_e \in \{0, 0.5\}$ and a mass ratio $20$ times higher, did not show any obvious difference in the mixing phase either.

[Figure caption:] Contour plot of the electron density. Left panel: $\bee=0$, middle panel: $\bee=1.5$. On the right panel are the profiles of $N_e$ at $y=\pi/3$ in the cases $\bee=0$ (purple) and $\bee=1.5$ (blue). The parameters are $d_e=0.06$, $\rs=0.519$. The dashed lines are the separatrices. The contour plots and profiles refer to the normalized time $\gamma t= 5.18$.

The electron density $N_e$ can be obtained from a linear combination of the invariants $A_\pm$,
\begin{equation}
N_e = \frac{A_+ - A_-}{2 d_e \rs}.
\end{equation}
The contour plot of the electron density is displayed in Fig. <ref> and shows the fine structures produced by the mixing of the Lagrangian invariants $A_\pm$. The case $\bee=1.5$ shows nested quadrupolar structures. From the difference between the profiles of $N_e$ in Fig. <ref>, it is visible that increasing $\bee$ smooths the gradients in the inner region of the electron density.

§ CONCLUSION

In this article, we have attempted to provide an overview of the impact of finite electron plasma beta effects on the tearing instability in a non-collisional plasma with cold ions and a strong guide field. Adopting a gyrofluid model, we have studied the effects of electron gyration and of a parallel magnetic perturbation. There is a wide variety of systems for which this study can be useful, such as magnetosheath plasmas, where current sheets form in the presence of a guide field and a large $\bee$ value. Recently, for instance, studies of observations of the MMS space mission in the magnetotail have revealed electron-only reconnecting current sheets, where ions do not participate and where $\bee$ values can be observed to be greater than $1$ ([Man et al., 2020]). Our main results are the following.
First, when increasing $\bee$ and $\rho_s$ while keeping $d_e$ and the mass ratio fixed, the evolution of the reconnection growth rate seems to be dominated by the destabilizing effect of $\rho_s$, up to a certain threshold where the effects of $\rho_e$ become important and the growth rate diminishes (Fig. <ref>). This can also be interpreted as fixing the background density, $n_0$, the ion mass (so that $d_e$ is fixed) and the guide field amplitude $B_0$, while increasing the electron temperature $T_{0e}$. In the case of a small $\Delta'$ regime, a high $\bee$ can eventually stabilize the tearing mode and prevent reconnection from occurring. Secondly, in the nonlinear regime of the case $\rho_s \gg d_e$ with $\beta_e \sim m_e/m_i \ll 1$ (which is referred to as the fluid regime in this article), we retrieved the well-known collisionless faster-than-exponential growth, which leads to an explosive growth of the magnetic island. However, when we increase $\bee$, this explosive paradigm is modified by the appearance of a slow-down phase preceding the explosive growth. A further conclusion is that the effect of $\bee$ on the Lagrangian invariants of the gyrofluid model does not seem to reduce the filamentary structure, produced by a "phase mixing", characteristic of these invariants. The results obtained with our gyrofluid model are in agreement with results obtained by gyrokinetic studies ([Numata et al., 2011, Numata & Loureiro, 2015]). They also complement some two-fluid studies in which a consistent accounting for $\beta_e$ effects, including both electron FLR and parallel magnetic perturbations, was lacking ([Schep et al., 1994, Grasso et al., 1999, Del Sarto et al., 2006, Fitzpatrick & Porcelli, 2007]).

§.§ Acknowledgements

The authors acknowledge helpful discussions with Dimitri Laveder. This work benefits from the support of the Ignitor project under the CNR contract DFM.AD003.261 (IGNITOR)-Del. CIPE n.79 del 07/08/2017. The numerical simulations were performed using the EUROfusion high performance computer Marconi Fusion hosted at CINECA (project FUA35-FKMR) and the computing facilities provided by “Mesocentre SIGAMM” hosted by Observatoire de la Côte d’Azur.

§.§ Appendix: Calculation of $\gamma_u$

We start from the linearized Eqs. (<ref>)-(<ref>), using the equilibrium (<ref>) and the perturbations (<ref>). The perturbations are subject to the boundary conditions $\wapar , \, \wphi \rightarrow 0$, as $x \rightarrow \pm \infty$. We look for even solutions of $\wapar(x)$ and odd solutions for $\wphi(x)$, which are standard parities for the classical tearing problem. We consider the time variation of the perturbation to be slow,
\begin{equation}
g=\frac{\gamma}{k_y} \ll 1,
\end{equation}
and the normalized electron skin depth as a small parameter, i.e.
\begin{equation}
d_e \ll 1.
\end{equation}
The linearized equations are given by
\begin{align}
&\gamma\left( \wphi'' - k_y^2 \wphi \right) - i k_y \byo'' \wapar + i k_y \byo \left( \wapar'' - k_y^2 \wapar \right)=0, \\
&\gamma\left( \wapar - d_e^2 \left( \wapar'' - k_y^2 \wapar \right)\right) + i k_y \wphi \left(\byo - d_e^2 \byo''\right) - i k_y \rho_s^2 \byo \left( \wphi'' - k_y^2 \wphi \right) =0,
\end{align}
where $\byo = - \mathrm{d} \apar^{(0)} / \mathrm{d} x$ is the equilibrium magnetic field. In order to solve (<ref>) and (<ref>) we have to adopt an asymptotic matching method, because the vanishing of the two small parameters $g$ and $d_e$ leads to a boundary layer at the resonant surface $x=0$. We will consider two spatial regions involving two spatial scales. Far from the resonant surface, located at $x=0$, the plasma can be assumed to be ideal and electron inertia can be neglected. This region is commonly called the outer region.
Close to the resonant surface, we will proceed to a spatial rescaling and get to a scale at which electron inertia becomes important and drives the reconnection process. This second region is called the inner region. We anticipate that we will find a second boundary layer inside the inner region and will need the use of a second asymptotic matching. §.§ Outer region As mentioned before, we assume $d_e \ll1$ and $g \ll 1$. We then neglect terms of order $d_e^2$ and $g^2$ in Eqs. (<ref>) and (<ref>). The outer equations are given by ”_out - ( k_y^2 + ^”/ )_out = 0 \begin{equation} \widetilde{\phi}_{out}(x)=\frac{i g \wapar_{out}(x)}{B_{y0}}, \label{outer2} \end{equation} where we indicate with the prime symbol, the derivative with respect to the argument of the function. The solution for $\tilde{A}_{out}$ is given by \begin{align} \label{outersolutionA2} & \wapar_{out} (x)= e^{-\frac{\left| x\right| \sqrt{ \lambda^2 k_y^2+4}}{ \lambda}} \left(\frac{15 \tanh ^3\left(\frac{\left| x\right| }{ \lambda}\right)}{ \lambda^2 k_y^2 \sqrt{ \lambda^2 k_y^2+4}}+\frac{15 \tanh ^2\left(\frac{\left| x\right| }{ \lambda}\right)}{ \lambda^2 k_y^2} \right. \nno \\ & \left.+\frac{\left(6 \left( \lambda^2 k_y^2+4\right)-9\right) \tanh \left(\frac{\left| x\right| }{ \lambda}\right)}{ \lambda^2 k_y^2 \sqrt{ \lambda^2 k_y^2+4}}+1\right) \end{align} From Eq. (<ref>), on the other hand, one sees that the solution for $\tilde{\phi}_{out}$ is not defined at the resonant surface $x=0$, where $B_{y0}$ vanishes. This indicates the presence of the above mentioned boundary layer at $x=0$. We measure the logarithmic derivative of the of the discontinuity of the outer solutions (<ref>) at $x=0$ with the formula (<ref>) of the standard tearing parameter and we obtain the expression \begin{equation} \label{Delta'2} \Delta' = \frac{2 \left(5- \lambda^2 k_y^2\right) \left(\lambda^2 k_y^2+3\right)} {\lambda^3 k_y^2 \sqrt{\lambda^2 k_y^2+4}}. \end{equation} In the limit $|x| \rightarrow 0 $ the solution for $\tilde{A}_{out}$ can be develop using its Taylor expansion _out= 1 + Δ'/2 |x| + O(x^2). If $\Delta'$ is small enough, the solution $\wapar$ can be approximated to be equal to $1$ in the region where $x \ll 1$. This is standard procedure called the constant $\psi$ approximation ([Furth et al., 1963]). §.§ Inner region: first boundary layer In the inner region, we proceed to a first spatial rescaling using an inner variable, $\hat{x}$, such that x =ϵ, where $\epsilon \ll 1$ is a stretching parameter. The rescaling (<ref>) implies $k_y\ll \partial_{\hat{x}} $, and allows to use a Taylor expansion of the equilibria (<ref>) (ϵx̂) = 2 ϵ/λ + O(ϵ^2). We obtain the two inner equations _in” = i g λ/ 2 ϵ_in” , g (_in - d_e^2/ϵ^2 _in”) + i 2 ϵ/λ_in - i ρ_s^2 2 /λϵ_in”=0. We introduce the real-valued displacement function ξ_in = -i/g _in, and injecting (<ref>) in (<ref>), we obtain the layer equation ξ_in”/ϵ^2 -2 ϵ/ λ^2 ( g^2 d_e^2/^2 + 4 ϵ^2 ^2 /λ^2 ) ( ϵ/2 λ ξ_in -1 ) =0, where we used the constant $\psi$ approximation, which, we recall, consists in approximating $\wapar_{in} \sim 1$ close to $x=0$. In order to solve (<ref>) we will assume g d_e ≪ρ_s^2 ≪1, and will make use of a second asymptotic matching inside the inner region. We will have indeed two boundary layers at $x=0$, defining two spatial regions in which the equations can be solved. A boundary layer exists at the scale $\epsilon_1 = \rho_s$ and a second one at a smaller scale, for $\epsilon_2 = \frac{g d_e}{\rho_s}$. 
In the first layer we use ϵ=ϵ_1 = ρ_s, ξ_in= /ϵ_1, where $\hxi$ is the rescaled displacement function. This choice for $\epsilon$ yields a distinguished limit allowing to retain the maximum number of terms in Eq. (<ref>), as $\epsilon \rightarrow 0$, accounting for the condition (<ref>), which allows to neglect the term $g^2 d_e^2/\rho_s^2$ in the denominator of Eq. (<ref>). We restrict our study to the case of negligible FLR effects in the inner region, which implies that $\rho_e \ll \epsilon_1$. This condition ensures that the terms responsible for the electron FLR effects remain smaller than those responsible for the effects of electron inertia. The rescaling leads to the layer equation ” - = - 2 λ/. The solution of Eq. (<ref>) is = λ/4 e^ E_1( ) + λ/4 e^- ( Ei( ) - λg d_e/ρ_s^2π/2 ). Where we already fixed the constants of integration in order to ensure $\lim_{z \rightarrow + \infty} \txi =0$ and to ensure the matching with the solution in the second layer. In (<ref>) we used the expression of the exponential integral functions \begin{equation} E_1(x) = \int_x^{+\infty}\frac{e^{-t}}{t}dt, \quad \quad \mbox{and} \quad \quad Ei(x) = \int_{-\infty}^x \frac{e^{t}}{t}dt. \quad \quad \mbox{for} \quad x>0, \end{equation} §.§ Inner region : second boundary layer In the second layer, where $\hat{x} \sim g d_e /\rho_s^2$, the solution (<ref>) is no longer valid. Therefore, in the second layer, we perform the following rescaling ϵ=ϵ_2 = g d_e/ρ_s, ξ_in= d_e/ρ_s^3, and introduce the second inner variable $\bar{x}=x/\epsilon_2$ (so that $\hat{x}=(g d_e/ \rho_s^2) \bar{x}$). Since we are at an even smaller spatial scale than that of the previous layer, we emphasize the condition of neglecting the FLR effects also in this second inner layer, i.e. $\rho_e \ll \epsilon_2 $. Considering our assumption (<ref>), the equation (<ref>) becomes, ” + 2 /λ( 1 + 4 ^2/λ^2) = 0. The solution of Eq. (<ref>), written bellow, in terms of the variables $\hx$ and $ \hxi$ reads () = λ( 1-γ_E+ λg d_e /2 ρ_s^2π/2+ log( ρ_s^2/ g d_e) ) - λ^2 g d_e/4 ρ_s^2arctan(ρ_s^2 2 / g d_e λ ) - λ/4log( ( ρ_s^2/g d_e)^2+ λ/q) . This solution satisfies the boundary condition $\hat{\xi} (0) =0$, descending from the requirement of $\tilde{\phi}$ being an odd function. In Eq. (<ref>) $\gamma_E$ is the Euler constant. §.§ $\Delta'$ matching We add the following matching condition concerning the derivatives of the solutions: \begin{equation} \label{intA''} \Delta'= \frac{1}{\epsilon_1} \int^{\infty}_{-\infty} \wapar''_{in} d \hx. \end{equation} Using the relations (<ref>) and (<ref>) and using the variables $\hx$ and $\hxi$ we write Δ'= 2 g^2/ ρ_s^3 ∫^+ ∞_0 ( 1 - 2 /λ)/ ( g^2 d_e^2/ ρ_s^4 + 4 ^2/ λ^2) d. We separate the integral referring to the second term on the right-hand side of Eq. (<ref>) in two parts, one from $0$ to $\sigma$ and one from $\sigma$ to $+\infty$, with $\sigma$ a parameter constrained in the overlap region such that g d_e/ ρ_s^2 ≪σ≪1/log( g d_e/ ρ_s^2 ) . We also recall that $\frac{g d_e}{ \rho_s^2} \ll 1$ is our assumption (<ref>). 
Equation (<ref>) can then be rewritten as \begin{equation} \begin{split}\label{troisintegrales} & \Delta'= \frac{2 g^2}{ \rho_s^3} \int^{+ \infty}_0 \frac{1}{ \left( \frac{g^2 d_e^2}{ \rho_s^4} + \frac{4 \hx^2}{ \lambda^2}\right) } d\hx - \frac{4 g^2}{\lambda \rho_s^3 } \int^{\sigma}_0 \frac{\hx \hxi }{ \left( \frac{g^2 d_e^2}{ \rho_s^4} + \frac{4 \hx^2}{ \lambda^2}\right) } d\hx - \frac{4 g^2}{\lambda \rho_s^3 } \int^{\infty}_\sigma \frac{\hx \hxi }{ \left( \frac{g^2 d_e^2}{ \rho_s^4} + \frac{4 \hx^2}{ \lambda^2}\right) } d\hx.\\ & =\frac{g \lambda \pi }{2 d_e \rho_s} + W_2 + W_1. \end{split} \end{equation} We calculate the expression (<ref>) accurate to $g^2/ \rho_s^3$ so smaller terms are neglected (the next higher term is of order $ \frac{g^2}{ \rho_s^3} \sigma \log \frac{g d_e}{ \rho_s^2} $ and thanks to the constraint (<ref>) we have $\sigma \log \frac{g d_e}{ \rho_s^2}\ll1$). In the interval between $\sigma$ and $+ \infty$, we use the hypothesis (<ref>), given by $g d_e \ll \rho_s^2 \ll 1$ to simplify the denominator. \begin{equation} \begin{split} W_1 & = - \frac{4 g^2}{\lambda \rho_s^3 } \int^{\infty}_\sigma \frac{\hx \hxi }{ \left( \frac{g^2 d_e^2}{ \rho_s^4} + \frac{4 \hx^2}{ \lambda^2}\right) } d\hx.\\ & = - \frac{g^2}{\rho_s^3 } \int^{\infty}_\sigma \frac{\hx }{ \left( \frac{g^2 d_e^2}{ \rho_s^4} + \frac{4 \hx^2}{ \lambda^2}\right) } \left( e^{ \hx} E_1( \hx) + e^{- \hx} \left( Ei( \hx) - \frac{ \lambda g d_e}{\rho_s^2}\frac{\pi}{2} \right)\right) d\hx.\\ & = - \frac{\lambda^2 g^2 }{4\rs^3} \int^{\infty}_\sigma \frac{1}{ \hx} \left( e^{ \hx} E_1( \hx) + e^{- \hx} Ei( \hx) \right) d\hx + \frac{\lambda^3 g^3 d_e }{ 4\rs^5} \frac{\pi}{2} \int^{\infty}_\sigma \frac{e^{- \hx}}{ \hx} d\hx. \end{split} \end{equation} Using the identity $e^{u} E_1(u) + e^{-u} Ei(u) = 2 \int_0^{\infty} \frac{u}{u^2 + t^2} \sin(t) dt $ (from [Geller & Ng, 1969] (id. 22 Tab. 3.3)) and knowing that $\Gamma (0, \sigma)= \int^{\infty}_\sigma \frac{e^{- \hx}}{ \hx} d \hat{x}$ is the incomplete gamma function whose dominant contribution, as $\sigma \rightarrow 0^+$, is $\log(\sigma)$, we obtain \begin{equation} \begin{split} W_1 & = \frac{\lambda^2 g^2 }{ \rs^3} \left( \int_{0}^{\infty} \int_{ \sigma}^{\infty} \frac{\sin(t)}{\hx^2 + t^2} \, d\hx \, dt + O \left(\frac{g d_e}{\rho_s^2}\log(\sigma)\right) \right) d\hx,\\ \end{split} \end{equation} when $ \sigma \rightarrow 0^{+}$ and $g d_e/(\rho_s^2\sigma) \rightarrow 0^{+}$. Focusing now on the remaining double integral, \begin{equation} \begin{split} \int_{0}^{\infty} \int_{\sigma}^{\infty} \frac{\sin(t)}{\hx^2 + t^2} \, d\hx \, dt & =\int_{0}^{\infty} \sin(t) \frac{\arctan(\hx/t)}{t} \Big{\rvert}_{ \sigma}^{\infty} \, dt \\ & =\frac{\pi}{2} \int_{0}^{\infty} \frac{\sin(t)}{t} \, dt - \int_{0}^{\infty} \frac{\sin(t)}{t} \arctan(\sigma /t) \, dt. \end{split} \end{equation} We can prove that the second term is negligible when $ \sigma \rightarrow 0^{+}$ by introducing a new small parameter $\kappa$ such as $ \sigma \ll \kappa \ll 1$, splitting the integral into the sum of an integral from $0$ to $\kappa$ with an integral from $\kappa$ to $+\infty$, and using that in the region $0<t<\kappa$, $\arctan( \sigma /t) < \frac{\pi}{2} $ and $\sin(t) \sim t$ and in the region $\kappa < t $, one has $\arctan(\sigma /t) \sim ( \sigma /t )$. 
We thus obtain
\begin{equation}
\begin{split}
W_1 &= - \frac{\lambda^2 g^2 }{ 4\rs^3} \left( \frac{ \pi^2}{2} + O \left(\frac{g d_e}{\rho_s^2}\log(\sigma)\right)\right),
\end{split}
\end{equation}
when $ \sigma \rightarrow 0^{+}$ and $g d_e/(\rho_s^2\sigma) \rightarrow 0^{+}$. It is then possible to show, using (<ref>) and (<ref>), that
\begin{equation}
\begin{split}
W_2 & = O\left( \frac{g d_e}{\rho_s^2}\log \left(\frac{g d_e}{\rho_s^2}\right)\right) + O\left(\frac{g d_e}{\rho_s^2}\log\left(\sigma\right)\right) + O \left( \sigma \log \left(\sigma\right)\right) + O\left( \sigma \log \left(\frac{g d_e}{\rho_s^2}\right)\right) ,
\end{split}
\end{equation}
when $ \sigma \rightarrow 0^{+}$ and $g d_e/(\rho_s^2\sigma) \rightarrow 0^{+}$. Summing all the leading order terms and neglecting the higher order contributions, we obtain the dispersion relation
\begin{equation}
\Delta' = \frac{g \lambda \pi}{2 d_e \rho_s} - \frac{g^2 \lambda^2}{4 \rho_s^3}\frac{\pi^2}{2}.
\end{equation}
It is possible, in virtue of (<ref>), to verify that the second term on the right-hand side of Eq. (<ref>) is smaller than the first one ($g/(d_e \rho_s) \gg g^2 / \rho_s^3$). Retaining only the first term in Eq. (<ref>) gives the growth rate predicted by [Porcelli, 1991] and corresponding to the dispersion relation (<ref>). When taking into account the corrective term, we obtain the expression for the growth rate
\begin{equation}
\gamma_u = 2 k_y \left( \frac{\rho_s^2}{\pi d_e \lambda} - \frac{\rho_s^{3/2}\sqrt{\rho_s - 2 d_e^2 \Delta'}}{\pi d_e \lambda} \right),
\end{equation}
corresponding to Eq. (<ref>). We remark that, because of the parity properties we required on $\tilde{\phi}$ and $\tilde{A}$, the growth rate $\gamma_u$ has to be real, which enforces a further condition of validity, corresponding to
\begin{equation}
\rho_s \geq 2 d_e^2 \Delta '.
\end{equation}
We performed high precision tests to verify the corrective term of the dispersion relation (<ref>).

§ REFERENCES

[Aydemir, 1992] Aydemir, A. Y. 1992 Nonlinear studies of m=1 modes in high-temperature plasmas. Physics of Fluids B: Plasma Physics 4 (11), 3469–3472, arXiv: [Biancalani & Scott, 2012] Biancalani, A. & Scott, B. D. 2012 Observation of explosive collisionless reconnection in 3d nonlinear gyrofluid simulations. EPL (Europhysics Letters) 97 (1), 15005. [Brizard, 1992] Brizard, Alain 1992 Nonlinear gyrofluid description of turbulent magnetized plasmas. Physics of Fluids B: Plasma Physics 4 (5), 1213–1228, arXiv: [Cafaro et al., 1998] Cafaro, E., Grasso, D., Pegoraro, F., Porcelli, F. & Saluzzi, A. 1998 Invariants and geometric structures in nonlinear Hamiltonian magnetic reconnection. Phys. Rev. Lett. 80, 4430–4433. [Comisso et al., 2013] Comisso, L., Grasso, D., Waelbroeck, F. L. & Borgogno, D. 2013 Gyro-induced acceleration of magnetic reconnection. Physics of Plasmas 20 (9), 092118, arXiv: [Del Sarto et al., 2003] Del Sarto, D., Califano, F. & Pegoraro, F. 2003 Secondary Instabilities and Vortex Formation in Collisionless-Fluid Magnetic Reconnection. Physical Review Letters 91 (23), [Del Sarto et al., 2006] Del Sarto, D., Califano, F. & Pegoraro, F. 2006 Electron parallel compressibility in the nonlinear development of two-dimensional collisionless magnetohydrodynamic reconnection. Modern Physics Letters B 20 (16), 931–961, arXiv: [Dorland & Hammett, 1993] Dorland, W. & Hammett, G. W. 1993 Gyrofluid turbulence models with kinetic effects. Physics of Fluids B: Plasma Physics 5 (3), 812–835, arXiv: [Eastwood et al., 2018] Eastwood, J. P., Mistry, R., Phan, T. D., Schwartz, S. J., Ergun, R. E., Drake, J. F., Øieroset, M., Stawarz, J. E., Goldman, M. V., Haggerty, C., Shay, M.
A., Burch, J. L., Gershman, D. J., Giles, B. L., Lindqvist, P. A., Torbert, R. B., Strangeway, R. J. & Russell, C. T. 2018 Guide field reconnection: Exhaust structure and heating. Geophysical Research Letters 45 (10), 4569–4577, arXiv: [Fitzpatrick, 2010] Fitzpatrick, R. 2010 Magnetic reconnection in weakly collisional highly magnetized electron-ion plasmas. Physics of Plasmas 17 (4), 042101, arXiv: [Fitzpatrick & Porcelli, 2004] Fitzpatrick, Richard & Porcelli, Franco 2004 Collisionless magnetic reconnection with arbitrary guide field. Physics of Plasmas 11 (10), 4713–4718, arXiv: [Fitzpatrick & Porcelli, 2007] Fitzpatrick, R. & Porcelli, F. 2007 Erratum: Collisionless magnetic reconnection with arbitrary guide-field [phys. plasmas 11, 4713 (2004)]. Physics of Plasmas 14 (4), 049902, arXiv: https://doi.org/10.1063/1.2715576. [Furth et al., 1963] Furth, H.P., Killeen, J. & Rosenbluth, M. N. 1963 Finite resistivity instabilities of a sheet pinch. Phys. Fluids 6, 459. [Geller & Ng, 1969] Geller, Murray & Ng, Edward W. 1969 A table of integrals of exponential integral. J. Res. Natl. Bur. Stand., Sec. B: Math. Sci. 73B (3), 191. [Grasso et al., 2001] Grasso, D., Califano, F., Pegoraro, F. & Porcelli, F. 2001 Phase mixing and saturation in Hamiltonian reconnection. Phys. Rev. Lett. 86, 5051–5054. [Grasso et al., 2006] Grasso, D, Margheriti, L, Porcelli, F & Tebaldi, C 2006 Magnetic islands and spontaneous generation of zonal flows. Plasma Physics and Controlled Fusion 48 (9), L87–L95. [Grasso et al., 1999] Grasso, D, Pegoraro, F, Porcelli, F & Califano, F 1999 Hamiltonian magnetic reconnection. Plasma Physics and Controlled Fusion 41 (12), 1497–1515. [Grasso & Tassi, 2015] Grasso, D & Tassi, E 2015 Hamiltonian magnetic reconnection with parallel electron heat flux dynamics. Journal of Plasma Physics 81, 495810501. [Grasso et al., 2010] Grasso, D., Tassi, E. & Waelbroeck, F. L. 2010 Nonlinear gyrofluid simulations of collisionless reconnection. Physics of Plasmas 17 (8), 082312, arXiv: [Keramidas Charidakos et al., 2015] Keramidas Charidakos, I., Waelbroeck, F. L. & Morrison, P. J. 2015 A Hamiltonian five-field gyrofluid model. Phys. Plasmas 22, 112113. [Lele, 1992] Lele, Sanjiva K. 1992 Compact finite difference schemes with spectral-like resolution. Journal of Computational Physics 103 (1), 16–42. [Man et al., 2020] Man, H., Zhou, M., Yi, Y., Zhong, Z., Tian, A., Deng, X. H., Khotyaintsev, Y., Russell, C. T. & Giles, B. 2020 Observations of electron‐only magnetic reconnection associated with macroscopic magnetic flux ropes. Geophysical Research Letters 47. [Mandell et al., 2018] Mandell, N. R., Dorland, W. & Landreman, M. 2018 Laguerre–hermite pseudo-spectral velocity formulation of gyrokinetics. Journal of Plasma Physics 84 (1), 905840108. [Morrison, 1998] Morrison, P. J. 1998 Hamiltonian description of the ideal fluid. Rev. Mod. Phys. 70, 467–521. [Numata et al., 2011] Numata, Ryusuke, Dorland, William, Howes, Gregory G., Loureiro, Nuno F., Rogers, Barrett N. & Tatsuno, Tomoya 2011 Gyrokinetic simulations of the tearing instability. Physics of Plasmas 18 (11), 112106, arXiv: [Numata & Loureiro, 2015] Numata, Ryusuke & Loureiro, N. F. 2015 Ion and electron heating during magnetic reconnection in weakly collisional plasmas. Journal of Plasma Physics 81 (2), 305810201. [Passot et al., 2018] Passot, T., Sulem, P. L. & Tassi, E. 2018 Gyrofluid modeling and phenomenology of low- $\beta_e$ Alfvén wave turbulence. Phys. Plasmas 25, 042107. [Porcelli, 1991] Porcelli, F. 
1991 Collisionless m=1 tearing mode. Phys. Rev. Lett. 66, 425–428. [Schekochihin et al., 2009] Schekochihin, A. A., Cowley, S. C., Dorland, W., Hammett, G. W., Howes, G. G., Quataert, E. & Tatsuno, T. 2009 ASTROPHYSICAL GYROKINETICS: KINETIC AND FLUID TURBULENT CASCADES IN MAGNETIZED WEAKLY COLLISIONAL PLASMAS. The Astrophysical Journal Supplement Series 182 (1), [Schep et al., 1994] Schep, T. J., Pegoraro, F. & Kuvshinov, B. N. 1994 Generalized two-fluid theory of nonlinear magnetic structures. Phys. Plasmas 1, 2843–2851. [Tassi, 2017] Tassi, E. 2017 Hamiltonian closures in fluid models for plasmas. Eur. Phys. J. D 71, 269. [Tassi, 2019] Tassi, E. 2019 Hamiltonian gyrofluid reductions of gyrokinetic equations. J. Phys. A: Math. and Theor. 52, [Tassi et al., 2018] Tassi, E., Grasso, D., Borgogno, D., Passot, T. & Sulem, P. 2018 A reduced Landau-gyrofluid model for magnetic reconnection driven by electron inertia. Journal of Plasma Physics 84 (4), 725840401. [Tassi et al., 2020] Tassi, E., Passot, T. & Sulem, P.L. 2020 A Hamiltonian gyrofluid model based on a quasi-static closure. J. Plasma Phys. 86, 835860402. [Waelbroeck et al., 2009] Waelbroeck, F. L., Hazeltine, R. D. & Morrison, P. J. 2009 A Hamiltonian electromagnetic gyrofluid model. Phys. Plasmas 16, 032109. [Waelbroeck & Tassi, 2012] Waelbroeck, F. L. & Tassi, E. 2012 A compressible Hamiltonian electromagnetic gyrofluid model. Commun. Nonlinear Sci. Numer. Simulat. 17, 2171.
# Thermalized buckling of isotropically compressed thin sheets Suraj Shankar1 and David R. Nelson1,2 1Department of Physics, Harvard University, Cambridge, MA 02138, USA 2Department of Molecular and Cellular Biology and School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138, USA ###### Abstract The buckling of thin elastic sheets is a classic mechanical instability that occurs over a wide range of scales. In the extreme limit of atomically thin membranes like graphene, thermal fluctuations can dramatically modify such mechanical instabilities. We investigate here the delicate interplay of boundary conditions, nonlinear mechanics and thermal fluctuations in controlling buckling of confined thin sheets under isotropic compression. We identify two inequivalent mechanical ensembles based on the boundaries at constant strain (isometric) or at constant stress (isotensional) conditions. Remarkably, in the isometric ensemble, boundary conditions induce a novel long-ranged nonlinear interaction between the local tilt of the surface at distant points. This interaction combined with a spontaneously generated thermal tension leads to a renormalization group description of two distinct universality classes for thermalized buckling, realizing a mechanical variant of Fisher-renormalized critical exponents. We formulate a complete scaling theory of buckling as an unusual phase transition with a size dependent critical point, and discuss experimental ramifications for the mechanical manipulation of ultrathin nanomaterials. ## 1 Introduction Thin sheets with a resistance to shear can accommodate compressive stresses through an array of mechanical instabilities, including buckling Landau and Lifshitz (1986); Koiter (1967), wrinkling Cerda _et al._ (2002); *Cerda2003; Davidovitch _et al._ (2011); Vandeparre _et al._ (2011), folding Pocivavsek _et al._ (2008); Diamant and Witten (2011) and crumpling Ben Amar and Pomeau (1997); *Cerda1998; *Cerda1999; Witten (2007), all controlled essentially by geometry. Although once disregarded as undesirable modes of failure, instabilities now play a central role in the design of mechanical metamaterials Krieger (2012); Bertoldi _et al._ (2017) as they combine complex morphologies with mechanical functionality. In recent years, rapid miniaturization has driven intense research efforts in developing similar metamaterials on a much smaller scale Blees _et al._ (2015); Rogers _et al._ (2016); *xu2017ultrathin; *miskin2018graphene; *reynolds2019capillary; *grosso2020graphene. In this regard, atomically thin two dimensional (2D) materials such as graphene, MoS2 or BN Novoselov _et al._ (2005); Katsnelson (2012) are particularly promising and offer unprecedented opportunities to study classical elasticity and mechanics in the ultimate limit in thin sheets, where thermal fluctuations can play a dominant role Nelson _et al._ (2004); Katsnelson (2012). In such ultrathin flexible materials thermal fluctuations can dramatically renormalize the mechanical properties in a scale-dependent fashion Nelson and Peliti (1987). Out of plane (flexural) deformations allow tensionless solid membranes to exhibit a remarkable thermally wrinkled, yet flat phase with a scale-dependent bending rigidity and strongly softened elastic moduli Bowick and Travesset (2001); Nelson _et al._ (2004). 
While thin sheets favour bending over energetically expensive stretching, geometry links the two as any bending-induced Gaussian curvature inevitably causes stretching as well. This basic feature underlies many of the impressive finite temperature properties. Nanoindentation measurements in graphene Lee _et al._ (2008) and MoS2 Bertolazzi _et al._ (2011) monolayers yield exceptionally high Young’s moduli on the nanoscale as expected from strong covalent bonding. Yet on larger scales $\sim 10~{}\mu$m, recent experiments with freely suspended graphene have demonstrated a $\sim 4000$ fold enhancement of the bending rigidity Blees _et al._ (2015) and a factor $\sim 20$ reduction in the in-plane stiffness Nicholl _et al._ (2015); *nicholl2017hidden, due to a combination of thermally generated and static ripples Meyer _et al._ (2007); Košmrlj and Nelson (2013); Xu _et al._ (2014), highlighting the importance of flexural fluctuations. While the anomalous mechanics of thermalized membranes has been extensively explored, the role of confinement and boundaries is much less appreciated. Supported or clamped edges are one of the most commonly encountered boundary conditions, in electromechanical resonators Bunch _et al._ (2007); Chen _et al._ (2009), multistable switches Loh and Espinosa (2012) and in nanomechanical devices Blees _et al._ (2015); Nicholl _et al._ (2015, 2017). Geometric confinement at the boundary can induce prestrains in the sample that can cause large scale instabilities such as wrinkling Bao _et al._ (2009). As a result, in recent years, exploring the influence of external stresses on the mechanics of fluctuating membranes has been a topic of prime interest Amorim _et al._ (2016). While there has been some theoretical work, both old Guitter _et al._ (1988); *guitter1989thermodynamical and new Roldán _et al._ (2011); Košmrlj and Nelson (2016); Bonilla and Ruiz-Garcia (2016); Burmistrov _et al._ (2018a); *burmistrov2018differential; *saykin2020absolute, complemented by more recent large scale numerical simulations Jiang (2014); Bowick _et al._ (2017); Wan _et al._ (2017); Sgouros _et al._ (2018); Morshedifard _et al._ (2021); Hanakata _et al._ (2021), elucidating the role of boundaries in controlling the nonlinear mechanics and buckling of thermalized sheets, particularly for compressions which attempt to impose a nonzero Gaussian curvature, remains a challenging problem. Motivated by the above, in this paper, we pose and answer the following question - what is the finite temperature version of the buckling transition in an isotropically compressed thin sheet? Euler buckling represents the simplest mechanical instability a thin elastic body can undergo and it provides an attractive setting to investigate the interplay of thermal fluctuations and boundary conditions along with the geometric nonlinearities inherent to thin plate mechanics. In particular, we focus on the universal aspects of the transition such as critical scaling exponents that are independent of microscopic details. By combining a full renormalization group analysis along with a general scaling theory, we provide a complete description of thermalized buckling as a genuine phase transition that exhibits critical scaling along with more unusual features such as a sensitive dependence on system size and the choice of boundary conditions. Apart from new statistical mechanical results, the scaling framework we propose also yields key predictions that have important ramifications for experiments that we highlight below. 
In the rest of the introduction, we summarize our main results and outline the structure of the paper. ### 1.1 Results and outline A key outcome of our work is a renormalization group analysis, augmented by a scaling description of buckling in isotropically compressed thermalized thin sheets. There are two main reasons why finite temperature buckling, when viewed as a phase transition, is distinct from conventional critical phenomena. The first is that buckling is strongly size dependent, even at zero temperature by virtue of it being a long-wavelength instability Landau and Lifshitz (1986). The second is the remarkable fact that freely fluctuating elastic sheets exhibit a flat phase Nelson and Peliti (1987) with critical fluctuations over an extended range of temperatures. Both these features are characteristic of thin sheets, arising from an interplay of geometry and mechanics, and form the basis of our results below. ### 1 _Ensemble inequivalence & Fisher renormalization_ Figure 1: A sketch showing a possible realization of the two mechanical ensembles, for example, in an atomically thin sheet of graphene. (a) In the isotensional ensemble, a constant inward external stress $\sigma_{0}$ is applied to the membrane, while the boundary displacement fluctuates. This set- up might be realized in the same fashion as in single molecule experiments, by using feedback-controlled multiplexed optical tweezers to actuate under constant force conditions, similar to experiments recently used to probe the mechanical response of red blood cells Turlier _et al._ (2016). (b) The isometric ensemble instead corresponds to a clamped boundary with the external load imposed via a global strain $\epsilon$. While, current experiments with graphene typically suspend monolayers across fixed size holes Nicholl _et al._ (2015), a variable aperture size tuned by a camera shutter mechanism could be used to tune the strain isotropically. An external symmetry breaking field $\mathcal{E}$ perpendicular to average plane of the sheet can also be applied in either ensemble to bias the direction of buckling. Thin sheets can be loaded in-plane either by prescribing the external strain (isometric) or the external stress (isotensional) at the boundary (see Fig. 1 for an illustration). These constitute dual mechanical ensembles, in analogy with thermodynamic ensembles Gibbs (1902). While it is well known that boundary conditions can modify nonuniversal quantities such as the buckling threshold Landau and Lifshitz (1986), we discover that universal scaling exponents (defined in Sec. 4) can also exhibit a remarkable sensitivity to boundary conditions! We demonstrate this fact explicitly within a systematic $\varepsilon=4-D$ expansion for a general $D$-dimensional solid embedded in $d>D$ dimensional space (discussed in Secs. 7 and 8 with details relegated to Appendix C) along with a simpler, but uncontrolled, one-loop calculation performed directly in the physical dimensions of $D=2$ and $d=3$ in Appendix D. Our calculations show that buckling in the two mechanical ensembles is in fact controlled by two distinct fixed points, with different scaling exponents that are summarized in Table 1. This remarkable departure from conventional wisdom demonstrates the nonequivalence of mechanical ensembles in thermally fluctuating thin sheets and highlights the subtle nature of membrane statistical mechanics. 
Thus, in the simplest setting of isotropic compression, we find thermalized plate buckling is described by two distinct critical points characterizing the isometric and isotensional universality classes that are distinguished simply by the boundary conditions imposed. Although surprising, the inequivalence of ensembles has precedence in critical phenomena. A critical point engenders fluctuations on all scales which under appropriate conditions can result in scaling exponents that change upon switching to a dual or constrained ensemble. This is known as Fisher renormalization Fisher (1968); *lipa1968critical; *fisher1970visibility. In our case, however, by tuning to the buckling threshold, we approach the flat phase of a free-standing membrane which characterizes an entire low temperature _critical phase_ with scale invariant fluctuations! Furthermore, we find that the imposition of a fixed strain (in contrast to a fixed stress) boundary condition induces a novel long-ranged nonlinearity that couples the local _tilt_ of the surface at far away points, which we derive in Sec. 3 and Appendix A through a careful consideration of zero modes and boundary conditions. This nonlocal term, which can also be important far from the buckling transition, softly enforces the geometric constraint of global inextensibilitiy, which simultaneously shifts the buckling threshold by a spontaneously generated thermal tension and modifies the critical exponents via a mechanical variant of Fisher renormalization. ### 2 _Size-dependent scaling theory_ The long-wavelength nature of the buckling instability endows it with both a system size dependent threshold and a macroscopic mechanical response Landau and Lifshitz (1986), features that are retained even at finite temperature. This size dependence is unusual though, from the point of view of critical phenomena, and behaves as a _dangerously irrelevant_ variable Amit and Peliti (1982); *Gunton1973RenormalizationGI; *nelson1976coexistence that modifies scaling exponents in nontrivial ways. As a result, we derive new exponent identites in Sec. 9 that relate different scaling exponents in both ensembles, mirroring classic results from conventional critical phenomena Goldenfeld (2018). Many of these relations are also summarized in Table 1. By combining scaling with general thermodynamic arguments, we also explicitly demonstrate how buckling physics in both ensembles is a mechanical variant of Fisher renormalization. Note that, the construction of a consistent scaling theory for thermalized buckling is a significant achievement as it not only clarifies previous confounding results Guitter _et al._ (1988, 1989); Košmrlj and Nelson (2016) by correctly accounting for nontrivial system size dependence and ensemble inequivalence, but it also yields a unified framework that incorporates experimentally relevant boundary conditions and symmetry breaking fields. ### 3 _Experimental consequences_ Our work illustrates the spectacular ways in which geometry, boundary effects and thermal fluctuations can conspire to produce unexpected phenomena, and suggests that extending thin body mechanics to finite temperature is a rich and challenging enterprise, requiring great care. Given the ubiquity and ease of manipulating strain rather than stress in experiments, our results have important implications for the rational design of strain engineered nanodevices and interpretation of mechanical measurements in ultrathin materials in the presence of thermal fluctuations. 
Figure 2: Experimentally measured height response curve of a clamped graphene sheet suspended over a circular hole of radius $6.2~{}\mu$m in a cryostat at two different temperatures, $T=78~{}$K (blue dots) and $T=297~{}$K (red dots). The data is reproduced from Ref. Storch _et al._ (2018). The electrically integrated graphene devices are capacitively actuated out of the plane using an electrostatic force $\mathcal{E}\propto V_{g}^{2}$ ($V_{g}$ is the gate voltage) and the average deflection $\langle h\rangle$ of the sheet is measured using laser interferometry; see the inset for a sketch of the setup and Ref. Storch _et al._ (2018) for further details. The red and blue lines are guides to the eye showing the exponent of the nonlinear force response. While the $T=297~{}$K data (red dots) shows a strong cubic dependence on the height, the $T=78~{}$K data (blue dots) shows a smaller slope that matches well with our theoretical prediction using the _isometric_ ensemble exponent $3-1/\beta^{\prime}\approx 1.607$. Note that this slope is significantly different from a slope of unity (black line), as would be predicted in both the _isotensional_ ensemble and within mean field theory.

A common setup to probe the mechanical properties of graphene involves measuring the force-displacement curve for clamped monolayers that are deformed by the application of stresses or external fields. As a simple example, we shall focus here on a circular geometry with sheets draped across holes and deflected by a normal electric field ($\mathcal{E}$) as employed in recent experiments Nicholl _et al._ (2015); Storch _et al._ (2018) (see inset in Fig. 2 for a sketch). Other geometries including ribbons cantilevered at an edge Blees _et al._ (2015) or suspended across trenches Bunch _et al._ (2007); Chen _et al._ (2009); Bao _et al._ (2009) are also possible, but we don’t address them here. A crucial ingredient in the interpretation of force response measurements in these devices is a mechanical “equation of state” that relates the externally applied field $\mathcal{E}$ and the in-plane tension $\Delta\sigma$ to the magnitude of the average deflection of the sheet $\langle h\rangle$. A zero-temperature mean-field description following classical elasticity (detailed in Sec. 5) gives $\mathcal{E}=c_{1}\dfrac{\Delta\sigma}{R^{2}}\langle h\rangle+c_{2}\dfrac{Y}{R^{4}}\langle h\rangle^{3}\;,$ (1.1) where $c_{1,2}$ are calculable numerical constants, $Y$ is the sought-after 2D Young’s modulus and $R$ is the size of the sheet. Importantly, when the sheet is constrained at fixed stress $\Delta\sigma$, i.e., the isotensional ensemble, Eq. 1.1 applies exactly even at finite temperature upon simply replacing $Y$ by its scale dependent renormalized value (see Sec. 9 for details). However, nearly all the force measurement experiments are instead conducted with fixed strain and clamped boundary conditions, i.e., in the _isometric_ ensemble. Our general scaling theory and renormalization group analysis provide a different equation $\mathcal{E}=c_{1}\dfrac{\Delta\sigma}{R^{2}}\left(\dfrac{Y}{R^{2}}\right)^{1-1/2\beta^{\prime}}\langle h\rangle^{3-1/\beta^{\prime}}+c_{2}\dfrac{Y}{R^{4}}\langle h\rangle^{3}\;,$ (1.2) where the tension $\Delta\sigma=B\Delta\epsilon$ is now given by the 2D bulk modulus $B$ and the imposed in-plane strain $\Delta\epsilon$. Strikingly, Eq. 1.2 involves a new order parameter exponent $\beta^{\prime}\approx 0.718$ that controls the asymptotic nonlinear force response. The difference between Eq.
1.1 and Eq. 1.2 makes it clear that using Eq. 1.1 for clamped sheets, as is conventionally done, can lead to significantly wrong results. In fact, recent experimental measurements Storch _et al._ (2018) on graphene drumheads match well with our theoretical predictions; see Fig. 2. The strain in the sample increases upon lowering temperature Storch _et al._ (2018); Chen _et al._ (2009), notwithstanding the theoretically expected negative thermal expansion coefficient of graphene Košmrlj and Nelson (2016); Bao _et al._ (2009), presumably due to surface contaminants. As a result, while the classical cubic response dominates at higher temperature with weak tension (red dots in Fig. 2), the anomalous nonlinear response with $\mathcal{E}\propto\langle h\rangle^{1.607}$ (blue dots in Fig. 2) emerges for higher strains at lower temperature. A systematic analysis exploring how this nonlinear response affects the extraction of the Young’s modulus is left for future work. This result highlights the direct relevance of our work to the correct interpretation of mechanical measurements in graphene devices, not only for the circular geometries studied here, but also for cantilevers and doubly clamped ribbons. We believe that recognizing the fundamental distinction between the isotensional and the isometric ensembles is essential to such endeavours. While Fig. 2 depicts a static example, a dynamical extension of Eq. 1.2 including dissipation and inertia along with a time-varying field $\mathcal{E}(t)$ provides a simple description of periodically driven electromechanical oscillators Lifshitz and Cross (2008). Although a full dynamical analysis is beyond the scope of this current work, we can already appreciate the presence of a strong nonlinear response (Eq. 1.2) in the small deflection limit, which allows for higher quality factors and bistability Lifshitz and Cross (2008) even for a weak drive. Note that such an anomalous response is only elicited in the isometric ensemble, emphasizing once again the importance of boundary conditions. Although we focus on isotropic compression, we expect suitable extensions of this framework to be applicable to recent numerical simulations Morshedifard _et al._ (2021); Hanakata _et al._ (2021) and experiments Blees _et al._ (2015); Bao _et al._ (2009); Xu _et al._ (2014) on compressed ribbons that have begun addressing the anomalous mechanics of _anisotropic_ buckling in constrained sheets. While much of our discussion has been based on atomic crystals, we emphasize that our results are generic and also apply to organic 2D polymers, such as naturally occurring DNA kinetoplasts Klotz _et al._ (2020), the spectrin cytoskeleton Schmidt _et al._ (1993) and reconstituted spider silk Hermanson _et al._ (2007), or synthetic molecular crystals Servalli _et al._ (2018); Payamyar _et al._ (2016) and possibly polymersomes Shum _et al._ (2008) with very large radii. In Sec. 10, we conclude with a brief discussion of the broader significance of our results to experiments on atomically thin crystalline membranes and possible future directions.

## 2 Thin plate elasticity

Here, we focus on the physically relevant continuum elastic description of a thin 2D sheet fluctuating in 3D space (a generalization for a $D$-dimensional solid embedded in $d$-dimensional ($d>D$) Euclidean space, useful for certain calculations, is provided in Appendix C). The deformation of a thin flat sheet is parametrized in the Monge gauge using a 2D in-plane displacement vector ${\bf u}$ and a height field $h$.
The total elastic energy of the sheet involves both stretching and bending contributions and is given by Landau and Lifshitz (1986) $\mathcal{H}=\int\mathrm{d}^{2}r\left[\dfrac{\kappa}{2}(\nabla^{2}h)^{2}+\mu u_{ij}^{2}+\dfrac{\lambda}{2}u_{kk}^{2}-\mathcal{E}h\right]-\oint_{\mathcal{C}}\mathrm{d}\ell\;\hat{\nu}_{i}\sigma^{\rm ext}_{ij}u_{j}\;.$ (2.1) The Lamé parameters are $\mu$ and $\lambda$, and $\kappa$ is the bending rigidity. The final boundary integral is the work done by an external stress $\bm{\sigma}^{\rm{ext}}$ with $\hat{\bm{\nu}}$ being the outward normal (within the plane) to the boundary curve $\mathcal{C}$. The penultimate term corresponds to the potential energy due to an external out-of-plane field $\mathcal{E}$, which couples directly to the height of the membrane. Such a perturbation can be realized by an electric field $E$, with $\mathcal{E}=\rho_{q}E$, where $\rho_{q}$ is the electric charge density on the surface, while in the presence of gravity, we have $\mathcal{E}=\rho_{m}g$, where $\rho_{m}$ is the mass density and $g$ is the gravitational acceleration. The strain tensor $u_{ij}=\dfrac{1}{2}\left(\partial_{i}u_{j}+\partial_{j}u_{i}+\partial_{i}h\partial_{j}h\right)\;,$ (2.2) encodes the geometric nonlinearity inherent to thin sheets. We neglect higher-order terms in the in-plane displacements, which are small and irrelevant on large scales for a thin sheet Landau and Lifshitz (1986). As an aside, note that for a $D$-dimensional manifold embedded in $d$-dimensional space, the displacement field ${\bf u}$ has $D$ components and the height field is no longer a scalar, but instead a vector ${\bf h}$ with codimension $d_{c}=d-D>0$ components. The nonlinear strain tensor in this case is then $u_{ij}=(\partial_{i}u_{j}+\partial_{j}u_{i}+\partial_{i}{\bf h}\cdot\partial_{j}{\bf h})/2$. The relative importance of stretching versus bending energies is captured by a dimensionless Föppl-von Kármán number, which in 2D is given by ${\rm vK}=\dfrac{YR^{2}}{\kappa}\;,$ (2.3) where $Y=4\mu(\mu+\lambda)/(2\mu+\lambda)$ is the 2D Young’s modulus and $R$ is a characteristic linear dimension of the sheet. When viewing the sheet as a thin slice of a bulk elastic material, its bending modulus and stiffness are related as $\kappa=Y_{\rm 3D}t^{3}/[12(1-\nu_{\rm 3D}^{2})]$ and $Y=Y_{\rm 3D}t$, where $Y_{\rm 3D}$ is the 3D Young’s modulus, $\nu_{\rm 3D}$ the 3D Poisson’s ratio and $t$ is the thickness of the sheet Audoly and Pomeau (2010). As a result, ${\rm vK}\sim(R/t)^{2}$ is essentially controlled by geometry with ${\rm vK}\gg 1$ for a thin sheet ($t/R\ll 1$), reflecting the dominance of geometrically nonlinear isometric deformations, i.e., bending without stretching. An ordinary sheet of paper has a ${\rm vK}\approx 10^{6}$ while a $1~{}\mu$m size graphene monolayer has a microscopic ${\rm vK}\approx 10^{9}$ (using the atomic scale values for $\kappa\approx 1.1~{}$eV Katsnelson (2012) and $Y\approx 340~{}$N/m Lee _et al._ (2008)).

## 3 Mechanical ensembles

The properties of a thermalized elastic membrane at temperature $T$ are computed through the equilibrium partition function, $\mathcal{Z}=\int\mathcal{D}h\mathcal{D}{\bf u}\;e^{-\mathcal{H}/k_{B}T}\;,$ (3.1) where $k_{B}$ is the Boltzmann constant. As the in-plane phonons (${\bf u}$) only appear quadratically in $\mathcal{H}$ they can be integrated out exactly to give an effective free energy $\mathcal{F}=-k_{B}T\ln\int\mathcal{D}{\bf u}\;e^{-\mathcal{H}/k_{B}T}$.
To do this, we separate out the average strain and Fourier transform the nonzero wavelength deformations. While the calculation for the wavevector ${\bf q}\neq{\bf 0}$ modes is standard Nelson _et al._ (2004), the homogeneous ${\bf q}={\bf 0}$ strain mode needs to be handled with care in the presence of various boundary conditions. We shall focus on isotropic loading at the boundary and neglect external shear or compressional loading of ribbons for simplicity. This leaves us with two possibilities: (i) the fixed stress or _isotensional_ ensemble, and (ii) the fixed strain or _isometric_ ensemble. Note that the latter could be realized with clamped circular boundary conditions. Stress and strain (equivalently, force and displacement) are thermodynamically conjugate variables and the elliptic nature of elasticity prohibits specifying both at a boundary simultaneously. Hence, we have two natural mechanical ensembles akin to the isobaric $(N,P,T)$ and isochoric $(N,V,T)$ ensembles of statistical mechanics, respectively, that are dual to each other. In Fig. 1, we sketch a possible realization of the two mechanical ensembles in an atomically or molecularly thin suspended sheet. In the isotensional ensemble, the sheet is driven by an external isotropic stress $\sigma^{\rm ext}_{ij}=\sigma_{0}\delta_{ij}\;,$ (3.2) with no further constraints on the zero modes of the displacement or strain. As a result, the boundary can freely displace in the plane under the action of $\sigma_{0}\neq 0$. Note that $\sigma_{0}>0$ corresponds to a tensile stress while $\sigma_{0}<0$ is a compressive stress, a situation studied in Ref. Košmrlj and Nelson (2016). In the isometric ensemble, on the other hand, we clamp the boundary with a fixed displacement and allow the stress to fluctuate freely instead. If the constant displacement on the boundary is ${\bf u}_{\mathcal{C}}=\Delta_{\mathcal{C}}\hat{\bm{\nu}}$, we have $\oint_{\mathcal{C}}\mathrm{d}\ell\;\hat{\bm{\nu}}\cdot{\bf u}=L_{\mathcal{C}}\Delta_{\mathcal{C}}\;,$ (3.3) where $L_{\mathcal{C}}$ is the length of the boundary. By using Stokes’ theorem, we can rewrite this as a bulk integral, $\dfrac{1}{A}\int\mathrm{d}^{2}r\;\bm{\nabla}\cdot{\bf u}=\epsilon\;.$ (3.4) Here we have defined the average strain induced by the boundary as $\epsilon=L_{\mathcal{C}}\Delta_{\mathcal{C}}/A$, where $A=\int\mathrm{d}^{2}r$ is the area of the sheet. To leading order this strain results in an isotropic change in the projected area as $\delta A/A=\epsilon/2$. As before, $\epsilon>0$ ($\Delta_{\mathcal{C}}>0$) corresponds to a uniform dilational strain, while $\epsilon<0$ ($\Delta_{\mathcal{C}}<0$) is an isotropic compressive strain. We can now integrate out the phonons in either ensemble to get the free energy solely in terms of the flexural modes. In the isotensional ensemble, we obtain $\displaystyle\mathcal{F}_{\sigma}$ $\displaystyle=\int\mathrm{d}^{2}r\left[\vphantom{\left(\dfrac{1}{2}\right)^{2}}\dfrac{\kappa}{2}(\nabla^{2}h)^{2}+\dfrac{\sigma_{0}}{2}|\bm{\nabla}h|^{2}-\mathcal{E}h\right.$ $\displaystyle\qquad\qquad\left.+\dfrac{Y}{2}\left(\dfrac{1}{2}\mathcal{P}_{ij}^{T}\partial_{i}h\partial_{j}h\right)^{2}\right]\;.$ (3.5) The subscript $\sigma$ denotes the fixed stress boundary condition imposed and $\mathcal{F}_{\sigma}$ is analogous to the Gibbs free energy. The externally applied stress $\sigma_{0}$ enters in a quadratic term that has been identified previously Guitter _et al._ (1989); Košmrlj and Nelson (2016).
A tensile stress ($\sigma_{0}>0$) suppresses height fluctuations Roldán _et al._ (2011), while a compressive stress ($\sigma_{0}<0$) signals the onset of the buckling instability. The Young’s modulus controls the now well-known nonlinear stretching term Nelson and Peliti (1987); Bowick and Travesset (2001); Nelson _et al._ (2004) via the transverse projection operator ($\mathcal{P}_{ij}^{T}=\delta_{ij}-\partial_{i}\partial_{j}/\nabla^{2}$). In the nonlinear stretching term (in Eq. 3.5 and below in Eq. 3.7), it is understood that the ${\bf q}={\bf 0}$ Fourier mode has been projected out Nelson and Peliti (1987); Bowick and Travesset (2001); Nelson _et al._ (2004), which ensures that $\mathcal{P}^{T}_{ij}$ is well-defined. This fact also means that the free energy is rotationally invariant when $\sigma_{0}=0$, i.e., $h({\bf r})\to h({\bf r})+\bm{\alpha}\cdot{\bf r}$, where $\alpha_{i}$ are rotation angles, is a symmetry of the system. The average areal strain ($\epsilon=2\delta A/A$) conjugate to the imposed stress in this ensemble can be computed from the partition function $\mathcal{Z}_{\sigma}=\int\mathcal{D}h\;e^{-\mathcal{F}_{\sigma}/k_{B}T}$ to give $\langle\epsilon(\sigma_{0})\rangle=\dfrac{k_{B}T}{A}\dfrac{\partial\ln\mathcal{Z}_{\sigma}}{\partial\sigma_{0}}=\dfrac{\sigma_{0}}{B}-\dfrac{1}{2A}\int\mathrm{d}^{2}r\langle|\bm{\nabla}h|^{2}\rangle\;.$ (3.6) The thermal average is computed using $\mathcal{F}_{\sigma}$ and $B=\mu+\lambda$ is the bulk modulus. In the isometric ensemble, we obtain (see Appendix A) a different result, $\displaystyle\mathcal{F}_{\epsilon}$ $\displaystyle=\int\mathrm{d}^{2}r\left\\{\dfrac{\kappa}{2}(\nabla^{2}h)^{2}-\mathcal{E}h+\dfrac{Y}{2}\left(\dfrac{1}{2}\mathcal{P}_{ij}^{T}\partial_{i}h\partial_{j}h\right)^{2}\right.$ $\displaystyle\qquad\qquad\left.+\dfrac{B}{2}\left[\epsilon+\dfrac{1}{2A}\int\mathrm{d}^{2}r|\bm{\nabla}h|^{2}\right]^{2}\right\\}\;,$ (3.7) where the $\epsilon$ subscript now refers to the fixed strain conditions and $\mathcal{F}_{\epsilon}$ is analogous to the Helmholtz free energy. As before, we include contributions from bending, the external field and the nonlinear stretching terms. While the Young’s modulus penalizes bending-induced shear, in the presence of clamped boundaries, dilational stretching induced by flexural deflections can no longer be accommodated by displacing the boundary. Hence, global homogeneous dilations are a zero mode in the isotensional ensemble, but _not_ in the isometric ensemble, and are penalized by the bulk modulus in the latter. Upon expanding the final bracket and neglecting an irrelevant constant, we obtain a quadratic term $\sim B\epsilon|\bm{\nabla}h|^{2}$ that has been obtained previously Roldán _et al._ (2011), which mirrors the external tension term in Eq. 3.5. Importantly, we also have an additional _nonlinear_ term of the form $\dfrac{B}{8A}\int\mathrm{d}^{2}r\int\mathrm{d}^{2}r^{\prime}|\bm{\nabla}h|^{2}|\bm{\nabla}^{\prime}h|^{2}\;,$ (3.8) that is independent of the strain imposed, but nonetheless arises only in the isometric ensemble. This highly nonlocal term couples the local tilts $\sim\bm{\nabla}h$ of the membrane at arbitrarily distant points and has been missed in previous studies Guitter _et al._ (1988, 1989); Roldán _et al._ (2011); Košmrlj and Nelson (2016).
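For completeness, the algebra behind Eq. 3.8 can be displayed in one line (this is simply an expansion of the last term of Eq. 3.7, not an additional assumption): since the bracketed combination is independent of position, the outer area integral merely supplies a factor of $A$, so that

$\int\mathrm{d}^{2}r\;\dfrac{B}{2}\left[\epsilon+\dfrac{1}{2A}\int\mathrm{d}^{2}r^{\prime}|\bm{\nabla}^{\prime}h|^{2}\right]^{2}=\dfrac{AB}{2}\epsilon^{2}+\dfrac{B\epsilon}{2}\int\mathrm{d}^{2}r\,|\bm{\nabla}h|^{2}+\dfrac{B}{8A}\int\mathrm{d}^{2}r\int\mathrm{d}^{2}r^{\prime}|\bm{\nabla}h|^{2}|\bm{\nabla}^{\prime}h|^{2}\;,$

where the first term is the irrelevant constant, the second is the $\sim B\epsilon|\bm{\nabla}h|^{2}$ tension-like term that mirrors Eq. 3.5, and the third is precisely the nonlocal coupling of Eq. 3.8.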
Anisotropic versions of this term do appear in the description of micromechanical resonators as nonlinear beams Lifshitz and Cross (2008) and have been included in a recent mean field analysis of a uniaxially compressed ribbon Hanakata _et al._ (2021). The consequences of this term in the presence of thermal fluctuations are a major focus of this paper. We note some further unusual features of the new nonlocal term in Eq. 3.8. Although it involves a double spatial integral, the whole term is extensive (due to the factor of $1/A$), but it is importantly _not_ additive. As a result, the membrane cannot be divided into a cumulative sum of macroscopic parts which are roughly independent of each other in the thermodynamic limit. Similar long-ranged interactions appear in models of compressible Sak (1974); Bergman and Halperin (1976); de Moura _et al._ (1976) or constrained Rudnick _et al._ (1974) ferromagnets and can affect critical behaviour in some cases, though without reference to ensemble inequivalence. In self-gravitating systems Padmanabhan (1990) and mean-field models of magnets Barré _et al._ (2001), long-ranged interactions are known to spoil the equivalence of canonical and microcanonical ensembles, though typically in the context of first-order phase transitions. Although the buckling instability under compression can proceed as a continuous bifurcation, the highly nonlocal interaction in Eq. 3.8 strongly suggests that the isotensional and isometric ensembles may not be equivalent even in the thermodynamic limit ($A\to\infty$). Before we proceed, we note the isometric ensemble variant of Eq. 3.6. The average stress generated in the sheet due to the imposed strain is simply given by $\langle\sigma(\epsilon)\rangle=-\dfrac{k_{B}T}{A}\dfrac{\partial\ln\mathcal{Z}_{\epsilon}}{\partial\epsilon}=B\left(\epsilon+\dfrac{1}{2A}\int\mathrm{d}^{2}r\left\langle|\bm{\nabla}h|^{2}\right\rangle\right)\;,$ (3.9) with the partition function $\mathcal{Z}_{\epsilon}=\int\mathcal{D}h\;e^{-\mathcal{F}_{\epsilon}/k_{B}T}$ and the thermal average now performed using $\mathcal{F}_{\epsilon}$. Note that the partition functions in the two ensembles are related, $\mathcal{Z}_{\sigma}=\mathrm{const}.\;\int_{-\infty}^{\infty}\mathrm{d}\epsilon\;\mathcal{Z}_{\epsilon}\;e^{A\sigma_{0}\epsilon/k_{B}T}\;.$ (3.10) In order to incorporate both ensembles within the same calculation, we now work with the following free energy $\displaystyle\mathcal{F}$ $\displaystyle=\int\mathrm{d}^{2}r\left[\vphantom{\left(\dfrac{1}{2}\right)^{2}}\dfrac{\kappa}{2}(\nabla^{2}h)^{2}+\dfrac{\sigma}{2}|\bm{\nabla}h|^{2}-\mathcal{E}h\right.$ $\displaystyle\quad\left.+\dfrac{Y}{2}\left(\dfrac{1}{2}\mathcal{P}_{ij}^{T}\partial_{i}h\partial_{j}h\right)^{2}+\dfrac{v}{8A}|\bm{\nabla}h|^{2}\int\mathrm{d}^{2}r^{\prime}|\bm{\nabla}^{\prime}h|^{2}\right]\;.$ (3.11) Up to unimportant additive constants, $\mathcal{F}=\mathcal{F}_{\epsilon}$ (Eq. 3.7) upon identifying $v=B$ and $\sigma=B\epsilon$, where $B=\mu+\lambda$ is the bulk modulus. Alternatively, if we set $v=0$ to switch off the nonlocal nonlinear term and set $\sigma=\sigma_{0}$, then we find $\mathcal{F}=\mathcal{F}_{\sigma}$ (Eq. 3.5). It is important to note that setting $v=0$ is only a mathematical trick to obtain $\mathcal{F}_{\sigma}$ from Eq. 3.11. It does _not_ imply that the actual bulk modulus vanishes. As we shall see later, the nonlocal nature of this new term guarantees that it cannot be generated if initially absent, i.e., if we set $v=0$ in Eq.
3.11, then it remains zero under the coarse-graining embodied in a renormalization group transformation. Additionally we will see that the actual physical elastic moduli ($\mu,\lambda$) renormalize identically in both ensembles, but the buckling transition is described by two distinct fixed points (with distinct critical exponents for some quantities!) depending on the ensemble. So, in all that follows, we will use $v$ as a coupling constant that distinguishes the two ensembles with $v=0$ being allowed in the isotensional ensemble and $v>0$ only being allowed in the isometric ensemble. Only in the latter case will $v$ be identical to the actual bulk modulus ($B$) of the sheet. ## 4 Definition of scaling exponents near buckling Before we analyze buckling criticality, we define our notation. Close to the buckling transition, we expect power law scaling in a number of quantities, the exponents for which we define below. The unprimed exponents below will refer to the isotensional (constant stress) ensemble and the primed exponents to the isometric (constant strain) ensemble. We will denote the buckling threshold as $\sigma_{c}$ and $\epsilon_{c}$ in the isotensional and isometric ensembles respectively. ### 4.1 Mechanical response Upon approaching the buckling transition, the sheet develops a variety of anomalous mechanical responses. In the absence of an external field ($\mathcal{E}=0$), up/down symmetry is spontaneously broken and the sheet develops a finite $\langle h\rangle\neq 0$ when buckled. The average height rises continuously at the transition, acting as an order parameter, $\langle h\rangle\propto\begin{cases}|\sigma_{0}-\sigma_{c}|^{\beta}\quad(\textrm{isotensional})\;,\\\ |\epsilon-\epsilon_{c}|^{\beta^{\prime}}\quad(\textrm{isometric})\;.\end{cases}$ (4.1) The zero field susceptibility exhibits divergent scaling near buckling, $\chi=\left.\dfrac{\partial\langle h\rangle}{\partial\mathcal{E}}\right|_{\mathcal{E}=0}\propto\begin{cases}|\sigma_{0}-\sigma_{c}|^{-\gamma}\quad(\textrm{isotensional})\;,\\\ |\epsilon-\epsilon_{c}|^{-\gamma^{\prime}}\quad(\textrm{isometric})\;.\end{cases}$ (4.2) We expect the exponents $\gamma,\gamma^{\prime}$ to be the same on either side of the transition Goldenfeld (2018), though the amplitudes of the scaling function can and will be different. The divergence of the susceptibility signals the breakdown of linear response, which is also seen in the nonlinear field dependence right at the buckling transition $\langle h\rangle\propto\begin{cases}\mathcal{E}^{1/\delta}\quad(\sigma_{0}=\sigma_{c}\;,\ \textrm{isotensional})\;,\\\ \mathcal{E}^{1/\delta^{\prime}}\quad(\epsilon=\epsilon_{c}\;,\,\textrm{isometric})\;.\end{cases}$ (4.3) Finally, in conjunction with these out-of-plane responses, we also have a concomitant nonlinear scaling in the in-plane mechanics, quantified by anomalous stress-strain curves, $\displaystyle\langle\epsilon\rangle$ $\displaystyle\propto\rm{const.}+(\sigma_{0}-\sigma_{c})^{1/\theta}\quad(\textrm{isotensional})\;,$ (4.4a) $\displaystyle\langle\sigma\rangle$ $\displaystyle\propto\rm{const.}+(\epsilon-\epsilon_{c})^{\theta^{\prime}}\quad(\textrm{isometric})\;,$ (4.4b) at zero external field ($\mathcal{E}=0$). Note that the above only includes the dominant singularity and neglects other contributions. The new exponents $\theta,\theta^{\prime}\neq 1$ signal a violation of Hooke’s law. 
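As a practical aside (not part of the formal development that follows), exponents such as $\beta^{\prime}$ or $\theta^{\prime}$ would typically be extracted from simulation or experimental data by a log-log fit close to threshold. A minimal sketch in Python, using synthetic data in place of hypothetical measurements of $\langle h\rangle$ versus strain (the array names and the generated values are purely illustrative):

```python
import numpy as np

def power_law_exponent(x, y):
    """Least-squares estimate of p in y ~ A * x**p from a log-log fit."""
    p, log_amp = np.polyfit(np.log(x), np.log(y), 1)
    return p

# Synthetic stand-in for measured <h> vs strain data, generated here with
# beta' = 0.718 purely as a self-test of the fitting routine.
eps_c = -1.0e-3
strain = eps_c - np.logspace(-6, -4, 20)          # compression beyond the threshold
h_avg = 2.0 * np.abs(strain - eps_c)**0.718
print("estimated beta' =", power_law_exponent(np.abs(strain - eps_c), h_avg))  # ~0.718
```

The same routine applied to stress-strain data near threshold would estimate $1/\theta$ or $\theta^{\prime}$ in the respective ensembles.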
### 4.2 Fluctuations and spatial scales

Apart from the global quantities discussed above, local variables also develop extended correlations near the buckling transition. In the absence of an external field ($\mathcal{E}=0$), the fluctuating height of the sheet has spatial correlations with nontrivial scaling properties. A nonzero external stress or strain generically causes the height fluctuations to decay exponentially over a finite correlation length $\xi$, albeit with different large distance asymptotes depending on whether the sheet is buckled or flat. This behaviour is also reflected in the normal-normal correlation function. The unit normal to a surface specified by ${\bf X}({\bf r})=(x,y,h({\bf r}))$ in the Monge representation is $\hat{{\bf n}}=(-\partial_{x}h,-\partial_{y}h,1)/\sqrt{1+|\bm{\nabla}h|^{2}}$, which allows us to simply relate the normal-normal and height-height correlation functions as $\langle\hat{{\bf n}}({\bf r})\cdot\hat{{\bf n}}({\bf 0})\rangle\simeq 1-\dfrac{1}{2}\left\langle\left|\bm{\nabla}h({\bf r})-\bm{\nabla}h({\bf 0})\right|^{2}\right\rangle\;,$ (4.5) at lowest order in the height gradients. On either side of the buckling transition, we have $\langle\hat{{\bf n}}({\bf r})\cdot\hat{{\bf n}}({\bf 0})\rangle\sim e^{-r/\xi}$, neglecting asymptotic constants and nonexponential prefactors. The correlation length diverges at buckling as $\xi\propto\begin{cases}|\sigma_{0}-\sigma_{c}|^{-\nu}\quad(\textrm{isotensional})\;,\\\ |\epsilon-\epsilon_{c}|^{-\nu^{\prime}}\quad(\textrm{isometric})\;.\end{cases}$ (4.6) Right at the buckling transition, the normal correlations decay as a power law and the sheet has critical fluctuations on all scales that cause the correlation functions to behave anomalously. We define the translationally invariant height and phonon correlators well away from the boundaries as $\displaystyle G_{h}({\bf r})$ $\displaystyle=\langle h({\bf r})h({\bf 0})\rangle\;,$ (4.7) $\displaystyle\left[{\bf G}_{u}({\bf r})\right]_{ij}$ $\displaystyle=\langle u_{i}({\bf r})u_{j}({\bf 0})\rangle\;.$ (4.8) Upon tuning to the transition, the Fourier transformed correlators [$G({\bf q})=\int\mathrm{d}{\bf r}\;e^{-i{\bf q}\cdot{\bf r}}G({\bf r})$] exhibit a power law scaling as $q=|{\bf q}|\to 0$. These averages define the well-known anomalous exponents $\eta$ and $\eta_{u}$ Nelson and Peliti (1987); Le Doussal and Radzihovsky (1992, 2018); David and Guitter (1988); Aronovitz and Lubensky (1988); Aronovitz _et al._ (1989); Bowick _et al._ (1996) through $G_{h}({\bf q})\propto\begin{cases}q^{-(4-\eta)}\quad(\sigma_{0}=\sigma_{c}\;,\,\textrm{isotensional})\;,\\\ q^{-(4-\eta^{\prime})}\quad(\epsilon=\epsilon_{c}\;,\,\textrm{isometric})\;,\end{cases}$ (4.9) and for the in-plane phonons (irrespective of the tensor indices), ${\bf G}_{u}({\bf q})\propto\begin{cases}q^{-(2+\eta_{u})}\quad(\sigma_{0}=\sigma_{c}\;,\,\textrm{isotensional})\;,\\\ q^{-(2+\eta^{\prime}_{u})}\quad(\epsilon=\epsilon_{c}\;,\,\textrm{isometric})\;.\end{cases}$ (4.10) These results describe a divergent renormalization of the wave-vector dependent bending rigidity $\kappa({\bf q})\sim q^{-\eta}$ and softening of the elastic moduli $\mu({\bf q}),\lambda({\bf q})\sim q^{\eta_{u}}$ (with analogous expressions with exponents $\eta^{\prime}$ and $\eta^{\prime}_{u}$ in the isometric ensemble).

## 5 Mean field theory

Here we neglect thermal fluctuations and analyze the buckling transition in the mean-field limit, as appropriate at $T=0$. By minimizing the free energy $\mathcal{F}$ (Eq.
3.11) over the surface profile $h({\bf r})$, we obtain the Euler-Lagrange equation, $\displaystyle\kappa\nabla^{4}h-\sigma\nabla^{2}h-Y\left[\mathcal{P}_{k\ell}^{T}\left(\dfrac{1}{2}\mathcal{P}_{ij}^{T}\partial_{i}h\partial_{j}h\right)\right]\partial_{k}\partial_{\ell}h$ $\displaystyle\quad-\dfrac{v}{2A}\nabla^{2}h\int\mathrm{d}^{2}r^{\prime}|\bm{\nabla}^{\prime}h|^{2}=\mathcal{E}\;,$ (5.1) where, as before, $v=0$ and $v=B>0$ allow us to distinguish the two ensembles. As both the elastic sheet and the external load are isotropic, axisymmetry is assumed in the following. We choose the eigenfunction of the linearized operator in Eq. 5.1 as an ansatz for the buckled height profile in a circular geometry, $h_{0}({\bf r})=H_{0}J_{0}(q_{n}r)\;.$ (5.2) Here, $r=|{\bf r}|$ is the distance from the center of the disc and $J_{0}(x)$ is the Bessel function of the first kind. The mode of buckling is controlled by $q_{n}$ which is fixed by boundary conditions on the height. For simplicity, we shall assume $h_{0}(R)=0\implies J_{0}(q_{n}R)=0\;,$ (5.3) where $R$ is the radius of the sheet. Other boundary conditions can also be easily used with only minor quantitative changes in the results. A simple Galerkin approximation Galerkin (1915) involves projecting Eq. 5.1 onto the single mode ansatz, which then gives (the details of the calculation are provided in Appendix B) $q_{n}^{2}(\kappa q_{n}^{2}+\sigma)H_{0}+q_{n}^{4}\left[\dfrac{v}{2}f(q_{n}R)+c_{0}Y\right]H_{0}^{3}=\mathcal{E}q_{n}R\;,$ (5.4) where $c_{0}\approx 0.10567$ is a constant and $f(x)$ is a dimensionless function given in Appendix B, with the asymptotics $f(x)\sim 2/(\pi x)$ for $x\to\infty$. Similar to the simpler problem of the buckling of a ribbon at $T=0$ Hanakata _et al._ (2021), Eq. 5.4 resembles a Landau theory with $H_{0}$ as the order parameter. It is the mechanical equivalent of a mean field “equation of state”. Note, however, that the underlying Landau theory has coefficients that depend on system size. In the absence of an external field ($\mathcal{E}=0$), buckling occurs for a sufficiently negative $\sigma$ and spontaneously breaks up-down inversion symmetry. At the buckling threshold, the lowest ($n=0$) mode (shown in Fig. 3) goes unstable first. The buckling amplitude in either ensemble is given by $|H_{0}|=\begin{cases}\left(\dfrac{\sigma_{c}-\sigma_{0}}{c_{0}Yq_{0}^{2}}\right)^{1/2}\quad(\sigma_{0}<\sigma_{c}\;,\,\textrm{isotensional})\;,\\\ \left(\dfrac{\epsilon_{c}-\epsilon}{c^{\prime}_{0}q_{0}^{2}}\right)^{1/2}\quad(\epsilon<\epsilon_{c}\;,\,\textrm{isometric})\;,\end{cases}$ (5.5) where $c^{\prime}_{0}=f(m_{0})+2c_{0}(Y/B)$ is weakly dependent on the Poisson’s ratio through $Y/B$.

Figure 3: Sketch of the first buckling mode with boundary conditions such that $h_{0}(r=R)=0$, for a circular plate of radius $R$. The amplitude of the mode at the center of the circular frame is $H_{0}$ and its wavevector is $q_{0}\sim 1/R$. We assume hinged boundary conditions at $r=R$ for simplicity. Qualitatively similar buckling modes appear for alternative boundary conditions, such as for membranes that approach $r=R$ tangentially.

As expected, we obtain a standard square root scaling typical of pitchfork bifurcations ($\beta=\beta^{\prime}=1/2$). For $\sigma_{0}>\sigma_{c}$ or $\epsilon>\epsilon_{c}$, we have a stable flat state with $H_{0}=0$. The critical stress for buckling is $\sigma_{c}=-\kappa q_{0}^{2}$ and the critical strain is $\epsilon_{c}=-\kappa q_{0}^{2}/B$ in the respective ensembles.
The wavevector that first goes unstable with decreasing $\sigma_{0}<0$ or $\epsilon<0$ is $q_{0}=m_{0}/R$, where $m_{0}\approx 2.405$ is the smallest positive root of $J_{0}(m_{0})=0$ as required by the boundary condition (Eq. 5.3). At threshold, $f(m_{0})=J_{1}(m_{0})^{2}\approx 0.27$ is finite. A couple of points are worth remarking on here. The buckling thresholds in both ensembles involve compression ($\sigma_{c},\epsilon_{c}<0$) and are $\propto 1/R^{2}$, vanishing as the area of the sheet becomes larger. Thus, in the thermodynamic limit ($R\to\infty$), classical buckling is a thresholdless long-wavelength ($q_{0}\sim 1/R\to 0$) instability. At the same time, the buckling amplitude remains macroscopic ($|H_{0}|\propto R$). Note also that, for a circular geometry, the buckled state acquires nonzero Gaussian curvature due to the isotropic nature of the loading. This is the energetically preferred state: a uniaxially buckled sheet that remains developable has a higher energy for these circular loading conditions. As a result, even in the isotensional ensemble, we pay stretching energy $\sim Y$ upon buckling (the penalty associated with Gaussian curvature in Eq. 3.11), while in the isometric ensemble, both the bulk ($B$) and Young’s ($Y$) moduli contribute. Right at the transition, in the presence of an external field ($\mathcal{E}\neq 0$), we have a nonlinear response of the height given by $H_{0}=\begin{cases}\left(\dfrac{m_{0}\mathcal{E}}{c_{0}Yq_{0}^{4}}\right)^{1/3}\quad(\sigma_{0}=\sigma_{c}\;,\,\textrm{isotensional})\;,\\\ \left(\dfrac{2m_{0}\mathcal{E}}{c^{\prime}_{0}Bq_{0}^{4}}\right)^{1/3}\quad(\epsilon=\epsilon_{c}\;,\,\textrm{isometric})\;.\end{cases}$ (5.6) As is typical of mean field models, we obtain $\delta=\delta^{\prime}=3$. Note that, because $q_{0}\sim 1/R$, this response is strongly size dependent, diverging as $R^{4/3}$ in a large sheet. We also have the zero-field susceptibility $\chi=\left.\dfrac{\partial H_{0}}{\partial\mathcal{E}}\right|_{\mathcal{E}=0}=\begin{cases}c_{\pm}\dfrac{m_{0}}{q_{0}^{2}|\sigma_{0}-\sigma_{c}|}\quad(\textrm{isotensional})\;,\\\ c_{\pm}\dfrac{m_{0}}{q_{0}^{2}B|\epsilon-\epsilon_{c}|}\quad(\textrm{isometric})\;,\end{cases}$ (5.7) which diverges right at the transition with $\gamma=\gamma^{\prime}=1$. The magnitude of this divergence is different on either side of the transition, with $c_{+}=1$ above the transition and $c_{-}=1/2$ below the transition, irrespective of the ensemble. Once again, we find a strong size dependence, with $\chi\propto R^{2}$ diverging as the area. Finally, a simple calculation using Eqs. 3.6 and 3.9 also determines the stress-strain relation to be $\sigma_{0}\propto\epsilon$, which sets the exponents defined by Eq. 4.4 to be $\theta=\theta^{\prime}=1$ in both ensembles. Unlike the previous expressions, the stress-strain relation is system size independent. Although the above analysis was restricted to 2D membranes deforming in 3D space, all of the mean-field results are qualitatively similar in arbitrary dimensions. It is evident that the two ensembles have equivalent scaling behaviour in the mean-field limit. In addition, some of the scaling functions have a nontrivial size dependence, a feature peculiar to the buckling transition. This size dependence is unusual from the point of view of conventional critical phenomena Goldenfeld (2018), and as we shall see in Sec. 9, it has important consequences for the critical exponents and their scaling relations.
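To make these mean-field statements concrete, the short numerical sketch below (illustrative only; the root-finding approach and the specific parameter values are our own choices, with the graphene numbers $\kappa\approx 1.1$ eV, $Y\approx 340$ N/m taken from Sec. 2 and $R=6.2~\mu$m from Fig. 2) evaluates the classical threshold and checks $\beta=1/2$ and $\delta=3$ by solving the Galerkin equation of state, Eq. 5.4, in the isotensional ensemble ($v=0$):

```python
import numpy as np

# --- graphene-like numbers quoted in Sec. 2 and Fig. 2 (SI units) ---
eV = 1.602e-19
kappa, Y, R = 1.1 * eV, 340.0, 6.2e-6       # bending rigidity [J], 2D Young's modulus [N/m], radius [m]
m0, c0 = 2.405, 0.10567                     # first zero of J_0 and the Galerkin constant of Eq. 5.4
q0 = m0 / R
print("vK      =", Y * R**2 / kappa)        # Foppl-von Karman number, Eq. 2.3
print("sigma_c =", -kappa * q0**2, "N/m")   # classical threshold, sigma_c = -kappa q0^2

# --- exponent checks for Eq. 5.4 with v = 0 (isotensional), arbitrary units ---
kappa, Y, R = 1.0, 1.0e4, 100.0
q0 = m0 / R
sigma_c = -kappa * q0**2

def H0(sigma, E):
    """Largest real root of q0^2 (kappa q0^2 + sigma) H + c0 Y q0^4 H^3 = E q0 R."""
    roots = np.roots([c0 * Y * q0**4, 0.0, q0**2 * (kappa * q0**2 + sigma), -E * q0 * R])
    real = roots[np.abs(roots.imag) < 1e-8 * (1.0 + np.abs(roots.real))].real
    return real.max()

# beta = 1/2: H0 ~ (sigma_c - sigma)^(1/2) just below threshold, at E = 0
d = np.logspace(-6, -4, 10) * abs(sigma_c)
h = [H0(sigma_c - x, 0.0) for x in d]
print("beta    ~", np.polyfit(np.log(d), np.log(h), 1)[0])     # ~ 0.5

# delta = 3: H0 ~ E^(1/3) right at sigma = sigma_c
E = np.logspace(-12, -9, 10)
h = [H0(sigma_c, e) for e in E]
print("1/delta ~", np.polyfit(np.log(E), np.log(h), 1)[0])     # ~ 1/3
```

For the $6.2~\mu$m drum the first two prints give a Föppl-von Kármán number of order $10^{10}$-$10^{11}$ and a threshold stress of order $10^{-8}$ N/m, illustrating numerically how thresholdless and long-wavelength the classical instability is for macroscopic sheets.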
Below we go beyond the mean field limit by including thermal fluctuations and show that there are important changes.

## 6 Gaussian analysis

Here we shall primarily consider the simpler case of a flat unbuckled membrane with $H_{0}=0$ and a vanishing symmetry-breaking external field ($\mathcal{E}=0$). We can rewrite Eq. 3.11 as $\mathcal{F}=\mathcal{F}_{0}+\mathcal{F}_{\rm int}$ where $\displaystyle\mathcal{F}_{0}$ $\displaystyle=\dfrac{1}{2}\int\mathrm{d}^{2}r\left[\kappa(\nabla^{2}h)^{2}+\sigma|\bm{\nabla}h|^{2}\right]\;,$ (6.1) $\displaystyle\mathcal{F}_{\rm int}$ $\displaystyle=\dfrac{Y}{2}\int\mathrm{d}^{2}r\left(\dfrac{1}{2}\mathcal{P}_{ij}^{T}\partial_{i}h\partial_{j}h\right)^{2}$ $\displaystyle\quad+\dfrac{v}{8A}\int\mathrm{d}^{2}r\int\mathrm{d}^{2}r^{\prime}|\bm{\nabla}h|^{2}|\bm{\nabla}^{\prime}h|^{2}\;.$ (6.2) For small fluctuations, one might hope to neglect the nonlinear terms in $\mathcal{F}_{\rm int}$. Upon Fourier transforming ($h_{{\bf q}}\equiv\int\mathrm{d}{\bf r}\;e^{-i{\bf q}\cdot{\bf r}}h({\bf r})$), we obtain the bare height-height correlation function $G^{0}_{h}({\bf q})=\dfrac{1}{A}\langle|h_{{\bf q}}|^{2}\rangle_{0}=\dfrac{k_{B}T}{\kappa q^{4}+\sigma q^{2}}\;,$ (6.3) where $q=|{\bf q}|$ and the zero subscript denotes that the average is performed in the noninteracting limit. We can similarly neglect all the nonlinear interactions in $\mathcal{H}$ (Eq. 2.1) to find the bare in-plane phonon correlation function, $\displaystyle\left[{\bf G}^{0}_{u}({\bf q})\right]_{ij}$ $\displaystyle=\dfrac{1}{A}\langle u_{i}({\bf q})u_{j}(-{\bf q})\rangle_{0}$ $\displaystyle=\dfrac{k_{B}T}{\mu q^{2}}\mathcal{P}_{ij}^{T}({\bf q})+\dfrac{k_{B}T}{(2\mu+\lambda)q^{2}}\mathcal{P}_{ij}^{L}({\bf q})\;,$ (6.4) involving the longitudinal ($\mathcal{P}_{ij}^{L}({\bf q})=q_{i}q_{j}/q^{2}$) and the transverse ($\mathcal{P}_{ij}^{T}({\bf q})=\delta_{ij}-q_{i}q_{j}/q^{2}$) projection operators. As is evident, at the Gaussian level, we have $\eta=\eta^{\prime}=0$ and $\eta_{u}=\eta^{\prime}_{u}=0$ in both ensembles. From Eq. 6.3, we can easily show that the correlation length in the Gaussian limit is $\xi=\sqrt{\dfrac{\kappa}{|\sigma|}}\propto|\sigma|^{-1/2}\;,$ (6.5) which corresponds to $\nu=\nu^{\prime}=1/2$ in both ensembles. Note that, here we work in the large sheet limit, which allows us to freely Fourier transform and set $\sigma_{c},\epsilon_{c}\sim 0$. We can now determine the importance of the nonlinear terms in Eq. 6.2 by making a scale transformation. To do this, we rescale ${\bf r}\to b{\bf r}$, $h\to b^{\zeta}h$ to get the following scaling dimensions for the bending rigidity, tension and nonlinear couplings, $y_{\kappa}=2\zeta-2\;,\quad y_{\sigma}=2\zeta\;,\quad y_{Y}=y_{v}=4\zeta-2\;,$ (6.6) where we have used the fact that the area $A\to b^{2}A$ under scaling. Here, $D=2$ and $d=3$, but these scalings depend more generally on dimensionality; their generalization for general $D$-dimensional manifolds embedded in $d$ dimensions is given in Appendix C. In the Gaussian limit, we have $\zeta=1$, as $h$ is simply the height with naïve dimensions of length. This leaves the bending term scale-invariant ($y_{\kappa}=0$), but the external load ($\sigma$), Young’s modulus ($Y$) and nonlocal coupling ($v$) are all equally relevant perturbations for 2D membranes embedded in 3D: $y_{\sigma}=y_{Y}=y_{v}=2>0$. Hence, even at low temperatures when fluctuations may be small, we expect that the nonlinear interactions eventually dominate in a large enough sheet.
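As a quick consistency check of Eq. 6.6 (pure power counting, no new input): under ${\bf r}\to b{\bf r}$ and $h\to b^{\zeta}h$, each gradient carries a factor $b^{-1}$ and the area element a factor $b^{2}$, so that

$\int\mathrm{d}^{2}r\,(\nabla^{2}h)^{2}\to b^{2\zeta-2}\int\mathrm{d}^{2}r\,(\nabla^{2}h)^{2}\;,\qquad\int\mathrm{d}^{2}r\,|\bm{\nabla}h|^{2}\to b^{2\zeta}\int\mathrm{d}^{2}r\,|\bm{\nabla}h|^{2}\;,$

$\int\mathrm{d}^{2}r\,\left(\partial_{i}h\partial_{j}h\right)^{2}\to b^{4\zeta-2}\int\mathrm{d}^{2}r\,\left(\partial_{i}h\partial_{j}h\right)^{2}\;,\qquad\dfrac{1}{A}\left[\int\mathrm{d}^{2}r\,|\bm{\nabla}h|^{2}\right]^{2}\to b^{4\zeta-2}\,\dfrac{1}{A}\left[\int\mathrm{d}^{2}r\,|\bm{\nabla}h|^{2}\right]^{2}\;,$

which reproduces $y_{\kappa}=2\zeta-2$, $y_{\sigma}=2\zeta$ and $y_{Y}=y_{v}=4\zeta-2$ in $D=2$.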
As usual, a Ginzburg-like criterion determines the thermal length scale beyond which such nonlinear fluctuations dominate Nelson and Peliti (1987); Kantor and Nelson (1987); Košmrlj and Nelson (2016); Bowick _et al._ (2017) $\ell_{\rm{th}}=\sqrt{\dfrac{16\pi^{3}\kappa^{2}}{3k_{B}TY}}\;.$ (6.7) Remarkably, at room temperature, a monolayer of graphene or MoS2 has $\ell_{\rm{th}}\sim 1-10~{}\rm{\AA}$, and thermal fluctuations matter already on the atomic scale. Softer materials, such as naturally occurring 2D organic polymers Schmidt _et al._ (1993); Hermanson _et al._ (2007); Klotz _et al._ (2020) have a typical $\ell_{\rm{th}}\sim 0.1-1~{}\mu$m, which is much larger due to their smaller Young’s moduli. As a result, the consequences of thermal fluctuations are most dramatic in atomic crystals, in contrast to the softer membranes. We can perturbatively account for such fluctuation effects within a renormalization group framework that we implement below.

## 7 Perturbative Renormalization Group

We now implement a conventional Wilsonian renormalization group Goldenfeld (2018) by iteratively integrating out a thin shell in momentum space of short wavelength fluctuations. The cutoff in Fourier space is $\Lambda\sim 1/a$, where $a$ is the microscopic lattice spacing. As an aside, we note that, although the nonlocal term involving $v$ in Eq. 3.11 is quite unusual, it can be treated straightforwardly within a standard Wilsonian treatment, as has been done for related problems in, for example, compressible magnets Sak (1974); Bergman and Halperin (1976); de Moura _et al._ (1976). We perform a systematic $\varepsilon=4-D$ expansion about the upper critical dimension following previous works on unconstrained sheets Aronovitz and Lubensky (1988); Guitter _et al._ (1988). Although the full diagrammatic calculation is presented in Appendix C, we describe the main results below. In Appendix D, we separately provide a simple, but uncontrolled, one-loop calculation with fixed internal ($D=2$) and external ($d=3$) dimensions that is qualitatively correct, but numerically inaccurate.

### 7.1 Recursion relations

We carry out a perturbative low-temperature evaluation of thermal fluctuations to one-loop order. By integrating out fluctuations within a shell of wavevectors $\Lambda/b\leq q\leq\Lambda$, where $b=e^{s}$ is a scale factor, and $\Lambda^{-1}$ a short distance cut-off of order the lattice spacing or membrane thickness, we compute corrections to the various parameters in the model. As explained in Appendix C, even with the addition of the new nonlinear term, the form of our elastic description in $\mathcal{F}$ (Eq. 3.11) remains unchanged at long wavelengths under coarse-graining; only coupling constants such as $\kappa,\sigma,Y,v$ and $\mathcal{E}$ get renormalized.
The fluctuation corrections can be cast as differential recursion relations given below: $\displaystyle\dfrac{\mathrm{d}\kappa}{\mathrm{d}s}$ $\displaystyle=\kappa(2\zeta-\varepsilon)+\dfrac{5k_{B}T\Lambda^{2}}{192\pi^{2}(\kappa\Lambda^{2}+\sigma)}(Y+4\mu)\;,$ (7.1) $\displaystyle\dfrac{\mathrm{d}\sigma}{\mathrm{d}s}$ $\displaystyle=\sigma(2\zeta+2-\varepsilon)+\dfrac{k_{B}Tv\Lambda^{4}}{16\pi^{2}(\kappa\Lambda^{2}+\sigma)}\;,$ (7.2) $\displaystyle\dfrac{\mathrm{d}Y}{\mathrm{d}s}$ $\displaystyle=Y(4\zeta-\varepsilon)-\dfrac{5k_{B}TY^{2}\Lambda^{4}}{384\pi^{2}(\kappa\Lambda^{2}+\sigma)^{2}}\;,$ (7.3) $\displaystyle\dfrac{\mathrm{d}\mu}{\mathrm{d}s}$ $\displaystyle=\mu(4\zeta-\varepsilon)-\dfrac{k_{B}T\mu^{2}\Lambda^{4}}{96\pi^{2}(\kappa\Lambda^{2}+\sigma)^{2}}\;,$ (7.4) $\displaystyle\dfrac{\mathrm{d}v}{\mathrm{d}s}$ $\displaystyle=v(4\zeta-\varepsilon)-\dfrac{k_{B}Tv^{2}\Lambda^{4}}{16\pi^{2}(\kappa\Lambda^{2}+\sigma)^{2}}\;,$ (7.5) $\displaystyle\dfrac{\mathrm{d}\mathcal{E}}{\mathrm{d}s}$ $\displaystyle=\mathcal{E}(4-\varepsilon+\zeta)\;.$ (7.6) The fluctuation corrections are evaluated here to leading order in $\varepsilon=4-D$ and the codimension of the manifold is set to its physically relevant value of $d_{c}=d-D=1$. Furthermore, as these equations are derived in general dimension, we use the $D$-dimensional generalization of the Young’s modulus ($Y=2\mu(2\mu+D\lambda)/(2\mu+\lambda)$) and the bulk modulus ($B=(2\mu/D)+\lambda$), which reduce to the standard 2D expressions for $D=2$, as expected. The renormalization equations for $\kappa$, $\mu$ and $Y$ in Eqs. 7.1, 7.4 and 7.3 are identical to those obtained previously Aronovitz and Lubensky (1988); Guitter _et al._ (1989), while the important coupled equations for $\sigma$ and $v$ are new results. The difference between the isometric and isotensional ensembles, captured by the presence of the nonlinear coupling $v$, is already reflected in the modified renormalization group flows. We have also retained the external field $\mathcal{E}$ here (since we have set $d_{c}=1$, both the height $h$ and the symmetry-breaking field $\mathcal{E}$ are scalars); this quantity renormalizes trivially without any graphical corrections as it couples only to the average height ($\int\mathrm{d}{\bf r}\;h=h_{{\bf q}={\bf 0}}$), which is left untouched by the elastic and geometric nonlinearities. The shear and Young’s moduli renormalize independently, as expected, but they both contribute to the renormalization of the bending rigidity near $D=4$. One can easily use the recursion relations for $\mu$ and $Y$ to obtain equivalent ones for the $D$-dimensional versions of the bulk modulus ($B=(2\mu/D)+\lambda$) and the Poisson’s ratio ($\nu_{p}=\lambda/[2\mu+(D-1)\lambda]$), namely $\displaystyle\dfrac{\mathrm{d}B}{\mathrm{d}s}$ $\displaystyle=B(4\zeta-\varepsilon)-\dfrac{k_{B}TB^{2}\Lambda^{4}}{16\pi^{2}(\kappa\Lambda^{2}+\sigma)^{2}}\;,$ (7.7) $\displaystyle\dfrac{\mathrm{d}\nu_{p}}{\mathrm{d}s}$ $\displaystyle=-\dfrac{k_{B}T\mu\Lambda^{4}}{192\pi^{2}(\kappa\Lambda^{2}+\sigma)^{2}}(1+\nu_{p})(1+3\nu_{p})\;.$ (7.8) A couple of points are worth noting here. First, as expected, the bulk and shear moduli also renormalize independently. Second, upon comparing Eq. 7.7 and Eq. 7.5, we immediately see that both $v$ and $B$ renormalize in identical ways, guaranteeing that in the isometric ensemble, since $v=B$ at the microscopic scale, they remain equal on larger scales as well.
In contrast, the isotensional ensemble is characterized by $v=0$ (which remains invariant under renormalization), even though $B>0$. The third important point concerns the Poisson’s ratio $\nu_{p}$ (we define the Poisson’s ratio simply in terms of the elastic moduli here, as is usually done Le Doussal and Radzihovsky (2018); Nelson _et al._ (2004); alternative, yet related, definitions are possible that can yield different results, see Ref. Burmistrov _et al._ (2018b) for instance). As is easily seen from Eq. 7.8, we have a stable fixed point where $\mathrm{d}\nu_{p}/\mathrm{d}s=0$ at $\nu_{p}=-1/3$ ($\nu_{p}=-1$ is unphysical as it corresponds to $\lambda=-2\mu/D$ leading to a marginally stable solid with vanishing bulk and Young’s moduli), which is exactly the one-loop estimate for the universal Poisson’s ratio of a free-standing elastic membrane, in accord with previous self-consistent calculations Le Doussal and Radzihovsky (1992, 2018) and Monte Carlo simulations Bowick _et al._ (1996); Falcioni _et al._ (1997); Bowick _et al._ (2001); Cuerno _et al._ (2016). This universal auxetic response is a characteristic property of the flat phase of unconstrained thermalized membranes Bowick _et al._ (2001); Bowick and Travesset (2001). The reason we obtain this result from a simple one-loop calculation near $D=4$ is that the structure of the one-loop calculation is the same as that of the self-consistent calculation done by Le Doussal and Radzihovsky (1992), and the higher order box diagrams are convergent in that case. Furthermore, the universal Poisson’s ratio obtained is independent of both internal and embedding dimensions of the membrane Le Doussal and Radzihovsky (1992), hence we recover the same value even in an $\varepsilon=4-D$ expansion. To analyze these recursion relations, we introduce the following dimensionless variables (the Poisson’s ratio is of course already dimensionless), $\displaystyle K=\dfrac{\kappa\Lambda^{2}}{\kappa\Lambda^{2}+\sigma}\;,\quad\bar{Y}=\dfrac{k_{B}T\Lambda^{4}}{(\kappa\Lambda^{2}+\sigma)^{2}}Y\;,$ $\displaystyle\bar{\mu}=\dfrac{k_{B}T\Lambda^{4}}{(\kappa\Lambda^{2}+\sigma)^{2}}\mu\;,\quad\bar{v}=\dfrac{k_{B}T\Lambda^{4}}{(\kappa\Lambda^{2}+\sigma)^{2}}v\;,$ (7.9) which are appropriate near $D=4$ dimensions. For general $D$, we must replace the factor of $\Lambda^{4}$ by $\Lambda^{D}$ to keep $\bar{Y}$, $\bar{\mu}$ and $\bar{v}$ dimensionless. As the external field $\mathcal{E}$ does not influence the fixed points, we will set $\mathcal{E}=0$ for now. The effects of $\mathcal{E}\neq 0$ will be addressed within a general scaling theory we develop in Sec. 9. The physical interpretation of $K$ is that it measures the relative importance of bending to the external load. We note that $0\leq K<\infty$ and then demarcate three distinct regimes based on the value of $K$ as follows:

* $0<K<1$ : Tension dominated ($\sigma>0$),
* $K\sim 1$ : Bending dominated ($\sigma\sim 0$),
* $K>1$ : Compression dominated ($\sigma<0$).

The buckled state we found using mean-field theory thus occurs for $K>1$ in the presence of compression.
The recursion relations for these dimensionless coupling constants then read $\displaystyle\dfrac{\mathrm{d}K}{\mathrm{d}s}$ $\displaystyle=2(K-1)\left[K-\dfrac{5}{384\pi^{2}}(\bar{Y}+4\bar{\mu})\right]-\dfrac{\bar{v}K}{16\pi^{2}}\;,$ (7.10) $\displaystyle\dfrac{\mathrm{d}\bar{Y}}{\mathrm{d}s}$ $\displaystyle=\left[\varepsilon+4(K-1)-\dfrac{\bar{v}}{8\pi^{2}}-\dfrac{25\bar{Y}}{384\pi^{2}}-\dfrac{5\bar{\mu}}{24\pi^{2}}\right]\bar{Y}\;,$ (7.11) $\displaystyle\dfrac{\mathrm{d}\bar{\mu}}{\mathrm{d}s}$ $\displaystyle=\left[\varepsilon+4(K-1)-\dfrac{\bar{v}}{8\pi^{2}}-\dfrac{5\bar{Y}}{96\pi^{2}}-\dfrac{7\bar{\mu}}{32\pi^{2}}\right]\bar{\mu}\;.$ (7.12) $\displaystyle\dfrac{\mathrm{d}\bar{v}}{\mathrm{d}s}$ $\displaystyle=\left[\varepsilon+4(K-1)-\dfrac{3\bar{v}}{16\pi^{2}}-\dfrac{5\bar{Y}}{96\pi^{2}}-\dfrac{5\bar{\mu}}{24\pi^{2}}\right]\bar{v}\;.$ (7.13) $\displaystyle\dfrac{\mathrm{d}\nu_{p}}{\mathrm{d}s}$ $\displaystyle=-\dfrac{k_{B}T\mu\Lambda^{4}}{192\pi^{2}(\kappa\Lambda^{2}+\sigma)^{2}}(1+\nu_{p})(1+3\nu_{p})\;.$ (7.14) As expected, the scale factor $\zeta$ for the flexural phonon field drops out of the equations when cast in terms of dimensionless variables. Note that, while we quote the renormalization group flow equations for $\bar{Y},\bar{\mu}$ and $\nu_{p}$ (Eqs. 7.11, 7.12 and 7.14), only two of the three are independent, as $\nu_{p}=(Y-2\mu)/[2\mu+(D-2)Y]$. For an unconstrained membrane, $K$ remains fixed at unity, as $\sigma=\bar{v}=0$. While in the isotensional ensemble, we can have any $\sigma\neq 0$ ($K\neq 1$) with $\bar{v}=0$, in the isometric ensemble, we generally have _both_ $\sigma\neq 0$ and $\bar{v}\neq 0$. In the latter case, the system spontaneously develops a thermally generated tension due to the geometric confinement enforced by the clamped boundary conditions, as discussed below. But first, we analyze the fixed points of the recursion relations. ### 7.2 Fixed points Figure 4: A schematic of the full renormalization group flow diagram in the three-dimensional parameter space of $\\{K,\bar{Y},\bar{v}\\}$. We fix the Poisson’s ratio to its universal value $\nu_{p}=-1/3$ here, so that both the shear and bulk moduli are determined by the Young’s modulus through $\bar{\mu}=(D+1)\bar{Y}/4$ and $\bar{B}=(D+1)\bar{Y}/D(D+2)$. The three isotensional fixed points (Gσ, Gκ and vKth) are shown as red points with $0\leq K\leq 1$ and $\bar{v}=0$, while the new constrained fixed point CvKth with $K>1$ and $\bar{v}\neq 0$ is shown in blue. An unphysical fixed point with $\bar{v}\neq 0$ and $\bar{Y}=\bar{B}=\bar{\mu}=0$ is also present as a green dot at the bottom, but this is irrelevant for our purposes. The red plane at $\bar{v}=0$ on the left shows the accessible space of coupling parameters within the often used isotensional ensemble. Under fixed stress (isotensional) conditions, the thermal buckling transition occurs at $\sigma=0$ (i.e., $K=1$) and is controlled by the conventional vKth fixed point. However, for fixed strain (isometric) conditions, when $\bar{v}\neq 0$, we flow instead to a new codimension-1 fixed point (CvKth) that now controls the thermal buckling transition. The unstable renormalization group flow going towards large $K>1$ corresponds to strong compression and postbuckling behaviour. At low temperature, Gσ is a globally attracting and stable fixed point which controls the properties of a tense flat membrane for both ensembles. 
We now enumerate the four physically relevant fixed points permitted in both ensembles, to $\mathcal{O}(\varepsilon)$ (there are other interacting fixed points present, including two that correspond to a fluid membrane with a vanishing shear modulus $\mu=0$ and a physically unrealizable one with $B=0$ but $v\neq 0$, that are not relevant for us):

1. (i) Gσ: $K_{*}=0$, $\bar{Y}_{*}=0$, $\bar{\mu}_{*}=0$, $\bar{v}_{*}=0$.
2. (ii) Gκ: $K_{*}=1$, $\bar{Y}_{*}=0$, $\bar{\mu}_{*}=0$, $\bar{v}_{*}=0$.
3. (iii) vKth: $K_{*}=1$, $\bar{Y}_{*}=77\pi^{2}\varepsilon/25$, $\bar{\mu}_{*}=96\pi^{2}\varepsilon/25$, $\bar{v}_{*}=0$ ($\nu_{p}=-1/3$).
4. (iv) CvKth: $K_{*}=1+\varepsilon/50$, $\bar{Y}_{*}=77\pi^{2}\varepsilon/25$, $\bar{\mu}_{*}=96\pi^{2}\varepsilon/25$, $\bar{v}_{*}=16\pi^{2}\varepsilon/25$ ($\nu_{p}=-1/3$).

We have set $d_{c}=1$ here as is physically relevant; the expressions for the fixed points with arbitrary $d_{c}$ are presented in Appendix C. Of these fixed points, only Gκ, Gσ and vKth are admissible in the isotensional ensemble. The thermal Föppl-von Kármán fixed point vKth has been the focus of virtually all studies to date. The _constrained_ thermal fixed point CvKth is new and unique to the isometric ensemble. Both Gκ and Gσ are Gaussian (noninteracting) fixed points that are bending and tension dominated, respectively. The conventional flat phase is described by vKth, and occurs for a vanishing renormalized tension (hence $K_{*}=1$) that is appropriate for an unconstrained fluctuating membrane. This fixed point has been extensively studied previously Nelson and Peliti (1987); Le Doussal and Radzihovsky (1992); Aronovitz and Lubensky (1988); Guitter _et al._ (1988, 1989); Roldán _et al._ (2011); Košmrlj and Nelson (2016) and it controls the buckling transition in the absence of boundary constraints, i.e., the _isotensional_ ensemble, as evidenced by $\bar{v}=0$ at the fixed point. In contrast, a new constrained fixed point CvKth emerges in the isometric ensemble with $\bar{v}\neq 0$, reflecting the geometric constraint imposed by the clamped boundaries. The new interacting fixed point involves bare compression (as $K_{*}>1$), unlike the others, reflecting the presence of a fluctuation-induced spontaneous tension that appears only when the boundary is constrained. We will discuss this feature in more detail in Sec. 8. As we will show below, CvKth controls the buckling transition in the _isometric_ ensemble. We note that both vKth and CvKth are characterized by the universal Poisson’s ratio $\nu_{p}=-1/3$. A schematic of the full renormalization group flow diagram with the above fixed points is sketched in Fig. 4. The stability of each fixed point can be analyzed by linearizing about it. The two Gaussian fixed points, Gσ and Gκ, differ simply by the presence or absence of $\sigma$ and play a role in both ensembles. The tension-dominated fixed point Gσ is a globally stable attractor that controls the low temperature phase of a tense flat membrane. On the other hand, the bending dominated Gaussian fixed point Gκ is unstable in all directions, as found previously for tense membranes Guitter _et al._ (1989); Roldán _et al._ (2011); Košmrlj and Nelson (2016). Both these fixed points, along with their flow directions in different invariant planar sections of the full parameter space, are shown in Fig. 5. Within the invariant subspace of $\bar{v}=0$, associated with the isotensional ensemble (see Fig.
4), the conventional flat phase fixed point vKth has only one relevant direction corresponding to the external stress. As a result, it controls the finite temperature buckling transition in the isotensional ensemble, where the stress is tuned to zero at threshold. But once we allow for $\bar{v}\neq 0$, i.e., work in the isometric ensemble instead, we find that vKth is now a codimension two fixed point, being _unstable_ to this new nonlinear coupling. As sketched in Fig. 4, the system can now flow to the new constrained fixed point CvKth instead, which has codimension one. Perturbations in $\bar{Y},\bar{B}$ and $\bar{v}$ are all irrelevant at CvKth and the only unstable or relevant direction is primarily along $K$. In other words, within the isometric ensemble, the strain-tuned buckling transition is controlled by CvKth and _not_ vKth. The identification of two distinct ensemble-dependent fixed points controlling buckling is a significant achievement. The fact that the choice of fixed point is picked by the mechanical ensemble, here decided by fixed strain or stress boundary conditions, is quite intriguing. Although the isotensional and isometric ensembles are dual to each other, they remain inequivalent even in the thermodynamic limit, due to flexural phonons on all length scales at the critical point. As mentioned in the introduction, this remarkable feature is akin to Fisher renormalization of conventional critical exponents, which we demonstrate explicitly in Sec. 9. Below we compute the flat-to-buckled phase boundary and analyze the linearized flow in the vicinity of the two interacting fixed points to extract critical exponents for the buckling transition.

## 8 Buckling transition in $4-\varepsilon$ dimensions

Figure 5: A schematic of the renormalization group flows projected onto the invariant attracting planes appropriate to the two ensembles. The Poisson’s ratio is fixed to its universal value $\nu_{p}=-1/3$ in both cases. (a) In the isotensional ensemble, $\bar{v}=0$ identically, and we have three fixed points, Gσ, Gκ and vKth. The red lines are separatrices that connect the various fixed points and demarcate their basins of attraction. The vertical separatrix flowing into vKth at $K=1$ describes the buckling transition in this ensemble. The black streamlines are integral curves of the flow and the blue curves highlight two representative trajectories bracketing the buckling transition. (b) In the isometric ensemble, the relevant attractor is now a plane characterized by $\bar{Y}=z_{*}\bar{v}$ with $z_{*}=24/5+\mathcal{O}(\varepsilon)$. Once again, we have three important fixed points, Gσ, Gκ and CvKth. The red curves are separatrices delimiting stability basins for each fixed point and the separatrix flowing into CvKth controls the buckling threshold in this ensemble. Unlike in the isotensional case, this line is curved and bends towards $K>1$, signalling the generation of spontaneous tension (Fig. 6). The black curves are streamlines of the local renormalization group flow and the blue curves are representative flow trajectories on either side of the buckling transition. In both ensembles, Gσ is globally stable for flat and tense membranes ($K<1$), while flows towards larger values of $K(>1)$ lead to strong compression and buckling.

Let us now analyze the recursion relations given in Eqs. 7.10-7.14 in more detail. For a given $\bar{Y}$, as $\bar{\mu}$ and $\nu_{p}$ are related, we only have to consider one of them. From Eq.
7.14, we easily see that the fixed point at $\nu_{p}=-1/3$ is stable and exponentially attracting for any finite $\bar{Y}$. So we shall neglect perturbations in the Poisson’s ratio and fix $\nu_{p}=-1/3$. This condition in turn fixes the shear modulus to be $\bar{\mu}=(D+1)\bar{Y}/4$ and the bulk modulus to be $\bar{B}=(D+1)\bar{Y}/D(D+2)$, allowing us to then concentrate on the flow in the three-dimensional subspace of just $\\{K,\bar{Y},\bar{v}\\}$ parametrizing the stable attractor. As an aside, note that $\nu_{p}$ rapidly approaches its fixed point value of $-1/3$ only when $\bar{Y}>0$, which is true in the vicinity of both the vKth and CvKth fixed points. In contrast, for a tense membrane governed by Gσ, $\bar{Y}(s)\propto e^{-s(4-\varepsilon)}\to 0$ approaches zero exponentially fast on large scales. In this case, $\nu_{p}$ does not reach its universal fixed point value and, instead, the rapidly vanishing $\bar{Y}$ essentially freezes $\nu_{p}$ at a value that depends on microscopic properties of the material. Thus, while the large-distance Poisson’s ratio is universal at the buckling transition, away from it, in a tense membrane, it becomes nonuniversal and depends on microscopic details, consistent with results for fluctuating membranes under strong tension Roldán _et al._ (2011); Košmrlj and Nelson (2016); Los _et al._ (2009); Burmistrov _et al._ (2018a). We shall now address buckling criticality in the two ensembles separately below. ### 8.1 Isotensional ensemble In the isotensional ensemble, we set both $\sigma=\sigma_{0}$ and $\bar{v}=0$. The latter picks out a renormalization group invariant plane (see Figs. 4 and 5a) specific to this ensemble. The buckling transition then occurs at $\sigma_{0}=\sigma_{c}=0$ (in the thermodynamic limit of infinite system size), though $\sigma_{c}$ is nonzero for a finite sheet (see Sec. 5). Note that, right at the unconstrained fixed point vKth, we do have $\sigma_{0}=0$ (i.e., $K=1$) even at finite temperature, a result that holds to all orders in perturbation theory. Hence the critical stress at the buckling transition is still given by its $T=0$ value, $\sigma_{c}(T)=-\kappa q_{0}^{2}\;,$ (8.1) and it does _not_ receive corrections from thermal fluctuations in the isotensional ensemble. As before, $q_{0}\sim 1/R$ is the smallest wavevector determined by the boundary conditions and the size of the sheet (Sec. 5). To compute anomalous scaling exponents at the transition, we use a standard renormalization group matching procedure Rudnick and Nelson (1976) to relate correlation functions evaluated near the transition to those further away from the critical point. Under scaling, ${\bf r}=b{\bf r}^{\prime}$ and $h({\bf r})=b^{\zeta}h({\bf r}^{\prime})$, and, conversely, ${\bf q}={\bf q}^{\prime}/b$ and $h_{{\bf q}}=h^{\prime}_{{\bf q}^{\prime}}b^{D+\zeta}$ (in $D$ dimensions). Upon setting $b=e^{s}$, we have the following scaling relation for the height-height correlation function ($G_{h}({\bf q})\equiv\langle|h_{{\bf q}}|^{2}\rangle/V_{D}$, where $V_{D}$ is the $D$-dimensional volume of the manifold) $G_{h}({\bf q})=\exp\left\\{\int_{0}^{s}\mathrm{d}s^{\prime}[D+2\zeta(s^{\prime})]\right\\}G_{h}({\bf q}e^{s};s)\;,$ (8.2) where $G_{h}({\bf k};s)$ is computed using all the parameters evaluated at scale $s$. We now choose $s=s^{*}$ such that $|{\bf q}|e^{s^{*}}=\ell_{\rm{th}}^{-1}$, set by the thermal length (Eq. 6.7).
This condition allows us to write, $\displaystyle G_{h}({\bf q})$ $\displaystyle=\exp\left[Ds^{*}+2\int_{0}^{s^{*}}\mathrm{d}s\;\zeta(s)\right]G_{h}(\ell_{\rm{th}}^{-1};s^{*})$ $\displaystyle=k_{B}T\ell_{\rm{th}}^{4}\dfrac{K(s^{*})}{\kappa(s^{*})}\exp\left[Ds^{*}+2\int_{0}^{s^{*}}\mathrm{d}s\;\zeta(s)\right]\;.$ (8.3) Here we have used the fact that on small scales ($\ell\lesssim\ell_{\rm{th}}$), fluctuation corrections are negligible and a Gaussian or mean field treatment is valid. To evaluate the renormalized bending rigidity at scale $s^{*}$, we also need the following flow equation $\dfrac{\mathrm{d}\kappa}{\mathrm{d}s}=\kappa\left[2\zeta-\varepsilon+\dfrac{5(\bar{Y}+4\bar{\mu})}{192\pi^{2}K}\right]\;.$ (8.4) It is convenient to choose the height rescaling factor $\zeta(s)$ to keep $\kappa(s)$ fixed, which leads to $\zeta(s)=\dfrac{\varepsilon}{2}-\dfrac{5}{384\pi^{2}K(s)}\left[\bar{Y}(s)+4\bar{\mu}(s)\right]\;.$ (8.5) Right at the buckling transition, the coupling constants flow to the fixed point vKth. The height correlator defines a renormalized bending rigidity via $\kappa_{R}({\bf q})^{-1}=q^{4}\dfrac{G_{h}({\bf q})}{k_{B}T}\;.$ (8.6) Upon using Eq. 8.4 and Eq. 8.3 right at buckling, this then gives the well-known diverging bending rigidity, $\kappa_{R}({\bf q})=\kappa(q\ell_{\rm{th}})^{-\eta}\;,\quad\eta=\dfrac{12}{25}\varepsilon+\mathcal{O}(\varepsilon^{2})\;.$ (8.7) The anomalous exponent $\eta$ that we obtain matches earlier calculations performed for unconstrained membranes Aronovitz and Lubensky (1988); Guitter _et al._ (1988). While we don’t expect the one-loop approximation of $\eta$ to be numerically accurate in the physically relevant case of $D=2$ dimensions, we nonetheless obtain a reasonable value of $\eta=24/25\approx 0.96$ upon setting $\varepsilon=2$ in Eq. 8.7. More sophisticated calculations involving self-consistent Le Doussal and Radzihovsky (1992); *le2018anomalous and nonperturbative techniques Kownacki and Mouhanna (2009) give $\eta\simeq 0.82-0.85$, which compares well with the exponent value measured in numerical simulations Bowick _et al._ (1996); Los _et al._ (2009); Roldán _et al._ (2011); Bowick _et al._ (2017) and recent experiments Blees _et al._ (2015). The elastic moduli also experience a scale-dependent renormalization, though they now get _softer_ on larger scales. The renormalized Young’s modulus scales as $Y_{R}({\bf q})=Y(q\ell_{\rm{th}})^{\eta_{u}}\;,\quad\eta_{u}=\dfrac{\varepsilon}{25}+\mathcal{O}(\varepsilon^{2})\;.$ (8.8) The other elastic moduli ($\mu,\lambda$ and $B$) all scale in the same way, with the same $\eta_{u}$ exponent. Both $\eta$ and $\eta_{u}$ are related via a Ward identity $\eta_{u}+2\eta=\varepsilon$, which is a consequence of rotational invariance Guitter _et al._ (1988, 1989); Aronovitz and Lubensky (1988); Aronovitz _et al._ (1989); Le Doussal and Radzihovsky (1992). If the external tension is small but nonzero, then we perturb slightly away from vKth. We write $K=1+\delta K$ and linearize in $\delta K$ to get $\dfrac{\mathrm{d}\delta K}{\mathrm{d}s}=\left(2-\dfrac{12}{25}\varepsilon\right)\delta K\equiv(2-\eta)\delta K\;,$ (8.9) which is exactly true by virtue of the definition of $\eta$. For a small external stress ($|\sigma_{0}|\ll\kappa\Lambda^{2}$), $K\approx 1$, with $\delta K(s)=\delta K(0)e^{(2-\eta)s}$ growing with $s$, as expected of a relevant perturbation.
Eventually, we reach a scale $s^{*}$ at which the external stress has grown large and is now comparable to the bending rigidity, i.e., $|\sigma(s^{*})|\approx\kappa(s^{*})\Lambda^{2}$, which defines the correlation length $\xi$ via $s^{*}=\ln(\xi/a)$ ($a\sim 1/\Lambda$ is a lattice cutoff) beyond which the stress dominates bending. After incorporating a nonzero $\sigma_{c}$ appropriate to a finite system, we have $\delta K(0)\propto|\sigma_{0}-\sigma_{c}|$, which gives $\xi\propto|\sigma_{0}-\sigma_{c}|^{-\nu}\;,\quad\nu=\dfrac{1}{2-\eta}=\dfrac{1}{2}+\dfrac{3\varepsilon}{25}+\mathcal{O}(\varepsilon^{2})\;,$ (8.10) for the isotensional ensemble. On short length scales ($\ell\ll\xi$), the system is controlled by the bending-dominated vicinity of the vKth fixed point, while on larger scales ($\ell\gg\xi$) when $\sigma_{0}>0$, the system is dominated by external tension and the Gσ fixed point. Hence, for $\sigma_{0}>0$, we have the following length scale dependence (for $D=2$) $\dfrac{\kappa_{R}(\ell)}{\kappa}\propto\begin{cases}1\;,\quad(\ell<\ell_{\rm{th}})\\\ (\ell/\ell_{\rm{th}})^{\eta}\;,\quad(\ell_{\rm{th}}<\ell<\xi)\\\ (\xi/\ell_{\rm{th}})^{\eta}\ln(\ell/\xi)\;,\quad(\ell>\xi)\end{cases}\;,$ (8.11) with similar expressions for the elastic moduli Košmrlj and Nelson (2016). A similar dependence without the $\ln(\ell/\xi)$ term also exists for $2<D<4$ dimensions. ### 8.2 Isometric ensemble Now that we have recapitulated the isotensional results, let us move on to the more interesting isometric ensemble. We now set $\sigma=B\epsilon$ and identify $\bar{v}=\bar{B}$. As mentioned earlier, the $\bar{v}=0$ plane defining the isotensional ensemble is unstable to finite $\bar{v}$ perturbations, leading us to consider the full 3D space of parameters $\\{K,\bar{Y},\bar{v}\\}$. For $\bar{v}>0$, i.e., in the isometric ensemble now, we can further reduce dimensionality by writing $\bar{Y}=z\bar{v}$, which gives $\dfrac{\mathrm{d}z}{\mathrm{d}s}=\dfrac{\bar{Y}}{16\pi^{2}}\left(1-\dfrac{5z}{24}\right)\;,$ (8.12) to leading order in $\varepsilon$. With $\bar{Y}>0$ as before, this equation has an exponentially stable fixed point given by $z=24/5+\mathcal{O}(\varepsilon)$ that attracts all the renormalization group flows for $\bar{v}>0$. This stable fixed point in $z$ hence defines an attracting invariant plane in the $\\{K,\bar{Y},\bar{v}\\}$ space (see Fig. 5b), only accessible within the isometric ensemble. Note that the new constrained fixed point CvKth lies on this plane as well, a welcome feature that guarantees that when $\bar{v}$ is nonzero microscopically and equal to $\bar{B}$ on short scales (as it should be in the isometric ensemble), this equality is retained on larger scales. More importantly, the relation $\bar{v}=\bar{B}>0$ serves as an invariant attractor under coarse-graining in the isometric ensemble. Within this plane, we have two coupled flow equations, namely $\displaystyle\dfrac{\mathrm{d}K}{\mathrm{d}s}$ $\displaystyle=2K(K-1)+\dfrac{\bar{v}}{16\pi^{2}}(12-13K)\;,$ (8.13) $\displaystyle\dfrac{\mathrm{d}\bar{v}}{\mathrm{d}s}$ $\displaystyle=\bar{v}\left[\varepsilon+4(K-1)-\dfrac{27\bar{v}}{16\pi^{2}}\right]\;,$ (8.14) as shown in Fig. 5b. Upon dividing the above two equations, we obtain $\mathrm{d}K/\mathrm{d}\bar{v}$, which we numerically integrate to obtain the basin of attraction of Gσ and CvKth. The stable and unstable manifolds are obtained as integral curves of the stable and unstable eigendirections at CvKth, which correspond to separatrices shown in red in Fig. 5b.
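These flows are straightforward to integrate numerically. The sketch below is our own illustrative code, not part of the original analysis; the value of $\varepsilon$, the runaway ceiling on $K$, and the two starting points are assumptions made purely for illustration. It integrates Eqs. 8.13 and 8.14 for $d_{c}=1$, first checking that the quoted CvKth values annihilate both beta functions up to $\mathcal{O}(\varepsilon^{2})$, and then classifies trajectories on either side of the separatrix:

```python
import numpy as np
from scipy.integrate import solve_ivp

pi2 = np.pi ** 2

def rhs(s, y, eps):
    """One-loop flow equations (8.13)-(8.14) on the attracting plane (d_c = 1)."""
    K, v = y
    dK = 2.0 * K * (K - 1.0) + v * (12.0 - 13.0 * K) / (16.0 * pi2)
    dv = v * (eps + 4.0 * (K - 1.0) - 27.0 * v / (16.0 * pi2))
    return [dK, dv]

# The quoted CvK_th values, K* = 1 + eps/50 and vbar* = 16 pi^2 eps / 25,
# should make both beta functions vanish up to O(eps^2).
eps_chk = 0.01
print(rhs(0.0, [1.0 + eps_chk / 50.0, 16.0 * pi2 * eps_chk / 25.0], eps_chk))

def runaway(s, y, eps):
    return y[0] - 5.0          # stop once K exceeds an (arbitrary) ceiling
runaway.terminal = True

def classify(K0, v0, eps=2.0, s_max=40.0):
    """Integrate a trajectory and report whether it flows to G_sigma or buckles."""
    sol = solve_ivp(rhs, (0.0, s_max), [K0, v0], args=(eps,),
                    events=runaway, rtol=1e-8, atol=1e-10)
    return "buckled" if sol.y[0, -1] > 2.0 else "flat/tense (G_sigma)"

# Two nearby starting points straddling K = 1 at small vbar (cf. Fig. 5b)
for K0 in (0.98, 1.06):
    print(K0, classify(K0, v0=1.0))
```

With these illustrative choices the two starting points bracket the separatrix, mirroring the blue trajectories of Fig. 5b: the first flows toward Gσ while the second runs away toward large $K$.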
The separatrix connects the unstable fixed point Gκ at $K=1$ to the constrained thermal buckling transition fixed point at CvKth, and it delineates the stability region for a clamped membrane. All parameter values that fall to the left of this separatrix flow into the stable Gσ fixed point, leading to a sheet that is flat and tense on large scales. In the opposite case, parameter values starting to the right of the red separatrix flow away to larger values of $K$, signalling strong compression ($\sigma=B\epsilon<0$) and buckling of the membrane. Representative flow trajectories illustrating this are shown in blue in Fig. 5b. The separatrices are computed as the solution of a boundary value problem and hence don’t admit a simple analytical solution. Nonetheless, we can obtain some asymptotic results informed by the algebraic structure of the recursion relations, in conjunction with the numerical solution. Figure 6: The numerically computed buckling threshold $|\epsilon_{c}(T)|$ extrapolated to $\varepsilon=2$ ($D=2$). The critical buckling strain gets shifted to more negative (compressive) values at finite temperature due to the generation of a thermally induced spontaneous tension. We have subtracted out the zero temperature buckling threshold $\epsilon_{c}(0)=-\kappa q_{0}^{2}/B$. Near the buckling transition, we have used the universal Poisson’s ratio ($\nu_{p}=-1/3$) to relate the bulk and Young’s moduli in $D=2$. At low temperature (large $\ell_{\rm th}$), $\epsilon_{c}(T)\sim T\ln T$, while at higher temperature (smaller $\ell_{\rm th}$), we have $\epsilon_{c}(T)\sim-\sqrt{T}$ (Eq. 8.15). While this plot is obtained by setting $\varepsilon=2$ in the recursion relations obtained by expanding around $D=4$ dimensions, a qualitatively similar curve is obtained from a fixed dimension calculation with $D=2$ and $d=3$ (Appendix D). Upon using the definitions of $K$ and $\bar{B}$ and the relation $\sigma=B\epsilon$, we obtain the critical curve for the buckling transition strain ($\epsilon_{c}(T)$) in terms of the elastic constants and temperature, as plotted in Fig. 6 for $D=2$ ($\varepsilon=2$). For low and high temperatures, we find simple asymptotic expansions for the buckling threshold $\epsilon_{c}(T)<0$ (in $D=2$), $\displaystyle|\epsilon_{c}(T)|$ $\displaystyle\simeq\dfrac{\kappa q_{0}^{2}}{B}+\dfrac{k_{B}T}{8\pi\kappa}\left[2\ln\left(\dfrac{a}{\ell_{\rm{th}}}\right)+c_{1}\right]\;,\quad(\ell_{\rm{th}}\gg a)\;,$ (8.15a) $\displaystyle|\epsilon_{c}(T)|$ $\displaystyle\simeq\dfrac{\kappa q_{0}^{2}}{B}+\dfrac{k_{B}T}{8\pi\kappa}\left(\dfrac{\ell_{\rm{th}}}{a}\right)\left[c_{2}-c_{3}\dfrac{\ell_{\rm{th}}}{a}\right]\;,\quad(\ell_{\rm{th}}\ll a)\;,$ (8.15b) where $a\sim\Lambda^{-1}$ is the lattice cutoff and $c_{1,2,3}$ are numerical constants that must be computed by numerical integration of the recursion relations. While here we extrapolated our perturbative solution to $\varepsilon=2$, we have confirmed that the same asymptotic expressions for the buckling strain, with only $c_{1,2,3}$ modified, are also obtained within a fixed dimension calculation with $D=2$ and $d=3$ (Appendix D). As $T\to 0$, $\epsilon_{c}(T)\to-\kappa q_{0}^{2}/B$, which is the zero temperature buckling instability threshold with $q_{0}\sim 1/R$ being the smallest available mode in a system of size $R$ (Sec. 5). We have utilized the fact that, near the transition, $\nu_{p}=-1/3$, which relates $Y$ and $B$ via $B=3Y/8$ (in $D=2$), allowing us to write the above in terms of the thermal length $\ell_{\rm{th}}$.
As $\ell_{\rm{th}}\sim T^{-1/2}$ (Eq. 6.7), for $D=2$ we have $|\epsilon_{c}(T)|\sim T\ln(1/T)$ at low temperature and $|\epsilon_{c}(T)|\sim\sqrt{T}$ at high temperature, as shown in Fig. 6. For general $D$, as $T\to 0$, we find that $|\epsilon_{c}(T)|\sim T[1+{\rm const.}\;(T^{2/\varepsilon-1}-1)/(2-\varepsilon)]$ and the linear $T$ dependence dominates at small $T$ for $0<\varepsilon<2$ ($2<D<4$), while an additional logarithmic term $\sim T\ln(T)$ appears when $\varepsilon=2$ ($D=2$). Note that the high temperature asymptotics depends only weakly on dimension. We pause here to comment on the above results. Unlike in the isotensional ensemble, where the buckling threshold did _not_ receive any correction from thermal fluctuations (Eq. 8.1), in the isometric ensemble, the buckling threshold gets pushed to higher values of compression (as $\epsilon_{c}(T)<0$ and $|\epsilon_{c}(T)|$ increases with $T$) at higher temperature. In other words, the sheet spontaneously develops a tension due to thermal fluctuations in the isometric ensemble. A freely fluctuating sheet wants to shrink in-plane for entropic reasons, but the clamped boundaries resist this shrinkage, thereby putting the sheet under tension. As a result, the externally imposed strain now has to compensate and overcome this thermally induced tension in order to cause buckling. This effect is absent in the isotensional ensemble because the boundaries are free to fluctuate, allowing the sheet to freely shrink with increasing temperature, albeit against a constant external stress. We now compute critical scaling exponents at the buckling transition. Here, by tuning right to the buckling threshold, we approach the constrained fixed point CvKth. Upon evaluating the height-height correlator, we obtain the renormalized bending rigidity to be $\kappa_{R}({\bf q})=\kappa(q\ell_{\rm{th}})^{-\eta^{\prime}}\;,\quad\eta^{\prime}=\dfrac{12}{25}\varepsilon+\mathcal{O}(\varepsilon^{2})\;.$ (8.16) Remarkably, we obtain the same anomalous scaling exponent here as in the isotensional ensemble (Eq. 8.7). As we will show later in Sec. 9, we expect $\eta=\eta^{\prime}$ from general scaling arguments, which we also confirm for arbitrary $d_{c}$ within a lowest order systematic $\varepsilon=4-D$ expansion in Appendix C and Table 1. A similar analysis of the phonon correlator, or equivalently the nonlinear stretching term, also provides the renormalized Young’s modulus, $Y_{R}({\bf q})=Y(q\ell_{\rm{th}})^{\eta^{\prime}_{u}}\;,\quad\eta^{\prime}_{u}=\dfrac{\varepsilon}{25}+\mathcal{O}(\varepsilon^{2})\;.$ (8.17) As before, $\eta^{\prime}_{u}$ satisfies the Ward identity $\eta^{\prime}_{u}+2\eta^{\prime}=\varepsilon$. A consequence of the equality $\eta=\eta^{\prime}$ is that $\eta_{u}=\eta^{\prime}_{u}$ as well, which is verified here to leading order in an expansion in $\varepsilon=4-D$. Distinct critical exponents appear, however, when we perturb away from the buckling threshold, with one relevant direction that flows away from the fixed point CvKth. Upon linearizing about this fixed point, we construct and diagonalize the resulting Jacobian matrix to obtain the following eigenvalues valid to $\mathcal{O}(\varepsilon)$, $y_{0}=2-\dfrac{13\varepsilon}{25}\;,\ y_{1}=y_{2}=-\dfrac{\varepsilon}{25}\;,\ y_{3}=-\varepsilon\;.$ (8.18) We have three irrelevant directions with negative eigenvalues ($y_{1,2,3}<0$) and one relevant direction with a positive eigenvalue ($y_{0}>0$).
If we write $K(s)=K_{*}+\delta K(s)$, where $K_{*}=1+\varepsilon/50$ is the fixed point value and $\delta K(s)$ is a small deviation, then we find that $\delta K(s)\simeq\delta K(0)e^{y_{0}s}$ grows with scale as the renormalization group flow proceeds away from CvKth along the outgoing separatrix. Note that $\delta K(0)\propto(\epsilon_{c}(T)-\epsilon)$ is controlled by the distance to the buckling transition. This relation can be easily obtained by expanding $\sigma/(\kappa\Lambda^{2})=K^{-1}-1$ to linear order in $\delta K$ and setting $\sigma=B\epsilon$ as appropriate in the isometric ensemble. Upon using standard renormalization group arguments, we obtain the divergent correlation length to be $\xi\propto|\epsilon-\epsilon_{c}(T)|^{-\nu^{\prime}}\;,\quad\nu^{\prime}=\dfrac{1}{y_{0}}=\dfrac{1}{2}+\dfrac{13}{100}\varepsilon+\mathcal{O}(\varepsilon^{2})\;.$ (8.19) Strikingly, we obtain a distinct value of $\nu^{\prime}$ here as compared to the value of $\nu$ obtained in the isotensional ensemble. In Sec. 4, we in fact demonstrate through general scaling arguments that $\nu<2/D<\nu^{\prime}$, which is satisfied to leading order in $\varepsilon$ in our systematic expansion (see also Table 1). While we don’t expect this leading order perturbative calculation to remain numerically accurate for $\varepsilon=2$, the exponent inequality continues to hold in $D=2$, where it reduces to $\nu<1<\nu^{\prime}$. Note that the requirement $\nu^{\prime}>1$ for $D=2$ in the isometric ensemble implies an exceptionally strong divergence for a correlation length in critical phenomena. The difference in exponents ($\nu\neq\nu^{\prime}$) directly demonstrates that the universality class for buckling within the isometric ensemble is distinct from that in the isotensional ensemble, as advertised in the introduction. Our analysis of the renormalization group flow at the two fixed points associated with buckling in the two dual ensembles is now complete. While the calculation presented here was performed within a systematic $\varepsilon=4-D$ expansion at fixed manifold codimension $d_{c}$, we provide the general results for arbitrary $d_{c}$ in Appendix C. We also present a simpler, yet uncontrolled and hence inaccurate, one-loop approximation for the fixed points and exponents directly in fixed internal and embedding dimension ($D=2,d=3$) in Appendix D. Although we directly compute only the scaling exponents associated with the fluctuation spectra ($\eta,\eta^{\prime}$ or equivalently $\eta_{u},\eta^{\prime}_{u}$) and the correlation length ($\nu,\nu^{\prime}$), the other exponents defined in Sec. 4 can be obtained through various exponent identities derived below. All the exponents for both ensembles are listed to leading order in an $\varepsilon=4-D$ expansion for arbitrary codimension $d_{c}=d-D$ in Table 1. We also use the most accurate estimates for the $\eta$ exponent in the physically relevant dimensions of $D=2$ and $d=3$, obtained through the self-consistent screening approximation Le Doussal and Radzihovsky (1992), along with the scaling relations derived in Sec. 9, to directly quote our best estimates for the various buckling exponents in both ensembles, in Table 1. Below, we present a general scaling theory valid near the buckling transition and derive relations between various exponents, which acquire nonstandard forms due to the unusual size dependence exhibited by the buckling transition.
This framework will also allow us to explicitly demonstrate that the distinction between the isotensional and isometric buckling universality classes constitutes a mechanical variant of Fisher renormalization Fisher (1968). ## 9 Scaling Relations & Fisher Renormalization In this section, we continue to work in the general setting of a $D$-dimensional elastic manifold embedded in $d$-dimensional Euclidean space. As before, the codimension $d_{c}=d-D>0$ counts the number of directions in which the elastic material can deform extrinsically, i.e., the flexural modes. Close to the thermalized buckling transition, we have universal scaling laws, just as in conventional critical phenomena, compactly captured by the scaling form of the free energy itself. Standard renormalization group arguments show that the free energy density defined by $F=-(k_{B}T/V_{D})\ln[\int\mathcal{D}h\;e^{-\mathcal{F}/k_{B}T}]$ ($V_{D}$ is the $D$-dimensional volume, which is just $V_{2}=A$ the area for $D=2$) has a singular part $F_{s}$ which obeys the following scaling relation close to the transition Goldenfeld (2018) $F_{s}=b^{-D}\tilde{\Psi}_{F}\left(\Delta\sigma b^{1/\nu},\mathcal{E}b^{y_{\mathcal{E}}}\right)\;,$ (9.1) where $b$ is a scale factor and $\tilde{\Psi}_{F}$ is a scaling function that implicitly depends on the system size via $R/b$, the bending rigidity via $\kappa b^{-\eta}$ and the elastic moduli via $\\{Y,B\\}b^{\eta_{u}}$. We suppress this dependence to ease notation, but these quantities are important as they give rise to nonstandard scaling relations later. Eq. 9.1 allows us to map the physics near the buckling transition onto the mean field theory derived in Sec. 5. A finite external field $\mathcal{E}$ is a strongly relevant perturbation, and has been retained with its scaling exponent $y_{\mathcal{E}}>0$. For convenience, we will work within the isotensional ensemble where the distance from the buckling threshold is given by $\Delta\sigma=\sigma_{0}-\sigma_{c}(T)$ (note that $\sigma_{c}(T)$ is in general size dependent in a finite sheet, due to the zero temperature buckling threshold $\propto 1/R^{2}$, see Sec. 5; in addition, fluctuation effects will lead to a shift in the buckling threshold $\propto 1/R^{1/\nu}$ as per standard finite size scaling arguments Amit and Martin-Mayor (2005), which we expect to dominate over the zero temperature threshold in a large sheet; a similar argument holds in the isometric ensemble as well). Equivalent results for the isometric ensemble will be quoted directly as they follow immediately by replacing $\Delta\sigma$ with $B\Delta\epsilon=B(\epsilon-\epsilon_{c}(T))$ and exchanging the unprimed exponents for the primed ones. This connection holds for all the exponent identities, except for the stress-strain exponents $\theta$ and $\theta^{\prime}$, which require a minor modification due to their definition (Eq. 4.4), as will be clear later on. By choosing $b=|\Delta\sigma|^{-\nu}\propto\xi$, we scale out the $\Delta\sigma$ dependence to obtain $F_{s}=|\Delta\sigma|^{\nu D}\Psi_{F}\left(\dfrac{\mathcal{E}}{|\Delta\sigma|^{\phi}}\right)\;,$ (9.2) where $\Psi_{F}(x)=\tilde{\Psi}_{F}(1,x)$. The crossover or gap exponent is given by $\phi=\nu y_{\mathcal{E}}$ (correspondingly $\phi^{\prime}=\nu^{\prime}y^{\prime}_{\mathcal{E}}$ in the isometric ensemble).
The full scaling form, $F_{s}=|\Delta\sigma|^{\nu D}\Psi_{F}\left(\dfrac{\mathcal{E}}{|\Delta\sigma|^{\phi}},\kappa|\Delta\sigma|^{\nu\eta},\dfrac{\\{Y,B\\}}{|\Delta\sigma|^{\nu\eta_{u}}},R|\Delta\sigma|^{\nu}\right)\;,$ (9.3) is a function of five distinct variables, four of which we have suppressed in Eq. 9.2. The height field has a scaling dimension $\zeta$. Right at buckling, we expect $\langle h({\bf r})^{2}\rangle\sim\int_{1/R}\mathrm{d}^{D}q/(\kappa_{R}({\bf q})q^{4})\sim R^{2\zeta}$, which gives Aronovitz and Lubensky (1988); Guitter _et al._ (1988, 1989); Aronovitz _et al._ (1989); Le Doussal and Radzihovsky (1992) $\zeta=\dfrac{4-D-\eta}{2}\;,\quad\zeta^{\prime}=\dfrac{4-D-\eta^{\prime}}{2}\;.$ (9.4) Similarly, the requirement that the rotationally invariant nonlinear strain tensor $u_{ij}$ renormalize correctly leads to a Ward identity exponent relation Guitter _et al._ (1988, 1989); Aronovitz and Lubensky (1988); Aronovitz _et al._ (1989); Le Doussal and Radzihovsky (1992), now stated in general $D$ $2\eta+\eta_{u}=4-D\;,\quad 2\eta^{\prime}+\eta^{\prime}_{u}=4-D\;.$ (9.5) These are well known identities, which we will use below in deriving additional exponent relations. Similar to the mean field treatment of buckled ribbons in Ref. Hanakata _et al._ (2021), the average height $\langle h\rangle$ serves as an order parameter for the buckling transition here. By definition, we obtain (for general $\mathcal{E}$) $\langle h\rangle=-\dfrac{\partial F_{s}}{\partial\mathcal{E}}=|\Delta\sigma|^{\nu D-\phi}\Psi_{h}\left(\dfrac{\mathcal{E}}{|\Delta\sigma|^{\phi}}\right)$ (9.6) where $\Psi_{h}(x)=\Psi^{\prime}_{F}(x)$. Now, we identify $\langle h\rangle\sim b^{\zeta}\propto|\Delta\sigma|^{-\nu\zeta}$, which gives the exponent relations $\phi=\dfrac{\nu}{2}(4+D-\eta)\;,\quad\phi^{\prime}=\dfrac{\nu^{\prime}}{2}(4+D-\eta^{\prime})\;.$ (9.7) As a consistency check, one can easily confirm that this relation for the gap exponents is equivalent to demanding a trivial renormalization of the external field $\mathcal{E}$ (see Eq. 7.6 in Sec. 7 for the $D=2$ version). In the zero field limit ($\mathcal{E}=0$), for strong compression, we spontaneously break symmetry by buckling, which leads to a finite $\langle h\rangle$. As $\Delta\sigma\to 0$ in a finite system, the rescaled bulk and Young’s moduli ($B$ and $Y$) diverge as $|\Delta\sigma|^{-\eta_{u}\nu}$, while the rescaled system size ($R|\Delta\sigma|^{\nu}$) and bending rigidity ($\kappa|\Delta\sigma|^{\nu\eta}$) become vanishingly small. But the latter can’t be set to zero quite yet. We know from our mean field analysis that upon buckling, $\langle h\rangle\propto R/Y^{1/2}$ (Eq. A.4 in Sec. 5). As a result, the elastic moduli act as relevant scaling variables at the transition while the system size behaves as a _dangerously irrelevant variable_ Amit and Peliti (1982) whose scaling nontrivially affects the critical exponents. For $\mathcal{E}=0$, this observation leads to $\lim_{\Delta\sigma\to 0}\Psi_{h}(0)\propto\dfrac{R}{Y^{1/2}}|\Delta\sigma|^{\nu(1+\eta_{u}/2)}\;.\\\ $ (9.8) Note that this interesting size dependence for the scaling function is not a statement of conventional finite-size scaling Amit and Martin-Mayor (2005); it is instead a unique feature arising from the long wavelength nature of the buckling transition. By combining Eqs. 9.4, 9.5 with this asymptotic behaviour, we can compute the order parameter exponents $\beta,\beta^{\prime}$ for the buckled height (Eq. 
4.1) in our two ensembles to be $\beta=\nu\left(1-\dfrac{\eta}{2}\right)\;,\quad\beta^{\prime}=\nu^{\prime}\left(1-\dfrac{\eta^{\prime}}{2}\right)\;.$ (9.9) This exponent identity is new and distinct from the usual hyperscaling relation that relates $\beta$ and $\eta$ in conventional critical phenomena (the latter in our current notation would read as $\beta=-\nu\zeta<0$, which is obviously wrong). We can similarly compute the susceptibility exponents $\gamma,\gamma^{\prime}$ defined in Sec. 4. From our mean field analysis, we know that $\chi\propto R^{2}$ (Eq. 5.7). Upon taking a derivative of Eq. 9.6 and evaluating it at $\mathcal{E}=0$, we obtain $\lim_{\Delta\sigma\to 0}\Psi^{\prime}_{h}(0)\propto R^{2}|\Delta\sigma|^{2\nu}\;.$ (9.10) The susceptibility scaling exponents then satisfy $\gamma=\nu(2-\eta)\;,\quad\gamma^{\prime}=\nu^{\prime}(2-\eta^{\prime})\;.$ (9.11) Although we recover the standard Fisher’s identity, its appearance is in fact nontrivial, as is easily seen by noting that Fisher’s identity reflects the equilibrium fluctuation-response relation Chaikin _et al._ (1995), $k_{B}T\chi=\int\mathrm{d}^{D}r\langle h({\bf r})h({\bf 0})\rangle\;,$ (9.12) which, by a naïve application of scaling, would give $\chi\sim\xi^{4-\eta}$, resulting in $\gamma=\nu(4-\eta)$ instead of Eq. 9.11. The resolution, as before, lies in the nontrivial size dependence of the correlation function integral (because $h({\bf r})\sim R$), which leads to $k_{B}T\chi\sim\xi^{4-\eta}(R/\xi)^{2}\propto\xi^{2-\eta}$, correctly producing Eq. 9.11. In addition, combining Eqs. 9.9 and 9.11 results in another unusual exponent identity $\gamma=2\beta\;,\quad\gamma^{\prime}=2\beta^{\prime}\;.$ (9.13) Note that the inequality $\nu\neq\nu^{\prime}$ (discussed below) will necessarily imply additional exponent differences between the two ensembles, such as $\phi\neq\phi^{\prime}$, $\beta\neq\beta^{\prime}$ and $\gamma\neq\gamma^{\prime}$ (see Table 1 for a summary). Next we move on to the nonlinear response in the presence of an external field. For finite $\mathcal{E}$ we now take the limit $\Delta\sigma\to 0$, which requires that we focus on the $x\to\infty$ asymptotics of $\Psi_{h}(x)\sim x^{1/\delta}$. Once again, we must be careful with regard to scaling with the elastic moduli and the system size. The mean field analysis (Eq. 5.6) dictates that the prefactor to the nonlinear response itself scales as $(R^{4}/Y)^{1/3}$. Upon taking this dependence into account, we have $\lim_{\Delta\sigma\to 0}\Psi_{h}\left(\dfrac{\mathcal{E}}{|\Delta\sigma|^{\phi}}\right)\propto\left(\dfrac{\mathcal{E}}{|\Delta\sigma|^{\phi}}\right)^{1/\delta}\left(\dfrac{R^{4}|\Delta\sigma|^{4\nu}}{Y|\Delta\sigma|^{-\nu\eta_{u}}}\right)^{1/3}\;.$ (9.14) By demanding that the $\Delta\sigma$ dependence cancel as $\Delta\sigma\to 0$, we determine $\delta$ and $\delta^{\prime}$. Remarkably, upon using the other identities from Eqs. 9.4, 9.5 and 9.7, we obtain $\delta=\delta^{\prime}=3\;,$ (9.15) which is an exact result, independent of dimension! In Appendix E, we show how the above results can all be combined into a simpler scaling form for $\langle h\rangle$ with a _single_ size-dependent scaling variable and a modified gap exponent.
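As a concrete check of Eqs. 9.9 and 9.11, here is a worked example we add using the one-loop values quoted in Sec. 8 for $d_{c}=1$ ($\eta^{\prime}=12\varepsilon/25$ from Eq. 8.16 and $\nu^{\prime}=1/2+13\varepsilon/100$ from Eq. 8.19): $\beta^{\prime}=\nu^{\prime}\left(1-\dfrac{\eta^{\prime}}{2}\right)=\left(\dfrac{1}{2}+\dfrac{13\varepsilon}{100}\right)\left(1-\dfrac{6\varepsilon}{25}\right)=\dfrac{1}{2}+\dfrac{\varepsilon}{100}+\mathcal{O}(\varepsilon^{2})\;,\quad\gamma^{\prime}=2\beta^{\prime}=1+\dfrac{\varepsilon}{50}+\mathcal{O}(\varepsilon^{2})\;,$ in agreement with the isometric entries of Table 1, while the corresponding isotensional combinations reduce exactly to $\beta=1/2$ and $\gamma=1$ because $\nu=1/(2-\eta)$ (Eq. 8.10) cancels the $\eta$ dependence.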
Exponent | Relation | Isotensional: $D=2,\,d=3^{\dagger}$ | Isotensional: $D=4-\varepsilon,\,d=D+d_{c}$ | Isometric: $D=2,\,d=3^{\dagger}$ | Isometric: $D=4-\varepsilon,\,d=D+d_{c}$
---|---|---|---|---|---
$\eta,\,\eta^{\prime}$ | $\eta^{\prime}=\eta$ | $0.821$ | $\dfrac{12\varepsilon}{24+d_{c}}$ | $0.821$ | $\dfrac{12\varepsilon}{24+d_{c}}$
$\eta_{u},\eta^{\prime}_{u}$ | $\begin{aligned} 2\eta+\eta_{u}&=4-D\\\ 2\eta^{\prime}+\eta^{\prime}_{u}&=4-D\end{aligned}$ | $0.358$ | $\dfrac{d_{c}\varepsilon}{24+d_{c}}$ | $0.358$ | $\dfrac{d_{c}\varepsilon}{24+d_{c}}$
$\nu,\nu^{\prime}$ | $\begin{aligned} \nu&=1/(2-\eta)\\\ \nu^{\prime}&=\nu/(D\nu-1)\end{aligned}$ | $0.848$ | $\dfrac{1}{2}+\dfrac{3\varepsilon}{24+d_{c}}$ | $1.218$ | $\dfrac{1}{2}+\dfrac{(12+d_{c})\varepsilon}{(96+4d_{c})}$
$\beta,\beta^{\prime}$ | $\begin{aligned} \beta&=\nu(1-\eta/2)\\\ \beta^{\prime}&=\nu^{\prime}(1-\eta^{\prime}/2)\end{aligned}$ | $\dfrac{1}{2}$ | $\dfrac{1}{2}$ | $0.718$ | $\dfrac{1}{2}+\dfrac{d_{c}\varepsilon}{96+4d_{c}}$
$\gamma,\gamma^{\prime}$ | $\begin{aligned} \gamma&=\nu(2-\eta)=2\beta\\\ \gamma^{\prime}&=\nu^{\prime}(2-\eta^{\prime})=2\beta^{\prime}\end{aligned}$ | $1$ | $1$ | $1.436$ | $1+\dfrac{d_{c}\varepsilon}{48+2d_{c}}$
$\delta,\delta^{\prime}$ | $\begin{aligned} \delta\beta&=\gamma+\beta\\\ \delta^{\prime}\beta^{\prime}&=\gamma^{\prime}+\beta^{\prime}\end{aligned}$ | $3$ | $3$ | $3$ | $3$
$\theta,\theta^{\prime}$ | $\begin{aligned} \theta&=1/(\nu D-1)\\\ \theta^{\prime}&=\nu^{\prime}D-1,\,\theta^{\prime}=\theta\end{aligned}$ | $1.436$ | $1+\dfrac{d_{c}\varepsilon}{48+2d_{c}}$ | $1.436$ | $1+\dfrac{d_{c}\varepsilon}{48+2d_{c}}$
$\phi,\phi^{\prime}$ | $\begin{aligned} \phi&=\nu(4+D-\eta)/2\\\ \phi^{\prime}&=\nu^{\prime}(4+D-\eta^{\prime})/2\end{aligned}$ | $2.196$ | $2+\dfrac{(12-d_{c})\varepsilon}{96+4d_{c}}$ | $3.155$ | $2+\dfrac{3(4+d_{c})\varepsilon}{4(24+d_{c})}$

† Exponents computed using $\eta=4/(1+\sqrt{15})$ obtained from the self-consistent screening approximation Le Doussal and Radzihovsky (1992).

Table 1: The two buckling universality classes. We list all the scaling exponents along with the relevant scaling identities for the isotensional (unprimed exponents) and isometric (primed exponents) ensembles. The exponents are obtained within an $\varepsilon$-expansion ($D=4-\varepsilon,d=D+d_{c}$), accurate to $\mathcal{O}(\varepsilon)$, and also in physical dimensions ($D=2,d=3$) using the best self-consistent estimates for $\eta$ Le Doussal and Radzihovsky (1992). Only $\beta=1/2$, $\gamma=1$ (isotensional) and $\delta=\delta^{\prime}=3$ (both ensembles) are exact to all orders and independent of dimensionality.

Finally, we study the anomalous stress-strain curve exponents $\theta,\theta^{\prime}$ defined by Eqs. 4.4. These quantities are simpler as they approach finite limits when the system size $R\to\infty$. Here we distinguish the isotensional and the isometric ensembles, as $\theta$ and $\theta^{\prime}$ are defined differently in the two. We now note that the scaling variable is $B\Delta\epsilon$ in the isometric ensemble, set $\mathcal{E}=0$ in Eq. 9.2 and employ the definitions in Eqs.
3.6, 3.9 to get $\displaystyle\langle\epsilon\rangle$ $\displaystyle=-\dfrac{\partial F_{s,\sigma}}{\partial\sigma_{0}}\propto|\Delta\sigma|^{\nu D-1}\sim|\Delta\sigma|^{1/\theta}\quad(\textrm{isotensional})\;,$ (9.16a) $\displaystyle\dfrac{\langle\sigma\rangle}{B}$ $\displaystyle=\dfrac{\partial F_{s,\epsilon}}{\partial(B\epsilon)}\propto|B\Delta\epsilon|^{\nu^{\prime}D-1}\sim|\Delta\epsilon|^{\theta^{\prime}}\quad(\textrm{isometric})\;.$ (9.16b) In this mechanical context $\theta,\theta^{\prime}$ take on the role usually played by energy scaling in conventional critical phenomena. We now obtain exponent relations analogous to Josephson’s hyperscaling relation Goldenfeld (2018) $\theta=\dfrac{1}{\nu D-1}\;,\quad\theta^{\prime}=\nu^{\prime}D-1\;.$ (9.17) We shall see later that $\theta,\theta^{\prime}>1$ (see Table 1), which leads to a crucial distinction between the two ensembles. In the isotensional ensemble, since $1/\theta<1$, the anomalous sublinear response dominates any linear Hookean response as $\Delta\sigma\to 0$ Košmrlj and Nelson (2016). In contrast, in the isometric ensemble, since $\theta^{\prime}>1$, the dominant strain response is in fact the nonsingular linear term as $\Delta\epsilon\to 0$. This dichotomy reflects a crucial physical consequence of the different boundary conditions: in the isotensional ensemble, the sheet is infinitely compliant to homogeneous dilations or contractions in the plane, which is a zero mode of the system, but in the isometric ensemble, the clamped boundary conditions prohibit this zero mode and the sheet has a finite compliance to homogeneous isotropic distortions. We now discuss one more scaling relation that is _only_ true in the isotensional ensemble and _not_ in the isometric ensemble. In the isotensional case, as we saw before in Sec. 7, the in-plane tension $\sigma=\sigma_{0}$ does not receive any graphical corrections as $\bar{v}=0$ identically. This is true in any dimension and to all orders in perturbation theory, as a consequence of the fact that $\sigma_{0}$ is the sole term that breaks rotational invariance, while the bending and nonlinear stretching terms preserve rotational symmetry. This nonrenormalization condition then implies $\nu=\dfrac{1}{2-\eta}\;,$ (9.18) as we already saw by explicit calculation in Eq. 8.10. Because we must have $\eta,\eta_{u}>0$, Eq. 9.5 implies that $\eta\leq(4-D)/2$ (for $D\leq 4$). This inequality then shows that $\nu\leq 2/D$ always. An additional important consequence of Eq. 9.18 is that both $\beta$ and $\gamma$ take on their mean-field values, $\beta=\dfrac{1}{2}\;,\quad\gamma=1\;,$ (9.19) in any dimension. Note that such relations do _not_ hold in the isometric ensemble, as can be seen in the $\varepsilon$-expansion displayed in Table 1. When Eq. 9.18 for $\nu$ is plugged into the hyperscaling relation (Eq. 9.17), it also gives $\theta=(2-\eta)/(D-2+\eta)$, which is consistent with the scaling expected from $\langle\epsilon\rangle\sim\int(\mathrm{d}^{D}r/V_{D})\langle|\bm{\nabla}h|^{2}\rangle\sim\xi^{2\zeta-2}$. This result agrees with previously derived scaling relations in arbitrary dimensions Guitter _et al._ (1989); Aronovitz and Lubensky (1988) and leads to $\theta=(2-\eta)/\eta$ when $D=2$ Košmrlj and Nelson (2016). This completes the derivation of the various scaling exponents in the two ensembles.
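Since every entry of the $D=2$, $d=3$ column of Table 1 follows from the relations above once $\eta$ is fixed, the bookkeeping can be made explicit in a few lines. The following sketch is ours (the variable names are our own choices, not part of the original analysis); it simply takes the self-consistent-screening value $\eta=4/(1+\sqrt{15})$ as input and evaluates the scaling relations collected in Table 1:

```python
import numpy as np

D = 2.0
eta = 4.0 / (1.0 + np.sqrt(15.0))      # self-consistent screening estimate, ~0.821
eta_u = 4.0 - D - 2.0 * eta            # Ward identity (Eq. 9.5)
nu = 1.0 / (2.0 - eta)                 # isotensional (Eq. 9.18)
nu_p = nu / (D * nu - 1.0)             # isometric, nu' = nu/(D nu - 1) (Table 1)
beta, beta_p = nu * (1.0 - eta / 2.0), nu_p * (1.0 - eta / 2.0)        # Eq. 9.9
gamma, gamma_p = 2.0 * beta, 2.0 * beta_p                              # Eq. 9.13
theta, theta_p = 1.0 / (nu * D - 1.0), nu_p * D - 1.0                  # Eq. 9.17
phi, phi_p = nu * (4.0 + D - eta) / 2.0, nu_p * (4.0 + D - eta) / 2.0  # Eq. 9.7

for name, val in [("eta", eta), ("eta_u", eta_u), ("nu", nu), ("nu'", nu_p),
                  ("beta", beta), ("beta'", beta_p), ("gamma", gamma),
                  ("gamma'", gamma_p), ("theta", theta), ("theta'", theta_p),
                  ("phi", phi), ("phi'", phi_p)]:
    print(f"{name:7s} = {val:.3f}")
```

Running it reproduces the quoted values $\nu\approx 0.848$, $\nu^{\prime}\approx 1.218$, $\beta^{\prime}\approx 0.718$, $\gamma^{\prime}=\theta=\theta^{\prime}\approx 1.436$ and $\phi^{\prime}\approx 3.155$.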
The values of the exponents computed within an $\varepsilon$-expansion ($D=4-\varepsilon,d=D+d_{c}$) and by using estimates from a self-consistent calculation Le Doussal and Radzihovsky (1992) in $D=2$ and $d=3$ dimensions are displayed in Table 1. With these scaling identities in hand, we can finally address the last key result of the paper, which is to show that the two ensembles are related to each other via a mechanical analog of Fisher renormalization Fisher (1968). As the isotensional and isometric ensembles are thermodynamic duals of each other, the corresponding free energy densities are related to each other via a Legendre transformation (in the thermodynamic limit $V_{D}\to\infty$) $F_{\epsilon}=\min_{\sigma_{0}}\left(F_{\sigma}+\sigma_{0}\epsilon\right)=F_{\sigma}(\sigma_{*})+\sigma_{*}\epsilon\;,$ (9.20) where $\sigma_{*}$ solves $\epsilon=-\partial F_{\sigma}/\partial\sigma_{0}|_{\sigma_{0}=\sigma_{*}}$, and the free energy densities are given by $F_{\sigma,\epsilon}=-(k_{B}T/V_{D})\ln\mathcal{Z}_{\sigma,\epsilon}$. Now, near the buckling transition (at zero symmetry-breaking field $\mathcal{E}=0$), we can use the scaling theory developed above to obtain $\Delta\epsilon\sim|\sigma_{*}-\sigma_{c}|^{1/\theta}\implies\sigma_{*}-\sigma_{c}\sim|\Delta\epsilon|^{\theta}\;,$ (9.21) where we have assumed $\theta>1$ and retained only the leading order term as $\Delta\epsilon\to 0$. Upon combining Eq. 9.21 with the scaling of the singular part of the free energy densities $F_{\epsilon}\sim|\Delta\epsilon|^{1+\theta^{\prime}}$ and $F_{\sigma}\sim|\Delta\sigma|^{1+1/\theta}$ (the free energy densities also have nonsingular terms that need to be properly accounted for: in the limit $\Delta\sigma\to 0$ and $\Delta\epsilon\to 0$, we have the expansions $\mathcal{F}_{\sigma}=a_{0}+a_{1}\Delta\sigma+a_{2}|\Delta\sigma|^{1+1/\theta}+\mathcal{O}(\Delta\sigma^{2})$ and $\mathcal{F}_{\epsilon}=b_{0}+b_{1}\Delta\epsilon+b_{2}|\Delta\epsilon|^{1+\theta^{\prime}}+\mathcal{O}(\Delta\epsilon^{2})$, where $a_{0,1,2}$ and $b_{0,1,2}$ are constants), we require that both sides of Eq. 9.20 scale in the same way as $\Delta\sigma\to 0$. This constraint gives the equality $\theta=\theta^{\prime}\;.$ (9.22) Although $\theta=\theta^{\prime}$, from Eq. 9.17 we immediately see that the correlation length exponents must differ in the two ensembles, $\nu\neq\nu^{\prime}$. The simple form of the nontrivial relation in Eq. 9.22 reflects the definition of $\theta,\theta^{\prime}$ in Eq. 4.4. Eqs. 9.20, 9.21 now lead to $\displaystyle\beta^{\prime}=\beta\theta\;,\quad\gamma^{\prime}=\gamma\theta\;,$ (9.23a) $\displaystyle\phi^{\prime}=\phi\theta\;,\quad\nu^{\prime}=\nu\theta\;.$ (9.23b) The last of these relations can be solved using Eq. 9.17 to explicitly give the important connection $\nu^{\prime}=\nu/(\nu D-1)$. With the help of Eq. 9.18, this relation simplifies to $\nu^{\prime}=\dfrac{1}{D-2+\eta}\;,$ (9.24) which was obtained previously Guitter _et al._ (1989), without, however, recognizing a possible distinction in ensembles. In $D=2$, we get $\nu^{\prime}=1/\eta$, which, upon noting that $\eta<1$, leads to $\nu^{\prime}>1$, which has been observed in early Monte Carlo simulations of thermalized buckling in clamped sheets Guitter _et al._ (1990).
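To illustrate the Fisher-renormalization relations concretely, here is a worked check we add using the $d_{c}=1$ one-loop values: $\theta=\dfrac{1}{\nu D-1}=\dfrac{1}{\left(\frac{1}{2}+\frac{3\varepsilon}{25}\right)(4-\varepsilon)-1}=1+\dfrac{\varepsilon}{50}+\mathcal{O}(\varepsilon^{2})\;,\quad\nu\theta=\left(\dfrac{1}{2}+\dfrac{3\varepsilon}{25}\right)\left(1+\dfrac{\varepsilon}{50}\right)=\dfrac{1}{2}+\dfrac{13\varepsilon}{100}+\mathcal{O}(\varepsilon^{2})=\nu^{\prime}\;,$ in agreement with Eq. 8.19; likewise $\theta^{\prime}=\nu^{\prime}D-1=1+\varepsilon/50=\theta$, as required by Eq. 9.22.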
In general $D$, we can show that the isotensional and isometric ensembles have differing correlation length exponents such that $\nu<\dfrac{2}{D}<\nu^{\prime}\;,$ (9.25) which is consistent with our renormalization group results in $D=4-\varepsilon$ dimensions. Finally, demanding that Eq. 9.23 be consistent with Eqs. 9.9, 9.11 leads to equality of the eta exponents in the two ensembles $\eta^{\prime}=\eta\;,\quad\eta^{\prime}_{u}=\eta_{u}\;,$ (9.26) the latter being a consequence of the Ward identity (Eq. 9.5). We note that the difference in some of the exponents between the ensembles is not simply because the control variables ($\Delta\epsilon$ and $\Delta\sigma$) have different dimensions. In fact, both the scaling variables we use have dimensions of stress and are equivalent to each other: we use $\Delta\sigma$ in the isotensional ensemble and $B\Delta\epsilon$ (not $\Delta\epsilon$) in the isometric ensemble. Hence the difference in exponents between the ensembles is due to a genuine change in the fixed point and its associated universality class, as confirmed by our renormalization group calculations. These results are reminiscent of the Fisher renormalization of critical exponents due to hidden variables Fisher (1968), and are also related to the problems of a constrained magnet Rudnick _et al._ (1974) or a compressible magnet Sak (1974), where the presence of a constraint (much like Eq. 9.20) leads to modified exponents. In conventional critical phenomena, such as in 3D magnets or superfluid He, Fisher renormalization doesn’t affect the numerical values of exponents by much, as it usually involves dividing the conventional exponents by $1-\alpha$, where $\alpha$ is the specific heat exponent, which is often a rather small correction Fisher (1968). Here, however, the exponent $\theta$ replaces $1-\alpha$ in the mechanical context, allowing for a much stronger distinction in critical behaviour between the two ensembles. In Table 1, we see that to leading order in an $\varepsilon$-expansion, all the equalities in Eqs. 9.22 and 9.23 are satisfied. Within the $\varepsilon$-expansion, we find that the anomalous exponents are equal to leading order in the two ensembles, i.e., $\eta=\eta^{\prime}$ and $\eta_{u}=\eta^{\prime}_{u}$, as expected from our scaling considerations (Eq. 9.26). All the exponents in both ensembles are recapitulated in Table 1 along with their exponent identities. ## 10 Discussion Although the study of thermalized membranes is more than three decades old Nelson _et al._ (2004), it has been revitalized in recent years by enhanced interest in 2D materials such as graphene and MoS$_{2}$. Motivated by the ability to study extreme mechanics in such ultrathin materials Blees _et al._ (2015), we have investigated the impact of thermal fluctuations on the classic (circa 1757!) Euler buckling instability of thin plates. By viewing the finite temperature buckling transition through the lens of critical phenomena, we have uncovered new exponent relations and remarkable phenomena that tie together geometry, mechanics and fluctuations in a thin elastic sheet. Near a thermodynamic continuous phase transition, fluctuations emerge on all scales and the physics becomes universal. As we have shown, a similar situation arises on the verge of a mechanical instability, such as buckling, though with some surprises. The long-wavelength nature of the buckling transition leads to unusual critical scaling behaviour reflected in the system size dependence of the mechanical response.
Additionally, buckling can be actuated under either isotensional (constant stress) or isometric (constant strain) loading, which as we have found, actually constitute separate universality classes. This remarkable feature highlights the importance of oft-neglected boundary conditions that when clamped can induce a novel thermally generated spontaneous tension and modify important scaling exponents. Our work demonstrates that the inequivalence of mechanical ensembles distinguished by their boundary conditions exemplifies the phenomenon of Fisher renormalization Fisher (1968) in a mechanical context. We emphasize the salient role of geometry in isotropic thermalized buckling. Much of the phenomena discussed here arise due to the inevitable geometric coupling between in-plane stretching and out of plane bending, ubiquitous in thin plates but absent in lower dimensional counterparts, such as slender filaments. As a consequence, there is no analogue of our results in the finite temperature buckling of beams and polymers Odijk (1998); *hansen1999buckling; *baczynski2007stretching; *emanuel2007buckling; *bedi2015finite; *stuij2019stochastic. Single-molecule measurements with polymers have noted an inequivalence of similar mechanical ensembles Lubensky and Nelson (2002); Keller _et al._ (2003); *sinha2005inequivalence, though in this case due to finite size effects. In contrast for thin sheets, the ensemble inequivalence at buckling survives the thermodynamic limit, as it instead originates from the tensionless flat phase being a critical phase, with fluctuations on all scales. Our work is directly relevant to recent experiments that have probed the mechanics of 2D materials in various geometries Blees _et al._ (2015); Nicholl _et al._ (2015); *nicholl2017hidden. Boundary manipulation is a popular way to induce strong deformations and morphologies Bao _et al._ (2009); Vandeparre _et al._ (2011). When coupled to the material’s electronic properties, this also allows for strain engineering synthetic gauge fields Pereira _et al._ (2010); *guinea2010energy. Our results suggest that the interplay of thermal fluctuations and boundary constraints can be important in many of these contexts, allowing for enhanced control of the emergent mechanics in nanoscale systems. Anisotropic buckling is particularly relevant in such solid-state devices, either in terms of uniaxially compressed ribbons Košmrlj and Nelson (2016) or sheets crushed in the presence of a background aligning field Le Doussal and Radzihovsky (2021), both of which pose challenging directions for future research. It is also known that the influence of boundaries can often persist in macroscopically large diffuse regions in slender elastic bodies Schroll _et al._ (2011), an effect that is amplified by the geometry of plates Lobkovsky and Witten (1997); *cerda2004elements; *barois2014curved and shells Mahadevan _et al._ (2007); *santangelo2013nambu. It would be interesting to explore the consequence of thermal fluctuations in these cases, where boundary effects are again particularly important. While much of our analysis focused on the buckling transition, far beyond the threshold, the sheet adopts a curved profile whose mechanical description is akin to that of thin shells. The ensuing curvature can be manipulated to control localized deformations Vaziri and Mahadevan (2008); *evans2017geometric and fluctuation driven nonlinear response Paulose _et al._ (2012); *kovsmrlj2017statistical. 
Such curved geometries offer tunable mechanisms to modulate the mechanical and vibrational properties of electromechanical resonators Lifshitz and Cross (2008), another attractive direction for future research. Postbuckled states also often exhibit bistability and sudden snap-through transitions that display critical slowing down even in the absence of fluctuations Gomez _et al._ (2017). It would be of interest to extend our analysis to incorporate such hysteretic and dynamic effects at finite temperature, but this remains a formidable challenge. We hope that this work will encourage future explorations at the rich intersection of geometry, statistical mechanics and elastic instabilities. _Note added_: We recently became aware of related work by Leo Radzihovsky and Pierre Le Doussal Le Doussal and Radzihovsky (2021) on the buckling transition in a thermalized membrane subjected to an external aligning field, which, unlike our work, breaks rotational symmetry explicitly in the bulk. ###### Acknowledgements. SS acknowledges the support of the Harvard Society of Fellows. We thank Mark Bowick, Paul Hanakata, Andrej Košmrlj, John Toner and Leo Radzihovsky for illuminating discussions. This work was also supported by the NSF through the Harvard Materials Research Science and Engineering Center, via Grant No. DMR-2011754, as well as by Grant No. DMR-1608501. ## Appendix A Integrating out the in-plane phonons As ${\bf u}$ appears quadratically in the Hamiltonian (Eq. 2.1), we can exactly integrate it out. To do this, we separate the average strain ($u^{0}_{ij}$) from the nonzero-wavelength deformations (i.e., ${\bf q}\neq{\bf 0}$, denoted by the prime on the ${\bf q}$ integral), to write $\displaystyle u_{ij}({\bf r})=u_{ij}^{0}+\int_{{\bf q}}^{\prime}\;e^{i{\bf q}\cdot{\bf r}}\left[\dfrac{i}{2}(q_{i}u_{j}({\bf q})+q_{j}u_{i}({\bf q}))+\mathcal{A}_{ij}({\bf q})\right]\;,$ (A.1) $\displaystyle\textrm{with}\quad u^{0}_{ij}=\dfrac{1}{A}\int\mathrm{d}^{2}r\;u_{ij}\;,\quad\mathcal{A}_{ij}({\bf r})=\dfrac{1}{2}\partial_{i}h\partial_{j}h\;,$ (A.2) where $\int_{{\bf q}}=\int\mathrm{d}^{2}q/(2\pi)^{2}$. Note that, while there are only two independent in-plane phonon degrees of freedom ($u_{i}({\bf q})$) for nonzero wavevector, the homogeneous part of the strain tensor ($u^{0}_{ij}$) has _three_ independent components, corresponding to the three distinct modes of macroscopically deforming a 2D solid. It is well known that only the transverse component of $\mathcal{A}_{ij}$ is important, as the rest can be absorbed into a global translation zero mode (constant displacement) Nelson _et al._ (2004).
The total Hamiltonian now takes the form $\mathcal{H}=\mathcal{H}^{\prime}+\mathcal{H}^{0}$, with $\displaystyle\mathcal{H}^{\prime}$ $\displaystyle=\int\dfrac{\mathrm{d}^{2}q}{(2\pi)^{2}}\left[\dfrac{\kappa}{2}q^{4}\left|h_{\bf q}\right|^{2}+\dfrac{1}{2}u_{i}({\bf q})\left(\mu q^{2}\mathcal{P}^{T}_{ij}+(2\mu+\lambda)q^{2}\mathcal{P}^{L}_{ij}\right)u_{j}(-{\bf q})\right]-\mathcal{E}h_{{\bf q}={\bf 0}}$ $\displaystyle+\int^{\prime}\dfrac{\mathrm{d}^{2}q}{(2\pi)^{2}}\left[i\mu\left(q_{i}u_{j}({\bf q})+q_{j}u_{i}(-{\bf q})\right)\mathcal{A}_{ij}(-{\bf q})+i\lambda q_{i}u_{i}({\bf q})\mathcal{A}_{kk}(-{\bf q})+\dfrac{1}{2}\left(2\mu|\mathcal{A}_{ij}({\bf q})|^{2}+\lambda|\mathcal{A}_{kk}({\bf q})|^{2}\right)\right]\;,$ (A.3) $\displaystyle\mathcal{H}^{0}$ $\displaystyle=\dfrac{A}{2}\left[2\mu(u^{0}_{ij})^{2}+\lambda(u^{0}_{kk})^{2}\right]-A\sigma^{\rm ext}_{ij}\left(u^{0}_{ij}-\mathcal{A}^{0}_{ij}\right)\;,$ (A.4) where $\mathcal{H}^{\prime}$ includes all contributions from the ${\bf q}\neq{\bf 0}$ in-plane phonon modes and $\mathcal{H}^{0}$ includes all the terms corresponding to the ${\bf q}={\bf 0}$ phonon modes. Here we have used the transverse ($\mathcal{P}^{T}_{ij}({\bf q})=\delta_{ij}-q_{i}q_{j}/q^{2}$) and longitudinal ($\mathcal{P}^{L}({\bf q})=q_{i}q_{j}/q^{2}$) projection operators and written $\mathcal{A}^{0}_{ij}=(1/A)\int\mathrm{d}{\bf r}\;\mathcal{A}_{ij}({\bf r})$. The ${\bf q}={\bf 0}$ and ${\bf q}\neq{\bf 0}$ in-plane phonon modes clearly decouple from each other, so when we integrate them out, the total free energy is simply $\mathcal{F}=\mathcal{F}^{\prime}+\mathcal{F}^{0}$, where $\mathcal{F}^{\prime}$ arises from integrating out the ${\bf q}\neq{\bf 0}$ phonons and $\mathcal{F}^{0}$ arises from the ${\bf q}={\bf 0}$ phonon modes. The former is a standard calculation Nelson _et al._ (2004), which gives $\mathcal{F}^{\prime}=\int\mathrm{d}^{2}r\left\\{\dfrac{\kappa}{2}(\nabla^{2}h)^{2}+\dfrac{Y}{2}\left(\dfrac{1}{2}\mathcal{P}^{T}_{ij}\partial_{i}h\partial_{j}h\right)^{2}-\mathcal{E}h\right\\}\;.$ (A.5) Note that this part of the calculation is common to both ensembles. For the zero mode calculation, we consider the two ensembles separately. In the isotensional ensemble, $\sigma^{\rm ext}_{ij}=\sigma_{0}\delta_{ij}$, which gives $\mathcal{H}^{0}_{\sigma}=\dfrac{A}{2}\left[2\mu(\tilde{u}^{0}_{ij})^{2}+(\mu+\lambda)(u^{0}_{kk})^{2}\right]-A\sigma_{0}\left(u^{0}_{kk}-\mathcal{A}^{0}_{kk}\right)\;,$ (A.6) where we have decomposed $u^{0}_{ij}$ into its deviatoric part ($\tilde{u}^{0}_{ij}=u_{ij}^{0}-\delta_{ij}u_{kk}^{0}/2$) and its trace $u^{0}_{kk}$. Both the shear ($\tilde{u}_{ij}^{0}$) and the dilation ($u^{0}_{kk}$) components of the homogeneous strain can now be integrated over freely to obtain the zero mode contribution to the free energy, $\mathcal{F}^{0}_{\sigma}=A\sigma_{0}\mathcal{A}^{0}_{kk}=\dfrac{\sigma_{0}}{2}\int\mathrm{d}^{2}r\;|\bm{\nabla}h|^{2}\;,$ (A.7) in the isotensional ensemble Košmrlj and Nelson (2016). In the isometric ensemble, we set $\sigma^{\rm ext}_{ij}=0$ and instead have $(1/A)\int\mathrm{d}{\bf r}\bm{\nabla}\cdot{\bf u}=\epsilon$. The homogeneous part of the strain tensor is then given by $u^{0}_{ij}=\tilde{u}^{0}_{ij}+\dfrac{\delta_{ij}}{2}\left(\epsilon+\mathcal{A}^{0}_{kk}\right)\;,$ (A.8) where we have once again separated out the deviatoric shear component ($\tilde{u}^{0}_{ij}$).
We immediately see that, while in the isotensional ensemble, all three components of $u^{0}_{ij}$ were freely integrated over, in the isometric ensemble, only _two_ out of the three degrees of freedom can be freely integrated over. The clamped boundary conditions prevent homogeneous dilations or contractions, but the two homogeneous shear deformations in $\tilde{u}^{0}_{ij}$ continue to be zero modes. Upon integrating out $\tilde{u}^{0}_{ij}$, we obtain $\displaystyle\mathcal{F}^{0}_{\epsilon}$ $\displaystyle=\dfrac{A}{2}(\mu+\lambda)\left(\epsilon+\mathcal{A}^{0}_{kk}\right)^{2}$ $\displaystyle=\dfrac{B}{2A}\int\mathrm{d}^{2}r\left[\epsilon+\dfrac{1}{2A}\int\mathrm{d}^{2}r^{\prime}|\bm{\nabla}^{\prime}h|^{2}\right]^{2}\;,$ (A.9) with the bulk modulus $B=\mu+\lambda$, in the isometric ensemble. By adding together $\mathcal{F}^{\prime}$ from Eq. A.5 with $\mathcal{F}^{0}_{\sigma,\epsilon}$ (Eqs. A.7 and A.9), we get the total free energy in the two ensembles (Eqs. 3.5 and 3.7 in the main text). ## Appendix B Mean field equation of state For the mean field calculation we use a single mode Galerkin approximation using $h_{0}({\bf r})=H_{0}J_{0}(q_{n}r)$ for a general wavevector $q_{n}$. The linear terms are easily diagonalized by $h_{0}({\bf r})$ which is an eigenfunction of the Laplacian, $\nabla^{2}h_{0}({\bf r})=-q_{n}^{2}h_{0}({\bf r})\;.$ (B.1) The nonlinear terms are computed as follows. We first have the integral $\int\dfrac{\mathrm{d}^{2}r}{A}|\bm{\nabla}h_{0}|^{2}=H_{0}^{2}q_{0}^{2}f(q_{0}R)\;,$ (B.2) where the dimensionless function is given by $\displaystyle f(x)$
# A Note on the Regularity of Thermoelastic Plates with Fractional Rotational Inertial Force Fredy Maglorio Sobrado Suárez Department of Mathematics, Federal University of Technology - Paraná, Brazil ###### Abstract The present work intends to complement the study of the regularity of the solutions of the thermoelastic plate with rotational forces. The rotational forces involve the spectral fractional Laplacian, with power parameter $\tau\in[0,1]$ ($\gamma(-\Delta)^{\tau}u_{tt}$). Previous research on regularity established the following: for the Euler-Bernoulli plate model ($\tau=0$), the first result on the analyticity of the semigroup $S(t)=e^{\mathbb{B}t}$ was obtained by Liu and Renardy [12] in the case of hinged and clamped boundary conditions; for the case $\tau=1$ (Kirchhoff-Love plate), Lasiecka and Triggiani showed that the semigroup is not differentiable [6, 10]; and more recently, in 2020, Tebou et al. [5] showed that for $\tau\in(0,\frac{1}{2})$, $S(t)$ is of Gevrey class $s>\frac{2-\tau}{2-4\tau}$. Our main contribution here is to show that $S(t)$ is of Gevrey class $s>\frac{3-\tau}{2-2\tau}$ when the parameter $\tau$ lies in the interval $[\frac{1}{2},1)$, and also to show that $S(t)$ is not analytic for $\tau\in(0,1]$; both results are for hinged plate/Dirichlet temperature boundary conditions. ††Email address<EMAIL_ADDRESS>(Fredy Maglorio Sobrado Suárez). Keywords: Euler-Bernoulli plate, Kirchhoff-Love plate, Gevrey class, fractional rotational inertial force, analyticity, thermoelastic plates. ## 1 Introduction Consider that $\Omega$ is a bounded open subset of $\mathbb{R}^{n}$, $n\geq 1$, with sufficiently smooth boundary. The system we study is given by the following coupled plate equations: $\displaystyle u_{tt}+\gamma(-\Delta)^{\tau}u_{tt}+\Delta^{2}u+\alpha\Delta\theta=0,\quad$ $\displaystyle x\in\Omega,$ $\displaystyle t>0,$ (1) $\displaystyle\theta_{t}-\kappa\Delta\theta-\beta\Delta u_{t}=0,\quad$ $\displaystyle x\in\Omega,$ $\displaystyle t>0,$ (2) satisfying the boundary conditions $u=\Delta u=0,\quad\theta=0,\quad x\in\partial\Omega,\ t>0,$ (3) and prescribed initial data $\displaystyle u(x,0)=u_{0}(x),\ u_{t}(x,0)=u_{1}(x),\ \theta(x,0)=\theta_{0}(x),\quad x\in\Omega.$ (4) Here $u$ denotes the transversal displacement of the plate. The rotational inertia coefficient is given by the positive number $\gamma$. The exponent $\tau$ lies in the interval $[0,1]$, and the positive numbers $\alpha$ and $\beta$ are the coupling coefficients. Over the years, many researchers have devoted their attention to the study of the asymptotic behavior and regularity of the solutions of the thermoelastic plate system, especially when the rotational inertial force $\gamma(-\Delta)^{\tau}u_{tt}$ is taken into account.
We emphasize that this mathematical model, when the parameter $\tau$ takes the values $0$ and $1$, is called the Euler-Bernoulli and the Kirchhoff-Love thermoelastic plate, respectively. Regarding the asymptotic behavior, the best possible decay rate is exponential. For the Euler-Bernoulli model it is well known that the underlying semigroup is both analytic and exponentially stable in the case of hinged and clamped boundary conditions. However, for the Kirchhoff model, only the exponential stability of the semigroup holds; Lasiecka and Triggiani showed that the semigroup is not only not analytic, it is not even differentiable [6, 10] for either hinged or clamped boundary conditions. Later, many other works followed, establishing the exponential stability of thermoelastic plates (Euler-Bernoulli and Kirchhoff-Love) with various boundary conditions, e.g. [1, 2, 13, 23]. Regarding the analyticity of the semigroup for the Euler-Bernoulli model, the first result was established by Liu and Renardy [12] in the case of hinged and clamped boundary conditions. Subsequently, Liu and Liu [11], and Lasiecka and Triggiani [6, 7, 8, 9], demonstrated other analyticity results under various boundary conditions.

In more recent research from 2020, Tebou et al. [5] studied thermoelastic plates with the fractional rotational inertial force $\gamma(-\Delta)^{\tau}u_{tt}$ for the parameter $\tau\in[0,1]$, on a bounded open subset $\Omega$ of $\mathbb{R}^{n}$, $n\geq 1$, with sufficiently smooth boundary. In that work the authors prove that the semigroup associated to the system is of Gevrey class $s$ for each $s>\frac{2-\tau}{2-4\tau}$, for both the hinged plate/Dirichlet temperature and the clamped plate/Dirichlet temperature boundary conditions, when the parameter $\tau$ lies in the interval $(0,\frac{1}{2})$. They also show that the semigroup $S(t)$ is exponentially stable for hinged boundary conditions for $\tau$ in the interval $[0,1]$, and they finish their investigation by constructing a counterexample showing that, under hinged boundary conditions, the semigroup is not analytic for all $\tau$ in the interval $(0,1)$. To determine the Gevrey class of $S(t)$ they use the frequency domain method, appropriate decompositions of the components of the system, and Lions’ interpolation inequalities. More recent research in this direction can be found in [15, 17, 18, 24].

The rest of this article is organized as follows: in the remainder of Section 1 we study the well-posedness of the system (9)-(12) through semigroup theory. We leave our main contributions for Section 2, which is subdivided into two subsections. In Subsection 2.1 we show that the semigroup $S(t)=e^{\mathbb{B}t}$ is not analytic for $\tau\in(0,1]$, and in Subsection 2.2 we show that the underlying semigroup is of Gevrey class $s$ for each $s>\frac{3-\tau}{2-2\tau}$ for the hinged plate/Dirichlet temperature boundary condition when the parameter $\tau$ is in the interval $[\frac{1}{2},1)$. We end this investigation with an observation on the exponential decay of $S(t)$ for $\tau\in[0,1]$.

#### 1.0.1 Well-posedness of the thermoelastic plate system

In this section we use semigroup theory to ensure the existence and uniqueness of strong solutions of the system (9)-(12). Before this, we recall some preliminary results.

###### Theorem 1 (See Theorem 1.2.4 in [14]) Let $\mathbb{B}$ be a linear operator with domain $D(\mathbb{B})$ dense in a Hilbert space $\mathbb{X}$.
If $\mathbb{B}$ is dissipative and $0\in\rho(\mathbb{B})$, the resolvent set of $\mathbb{B}$, then $\mathbb{B}$ is the generator of a $C_{0}$-semigroup of contractions on $\mathbb{X}$.

To rewrite the system (9)-(12) in abstract form, we consider on the complex Hilbert space $H=L^{2}(\Omega)$ the operator $A:D(A)\subset L^{2}(\Omega)\rightarrow L^{2}(\Omega)$, where $\displaystyle A:=-\Delta,\quad D(A)=H^{2}(\Omega)\cap H^{1}_{0}(\Omega).$ (8) This operator is self-adjoint, positive, and has compact inverse. Therefore, the operator $A^{\theta}$ is self-adjoint and positive for all $\theta\in{\mathbb{R}}$, bounded for $\theta\leq 0$, and the embedding $D(A^{\theta_{1}})\hookrightarrow D(A^{\theta_{2}})$ is continuous for $\theta_{1}>\theta_{2}$. Here, the norm in $D(A^{\theta})$ is given by $\|u\|_{D(A^{\theta})}:=\|A^{\theta}u\|$, $u\in D(A^{\theta})$, where $\|\cdot\|$ denotes the norm in the Hilbert space $L^{2}(\Omega)$. Some of these spaces are $D(A^{0})=L^{2}(\Omega)$, $D(A^{1/2})=H_{0}^{1}(\Omega)$ and $D(A^{-1/2})=H^{-1}(\Omega)$. Using this notation, finding a solution of the system (5)-(7) is equivalent to finding $u$ and $\theta$ in a suitable subset of $D(A)$ satisfying the equations $\displaystyle u_{tt}+\gamma A^{\tau}u_{tt}+A^{2}u-\alpha A\theta=0,$ (9) $\displaystyle\theta_{t}+\kappa A\theta+\beta Au_{t}=0,$ (10) satisfying the boundary conditions (hinged plate/Dirichlet temperature): $u=\Delta u=0,\quad\theta=0,\quad x\in\partial\Omega,\ t>0,$ (11) and the initial data $\displaystyle u(x,0)=u_{0},\ u_{t}(x,0)=u_{1},\ \theta(x,0)=\theta_{0},\qquad x\in\Omega.$ (12)

For $\gamma$ positive we can extend the operator $I+\gamma A^{\tau}$ in the following sense: $(I+\gamma A^{\tau})\colon D(A^{\tau/2})\to\;D(A^{-\tau/2})$ defined by $\langle{(I+\gamma A^{\tau})z_{1}},{z_{2}}\rangle_{D(A^{-\tau/2})\times D(A^{\tau/2})}=\langle{z_{1}},{z_{2}}\rangle+\gamma\langle{A^{\tau/2}z_{1}},{A^{\tau/2}z_{2}}\rangle,$ (13) for $z_{1},z_{2}\in D(A^{\tau/2})$, where $\langle{\cdot},{\cdot}\rangle$ denotes the inner product in the Hilbert space $D(A^{0})$. Note that this operator is an isometry when we consider on the space $D(A^{\tau/2})$ the equivalent norm $\|z\|_{D(A^{\frac{\tau}{2}})}:=\left({\|z\|^{2}+\gamma\|A^{\frac{\tau}{2}}z\|^{2}}\right)^{1/2}.$

Now, we use a semigroup approach to study the well-posedness of the system (9)-(12). Taking $v=u_{t}$, applying the operator $(I+\gamma A^{\tau})^{-1}$ to equation (9), and setting $U=(u,v,\theta)$ and $U_{0}=(u_{0},u_{1},\theta_{0})$, the system (5)-(7) can be written in the following abstract framework $\frac{d}{dt}U(t)=\mathbb{B}U(t),\quad U(0)=U_{0},$ (14) where the operator $\mathbb{B}$ is given by $\mathbb{B}U:=\Big{(}v,(I+\gamma A^{\tau})^{-1}\left\\{{-A^{2}u+\alpha A\theta}\right\\},-\kappa A\theta-\beta Av\Big{)},$ (15) for $U=(u,v,\theta)$.
This operator will be defined in a suitable subspace of the phase space $\mathbb{H}:=D(A)\times D(A^{\frac{\tau}{2}})\times D(A^{0}),$ which, in view of (13), is a Hilbert space with the inner product $\langle{U_{1}},{U_{2}}\rangle:=\beta\langle{Au_{1}},{Au_{2}}\rangle_{D(A^{0})\times D(A^{0})}+\beta\langle{(I+\gamma A^{\tau})v_{1}},{v_{2}}\rangle_{D(A^{-\frac{\tau}{2}})\times D(A^{\frac{\tau}{2}})}+\alpha\langle{\theta_{1}},{\theta_{2}}\rangle_{D(A^{0})\times D(A^{0})},$ for $U_{i}=(u_{i},v_{i},\theta_{i})\in\mathbb{H}$, $i=1,2$, and induced norm $\|U\|_{\mathbb{H}}^{2}:=\beta\|Au\|^{2}+\beta\|v\|^{2}_{D(A^{\frac{\tau}{2}})}+\alpha\|\theta\|^{2}.$ (16) Under these conditions, we define the domain of $\mathbb{B}$ as $\mathcal{D}(\mathbb{B}):=\Big{\\{}U\in\mathbb{H}\colon v\in D(A),-Au+\alpha\theta\in D(A^{1-\frac{\tau}{2}}),-\kappa\theta-\beta v\in D(A)\Big{\\}}.$ (17)

To show that the operator $\mathbb{B}$ is the generator of a $C_{0}$-semigroup we invoke Theorem 1, a result from the book of Liu and Zheng [14]. Let us verify that the operator $\mathbb{B}$ satisfies the conditions of Theorem 1. Clearly, $D(\mathbb{B})$ is dense in $\mathbb{H}$. Taking the inner product of $\mathbb{B}U$ with $U$ we have $\text{Re}\langle{\mathbb{B}U},{U}\rangle=-\kappa\alpha\|A^{\frac{1}{2}}\theta\|^{2},\quad\forall\ U\in D(\mathbb{B}),$ (18) that is, the operator $\mathbb{B}$ is dissipative. To complete the conditions of the above theorem, it remains to show that $0\in\rho(\mathbb{B})$. Let $F=(f,g,h)\in\mathbb{H}$; let us see that the stationary problem $\mathbb{B}U=F$ has a solution $U=(u,v,\theta)$. From the definition of the operator $\mathbb{B}$ given in (15) this system can be written as $v=f,\qquad-A^{2}u+\alpha A\theta=(I+\gamma A^{\tau})g,\qquad-\kappa A\theta-\beta Av=h.$ (19) Therefore, it is not difficult to see that there exists a unique solution $(u,\theta)$ of the system $-A^{2}u+\alpha A\theta=(I+\gamma A^{\tau})g\in D(A^{\frac{\tau}{2}}),\qquad-\kappa A\theta=h+\beta Af\in D(A^{0}),$ (20) from which we have $\|U\|_{\mathbb{H}}\leq C\|F\|_{\mathbb{H}},$ which in particular implies that $\|\mathbb{B}^{-1}F\|_{\mathbb{H}}\leq C\|F\|_{\mathbb{H}}$, so $0$ belongs to the resolvent set $\rho(\mathbb{B})$. Consequently, from Theorem 1, $\mathbb{B}$ is the generator of a contraction semigroup. As a consequence of Theorem 1, we obtain

###### Theorem 2 Given $U_{0}\in\mathbb{H}$ there exists a unique weak solution $U$ to the problem (14) satisfying $U\in C([0,+\infty),\mathbb{H}).$ Furthermore, if $U_{0}\in D(\mathbb{B}^{k}),\;k\in\mathbb{N}$, then the solution $U$ of (14) satisfies $U\in\bigcap_{j=0}^{k}C^{k-j}([0,+\infty),D(\mathbb{B}^{j})).$

In what follows, $C$ and $C_{\delta}$ will denote positive constants that may assume different values in different places.

## 2 Regularity: from Euler-Bernoulli to Kirchhoff-Love Thermoelastic Plates

In this section we discuss the regularity of the semigroup $S(t)=e^{\mathbb{B}t}$ in two subsections: first we analyze the lack of analyticity of $S(t)$ for $0<\tau\leq 1$, then we study the Gevrey class of $S(t)$ for $\frac{1}{2}\leq\tau<1$. The following theorem characterizes the analyticity of $S(t)$:

###### Theorem 3 (see [14]) Let $S(t)=e^{\mathbb{B}t}$ be a $C_{0}$-semigroup of contractions on a Hilbert space $\mathbb{H}$.
Suppose that $\rho(\mathbb{B})\supseteq\\{i\lambda:\lambda\in{\mathbb{R}}\\}\equiv i{\mathbb{R}}$. Then $S(t)$ is analytic if and only if $\limsup\limits_{|\lambda|\to\infty}\|\lambda(i\lambda I-\mathbb{B})^{-1}\|_{\mathcal{L}(\mathbb{H})}<\infty$ (21) holds.

Note that to show condition (21) it is enough to show the following: let $\delta>0$; there exists a constant $C_{\delta}>0$ such that the solutions of the system (9)-(12) for $|\lambda|>\delta$ satisfy the inequality $|\lambda|\|U\|_{\mathbb{H}}^{2}\leq C_{\delta}\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}\qquad\Longleftrightarrow\qquad|\lambda|\|U\|_{\mathbb{H}}\leq C_{\delta}\|F\|_{\mathbb{H}}.$ (22) First, note that if $\lambda\in{\mathbb{R}}$ with $i\lambda\in\rho(\mathbb{B})$ and $F=(f,g,h)\in\mathbb{H}$, then the solution $U=(u,v,\theta)\in\hbox{D}(\mathbb{B})$ of the stationary system $(i\lambda I-\mathbb{B})U=F$ can be written as $\displaystyle i\lambda u-v$ $\displaystyle=$ $\displaystyle f\quad{\rm in}\quad D(A)$ (23) $\displaystyle i\lambda(I+\gamma A^{\tau})v+A^{2}u-\alpha A\theta$ $\displaystyle=$ $\displaystyle(I+\gamma A^{\tau})g\quad{\rm in}\quad D(A^{-\frac{\tau}{2}})$ (24) $\displaystyle i\lambda\theta+\kappa A\theta+\beta Av$ $\displaystyle=$ $\displaystyle h\quad{\rm in}\quad D(A^{0})$ (25) and we have $\kappa\alpha\|A^{\frac{1}{2}}\theta\|^{2}=\text{Re}\langle{(i\lambda-\mathbb{B})U},{U}\rangle=\text{Re}\langle{F},{U}\rangle\leq\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}.$ (26) Next we show two lemmas that will be fundamental to achieve our results.

###### Lemma 4 Let $\delta>0$. There exists $C_{\delta}>0$ such that the solutions of the system (9)-(12) for $|\lambda|>\delta$ satisfy $\limsup\limits_{|\lambda|\to\infty}\|(i\lambda I-\mathbb{B})^{-1}\|_{\mathcal{L}(\mathbb{H})}<\infty\qquad{\rm for}\qquad 0\leq\tau\leq 1.$ (27)

Proof: To show (27), it suffices to show that, given $\delta>0$, there exists a constant $C_{\delta}>0$ such that the solutions of the system (9)-(12) for $|\lambda|>\delta$ satisfy the inequality $\|(i\lambda I-\mathbb{B})^{-1}F\|_{\mathbb{H}}^{2}=\|U\|^{2}_{\mathbb{H}}=\beta\|Au\|^{2}+\beta\|v\|_{D(A^{\frac{\tau}{2}})}^{2}+\alpha\|\theta\|^{2}\leq C_{\delta}\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}.$ (28) Since $0\leq\frac{1}{2}$, applying the continuous embeddings and the estimate (26), we have $\alpha\|\theta\|^{2}\leq C_{\delta}\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}$; therefore it remains to show that $\beta[\|Au\|^{2}+\|v\|^{2}_{D(A^{\frac{\tau}{2}})}]\leq C_{\delta}\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}.$ Taking the duality product between equation (24) and $\beta u$, using equation (23), and taking advantage of the self-adjointness of the powers of the operator $A$, we obtain $\beta\|Au\|^{2}=\beta\|v\|^{2}_{D(A^{\frac{\tau}{2}})}+\beta\gamma\langle{A^{\frac{\tau}{2}}v},{A^{\frac{\tau}{2}}f}\rangle+\beta\langle{v},{f}\rangle+\alpha\beta\langle{\theta},{Au}\rangle+\beta\langle{g},{u}\rangle+\beta\gamma\langle{A^{\frac{\tau}{2}}g},{A^{\frac{\tau}{2}}u}\rangle.$ (29) On the other hand, taking the duality product between equation (25) and $A^{-1}(I+\gamma A^{\tau})v$, using (24), and taking advantage of the self-adjointness of the powers of the operator $A$, we obtain $\displaystyle\beta\|v\|^{2}_{D(A^{\frac{\tau}{2}})}$ $\displaystyle=$ $\displaystyle\langle{A^{-1}\theta},{i\lambda(I+\gamma A^{\tau})v}\rangle-\kappa\langle{\theta},{(I+\gamma A^{\tau})v}\rangle+\langle{A^{-1}h},{v}\rangle+\gamma\langle{h},{A^{\tau-1}v}\rangle$ $\displaystyle=$
$\displaystyle-\langle{\theta},{Au}\rangle+\alpha\|\theta\|^{2}+\langle{A^{-1}\theta},{g}\rangle+\gamma\langle{\theta},{A^{\tau-1}g}\rangle$ $\displaystyle-\kappa\langle{A^{\frac{1}{2}}\theta},{A^{-\frac{1}{2}}(I+\gamma A^{\tau})v}\rangle+\langle{A^{-1}h},{v}\rangle+\gamma\langle{h},{A^{\tau-1}v}\rangle$ Applying Cauchy-Schwarz and Young’s inequalities, for $\varepsilon>0$ exists $C_{\varepsilon}>0$ such that $\beta\|v\|^{2}_{D(A^{\frac{\tau}{2}})}\leq C_{\varepsilon}\|\theta\|^{2}+\varepsilon\|Au\|^{2}+C_{\delta}\|A^{-1}\theta\|\|g\|+\gamma\|\theta\|\|A^{\tau-1}g\|+C_{\varepsilon}\|A^{\frac{1}{2}}\theta\|^{2}\\\ +\varepsilon[\|A^{-\frac{1}{2}}v\|^{2}+\|A^{\tau-\frac{1}{2}}v\|^{2}]+\|A^{-1}h\|\|v\|+\gamma\|h\|\|A^{\tau-1}v\|,$ (30) then, as $-\frac{1}{2}\leq\tau-\frac{1}{2}\leq\frac{\tau}{2}$, the continuous embedding $D(A^{\theta_{2}})\hookrightarrow D(A^{\theta_{1}})$, $\theta_{2}>\theta_{1}$, from norms $\|F\|_{\mathbb{H}}$ and $\|U\|_{\mathbb{H}}$ and estimative (26), we have $\beta\|v\|^{2}_{D(A^{\frac{\tau}{2}})}\leq\varepsilon\|Au\|^{2}+C_{\delta}\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}.$ (31) Using (31) in (29), applying Cauchy-Schwarz and Young’s inequalities and from norms $\|F\|_{\mathbb{H}}$ and $\|U\|_{\mathbb{H}}$, we obtain $\beta\|Au\|^{2}\leq C_{\delta}\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}.$ (32) Now, using (32) in (31), we have $\beta\|v\|^{2}_{D(A^{\frac{\tau}{2}})}\leq C_{\delta}\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}.$ (33) Therefore, from the estimates (32), (33) and (26), we conclude the proof of (28), thus finishing the proof of this lemma. $\Box$ ###### Lemma 5 Let $\delta>0$. There exists $C_{\delta}>0$ such that the solutions of the system (9)-(12) for $|\lambda|>\delta$, satisfy $|\lambda|[\beta\|Au\|^{2}+\alpha\|\theta\|^{2}]\leq\beta|\lambda|\|v\|^{2}_{D(A^{\frac{\tau}{2}})}+C_{\delta}\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}.$ (34) Proof: Estimative of $\beta|\lambda|\|Au\|^{2}$ Taking the duality product between equation(24) and $\beta\lambda u$ and using the equation (23), taking advantage of the self-adjointness of the powers of the operator $A$, we obtain $\displaystyle\beta\lambda\|Au\|^{2}$ $\displaystyle=$ $\displaystyle\beta\lambda\|v\|^{2}_{D(A^{\frac{\tau}{2}})}+\beta\langle{\lambda(I+\gamma A^{\tau})v},{f}\rangle+\alpha\beta\langle{\theta},{A(-iv-if)}\rangle$ $\displaystyle+\beta\langle{(I+\gamma A^{\tau})g},{-if-iv}\rangle$ $\displaystyle=$ $\displaystyle\beta\lambda\|v\|^{2}_{D(A^{\frac{\tau}{2}})}+i\beta\langle{Au},{Af}\rangle-i\beta\langle{\theta},{f}\rangle-i\beta\langle{(I+\gamma A^{\tau})g},{f}\rangle$ $\displaystyle+i\alpha\beta\langle{\theta},{Av}\rangle+i\alpha\beta\langle{\theta},{f}\rangle+i\beta\langle{(I+\gamma A^{\tau})g},{f}\rangle+i\beta\langle{(I+\gamma A^{\tau})g},{v}\rangle$ $\displaystyle=$ $\displaystyle\beta\lambda\|v\|^{2}_{D(A^{\frac{\tau}{2}})}+i\beta\langle{Au},{Af}\rangle-i\beta\langle{\theta},{f}\rangle+i\alpha\beta\langle{\theta},{Av}\rangle+i\alpha\beta\langle{\theta},{f}\rangle$ $\displaystyle+i\beta\langle{(I+\gamma A^{\tau})g},{v}\rangle.$ On the other hand, taking the duality product between equation(25) and $\frac{\alpha}{\kappa}A^{-1}(\lambda\theta)$ and using the equation (24), taking advantage of the self-adjointness of the powers of the operator $A$, we obtain $\displaystyle\alpha\lambda\|\theta\|^{2}=-i\dfrac{\alpha}{\kappa}\lambda^{2}\|A^{-\frac{1}{2}}\theta\|^{2}-\dfrac{\alpha\beta}{\kappa}\langle{v},{\lambda\theta}\rangle+\dfrac{\alpha}{\kappa}\langle{h},{A^{-1}(\lambda\theta)}\rangle,$ (36) from: 
$\displaystyle-\dfrac{\alpha\beta}{\kappa}\langle{v},{\lambda\theta}\rangle$ $\displaystyle=$ $\displaystyle i\alpha\beta\langle{Av},{\theta}\rangle+i\dfrac{\alpha\beta^{2}}{\kappa}\|A^{\frac{1}{2}}v\|^{2}+i\dfrac{\alpha\beta}{\kappa}\langle{v},{h}\rangle$ (37) $\displaystyle\dfrac{\alpha}{\kappa}\langle{h},{A^{-1}(\lambda\theta}\rangle$ $\displaystyle=$ $\displaystyle-i\alpha\langle{h},{\theta}\rangle-i\dfrac{\alpha\beta}{\kappa}\langle{h},{v}\rangle+i\dfrac{\alpha^{2}}{\kappa}\|A^{-\frac{1}{2}}h\|^{2},$ (38) Adding (2) with (36) and in the result using the identities (37) and (38), we get $\lambda[\beta\|Au\|^{2}+\alpha\|\theta\|^{2}]=\beta\lambda\|v\|^{2}_{D(A^{\frac{\tau}{2}})}+i\beta\langle{Au},{Af}\rangle\\\ -i\beta\langle{\theta},{f}\rangle+i\alpha\beta\langle{\theta},{Av}\rangle+i\alpha\beta\langle{\theta},{f}\rangle+i\beta\langle{(I+\gamma A^{\tau})g},{v}\rangle\\\ -i\dfrac{\alpha}{\kappa}\lambda^{2}\|A^{-\frac{1}{2}}\theta\|^{2}+i\alpha\beta\langle{Av},{\theta}\rangle+i\dfrac{\alpha\beta^{2}}{\kappa}\|A^{\frac{1}{2}}v\|^{2}+i\dfrac{\alpha\beta}{\kappa}\langle{v},{h}\rangle\\\ -i\alpha\langle{h},{\theta}\rangle-i\dfrac{\alpha\beta}{\kappa}\langle{h},{v}\rangle+i\dfrac{\alpha^{2}}{\kappa}\|A^{-\frac{1}{2}}h\|^{2}.$ (39) Of identity $i\alpha\beta[\langle{\theta},{Av}\rangle+\langle{Av},{\theta}\rangle]=i2\alpha\beta{\rm Re}\langle{\theta},{Av}\rangle$, taking real part of (39), we have $\lambda[\beta\|Au\|^{2}+\alpha\|\theta\|^{2}]=\beta\lambda\|v\|^{2}_{D(A^{\frac{\tau}{2}})}+i\beta\langle{Au},{Af}\rangle\\\ -i\beta\langle{\theta},{f}\rangle+i\alpha\beta\langle{\theta},{f}\rangle+i\beta\langle{(I+\gamma A^{\tau})g},{v}\rangle\\\ +i\dfrac{\alpha\beta}{\kappa}\langle{v},{h}\rangle-i\alpha\langle{h},{\theta}\rangle-i\dfrac{\alpha\beta}{\kappa}\langle{h},{v}\rangle.$ (40) Applying Cauchy-Schwarz and Young inequalities and norms $\|F\|_{\mathbb{H}}$ and $\|U\|_{\mathbb{H}}$, we finish proving this lemma. $\Box$ ###### Lemma 6 Let $\delta>0$. 
There exists $C_{\delta}>0$ such that the solutions of the system (9)-(12) for $|\lambda|>\delta$, satisfy $\|A^{\frac{1}{2}}v\|^{2}\leq C_{\delta}\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}\qquad{\rm for}\qquad\dfrac{1}{2}\leq\tau\leq 1.$ (41) Proof: Performing the product of duality between equation (25) and $A^{-\tau}(I+\gamma A^{\tau})v$ and again using the property that for all $\eta\in\mathbb{R}$, $A^{\eta}$ is self-adjoint, we have $\displaystyle\beta\langle{Av},{A^{-\tau}(I+\gamma A^{\tau})v}\rangle$ $\displaystyle=$ $\displaystyle\langle{A^{-\tau}\theta},{i\lambda(I+\gamma A^{\tau})v}\rangle-\kappa\langle{A^{\frac{1}{2}}\theta},{A^{\frac{1}{2}-\tau}(I+\gamma A^{\tau})v}\rangle$ (42) $\displaystyle+\langle{h},{A^{-\tau}(I+\gamma A^{\tau})v}\rangle,$ then, using (24) in (42), we have $\displaystyle\beta\|A^{\frac{1-\tau}{2}}v\|^{2}+\beta\gamma\|A^{\frac{1}{2}}v\|^{2}$ $\displaystyle=$ $\displaystyle\langle{A^{-\tau}\theta},{-A^{2}u+\alpha A\theta+(I+\gamma A^{\tau})g}\rangle-\kappa\langle{A^{\frac{1}{2}}\theta},{A^{\frac{1}{2}-\tau}v}\rangle$ (43) $\displaystyle-\kappa\gamma\langle{A^{\frac{1}{2}}\theta},{A^{\frac{1}{2}}v}\rangle+\langle{h},{A^{-\tau}v}\rangle+\gamma\langle{h},{v}\rangle.$ Considering continuous immersion $D(A^{\theta_{2}})\hookrightarrow D(A^{\theta_{1}}),\;\theta_{2}>\theta_{1}$ for $\frac{1-\tau}{2}\leq\frac{1}{2}$ and Cauchy-Schwarz inequality, we have $\displaystyle\beta\gamma\|A^{\frac{1}{2}}v\|^{2}$ $\displaystyle\leq$ $\displaystyle|-\langle{A^{\frac{1}{2}}\theta},{A^{\frac{3}{2}-\tau}u}\rangle|+\alpha\|A^{\frac{1-\tau}{2}}\theta\|^{2}+|\langle{\theta},{A^{-\tau}g}\rangle|+|\gamma\langle{\theta},{g}\rangle|$ (44) $\displaystyle|\kappa\langle{A^{\frac{1}{2}}\theta},{A^{\frac{1}{2}-\tau}v}\rangle|+|-\kappa\langle{A^{\frac{1}{2}}\theta},{A^{\frac{1}{2}}v}\rangle|+|\langle{h},{A^{-\tau}v}\rangle|+\gamma|\langle{h},{v}\rangle|.$ Considering $\frac{3}{2}-\tau\leq 1\Leftrightarrow\frac{1}{2}\leq\tau\leq 1$, for $\varepsilon>0$ exists $C_{\varepsilon}>0$ independent de $\lambda$, and applying Young inequality, we have $\beta\gamma\|A^{\frac{1}{2}}v\|^{2}\leq C\\{\|A^{\frac{1}{2}}\theta\|^{2}+\|AU\|^{2}+\|\theta\|\|g\|+\|h\|\|A^{-\tau}v\|+\|h\|\|v\|\\}\\\ +\varepsilon\|A^{\frac{1}{2}}v\|^{2}+C_{\varepsilon}\|A^{\frac{1}{2}}\theta\|^{2}$ (45) Finally using estimative (26), norms $\|F\|_{\mathbb{H}}$ and $\|U\|_{\mathbb{H}}$ and applying Lemma(4). The proof of this lemma is finished. $\Box$ ###### Lemma 7 Let $\varrho(\mathbb{B})$ be the resolvent set of operator $\mathbb{B}$. Then $i\hskip 0.5pt\mathbb{R}\subset\varrho(\mathbb{B}).$ (46) Proof: Let us prove that $i{\mathbb{R}}\subset\rho(\mathbb{B})$ by using an argument by contradiction, so we suppose that $i{\mathbb{R}}\not\subset\rho(\mathbb{B})$. As $0\in\rho(\mathbb{B})$ and $\rho(\mathbb{B})$ is open, we consider the highest positive number $\lambda_{0}$ such that the $]-i\lambda_{0},i\lambda_{0}[\subset\rho(\mathbb{B})$ then $i\lambda_{0}$ or $-i\lambda_{0}$ is an element of the spectrum $\sigma(\mathbb{B})$. We Suppose $i\lambda_{0}\in\sigma(\mathbb{B})$ (if $-i\lambda_{0}\in\sigma(\mathbb{B})$ the proceeding is similar). Then, for $0<\delta<\lambda_{0}$ there exist a sequence of real numbers $(\lambda_{n})$, with $\delta\leq\lambda_{n}<\lambda_{0}$, $\lambda_{n}\rightarrow\lambda_{0}$, and a vector sequence $U_{n}=(u_{n},v_{n},\theta_{n})\in D(\mathbb{B})$ with unitary norms, such that $\displaystyle\|(i\lambda_{n}I-\mathbb{B})U_{n}\|_{\mathbb{H}}=\|F_{n}\|_{\mathbb{H}}\rightarrow 0,$ as $n\rightarrow\infty$. 
From estimate (28), we have $\|U_{n}\|^{2}_{\mathbb{H}}=\beta\|Au_{n}\|^{2}+\beta\|v_{n}\|^{2}_{D(A^{\frac{\tau}{2}})}+\alpha\|\theta_{n}\|^{2}\leq C_{\delta}\|F_{n}\|_{\mathbb{H}}\|U_{n}\|_{\mathbb{H}}\rightarrow 0.$ Therefore, $\|U_{n}\|_{\mathbb{H}}\rightarrow 0$, but this is absurd, since $\|U_{n}\|_{\mathbb{H}}=1$ for all $n\in{\mathbb{N}}$. Thus, $i{\mathbb{R}}\subset\rho(\mathbb{B})$. This completes the proof of this lemma. $\Box$

### 2.1 Lack of analyticity of $S(t)=e^{\mathbb{B}t}$

Since the operator $A$ defined in (8) is positive, self-adjoint, and has compact resolvent, its spectrum consists of positive eigenvalues $(\sigma_{n})$ such that $\sigma_{n}\rightarrow\infty$ as $n\rightarrow\infty$. For $n\in{\mathbb{N}}$ we denote by $e_{n}$ a unit $D(A^{\frac{\tau}{2}})$-norm eigenvector associated with the eigenvalue $\sigma_{n}$, that is, $Ae_{n}=\sigma_{n}e_{n},\quad\|e_{n}\|_{D(A^{\frac{\tau}{2}})}=1,\quad n\in{\mathbb{N}}.$ (47)

###### Theorem 8 The semigroup $S(t)=e^{\mathbb{B}t}$ is not analytic for $\tau\in(0,1]$.

Proof: We apply Theorem 3 to show this result. Consider the eigenvalues and eigenvectors of the operator $A$ as in (47). Let $F_{n}=(0,-e_{n},0)\in\mathbb{H}$. The solution $U_{n}=(u_{n},v_{n},\theta_{n})$ of the system $(i\lambda_{n}I-\mathbb{B})U_{n}=F_{n}$ satisfies $v_{n}=i\lambda_{n}u_{n}$ and the following equations $\lambda^{2}_{n}(I+\gamma A^{\tau})u_{n}-A^{2}u_{n}+\alpha A\theta_{n}=(I+\gamma A^{\tau})e_{n},\qquad i\lambda_{n}\theta_{n}+\kappa A\theta_{n}+i\lambda_{n}\beta Au_{n}=0.$ Let us see whether this system admits solutions of the form $u_{n}=\mu_{n}e_{n},\quad\theta_{n}=\nu_{n}e_{n},$ for some complex numbers $\mu_{n}$ and $\nu_{n}$. Then, the numbers $\mu_{n}$, $\nu_{n}$ should satisfy the algebraic system $\displaystyle\\{\lambda^{2}_{n}(1+\gamma\sigma_{n}^{\tau})-\sigma_{n}^{2}\\}\mu_{n}+\alpha\sigma_{n}\nu_{n}$ $\displaystyle=$ $\displaystyle(1+\gamma\sigma_{n}^{\tau}),$ (48) $\displaystyle i\lambda_{n}\beta\sigma_{n}\mu_{n}+(i\lambda_{n}+\kappa\sigma_{n})\nu_{n}$ $\displaystyle=$ $\displaystyle 0.$ (49) At this point, we introduce the numbers $\lambda_{n}^{2}:=\dfrac{\sigma_{n}^{2}}{1+\gamma\sigma_{n}^{\tau}}.$ Thus, if we introduce the notation $x_{n}\approx y_{n}$ meaning that $\displaystyle\lim_{n\rightarrow\infty}\frac{|x_{n}|}{|y_{n}|}$ is a positive real number, we have that $\displaystyle|\lambda_{n}|\approx|\sigma_{n}|^{\frac{2-\tau}{2}}.$ With these considerations, solving the system (48)–(49), we find that $|\mu_{n}|=\Big{|}\dfrac{\lambda_{n}+\gamma\lambda_{n}\sigma_{n}^{\tau}+i\kappa[\gamma\sigma_{n}^{1+\tau}+\sigma_{n}]}{\alpha\beta\lambda_{n}\sigma_{n}^{2}}\Big{|}\approx|\sigma_{n}|^{\frac{3\tau-4}{2}}\approx|\lambda_{n}|^{\frac{3\tau-4}{2-\tau}}.$ Therefore, the solution $U_{n}$ of the system $(i\lambda_{n}-\mathbb{B})U_{n}=F_{n}$ satisfies $\|U_{n}\|_{\mathbb{H}}\geq K\|v_{n}\|_{D(A^{\frac{\tau}{2}})}=K|\lambda_{n}|\|u_{n}\|_{D(A^{\frac{\tau}{2}})}=K|\lambda_{n}||\mu_{n}|\|e_{n}\|_{D(A^{\frac{\tau}{2}})}=K|\lambda_{n}|^{\frac{2\tau-2}{2-\tau}}.$ Then $|\lambda_{n}|\|U_{n}\|_{\mathbb{H}}\geq K|\lambda_{n}|^{\frac{\tau}{2-\tau}}.$ Therefore $|\lambda_{n}|\|U_{n}\|_{\mathbb{H}}\to\infty$ as $|\lambda_{n}|\to\infty$ for $\tau>0$, so condition (21) fails. Consequently, for $\tau>0$ the semigroup $S(t)$ is not analytic; in particular, $S(t)$ is not analytic for $\tau\in(\frac{1}{2},1]$.
This completes the proof of this theorem. $\Box$

### 2.2 Gevrey Class

Before presenting our results, it is useful to recall the following definition and result presented in [5] (adapted from [19, Theorem 4, p. 153]).

###### Definition 9 Let $t_{0}\geq 0$ be a real number. A strongly continuous semigroup $S(t)$, defined on a Banach space $\mathbb{H}$, is of Gevrey class $s>1$ for $t>t_{0}$, if $S(t)$ is infinitely differentiable for $t>t_{0}$, and for every compact set $K\subset(t_{0},\infty)$ and each $\mu>0$, there exists a constant $C=C(\mu,K)>0$ such that $\|S^{(n)}(t)\|_{\mathcal{L}(\mathbb{H})}\leq C\mu^{n}(n!)^{s},\text{ for all }\quad t\in K,\ n=0,1,2,\dots$ (50)

###### Theorem 10 ([19]) Let $S(t)$ be a strongly continuous and bounded semigroup on a Hilbert space $\mathbb{H}$. Suppose that the infinitesimal generator $\mathbb{B}$ of the semigroup $S(t)$ satisfies the following estimate, for some $0<\phi<1$: $\limsup\limits_{|\lambda|\to\infty}|\lambda|^{\phi}\|(i\lambda I-\mathbb{B})^{-1}\|_{\mathcal{L}(\mathbb{H})}<\infty.$ (51) Then $S(t)$ is of Gevrey class $s$ for $t>0$, for every $s>\dfrac{1}{\phi}$.

Our main result in this subsection is as follows:

###### Theorem 11 Let $S(t)=e^{\mathbb{B}t}$ be the strongly continuous semigroup of contractions on the Hilbert space $\mathbb{H}$ defined above. The semigroup $S(t)$ is of Gevrey class $s$ for every $s>\frac{3-\tau}{2-2\tau}$ for $\tau\in[\dfrac{1}{2},1)$, since there exists a positive constant $C$ such that the resolvent estimate $|\lambda|^{\frac{2-2\tau}{3-\tau}}\|(i\lambda I-\mathbb{B})^{-1}\|_{\mathcal{L}(\mathbb{H})}\leq C,\quad\lambda\in{\mathbb{R}},$ (52) holds.

Proof: Note that the estimate $|\lambda|^{\frac{2-2\tau}{3-\tau}}\|(i\lambda I-\mathbb{B})^{-1}F\|_{\mathbb{H}}=|\lambda|^{\frac{2-2\tau}{3-\tau}}\|U\|_{\mathbb{H}}\leq C_{\delta}\|F\|_{\mathbb{H}}$ (53) implies the inequality (52). Therefore, from now on we will show (53). For this purpose let us estimate the term $|\lambda|\|A^{\frac{\tau}{2}}v\|^{2}$; we assume $\lambda\in{\mathbb{R}}$ with $|\lambda|>1$, and we borrow some ideas from [12].
Set $v=v_{1}+v_{2}$, where $v_{1}\in D(A)$ and $v_{2}\in D(A^{\frac{\tau}{2}})$, with $i\lambda(I+\gamma A^{\tau})v_{1}+Av_{1}=(I+\gamma A^{\tau})g,\hskip 9.38945pti\lambda(I+\gamma A^{\tau})v_{2}=-A^{2}u+\alpha A\theta+Av_{1}.$ (54) Firstly, taking the duality product of the first equation in (54) with $v_{1}$, we have $i\lambda\|v_{1}\|^{2}+i\lambda\gamma\|A^{\frac{\tau}{2}}v_{1}\|^{2}+\|A^{\frac{1}{2}}v_{1}\|^{2}=\langle{g},{v_{1}}\rangle+\gamma\langle{A^{\frac{\tau}{2}}g},{A^{\frac{\tau}{2}}v_{1}}\rangle.$ (55) Taking first the imaginary part of (55), then the real part, and applying the Cauchy-Schwarz inequality, we have $|\lambda|\|v_{1}\|^{2}+\gamma|\lambda|\|A^{\frac{\tau}{2}}v_{1}\|^{2}\leq|\rm{Im}\langle{g},{v_{1}}\rangle|+\gamma|\rm{Im}\langle{A^{\frac{\tau}{2}}g},{A^{\frac{\tau}{2}}v_{1}}\rangle|\leq C\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}$ and $\|A^{\frac{1}{2}}v_{1}\|^{2}\leq C\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}.$ Equivalently $\|v_{1}\|\leq C\dfrac{[\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}]^{\frac{1}{2}}}{|\lambda|^{\frac{1}{2}}},\quad\|A^{\frac{\tau}{2}}v_{1}\|\leq C\dfrac{[\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}]^{\frac{1}{2}}}{|\lambda|^{\frac{1}{2}}}$ (56) and $\|A^{\frac{1}{2}}v_{1}\|\leq C[\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}]^{\frac{1}{2}}.$ (57) From $A^{\frac{\tau}{2}}v_{2}=A^{\frac{\tau}{2}}v-A^{\frac{\tau}{2}}v_{1}$, we have $\|A^{\frac{\tau}{2}}v_{2}\|^{2}\leq C\\{\|A^{\frac{\tau}{2}}v\|^{2}+\|A^{\frac{\tau}{2}}v_{1}\|^{2}\\}\leq C\bigg{\\{}\dfrac{|\lambda|+1}{|\lambda|}\bigg{\\}}\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}},$ where we used Lemma 4 and the second inequality of (56). Hence, given $\delta>0$, there exists a positive constant $C_{\delta}$ such that for $|\lambda|\geq\delta$ we obtain $\|A^{\frac{\tau}{2}}v_{2}\|^{2}\leq C_{\delta}\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}.$ (58) From the second equation of (54), we have $|\lambda|\|(I+\gamma A^{\tau})[A^{-1}v_{2}]\|\leq C\\{\|Au\|+\|\theta\|\\}+\|v_{1}\|,$ and then we find $|\lambda|(\|A^{-1}v_{2}\|^{2}+\gamma\|A^{\frac{\tau}{2}-1}v_{2}\|^{2})^{\frac{1}{2}}\leq C\\{\|Au\|+\|\theta\|\\}+\|v_{1}\|.$ Applying the Cauchy-Schwarz and Young inequalities and using the first inequality of (56) and estimate (26), for $|\lambda|^{\frac{1}{2}}>1$ and $0\leq\tau\leq 1$, we obtain $|\lambda|\|A^{\frac{\tau}{2}-1}v_{2}\|\leq C[\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}]^{\frac{1}{2}}\bigg{[}\dfrac{|\lambda|^{\frac{1}{2}}+1}{|\lambda|^{\frac{1}{2}}}\bigg{]},\qquad\|A^{\frac{\tau}{2}-1}v_{2}\|\leq C|\lambda|^{-1}[\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}]^{\frac{1}{2}}.$ (59) On the other hand, from $v_{2}=v-v_{1}$, we have $\|A^{\frac{1}{2}}v_{2}\|\leq\|A^{\frac{1}{2}}v\|+\|A^{\frac{1}{2}}v_{1}\|.$ (60) From Lemma 6 and (57), we have $\|A^{\frac{1}{2}}v_{2}\|\leq C_{\delta}[\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}]^{\frac{1}{2}}\quad{\rm for}\quad\dfrac{1}{2}\leq\tau\leq 1.$ (61) Now, by Lions' interpolation inequality ($\frac{\tau}{2}-1<\frac{\tau}{2}\leq\frac{1}{2}$) and estimates (59) and (61), we derive $\|A^{\frac{\tau}{2}}v_{2}\|\leq C\|A^{\frac{\tau}{2}-1}v_{2}\|^{\frac{1-\tau}{3-\tau}}\|A^{\frac{1}{2}}v_{2}\|^{\frac{2}{3-\tau}}\leq C|\lambda|^{-\frac{1-\tau}{3-\tau}}[\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}]^{\frac{1}{2}}\quad{\rm for}\quad\dfrac{1}{2}\leq\tau\leq 1.$
On the other hand, as $\|A^{\frac{\tau}{2}}v\|\leq C\\{\|A^{\frac{\tau}{2}}v_{1}\|+\|A^{\frac{\tau}{2}}v_{2}\|\\}$ and $-\frac{1-\tau}{3-\tau}\geq-\frac{1}{2}$, we have $\|A^{\frac{\tau}{2}}v\|\leq C\\{|\lambda|^{-\frac{1-\tau}{3-\tau}}+|\lambda|^{-\frac{1}{2}}\\}[\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}]^{\frac{1}{2}}\leq C|\lambda|^{-\frac{1-\tau}{3-\tau}}[\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}]^{\frac{1}{2}}\quad{\rm for}\quad\dfrac{1}{2}\leq\tau\leq 1.$ Equivalently $|\lambda|\|A^{\frac{\tau}{2}}v\|^{2}\leq C_{\delta}|\lambda|^{\frac{1+\tau}{3-\tau}}\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}\quad{\rm for}\quad\dfrac{1}{2}\leq\tau\leq 1.$ (62) Using (62) in estimate (34) of Lemma 5, we have $|\lambda|[\beta\|Au\|^{2}+\alpha\|\theta\|^{2}]\leq C_{\delta}|\lambda|^{\frac{1+\tau}{3-\tau}}\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}\qquad{\rm for}\qquad\dfrac{1}{2}\leq\tau\leq 1.$ (63) Moreover, (62) also implies $|\lambda|\beta\|v\|^{2}_{D(A^{\frac{\tau}{2}})}\leq C_{\delta}|\lambda|^{\frac{1+\tau}{3-\tau}}\|F\|_{\mathbb{H}}\|U\|_{\mathbb{H}}\quad{\rm for}\quad\dfrac{1}{2}\leq\tau\leq 1.$ (64) Finally, using (63) and (64), we finish the proof of this theorem. $\Box$

###### Remark 12 (Asymptotic Behavior) Note that a semigroup of Gevrey class has stronger regularity properties than a differentiable semigroup, but is less regular than an analytic semigroup. The Gevrey rate $s>\frac{1}{\phi}$ ‘measures’ the degree of divergence of its power series. It should be noted that the Gevrey class or analyticity of the particular model implies three important properties. The first is the smoothing effect on the initial data, that is, no matter how irregular the initial data are, the solutions of the model are very smooth in positive time. The second property is that the system is exponentially stable. Finally, the system enjoys the property of linear stability, which means that the type of the semigroup is equal to the spectral bound of its infinitesimal generator. Thus, as a consequence of Lemmas 4 and 7, the semigroup $S(t)=e^{\mathbb{B}t}$ is exponentially stable for all $\tau\in[0,1]$.

## References

* [1] G. Avalos and I. Lasiecka, Exponential stability of a thermoelastic system without mechanical dissipation, Rend. Istit. Mat. Univ. Trieste, 28 (1997), pp 1-29. * [2] G. Avalos and I. Lasiecka, Exponential stability of a thermoelastic system with free boundary conditions without mechanical dissipation, SIAM J. Math. Anal., 29 (1998), pp 1-28. * [3] A. Borichev and Y. Tomilov, Optimal polynomial decay of functions and operator semigroups, Math. Ann. 347 (2010), pp 455-478. * [4] K.J. Engel and R. Nagel, One-parameter semigroups for linear evolution equations, Springer (2000). * [5] V. Keyantuo, L. Tebou and M. Warma, A Gevrey Class Semigroup for a Thermoelastic Plate Model with a Fractional Laplacian: Between the Euler-Bernoulli and Kirchhoff Models. Discrete and Continuous Dynamical Systems, Vol. 40, Number 5, May (2020), pp 2875-2889. * [6] I. Lasiecka and R. Triggiani, Analyticity and lack thereof, of thermo-elastic semigroups. In: Control and Partial Differential Equations, ESAIM Proc. Soc. Math. Appl. Indust., Paris, vol. 4, (1998), pp 199-222. * [7] I. Lasiecka and R. Triggiani, Analyticity of thermo-elastic semigroups with free boundary conditions. Ann. Scuola Norm. Sup. Pisa Cl. Sci.
(4), 27 (1998), pp 457-482. * [8] I. Lasiecka and R. Triggiani, Two direct proofs on the analyticity of the s.c semigroup arising in abstract thermo-elastic equations, Adv. Differential Equations, 3 (1998), pp 387-416. * [9] I. Lasiecka and R. Triggiani, Analyticity of thermo-elastic semigroups with coupled hinged/Neumann B.C. Abstr. Appl. Anal., 3 (1998), pp 153-169. * [10] I. Lasiecka and R. Triggiani, Structural decomposition of thermo-elastic semigroups with rotational forces. Semigroup Forum 60 (2000), pp 16-66. * [11] K. S. Liu and Z. Y. Liu, Exponential stability and analyticity of abstract linear thermoelastic systems, Z. Angew. Math. Phys. 48 (1997), pp 885-904. * [12] Z. Liu and M. Renardy, A note on the equations of thermoelastic plate, Appl. Math. Lett., 8, (1995), pp 1-6. * [13] Z. Y. Liu and S. M. Zheng, Exponential stability of the Kirchhoff plate with thermal or viscoelastic damping, Quarterly Appl. Math., 55 (1997), pp 551-564. * [14] Z. Liu and S. Zheng, Semigroups associated with dissipative systems, Chapman & Hall CRC Research Notes in Mathematics, Boca Raton, FL, 398 (1999). * [15] H.P. Oquendo and F.M.S. Suárez, Exact decay rates for coupled plates with partial fractional damping. Zeitschrift für angewandte Mathematik und Physik (ZAMP), (2019), pp 70–88. * [16] A. Pazy, Semigroups of Linear Operators and Applications to Partial Differential Equations, Applied Mathematical Sciences 44, Springer, (1983). * [17] H. D. F. Sare, Z. Liu and R. Racke, Stability of abstract thermoelastic systems with inertial terms, Journal of Differential Equations, Volume 267, Issue 12, 5 December (2019), pp 7085–7134. * [18] B.T.S. Sozzo and J. E. M. Rivera, The Gevrey class of the Euler-Bernoulli beam model, Journal of Mathematical Analysis and Applications, 505 (2022). * [19] S. W. Taylor, Gevrey Regularity of Solutions of Evolution Equations and Boundary Controllability, Thesis (Ph.D.), University of Minnesota, (1989), 182 pp. * [20] L. Tebou, Stabilization of some coupled hyperbolic/parabolic equations. Discrete Contin. Dyn. Syst. Ser. B 14 (2010), pp 1601-1620. * [21] L. Tebou, Stabilization of some coupled hyperbolic/parabolic equations, Discrete and Continuous Dynamical Systems Series B 14, Number 4, November (2010). * [22] L. T. Tebou, Energy decay estimates for some weakly coupled Euler-Bernoulli and wave equations with indirect damping mechanisms. Mathematical Control and Related Fields 2 (2012), pp 45-60. * [23] L. Tebou, Uniform analyticity and exponential decay of the semigroup associated with a thermoelastic plate equation with perturbed boundary conditions, C. R. Math. Acad. Sci. Paris, 351 (2013), pp 539-544. * [24] L. Tebou, Regularity and stability for a plate model involving fractional rotational forces and damping, ZAMP Zeitschrift für angewandte Mathematik und Physik, 72:158 (2021).
# FDO Manager: Minimum Viable FAIR Digital Object Implementation Oussama Zoubia University Hospital of Cologne, Cologne, Germany Zeyd Boukhers Fraunhofer Institute for Applied Information Technology FIT, Sankt Augustin, Germany University Hospital of Cologne, Cologne, Germany Nagaraj Bahubali Asundi Fraunhofer Institute for Applied Information Technology FIT, Sankt Augustin, Germany Sezin Dogan University Hospital of Cologne, Cologne, Germany Adamantios Koumpis University Hospital of Cologne, Cologne, Germany Christoph Lange Fraunhofer Institute for Applied Information Technology FIT, Sankt Augustin, Germany RWTH Aachen University, Aachen, Germany Oya Beyan Fraunhofer Institute for Applied Information Technology FIT, Sankt Augustin, Germany University Hospital of Cologne, Cologne, Germany ###### Abstract The concept of FAIR Digital Objects (FDOs) aims to revolutionise the field of digital preservation and accessibility in the next few years. Central to this revolution is the alignment of FDOs with the FAIR (Findable, Accessible, Interoperable, Reusable) Principles, particularly emphasizing machine- actionability and interoperability across diverse data ecosystems. This abstract introduces the “FDO Manager”, a Minimum Viable Implementation, designed to optimize the management of FDOs following these principles and the FDO specifications111https://fairdo.org/specifications/. The FDO Manager is tailored to manage research artefacts such as datasets, codes, and publications, to foster increased transparency and reproducibility in research. The abstract presents the implementation details of the FDO Manager, its underlying architecture, and the metadata schemas it employs, thereby offering a clear and comprehensive understanding of its functionalities and impact on the research domain. Keywords: FAIR Digital Objects, FDO Manager, Data Preservation, Interoperability, Metadata Schemas, Research Artifact Management ## 1 Introduction Since its introduction in 2018 [CGH+18], the concept of FAIR Digital Objects (FDOs) has seen a remarkable rise in interest within the research and data management communities. FDOs are key in enhancing data preservation and accessibility, contributing to a robust ecosystem for managing digital research resources [DSKW20]. This surge in popularity underscores the increasing relevance of the FAIR (Findable, Accessible, Interoperable, Reusable) Principles in the realm of scientific data management [WDA+16, MNV+17]. The FDO Framework [BdSSSFG23] is built upon the FAIR Guiding Principles [WDA+16], which are designed to enhance the FAIRness of digital objects, focusing on concepts such as unique identification, rich metadata, etc. However, implementing these specifications is challenging due to their purely theoretical nature. The recurring requirement of metadata as FDOs, for instance, can lead to complex scenarios where the implementation becomes a task of balancing the depth of detail with practical usability. To address these challenges, this abstract proposes the FDO Manager, which aims to simplify these complexities, providing a pragmatic approach to implementing FDOs that remains true to the FDO specifications while being feasible and user-friendly for researchers and data curators. This involves optimizing the management of research digital objects [BC23], making them more accessible and usable in various research and data management contexts. Our approach to FDO implementation is minimalistic yet viable. 
It offers an easy-to-use method for employing FDOs, aligning well with the concepts presented in the Reproducible Research Publication Workflow (RRPW) [PBBC22], which emphasizes the importance of FAIR principles in the research publication process. ## 2 FDO Manager The core concept driving the FDO Manager is to present a straightforward and minimalist solution that not only conforms to the FAIR Principles but also addresses the practical challenges of managing complex digital objects and their metadata in a systematic and user-centric manner. Figure 1 presents a refined view of the FDO Manager’s architecture, which outlines the interaction between key components within the FDO ecosystem. The FDO Record, a central node in this structure, is uniquely identified by a Persistent Identifier (PID), which directly points to the associated Digital Object (DO), ensuring unambiguous reference and retrieval. In parallel, the FDO Record also refers to its metadata through a distinct PID, establishing a dual-PID system that differentiates between the DO and its descriptive metadata. The metadata, which describes the DO, is stored independently in a dedicated Metadata Registry, allowing for streamlined management and retrieval. These metadata entries, while connected to the FDO Records, are managed as separate entities, reinforcing the metadata’s role as standalone FDOs. This approach aligns with the FAIR principle of interoperability, allowing the metadata to be utilized across various systems and applications without direct dependency on the DOs they describe. Furthermore, the architecture includes an FDO Registry, a catalog which serves as a comprehensive index, registering each FDO Record. This ensures efficient organization and accessibility of digital objects within the system. The Operation Registry complements this by detailing the range of operations that can be executed on a specific FDO Record, encapsulating the dynamic functionalities the FDO Manager supports. The FDO Manager was designed as a REST API, combining user-friendliness with flexibility. The FDO Manager facilitates various operations (POST, GET, PUT, …) for the efficient management of stored digital object records, including retrieval, addition, modification and deletion of FDO records. In the pursuit of interoperability, our approach leverages the widely recognized Schema.org222https://schema.org/ standards to define the metadata schema for the proposed FDO records, which ensures a standardized and universally understood structure for metadata, enhancing the compatibility and exchange of information within the broader digital ecosystem. The storage mechanism in the FDO Manager goes beyond capturing individual FDOs; it is designed to encapsulate the relationships between them as well. This helps ensure the preservation, reproducibility, and overall utility of FDO records and their digital content. In parallel with the FDO Manager, we have developed a user-friendly FDO Manager Playground interface. This interface serves as a testing ground to explore and assess the functionalities of the FDO Manager, allowing for a comprehensive examination of various endpoints and thorough testing of input and output processes.
For a more in-depth understanding of both the FDO Manager and the FDO Manager Playground functionalities, detailed information can be accessed directly through our documentation website.333https://fdda1.gitlab.io/fdom/ Figure 2 illustrates the proposed schema for the FDO Manager classes, presenting the structure, attributes and relationships of the various classes. The schema introduces four classes: CreativeWork, Service, Person, and Organization, which were derived from Schema.org. Various types of relationships are shown, including simple relationships such as the one between Person and Organization. Conditional relationships are a notable feature of the model; for instance, the CreativeWork class can be associated with either a Person or an Organization, similar to a Service. Additionally, the CreativeWork class has a self-referential relationship, citation, allowing a CreativeWork, whether a dataset, code, or publication, to cite another CreativeWork. These relationships enable a flexible representation of entities, in addition to prioritizing scalability and extensibility to accommodate evolving digital objects. ## 3 Conclusion In conclusion, the FDO Manager emerges as a practical and efficient solution for FDO management. Explored in this paper are its architecture, implementation details, and alignment with the FDO specifications and FAIR Principles. With a focus on research artifacts, the FDO Manager streamlines workflows, enhances persistence, and strengthens discoverability, contributing significantly to open science. Its adoption ensures that research artifacts remain easily accessible, making a valuable contribution to transparent and collaborative scientific practices. ## Author contributions O.Z: Writing – Original Draft Preparation, Methodology, Software, Visualization, Z.B: Writing – Review & Editing, Conceptualization, Methodology, Supervision, Project Administration, N.A: Conceptualization, Methodology, S.D: Visualization, A.K: Conceptualization, Supervision, Project Administration, C.L: Writing – Review, Funding Acquisition, O.B: Conceptualization, Supervision, Project Administration, Funding Acquisition. ## References * [BC23] Zeyd Boukhers and Leyla Jael Castro. Enhancing reproducibility in research through FAIR digital objects. In Proceedings of the Conference on Research Data Infrastructure, volume 1, 2023. * [BdSSSFG23] Luiz Olavo Bonino da Silva Santos, Tiago Prince Sales, Claudenir M. Fonseca, and Giancarlo Guizzardi. Towards a Conceptual Model for the FAIR Digital Object Framework. IOS Press, December 2023. * [CGH+18] Sandra Collins, Francoise Genova, Natalie Harrower, Simon Hodson, Sarah Jones, Leif Laaksonen, Daniel Mietchen, Rūta Petrauskaitė, and Peter Wittenburg. Turning FAIR into reality: Final report and action plan from the european commission expert group on FAIR data. 2018. * [DSKW20] Koenraad De Smedt, Dimitris Koureas, and Peter Wittenburg. FAIR digital objects for science: From data pieces to actionable knowledge units. Publications, 8(2):21, 2020. * [MNV+17] Barend Mons, Cameron Neylon, Jan Velterop, Michel Dumontier, Luiz Olavo Bonino da Silva Santos, and Mark D Wilkinson. Cloudy, increasingly FAIR; revisiting the FAIR data guiding principles for the european open science cloud. Information services & use, 37(1):49–56, 2017. * [PBBC22] Limor Peer, Claudia Biniossek, Dirk Betz, and Thu-Mai Christian. Reproducible research publication workflow: A canonical workflow framework and FAIR digital object approach to quality research output.
Data Intelligence, 4(2):306–319, 2022. * [WDA+16] Mark D Wilkinson, Michel Dumontier, IJsbrand Jan Aalbersberg, Gabrielle Appleton, Myles Axton, Arie Baak, Niklas Blomberg, Jan-Willem Boiten, Luiz Bonino da Silva Santos, Philip E Bourne, et al. The FAIR guiding principles for scientific data management and stewardship. Scientific Data, 3(1):1–9, 2016.

## Appendix A Appendix

Figure 1: The FDO Manager architecture

Figure 2: The FDO Manager Schema
# Strong Screening Rules for Group-based SLOPE Models Fabio Feser Department of Mathematics Imperial College London London, SW7 2AZ <EMAIL_ADDRESS> &Marina Evangelou Department of Mathematics Imperial College London London, SW7 2AZ <EMAIL_ADDRESS> ###### Abstract Tuning the regularization parameter in penalized regression models is an expensive task, requiring multiple models to be fit along a path of parameters. Strong screening rules drastically reduce computational costs by lowering the dimensionality of the input prior to fitting. We develop strong screening rules for group-based Sorted L-One Penalized Estimation (SLOPE) models: Group SLOPE and Sparse-group SLOPE. The developed rules are applicable for the wider family of group-based OWL models, including OSCAR. Our experiments on both synthetic and real data show that the screening rules significantly accelerate the fitting process. The screening rules make it accessible for group SLOPE and sparse-group SLOPE to be applied to high- dimensional datasets, particularly those encountered in genetics. ### 1 Introduction As the amount of data collected increases, the emergence of high-dimensional data, where the number of features ($p$) is much larger than the number of observations ($n$), is becoming increasingly common in fields ranging from genetics to finance. Performing regression and discovering relevant features on these datasets is a challenging task, as classical statistical methods tend to break down. The most popular approach to meeting this challenge is the Lasso [30], which has given rise to the general penalized regression framework $\hat{\beta}(\lambda)=\operatorname*{arg\,min}_{\beta\in\mathbb{R}^{p}}\left\\{f(\beta)+\lambda J(\beta;v)\right\\},$ (1) where $f$ is the loss function, $J$ is a convex penalty norm, $v\succeq 0$ are penalty parameters, and $\lambda>0$ is the regularization parameter. A key aspect of fitting a penalized model is to tune the value of the regularization parameter, which determines the level of sparsity in the fitted model. Several approaches exist for tuning this parameter, including cross- validation [8, 15] and exact solution path algorithms [10], but can often be computationally expensive. Generally, the model is fit along an $l$-length path $\lambda_{1}\geq\ldots\geq\lambda_{l}\geq 0$. Finding approaches for speeding up the model fitting process can lead to large computational savings when tuning. Screening rules are popular methods that reduce the dimensions of the input dimensionality by discarding irrelevant features before the fitting process. Discarding features, especially in high-dimensional settings, has a tremendous impact on the computational costs. Denoting the active set of non-zero parameters at $\lambda_{k+1}$ by $\mathcal{A}_{v}(\lambda_{k+1})=\\{i\in\\{1,\dots,p\\}:\hat{\beta}_{i}(\lambda_{k+1})\neq 0\\}$, the goal of a screening rule is to use the solution at $\lambda_{k}$ to recover a screened set of features, $\mathcal{S}_{v}(\lambda_{k+1})$, which is a superset of $\mathcal{A}_{v}(\lambda_{k+1})$. The screened set can then be used as input for calculating the fitted values. By finding the smallest possible screened set, that still contains the active set, the number of irrelevant features included in the optimization process is reduced. There are two types of screening rules: safe and heuristic. Safe rules, as suggested by the name, provide guarantees that any variables discarded are in fact inactive at the solution. The seminal work of El Ghaoui et al. 
[11] introduced the Safe Feature Elimination Rule (SAFE) for the lasso, in which safe regions were constructed to eliminate non-active features. Other examples of safe rules are given in [2, 25, 34], including sample screening [28]. Heuristic rules, on the other hand, tend to discard more variables, but this increased screening efficiency can lead to active variables being incorrectly discarded. Examples of heuristic screening rules are given in [1, 13, 31]. Amongst the proposed heuristic rules, of particular interest is the strong screening rule framework developed for the lasso [31]. As strong screening rules can lead to violations, Tibshirani et al. [31] complemented their rules with a check of the Karush–Kuhn–Tucker (KKT) [16] optimality conditions to ensure no violations occur. There are also hybrid screening regimes, which combine safe and heuristic rules [32, 37]. A strong screening rule is formulated through the KKT conditions for Equation 1, given by: $\mathbf{0}\in\nabla f(\beta)+\lambda\partial J(\beta;v).$ (2) If the gradient $\nabla f(\hat{\beta}(\lambda_{k+1}))$ was available, the set $\mathcal{A}_{v}(\lambda_{k+1})$ could be identified by checking the subdifferential of the norm at zero. That is, identifying for which variables $-\nabla f(\beta)\in\lambda\partial J(\mathbf{0};v)=\\{x\in\mathbb{R}^{p}:J^{*}(x;\lambda v)\leq 1\\},$ (3) where $J^{*}$ refers to the dual norm of $J$. The subdifferential at zero, $\partial J(\mathbf{0};v)$, can also be seen to be the unit ball of its dual norm. As the gradient at $\lambda_{k+1}$ is not available in practice, an approximation of the gradient is used to find a screened subset of the features, $\mathcal{S}_{v}(\lambda_{k+1})$. The screened variables are combined with the previously active ones to form the reduced input space, $\mathcal{E}_{v}(\lambda_{k+1})=\mathcal{S}_{v}(\lambda_{k+1})\cup\mathcal{A}_{v}(\lambda_{k})$. The fitted values are then calculated by solving Equation 1 using only the variables in $\mathcal{E}_{v}(\lambda_{k+1})$. Optimality of the fitted values is checked for using the KKT conditions (Equation 2), with violations added into $\mathcal{E}_{v}$. This ensures features are not discarded that should be active. #### 1.1 Contributions Over the years, several adaptive extensions to the lasso have been proposed, including the Sorted L-One Penalized Estimation (SLOPE) method [4]. SLOPE applies the sorted $\ell_{1}$ norm $J_{\text{slope}}(\beta;v)=\sum_{i=1}^{p}v_{i}|\beta|_{(i)}$, where $v_{1}\geq\ldots\geq v_{p}\geq 0,|\beta|_{(1)}\geq\ldots\geq|\beta|_{(p)}$. One key advantage of SLOPE is its ability to control the variable false discovery rate (FDR) under orthogonal data. Both safe and strong rules have been proposed for SLOPE [3, 12, 17], as well as exact solution path algorithms [9, 24]. As SLOPE is a non-separable penalty, safe screening rules require repeated screening during the fitting process [17]. This can be a computational bottleneck, as the calculation of the safe regions required for screening are often expensive. Strong screening rules, on the other hand, are usually more time efficient as the extra cost of the screening and KKT checks tend to be outweighed by the significant savings of fitting on a reduced input space [31]. Motivated by this, we introduce strong screening rules for two group-based extensions of SLOPE: Group SLOPE (gSLOPE) [6] and Sparse-group SLOPE (SGS) [14]. 
Group SLOPE (gSLOPE) is an extension of SLOPE for selecting groups, defined by the norm $J_{\text{gslope}}(\beta;w)=\sum_{g=1}^{m}\sqrt{p_{g}}w_{g}\|\beta^{(g)}\|_{2},$ (4) such that $\beta^{(g)}\in\mathbb{R}^{p_{g}}$ is a vector of the group coefficients and $p_{g}$ denotes the group sizes [6]. The norm has ordered penalty weights $w_{1}\geq\ldots\geq w_{g}\geq 0$ (described in Appendix A.1) which are matched to $\sqrt{p_{1}}\|\beta^{(1)}\|_{2}\geq\ldots\geq\sqrt{p_{m}}\|\beta^{(m)}\|_{2}$. Sparse-group SLOPE (SGS) was proposed as an extension to SLOPE and gSLOPE, applying both variable and group penalization for bi-level selection [14]. For $\alpha\in[0,1]$ with variable weights $v$ and group weights $w$ (described in Appendix B.1), the SGS norm is defined as $J_{\text{sgs}}(\beta;\alpha,v,w)=\alpha J_{\text{slope}}(\beta;v)+(1-\alpha)J_{\text{gslope}}(\beta;w).$ Both of these approaches were developed for the analysis of features that naturally belong to groups. This situation is frequently encountered in the analysis of genetics data, where genes are grouped into pathways for the completion of a specific biological task. The gSLOPE approach selects groups of features, whereas SGS performs bi-level selection. Both approaches have been shown to inherit SLOPE’s ability to control FDR under orthogonal data: gSLOPE controls the group FDR [6] and SGS controls both variable and group FDRs [14]. Section 2 introduces a sparse-group screening rule framework. Based on this, strong screening rules for gSLOPE and SGS are developed and presented in Sections 3 and 4. The proofs of the theorems and propositions presented in Sections 2 and 3 are provided in the Appendices A.2 and B.3. Through a series of simulations and the analysis of real datasets (Sections 5), the proposed screening rules are shown to significantly reduce computational runtime. Additionally, due to the reduced input dimensionality offered by screening rules, issues with convergence in the analysis of large datasets are shown to be alleviated. These improvements are achieved without affecting the solution optimality. The screening rules proposed in this manuscript are further applicable to group-based Ordered Weighted $\ell_{1}$ (OWL) models. SLOPE is a particular case of an OWL model through the definition of its weights. As the proposed screening rules only require that the penalty sequences ($v,w$) are ordered, they can be used for any group-based OWL models; for example, the Octagonal Shrinkage and Clustering Algorithm for Regression (OSCAR) model [5]. Results for group-based OSCAR models are provided in Appendix E. ###### Notation. The feature space is divided into a set of $m$ non-overlapping groups, where $\mathcal{G}_{1},\dots,\mathcal{G}_{m}$ denote the set of variable indices for each group. The set of active groups is denoted by $\mathcal{A}_{g}=\\{i\in\\{1,\dots,m\\}:\|\hat{\beta}^{(i)}\|_{2}\neq 0\\}$ whereas $\mathcal{Z}=\\{i\in\\{1,\dots,m\\}:\|\hat{\beta}^{(i)}\|_{2}=0\\}$ presents the inactive groups. The set of variable indices of the active and inactive groups are denoted by $\mathcal{G}_{\mathcal{A}}$ and $\mathcal{G}_{\mathcal{Z}}$, respectively. The cardinality of a vector $x$ is denoted by $\operatorname{card}(x)$. The operators $(\cdot)_{\downarrow}$ and $(\cdot)_{|\downarrow|}$ sort a vector into decreasing and decreasing absolute form. We use $\preceq$ to denote element-wise less than or equal to. The operator $\mathcal{O}(\cdot)$ returns the index of a vector sorted in decreasing absolute form. 
The cumulative summation operator applied to a vector is denoted by $\text{cumsum}(x)=[x_{1},x_{1}+x_{2},\ldots,\sum_{i=1}^{\operatorname{card}(x)}x_{i}]$.

### 2 Screening rule framework

Algorithm 1 Sparse-group screening framework
Input: $\lambda\in\mathbb{R}^{l}$, $\mathbf{X}\in\mathbb{R}^{n\times p},y\in\mathbb{R}^{n}$
for $k=1$ to $l-1$ do
 $\mathcal{S}_{g}(\lambda_{k+1})$ $\leftarrow$ group screening on full input
 $\mathcal{S}_{v}(\lambda_{k+1})$ $\leftarrow$ variable screening on $g\in\mathcal{S}_{g}(\lambda_{k+1})$
 $\mathcal{E}_{v}\leftarrow\mathcal{S}_{v}(\lambda_{k+1})\cup\mathcal{A}_{v}(\lambda_{k})$
 compute $\hat{\beta}_{\mathcal{E}_{v}}(\lambda_{k+1})$
 $\mathcal{K}_{v}\leftarrow$ variable KKT violations on $\hat{\beta}(\lambda_{k+1})$
 while $\operatorname{card}(\mathcal{K}_{v})>0$ do
  $\mathcal{E}_{v}\leftarrow\mathcal{E}_{v}\cup\mathcal{K}_{v}$
  compute $\hat{\beta}_{\mathcal{E}_{v}}(\lambda_{k+1})$
  $\mathcal{K}_{v}\leftarrow$ variable KKT violations on $\hat{\beta}(\lambda_{k+1})$
 end while
end for
Output: $\hat{\beta}(\lambda_{1}),\ldots,\hat{\beta}(\lambda_{l})\in\mathbb{R}^{p\times l}$

Sparse-group models apply both variable and group penalization, such that Equation 1 is extended to include an additional convex penalty function. Examples of sparse-group models include the Sparse-group Lasso (SGL) [29] and SGS [14]. Safe screening rules that perform bi-level screening, i.e. both group and variable screening, are available for SGL [21, 33], but no strong rules perform bi-level screening. The strong screening rule framework proposed by Tibshirani et al. [31] does not fully extend to sparse-group or non-separable penalties, whilst the strong screening rule derived for SGL in Liang et al. [19] applies only group-level screening.

###### Framework.

We introduce a new framework (Algorithm 1) for applying strong screening rules to sparse-group penalties, allowing for bi-level screening, based on the framework introduced by Tibshirani et al. [31]. In the sparse-group framework, a screened set of groups, $\mathcal{S}_{g}$, is computed from the full input. An additional layer of screening is then performed to compute $\mathcal{S}_{v}$ using $\mathcal{S}_{g}$. This forms the reduced input set for fitting, $\mathcal{E}_{v}$. The aim of finding $\mathcal{E}_{v}$ is the same as for the lasso and SLOPE strong rules, but the framework first discards irrelevant groups. KKT checks are performed on the solution fitted using $\mathcal{E}_{v}$ to ensure that no violations have occurred. The framework applied to gSLOPE and SGS is presented in Appendix D.

Based on the framework described in Algorithm 1, the screening rules for SGS are derived (Section 4). The screening rule for gSLOPE (Section 3) follows a similar framework, where $\mathcal{S}_{v}$ is taken as all variables within the groups of $\mathcal{S}_{g}$. The KKT checks are then performed only on the groups. By applying bi-level screening for SGS, a substantially larger proportion of variables is discarded than if only group screening were applied (Figure 1). Using only group screening for SGS, without the additional variable screening step, results in the number of variables in the screened set scaling poorly with increased dimensionality. This demonstrates that bi-level screening effectively manages the scaling of SGS under rising dimensions.

(a) Applied to synthetic data (Section 5.1). Data generated under a linear model for $p=500,5000$. The results are averaged over $100$ repetitions.
(b) Applied to two real genetics datasets (Section 5.2), with groups formed using pathways. The results are averaged over the nine pathway collections.

Figure 1: The proportion of variables in $\mathcal{S}_{v}$ relative to the full input for SGS, shown for group and bi-level screening, plotted as a function of the regularization path, with $95\%$ confidence intervals.

### 3 Group SLOPE

The strong screening rule for gSLOPE presented in this section is formulated by computing the zero condition of its subdifferential (as per Equation 3) derived in Theorem 3.1. To derive the subdifferential, we define the operator

$[b]_{\mathcal{G},q}:=(p_{1}^{q}\|b^{(1)}\|_{2},\dots,p_{m}^{q}\|b^{(m)}\|_{2})^{\top}.$

In particular, $[b]_{\mathcal{G}_{\mathcal{Z}},-0.5}$ is the operator applied only to the inactive groups using the exponent $q=-0.5$.

###### Theorem 3.1 (gSLOPE subdifferential).

The subdifferential for gSLOPE is given by

$\partial J_{\text{gslope}}(\beta;w)=\begin{dcases}\\{x\in\mathbb{R}^{\operatorname{card}(\mathcal{G}_{\mathcal{Z}})}:{[x]}_{\mathcal{G}_{\mathcal{Z}},-0.5}\in\partial J_{\text{slope}}(0;w_{\mathcal{Z}})\\},&\text{at }\mathbf{0},\\\ w_{g}\sqrt{p_{g}}\frac{\beta^{(g)}}{\|\beta^{(g)}\|_{2}},&\text{otherwise.}\end{dcases}$

The choice of $q=-0.5$ leads to $J_{\text{gslope}}^{*}(x;w)=J_{\text{slope}}^{*}([x]_{\mathcal{G},-0.5})$, which allows the gSLOPE subdifferential to be written in terms of the SLOPE one [6]. Combining the KKT conditions (Equation 2) with the gSLOPE subdifferential (Theorem 3.1) reveals that a group is zero if $h(\lambda):=([\nabla f(\hat{\beta}(\lambda))]_{\mathcal{G},-0.5})_{\downarrow}\in\partial J_{\text{slope}}(\mathbf{0};\lambda w_{\mathcal{Z}})$. Using the subdifferential of SLOPE [17], this is given by

$\text{cumsum}(h(\lambda)-\lambda w_{\mathcal{Z}})\preceq\mathbf{0}.$ (5)

This condition is checked efficiently using the algorithm proposed for the SLOPE strong rule (Algorithm A1), leading to the strong rule for gSLOPE (Proposition 3.2). The algorithm assumes that the indices of the inactive predictors are ordered last in the input $c$ and in the sorted features $|\hat{\beta}|_{\downarrow}$ [17].

###### Proposition 3.2 (Strong screening rule for gSLOPE).

Taking $c=h(\lambda_{k+1})$ and $\phi=\lambda_{k+1}w$ as inputs for Algorithm A1 returns a superset $\mathcal{S}_{g}(\lambda_{k+1})$ of the active set $\mathcal{A}_{g}(\lambda_{k+1})$.

The gradient at path value $k+1$ is not available for the computation of $h(\lambda_{k+1})$, so an approximation is required that does not lead to any violations in Algorithm A1. By the cumsum condition in this algorithm, an approximation for a group $g$ is sought such that $h_{g}(\lambda_{k+1})\leq h_{g}(\lambda_{k})+R_{g}$, where $R_{g}\geq 0$ needs to be determined. An approximation is found by assuming that $h_{g}(\lambda_{k+1})$ is a Lipschitz function of $\lambda_{k+1}$ with respect to the $\ell_{1}$ norm, that is, $\left|h_{g}(\lambda_{k+1})-h_{g}(\lambda_{k})\right|\leq w_{g}|\lambda_{k+1}-\lambda_{k}|.$ By the reverse triangle inequality, $|h_{g}(\lambda_{k+1})|\leq|h_{g}(\lambda_{k})|+\lambda_{k}w_{g}-\lambda_{k+1}w_{g},$ leading to the choice $R_{g}=\lambda_{k}w_{g}-\lambda_{k+1}w_{g}$ and the gradient approximation strong screening rule.
###### Proposition 3.3 (Gradient approximation strong screening rule for gSLOPE).

Taking $c=h(\lambda_{k})+\lambda_{k}w-\lambda_{k+1}w$ and $\phi=\lambda_{k+1}w$ as inputs for Algorithm A1, and assuming that for any $k\in\\{1,\dots,l-1\\}$, $\left|h_{g}(\lambda_{k+1})-h_{g}(\lambda_{k})\right|\leq w_{g}|\lambda_{k+1}-\lambda_{k}|,\;\forall g=1,\dots,m,$ and $\mathcal{O}(h(\lambda_{k+1}))=\mathcal{O}(h(\lambda_{k}))$, then the algorithm returns a superset $\mathcal{S}_{g}(\lambda_{k+1})$ of the active set $\mathcal{A}_{g}(\lambda_{k+1})$.

### 4 Sparse-group SLOPE

This section presents the group and variable screening rules for SGS. They are derived using the SGS KKT conditions, formulated in terms of SLOPE and gSLOPE (by the sum rule of subdifferentials):

$-\nabla f(\beta)\in\lambda\alpha\partial J_{\text{slope}}(\beta;v)+\lambda(1-\alpha)\partial J_{\text{gslope}}(\beta;w).$ (6)

#### 4.1 Group screening

For inactive groups, the KKT conditions (Equation 6) for SGS are

$(\nabla f(\beta)+\lambda\alpha\partial J_{\text{slope}}(\mathbf{0};v))_{\mathcal{G}_{\mathcal{Z}}}\in\lambda(1-\alpha)\partial J_{\text{gslope}}(\mathbf{0};w_{\mathcal{Z}})\;\underset{\text{by Equation 5}}{\implies}\;\text{cumsum}\Bigl{(}\bigl{(}[\nabla f(\beta)+\lambda\alpha\partial J_{\text{slope}}(\mathbf{0};v)]_{\mathcal{G}_{\mathcal{Z}},-0.5}\bigr{)}_{\downarrow}-\lambda(1-\alpha)w_{\mathcal{Z}}\Bigr{)}\preceq\mathbf{0}.$ (7)

The problem reduces to that of the gSLOPE screening rule (Section 3), with input given by $c=([\nabla f(\beta)+\lambda\alpha\partial J_{\text{slope}}(\mathbf{0};v)]_{\mathcal{G},-0.5})_{\downarrow}$ and $\phi=\lambda(1-\alpha)w$. To determine the form of the quantity $\partial J_{\text{slope}}(\mathbf{0};v)$, the term inside the $[\cdot]_{\mathcal{G}_{\mathcal{Z}},-0.5}$ operator needs to be as small as possible for Equation 7 to be satisfied. Choosing the subgradient that minimizes this term yields the soft-thresholding operator, $S(\nabla f(\beta),\lambda\alpha):=\text{sign}(\nabla f(\beta))(|\nabla f(\beta)|-\lambda\alpha)_{+}$ (the derivation is presented in Appendix B.2). By using the soft-thresholding operator, a valuable connection between SGS and SGL is made, as this operator is used in the gradient update step for SGL [29]. Such a connection has the potential to lead to new and more efficient optimization approaches for SGS that are more closely related to those used to solve SGL, similar to the recently developed coordinate descent algorithm for SLOPE [18]. Using this function, the (non-approximated) strong group screening rule for SGS is given in Proposition B.1. Using a similar Lipschitz assumption as for the gSLOPE rule gives the gradient approximation strong group screening rule for SGS (Proposition 4.1).

###### Proposition 4.1 (Gradient approximation strong group screening rule for SGS).

Let $\tilde{h}(\lambda):=([S(\nabla f(\hat{\beta}(\lambda)),\lambda\alpha)]_{\mathcal{G},-0.5})_{\downarrow}$. Taking $c=\tilde{h}(\lambda_{k})+\lambda_{k}(1-\alpha)w-\lambda_{k+1}(1-\alpha)w$ and $\phi=\lambda_{k+1}(1-\alpha)w$ as inputs for Algorithm A1, and assuming that for any $k\in\\{1,\dots,l-1\\}$, $\left|\tilde{h}_{g}(\lambda_{k+1})-\tilde{h}_{g}(\lambda_{k})\right|\leq(1-\alpha)w_{g}|\lambda_{k+1}-\lambda_{k}|,\forall g=1,\dots,m,$ and $\mathcal{O}(\tilde{h}(\lambda_{k+1}))=\mathcal{O}(\tilde{h}(\lambda_{k}))$, then the algorithm returns a superset $\mathcal{S}_{g}(\lambda_{k+1})$ of $\mathcal{A}_{g}(\lambda_{k+1})$.
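As an illustration of how the group-level rules above can be applied, the following Python sketch implements the block-wise cumulative-sum check of Algorithm A1 (Appendix C, from [17]) and uses it for the gradient-approximation gSLOPE rule of Proposition 3.3. This is a minimal sketch rather than the implementation used for the experiments in this manuscript; the function names and the encoding of groups as a list of index arrays are illustrative assumptions.

```python
import numpy as np

def strong_rule_cumsum(c, phi):
    # Block-wise cumulative-sum check (Algorithm A1 in Appendix C, from [17]).
    # c and phi are assumed matched and sorted so that phi is non-increasing;
    # indices left in the final, untriggered block can be discarded.
    screened, block, running = [], [], 0.0
    for i in range(len(c)):
        block.append(i)
        running += c[i] - phi[i]
        if running >= 0:
            screened.extend(block)
            block, running = [], 0.0
    return np.array(screened, dtype=int)

def gslope_group_screen(grad, groups, w, lam_k, lam_next):
    # Sketch of the gradient-approximation strong rule for gSLOPE (Proposition 3.3).
    # `grad` is the gradient at the previous solution, `groups` a list of index
    # arrays (one per group), `w` the non-increasing gSLOPE weight sequence.
    p_g = np.array([len(g) for g in groups], dtype=float)
    h = np.array([np.linalg.norm(grad[g]) for g in groups]) / np.sqrt(p_g)
    order = np.argsort(-h)                        # sort groups by decreasing h
    c = h[order] + (lam_k - lam_next) * w         # c = h(lam_k) + lam_k*w - lam_{k+1}*w
    kept = strong_rule_cumsum(c, lam_next * w)    # phi = lam_{k+1}*w
    return set(order[kept].tolist())              # candidate (screened) group indices
```

The SGS group rule of Proposition 4.1 follows the same pattern, replacing the gradient by its soft-thresholded version $S(\nabla f,\lambda\alpha)$ and scaling the weights by $(1-\alpha)$.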
#### 4.2 Variable screening

Whilst the group screening for SGS returns a superset, $\mathcal{S}_{g}$, of the active group set, exploiting the sparse-group penalization of SGS allows for the screened set to be further reduced by also applying variable screening. The KKT conditions (Equation 6) for a zero variable, $j\in\mathcal{G}_{g}$, in an active group, $g\in\mathcal{A}_{g}$, are

$-\nabla_{j}f(\beta)\in\lambda\alpha\partial J_{\text{slope}}(\mathbf{0};v_{j}).$ (8)

The gSLOPE subdifferential term vanishes, as the corresponding numerator $\beta_{j}$ in Theorem 3.1 is zero. The problem reduces to that of SLOPE variable screening, applied only to the variables that belong to groups in $\mathcal{A}_{g}$ and scaled by $\alpha$. The gradient-approximation rule is formalized in Proposition 4.2 (the non-approximated version is presented in Proposition B.2).

###### Proposition 4.2 (Gradient approximation strong variable screening rule for SGS).

Let $\bar{h}(\lambda)=(\nabla f(\hat{\beta}(\lambda)))_{|\downarrow|}$. Taking $c=|\bar{h}(\lambda_{k})|+\lambda_{k}\alpha v-\lambda_{k+1}\alpha v$ and $\phi=\lambda_{k+1}\alpha v$ for only the variables in the groups in $\mathcal{A}_{g}(\lambda_{k+1})$ as inputs for Algorithm A1, and assuming that for any $k\in\\{1,\dots,l-1\\}$, $\left|\bar{h}_{j}(\lambda_{k+1})-\bar{h}_{j}(\lambda_{k})\right|\leq\alpha v_{j}|\lambda_{k+1}-\lambda_{k}|,\forall j\in\mathcal{G}_{\mathcal{A}_{g}(\lambda_{k+1})},$ and $\mathcal{O}(\bar{h}(\lambda_{k+1}))=\mathcal{O}(\bar{h}(\lambda_{k}))$, then the algorithm returns a superset $\mathcal{S}_{v}(\lambda_{k+1})$ of $\mathcal{A}_{v}(\lambda_{k+1})$.

In practice $\mathcal{A}_{g}(\lambda_{k+1})$ is not available, as this is exactly the set that the screening rule aims to cover. However, Proposition 4.1 states that it is contained within $\mathcal{S}_{g}(\lambda_{k+1})$, so the screened set can be used in its place, complemented by the KKT checks (which are performed anyway) to ensure that no violations occur. For these checks, the group KKT conditions (Equation 6) are verified first. For any active groups or groups violating the check (indicating that they should be active), the variable KKT conditions (Equation 8) are checked on the variables within those groups, to identify those that need to be added to $\mathcal{E}_{v}$.

### 5 Results

In this section, the effectiveness of the screening rules for gSLOPE and SGS is illustrated through the analysis of both synthetic (Section 5.1) and real data (Section 5.2). Any mentions of $\mathcal{E},\mathcal{A}$ in reference to group and variable metrics refer to $\mathcal{E}_{g},\mathcal{A}_{g}$ and $\mathcal{E}_{v},\mathcal{A}_{v}$ respectively. For SGS, $\mathcal{E}_{g}$ denotes the groups with at least one variable in $\mathcal{E}_{v}$ (the groups that are fitted). Computational information is presented in Appendix F.1.

#### 5.1 Synthetic data analysis

###### Set up.

For the synthetic data, a multivariate Gaussian design matrix, $\mathbf{X}\sim\mathcal{N}(\mathbf{0},\boldsymbol{\Sigma})\in\mathbb{R}^{400\times p}$, was used, with the within-group correlation set to $\Sigma_{i,j}=\rho$ for $i$ and $j$ belonging to the same group. The correlation and number of features were varied over $\rho\in\\{0,0.3,0.6,0.9\\}$ and $p\in\\{500,1625,2750,3875,5000\\}$, producing $20$ simulation cases. Each simulation case was repeated $100$ times. Two models were considered: linear and logistic.
For the linear model, the output was generated as $y=\mathbf{X}\beta+\mathcal{N}(0,1)$, and for the logistic model the class probabilities were calculated using $\sigma(\mathbf{X}\beta+\mathcal{N}(0,1))$, where $\sigma$ is the sigmoid function. Groups of sizes between $3$ and $25$ were considered, of which $15\%$ were set as active. Within each group, $30\%$ of the variables were set as active, with the signal sampled from $\beta\sim\mathcal{N}(0,5)$. gSLOPE and SGS were each fit along a log-linear path of $l=50$ regularization parameters using warm starts. The first path value $\lambda_{1}$ was set to the exact value at which the first group enters the model (given in Appendix A.3 for gSLOPE and Appendix B.4 for SGS). The final regularization value was set to $\lambda_{50}=0.05\lambda_{1}$. The data was $\ell_{2}$-standardized and, for the linear model, an intercept was fitted. Both models had FDR-control parameters set to $0.05$, and $\alpha=0.95$ for SGS. Even though the models were fit using the adaptive three operator splitting algorithm (ATOS) [26], it should be noted that the screening rules are independent of the choice of fitting algorithm. The results for the linear model are presented here; the corresponding plots for the logistic model are given in Appendix F.3.2.

###### Screening efficiency.

By comparing the sizes of the fitting set $(\mathcal{E})$ to the active set ($\mathcal{A})$, we can compare the minimum dimensionality needed with the dimensionality provided by the screening rules. As expected, these sizes increase as $\lambda$ decreases (Figure 2). In particular, the gap between sets widens as more features enter the model. The size of $\mathcal{E}$ remains far below the size of the full input space even at the lowest value of $\lambda$, showing that the screening rules have a benefit along the whole path. This behaviour is independent of $\rho$, $p$, and the fitted model (see Appendix F.3).

Figure 2: The number of groups/variables in $\mathcal{E},\mathcal{A}$ for both gSLOPE and SGS as a function of the regularization path for the linear model with $p=2750,\rho=0.6,m=197$. The results are averaged over $100$ repetitions, with the shaded regions corresponding to $95\%$ confidence intervals.

Figure 3: The proportion of groups/variables in $\mathcal{E},\mathcal{A}$, relative to the full input, shown for gSLOPE and SGS. This is shown as a function of the correlation ($\rho$), averaged over all cases of the input dimension ($p$), with $100$ repetitions for each $p$, for both linear and logistic models, with standard errors shown.

The performance of the screening rules is generally similar across linear and logistic models (Figure 3). As the within-group correlation increases, the sizes of both the screened and active sets decrease, since the variables within a group increasingly move in the same direction. Due to the shape of the convex penalty of SLOPE, highly correlated features are clustered together [36]. This leads to smaller active sets, and in turn, also smaller screened sets. Additionally, it can be seen that the size difference between the screened and active sets decreases with correlation. Across the different correlation values, the linear model had smaller set sizes than the logistic one.

###### Runtime performance.

A key metric of performance for a screening rule is the time taken to fit a path of models.
Increasing the value of $p$ demonstrates the computational cost reduction of applying a screening rule (Figure 4). For the smaller values of $p$, the runtime tends to be similar whether screening is applied or not. However, once the input dimension increases, the benefit of screening can be seen clearly. Additionally, increasing the within-group correlation tends to lead to smaller active and screened sets, but the benefit of screening appears to be robust under different correlations.

Figure 4: Runtime (in seconds) for fitting $50$ models along a path, shown for screening against no screening as a function of $p$, broken down into different correlation cases, for the linear model. The results are averaged over $100$ repetitions, with standard errors shown.

Figure 5: Runtime (in seconds) for fitting $100$ models along a path for screening against no screening for gSLOPE and SGS, averaged across the nine collections of pathways for each dataset, with standard errors shown.

A clear improvement with regard to runtime can be seen when aggregating the results across all simulation cases (Table 1). In particular, for both models considered, screening more than halves the runtime for fitting $50$ SGS models.

#### 5.2 Real data experiments

Table 1: Runtime (in seconds) for fitting $50$ models along a path, shown for screening against no screening, for the linear and logistic models. The results are averaged across all cases of the correlation ($\rho$) and dimensionality ($p$), with standard errors shown.

Method | Type | Screen (s) | No screen (s)
---|---|---|---
gSLOPE | Linear | $1016\pm 21$ | $1623\pm 27$
gSLOPE | Logistic | $814\pm 8$ | $1409\pm 11$
SGS | Linear | $735\pm 15$ | $1830\pm 34$
SGS | Logistic | $407\pm 2$ | $859\pm 6$

Table 2: The sizes of the sets $\mathcal{A}$ and $\mathcal{E}$ (with standard errors), averaged across $100$ path values and pathway collections for each of the two datasets. SGS var/grp shows the corresponding metrics for $\mathcal{A}_{v},\mathcal{E}_{v}$ and $\mathcal{A}_{g},\mathcal{E}_{g}$ for the bi-level screening procedure.

Method | Dataset | $\operatorname{card}(\mathcal{A})$ | $\operatorname{card}(\mathcal{E})$
---|---|---|---
gSLOPE | Cancer | $66\pm 1$ | $120\pm 2$
gSLOPE | Colitis | $32\pm 1$ | $56\pm 2$
SGS grp | Cancer | $50\pm 1$ | $115\pm 2$
SGS grp | Colitis | $18\pm 1$ | $87\pm 3$
SGS var | Cancer | $71\pm 1$ | $212\pm 4$
SGS var | Colitis | $25\pm 1$ | $209\pm 6$

###### Datasets.

The screening rules were applied to two gene expression datasets with binary responses. The datasets were previously analyzed to measure the predictive performance of sparse-group regression models on large datasets [14, 29]. The cancer dataset contains gene data for $60$ breast cancer patients who had been treated with tamoxifen for 5 years, classified into binary labels depending on whether the cancer recurred [20]. The colitis dataset uses transcriptional profiles in peripheral blood mononuclear cells of $127$ patients to determine which patients have an inflammatory bowel disease (ulcerative colitis or Crohn’s). Of the $127$ samples, $42$ are controls [7]. The genes of both datasets were assigned to pathways (groups) that were obtained from the Molecular Signature Database (https://www.gsea-msigdb.org/gsea/msigdb/human/collections.jsp). Nine collections of pathways were downloaded, and as the collections include different genes, nine unique design matrices were created for each dataset, each with its own grouping structure.
This gave $18$ design matrices in total to which the screening rules were applied. Both gSLOPE and SGS were fitted with their FDR-control parameters set to $0.01$ and, for SGS, $\alpha=0.99$. Each model was fitted along a path of $100$ regularization parameters, with $\lambda_{1}$ set as described in Appendices A.3 and B.4, and $\lambda_{100}=0.01\lambda_{1}$. The data was $\ell_{2}$-standardized and no intercept was used.

###### Results.

For both gSLOPE and SGS, the screening rules lead to considerably faster runtimes (Figure 5). In particular, the use of the screening rules significantly benefits the gSLOPE approach. With the reduced input feature space, any issues with convergence of the fitting algorithm are substantially reduced. For the cancer dataset, $186$ model fits (out of the $900$) did not reach convergence without screening, compared to only $16$ with screening. A similar trend occurred for the colitis dataset. As gSLOPE applies no variable penalization, it is forced to fit all variables within a group. For datasets with large groups, such as those considered here, this leads to a problematic fitting process that can include many noisy variables.

The analysis of the real data further illustrates the benefits of bi-level screening for the runtime and performance of SGS. Figure 1(b) illustrates that for these large genetics datasets, the bi-level screening allows the input dimensionality for SGS to be reduced to a much greater extent than by group screening alone. Both screening rules can drastically reduce the input dimensionality, as seen in Table 2. For the colitis dataset, the SGS screening rule reduces the input dimensionality to just $2\%$ of the full space. This comes at no cost to solution accuracy: the estimated model coefficients with and without screening were very close to each other, with $\ell_{2}$ distances of order $10^{-8}$ (Appendices F.2 and F.4).

#### 5.3 KKT violations

The screening rules rely on assumptions that can fail. In these cases, KKT checks are implemented to ensure that no active variables are excluded from $\mathcal{E}_{v}$. A KKT violation occurs when a variable is not included in the screening set, $\mathcal{S}_{v}$, but should be according to the KKT optimality conditions (Equation 2), in which case it is added into the fitting set, $\mathcal{E}_{v}$. KKT violations are very rare for gSLOPE (Figure 6), occurring only infrequently on the simulated data, toward the start of the path. On the real data, no KKT violations occurred for gSLOPE.

(a) Group violations for gSLOPE and variable violations for SGS, fitted to linear models, averaged over all cases of $p$ and the correlation ($\rho$).

(b) Variable violations for SGS applied to the cancer and colitis datasets, averaged over the nine pathway collections.

Figure 6: The proportion of KKT violations relative to the full input space, as a function of the regularization path.

For SGS, KKT violations are more common, as the additional layer of screening carries with it additional assumptions. Additionally, the subdifferential term in Equation 7 was chosen to be as small as possible, leading to tighter screened sets. The increased number of violations reflects this. On both the simulated and real datasets, SGS recorded an increasing number of KKT violations as the fitted models became denser (Figure 6). The shape of the increasing number of KKT violations mirrors the log-linear shape of the regularization path.
The trend of increasing KKT violations with denser models is also observed for the SLOPE strong screening rule [17]. Despite the increased number of KKT violations for SGS, which adds the computational cost of performing the KKT checks and the repeated sub-fitting in Algorithm 1, the overall screening procedure still results in substantial runtime improvements, as evidenced in Tables 1 and 2. This shows that screening rules can provide improvements to computational cost, even with assumptions that are often violated.

### 6 Discussion

In this manuscript, we have developed strong screening rules for group-based SLOPE models: Group SLOPE and Sparse-group SLOPE, neither of which previously had any screening rules. The screening rule for gSLOPE screens out irrelevant groups before fitting. The screening rules for SGS perform bi-level screening, based on our proposed sparse-group screening framework. The two proposed screening rules differ from the existing SLOPE screening rules both in construction and in final outcome. gSLOPE only performs screening through the groups, forcing it to keep all variables within the screened groups, even noisy ones, in the optimization process. SGS, similarly to SLOPE and in contrast to gSLOPE, performs variable screening after its group screening step. As additional assumptions are required for SGS to perform bi-level screening, its screening is not as computationally efficient as that observed with the SLOPE screening rules [17]. The superiority of SGS in variable selection and predictive performance when group information is available outweighs the additional computational burden [14].

Through comprehensive analysis of synthetic and real data, we illustrate that the screening rules lead to dramatic improvements in the runtime of gSLOPE and SGS models, as well as for group-based OSCAR models (Appendix E). This is achieved without affecting model accuracy, and is particularly important for datasets where $p\gg n$, such as genetics datasets, which are the main motivation behind SLOPE [4]. The screening rules presented in this manuscript allow group-based SLOPE, and by extension, group-based OWL models, to achieve computational fitting times more in line with lasso-based models, making them more widely accessible.

###### Limitations.

One of the limitations of our proposed screening rules for SGS is the number of assumptions made. A future direction is the exploration of alternative strong screening rules that require fewer assumptions. Alternatively, the development of safe rules, which guarantee that only inactive variables are discarded, could be an avenue for future exploration, potentially incorporating safe and strong rules together in a hybrid rule [32, 37]. A comparison between safe rules and the strong rules developed in this manuscript could provide further insight into which type of screening is most beneficial for non-separable penalties.

### References

* Ahmed and Bajwa [2019] Talal Ahmed and Waheed U. Bajwa. Exsis: Extended sure independence screening for ultrahigh-dimensional linear models. _Signal Processing_ , 159:33–48, 2019. ISSN 0165-1684. doi: https://doi.org/10.1016/j.sigpro.2019.01.018.
* Atamturk and Gomez [2020] Alper Atamturk and Andres Gomez. Safe screening rules for $\ell_{0}$-regression from perspective relaxations. In _Proceedings of the 37th International Conference on Machine Learning_ , pages 421–430. PMLR, 2020.
* Bao et al. [2020] Runxue Bao, Bin Gu, and Heng Huang.
Fast OSCAR and OWL Regression via Safe Screening Rules. In _Proceedings of the 37th International Conference on Machine Learning_ , pages 653–663. PMLR, 2020. * Bogdan et al. [2015] Małgorzata Bogdan, Ewout van den Berg, Chiara Sabatti, Weijie Su, and Emmanuel J Candès. SLOPE—Adaptive variable selection via convex optimization. _The Annals of Applied Statistics_ , 9(3), 2015. ISSN 1932-6157. doi: 10.1214/15-AOAS842. * Bondell and Reich [2008] Howard D. Bondell and Brian J. Reich. Simultaneous Regression Shrinkage, Variable Selection, and Supervised Clustering of Predictors with OSCAR. _Biometrics_ , 64(1):115–123, 2008. ISSN 0006-341X. doi: 10.1111/j.1541-0420.2007.00843.x. * Brzyski et al. [2019] Damian Brzyski, Alexej Gossmann, Weijie Su, and Małgorzata Bogdan. Group SLOPE – Adaptive Selection of Groups of Predictors. _Journal of the American Statistical Association_ , 114(525):419–433, 2019. doi: 10.1080/01621459.2017.1411269. * Burczynski et al. [2006] Michael E Burczynski, Ron L Peterson, Natalie C Twine, Krystyna A Zuberek, Brendan J Brodeur, Lori Casciotti, Vasu Maganti, Padma S Reddy, Andrew Strahs, Fred Immermann, Walter Spinelli, Ulrich Schwertschlag, Anna M Slager, Monette M Cotreau, and Andrew J Dorner. Molecular Classification of Crohn’s Disease and Ulcerative Colitis Patients Using Transcriptional Profiles in Peripheral Blood Mononuclear Cells. _The Journal of Molecular Diagnostics_ , 8(1):51–61, 2006. ISSN 15251578. doi: 10.2353/jmoldx.2006.050079. * Chetverikov et al. [2021] Denis Chetverikov, Zhipeng Liao, and Victor Chernozhukov. On cross-validated Lasso in high dimensions. _The Annals of Statistics_ , 49(3):1300 – 1317, 2021. doi: 10.1214/20-AOS2000. * Dupuis and Tardivel [2023] Xavier Dupuis and Patrick J C Tardivel. The Solution Path of SLOPE. Working paper, 2023. URL https://hal.science/hal-04100441. * Efron et al. [2004] Bradley Efron, Trevor Hastie, Iain Johnstone, and Robert Tibshirani. Least angle regression. _The Annals of Statistics_ , 32(2):407–499, 2004. doi: 10.1214/009053604000000067. * El Ghaoui et al. [2010] Laurent El Ghaoui, Vivian Viallon, and Tarek Rabbani. Safe feature elimination in sparse supervised learning. Technical Report UCB/EECS-2010-126, EECS Department, University of California, Berkeley, 2010. * Elvira and Herzet [2021] Clément Elvira and Cédric Herzet. Safe rules for the identification of zeros in the solutions of the SLOPE problem. _SIAM Journal on Mathematics of Data Science_ , 5(1):147–173, 2021. doi: 10.1137/21m1457631. * Fan and Lv [2008] Jianqing Fan and Jinchi Lv. Sure independence screening for ultrahigh dimensional feature space. _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_ , 70(5):849–911, 2008. doi: https://doi.org/10.1111/j.1467-9868.2008.00674.x. * Feser and Evangelou [2023] Fabio Feser and Marina Evangelou. Sparse-group SLOPE: adaptive bi-level selection with FDR-control. _arXiv preprint arXiv:2305.09467_ , 2023. * Homrighausen and McDonald [2018] Darren Homrighausen and Daniel J. McDonald. A study on tuning parameter selection for the high-dimensional lasso. _Journal of Statistical Computation and Simulation_ , 88(15):2865–2892, 2018. doi: 10.1080/00949655.2018.1491575. * Kuhn and Tucker [1950] H W Kuhn and A W Tucker. Nonlinear programming. In _Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability_ , pages 481–492, Berkeley, Los Angeles, USA, 1950. University of California Press. * Larsson et al. [2020] Johan Larsson, Małgorzata Bogdan, and Jonas Wallin. 
The strong screening rule for SLOPE. In _Advances in Neural Information Processing Systems_ , volume 33, pages 14592–14603. Curran Associates, Inc., 2020. * Larsson et al. [2022] Johan Larsson, Quentin Klopfenstein, Mathurin Massias, and Jonas Wallin. Coordinate Descent for SLOPE. _Proceedings of Machine Learning Research_ , 206:4802–4821, 2022. ISSN 26403498. * Liang et al. [2022] Xiaoxuan Liang, Aaron Cohen, Anibal Solón Heinsfeld, Franco Pestilli, and Daniel J. McDonald. sparsegl: An R Package for Estimating Sparse Group Lasso. _arXiv preprint arXiv:2208.02942_ , 2022. * Ma et al. [2004] Xiao-Jun Ma, Zuncai Wang, Paula D Ryan, Steven J Isakoff, Anne Barmettler, Andrew Fuller, Beth Muir, Gayatry Mohapatra, Ranelle Salunga, J.Todd Tuggle, Yen Tran, Diem Tran, Ana Tassin, Paul Amon, Wilson Wang, Wei Wang, Edward Enright, Kimberly Stecker, Eden Estepa-Sabal, Barbara Smith, Jerry Younger, Ulysses Balis, James Michaelson, Atul Bhan, Karleen Habin, Thomas M Baer, Joan Brugge, Daniel A Haber, Mark G Erlander, and Dennis C Sgroi. A two-gene expression ratio predicts clinical outcome in breast cancer patients treated with tamoxifen. _Cancer Cell_ , 5(6):607–616, 2004. ISSN 15356108. doi: 10.1016/j.ccr.2004.05.015. * Ndiaye et al. [2016a] Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, and Joseph Salmon. GAP Safe Screening Rules for Sparse-Group Lasso. In _Advances in Neural Information Processing Systems_ , volume 29. Curran Associates, Inc., 2016a. * Ndiaye et al. [2016b] Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, and Joseph Salmon. Gap Safe screening rules for sparsity enforcing penalties. _Journal of Machine Learning Research_ , 18, 2016b. ISSN 15337928. * Negrinho and Martins [2014] Renato Negrinho and André F T Martins. Orbit regularization. In _Proceedings of the 27th International Conference on Neural Information Processing Systems_ , volume 2, pages 3221–3229. MIT Press, 2014. * Nomura [2020] Shunichi Nomura. An Exact Solution Path Algorithm for SLOPE and Quasi-Spherical OSCAR. _arXiv preprint arXiv:2010.15511_ , 2020. * Ogawa et al. [2013] Kohei Ogawa, Yoshiki Suzuki, and Ichiro Takeuchi. Safe screening of non-support vectors in pathwise svm computation. In _Proceedings of the 30th International Conference on Machine Learning_ , pages 1382–1390. PMLR, 2013. * Pedregosa and Gidel [2018] Fabian Pedregosa and Gauthier Gidel. Adaptive three operator splitting. In _Proceedings of the 35th International Conference on Machine Learning_ , volume 80 of _Proceedings of Machine Learning Research_ , pages 4085–4094. PMLR, 2018. * Schneider and Tardivel [2022] Ulrike Schneider and Patrick Tardivel. The Geometry of Uniqueness, Sparsity and Clustering in Penalized Estimation. _Journal of Machine Learning Research_ , 23:1–36, 2022. * Shibagaki et al. [2016] Atsushi Shibagaki, Masayuki Karasuyama, Kohei Hatano, and Ichiro Takeuchi. Simultaneous safe screening of features and samples in doubly sparse modeling. In _Proceedings of The 33rd International Conference on Machine Learning_ , volume 48 of _Proceedings of Machine Learning Research_ , pages 1577–1586. PMLR, 2016. * Simon et al. [2013] Noah Simon, Jerome Friedman, Trevor Hastie, and Robert Tibshirani. A Sparse-Group Lasso. _Journal of Computational and Graphical Statistics_ , 22(2):231–245, 2013. ISSN 1061-8600. doi: 10.1080/10618600.2012.681250. * Tibshirani [1996] Robert Tibshirani. Regression Shrinkage and Selection Via the Lasso. _Journal of the Royal Statistical Society: Series B (Methodological)_ , 58(1):267–288, 1996. 
ISSN 00359246. doi: 10.1111/j.2517-6161.1996.tb02080.x. * Tibshirani et al. [2010] Robert Tibshirani, Jacob Bien, Jerome Friedman, Trevor Hastie, Noah Simon, Jonathan Taylor, and Ryan J. Tibshirani. Strong rules for discarding predictors in lasso-type problems. _Journal of the Royal Statistical Society. Series B: Statistical Methodology_ , 74(2):245–266, 2010. ISSN 13697412. doi: 10.1111/j.1467-9868.2011.01004.x. * Wang and Breheny [2022] Chuyi Wang and Patrick Breheny. Adaptive hybrid screening for efficient lasso optimization. _Journal of Statistical Computation and Simulation_ , 92(11):2233–2256, 2022. doi: 10.1080/00949655.2021.2025376. * Wang and Ye [2014] Jie Wang and Jieping Ye. Two-Layer Feature Reduction for Sparse-Group Lasso via Decomposition of Convex Sets. _Advances in Neural Information Processing Systems_ , 3:2132–2140, 2014. ISSN 10495258. * Wang et al. [2013] Jie Wang, Jiayu Zhou, Peter Wonka, and Jieping Ye. Lasso screening rules via dual Polytope Projection. In _Proceedings of the 26th International Conference on Neural Information Processing Systems_ , volume 1, pages 1070–1078. Curran Associates Inc., 2013. * Zeng and Figueiredo [2014a] Xiangrong Zeng and Mário A. T. Figueiredo. Decreasing weighted sorted $\ell_{1}$ regularization. _IEEE Signal Processing Letters_ , 21:1240–1244, 2014a. * Zeng and Figueiredo [2014b] Xiangrong Zeng and Mário A T Figueiredo. The atomic norm formulation of OSCAR regularization with application to the Frank-Wolfe algorithm. In _2014 22nd European Signal Processing Conference (EUSIPCO)_ , pages 780–784, 2014b. * Zeng et al. [2021] Yaohui Zeng, Tianbao Yang, and Patrick Breheny. Hybrid safe–strong rules for efficient optimization in lasso-type problems. _Computational Statistics & Data Analysis_, 153:107063, 2021. ISSN 0167-9473. doi: https://doi.org/10.1016/j.csda.2020.107063. ## Appendix ### Appendix A Group SLOPE #### A.1 Penalty weights The penalty weights for gSLOPE were derived to provide group FDR-control under orthogonal designs [6]. For the FDR-control parameter $q_{g}\in(0,1)$, they are given by (where the indexing corresponds to the sorted groups) $w_{i}^{\text{max}}=\max_{j=1,\dots,m}\left\\{\frac{1}{\sqrt{p_{j}}}F^{-1}_{\chi_{p_{j}}}(1-q_{g}i/m)\right\\},\;\text{for}\;i=1,\dots,m,$ where $F_{\chi_{p_{j}}}$ is the cumulative distribution function of a $\chi$ distribution with $p_{j}$ degrees of freedom. A relaxation to this sequence is applied in [6], to give $w_{i}^{\text{mean}}=\overline{F}^{-1}_{\chi_{p_{j}}}(1-q_{g}i/m),\;\text{where}\;\overline{F}_{\chi_{p_{j}}}(x):=\frac{1}{m}\sum_{j=1}^{m}F_{\chi_{p_{j}}}(\sqrt{p_{j}}x).$ (9) The mean sequence weights defined in Equation 9 are used for all gSLOPE numerical simulations in this manuscript (shown in Figure A1). Figure A1: The gSLOPE weights, $w$, shown for Figure 4 for $p=500,m=100,q_{g}=0.05$. #### A.2 Theory ###### Proof of Theorem 3.1. The proof is similar to that of Theorem 2.7 in [6], where the subdifferential of gSLOPE is derived under equal groups. It is derived here under more general terms. The subdifferential needs to be derived under two cases: 1. 1. Inactive groups, $\mathcal{G}_{\mathcal{Z}}$. 2. 2. Active groups, $\mathcal{G}_{\mathcal{A}}$. Case 1: For inactive groups, we consider the subdifferential at zero. 
The subdifferential of a norm at zero is given by the unit ball of the dual norm [27],

$\partial J_{\text{gslope}}(\mathbf{0};w)=\mathbf{B}_{J_{\text{gslope}}^{*}(\mathbf{0};w)}[0,1]=\\{x:J_{\text{gslope}}^{*}(x;w)\leq 1\\}.$

The dual norm for gSLOPE is given by [6]

$J_{\text{gslope}}^{*}(x;w)=J_{\text{slope}}^{*}([x]_{\mathcal{G},-0.5}).$

Hence, the dual norm unit ball is

$\mathbf{B}_{J_{\text{gslope}}^{*}(\mathbf{0};w)}[0,1]=\\{x:[x]_{\mathcal{G},-0.5}\in\mathbf{B}_{J_{\text{slope}}^{*}(\mathbf{0};w)}[0,1]\\},$

where $\mathbf{B}_{J_{\text{slope}}^{*}(\mathbf{0};w)}[0,1]=\\{x\in\mathbb{R}^{m}:\text{cumsum}(|x|_{\downarrow}-w)\preceq\mathbf{0}\\}$ is the unit ball of the dual norm to $J_{\text{slope}}$ [4]. Using this, the subdifferential at zero for the inactive groups, $\mathcal{Z}$, is given by

$\partial J_{\text{gslope}}(\mathbf{0};w_{\mathcal{Z}})=\\{x\in\mathbb{R}^{\operatorname{card}(\mathcal{G}_{\mathcal{Z}})}:[x]_{\mathcal{G}_{\mathcal{Z}},-0.5}\in\partial J_{\text{slope}}(\mathbf{0};w_{\mathcal{Z}})\\}.$

Case 2: Without loss of generality, denote the group index $s$ such that $\|\beta^{(g)}\|_{2}=0$ for $g>s$ (inactive groups) and $\|\beta^{(g)}\|_{2}\neq 0$ for $g\leq s$ (active groups). In other words, group $g$ is active if $g\leq s$. Define a set $D=\\{d\in\mathbb{R}^{p}:\|\beta^{(1)}+d^{(1)}\|_{2}>\ldots>\|\beta^{(s)}+d^{(s)}\|_{2},\|\beta^{(s)}+d^{(s)}\|_{2}>\|d^{(g)}\|_{2},g>s\\}$. By definition of a subdifferential, if $x\in\partial J_{\text{gslope}}(\beta;w)$, then, for all $d\in D$,

$\sum_{g=1}^{m}\sqrt{p_{g}}w_{g}\|\beta^{(g)}+d^{(g)}\|_{2}\geq\sum_{g=1}^{m}\sqrt{p_{g}}w_{g}\|\beta^{(g)}\|_{2}+x^{\top}d.$

Splitting this up according to whether the groups are active (whether $g\leq s$):

$\sum_{g=1}^{s}\sqrt{p_{g}}w_{g}\|\beta^{(g)}+d^{(g)}\|_{2}+\sum_{g=s+1}^{m}\sqrt{p_{g}}w_{g}\|d^{(g)}\|_{2}\geq\sum_{g=1}^{s}\sqrt{p_{g}}w_{g}\|\beta^{(g)}\|_{2}+\sum_{g=1}^{s}x^{(g)\top}d^{(g)}+\sum_{g=s+1}^{m}x^{(g)\top}d^{(g)}.$ (10)

Now, for $g\in\mathcal{G}_{\mathcal{A}}$, define a new set $D_{g}=\\{d\in D:d^{(j)}\equiv\mathbf{0},j\neq g\\}$. Taking $d\in D_{g}$, Equation 10 becomes

$\sqrt{p_{g}}w_{g}\|\beta^{(g)}+d^{(g)}\|_{2}\geq\sqrt{p_{g}}w_{g}\|\beta^{(g)}\|_{2}+x^{(g)\top}d^{(g)}.$

Since the set $\\{d^{(g)}:d\in D_{g}\\}$ is open in $\mathbb{R}^{p_{g}}$ and contains zero, by Corollary G.1 in [6], it follows that $x^{(g)}\in\partial f_{g}(\beta^{(g)})$ for $f_{g}:\mathbb{R}^{p_{g}}\rightarrow\mathbb{R},f_{g}(x)=w_{g}\sqrt{p_{g}}\|x\|_{2}$. Now, for $g\leq s$, $f_{g}$ is differentiable in $\beta^{(g)}$, giving

$x^{(g)}=w_{g}\sqrt{p_{g}}\frac{\beta^{(g)}}{\|\beta^{(g)}\|_{2}},$

proving the result. ∎

###### Proof of Proposition 3.2.

Suppose we have $\mathcal{B}\neq\emptyset$ after running the algorithm. Then, plugging in $h(\lambda_{k+1})=([\nabla f(\hat{\beta}(\lambda_{k+1}))]_{\mathcal{G},-0.5})_{\downarrow}$ gives

$\text{cumsum}\Bigl{(}\bigl{(}([\nabla f(\hat{\beta}(\lambda_{k+1}))]_{\mathcal{G},-0.5})_{\downarrow}\bigr{)}_{\mathcal{B}}-\lambda_{k+1}w_{\mathcal{B}}\Bigr{)}\prec\mathbf{0},$

so that by the gSLOPE subdifferential (Theorem 3.1) all groups in $\mathcal{B}$ are inactive. This is valid by the KKT conditions (Equation 2), as we know that $-\nabla f(\hat{\beta}(\lambda_{k+1}))\in\partial J_{\text{gslope}}(\mathbf{0};w)$. Hence, $\mathcal{S}_{g}(\lambda_{k+1})$ will contain the active set $\mathcal{A}_{g}(\lambda_{k+1})$. ∎

###### Proof of Proposition 3.3.
Since $\text{cumsum}(y)\succeq\text{cumsum}(x)\iff y\succeq x$ [17], we only need to show that, for each group $g$, $|h_{g}(\lambda_{k+1})|\leq|h_{g}(\lambda_{k})|+\lambda_{k}w_{g}-\lambda_{k+1}w_{g}.$ Applying the reverse triangle inequality to the Lipschitz assumption gives

$|h_{g}(\lambda_{k+1})|-|h_{g}(\lambda_{k})|\leq\left|h_{g}(\lambda_{k+1})-h_{g}(\lambda_{k})\right|\leq\lambda_{k}w_{g}-\lambda_{k+1}w_{g}\implies|h_{g}(\lambda_{k+1})|\leq|h_{g}(\lambda_{k})|+\lambda_{k}w_{g}-\lambda_{k+1}w_{g},$

proving the result. ∎

#### A.3 Path start derivation

The aim is to find the value of $\lambda$ at which the first group enters the model. When all features are zero, the gSLOPE KKT conditions (Equation 2) are

$\mathbf{0}\in\nabla f(\mathbf{0})+\lambda\partial J_{\text{gslope}}(\mathbf{0};w).$

This is satisfied when

$[\nabla f(\mathbf{0})]_{\mathcal{G},-0.5}\in\partial J_{\text{slope}}(\mathbf{0};\lambda w)\implies\text{cumsum}\bigl{(}([\nabla f(\mathbf{0})]_{\mathcal{G},-0.5})_{\downarrow}-\lambda w\bigr{)}\preceq\mathbf{0}.$

Rearranging this gives

$\lambda\succeq\text{cumsum}\bigl{(}([\nabla f(\mathbf{0})]_{\mathcal{G},-0.5})_{\downarrow}\bigr{)}\oslash\text{cumsum}(w),$

where $\oslash$ denotes Hadamard division. The smallest $\lambda$ for which this holds is therefore

$\lambda_{1}=\max\left\\{\text{cumsum}\bigl{(}([\nabla f(\mathbf{0})]_{\mathcal{G},-0.5})_{\downarrow}\bigr{)}\oslash\text{cumsum}(w)\right\\}.$

This can be verified by noting that $\lambda_{1}=J^{*}_{\text{gslope}}(\nabla f(\mathbf{0});w)$ [22]. Now, $J^{*}_{\text{gslope}}(x;w)=J^{*}_{\text{slope}}([x]_{\mathcal{G},-0.5};w)$ [6]. The dual norm of SLOPE is given by [23]

$J^{*}_{\text{slope}}(x;w)=\max\left\\{\text{cumsum}(|x|_{\downarrow})\oslash\text{cumsum}(w)\right\\}.$

Therefore, $\lambda_{1}$ is as before.

### Appendix B Sparse-group SLOPE

#### B.1 Penalty weights

The penalty weights for SGS provide variable and group FDR-control simultaneously, under orthogonal designs [14]. They are given by (where the indexing corresponds to the sorted variables/groups)

$v_{i}^{\text{max}}=\max_{j=1,\dots,m}\left\\{\frac{1}{\alpha}F_{\mathcal{N}}^{-1}\left(1-\frac{q_{v}i}{2p}\right)-\frac{1}{3\alpha}(1-\alpha)a_{j}w_{j}\right\\},\;i=1,\dots,p,$ (11)

$w_{i}^{\text{max}}=\max_{j=1,\dots,m}\left\\{\frac{F_{\text{FN}}^{-1}(1-\frac{q_{g}i}{m})-\alpha\sum_{k\in\mathcal{G}_{j}}v_{k}}{(1-\alpha)p_{j}}\right\\},\;i=1,\dots,m,$ (12)

where $F_{\chi_{p_{j}}}$ is the cumulative distribution function of a $\chi$ distribution with $p_{j}$ degrees of freedom, $F_{\mathcal{N}}$ is the cumulative distribution function of a folded Gaussian distribution, and $a_{j}$ is a quantity that requires estimation. The estimator $\hat{a}_{j}=\lfloor\alpha p_{j}\rfloor$ is proposed in [14]. As with gSLOPE (Appendix A.1), a relaxation is possible, giving the weights

$v_{i}^{\text{mean}}=\overline{F}_{\mathcal{N}}^{-1}\left(1-\frac{q_{v}i}{2p}\right),\;\text{where}\;\overline{F}_{\mathcal{N}}(x):=\frac{1}{m}\sum_{j=1}^{m}F_{\mathcal{N}}\left(\alpha x+\frac{1}{3}(1-\alpha)a_{j}w_{j}\right),$ (13)

$w_{i}^{\text{mean}}=\overline{F}_{\text{FN}}^{-1}\left(1-\frac{q_{g}i}{p}\right),\;\text{where}\;\overline{F}_{\text{FN}}(x):=\frac{1}{m}\sum_{j=1}^{m}F_{\text{FN}}\left((1-\alpha)p_{j}x+\alpha\sum_{k\in\mathcal{G}_{j}}v_{k}\right).$ (14)

The mean weights defined in Equations 13 and 14 are used for all SGS numerical simulations in this manuscript (shown in Figure A2).
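As a concrete illustration of how such FDR-calibrated weight sequences are evaluated numerically, the following Python sketch computes the relaxed gSLOPE sequence $w^{\text{mean}}$ of Equation 9 by inverting the averaged $\chi$ cumulative distribution function with a root-finding step; the SGS sequences of Equations 13 and 14 can be computed analogously once the relevant distribution functions are specified. This is a sketch under stated assumptions (SciPy's `chi` distribution and `brentq` root finder with a doubling bracket), not the implementation used for the experiments in this manuscript.

```python
import numpy as np
from scipy.stats import chi
from scipy.optimize import brentq

def gslope_mean_weights(group_sizes, q_g):
    """Relaxed gSLOPE weights w^mean (Equation 9): invert the averaged chi CDF
    F_bar(x) = (1/m) * sum_j F_{chi_{p_j}}(sqrt(p_j) * x) at levels 1 - q_g*i/m."""
    p = np.asarray(group_sizes, dtype=float)
    m = len(p)

    def F_bar(x):
        return np.mean(chi.cdf(np.sqrt(p) * x, df=p))

    weights = np.empty(m)
    for i in range(1, m + 1):
        target = 1.0 - q_g * i / m
        hi = 1.0
        while F_bar(hi) < target:      # expand the bracket until it contains the root
            hi *= 2.0
        weights[i - 1] = brentq(lambda x: F_bar(x) - target, 0.0, hi)
    return weights                      # non-increasing in i, as required

# Example with hypothetical group sizes: w = gslope_mean_weights([3, 5, 10, 25], q_g=0.05)
```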
Figure A2: The SGS weights, $(v,w)$, shown for the simulation setting of Figure 4 with $p=500,m=100,q_{v}=0.05,q_{g}=0.05,\alpha=0.95$.

#### B.2 Derivation of soft thresholding operator

To determine the form of the quantity $\partial J_{\text{slope}}(\mathbf{0};v)$, consider that for Equation 7 to be satisfied, the term inside the $[\cdot]$ operator needs to be as small as possible. Now,

$\partial J_{\text{slope}}(\mathbf{0};v)=\\{y:\text{cumsum}(|y|)\preceq\text{cumsum}(v)\\}.$

Note that $\text{cumsum}(y)\preceq\text{cumsum}(x)\iff y\preceq x$. We consider the cases:

1. $\nabla_{i}f(\beta)>\lambda\alpha v_{i}$: choose $y_{i}=-v_{i}$.
2. $\nabla_{i}f(\beta)<-\lambda\alpha v_{i}$: choose $y_{i}=v_{i}$.
3. $\nabla_{i}f(\beta)\in[-\lambda\alpha v_{i},\lambda\alpha v_{i}]$: choose $y_{i}=\nabla_{i}f(\beta)/\alpha\lambda$.

Hence, the term becomes

$S(\nabla f(\beta),\lambda\alpha):=\text{sign}(\nabla f(\beta))(|\nabla f(\beta)|-\lambda\alpha)_{+},$

which is the soft thresholding operator.

#### B.3 Theory

###### Proposition B.1 (Strong group screening rule for SGS).

Let $\tilde{h}(\lambda):=([S(\nabla f(\beta),\lambda\alpha)]_{\mathcal{G},-0.5})_{\downarrow}$. Then taking $c=\tilde{h}(\lambda_{k+1})$ and $\phi=(1-\alpha)\lambda_{k+1}w$ as inputs for Algorithm A1 returns a superset $\mathcal{S}_{g}(\lambda_{k+1})$ of the active set $\mathcal{A}_{g}(\lambda_{k+1})$.

###### Proof of Proposition B.1.

The proof is similar to that of Proposition 3.2. Suppose we have $\mathcal{B}\neq\emptyset$ after running the algorithm. Then,

$\text{cumsum}(\tilde{h}_{\mathcal{B}}(\lambda_{k+1})-\lambda_{k+1}(1-\alpha)w_{\mathcal{B}})\prec\mathbf{0}\implies\text{cumsum}\Bigl{(}\bigl{(}([S(\nabla f(\beta),\lambda_{k+1}\alpha)]_{\mathcal{G},-0.5})_{\downarrow}\bigr{)}_{\mathcal{B}}-\lambda_{k+1}(1-\alpha)w_{\mathcal{B}}\Bigr{)}\prec\mathbf{0},$

so that by the SGS subdifferential (Equation 7), all groups in $\mathcal{B}$ are inactive. Hence, $\mathcal{S}_{g}(\lambda_{k+1})$ will contain the active set $\mathcal{A}_{g}(\lambda_{k+1})$. ∎

###### Proof of Proposition 4.1.

The proof is identical to that of Proposition 3.3, replacing $h_{g}(\cdot)$ with $\tilde{h}_{g}(\cdot)$ and $\lambda_{k+1}w$ by $\lambda_{k+1}(1-\alpha)w$. ∎

###### Proposition B.2 (Strong variable screening rule for SGS).

Let $\bar{h}(\lambda)=|(\nabla f(\hat{\beta}(\lambda)))|_{\downarrow}$. Then taking $c=\bar{h}(\lambda_{k+1})$ and $\phi=\lambda_{k+1}\alpha v$ for only the variables contained in the groups in $\mathcal{A}_{g}(\lambda_{k+1})$ in Algorithm A1 returns a superset $\mathcal{S}_{v}(\lambda_{k+1})$ of the active set $\mathcal{A}_{v}(\lambda_{k+1})$.

###### Proof.

Suppose we have $\mathcal{B}\neq\emptyset$ after running the algorithm. Then, we have

$\text{cumsum}(\bar{h}_{\mathcal{B}}(\lambda_{k+1})-\lambda_{k+1}\alpha v_{\mathcal{B}})\prec\mathbf{0}\implies\text{cumsum}\Bigl{(}\bigl{(}|\nabla f(\hat{\beta}(\lambda_{k+1}))|_{\downarrow}\bigr{)}_{\mathcal{B}}-\lambda_{k+1}\alpha v_{\mathcal{B}}\Bigr{)}\prec\mathbf{0},$ (15)

so that by the SGS subdifferential for non-zero groups (Equation 8), all variables in $\mathcal{B}$ are inactive. Hence, $\mathcal{S}_{v}(\lambda_{k+1})$ will contain the active set $\mathcal{A}_{v}(\lambda_{k+1})$. ∎

###### Proof of Proposition 4.2.
The proof is identical to that of Proposition 3.3, replacing $h_{g}(\cdot)$ with $\bar{h}_{g}(\cdot)$, $\lambda_{k+1}w$ with $\lambda_{k+1}\alpha v$, and considering only variables in the groups contained in $\mathcal{A}_{g}(\lambda_{k+1})$. ∎

#### B.4 Path start derivation

The aim is to find the value of $\lambda$ at which the first variable enters the model. When all features are zero, the SGS KKT conditions (Equation 6) are

$-\nabla f(\mathbf{0})\in\lambda(1-\alpha)\partial J_{\text{gslope}}(\mathbf{0};w)+\lambda\alpha\partial J_{\text{slope}}(\mathbf{0};v)\implies-\frac{1}{\lambda}\nabla f(\mathbf{0})-(1-\alpha)\partial J_{\text{gslope}}(\mathbf{0};w)\in\alpha\partial J_{\text{slope}}(\mathbf{0};v)\implies\text{cumsum}\left(\left|-\frac{1}{\lambda}\nabla f(\mathbf{0})-(1-\alpha)\partial J_{\text{gslope}}(\mathbf{0};w)\right|_{\downarrow}-\alpha v\right)\preceq\mathbf{0}.$

By the reverse triangle inequality and ordering of the group weights,

$\frac{1}{\lambda}\text{cumsum}\left(|\nabla f(\mathbf{0})|_{\downarrow}\right)\preceq\text{cumsum}((1-\alpha)|\partial J_{\text{gslope}}(\mathbf{0};w)|-\alpha v)\implies\lambda\succeq\text{cumsum}(|\nabla f(\mathbf{0})|_{\downarrow})\oslash\text{cumsum}((1-\alpha)|\partial J_{\text{gslope}}(\mathbf{0};w)|-\alpha v).$

Now, note that for $x\in\partial J_{\text{gslope}}(\mathbf{0};w)$, it holds that

$\text{cumsum}([x]_{\mathcal{G},-0.5}-w)\preceq\mathbf{0}\implies[x]_{\mathcal{G},-0.5}\preceq w\implies\|x^{(g)}\|_{2}\leq\sqrt{p_{g}}w_{g},\forall g\in\mathcal{G}.$

This is satisfied at the upper limit $x=\tau\omega$, where $\tau$ and $\omega$ are the vectors of group sizes ($\sqrt{p_{g}}$) and penalty weights ($w_{g}$) expanded to $p$ dimensions, so that each variable within the same group is assigned the same value. Hence,

$\lambda_{1}=\max\left\\{\text{cumsum}(|\nabla f(\mathbf{0})|_{\downarrow})\oslash\text{cumsum}((1-\alpha)\tau\omega-\alpha v)\right\\}.$

### Appendix C SLOPE Algorithm

Algorithm A1 Cumsum algorithm from [17]
Input: $c\in\mathbb{R}^{p},\phi\in\mathbb{R}^{p}$, where $\phi_{1}\geq\cdots\geq\phi_{p}\geq 0$
$\mathcal{S},\mathcal{B}\leftarrow\varnothing$
for $i=1$ to $p$ do
 $\mathcal{B}\leftarrow\mathcal{B}\cup\\{i\\}$
 if $\text{cumsum}(c_{\mathcal{B}}-\phi_{\mathcal{B}})\geq 0$ then
  $\mathcal{S}\leftarrow\mathcal{S}\cup\mathcal{B}$
  $\mathcal{B}\leftarrow\varnothing$
 end if
end for
Output: $\mathcal{S}$

### Appendix D Screening rule framework

#### D.1 Group SLOPE algorithm

The following is performed for $k=1,\ldots,l-1$:

1. Set $\mathcal{E}_{g}=\mathcal{S}_{g}(\lambda_{k+1})\cup\mathcal{A}_{g}(\lambda_{k})$, where $\mathcal{S}_{g}(\lambda_{k+1})$ is obtained using Proposition 3.3.
2. Compute $\hat{\beta}(\lambda_{k+1})$ by Equation 1 with the gSLOPE norm using only the groups in $\mathcal{E}_{g}$. For any groups not in $\mathcal{E}_{g}$, $\hat{\beta}(\lambda_{k+1})$ is set to zero.
3. Check the KKT conditions (Equation 2) for all groups at this solution.
4. If there are no violations, we are done and keep $\hat{\beta}(\lambda_{k+1})$. Otherwise, add the violating groups into $\mathcal{E}_{g}$ and return to Step 2.

#### D.2 SGS algorithm

The following is performed for $k=1,\ldots,l-1$:

1. Group screen step: Calculate $\mathcal{S}_{g}(\lambda_{k+1})$ using Proposition 4.1.
2.
Variable screen step: Set $\mathcal{E}_{v}=\mathcal{S}_{v}(\lambda_{k+1})\cup\mathcal{A}_{v}(\lambda_{k})$, where $\mathcal{S}_{v}(\lambda_{k+1})$ is obtained using Proposition 4.2 with only the groups in $\mathcal{S}_{g}(\lambda_{k+1})$.
3. Compute $\hat{\beta}(\lambda_{k+1})$ by Equation 1 with the SGS norm using only the features in $\mathcal{E}_{v}$. For features not in $\mathcal{E}_{v}$, $\hat{\beta}(\lambda_{k+1})$ is set to zero.
4. Check the KKT conditions (Equation 6) for all features at this solution.
5. If there are no violations, we are done and keep $\hat{\beta}(\lambda_{k+1})$; otherwise, add the violating variables into $\mathcal{E}_{v}$ and return to Step 3.

### Appendix E Group-based OSCAR

This section presents an extension of the proposed screening rules for group-based OSCAR models. First, the OSCAR models are defined; subsequently, the performance of the screening rules on synthetic data is presented.

#### E.1 Model definition

The Ordered Weighted $\ell_{1}$ (OWL) framework is defined as [35]

$\hat{\beta}=\operatorname*{arg\,min}_{\beta}\left\\{f(\beta)+\lambda J_{\text{owl}}(\beta;v)\right\\},$

where $J_{\text{owl}}(\beta;v)=\sum_{i=1}^{p}v_{i}|\beta|_{(i)}$, $|\beta|_{(1)}\geq\ldots\geq|\beta|_{(p)}$, and $v$ are non-negative non-increasing weights. SLOPE is a special case of OWL where the weights are taken to be the Benjamini-Hochberg critical values [4]. Octagonal Shrinkage and Clustering Algorithm for Regression (OSCAR) [5] is a further special case of OWL (often referred to as OWL with linear decay) where, for a variable $i$, the weights are taken to be $v_{i}=\sigma_{1}+\sigma_{2}(p-i)$, and $\sigma_{1},\sigma_{2}$ are constants to be set. In Bao et al. [3] they are set to $\sigma_{1}=d_{i}\|\mathbf{X}^{\top}y\|_{\infty},\sigma_{2}=\sigma_{1}/p$, where $d_{i}=i\times e^{-2}$. Group OSCAR (gOSCAR) and Sparse-group OSCAR (SGO) are defined using the frameworks provided by gSLOPE [6] and SGS [14], respectively, but instead using the weights (for a variable $i$ and a group $g$)

$v_{i}=\sigma_{1}+\sigma_{2}(p-i),\;w_{g}=\sigma_{1}+\sigma_{3}(m-g),\;\sigma_{3}=\sigma_{1}/m,$ (16)

with the weights visualised in Figure A3.

Figure A3: The SGO weights, $(v,w)$, shown for the simulation setting of Figure 4 with $p=500,m=100,q_{v}=0.05,q_{g}=0.05,\alpha=0.95$.

#### E.2 Results

Observations and conclusions similar to those made for the screening rules of gSLOPE and SGS hold for gOSCAR and SGO (Figures A4–A8). Figure A4 illustrates the effectiveness of the bi-level screening of SGO, similar to the effectiveness observed for SGS. Figures A5–A6 showcase the efficiency of the screening rules on the proportion of the selected groups/variables. The screening rules are found to be effective across different data characteristics, with the running time of the models significantly decreasing (Figure A7). Similarly to SGS, KKT violations for SGO are more common than those observed for gOSCAR (Figure A8). This is due to the additional assumptions made by the extra screening layer of SGO and SGS. Similarly to Figure 6(a), the shape of the increasing number of KKT violations mirrors the log-linear shape of the regularization path.

Figure A4: The proportion of variables in $\mathcal{S}_{v}$ relative to the full input for SGO, shown for group and bi-level screening, plotted as a function of the regularization path, applied to the synthetic data (Section 5.1). The data are generated under a linear model for $p=500,5000$.
The results are averaged over 100 repetitions and 95% confidence intervals are shown (the SGO equivalent of Figure 1 (a)).

Figure A5: The number of groups/variables in $\mathcal{E},\mathcal{A}$ for both gOSCAR and SGO as a function of the regularization path for the linear model with $p=2750,\rho=0.6,m=197$. The results are averaged over $100$ repetitions, with the shaded regions corresponding to $95\%$ confidence intervals (the gOSCAR/SGO equivalent of Figure 2).

Figure A6: The proportion of groups/variables in $\mathcal{E},\mathcal{A}$, relative to the full input, shown for gOSCAR and SGO. This is shown as a function of the correlation ($\rho$), averaged over all cases of the input dimension ($p$), with $100$ repetitions for each $p$, for both linear and logistic models, with standard errors shown (the gOSCAR/SGO equivalent of Figure 3).

Figure A7: Runtime (in seconds) for fitting $50$ models along a path, shown for screening against no screening as a function of $p$, broken down into different correlation cases, for the linear model. The results are averaged over $100$ repetitions, with standard errors shown (the OSCAR equivalent of Figure 4).

Figure A8: The proportion of KKT violations relative to the full input space, as a function of the regularization path. Group violations for gOSCAR and variable violations for SGO, fitted to linear models, averaged over all cases of $p$ and $\rho$ (the gOSCAR/SGO equivalent of Figure 6(a)).

### Appendix F Results

#### F.1 Computational information

The simulated experiments were executed on a high-performance computing cluster (x86-64 Linux GNU) and the real data analysis was conducted on an Apple MacBook Air (M1, 8GB).

#### F.2 Model Comparison

This section presents the accuracy of the models with and without screening, by comparing the $\ell_{2}$ distances observed between the screened and non-screened fitted values.

###### Synthetic data.

For the linear model, the maximum $\ell_{2}$ distances observed between the screened and non-screened fitted values were of order $10^{-5}$ for gSLOPE and $10^{-8}$ for SGS (Table F.3). Across the different cases, 98000 models were fit in total for each approach (excluding the models for $\lambda_{1}$, where no screening is applied). Of these model fits, there were no instances for gSLOPE where $\mathcal{E}$ was not a superset of $\mathcal{A}$. There was only one instance (out of the 98000) in which this occurred for SGS, where $\mathcal{E}$ was missing a single variable contained in $\mathcal{A}$ (which had a non-screened fitted value of $\hat{\beta}=-0.004$). For the logistic model, the maximum $\ell_{2}$ distances observed between the screened and non-screened fitted values were of order $10^{-8}$ for both gSLOPE and SGS (Table F.3.2). Across the different cases, 98000 models were fit in total for each approach (excluding the models for $\lambda_{1}$, where no screening is applied). Of these model fits, there were no instances for gSLOPE or SGS where $\mathcal{E}$ was not a superset of $\mathcal{A}$.

###### Real data.

In the real data analysis, the estimated coefficients with and without screening were very close to each other (with $\ell_{2}$ distances of order $10^{-8}$) for both SGS and gSLOPE (Table F.4). No instances occurred where $\mathcal{E}$ was not a superset of $\mathcal{A}$.

#### F.3 Additional results from the simulation study

Table A1: Variable screening metrics for SGS using linear and logistic models for the simulation study presented in Section 5.1.
The number of variables in $\mathcal{A}_{v},\mathcal{S}_{v},\mathcal{E}_{v}$, and $\mathcal{K}_{v}$ are shown, averaged across all $20$ cases of the correlation ($\rho$) and $p$. Standard errors are shown.

Method | Type | $\operatorname{card}(\mathcal{A}_{v})$ | $\operatorname{card}(\mathcal{S}_{v})$ | $\operatorname{card}(\mathcal{E}_{v})$ | $\operatorname{card}(\mathcal{K}_{v})$
---|---|---|---|---|---
SGS | Linear | $179\pm 3$ | $313\pm 5$ | $363\pm 6$ | $51\pm 1$
SGS | Logistic | $230\pm 3$ | $405\pm 5$ | $472\pm 6$ | $66\pm 1$

Table A2: General and group screening metrics for SGS and gSLOPE using linear and logistic models for the simulation study presented in Section 5.1. General metrics: the runtime (in seconds) for screening against no screening, the number of fitting iterations for screening against no screening, and the $\ell_{2}$ distance between the fitted values obtained with screening and no screening. Group screening metrics: the number of groups in $\mathcal{A}_{g},\mathcal{S}_{g},\mathcal{E}_{g}$, and $\mathcal{K}_{g}$. The results are averaged across all $20$ cases of the correlation ($\rho$) and $p$. Standard errors are shown.

Method | Type | Runtime (s) (screen) | Runtime (s) (no screen) | $\operatorname{card}(\mathcal{A}_{g})$ | $\operatorname{card}(\mathcal{S}_{g})$ | $\operatorname{card}(\mathcal{E}_{g})$ | $\operatorname{card}(\mathcal{K}_{g})$ | Num it (screen) | Num it (no screen) | $\ell_{2}$ dist to no screen
---|---|---|---|---|---|---|---|---|---|---
gSLOPE | Linear | $1016\pm 21$ | $1623\pm 27$ | $55\pm 1$ | $76\pm 1$ | $76\pm 1$ | $0.006\pm 0.004$ | $333\pm 6$ | $351\pm 6$ | $2\times 10^{-6}\pm 1\times 10^{-6}$
gSLOPE | Logistic | $814\pm 8$ | $1409\pm 11$ | $71\pm 1$ | $97\pm 1$ | $97\pm 1$ | $0.014\pm 0.014$ | $78\pm 1$ | $83\pm 1$ | $1\times 10^{-8}\pm 1\times 10^{-8}$
SGS | Linear | $735\pm 15$ | $1830\pm 34$ | $61\pm 1$ | $84\pm 1$ | $91\pm 1$ | $21\pm 1$ | $91\pm 3$ | $708\pm 12$ | $2\times 10^{-9}\pm 3\times 10^{-9}$
SGS | Logistic | $407\pm 2$ | $859\pm 6$ | $84\pm 1$ | $107\pm 1$ | $118\pm 1$ | $22\pm 0.3$ | $7\pm 0.2$ | $51\pm 0.8$ | $4\times 10^{-9}\pm 3\times 10^{-10}$

##### F.3.1 Additional results for the linear model
(a) $p=500,\rho=0$ (b) $p=500,\rho=0.9$ (c) $p=5000,\rho=0$ (d) $p=5000,\rho=0.9$
Figure A9: The number of groups/variables in $\mathcal{E},\mathcal{A}$ as a function of the regularization path for the linear model with SGS and gSLOPE, shown for different values of the correlation ($\rho$) and $p$. The results are averaged over $100$ repetitions, with $95\%$ confidence intervals shown.
Table A3: Variable screening metrics for SGS using a linear model for the simulation study presented in Section 5.1. The number of variables in $\mathcal{A}_{v},\mathcal{S}_{v},\mathcal{E}_{v}$, and $\mathcal{K}_{v}$ are shown. The results are shown for different values of $p$, averaged across $\rho\in\{0,0.3,0.6,0.9\}$. Standard errors are shown.
Method | $p$ | $\operatorname{card}(\mathcal{A}_{v})$ | $\operatorname{card}(\mathcal{S}_{v})$ | $\operatorname{card}(\mathcal{E}_{v})$ | $\operatorname{card}(\mathcal{K}_{v})$
---|---|---|---|---|---
SGS | $500$ | $19\pm 1$ | $24\pm 1$ | $28\pm 1$ | $4\pm 0.2$
SGS | $1625$ | $83\pm 3$ | $138\pm 5$ | $161\pm 6$ | $22\pm 1$
SGS | $2750$ | $188\pm 7$ | $316\pm 10$ | $370\pm 12$ | $54\pm 2$
SGS | $3875$ | $268\pm 9$ | $470\pm 14$ | $548\pm 16$ | $78\pm 3$
SGS | $5000$ | $334\pm 10$ | $618\pm 17$ | $712\pm 20$ | $95\pm 3$

Table A4: General and group screening metrics for SGS and gSLOPE using linear models for the simulation study presented in Section 5.1. General metrics: the runtime (in seconds) for screening against no screening, the number of fitting iterations for screening against no screening, and the $\ell_{2}$ distance between the fitted values obtained with screening and no screening. Group screening metrics: the number of groups in $\mathcal{A}_{g},\mathcal{S}_{g},\mathcal{E}_{g}$, and $\mathcal{K}_{g}$. The results are shown for different values of $p$, averaged across $\rho\in\{0,0.3,0.6,0.9\}$. Standard errors are shown.

Method | $p$ | Runtime (s) (screen) | Runtime (s) (no screen) | $\operatorname{card}(\mathcal{A}_{g})$ | $\operatorname{card}(\mathcal{S}_{g})$ | $\operatorname{card}(\mathcal{E}_{g})$ | $\operatorname{card}(\mathcal{K}_{g})$ | Num it (screen) | Num it (no screen) | $\ell_{2}$ dist to no screen
---|---|---|---|---|---|---|---|---|---|---
gSLOPE | $500$ | $89\pm 1$ | $144\pm 1$ | $9\pm 0.2$ | $10\pm 0.3$ | $10\pm 0.3$ | $0.005\pm 0.005$ | $47\pm 1$ | $55\pm 1$ | $6\times 10^{-6}\pm 7\times 10^{-6}$
gSLOPE | $1625$ | $231\pm 5$ | $453\pm 5$ | $26\pm 1$ | $36\pm 1$ | $36\pm 1$ | $0.005\pm 0.006$ | $203\pm 6$ | $222\pm 6$ | $1\times 10^{-6}\pm 1\times 10^{-6}$
gSLOPE | $2750$ | $1061\pm 15$ | $2296\pm 25$ | $56\pm 2$ | $75\pm 2$ | $75\pm 2$ | $0.004\pm 0.005$ | $270\pm 9$ | $350\pm 9$ | $5\times 10^{-7}\pm 7\times 10^{-7}$
gSLOPE | $3875$ | $1765\pm 83$ | $2800\pm 113$ | $82\pm 3$ | $114\pm 3$ | $114\pm 3$ | $0.006\pm 0.008$ | $549\pm 19$ | $546\pm 18$ | $3\times 10^{-7}\pm 5\times 10^{-7}$
gSLOPE | $5000$ | $1937\pm 63$ | $2422\pm 66$ | $102\pm 3$ | $147\pm 4$ | $147\pm 4$ | $0.007\pm 0.012$ | $594\pm 21$ | $581\pm 19$ | $2\times 10^{-7}\pm 3\times 10^{-7}$
SGS | $500$ | $94\pm 1$ | $133\pm 2$ | $9\pm 0.2$ | $11\pm 0.3$ | $12\pm 0.3$ | $3\pm 0.2$ | $28\pm 1$ | $74\pm 3$ | $5\times 10^{-10}\pm 4\times 10^{-10}$
SGS | $1625$ | $416\pm 8$ | $1129\pm 19$ | $25\pm 1$ | $37\pm 1$ | $41\pm 1$ | $12\pm 0.3$ | $62\pm 7$ | $511\pm 21$ | $2\times 10^{-9}\pm 2\times 10^{-9}$
SGS | $2750$ | $639\pm 14$ | $2137\pm 47$ | $62\pm 2$ | $82\pm 2$ | $89\pm 3$ | $20\pm 1$ | $80\pm 6$ | $791\pm 28$ | $2\times 10^{-9}\pm 6\times 10^{-9}$
SGS | $3875$ | $939\pm 31$ | $2862\pm 96$ | $93\pm 3$ | $124\pm 3$ | $136\pm 4$ | $30\pm 1$ | $112\pm 8$ | $1049\pm 34$ | $5\times 10^{-9}\pm 1\times 10^{-8}$
SGS | $5000$ | $1586\pm 66$ | $2891\pm 128$ | $119\pm 4$ | $164\pm 3$ | $180\pm 4$ | $39\pm 1$ | $171\pm 11$ | $1118\pm 37$ | $2\times 10^{-9}\pm 4\times 10^{-9}$

##### F.3.2 Additional results for the logistic model
This section presents additional results for the logistic model. Similar trends to those observed for the linear model are seen.
Figure A10: The number of groups/variables in $\mathcal{E},\mathcal{A}$ as a function of the regularization path for the logistic model with $p=2750,\rho=0.6,m=197$, shown for gSLOPE and SGS. The results are averaged over $100$ repetitions, with $95\%$ confidence intervals shown. This figure is the equivalent of Figure 2 for the logistic model. (a) $p=500,\rho=0$ (b) $p=500,\rho=0.9$ (c) $p=5000,\rho=0$ (d) $p=5000,\rho=0.9$ Figure A11: The number of groups/variables in $\mathcal{E},\mathcal{A}$ as a function of the regularization path for the logistic model with SGS and gSLOPE, shown for different values of the correlation ($\rho$) and $p$. The results are averaged over $100$ repetitions, with $95\%$ confidence intervals shown. Figure A12: Runtime (in seconds) for screening against no screening as a function of $p$, broken down into different correlation cases, for the logistic model. The results are averaged over $100$ repetitions, with standard errors shown. Table A5: Variable screening metrics for SGS using a logistic model for the simulation study presented in Section 5.1. The number of variables in $\mathcal{A}_{v},\mathcal{S}_{v},\mathcal{E}_{v}$, and $\mathcal{K}_{v}$ are shown. The results are shown for different values of $p$, averaged across $\rho\in\\{0,0.3,0.6,0.9\\}$. Standard errors are shown. Method | $p$ | $\operatorname{card}(\mathcal{A}_{v})$ | $\operatorname{card}(\mathcal{S}_{v})$ | $\operatorname{card}(\mathcal{E}_{v})$ | $\operatorname{card}(\mathcal{K}_{v})$ ---|---|---|---|---|--- SGS | $500$ | $53\pm 2$ | $71\pm 3$ | $89\pm 4$ | $19\pm 1$ SGS | $1625$ | $157\pm 5$ | $248\pm 8$ | $291\pm 9$ | $44\pm 1$ SGS | $2750$ | $247\pm 7$ | $420\pm 11$ | $491\pm 13$ | $71\pm 2$ SGS | $3875$ | $316\pm 9$ | $571\pm 14$ | $663\pm 16$ | $92\pm 3$ SGS | $5000$ | $375\pm 10$ | $717\pm 16$ | $824\pm 19$ | $107\pm 3$ Table A6: General and group screening metrics for SGS and gSLOPE using logistic models for the simulation study presented in Section 5.1. General metrics: the runtime (in seconds) for screening against no screening, the number of fitting iterations for screening against no screening, and the $\ell_{2}$ distance between the fitted values obtained with screening and no screening. Group screening metrics: the number of groups in $\mathcal{A}_{g},\mathcal{S}_{g},\mathcal{E}_{g}$, and $\mathcal{K}_{g}$. The results are shown for different values of $p$, averaged across $\rho\in\\{0,0.3,0.6,0.9\\}$. Standard errors are shown. 
Method | $p$ | Runtime (s) (screen) | Runtime (s) (no screen) | $\operatorname{card}(\mathcal{A}_{g})$ | $\operatorname{card}(\mathcal{S}_{g})$ | $\operatorname{card}(\mathcal{E}_{g})$ | $\operatorname{card}(\mathcal{K}_{g})$ | Num it (screen) | Num it (no screen) | $\ell_{2}$ dist to no screen
---|---|---|---|---|---|---|---|---|---|---
gSLOPE | $500$ | $49\pm 1$ | $76\pm 1$ | $26\pm 1$ | $31\pm 1$ | $31\pm 1$ | $0.003\pm 0.004$ | $31\pm 1$ | $40\pm 1$ | $4\times 10^{-8}\pm 5\times 10^{-8}$
gSLOPE | $1625$ | $138\pm 3$ | $203\pm 3$ | $43\pm 1$ | $54\pm 2$ | $54\pm 2$ | $0.004\pm 0.006$ | $78\pm 2$ | $79\pm 1$ | $1\times 10^{-8}\pm 5\times 10^{-9}$
gSLOPE | $2750$ | $987\pm 11$ | $1641\pm 16$ | $72\pm 2$ | $95\pm 3$ | $95\pm 3$ | $0.008\pm 0.013$ | $87\pm 2$ | $89\pm 1$ | $1\times 10^{-8}\pm 5\times 10^{-9}$
gSLOPE | $3875$ | $1441\pm 26$ | $2398\pm 31$ | $98\pm 3$ | $135\pm 3$ | $135\pm 3$ | $0.031\pm 0.054$ | $95\pm 2$ | $98\pm 1$ | $7\times 10^{-9}\pm 3\times 10^{-9}$
gSLOPE | $5000$ | $1454\pm 29$ | $2727\pm 40$ | $118\pm 3$ | $168\pm 4$ | $168\pm 4$ | $0.022\pm 0.041$ | $101\pm 2$ | $109\pm 1$ | $4\times 10^{-9}\pm 1\times 10^{-9}$
SGS | $500$ | $118\pm 1$ | $113\pm 1$ | $28\pm 1$ | $33\pm 1$ | $38\pm 1$ | $8\pm 0.2$ | $6\pm 0.3$ | $29\pm 1$ | $8\times 10^{-9}\pm 7\times 10^{-10}$
SGS | $1625$ | $248\pm 2$ | $538\pm 9$ | $50\pm 2$ | $59\pm 2$ | $64\pm 2$ | $11\pm 0.4$ | $7\pm 1$ | $63\pm 3$ | $5\times 10^{-9}\pm 1\times 10^{-9}$
SGS | $2750$ | $374\pm 2$ | $868\pm 12$ | $85\pm 3$ | $104\pm 2$ | $115\pm 3$ | $21\pm 1$ | $7\pm 0.4$ | $57\pm 2$ | $3\times 10^{-9}\pm 5\times 10^{-10}$
SGS | $3875$ | $558\pm 4$ | $1280\pm 19$ | $116\pm 3$ | $148\pm 3$ | $164\pm 3$ | $30\pm 1$ | $8\pm 0.4$ | $54\pm 1$ | $2\times 10^{-9}\pm 2\times 10^{-10}$
SGS | $5000$ | $737\pm 5$ | $1498\pm 19$ | $141\pm 4$ | $188\pm 3$ | $209\pm 4$ | $40\pm 1$ | $8\pm 0.4$ | $54\pm 1$ | $1\times 10^{-9}\pm 2\times 10^{-10}$

#### F.4 Additional results from the real data analysis
Table A7: Variable screening metrics for SGS applied to real data in 5.2. The number of variables in $\mathcal{A}_{v},\mathcal{S}_{v},\mathcal{E}_{v}$, and $\mathcal{K}_{v}$ are shown. The results are averaged across the nine pathway collections, with standard errors shown.

Method | Dataset | Input | $\operatorname{card}(\mathcal{A}_{v})$ | $\operatorname{card}(\mathcal{S}_{v})$ | $\operatorname{card}(\mathcal{E}_{v})$ | $\operatorname{card}(\mathcal{K}_{v})$
---|---|---|---|---|---|---
SGS | Cancer | $5651$ | $71\pm 1$ | $211\pm 4$ | $212\pm 4$ | $1\pm 0.04$
SGS | Colitis | $10259$ | $25\pm 1$ | $208\pm 6$ | $209\pm 6$ | $1\pm 0.07$

Table A8: General and group screening metrics for SGS and gSLOPE applied to real data in 5.2. General metrics: the runtime (in seconds) for screening against no screening, the number of fitting iterations for screening against no screening (with the number of occasions of failed convergence given in brackets), and the $\ell_{2}$ distance between the fitted values obtained with screening and no screening. Group screening metrics: the number of groups in $\mathcal{A}_{g},\mathcal{S}_{g},\mathcal{E}_{g}$, and $\mathcal{K}_{g}$. The results are averaged across the nine pathway collections, with standard errors shown.
Method | Dataset | Input | $\operatorname{card}(\mathcal{A}_{g})$ | $\operatorname{card}(\mathcal{S}_{g})$ | $\operatorname{card}(\mathcal{E}_{g})$ | $\operatorname{card}(\mathcal{K}_{g})$ | Num it (screen, num failed) | Num it (no screen, num failed) | $\ell_{2}$ dist to no screen
---|---|---|---|---|---|---|---|---|---
gSLOPE | Cancer | $592$ | $66\pm 1$ | $120\pm 2$ | $120\pm 2$ | $0\pm 0$ | $1719\pm 63(16)$ | $4453\pm 103(186)$ | $7\times 10^{-8}\pm 7\times 10^{-9}$
gSLOPE | Colitis | $655$ | $32\pm 1$ | $56\pm 2$ | $56\pm 2$ | $0\pm 0$ | $1215\pm 73(28)$ | $3904\pm 127(140)$ | $3\times 10^{-7}\pm 5\times 10^{-8}$
SGS | Cancer | $592$ | $50\pm 1$ | $167\pm 2$ | $115\pm 2$ | $67\pm 0.8$ | $72\pm 3(0)$ | $127\pm 4(0)$ | $1\times 10^{-8}\pm 1\times 10^{-10}$
SGS | Colitis | $655$ | $18\pm 0.6$ | $136\pm 4$ | $87\pm 3$ | $61\pm 2$ | $156\pm 9(0)$ | $1348\pm 62(0)$ | $4\times 10^{-8}\pm 6\times 10^{-9}$
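As a companion to the weight definitions of Appendix E (Equation 16), the following is a minimal illustrative sketch of how the OSCAR-style variable and group weight sequences can be generated. The value of $\sigma_{1}$ here is a generic placeholder (any positive constant, or a data-driven choice); it is not taken from the reference implementation.

```python
import numpy as np

def sgo_weights(p, m, sigma_1=1.0):
    """Sketch of the SGO weight sequences from Equation 16.

    v_i = sigma_1 + sigma_2 * (p - i),  sigma_2 = sigma_1 / p   (variables)
    w_g = sigma_1 + sigma_3 * (m - g),  sigma_3 = sigma_1 / m   (groups)
    Both sequences are non-negative and non-increasing, as required by the
    OWL/SLOPE framework.
    """
    sigma_2 = sigma_1 / p
    sigma_3 = sigma_1 / m
    i = np.arange(1, p + 1)          # variable indices 1..p
    g = np.arange(1, m + 1)          # group indices 1..m
    v = sigma_1 + sigma_2 * (p - i)  # linearly decaying variable weights
    w = sigma_1 + sigma_3 * (m - g)  # linearly decaying group weights
    return v, w

v, w = sgo_weights(p=500, m=100)
assert np.all(np.diff(v) <= 0) and np.all(np.diff(w) <= 0)  # non-increasing
```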
# Modeling, Characterization, and Control of Bacteria-inspired Bi-flagellated Mechanism with Tumbling ††thanks: We acknowledge financial support from the National Science Foundation under Grant numbers CAREER-2047663 and CMMI-2101751. Zhuonan Hao1, Sangmin Lim1, Mohammad Khalid Jawed1 1Department of Mechanical & Aerospace Engineering, University of California, Los Angeles, 420 Westwood Plaza, Los Angeles, CA 90095 ###### Abstract Multi-flagellated bacteria utilize the hydrodynamic interaction between their filamentary tails, known as flagella, to swim and change their swimming direction in low Reynolds number flow. This interaction, referred to as bundling and tumbling, is often overlooked in simplified hydrodynamic models such as Resistive Force Theories (RFT). However, for the development of efficient and steerable robots inspired by bacteria, it becomes crucial to exploit this interaction. In this paper, we present the construction of a macroscopic bio-inspired robot featuring two rigid flagella arranged as right- handed helices, along with a cylindrical head. By rotating the flagella in opposite directions, the robot’s body can reorient itself through repeatable and controllable tumbling. To accurately model this bi-flagellated mechanism in low Reynolds flow, we employ a coupling of rigid body dynamics and the method of Regularized Stokeslet Segments (RSS). Unlike RFT, RSS takes into account the hydrodynamic interaction between distant filamentary structures. Furthermore, we delve into the exploration of the parameter space to optimize the propulsion and torque of the system. To achieve the desired reorientation of the robot, we propose a tumble control scheme that involves modulating the rotation direction and speed of the two flagella. By implementing this scheme, the robot can effectively reorient itself to attain the desired attitude. Notably, the overall scheme boasts a simplified design and control as it only requires two control inputs. With our macroscopic framework serving as a foundation, we envision the eventual miniaturization of this technology to construct mobile and controllable micro-scale bacterial robots. ###### Index Terms: bio-inspired robot, tumble control scheme ## I Introduction The study of flagellated bacteria and microorganisms has provided valuable insights into the development of flagellated robots. [1, 2, 3]. These robots mimic the locomotion of flagellated organisms, which rely on the intricate interaction between their helical structures, known as flagella, and the surrounding viscous fluid. By understanding and replicating these propulsion mechanisms, flagellated robots can achieve functional movements such as running, turning, and stopping. Additionally, natural observations have highlighted that different types of bacteria, including uni-flagellated and multi-flagellated species, rely on distinct propulsion mechanisms to achieve specific forms of locomotion [4, 5]. Uni-flagellated bacteria possess a single flagellum filament protruding from the side of the body, enabling them to swim through the rotary motion of the flagellum relative to the cell body [6, 5]. Notably, previous investigations have revealed that when the rotational frequency of the flagella exceeds a threshold, buckling instability occurs, resulting in highly nonlinear swimming trajectories [7, 8]. 
The body orientation of these robots can be controlled simply by adjusting the spin speed of the flagellum, a mechanism that has been widely employed in uni-flagellated robot designs to achieve motility [9, 10]. However, research on robotic propulsion inspired by multi-flagellated bacteria is relatively limited. Multi-flagellated bacteria exhibit locomotion through the interplay of their flagella, involving phenomena such as bundle formation, tumbling, and polymorphic transformations, all of which arise from different flagella actuation [6, 11, 12]. Bundle formation occurs when two or more flagella spin in the same direction, generating efficient longitudinal propulsion. The propulsive force is approximately linearly related to the spin speed, and this observation has been effectively utilized in bi-flagellated robots to enable single-direction mobility [13, 14]. The presence of multiple flagella in these robots offers benefits, suggesting alternative approaches for speed enhancement beyond flagellum geometry optimization. However, a major limitation arises in the area of turning or reorienting the body, preventing these robots from swimming freely in space. Recent research has explored changing the spin direction of one or more flagella, gradually reducing the propulsion thrust and generating a turnover torque [4]. This results in rapid tumble events and seemingly erratic body reorientation. Our study models the tumbling event as a predictable phenomenon and aims to incorporate the tumbling mechanism into a bi-flagellated robot to enhance steerability. Figure 1: Snapshots from (a) experiments and (b) simulations. Side view of the bi-flagellated rotating around an axis at $t\in\\{0,5,10\\}\text{s}$. Two identical flagella rotate in opposite directions, i.e., clockwise (CW) and counterclockwise (CCW), at an angular velocity of $\omega=280~{}\text{rpm}$. To imitate the fluid-structure interplay between flagellum and low Reynolds number flow, computational fluid dynamics model including Resistive Force Theory (RFT), Slender Body Theory (SBT) [15], and Regularized Stokeslets Segments (RSS) [16] are used to predict the motion of uni- and multi- flagellated robots. RFT introduces the drag coefficient along the tangential and perpendicular directions of the flagellum. The method is computationally inexpensive but neglects the hydrodynamic interactions between flows induced by different parts of the flagellum. An accurate quantitative analysis requires a non-local hydrodynamics force model that accounts for the interaction between the flow induced by distant parts of the filament. Both SBT and RSS rely on the linearity of the Stokes equations for low Reynolds number flow, which can accurately describe the evolution of flagellum dynamics with long-ranged interactions in a viscous fluid. To understand the physical phenomenon of flagellated locomotion, we couple the rigid body dynamics with the hydrodynamics model to simulate the robot’s trajectory. Position and orientation are observable states in the system, which allow us to study periodical locomotion. As shown in Figure 1, the experiment and simulation show good agreement. The proposed simulation framework successfully reveals the interaction between two flagella from long- ranged hydrodynamics. Our contributions are as follows. We model and create a macroscopic bi- flagellated robot to study how different actuation modes can switch robot locomotion patterns. 
We show that a simple bi-flagellated robot that exploits variation in viscosity and structure in its tails can effectively reorient the body. A framework comprising experiments and simulations is developed to study the robot’s locomotion. The simulation tool can be used to generate data to explore the parameter space of the tumbling phenomenon. Meanwhile, the robot’s dynamics are fully described and can be used to formulate the control scheme. The physics behind the tumbling locomotion is elaborated in detail. The simplicity of the robot and the small number of moving parts can eventually lead to the miniaturization of this robot. The paper is organized as follows. In Section II, we describe the structural design and experimental setup of the bi-flagellated robot. Section III presents a computational framework to describe the robot dynamics in a viscous fluid. Section IV explores the optimal robot geometry for the best dynamic performance, and we validate our simulation results against the experiments. Section V concludes our work and proposes potential directions for future study.
## II Experimental design
### II-A Robotic structure
The robot depicted in Figure 2 (a) consists of a cylindrical head and two right-handed helical flagella with plates attached to the motor shafts. The head has a radius $r_{h}$ of 2.5 cm and a height $h$ of 4.3 cm. Inside the head, two tiny brushed DC geared motors are located, each rated at 6 V and capable of a stall current of 1.5 A. The motors are equipped with magnetic encoders and an IMU module. The motor shaft protrudes from the head, and its rotation direction and speed are controlled via PWM signals from a microcontroller. The flagella are manufactured using rapid prototyping with polylactic acid (PLA), a type of 3D printing material. The PLA flagella have a fixed cross-sectional radius $r_{0}$ of 1.58 mm and a helix radius $R$ of 6.36 mm. To generate sufficient experimental data for investigating the tumbling mechanism, the helix length $l$ is varied between 63.6 and 127.2 mm, while the helix pitch $\lambda$ is varied between 15.9 and 63.6 mm. The PLA material used for the flagella is considered to be non-deformable, with a Young’s modulus $E$ of 4.107$\times 10^{9}$ Pa. These design and parameter variations allow for the exploration of different configurations of the flagellated robot and provide a range of experimental data to study the underlying mechanisms of tumbling.
Figure 2: Robot schematic and experimental setup. (a) The bio-inspired robot comprises two components: (i) a cylindrical head with radius $r_{h}$ and height $h$, and (ii) two helical flagella tails with radius $R$, length $l$, pitch $\lambda$, and cross-section radius $r_{0}$. (b) The bi-flagellated robot is immersed in glycerin and rotates its body around a steering joint (1-DOF), i.e., the y-axis. Rotations around the other axes, as well as translations, are constrained.
### II-B Experimental setup
Our experiments are designed to serve two main purposes: (i) to explore the optimal structure of the flagellum and robot that generates maximum steering efficiency, and (ii) to achieve controllable direction changes. The details of our experimental setup are as follows. Figure 2 (b) illustrates the experimental apparatus used to validate the numerical simulations developed in the subsequent section.
The platform comprises four components: (i) a glycerin tank with dimensions of 122 cm (length) $\times$ 45 cm (width) $\times$ 51.5 cm (height), (ii) the bi- flagellated robot, (iii) a steering joint, and (iv) a positioning frame. Glycerin, with a density $\rho$ of 1.26 g/ml and viscosity $\mu$ of 1 Pa$\cdot$s at 25∘ Celsius, is chosen as the surrounding viscous environment. To facilitate quantitative comparison, we restrict the robot’s degree of freedom (DOF) to a single-axis rotation using the steering joint. This setup enables us to characterize the tumbling behaviors associated with different flagellum designs. The DC geared motors located inside the robot’s head are connected to an external microcontroller, allowing us to adjust the rotational speed and direction. The test platform permits us to vary the rotation speed $\omega$ within the range of 0 to 30 rad/s, ensuring that the low Reynolds number condition is satisfied, i.e., $Re=\rho\omega Rr_{0}/\mu\leq 0.37$. This condition guarantees that the fluid flow is predominately governed by viscosity rather than inertia. By operating within the low Reynolds number regime, we can accurately investigate the hydrodynamic interactions between the flagella and the surrounding fluid. ## III Numerical method of bi-flagellated locomotion TABLE I: Parameters with symbolic representations Symbol | Value | Unit | Description ---|---|---|--- $E$ | 4.107$\times 10^{9}$ | Pa | Young’s modulus $\rho$ | 1260 | kg/$\mathrm{m}^{3}$ | Density $\mu$ | 1 | Pa$\cdot$s | Viscosity $r_{0}$ | 1.58 | mm | Cross-sectional radius $R$ | 6.36 | mm | Helix radius $\lambda$ | 31.8 | mm | Helix pitch $l$ | 95.4 | mm | Helix axial length $d$ | 22 | mm | Flagellum spacing distance $r_{h}$ | 25 | mm | Head radius $h$ | 43 | mm | Head height $r_{m}$ | 2 | mm | Mass center shift $m_{h}$ | 0.1 | kg | Head mass $g$ | 9.8 | m/$\mathrm{s}^{2}$ | Gravitational acceleration $\omega$ | 0.1 | rad/s | Flagellum rotation speed $\Delta l$ | 5 | mm | Discretization length $\Delta t$ | 0.01 | s | Time step $C_{t}$ | 4 | 1 | Translational drag coefficient $C_{r}$ | 2.3 | 1 | Rotational drag coefficient The bi-flagellated robot consists of two main components: the helical flagellum and the cylindrical head. In order to simulate the locomotion of this robot in a viscous fluid medium, we develop a numerical model that combines three key components: (i) a kinematic representation for the bi- flagellated mechanism, (ii) Regularized Stokeslet Segments (RSS) to model the long-ranged hydrodynamics forces, and (iii) a forced-head hydrodynamics model to capture the interaction between the flagellum and the head. This section is structured as follows. In Section III-A, we provide a description of the kinematic representation of the helical flagellum and cylindrical head. This representation allows us to characterize the motion of the robot in terms of its shape and orientation. Then, in Section III-B, we explain how we integrate the kinematic representation with the RSS method to compute the hydrodynamic forces exerted on the flagellum. The RSS method considers the interactions between different segments of the flagellum and the surrounding fluid. Next, in Section III-C, we detail the equations of motion (EOM) that govern the dynamics of the bi-flagellated robot. These EOM are derived from the theory of rigid body dynamics, taking into account the forces and torques acting on the flagellum and the head. 
By solving these equations, we can simulate the motion and behavior of the robot in response to the hydrodynamic forces. Finally, in Section III-E, we discuss the geometric and physical conditions of the problem, including the dimensions and material properties of the flagellum and head. These conditions play a crucial role in determining the behavior and performance of the bi-flagellated robot in the simulated fluid environment. ### III-A Kinematic representation #### III-A1 Helical flagellum We model the flagella filament as a perfect helix with radius $R$, pitch $\lambda$, and axial length $l$ (see Figure 2 (a)). A right-handed helix in the Cartesian coordinate system is parameterized as a function of $s$, i.e., $\displaystyle\mathbf{r}(s)=\left[R\cos\frac{2\pi s}{L},R\sin\frac{2\pi s}{L},\frac{\lambda s}{L}\right],0\leq s\leq l,$ (1) where $L=\sqrt{(2\pi R)^{2}+\lambda^{2}}$ is the contour length of one helical turn. In the case of a left-handed helix, the second term has a negative sign. We employ the discretization method to model the kinematic of helical filament. In the schematic of Figure 3 (a), each discrete helical curve consists of $N+1$ nodes, i.e., $\mathbf{n}=[\mathbf{n}_{0},\mathbf{n}_{1},\cdots,\mathbf{n}_{N}]$. We take the first two nodes $\mathbf{n}_{0}$ and $\mathbf{n}_{1}$ as the connection to the head. Starting from $i=2$, the coordinate of the remaining nodes is calculated by taking $s=(i-2)l/(N-2)$ in Equation 1. The $N+1$ nodes correspond to $N$ edge vector $\mathbf{e}^{1}$, $\cdots$,$\mathbf{e}^{N-1}$, such that $\mathbf{e}^{i}=\mathbf{x}_{i+1}-\mathbf{x}_{i}$, where $i=1,\cdots,N-1$. Hereby, we denote the node-associated quantities by superscripts and edge- associated quantities by subscripts. Nodal positions constitute the $3N$ sized DOF vector, i.e., $\mathbf{X}=[\mathbf{x}_{0},\cdots,\mathbf{x}_{N}]^{T}$, where the superscripts $T$ denotes transposition. Because the rigid flagellum can only rotate around a single fixed axis, i.e., z-axis, the angular velocity vector is specified as $\boldsymbol{\omega}=[0,0,\omega_{z}]$. By defining the rotation axis $\mathbf{x}_{\text{rotate}}=[0,0,1]$, we can obtain the linear velocity of each node by $\dot{\mathbf{x}}_{i}=\boldsymbol{\omega}\times(\mathbf{x}_{i}-\mathbf{x}_{\text{rotate}})$, where $\times$ denotes the cross product of two vectors. With nodal velocities, we can update the nodal positions at each time step. Rearranging the derivative of DOF vector as $\mathbf{U}=\dot{\mathbf{X}}=[\dot{\mathbf{x}}_{0},\cdots,\dot{\mathbf{x}}_{N}]^{T}$, the variable is used to formulate the drag force in RSS (see details in Section III-B). #### III-A2 Cylindrical head Without losing the generality, we use a single head node $\mathbf{n}_{\text{head}}$ in Figure 3 (a.1) to represent the spatial configuration of the bi-flagellated system. Concerning a prescribed fixed coordinate system, denoted as inertial frame $\mathbf{x}^{I}:x^{I}-y^{I}-z^{I}$, we can describe the translation and rotation on the rotated coordinate system, designated as body frame $\mathbf{x}^{B}:x^{B}-y^{B}-z^{B}$ attached to the head (see Figure 3 (b)). We take the Euler angles to represent the orientation of the head, which are typically denoted as yaw $\alpha$, pitch $\beta$, and roll $\gamma$ in $Z-Y-X$ convection. In the steering joint setup, we define pitch angle $\beta$ as the angle between the axis $z_{I}$ and axis $z_{B}$ to describe the orientation of the bi-flagellated system. 
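As a concrete illustration of the flagellum kinematics in Section III-A1, the following sketch samples the right-handed helix of Equation 1 and evaluates the nodal velocities $\dot{\mathbf{x}}_{i}=\boldsymbol{\omega}\times(\mathbf{x}_{i}-\mathbf{x}_{\text{rotate}})$ for a prescribed spin about the z-axis. The geometry follows Table I; the number of nodes is an arbitrary illustrative choice, and the two head-connection nodes used in the full model are omitted for simplicity.

```python
import numpy as np

# Geometry from Table I (helix radius R, pitch lam, axial length l), SI units.
R, lam, l = 6.36e-3, 31.8e-3, 95.4e-3
L = np.sqrt((2 * np.pi * R) ** 2 + lam ** 2)   # contour length of one helical turn

def helix_nodes(n_nodes=40):
    """Sample the right-handed helix of Equation 1 at n_nodes points."""
    s = np.linspace(0.0, l, n_nodes)
    return np.stack([R * np.cos(2 * np.pi * s / L),
                     R * np.sin(2 * np.pi * s / L),
                     lam * s / L], axis=1)

def nodal_velocities(x, omega_z, x_rot=np.zeros(3)):
    """x_i_dot = omega x (x_i - x_rot) for rotation about the fixed z-axis."""
    omega = np.array([0.0, 0.0, omega_z])
    return np.cross(omega, x - x_rot)

x = helix_nodes()
u = nodal_velocities(x, omega_z=20.94)          # ~200 rpm, as in Section IV

# Low-Reynolds-number check from Section II-B: Re = rho * omega * R * r0 / mu.
rho, mu, r0 = 1260.0, 1.0, 1.58e-3
print(rho * 20.94 * R * r0 / mu)                # ~0.27, well below 1
```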
Further, to model the free swimming motion after removing the steering joint, we introduce quaternion for orientation $\mathbf{q}=(q_{0},q_{1},q_{2},q_{3})$ (convertable to Euler angle) and axial angular velocities along body frame $\boldsymbol{\omega}^{B}=(\omega_{x}^{B},\omega_{y}^{B},\omega_{z}^{B})$, and define DOF vector $\mathbf{Q}=[\mathbf{x}^{I},\dot{\mathbf{x}}^{I},\mathbf{q},\boldsymbol{\omega}^{B}]$ to represent spatial information. ### III-B Regularized Stokeslets Segments Figure 3: Kinematic representation of bi-flagellated system. (a) Long-range hydrodynamics. (a.1) Discrete schematic of the bi-flagellated robot. Each helical flagellum is discretized into $N+1$ nodes. Superscript denotes the helix index, and subscript denotes the node index. $\mathbf{n}_{0}^{1}$ and $\mathbf{n}_{0}^{2}$ connect the flagellum with the head node, as the application points of the forces generated by each helix, interacting with the head node in rigid body dynamics. Inset: Notations associated with the flow $\dot{\mathbf{x}}_{m}$ at point $\mathbf{x}_{m}$ generated by a line segment $\mathbf{e}_{k}=\mathbf{x}_{k+1}-\mathbf{x}_{k}$. (a.2) Time series of normalized hydrodynamics forces $\bar{F}$ along x, y, and z direction with flagella spacing distance $d=0.022\text{ m}$ and rotation speed $\omega_{1}=1\text{ rad/s}$ and $\omega_{2}=-1\text{ rad/s}$. The sinusoidal wave pattern arises due to the long-ranged hydrodynamics interactions between flows induced by different segments of the flagella. (b) The description of the robot coordinate in the body frame and inertia frame with Euler’s angle representation. The green arrow line $\mathbf{N}$ indicate the line of nodes. We use Regularized Stokeslets Segments (RSS) methods to model the viscous drag force experienced by a helical flagellum in motion within a viscous fluid. The relation between the velocity vector $\mathbf{U}$ of (size $3N$) at nodes set $\mathbf{n}$ and the hydrodynamics force vector $\mathbf{F}$ of (size $3N$) applied on them is linearly configured by a geometry-associated matrix $\mathbf{A}$ (vector of size $3N\times 3N$), i.e., $\displaystyle\mathbf{U}=\mathbf{A}\mathbf{F},$ (2) We describe the formulation of matrix $\mathbf{A}$ on the discretized helical flagellum as follows. The primary Green’s function of Stokes flow is the Stokeslets, which describes the flow associated with a singular point force. Referring to Figure 3 (a), RSS provides a relationship between the velocity $\dot{\mathbf{x}}_{m}$ at a point $\dot{\mathbf{x}}_{m}$ and the forces applied by each node on the fluid such that $8\pi\mu\dot{\mathbf{x}}_{m}=\sum_{k=0}^{N-2}\left(\mathbf{A}_{1}^{k}\mathbf{f}_{\mathrm{k}}^{h}+\mathbf{A}_{2}^{k}\mathbf{f}_{\mathrm{k}+1}^{h}\right),$ (3) where $\mathbf{f}_{\mathrm{k}}$ is the force vector of size 3 that represents the force applied by the $k$-th node onto the fluid. This is equal and opposite to the hydrodynamics force onto the $k$-th node. 
The matrix $\mathbf{A_{1}^{k}}$ and $\mathbf{A_{2}^{k}}$ are $\begin{array}[]{r}\mathbf{A}_{2}^{k}=\left|\mathbf{s}_{k}\right|\left(\left(T_{1,-1}^{k,k+1}+c^{2}T_{1,-3}^{k,k+1}\right)\mathbf{I}+T_{1,-3}^{k,k+1}\left(\mathbf{r}_{k}\mathbf{r}_{k}^{T}\right)+\right.\\\ \left.T_{2,-3}^{k,k+1}\left(\mathbf{r}_{k}\mathbf{s}_{k}^{T}+\mathbf{s}_{k}\mathbf{r}_{k}^{T}\right)+T_{3,-3}^{k,-3}\left(\mathbf{s}_{k}\mathbf{s}_{k}^{T}\right)\right),\\\ \mathbf{A}_{1}^{k}=\left|\mathbf{s}_{k}\right|\left(\left(T_{0,-1}^{\mathbf{k},\mathbf{k}+1}+c^{2}T_{0,-3}^{\mathbf{k},\mathbf{k}+1}\right)\mathbf{I}+T_{0,-3}^{k,k+1}\left(\mathbf{r}_{k}\mathbf{r}_{k}^{T}\right)+\right.\\\ \left.T_{1,-3}^{k,k+1}\left(\mathbf{r}_{k}\mathbf{s}_{k}^{T}+\mathbf{s}_{k}\mathbf{r}_{k}^{T}\right)+T_{2,-3}^{k,k+1}\left(\mathbf{s}_{k}\mathbf{s}_{k}^{T}\right)\right)-\mathbf{A}_{2}^{k},\end{array}$ where $\mathbf{x}_{m}$ is the point of measurement, $c$ is the regularization parameter (from analysis in [16], $c=1.031\cdot r_{0}$), $\mathbf{I}$ is the 3-b-3 identity matrix, $\mathbf{r}_{k}=\mathbf{x}_{m}-\mathbf{x}_{k}$, $\mathbf{r}_{k+1}=\mathbf{x}_{m}-\mathbf{x}_{k+1}$, and $\mathbf{e}_{k}=\mathbf{x}_{k+1}-\mathbf{x}_{k}$ are the position vectors between edge and point of measurement, and the scalar quantities denoted by $T$, e.g., $T_{0,-1}^{k,k+1}$ are expressed as follow $\displaystyle T_{0,-1}^{k,k+1}=\left.\frac{1}{\left|\mathbf{s}_{k}\right|}\log\left[\left|\mathbf{s}_{k}\right|R+\left(\mathbf{x}_{\alpha}\cdot\mathbf{s}_{k}\right)\right]\right|_{0}^{1},$ $\displaystyle T_{0,-3}^{k,k+1}=-\left.\frac{1}{R\left[\left|\mathbf{s}_{k}\right|R+\left(\mathbf{x}_{\alpha}\cdot\mathbf{s}_{k}\right)\right]}\right|_{0}^{1},$ $\displaystyle T_{1,-1}^{k,k+1}=\left.\frac{R}{\left(\left|\mathbf{s}_{k}\right|\right)^{2}}\right|_{0}^{1}-\frac{\left(\mathbf{x}_{0}\cdot\mathbf{s}_{k}\right)}{\left(\left|\mathbf{s}_{k}\right|\right)^{2}}T_{0,-1}^{k,k+1}\text{, }$ $\displaystyle T_{1,-3}^{k,k+1}=-\left.\frac{1}{R\left(\left|\mathbf{s}_{k}\right|\right)^{2}}\right|_{0}^{1}-\frac{\left(\mathbf{x}_{0}\cdot\mathbf{s}_{k}\right)}{\left(\left|\mathbf{s}_{k}\right|\right)^{2}}T_{0,-3}^{k,k+1}\text{, }$ $\displaystyle T_{2,-3}^{k,k+1}=-\left.\frac{\alpha}{R\left(\left|\mathbf{s}_{k}\right|\right)^{2}}\right|_{0}^{1}+\frac{1}{\left(\left|\mathbf{s}_{k}\right|\right)^{2}}T_{0,-1}^{k,k+1}-\frac{\left(\mathbf{x}_{0}\cdot\mathbf{s}_{k}\right)}{\left(\left|\mathbf{s}_{k}\right|\right)^{2}}T_{1,-3}^{k,k+1},$ $\displaystyle T_{3,-3}^{k,k+1}=-\left.\frac{\alpha^{2}}{R\left(\left|\mathbf{s}_{k}\right|\right)^{2}}\right|_{0}^{1}+\frac{2}{\left(\left|\mathbf{s}_{k}\right|\right)^{2}}T_{1,-1}^{k,k+1}-\frac{\left(\mathbf{x}_{0}\cdot\mathbf{s}_{k}\right)}{\left(\left|\mathbf{s}_{k}\right|\right)^{2}}T_{2,-3}^{k,k+1},$ where $\mathbf{x}_{\alpha}=\mathbf{x}_{k}-\alpha\mathbf{s}_{k}$, and $R=\sqrt{|\mathbf{x}_{\alpha}|^{2}+c^{2}}$. The geometry matrix $\mathbf{A}$ is formulated by a rearrangement of block matrices $\mathbf{A}_{1}^{k}$ and $\mathbf{A}_{2}^{k}$ on the corresponding node index. At each time step in the simulation, knowing the position $\mathbf{X}$ and velocity $\mathbf{U}$ of the node set $\mathbf{n}$, we can construct the geometry matrix $\mathbf{A}$. Then we employ the force-velocity relationship in Equation 2 to evaluate the hydrodynamics forces by $\mathbf{\hat{F}}=\mathbf{A}\setminus\mathbf{U}$. 
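To illustrate the assemble-and-solve structure of Equation 2, the following is a minimal sketch that builds a dense $3N\times 3N$ matrix relating nodal velocities to nodal forces and then recovers the forces by solving the linear system, $\hat{\mathbf{F}}=\mathbf{A}\setminus\mathbf{U}$. For brevity it uses the simpler regularized point-Stokeslet kernel rather than the segment-wise blocks $\mathbf{A}_{1}^{k},\mathbf{A}_{2}^{k}$ derived above, so it is a simplified stand-in, not the paper's RSS implementation; the regularization parameter follows $c=1.031\,r_{0}$.

```python
import numpy as np

def reg_stokeslet_matrix(x, mu, eps):
    """Assemble a 3N x 3N matrix A such that U = A F, using regularized
    point Stokeslets at the nodes x (shape (N, 3)). The dense coupling of
    every node to every other node mirrors the structure of Equation 3."""
    n = x.shape[0]
    A = np.zeros((3 * n, 3 * n))
    for m in range(n):
        for k in range(n):
            r = x[m] - x[k]
            r2 = r @ r
            d = (r2 + eps ** 2) ** 1.5
            block = np.eye(3) * (r2 + 2 * eps ** 2) / d + np.outer(r, r) / d
            A[3 * m:3 * m + 3, 3 * k:3 * k + 3] = block / (8 * np.pi * mu)
    return A

def solve_forces(x, u, mu=1.0, r0=1.58e-3):
    """Given nodal velocities u (N, 3), recover nodal hydrodynamic forces."""
    A = reg_stokeslet_matrix(x, mu, eps=1.031 * r0)   # c = 1.031 * r0 per the text
    return np.linalg.solve(A, u.ravel()).reshape(-1, 3)
```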
### III-C Forced-head hydrodynamics model To study the locomotion of a bi-flagellated system under external forces, we build the forced-head hydrodynamics model by using the aforementioned kinematic representation. A single head node $\mathbf{n}^{\text{head}}$ and two interconnected nodes from helical flagellum $\mathbf{n}_{0}^{1}$ and $\mathbf{n}_{0}^{2}$. in Figure 3 describe the configuration of the dynamical system. Head node $\mathbf{n}_{\text{head}}$ accounts for the hydrodynamics drag force and torque induced by the translation and rotation of the head. Two connected nodes $\mathbf{n}_{0}^{1}$ and $\mathbf{n}_{0}^{2}$ with equal distance $d/2$ to $\mathbf{n}^{\text{head}}$ account for the resultant of hydrodynamics forces and torque generated by two helical flagella. In this section, we analyze the forces and torques applied to the system and formulate the equation of motion of bi-flagellated locomotion. Hydrodynamics force on head. We use the Stokes law to compute the hydrodynamics force on the robot head. As the head translates with velocity $\dot{\mathbf{x}}^{\text{head}}$, the viscous fluid exerts a drag force to resist the translation. Likewise, when the head rotates with angular velocity $\boldsymbol{\omega}^{\text{head}}$, the viscous fluid applies torque to resist that rotation. Stokes’ law model the hydrodynamics drag by considering the object as a small sphere. For the cylindrical head, we can introduce two prefactors $C_{t},C_{r}$ to account for the non-sphericity of the object. Therefore, we model the translation drag force as $\mathbf{f}_{t}^{\text{head }}=-C_{t}\cdot 6\pi\mu r_{h}\dot{\mathbf{x}}^{\text{head}},$ (4) where $r_{h}$ is the radius of the head, and the rotation drag torque as $\mathbf{T}_{r}^{\text{head}}=-C_{r}\cdot 8\pi d_{r}^{3}\boldsymbol{\omega}^{\text{head}},$ (5) where $d_{r}$ is reference dimension to account for the non-spherical shape. The value of $C_{t}$ and $C_{r}$ are determined by drop and rotation test. In our model, $\mathbf{f}_{t}^{\text{head }}$ and $\mathbf{T}_{r}^{\text{head}}$ are applied on the head node $\mathbf{n}^{\text{head}}$. Righting moment due to mass distribution. In our bi-flagellated system, the head is conditioned to be neutral buoyancy. Therefore, the gravitational force and buoyancy forces are balanced, i.e., $mg=\rho Vg$, where $m$ is the mass of the head, $V$ is the volume of the head, $\rho$ is the density of fluid medium, and $g$ is gravitational acceleration. However, the mass is not uniformly distributed along the robot head. When the center of mass (COM) and center of geometry (COG) is shifted by a distance $\textbf{r}_{m}$, a righting moment tend to restore the robot to its previous attitude after any rotational displacement. The moment can be modeled as $\displaystyle\mathbf{T}_{m}^{\text{head}}=mg\textbf{r}_{m}\sin{\beta},$ (6) where $\textbf{r}_{m}$ is displacement vector pointing from COM to COG, and $\beta$ is the pitch angle. Propulsive force from flagellum. In Section III-B, we evaluate the hydrodynamics forces of each node along the discrete helical flagellum by the method of RSS. In the bi-flagellated system, two flagella provide the propulsion for the head. The propulsive force is equivalent to the resultant forces of all nodes forces of each flagellum applied on the two connection nodes $\mathbf{n}_{0}^{1}$ and $\mathbf{n}_{0}^{2}$, i.e., $\mathbf{f_{p}^{\text{tail}}}=\sum_{k=1}^{N}\mathbf{f_{k}}$. 
For simplicity, we denote the resultant forces for the two flagella as $\mathbf{f_{p}^{1}}$ and $\mathbf{f_{p}^{2}}$. Figure 3 (a.2) shows the time evolution of the resultant forces when the two flagella rotate in opposite directions. The force amplitude shows a sinusoidal pattern resulting from the long-ranged coupling between the two flagella. The opposite signs of the forces along the z-axis cancel the propulsive effect but instead generate a turnover torque because of the spacing distance $d$, which is the fundamental mechanism of the tumbling phenomenon. The torque applied equivalently on the head node $\mathbf{n}^{\text{head}}$ is given by $\displaystyle\mathbf{T}^{\text{tail}}=(\mathbf{f_{p}^{1}}-\mathbf{f_{p}^{2}})\times\dfrac{d}{2},$ (7) where $\mathbf{r_{1}}=\mathbf{x}_{0}^{1}-\mathbf{x}^{\text{head}}$ and $\mathbf{r_{2}}=\mathbf{x}_{0}^{2}-\mathbf{x}^{\text{head}}$. In summary, the external forces and torques applied on the head include $\mathbf{f}_{t}^{\text{head}},\mathbf{T}_{r}^{\text{head}},\mathbf{T}_{m}^{\text{head}},\mathbf{f_{p}^{\text{tail}}},\mathbf{T}^{\text{tail}}$. The governing equation of pivot steering in terms of the pitch angle $\beta$ is: $I_{y}\mathbf{\ddot{\beta}}=T_{rz}^{\text{head}}+T_{mz}^{\text{head}}+T^{\text{tail}}_{z},$ (8) where the subscript $z$ represents the torque component along the z direction, and the governing equation of free swimming in terms of the DOF vector $\mathbf{Q}$ is $\displaystyle m\begin{bmatrix}0\\ \ddot{\mathbf{x}}^{I}\end{bmatrix}$ $\displaystyle=\mathbf{q}\otimes\begin{bmatrix}0\\ \mathbf{f}_{t}^{\text{head}}+\mathbf{f_{p}^{1}}+\mathbf{f_{p}^{2}}\end{bmatrix}\otimes\mathbf{q}^{*},$ (9) $\displaystyle\dot{\mathbf{q}}$ $\displaystyle=\frac{1}{2}\mathbf{q}\otimes\begin{bmatrix}0\\ \mathbf{\omega}^{B}\end{bmatrix},$ $\displaystyle\mathbf{J}\dot{\mathbf{\omega}}^{B}$ $\displaystyle=-\mathbf{\omega}^{B}\times\mathbf{J}\mathbf{\omega}^{B}+\mathbf{T}_{r}^{\text{head}}+\mathbf{T}_{m}^{\text{head}}+\mathbf{T}^{\text{tail}},$ where $\mathbf{q}^{*}$ is the conjugate of $\mathbf{q}$, and $\mathbf{J}=\text{diag}(I_{x},I_{y},I_{z})$ is the moment of inertia matrix.
### III-D Control scheme of pivot tumbling
To study the controllability of tumbling, we rewrite Equation 8 as a state-space model by defining the state vector $\mathbf{x}\triangleq[\beta,\dot{\beta}]$ and the control input vector $\mathbf{u}\triangleq[\mathbf{f^{1}_{p}},\mathbf{f^{2}_{p}}]$ (assuming small $\beta$, such that $\sin(\beta)\approx\beta$ holds): $\dot{\mathbf{x}}(t)=\mathbf{A}\mathbf{x}(t)+\mathbf{B}\mathbf{u}(t),\quad\mathbf{x(0)}=\mathbf{x}_{0},$ where $\mathbf{A}=\begin{bmatrix}0&1\\ -\dfrac{mgr_{m}}{I_{y}}&-\dfrac{8\pi C_{r}\mu h^{3}}{I_{y}}\end{bmatrix},\mathbf{B}=\frac{d}{2I_{y}}\begin{bmatrix}0&0\\ 1&-1\end{bmatrix}$ The system is structurally stable because the eigenvalues of the matrix $\mathbf{A}$ have negative real parts.
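To make the stability claim concrete, the following short sketch builds the state matrices $\mathbf{A}$ and $\mathbf{B}$ from the physical constants in Table I and checks the eigenvalues of $\mathbf{A}$. The pitch-axis moment of inertia $I_{y}$ is not listed in Table I, so a rough solid-cylinder estimate is used here purely for illustration.

```python
import numpy as np

# Physical constants from Table I; I_y is an assumed solid-cylinder estimate.
m, g, r_m = 0.1, 9.8, 2e-3          # head mass, gravity, mass-center shift
mu, C_r, h, d = 1.0, 2.3, 43e-3, 22e-3
r_h = 25e-3
I_y = m * (3 * r_h**2 + h**2) / 12.0   # rough transverse inertia of a cylinder

# State matrices of the linearized pivot dynamics (small beta).
A = np.array([[0.0, 1.0],
              [-m * g * r_m / I_y, -8 * np.pi * C_r * mu * h**3 / I_y]])
B = (d / (2 * I_y)) * np.array([[0.0, 0.0],
                                [1.0, -1.0]])

# Stability check: both eigenvalues of A have negative real part,
# so the pivot dynamics are asymptotically stable.
print(np.linalg.eigvals(A))
```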
The states of the system asymptotically converges to the steady condition $\beta_{\text{ss}}=\dfrac{(\mathbf{f^{1}_{p}}-\mathbf{f^{2}_{p}})d}{2mg\textbf{r}_{m}},\quad\dot{\beta}_{\text{ss}}=0$ (10) Therefore, to realize the desired pitch angle $\beta_{\text{ref}}$, we require $\mathbf{f^{1}_{p}}$ and $\mathbf{f^{2}_{p}}$ to satisfy below conditions $\displaystyle\mathbf{f^{1}_{p}}+\mathbf{f^{2}_{p}}$ $\displaystyle=0,\text{\quad(max torque)}$ (11) $\displaystyle\dfrac{(\mathbf{f^{1}_{p}}-\mathbf{f^{2}_{p}})d}{2mg\textbf{r}_{m}}$ $\displaystyle=\beta_{\text{ref}},\text{\quad(steady state value)}$ $\displaystyle\|\mathbf{f^{1}_{p}}\|,\|\mathbf{f^{2}_{p}}\|$ $\displaystyle\leq\mathbf{f_{\max}}.\text{\quad(effective propulsion)}$ However, to implement a actual control for the propulsion $\mathbf{f^{1}_{p}}$ and $\mathbf{f^{2}_{p}}$, we need more knowledge on the mechanism of flagellated propulsion. In Section IV, we characterize the propulsion with flagellum geometry and rotation speed. ### III-E Definition of the problem The general framework introduced above for the forced-head dynamics is now applied to generating bi-flagellated locomotion. We provide specifics on the geometry and physical parameters of this problem. The flagellum is chosen to be a rigid right-handed helical filament with Young’s modulus $E$. The geometrical parameters describing the helix structure include helix pitch $\lambda$, helix radius $R$, axial length $l$, and cross-sectional radius $r_{0}$. The values of the parameters in Table I are chosen to match the laboratory experiments described in Section II. We use a dimensionless scheme to generalize the results, except for several fundamental variables, to make a valid comparison between macroscopic and microscopic mechanisms [17]. The procedures are introduced as follows. The helical flagellum is connected to the head at one extremity, where it is rotated counterclockwise with a prescribed angular velocity $\omega$. Two flagella are spaced with a specific distance $d$. Hereafter, we normalize the spacing distance by the helix radius, $R$, such that the normalized distance is $\bar{d}=d/R$, where the overbar symbol $\bar{\cdot}$ denotes the normalized variables. Likewise, the geometrical parameters that describes the flagellum structure include $\bar{\lambda}=\lambda/l$ and $\bar{R}=R/\lambda$. The input for the bi-flagellated system is angular velocity $\omega$ as a function of time. The propulsive thrust $F$ is the z component of $\mathbf{f_{p}^{\text{tail}}}$ and turnover torque $T$ is the y component of $\mathbf{T}^{\text{tail}}$, with the normalization as $\bar{F}=F/(\mu\omega RL)$ and $\bar{T}=T/(\mu\omega R^{2}L)$, where $\mu$ denote the viscosity of fluid. This dimensionless representation allows for generality across length scales in interpreting our findings. ## IV Results and discussion The bi-flagellated system can undergo different motions by varying the actuation modes of the two motors. In this section, we study the mechanism of tumbling behavior. We first explore the parameter space that enhances the direction change from flagellum geometry and robot structure. Then we show a bi-flagellated robot that can efficiently reorient the body by tumbling. ### IV-A Four distinct locomotion patterns Figure 4: Locomotion patterns of the bi-flagellated robot. The robot can turn and translate when rotating two flagella with different modes. Denote $\omega>0$ when flagellum rotating counterclockwise. 
(a.1) Left turn when $\omega_{1}>0,\omega_{2}<0$, (a.2) Right turn when $\omega_{1}<0,\omega_{2}>0$, (a.3) upward translation when $\omega_{1}<0,\omega_{2}<0$, (a.4) downward translation when $\omega_{1}>0,\omega_{2}>0$. (b.1)-(b.4) The hydrodynamic force applied at the two flagella with the corresponding flagella rotation modes (a.1)-(a.4). The light red arrows represent the force direction applied on each node, and the dark red arrows represent the resultant force direction applied on the extremity of the flagellum.
Since the two flagella are rotated by two motors separately, we can propose four different actuation combinations according to the rotation directions of the two motors, i.e.,
1. $\omega_{1}>0$, $\omega_{2}<0$
2. $\omega_{1}<0$, $\omega_{2}>0$
3. $\omega_{1}<0$, $\omega_{2}<0$
4. $\omega_{1}>0$, $\omega_{2}>0$
where we denote $\omega_{i}>0$ as counterclockwise rotation viewed from above. The magnitude of the angular velocities is equal regardless of the rotation direction, i.e., $|\omega_{1}|=|\omega_{2}|$. Using the numerical simulation tool of Section III, we explore all the possible locomotion patterns from the above combinations. Figure 4 (a) demonstrates the four different locomotion patterns induced by the two flagella, including left turn, right turn, upward translation, and downward translation. The corresponding force-field plots in Figure 4 (b) explain the dynamic mechanism behind each motion. For a right-handed helix, a counterclockwise rotation yields a resultant force pointing obliquely downward, while a clockwise rotation yields an upward force, as the dark red arrows show. Rotating the two flagella in the same direction makes the two resultant forces point up or down simultaneously, though not precisely in the same direction due to the hydrodynamic coupling. On the contrary, rotating the two flagella in opposite directions makes the resultant forces act as a force couple that generates a turnover torque. If the torque is large enough to overcome the inertia of the head, a directional change takes place over time.
Figure 5: Geometrical parameters determine the magnitude of the propulsive thrust and turnover torque. (a) Dependence of the normalized propulsive force $\bar{F}$ (log-scaled) on both the normalized pitch $\lambda/l$ and normalized radius $R/\lambda$. (b) Dependence of the normalized turnover torque $\bar{T}$ (log-scaled) on both the normalized pitch $\lambda/l$ and normalized radius $R/\lambda$. The upward triangle symbols in (a) and (b) correspond to Table II. The circled red marker corresponds to the bi-flagellated robot. The red and green pentagram markers correspond to $l=1600R$ and $l=6.25R$, respectively. (c) Normalized turnover torque $\bar{T}$ as a function of the normalized flagella spacing distance $d/R$. (d) Normalized propulsive thrust $\bar{F}$ as a function of the normalized flagella spacing distance $d/R$. Inset in (c) and (d): the normalized resultant force of the two flagella as a function of the normalized flagella spacing distance $d/R$. The non-linear relationship emerges when the two flagella are in proximity, i.e., $d/R<5$, due to the hydrodynamic coupling between the two flagella.
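For reference, the dimensionless quantities plotted in Figure 5 follow directly from the definitions in Section III-E. A minimal sketch is given below; the geometry comes from Table I, while the thrust and torque values are arbitrary illustrative inputs rather than simulation outputs.

```python
import numpy as np

def normalize(F, T, mu, omega, R, lam, l, d):
    """Dimensionless quantities of Section III-E:
    F_bar = F / (mu * omega * R * L),  T_bar = T / (mu * omega * R^2 * L),
    with L the contour length of one helical turn, plus the geometric ratios."""
    L = np.sqrt((2 * np.pi * R) ** 2 + lam ** 2)
    return {"F_bar": F / (mu * omega * R * L),
            "T_bar": T / (mu * omega * R ** 2 * L),
            "d_bar": d / R,
            "lambda_bar": lam / l,
            "R_bar": R / lam}

# Example with Table I geometry and illustrative thrust/torque values.
print(normalize(F=1e-2, T=1e-4, mu=1.0, omega=20.94,
                R=6.36e-3, lam=31.8e-3, l=95.4e-3, d=22e-3))
```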
TABLE II: Parameters of flagellum for five species of bacteria Label | Microorganism | $\lambda/l$ | $R/\lambda$ ---|---|---|--- 1 | Caulobacter crescentus [18] | 0.1667 | 0.1205 2 | Escherichia coli [19] | 0.3571 | 0.0909 3 | Rhizobium lupini [20] | 0.2500 | 0.1852 4 | Salmonella [21] | 0.2500 | 0.0909 5 | Vibrio alginolyticus [22] | 0.3243 | 0.1167 $\color[rgb]{1,0,0}\circ$ | Bi-flagellated robot | 0.33 | 0.2 ### IV-B Design space for optimal flagellum geometry Figure 6: Comparison of simulation and experiment for fixed-body rotation. (a) Time evolution of pitch angle $\beta$ when angular speed of flagellum varying from 6.28 rad/s to 20.94 rad/s with $\lambda/l=3,R/\lambda=5,d/R=3.5$. (b) The steady state values of pitch angle $\beta_{\text{max}}$, as a function of angular velocity $\omega$, with a parameter set: $\lambda/l=3,R/\lambda=5,d/R=3.5$. (c) The steady state values of pitch angle $\beta_{\text{max}}$, as a function of normalized radius $R/\lambda$, with a parameter set: $l/R=15,d/R=3.5$, and $\omega$ = 20.9 rad/s. (d) The steady state values of pitch angle $\beta_{\text{max}}$, as a function of normalized pitch $R/\lambda$, with a parameter set: $R/\lambda=5,d/R=3.5$, and $\omega$ = 20.9 rad/s. Thus far, our findings on the bi-flagellated actuation and corresponding locomotion patterns have brought insight into the run-and-turn behaviors of bacteria. Next, we perform a broader exploration of the parameter space for the structure of the helical flagellum and robot, emphasizing the ranges relevant to natural multi-flagellated cells. We use normalized propulsive thrust $\bar{F}$ and turnover torque $\bar{T}$ to represent propulsion and steering efficiency. $\bar{F}$ characterize propulsion ability by a single helical flagellum. In contrast, $\bar{T}$ represents the ability of direction change formed by the force couple accounting for the long-ranged hydrodynamics effect. A systematic parametric study is performed to quantify the dependence of these variables on the geometry and structure parameters. In Figure 5 (a) and (b), we plot the magnitude of $\bar{T}$ and $\bar{F}$ (colorbar) in log-scale on the normalized pitch and normalized radius at $\omega=20.94$ rad/s. We set helix radius $R$ as a constant value of 0.0064 m and compute the helix pitch $\lambda$ and length $l$ per the corresponding value of normalized values. In the plot of torque, we make the distance between two flagella as constant $d=3R$ to eliminate the effect of spacing distance $d$. Two phase diagrams have commonalities in the geometrical parameters. We find that the two quantities increases as the normalized pitch and radius increase. The highest value locate at the left-bottom region, indicating the maximum torque and thrust. However, the region represents an elongated flagellum concerning the radius, i.e., $L=1600R$, which is less common in bacteria due to its poor maneuverability and high energy cost. From the parameter distribution of several species of bacteria in Table II, as the upward triangles, we learn that the optimal geometry locates on the right- bottom region is a trade-off between dynamic performance and dimensionality. Therefore, as the red marker shows, we take $\lambda/l=0.33$ and $R/\lambda=0.2$ as representative values. In Figure 5 (c) and (d), we plot the magnitude of $\bar{T}$ and $\bar{F}$ as a function of normalized space distance $d/R$, at the representative geometrical values. 
As $d/R$ increases, $\bar{T}$ and $\bar{F}$ monotonically increase, but non-linearity of the curves due to hydrodynamics occurs when the two flagella are in proximity. Intriguingly, from the insets of the two plots, we see that the long-ranged hydrodynamics exerts different effects depending on how the two flagella rotate. Hydrodynamics escalates the force magnitude when they rotate in opposite directions but decreases the magnitude when they rotate in the same direction. Since torque is the product of force and moment arm, its magnitude remains almost the same when the normalized spacing distance $d/R$ increases from 2 to 3.5. The result implies that the maximum propulsive force and turnover torque occur when the two flagella are spaced infinitely far apart. However, a long spacing distance is not preferable for either bacteria or a bio-inspired robot. The multi-flagellated system must therefore compromise between dimension and propulsion efficiency. Here, we take $d/R=3.5$ in our bi-flagellated system to ensure a proper size of the head.
### IV-C Analysis on tumbling process
Toward validating the numerical simulations presented in Section III, we now perform a direct quantitative comparison with experimental results using the apparatus described in Section II. Emphasis is given to the evolution of the pitch angle $\beta$, as a cumulative result of the applied forces and torques. In this section, we investigate how the pitch angle $\beta$ evolves with different flagellum parameters, including the normalized pitch $\lambda/l$ and rotation speed $\omega$. In Figure 6(a), we plot $\beta$ as a function of time $t$, with the reference parameters $\lambda/l=3,R/\lambda=5,d/R=3.5$. The pitch angle keeps increasing with time and reaches a maximum value, denoted as $\beta_{\text{max}}$. The magnitude of $\beta_{\text{max}}$ is proportional to the rotation speed of the flagella, which is ensured by the steady-state condition in Equation 10. Therefore, although the turnover torque is not measured directly in the experiments, we can use $\beta_{\text{max}}$, i.e., $\beta_{\text{ss}}$, as an indicator of the torque magnitude. Figure 6 (b) plots the relationship between $\beta_{\text{ss}}$ and the rotation speed $\omega$, and shows excellent agreement between experiments and simulations. We learn that the magnitude of the torque generated by the forces $\mathbf{f^{1}_{p}}$ and $\mathbf{f^{2}_{p}}$ is linear in the rotation speed of the flagella. We employ our numerical simulations to explore the effect of $\lambda/l$, in comparison with experiments, while keeping the angular velocity of the flagella at $\omega=20.94$ rad/s, for which the previous results showed a significant direction-change effect. Figure 6(c) shows a good match between experiment and simulation for the dependence of $\beta_{\text{max}}$ on $\lambda/l$. The agreement validates the result in Figure 5(a) about the relationship between the turnover torque and the flagellum geometry.
### IV-D Attitude control of bi-flagellated robot
Figure 7: Attitude control of the bi-flagellated robot. (a) Relationship between the propulsion force and the flagellum rotation speed $\omega$ with $d/R=3.5,R/\lambda=0.2$, and $\lambda/l=0.33$. The slopes are $K_{1}=0.0006$ Ns/rad and $K_{2}=-0.0006$ Ns/rad. (b) The evolution of the pitch angle $\beta$ when applying the attitude control scheme with a reference $\beta_{\text{ref}}=35^{\circ}$.
The previous sections show that the propulsion force and turnover torque are associated with the flagellum structure $d/R,R/\lambda,\lambda/l$ and the rotation speed $\omega$.
As for the control problem, it is not feasible to change the structure-related parameters to vary the propulsion force and torque during operation. Therefore, we take the rotation speeds of the two flagella, $\omega_{1},\omega_{2}$, as the actual control variables. Figure 6(b) illustrates that the torque is linear in the rotation speed for a given flagellum structure. Through our computational framework, we evaluate the propulsion forces as $\mathbf{f^{1}_{p}}=K_{1}\omega_{1},\mathbf{f^{2}_{p}}=K_{2}\omega_{2}$, as shown in Figure 7(a). This allows us to realize the control scheme described in Section III-D. We showcase tracking of a constant attitude angle $\beta_{\text{ref}}=35^{\circ}$ in both simulation and experiment. By solving Equation 11, we obtain $\omega_{1}=201.6~\text{rpm},\omega_{2}=-201.6~\text{rpm}$. We then set the rotation speeds of the two flagella to these values, and the pitch angle $\beta$ evolves as shown in Figure 7(b).
## V Conclusions and future work
In conclusion, we present a bi-flagellated mechanism and a numerical simulation framework for studying bacterial tumbling behavior. A dimensionless scheme generalizes our results to the bacterial scale. The framework is used to explore the relationship between the steering ability and the structural parameters of the bi-flagellated system. The attitude control scheme enables us to control the orientation of the robot. Directions for future work include: (i) formulation of an optimal control policy for the free-swimming robot, and (ii) development of simulation tools for soft, elastic flagella, accounting for contact effects between flagella.
## References
* [1] C. Brennen and H. Winet, “Fluid mechanics of propulsion by cilia and flagella,” Annual Review of Fluid Mechanics, vol. 9, pp. 339–398, 1977.
* [2] E. Lauga and T. R. Powers, “The hydrodynamics of swimming microorganisms,” Reports on progress in physics, vol. 72, no. 9, p. 096601, 2009.
* [3] A. Dolev, M. Kaynak, and M. S. Sakar, “On-board mechanical control systems for untethered microrobots,” Advanced Intelligent Systems, vol. 3, no. 10, p. 2000233, 2021.
* [4] M. Dvoriashyna and E. Lauga, “Hydrodynamics and direction change of tumbling bacteria,” Plos one, vol. 16, no. 7, p. e0254551, 2021.
* [5] G. I. Taylor, “Analysis of the swimming of microscopic organisms,” Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences, vol. 209, no. 1099, pp. 447–461, 1951.
* [6] S. Lim, Y. Du, Y. Lee, S. K. Panda, D. Tong, and M. K. Jawed, “Fabrication, control, and modeling of robots inspired by flagella and cilia,” Bioinspiration & Biomimetics, 2022.
* [7] R. Vogel and H. Stark, “Motor-driven bacterial flagella and buckling instabilities,” The European Physical Journal E, vol. 35, no. 2, pp. 1–15, 2012.
* [8] M. K. Jawed, N. K. Khouri, F. Da, E. Grinspun, and P. M. Reis, “Propulsion and instability of a flexible helical rod rotating in a viscous fluid,” Physical review letters, vol. 115, no. 16, p. 168101, 2015.
* [9] Y. Du, J. Lam, K. Sachanandani, and M. K. Jawed, “Modeling the locomotion of articulated soft robots in granular medium,” IEEE Robotics and Automation Letters, vol. 7, no. 3, pp. 6495–6502, 2022.
* [10] K. Son, J. S. Guasto, and R. Stocker, “Bacteria can exploit a flagellar buckling instability to change direction,” Nature physics, vol. 9, no. 8, pp. 494–498, 2013.
* [11] H. C. Berg, “The rotary motor of bacterial flagella,” Annual review of biochemistry, vol. 72, no. 1, pp. 19–54, 2003.
* [12] H. C. Berg and D. A.
# On the first order theory of plactic monoids

Daniel Turaev

###### Abstract

This paper proves that a plactic monoid of any finite rank will have decidable first order theory. This resolves other open decidability problems about the finite rank plactic monoids, such as the Diophantine problem and identity checking. This is achieved by interpreting a plactic monoid of arbitrary rank in Presburger arithmetic, which is known to have decidable first order theory. The algorithm generating the interpretations is uniform, which yields a positive answer to the Diophantine problem for the infinite rank plactic monoid. We also show that the plactic monoid of rank 2 is bi-interpretable with Presburger arithmetic.

###### Contents

1. 1 Introduction
2. 2 Background
   1. 2.1 Rewriting systems
   2. 2.2 The plactic monoid
   3. 2.3 Interpretations, theories, and Presburger arithmetic
3. 3 The case $n=2$
   1. 3.1 Interpreting $P_{2}$ in $(\mathbb{Z},0,1,+,-,\leq)$
   2. 3.2 Bi-interpretability
4. 4 The general case
   1. 4.1 Multiplication – the idea
   2. 4.2 The formula defining $\mu_{x}$
5. 5 The Diophantine problem in the infinite case
   1. 5.1 The plactic monoid of all tableaux
   2. 5.2 A plactic monoid on integers
   3. 5.3 Two open questions

## 1 Introduction

The plactic monoid has its origin in the work of Knuth [22], which based itself on the algorithm developed by Schensted [41]. First studied in depth by Lascoux and Schützenberger [25, 42, 43], its combinatorial properties were applied to the theory of symmetric polynomials to prove the Littlewood-Richardson rule. Due to its origins as a monoid of Young tableaux, it has proved useful in various aspects of geometry and representation theory [14]. More recently, it has found application in Kashiwara's crystal basis theory [19], with analogous plactic monoids being defined for different root systems associated to crystal bases [27, 28, 29, 30], and used to study Kostka-Foulkes polynomials [26]. Related plactic-like monoids have also been defined [1, 12, 16, 35], and used to study the combinatorics and growth properties of the plactic monoid, which itself has some interesting combinatorial structure. Cain, Gray, and Malheiro [5] have shown that the plactic monoids are biautomatic, as are related crystal monoids [4], and related plactic-like monoids such as the Chinese, Hypoplactic, and Sylvester monoids [6].

Schensted's multiplication algorithm can be used to decide the word problem for the plactic monoid. It was shown in 1981 that the plactic monoid has decidable conjugacy problem [25]. A classic generalisation of both the word and conjugacy problems is the Diophantine problem, which has received much attention for free groups [20, 32, 40, 44], where Makanin-Razborov diagrams were used independently by Sela [44] and Kharlampovich and Myasnikov [21] to solve the Tarski problems on the first order theory of free groups (for a survey of these results, see [13]). The Diophantine problem has also been studied for free monoids [33, 45] and is gaining attention in the study of other monoids [15, 36]. In the monoid setting, it asks for an algorithm for deciding whether a given system of equations has a solution in a given monoid. An active area of research is the question of checking identities in the plactic monoids and their monoid algebras.
Progress has been made in the rank 3 case [23, 24], and the plactic monoid, bicyclic monoid, and related plactic-like monoids have been shown to admit faithful representations in terms of matrices over the tropical semiring [3, 8, 10, 18]. This implies that every plactic monoid of finite rank satisfies a nontrivial semigroup identity. There is a natural decision problem underpinning this field of study – is it decidable whether a given identity is satisfied by a plactic monoid?

In this paper, we show that the plactic monoid of every finite rank has decidable first order theory. This result is a significant generalisation of both of the above ideas. Both identities and Diophantine equations are expressible as first order sentences, thus yielding positive results for both the Diophantine problem and the problem of identity checking. It is not equivalent to these results – the free semigroup has undecidable theory despite having decidable Diophantine problem [39]. Nor does this result follow from the multi-homogeneity of the plactic monoids, as there are multi-homogeneous monoids with undecidable theory, and even undecidable conjugacy problem [7]. The argument presented below proceeds by constructing an interpretation of a plactic monoid in Presburger arithmetic, and could open the door to studying the theories of plactic-like classes of monoid. It is known that all groups interpretable in Presburger arithmetic are abelian-by-finite [37]. This result may also be a starting point for classifying all monoids interpretable in Presburger arithmetic.

In preparing this paper for publication, the author was made aware in private communication that Alan Cain and Tara Brough have independently constructed a proof that the plactic monoids are interpretable in Presburger arithmetic, which will appear in a forthcoming paper of theirs. This coincides with the results in section 3.1 and section 4.

### Notation and conventions

Write $[n]$ for the set $\\{1,\dots,n\\}$. Write $A$ for a totally-ordered alphabet on $n$ letters, which will usually be $[n]$. The free monoid on an alphabet $A$ will be written with a Kleene star $A^{*}$, and will have identity $\varepsilon$, the empty word. Given a set of generators $A$ and relations $R\subset A^{*}\times A^{*}$, the monoid presentation denoted by $\langle A|R\rangle$ will be the quotient of $A^{*}$ by the congruence generated by $R$. The set $\mathbb{N}$ of natural numbers will contain 0.

## 2 Background

### 2.1 Rewriting systems

A string rewriting system (henceforth rewriting system) for $A^{*}$ is a set $R\subset A^{*}\times A^{*}$ of elements $(\ell,r)$, usually written $\ell\to r$, called _rewrite rules_. See the book [2] for a more detailed introduction. For two elements $u,v\in A^{*}$, write $u\to_{R}v$ if $u=x\ell z$, $v=xrz$, and $(\ell,r)\in R$. The transitive and reflexive closure of $\to_{R}$, written $\to_{R}^{*}$, is called the reduction relation of $R$. The equivalence relation generated by $\to_{R}^{*}$ is a semigroup congruence. The monoid obtained by taking the quotient of $A^{*}$ by this congruence is the monoid presented by $\langle A|R\rangle$. Thus every presentation $\langle A|R\rangle$ also corresponds to a rewriting system, which is written $(A,R)$.

A rewriting system is called _Noetherian_ if it has no infinite descending chain. That is, there is no sequence $u_{1},u_{2},\dots\ \in A^{*}$ such that $u_{i}\to_{R}u_{i+1}$ for all $i\in\mathbb{N}$.
A rewriting system is called _confluent_ if it has the property that, whenever $u\in A^{*}$ is such that $u\to_{R}^{*}u^{\prime}$ and $u\to_{R}^{*}u^{\prime\prime}$, there exists a $v$ such that $u^{\prime}\to_{R}^{*}v$ and $u^{\prime\prime}\to_{R}^{*}v$. We call a confluent Noetherian rewriting system _complete_. Call $u\in(A,R)$ a _reduced word_ if there is no subword $\ell$ of $u$ that forms the left hand side of a rewrite rule in $R$. By theorem 1.1.12 of [2], if $(A,R)$ is a complete rewriting system, then for every $u\in A^{*}$ there is a _unique, reduced_ $v\in A^{*}$ such that $u\to_{R}^{*}v$. This $v$ is called a _normal form_ for $u$, and forms a cross-section of the monoid $\langle A|R\rangle$, in the sense that every element of the monoid is equal to exactly one reduced word. We may therefore identify a monoid admitting a complete rewriting system with its set of normal forms, with the multiplication being concatenation followed by reduction to normal form.

### 2.2 The plactic monoid

We follow the French convention of Young diagrams having longer rows underneath shorter ones.

###### Definition 2.1. A _Semistandard Young Tableau_ (henceforth simply tableau) is a Young diagram with labelled boxes, with labels satisfying the following conditions:

* each row weakly increases left to right
* each column strictly decreases top to bottom

###### Example. The array with rows (from top to bottom) $34$, $233$, $11244$ is a tableau. The array with rows (from top to bottom) $45$, $61$, $123$ is not a tableau, since its second row is not weakly increasing.

Let $t$ be a tableau with labels taken from $A$. We associate to $t$ a _row reading_ in $A^{*}$. Suppose $t$ is a tableau of $m$ rows, labelled top to bottom as $r_{1},\dots,r_{m}$. The labels of the boxes in each row form a weakly increasing sequence, which can be viewed as a word $r_{i}\in A^{*}$. The row reading of $t$ is then $w=r_{1}r_{2}\dots r_{m}\in A^{*}$. We similarly associate a _column reading_ to $t$. Denote the columns of $t$ from left to right by $c_{1},\dots,c_{m}$. Each such column corresponds to a strictly decreasing sequence $c_{i}\in A^{*}$. The column reading of $t$ is then $w=c_{1}\dots c_{m}\in A^{*}$.

###### Example. The tableau $t$ with rows (from top to bottom) $3$, $23$, $11222$ has row reading $32311222=3\ 23\ 11222$ and column reading $32131222=321\ 31\ 2\ 2\ 2$.

We now describe Schensted's algorithm. Consider $A=[n]$ the totally ordered alphabet, and $w\in A^{*}$. We may view $w$ as a finite sequence of numbers. Schensted's algorithm is used to study the longest increasing and decreasing subsequences of $w$. The algorithm associates a tableau to $w$ with the property that the number of columns of the tableau is the length of the longest _weakly increasing_ subsequence of $w$, and the number of rows is the length of the longest _strictly decreasing_ subsequence. See [41] or chapter 5 of [31] for more details on this combinatorial structure.

###### Definition 2.2 (Schensted's algorithm). We define $P:A^{*}\to A^{*}$ to be the map sending a word $w$ to the row reading of a tableau recursively as follows: Firstly, $P(\varepsilon)=\varepsilon$. Then suppose $w=x_{1}\dots x_{\ell}\in A^{*}$ and $P(x_{1}\dots x_{\ell-1})=r_{1}\dots r_{m}$ for some rows $r_{i}$ that form the row reading of a tableau. Then we have:

1. If $r_{m}x_{\ell}$ is a row, then we set $P(r_{1}\dots r_{m}x_{\ell})=r_{1}\dots r_{m}x_{\ell}$.
2. If not, then we can write $r_{m}=r_{\alpha}yr_{\beta}$, with $y$ being the leftmost letter such that $x_{\ell}<y$, as this letter would break the weakly increasing property required of a row. But then $r_{\alpha}x_{\ell}r_{\beta}$ will be a row.
So we set $P(r_{1}\dots r_{m}x_{\ell})=P(r_{1}\dots r_{m-1}y)r_{\alpha}x_{\ell}r_{\beta}$. We call the process in point (2) 'bumping the letter $y$'. If $t$ has row reading $r_{1}\dots r_{m}$ and column reading $c_{1}\dots c_{k}$, then it is straightforward to show that $P(r_{1}\dots r_{m})=P(c_{1}\dots c_{k})=r_{1}\dots r_{m}$.

###### Definition 2.3 (The plactic monoid). The relation $\sim$ on $A^{*}$ given by $u\sim v\iff P(u)=P(v)$ is a semigroup congruence, and the monoid $A^{*}/\sim$ with multiplication given by $u\cdot v=P(uv)$ is called the _plactic monoid of rank $n$_. Denote this monoid by $P_{n}$.

Knuth [22] exhibited a set of defining relations $K$ for the plactic monoids of the form $xzy=zxy$ and $yxz=yzx$ for $x<y<z,\ x,y,z\in A$ and $xyx=yxx$ and $yyx=yxy$ for $x<y,\ x,y\in A$. That is, we have $K=\\{xzy=zxy,\ x\leq y<z\\}\cup\\{yxz=yzx,\ x<y\leq z\\}$ with $P_{n}=\langle A|K\rangle$. For each finite rank, it follows that $P_{n}$ will be finitely presented.

It was shown by Cain, Gray, and Malheiro in [5] that the plactic monoid admits a finite complete rewriting system, which we describe here. We consider two columns $\alpha,\beta$ as words in $A^{*}$. We say that $\alpha$ and $\beta$ are _compatible_, written $\alpha\succeq\beta$, if $\alpha\beta$ is the column reading of a tableau. Then each pair $\alpha,\beta$ with $\alpha\nsucceq\beta$ yields a rewrite rule. Consider the tableau associated to $P(\alpha\beta)$. Since the number of columns in $P(\alpha\beta)$ is the length of the longest weakly increasing subsequence, and $\alpha,\beta$ are columns, it follows that $P(\alpha\beta)$ will be a tableau with at most two columns. Therefore this tableau will have column reading $\gamma\delta$, for some columns $\gamma,\delta$ with $\gamma\succeq\delta$, and potentially $\delta=\varepsilon$. Now consider $\mathcal{C}=\\{c_{\alpha}\ |\ \alpha\in A^{*},\alpha\text{ is a column}\\}$ to be a set of symbols corresponding to columns in $A^{*}$. Since $A$ is finite and columns are strictly decreasing sequences, $\mathcal{C}$ is also finite. Then define $R$ to be the set of all rewrite rules detailed above: $R=\\{c_{\alpha}c_{\beta}\to c_{\gamma}c_{\delta}\ |\ \alpha,\beta\in A^{*},\ \alpha\nsucceq\beta\\}$. It is shown in [5] that

###### Lemma 2.4. $(\mathcal{C},R)$ is a complete rewriting system for $P_{n}$.

It follows from this that $P_{n}$ admits normal forms as reduced words in $\mathcal{C}^{*}$. By the definition of $\succeq$, this normal form will be in the form of column readings $c_{\alpha_{1}}\dots c_{\alpha_{m}}$ with each $\alpha_{i}\succeq\alpha_{i+1}$. Note that if $\alpha=\alpha_{m}\dots\alpha_{1}$ and $\beta=\beta_{p}\dots\beta_{1}$, $\alpha_{i},\beta_{i}\in A$, appear in the column reading of the same tableau (not necessarily adjacent) with $\alpha$ further left than $\beta$, then $\alpha\succeq\beta$. Indeed, since $\alpha$ and $\beta$ are columns of the same tableau, by the structure of a tableau we have that $m\geq p$. Furthermore, each pair $\alpha_{i},\beta_{i}$ will be in the same row of the tableau, with $\alpha_{i}$ appearing earlier than $\beta_{i}$. This will imply that $\alpha_{i}\leq\beta_{i}$. But these two conditions imply that $\alpha\succeq\beta$. Thus $\succeq$ is a partial order.
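As a concrete illustration of Schensted insertion (Definition 2.2), here is a minimal Python sketch; it is an aside rather than part of the paper, and the choice to store rows with the bottom row first is purely an implementation convenience.

```python
def insert(rows, x):
    """Schensted row insertion of the letter x into a tableau.

    rows is a list of rows, bottom row first (French convention); each row is a
    weakly increasing list of letters.  The function mutates rows in place."""
    for row in rows:
        for i, y in enumerate(row):
            if y > x:                 # y is the leftmost letter bigger than x
                row[i], x = x, y      # replace it by x and bump y upwards
                break
        else:
            row.append(x)             # x extends this row; insertion stops
            return rows
    rows.append([x])                  # every row bumped: start a new top row
    return rows

def P(word):
    """Return the tableau P(word) as a list of rows, bottom row first."""
    rows = []
    for x in word:
        insert(rows, x)
    return rows

print(P([3, 2, 3, 1, 1, 2, 2, 2]))    # [[1, 1, 2, 2, 2], [2, 3], [3]]
# Reading the rows from the top down recovers the row reading 3 23 11222
# of the example tableau above, as expected.
```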
We now introduce a length-decreasing-lexicographic order on $\mathcal{C}$ extending $\succeq$. For $c_{\alpha},c_{\beta}\in\mathcal{C}$, define: $c_{\alpha}\sqsubseteq c_{\beta}\iff\left(|\alpha|>|\beta|\right)\lor\left(|\alpha|=|\beta|\wedge\left(\exists j:\ \left(\forall i<j:\ \alpha_{i}=\beta_{i}\right)\wedge\ \alpha_{j}<\beta_{j}\right)\right)$ with $j$ taken as $n+1$ when $c_{\alpha}=c_{\beta}$. Note that $c_{\alpha}\succeq c_{\beta}\implies c_{\alpha}\sqsubseteq c_{\beta}$. Furthermore, this is clearly a total order. We can therefore enumerate the set $\mathcal{C}$ as $\\{c_{1},\dots c_{k}\\}$, with $k=|\mathcal{C}|=2^{n}-1$, such that $i\leq j\implies c_{i}\sqsubseteq c_{j}$. Then, since $c_{\alpha}\succeq c_{\beta}\implies c_{\alpha}\sqsubseteq c_{\beta}$, we have that the normal forms of $P_{n}$ will have the form $c_{1}^{w_{1}}\dots c_{k}^{w_{k}}$ with $w_{i}\in\mathbb{N}$ for each $i$, and for any pair $c_{i},c_{j}$ with $i<j\land c_{i}\nsucceq c_{j}$, either $w_{i}=0$ or $w_{j}=0$. Call two columns $c_{i}$ and $c_{j}$ _incompatible_ if $i<j\land c_{i}\nsucceq c_{j}$.

###### Example. $P_{3}$ has seven columns, with column readings $321$, $21$, $31$, $32$, $1$, $2$, $3$, listed here in length-decreasing-lexicographic order. This list corresponds to symbols $c_{1},\dots,c_{7}\in\mathcal{C}$. Note that $c_{4}$ and $c_{5}$ are incompatible. $P_{3}$ is the lowest rank plactic monoid with an incompatible pair.

### 2.3 Interpretations, theories, and Presburger arithmetic

We assume familiarity with basic first order logic, and refer the reader to [17] or [34] for a more detailed introduction to model theory. We will be following the conventions of [34].

###### Definition 2.5. Let $L$ be the language of first order formulas for a given signature. Then the first order theory of an $L$-structure $\mathcal{M}$ is the set of all sentences in $L$ that hold in $\mathcal{M}$. The question of deciding a first order theory asks for an algorithm which, given a first order sentence $\phi$, determines whether $\phi$ is true or false in $\mathcal{M}$ in finite time.

The language of interest in this paper is the language of monoids, whose signature is $(\circ,\varepsilon)$. To speak of the first order theory of a given monoid, one classically allows atomic formulas of the form $u=v$ for each $u,v\in\mathcal{M}$. In the finitely generated case (with generating set $A=\\{a_{1},\dots,a_{n}\\}$, say) this is equivalent to adding constants $a_{1},\dots,a_{n}$ to the signature, and considering the first order theory with constants of $(\mathcal{M},\circ,\varepsilon,a_{1},\dots,a_{n})$. In our case, we refer to the first order theory of a plactic monoid of rank $n$, which will have constants $1,\dots,n$ added to the language of monoids. Write $FOTh(P_{n})$ as shorthand for the first order theory of $P_{n}$ with constants.

We aim in the following sections to build an interpretation of $P_{n}$ in Presburger arithmetic, which will allow $\varphi\in FOTh(P_{n})$ to be reduced to a sentence $\tilde{\varphi}$ of Presburger arithmetic. A _reduction_ of a decision problem $D_{1}$ to another decision problem $D_{2}$ is a Turing machine which, given finitely many queries to an oracle for $D_{2}$, will yield an algorithm for deciding $D_{1}$. Importantly, this means that decidability of $D_{2}$ will imply decidability of $D_{1}$, as such an oracle machine will exist and halt in finite time on each query.

Presburger arithmetic is named after Mojżesz Presburger, who in 1929 was tasked with studying the decidability of the integers under addition.
In his master's thesis [38], he used quantifier elimination and reasoning about arithmetic congruences to prove that the first order theory of $(\mathbb{N},0,1,+)$ is consistent, complete, and decidable. Note that we can add a comparison symbol $\leq$ to the signature of Presburger arithmetic without trouble, since $x\leq y$ is equivalent to the statement $\exists z:y=x+z$. This yields the following lemma:

###### Lemma 2.6. $FOTh(\mathbb{N},0,1,+,\leq)$ is decidable.

This result will form the bedrock of the following argument. For an English translation of Presburger's work, see [46]. For the definition of an interpretation, we proceed as in section 1.3 of [34].

###### Definition 2.7. Let $\mathcal{M}$ be an $L$-structure. A set $S\subseteq\mathcal{M}^{n}$ is called _definable in $\mathcal{M}$_ if there is a first order formula $\phi(x_{1},\dots,x_{n},y_{1},\dots,y_{m})\in L$ with free variables $x_{1},\dots,x_{n},y_{1},\dots,y_{m}$ such that there exists $(w_{1},\dots,w_{m})\in\mathcal{M}^{m}$ with the property that $\phi(x_{1},\dots,x_{n},w_{1},\dots,w_{m})$ holds if and only if $(x_{1},\dots,x_{n})\in S$, i.e. $S$ is the set $\\{\underline{x}\in\mathcal{M}^{n}|\ \mathcal{M}\models\phi(\underline{x},w_{1},\dots,w_{m})\\}$.

###### Example. The set of elements that commute with a given $m\in\mathcal{M}$ is definable by the formula $xm=mx$. The centre of $\mathcal{M}$ is definable by the formula $\forall m:xm=mx$. If $\mathcal{M}$ is finitely generated by $\\{m_{1},\dots m_{k}\\}$, then the formula $xm_{1}=m_{1}x\land xm_{2}=m_{2}x\land\dots\land xm_{k}=m_{k}x$ defines the centre of $\mathcal{M}$. This has the property of being a _positive existential_ formula, which is useful in the study of Diophantine equations (see, for example, [9]).

###### Definition 2.8. A function $f:\mathcal{M}^{m}\to\mathcal{M}^{n}$ is definable in $\mathcal{M}$ if its graph is definable as a subset of $\mathcal{M}^{m+n}$.

Note that the composition of definable functions is definable.

###### Definition 2.9. Let $\mathcal{M}$ be an $L_{1}$-structure, and $\mathcal{N}$ be an $L_{2}$-structure. Then we call $\mathcal{N}$ _interpretable_ in $\mathcal{M}$ if there exist some $n\in\mathbb{N}$, some set $S\subseteq\mathcal{M}^{n}$, and a bijection $\phi:S\to\mathcal{N}$ such that

1. $S$ is definable in $\mathcal{M}$
2. For every $\sigma$ in the signature of $L_{2}$, including the equality relation, the preimage by $\phi$ of the graph of $\sigma$ is definable in $\mathcal{M}$

Since we will only be dealing with the case of a monoid, point 2 reduces to checking the preimages of equality $\phi^{-1}(=)$ and multiplication $\phi^{-1}(\cdot)$, with the latter being the set of triples $(a,b,c)\in S^{3}$ such that $\phi(a)\cdot\phi(b)=\phi(c)$. Note that in the above definition we insisted the map $\phi$ be a bijection, as in section 1.3 of [34]. The interpretation we will build will be a bijection. However, the most general theory of interpretations works with surjections from $S$ onto $\mathcal{N}$. See section 5 of [17] for more information. The following result will prove fundamental, and is a consequence of theorem 5.3.2 and its remarks in [17]:

###### Proposition 2.10. Suppose $L_{1}$ and $L_{2}$ are languages, with $M_{1}$ and $M_{2}$ being $L_{1}$- and $L_{2}$-structures, respectively. Suppose $M_{1}$ is interpretable in $M_{2}$. Then the problem of deciding $FOTh(M_{1})$ is reducible to the problem of deciding $FOTh(M_{2})$.
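In practice, Lemma 2.6 is also witnessed by off-the-shelf tools: SMT solvers implement decision procedures for linear integer arithmetic, which contains Presburger arithmetic. As a small aside (not used anywhere in the argument), the following Python sketch checks a universally quantified Presburger sentence with the z3 solver by refuting its negation; the sentence chosen is simply the transitivity of $\leq$.

```python
from z3 import Ints, And, Not, Solver, unsat

# Decide the Presburger sentence "forall a, b, c: a <= b and b <= c implies a <= c".
# A universally quantified sentence holds in (Z, 0, 1, +, <=) exactly when its
# negation, which is quantifier-free over fresh integer variables, is unsatisfiable.
a, b, c = Ints('a b c')
solver = Solver()
solver.add(And(a <= b, b <= c, Not(a <= c)))   # the negated sentence
print(solver.check() == unsat)                 # True: the sentence is in the theory
```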
Next we define the notion of bi-interpretability, which will be the subject of section 3.2. By the definition of interpretations, it is straightforward to see that interpretations are transitive: if $M_{1}$ is interpretable in $M_{2}$, and $M_{2}$ is interpretable in $M_{3}$, then $M_{1}$ is interpretable in $M_{3}$. This implies that if two structures are _mutually interpretable_, i.e. $M_{1}$ and $M_{2}$ are each interpretable in the other, then we obtain an interpretation of $M_{1}$ in itself, and likewise an interpretation of $M_{2}$ in itself.

###### Definition 2.11. Given $M_{1}$ an $L_{1}$-structure, and $M_{2}$ an $L_{2}$-structure, we say $M_{1}$ and $M_{2}$ are _bi-interpretable_ if $M_{1}$ and $M_{2}$ are mutually interpretable, and the map $\phi_{i}$ interpreting $M_{i}$ in itself is definable in $M_{i}$, for $i=1,2$.

###### Example. Presburger arithmetic is commonly expressed as $(\mathbb{N},0,1,+,\leq)$ and $(\mathbb{Z},0,1,+,-,\leq)$. These two models are bi-interpretable. The identity map $\phi$ on $\mathbb{N}\subset\mathbb{Z}$ definable by $0\leq x$ interprets $(\mathbb{N},0,1,+,\leq)$ in $(\mathbb{Z},0,1,+,-,\leq)$, and the map $\psi:\mathbb{N}^{2}\to\mathbb{Z}$ with $\psi(a,b)=a-b$ interprets $(\mathbb{Z},0,1,+,-,\leq)$ in $(\mathbb{N},0,1,+,\leq)$ as a surjection. We then obtain $\phi\psi:S\to\mathbb{N}$ from the definable set $S=\\{(a,b)\ |\ a\geq b\\}\subset\mathbb{N}^{2}$, with graph $\\{(a,b,c)\ |\ a\geq b\land c+b=a\\}$ definable in $(\mathbb{N},0,1,+,\leq)$. We also obtain $\psi\phi^{2}:T\to\mathbb{Z}$ with $T=\mathbb{N}^{2}$ definable in $\mathbb{Z}^{2}$ by $0\leq x\land 0\leq y$, whose graph $\\{(a,b,c)|\ 0\leq a\land 0\leq b\land c=a-b\\}$ is definable in $(\mathbb{Z},0,1,+,-,\leq)$. We will use these two models of Presburger arithmetic interchangeably.

## 3 The case $n=2$

### 3.1 Interpreting $P_{2}$ in $(\mathbb{Z},0,1,+,-,\leq)$

First, we will explicitly treat the case $n=2$ of tableaux on two letters. Such tableaux have three possible columns, with column readings $21$, $1$ and $2$. Denote by $t$ the word $21$ in $A^{*}$. Then by abuse of notation $\mathcal{C}^{*}=\\{t,1,2\\}^{*}$, and our rewriting system becomes: $R=\\{21\to t\ ,\ 2t\to t2\ ,\ 1t\to t1\\}$. By the two commutativity rules, and the fact that any factor $21$ would not appear in a reduced word, we can write any reduced word $w\in(\mathcal{C},R)$ as some $t^{w_{1}}1^{w_{2}}2^{w_{3}}$. Thus, by completeness of the rewriting system, each element of $P_{2}$ corresponds to a triple $(w_{1},w_{2},w_{3})\in\mathbb{N}^{3}$, associated to a normal form $t^{w_{1}}1^{w_{2}}2^{w_{3}}$. Likewise, each such triple corresponds to a tableau, hence an element of $P_{2}$. Consider the map $\phi:S\to P_{2}$, where $S=\mathbb{N}^{3}\subset\mathbb{Z}^{3}$ is definable by the formula $(0\leq x_{1})\land(0\leq x_{2})\land(0\leq x_{3})$ and $\phi(x_{1},x_{2},x_{3})=t^{x_{1}}1^{x_{2}}2^{x_{3}}$.
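As a quick illustration of this normal form (a throwaway sketch, not part of the proof), the rewriting rules $R$ can be applied naively to any word over $\\{t,1,2\\}$; because the system is complete, any rewriting strategy reaches the same reduced word, from which the exponent triple can be read off.

```python
# Rules of the complete rewriting system for P_2 (t denotes the column 21).
RULES = [("21", "t"), ("2t", "t2"), ("1t", "t1")]

def normal_form(word):
    """Reduce a word over {t, 1, 2} to its normal form t^w1 1^w2 2^w3."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            if lhs in word:
                word = word.replace(lhs, rhs, 1)   # apply one rewrite step
                changed = True
                break
    return word

def exponents(word):
    """The exponent triple (w1, w2, w3) of the normal form of the given word."""
    w = normal_form(word)
    return (w.count("t"), w.count("1"), w.count("2"))

print(normal_form("122121"))   # tt12
print(exponents("122121"))     # (2, 1, 1)
```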
The map $\phi$ is a bijection from a definable set in $(\mathbb{Z},0,1,+,-,\leq)$, and the preimage of the graph of equality will be

$\phi^{-1}(=)=\left\\{(a_{1},a_{2},a_{3},b_{1},b_{2},b_{3})\in\mathbb{N}^{6}\ |\ t^{a_{1}}1^{a_{2}}2^{a_{3}}=t^{b_{1}}1^{b_{2}}2^{b_{3}}\right\\}=\left\\{(a_{1},a_{2},a_{3},b_{1},b_{2},b_{3})\in\mathbb{N}^{6}\ |\ a_{1}=b_{1},\ a_{2}=b_{2},\ a_{3}=b_{3}\right\\}\subset\mathbb{Z}^{6}$

which is definable by the formula $(\underline{a}\in S)\land(\underline{b}\in S)\land\bigwedge\limits_{i\in[3]}(a_{i}=b_{i})$.

We check the preimage of the graph of multiplication

$\phi^{-1}(\circ)=\left\\{(\underline{a},\underline{b},\underline{c})\in S^{3}\ |\ t^{a_{1}}1^{a_{2}}2^{a_{3}}t^{b_{1}}1^{b_{2}}2^{b_{3}}=t^{c_{1}}1^{c_{2}}2^{c_{3}}\right\\}$

Explicitly checking the multiplication yields

$t^{a_{1}}1^{a_{2}}2^{a_{3}}t^{b_{1}}1^{b_{2}}2^{b_{3}}=t^{a_{1}+b_{1}}1^{a_{2}}2^{a_{3}}1^{b_{2}}2^{b_{3}}=\begin{cases}t^{a_{1}+b_{1}+a_{3}}1^{a_{2}+b_{2}-a_{3}}2^{b_{3}},\ a_{3}\leq b_{2}\\ t^{a_{1}+b_{1}+b_{2}}1^{a_{2}}2^{b_{3}+a_{3}-b_{2}},\ b_{2}\leq a_{3}\end{cases}$

Thus we get the following formula for $\phi^{-1}(\circ)$:

$(\underline{a}\in S)\land(\underline{b}\in S)\land(\underline{c}\in S)\land[(a_{3}\leq b_{2}\land c_{1}=a_{1}+b_{1}+a_{3}\land c_{2}=a_{2}+b_{2}-a_{3}\land c_{3}=b_{3})\lor(b_{2}\leq a_{3}\land c_{1}=a_{1}+b_{1}+b_{2}\land c_{2}=a_{2}\land c_{3}=b_{3}+a_{3}-b_{2})]$

It follows then that $\phi$ is an interpretation of $P_{2}$ in Presburger arithmetic. This yields the following result.

###### Theorem 1. $P_{2}$ has decidable first order theory.

###### Proof. Since $\phi$ above is an interpretation of $P_{2}$ in $(\mathbb{Z},0,1,+,-,\leq)$, every first order formula of $P_{2}$ is interpreted as a first order formula of Presburger arithmetic, which is decidable by Lemma 2.6. ∎

Note that this argument is closely related to the proof that the bicyclic monoid $B=\langle a,b\ |\ ba=\varepsilon\rangle$ has decidable first order theory (see section 2.4 of [11]). Indeed, the map $\psi:P_{2}\to B$ sending 1 to $a$, 2 to $b$, and $t$ to $\varepsilon$ is a monoid homomorphism, and $\psi\circ\phi:S\to B$ is an interpretation of the bicyclic monoid in Presburger arithmetic.
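As a sanity check (an aside, not part of the proof), the two-case formula above can be verified against direct Schensted multiplication on small exponents, reusing the insertion function `P` from the sketch in section 2.2. The helper names below are ad hoc.

```python
def spell(a1, a2, a3):
    # the normal form t^a1 1^a2 2^a3 spelled out as a word over {1, 2}, with t = 21
    return [2, 1] * a1 + [1] * a2 + [2] * a3

def read_exponents(rows):
    # read (w1, w2, w3) off a two-letter tableau given as rows, bottom row first
    bottom = rows[0] if rows else []
    top = rows[1] if len(rows) > 1 else []
    return (len(top), bottom.count(1) - len(top), bottom.count(2))

def product_formula(a, b):
    a1, a2, a3 = a
    b1, b2, b3 = b
    if a3 <= b2:
        return (a1 + b1 + a3, a2 + b2 - a3, b3)
    return (a1 + b1 + b2, a2, b3 + a3 - b2)

triples = [(i, j, k) for i in range(3) for j in range(3) for k in range(3)]
assert all(read_exponents(P(spell(*a) + spell(*b))) == product_formula(a, b)
           for a in triples for b in triples)
print("closed form agrees with Schensted multiplication on all small cases")
```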
### 3.2 Bi-interpretability

The centre of $P_{2}$, $Z(P_{2})=\\{t^{n}|\ n\in\mathbb{N}\\}$, is a subset of $P_{2}$ which is isomorphic to $(\mathbb{N},0,1,+,\leq)$, with $a\leq b$ in $\mathbb{N}$ corresponding to $\exists y:t^{a}y=t^{b}$, and addition corresponding to monoid multiplication. The centre is definable in $P_{2}$ via the formula $x1=1x\land x2=2x$, since $P_{2}$ is finitely generated. We can therefore take $\psi:Z(P_{2})\to\mathbb{N}$ to be an interpreting map of $(\mathbb{N},0,1,+,\leq)$ in $P_{2}$.

###### Proposition 3.1. $P_{2}$ and Presburger arithmetic are bi-interpretable.

###### Proof. Firstly, note that $\phi:\mathbb{N}^{3}\to P_{2}$ defined as above is also an interpreting map of $P_{2}$ into $(\mathbb{N},0,1,+,\leq)$, by replacing any subtraction formulas $a=b-c$ with $a+c=b$. Let $\psi:Z(P_{2})\to\mathbb{N}$ be the interpretation described above. Then $\psi\phi:S\to\mathbb{N}$, where $S$ is the definable subset $\\{(n,0,0)\ |\ n\in\mathbb{N}\\}\subset\mathbb{N}^{3}$, is the isomorphism sending $(n,0,0)$ to $n$, which is clearly definable. Consider $\phi\psi:Z(P_{2})^{3}\to P_{2}$, sending $(t^{a},t^{b},t^{c})$ to $w=t^{a}1^{b}2^{c}$. We have that the sets $S_{1}=\\{1^{n}|\ n\in\mathbb{N}\\}$ and $S_{2}=\\{2^{n}|\ n\in\mathbb{N}\\}$ are definable in $P_{2}$ as follows: First, let $C_{1}=\\{t^{a}1^{b}|\ a,b\in\mathbb{N}\\}\subset P_{2}$ be the set of elements satisfying $\exists y:yx\in Z(P_{2})$; then $S_{1}$ will be the set of elements satisfying $x\in C_{1}\land\forall y\forall z:y\in Z(P_{2})\land x=yz\implies y=\varepsilon$. Likewise, for $C_{2}=\\{t^{a}2^{b}|\ a,b\in\mathbb{N}\\}\subset P_{2}$ defined by $\exists y:xy\in Z(P_{2})$, we have that the following formula defines $S_{2}$: $x\in C_{2}\land\forall y\forall z:y\in Z(P_{2})\land x=yz\implies y=\varepsilon$. These formulas state that if we can write $x$ as $t^{a}1^{b}$ (respectively $t^{a}2^{b}$) then we must have $a=0$, meaning $x$ must belong to $S_{1}$ (respectively $S_{2}$). Given $t^{b}$ we can define $x=1^{b}$ by $\exists z:x\in S_{1}\land z\in S_{2}\land zx=t^{b}$. Likewise, given $t^{c}$ we can define $y=2^{c}$ by $\exists z:z\in S_{1}\land y\in S_{2}\land yz=t^{c}$. Then we have that $\phi\psi(t^{a},t^{b},t^{c})=t^{a}xy$, which is definable. ∎

## 4 The general case

Throughout this section, let $k=|\mathcal{C}|=2^{n}-1$. Index $\mathcal{C}$ by $i\in[k]$ with $i<j\iff c_{i}\sqsubset c_{j}$. Let $S\subseteq\mathbb{N}^{k}$ be the set of all $(v_{1},\dots,v_{k})$ such that $c_{1}^{v_{1}}\dots c_{k}^{v_{k}}$ is the normal form of a tableau, and let $\phi:S\to P_{n}$ be the natural bijection. The normal form of any tableau will obey compatibility conditions: for each pair $(a,b)\in[k]\times[k]$ such that $a<b$ and $c_{a}\nsucceq c_{b}$, we have that either $v_{a}=0$ or $v_{b}=0$. Let $I\subset[k]\times[k]$ be the set of all such pairs. Then $S\subset\mathbb{Z}^{k}$ is defined by the formula $\bigwedge\limits_{i\in[k]}(0\leq x_{i})\land\bigwedge\limits_{(a,b)\in I}\left[(x_{a}=0)\lor(x_{b}=0)\right]$. We claim that $\phi$ is an interpreting map of $P_{n}$ in Presburger arithmetic. Again, we check the diagonal: $\phi^{-1}(=)=\\{(\underline{a},\underline{b})\in S^{2}\ |\ \phi(\underline{a})=\phi(\underline{b})\\}$, which is definable by $(\underline{a}\in S)\land(\underline{b}\in S)\land\bigwedge\limits_{i\in[k]}(a_{i}=b_{i})$ as in the $n=2$ case. It remains to check whether the preimage of the multiplication graph $\phi^{-1}(\circ)=\left\\{(\underline{a},\underline{b},\underline{c})\in S^{3}\ |\ \phi(\underline{a})\phi(\underline{b})=\phi(\underline{c})\right\\}\subset\mathbb{Z}^{3k}$ is definable.
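The set $I$ of incompatible pairs is easy to compute mechanically for a given rank. The sketch below (an aside, not part of the argument) enumerates $\mathcal{C}$ in the order $\sqsubseteq$ and detects incompatible pairs by running Schensted insertion (the function `P` from the sketch in section 2.2) on the concatenated column readings.

```python
from itertools import combinations

def columns(n):
    """Columns of P_n as decreasing tuples, in the length-decreasing-lex order."""
    cols = []
    for size in range(n, 0, -1):                           # longer columns first
        block = [tuple(reversed(c)) for c in combinations(range(1, n + 1), size)]
        cols.extend(sorted(block))                         # then lexicographically
    return cols

def to_columns(rows):
    """Columns of a tableau (rows given bottom first), each read top to bottom."""
    width = len(rows[0]) if rows else 0
    return [tuple(row[m] for row in reversed(rows) if m < len(row)) for m in range(width)]

def incompatible_pairs(n):
    cols = columns(n)
    return [(i + 1, j + 1)                                 # 1-based indices, as in the paper
            for i in range(len(cols)) for j in range(len(cols))
            if i < j and to_columns(P(list(cols[i]) + list(cols[j]))) != [cols[i], cols[j]]]

print(columns(3))             # [(3, 2, 1), (2, 1), (3, 1), (3, 2), (1,), (2,), (3,)]
print(incompatible_pairs(3))  # [(4, 5)] : the pair c_4 = 32, c_5 = 1 from section 2.2
```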
### 4.1 Multiplication – the idea

Using the previous section as a base case, we will proceed with the induction hypothesis that, for each $2\leq i\leq n-1$, we have a formula $\eta_{i}$ in Presburger arithmetic defining multiplication in $P_{i}$. We first consider the structure of multiplication in $P_{n}$. The recursive nature of Schensted's algorithm yields a characterisation of multiplication in $P_{n}$ via bottom rows and top tableaux.

###### Definition 4.1. We call a tableau $t\in P_{n}$ a _top tableau_ if its row reading is a word over $\\{2,\dots,n\\}^{*}$, i.e. there are no 1's appearing in the tableau word representing $t$.

Note that each $u\in P_{n}$ will have an associated top tableau – if $u=r_{1}\dots r_{l}$, then $r_{1}\dots r_{l-1}$ will be a top tableau. For $u,v\in P_{n}$, the product $uv$ will be computed by first running an insertion algorithm into the bottom row of $u$, and then inserting any bumped letters into the top tableau associated to $u$. We will make this idea more precise.

###### Definition 4.2. Define the following maps:

1. The top map $T:P_{n}\to P_{n}$ maps an element $w$ with row form $r_{1}\dots r_{l}$ in $A^{*}$ to its corresponding top tableau $T(w)=r_{1}\dots r_{l-1}$.
2. The bottom map $B:P_{n}\to P_{n}$ maps an element $w$ as above to its bottom row $B(w)=r_{l}$.

###### Example. Let $t$ be the tableau with rows (from top to bottom) $34$, $233$, $11244$. Then $T(t)=34\ 233$ and $B(t)=11244$.

For $u,v\in P_{n}$, by the structure of Schensted's algorithm, the product $uv$ will run an insertion algorithm first into $B(u)$, followed by any letters that are bumped being inserted into $T(u)$. This yields the following characterisation of the top and bottom of the product: $T(uv)=T(u)T(B(u)v)$ and $B(uv)=B(B(u)v)$, where equality is taken to mean equality in $P_{n}$, not equality of words. Note that the set of top tableaux, which is equivalently the image of $T$, is a submonoid isomorphic to $P_{n-1}$ over the alphabet $\\{2,\dots,n\\}$. Thus the product of top tableaux will be definable via $\eta_{n-1}$. Therefore, if we can define the row $B(uv)$, and a way of stitching $T(uv)$ and $B(uv)$ into one tableau $uv$, we will obtain a formula $\eta_{n}$ defining multiplication in $P_{n}$.

###### Definition 4.3. The stitch map $\Sigma:P_{n}\times P_{n}\to P_{n}$ is defined as follows. For $u\in P_{n}$ a top tableau with row reading $r_{1}\dots r_{n}\in A^{*}$ and $v$ a row with row reading $r_{v}\in A^{*}$, $\Sigma(u,v)=uv$ if $r_{1}\dots r_{n}r_{v}$ is the row reading of a tableau. Otherwise, $\Sigma(u,v)=\varepsilon$. If $\Sigma$ has nontrivial output, we call $u$ and $v$ _compatible_, and $uv$ the "stitched" tableau.

###### Example. Suppose $u=43322234$ and $v=11113$. Then $\Sigma(u,v)=uv$, corresponding to the tableau with rows (from top to bottom) $4$, $33$, $22234$, $11113$.

Note that $\Sigma(T(u),B(u))=u$. We can thus characterise multiplication via the above maps as follows: $uv=\Sigma(T(uv),B(uv))=\Sigma(T(u)T(B(u)v),B(B(u)v))$.

Let us consider the structure of $w\in P_{n}$ as a word in normal form in $\mathcal{C}^{*}$. We have that $w=c_{1}^{w_{1}}c_{2}^{w_{2}}\dots c_{k}^{w_{k}}$ for some $w_{i}\in\mathbb{N}$ for each $i$, satisfying some compatibility conditions. But consider now each block $c_{i}^{m}$ for some $m\in\mathbb{N}$ as a tableau word in row form in the presentation $\langle A|K\rangle$. Then for $c_{i}=x_{1}x_{2}\dots x_{r}$ in row form in $A^{*}$, we have that $c_{i}^{m}=x_{1}^{m}x_{2}^{m}\dots x_{r}^{m}$ in row form in $A^{*}$. For each $c_{i}$, this row form is unique, since each column corresponds to a unique decreasing sequence in $A^{*}$. Define $\alpha$ to be the finite sequence of letters in $A$ which first outputs in order the letters in the row form of $c_{1}$, then the letters of the row form of $c_{2}$, and so on. We also define $\beta$ to be the finite sequence, taking values in $[k]$, with $\beta_{i}=j$ when $\alpha_{i}$ is a letter from column $c_{j}$.

###### Example. The seven columns of $P_{3}$, with column readings $321$, $21$, $31$, $32$, $1$, $2$, $3$, yield $\alpha=3,2,1,2,1,3,1,3,2,1,2,3$ and $\beta=1,1,1,2,2,3,3,4,4,5,6,7$.

Denote the length of $\alpha$ and $\beta$ by $\ell$, which is fixed for any choice of $n$. Then we can write $w=\alpha_{1}^{w_{\beta_{1}}}\dots\alpha_{\ell}^{w_{\beta_{\ell}}}$, where $w_{\beta_{i}}$ is the coefficient of column $c_{\beta_{i}}$ in the normal form of $w$.
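The sequences $\alpha$ and $\beta$ can be generated directly from the column enumeration; the following short sketch (again an aside, reusing `columns` from the previous sketch) reproduces the $P_{3}$ example above.

```python
def alpha_beta(n):
    """The sequences alpha (letters) and beta (column indices) of section 4.1."""
    alpha, beta = [], []
    for idx, col in enumerate(columns(n), start=1):
        alpha.extend(col)              # the row form of c_idx lists its letters top to bottom
        beta.extend([idx] * len(col))
    return alpha, beta

print(alpha_beta(3))
# ([3, 2, 1, 2, 1, 3, 1, 3, 2, 1, 2, 3], [1, 1, 1, 2, 2, 3, 3, 4, 4, 5, 6, 7])
```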
###### Lemma 4.4. Consider $u,v,w\in P_{n}$. Suppose $v$ has normal form $c_{1}^{v_{1}}\dots c_{k}^{v_{k}}$. Then $w=uv$ is equivalent to the following: There exist $u_{0},u_{1},\dots u_{\ell}\in P_{n}$ such that $u_{0}=u$, $u_{\ell}=w$, and we have the recursive formula $u_{i}=u_{i-1}\alpha_{i}^{v_{\beta_{i}}}$.

This result is immediate from the structure of the insertion algorithm and the fact that $v=\alpha_{1}^{v_{\beta_{1}}}\dots\alpha_{\ell}^{v_{\beta_{\ell}}}$.

###### Corollary 4.5. $\phi^{-1}(\circ)$ is definable if the maps $\mu_{x}:\mathbb{N}\times S\to S$ such that $\phi(\mu_{x}(m,\underline{a}))=\phi(\underline{a})x^{m}$ are definable for each $x\in A$.

###### Proof. By lemma 4.4, given $\underline{a},\underline{b}\in S$, we have that $\phi(\underline{c})=\phi(\underline{a})\phi(\underline{b})$ if and only if there is some $\underline{c}^{0},\dots,\underline{c}^{\ell}$ such that $\underline{c}^{0}=\underline{a},\ \underline{c}=\underline{c}^{\ell}$, and $\underline{c}^{i}=\mu_{\alpha_{i}}(b_{\beta_{i}},\underline{c}^{i-1})$, so the preimage of the graph of multiplication is a composition of finitely many applications of $\mu_{x}$, which will be definable if each $\mu_{x}$ is definable. ∎

### 4.2 The formula defining $\mu_{x}$

Henceforth, $x$ is a fixed letter in $A$. Recall that if $\underline{b}=\mu_{x}(m,\underline{a})$, then $\phi(\underline{b})=\phi(\underline{a})x^{m}=\Sigma(T(\phi(\underline{a}))T(B(\phi(\underline{a}))x^{m}),B(B(\phi(\underline{a}))x^{m}))$. So we wish to obtain a formula of Presburger arithmetic describing $\underline{b}=\phi^{-1}\Sigma(T(\phi(\underline{a}))T(B(\phi(\underline{a}))x^{m}),B(B(\phi(\underline{a}))x^{m}))$. We can break this down into a composition of several maps. First, define $\underline{a}^{1}$ and $\underline{a}^{2}$ such that $\underline{a}^{1}=\phi^{-1}T\phi(\underline{a})$ and $\underline{a}^{2}=\phi^{-1}B\phi(\underline{a})$. Next, considering $R\subset P_{n}$ the set of row words, we define two maps $\rho_{1},\rho_{2}:R\to S$ such that, for $r\in R$, $\phi(\rho_{1}(r))=T(rx^{m})$ and $\phi(\rho_{2}(r))=B(rx^{m})$. Then since $\phi(\underline{a}^{2})$ is a row, we can define $\underline{a}^{3}$ and $\underline{a}^{4}$ to be such that $\underline{a}^{3}=\rho_{1}(\phi(\underline{a}^{2}))$ and $\underline{a}^{4}=\rho_{2}(\phi(\underline{a}^{2}))$. That is, $\phi(\underline{a}^{3})=T(B(\phi(\underline{a}))x^{m})$ and $\phi(\underline{a}^{4})=B(B(\phi(\underline{a}))x^{m})$. Next, define $\underline{a}^{5}$ to be such that $\phi(\underline{a}^{5})=T(\phi(\underline{a}))T(B(\phi(\underline{a}))x^{m})=\phi(\underline{a}^{1})\phi(\underline{a}^{3})$. By our induction hypothesis, this will be definable, as the set of top tableaux is generated by a known subset of columns. Thus the coefficients in $\underline{a}^{5}$ will either be calculated by the formula $\eta_{n-1}$, or will equal zero. Finally, we have that $\underline{b}=\phi^{-1}\Sigma(\phi(\underline{a}^{5}),\phi(\underline{a}^{4}))$. Since the composition of definable maps is definable, we have that $\mu_{x}$ is definable precisely when $\phi^{-1}T\phi(\underline{a}),\phi^{-1}B\phi(\underline{a}),\rho_{1},\rho_{2},$ and $\phi^{-1}\Sigma(\phi(\ ),\phi(\ ))$ are definable. This will be the subject of the following three lemmas.

###### Lemma 4.6. The following maps are definable:

(i) $\phi^{-1}B\phi:S\to S$

(ii) $\phi^{-1}T\phi:S\to S$

###### Proof.
$(i)$ Define the finite sets $B_{a}$ for each $a\in A$ by $B_{a}=\\{j\in[k]\ |\ c_{j}=x_{m}\dots x_{1}\land x_{1}=a\\}$ which are nonempty for each $a$. Then we get that $\underline{b}=\phi^{-1}(B(\phi(\underline{a}))$ if and only if the following formula holds: $\bigwedge\limits_{i\in[k-n]}(b_{i}=0)\land\bigwedge\limits_{i\in[n]}\left(b_{k-n+i}=\sum_{j\in B_{i}}a_{j}\right)$ The first part of the formula denoting the coefficient of each column of size $\geq 2$ being zero, and the second part denoting the fact that each column $x_{m}\dots x_{1}$ in $\phi(\underline{a})$ contributes to the coefficient of the $x_{1}$ letter in the bottom row. $(ii)$ Define the similar sets $T_{i}$ for each $i\in[k]$ by $T_{i}=\\{j\in[k]\ |\ c_{j}=x_{m}\dots x_{1}\land x_{m}\dots x_{2}=c_{i}\\}$ Note that if $i\in B_{1}$, then $T_{i}=\emptyset$. Now, we have that $\underline{b}=\phi^{-1}(T(\underline{a}))$ if and only if the following formula holds: $\bigwedge\limits_{i\in[k]}\left(b_{i}=\sum_{j\in T_{i}}a_{j}\right)$ Where we take the sum over an empty indexing set to be 0. ∎ Note that the sets $T_{i}$ and $B_{a}$ can be constructed algorithmically for any given $n$. Given the set of columns as decreasing sequences, we can check membership in each $B_{a}$ by considering the minimal element of a column, and we can check membership in each $T_{i}$ by considering the column without its minimal element. Note also that the set $B_{a}$ and $T_{j}$ partition $[k]$. Next, we move on to defining the maps $\phi^{-1}T(rx^{m})$ and $\phi^{-1}B(rx^{m})$. ###### Lemma 4.7. The following maps are definable: 1. i $\rho_{1}\phi:S\to S$ 2. ii $\rho_{2}\phi:S\to S$ ###### Proof. Consider $S_{R}\subset S$ to be the subset of normal forms corresponding to rows (i.e. $S_{R}$ is the preimage of $R\subset P_{n}$). Then $S_{R}$ is a subset definable by $\bigwedge_{i\in[k-n]}x_{i}=0$ and $\rho_{1}\phi,\ \rho_{2}\phi$ will be maps from $S_{R}$ to $S_{R}$. Indeed, by [41], the number of rows after running Schensted’s algorithm on any $w\in A^{*}$ is equal to the length of the longest strictly decreasing subsequence of $w$. Now, since we map a row $r$ to $rx^{m}\in A^{*}$, and since $r$ is non-decreasing as a sequence in $A^{*}$, the longest strictly decreasing subsequence of $w=rx^{m}$ can have length at most 2. Now, write $\underline{r}=(0,\dots,0,r_{1},r_{2},\dots,r_{n})$. We will describe explicitly $\underline{c}=(0,\dots,0,c_{1},\dots,c_{n})$ and $\underline{d}~{}=~{}(0,\dots,d_{1},\dots,d_{n})$ such that $\phi(\underline{c})=T(w)$ and $\phi(\underline{d})=B(w)$. First, consider $B(w)$. In the setting $\langle A|K\rangle$, we will have $x^{m}$ inserted into $r=1^{r_{1}}2^{r_{2}}\dots(x+1)^{r_{x+1}}(x+2)^{r_{x+2}}\dots n^{r_{n}}$ It will bump $m$ letters from this row, starting at $x+1$. This means that $d_{i}=r_{i}$ for $i<x$ and $d_{x}=r_{x}+m$. We will now consider the later entries of $\underline{d}$: In the first case, suppose $m\leq r_{x+1}$. Then we will bump $m$ letters $x+1$ and replace them with letters $x$. This yields the effect that $d_{x+1}=r_{x+1}-m$ and $d_{i}=r_{i}$ for all $i>x+1$. Now, suppose $r_{x+1}\leq m\leq r_{x+1}+r_{x+2}$. Then all letters $x+1$ are bumped, as are $m-r_{x+1}$ letters $x+2$. Thus we have that $d_{x+1}=0,\ d_{x+2}=r_{x+2}+r_{x+1}-m$, and $d_{i}=r_{i}$ for all $i>x+2$. Continuing in this pattern, case $i$ then becomes $\displaystyle\sum_{j=1}^{i-1}r_{x+j}\leq m\leq\sum_{j=1}^{i}r_{x+j}$ And in this case, $d_{x+j}=0$ for each $0<j<i$. 
Also, $d_{x+i}=\sum_{j=1}^{i}r_{x+j}-m$, and all later entries remain unchanged. Suppose we are now in the case $\sum_{j=1}^{n-x}r_{x+j}\leq m$. Then we have $d_{x+j}=0$ for all $j$. Each case yields a formula in terms of $\leq$, addition, and subtraction. Then the disjunction of the above cases, which will be a finite formula, will define $\underline{d}$ such that $\phi(\underline{d})=B(w)$.

Let us now consider $T(w)$. This will be the row of bumped letters, which will mean that $c_{i}=0$ for any $i\leq x$. Now, suppose we are in case $i$ as above. Then $\sum_{j=1}^{i-1}r_{x+j}\leq m\leq\sum_{j=1}^{i}r_{x+j}$ and we will bump all letters $x+1,\dots,x+i-1$, as well as some letters $x+i$. Therefore $c_{x+j}=r_{x+j}$ for $0<j<i$, and $c_{x+i}=m-\sum_{j=1}^{i-1}r_{x+j}$. Note that the bumped row, i.e. the top row of the resulting tableau, has length exactly $m$ in this case. Now suppose we are in the case $\sum_{j=1}^{n-x}r_{x+j}\leq m$. Then we will have that $c_{x+j}=r_{x+j}$ for each $j$. Again, as above, the disjunction of all the cases yields a formula defining the graph of $\phi^{-1}T(rx^{m})$. ∎

We will now show the definability of the stitch map.

###### Lemma 4.8. The map $\phi^{-1}\Sigma(\phi(\ ),\phi(\ )):S^{2}\to S$ is definable.

###### Proof. The condition for $\Sigma$ to have nontrivial action is definable via the following formula: Suppose $\underline{a},\underline{b}\in S$. Consider the set $B_{1}=\\{i\in[k]\ |\ c_{i}=x_{m}\dots x_{1}\wedge x_{1}=1\\}$ as in lemma 4.6. Then $\phi(\underline{a})$ being a top tableau is definable by $\left(\bigwedge\limits_{i\in B_{1}}a_{i}=0\right)$. Also, $\phi(\underline{b})$ being a row is definable by the formula $\left(\bigwedge\limits_{i\in[k-n]}b_{i}=0\right)$. Now, let $e_{a}=\sum_{i\in B_{a}}a_{i}$. In order for it to be possible to stitch two inputs, we need $e_{2}\leq b_{k-n+1}$, $e_{3}\leq b_{k-n+2}+b_{k-n+1}-e_{2}$, and so on. We can rearrange this to get the following compatibility condition: $\underline{a}\in S\wedge\underline{b}\in S\wedge\left(\bigwedge\limits_{i\in B_{1}}a_{i}=0\right)\wedge\bigwedge\limits_{i\in[k-n]}(b_{i}=0)\wedge\bigwedge\limits_{i\in[n]}\left(\sum_{j=1}^{i}e_{j}\leq\sum_{j=0}^{i-1}b_{k-n+j}\right)$, where we take empty sums to be 0. Note that all sums used are finite, so we obtain a valid formula in Presburger arithmetic. Denote this compatibility formula $\gamma(\underline{a},\underline{b})$.

If $\gamma$ is satisfied, we will define the stitch recursively. We wish to construct $\underline{d}=\phi^{-1}\Sigma(\phi(\underline{a}),\phi(\underline{b}))$. When a top tableau $t$ is multiplied by a compatible bottom row, by the bumping property of Schensted's algorithm, columns of $t$ will be bumped from left to right. Therefore, we must construct each coefficient $d_{1},\dots,d_{k}$ in order, with columns of $\underline{a}$ and $\underline{b}$ being 'used up' in a sense. Suppose we have calculated the coefficients $d_{1},\dots,d_{i-1}$, and have obtained modified elements $\underline{a}^{i-1}$ and $\underline{b}^{i-1}$ representing all columns that have not yet been used up in a stitch. We calculate the coefficient $d_{i}$ as follows: $d_{i}$ is the coefficient of $c_{i}\in\mathcal{C}$, which we can write as $c_{i}=x_{m}\dots x_{1}\in A^{*}$. Then $x_{m}\dots x_{2}$ and $x_{1}$ are two columns, which we denote respectively by $c_{i_{T}}$ and $c_{i_{B}}$ in $\mathcal{C}$.
By the structure of Schensted's algorithm it is straightforward to see that $d_{i}=\min(a_{i_{T}}^{i-1},b_{i_{B}}^{i-1})$, which is definable in Presburger arithmetic by $(a^{i-1}_{i_{T}}\leq b^{i-1}_{i_{B}}\land d_{i}=a^{i-1}_{i_{T}})\lor(b^{i-1}_{i_{B}}\leq a^{i-1}_{i_{T}}\land d_{i}=b^{i-1}_{i_{B}})$. Now define $a^{i}_{j}=a^{i-1}_{j}$ for $j\neq i_{T}$, and $a^{i}_{i_{T}}=a^{i-1}_{i_{T}}-d_{i}$. Likewise, define $b^{i}_{j}=b^{i-1}_{j}$ for $j\neq i_{B}$ and $b^{i}_{i_{B}}=b^{i-1}_{i_{B}}-d_{i}$. This corresponds to the fact that these columns have now been used up in a stitch. Note that we will always get one of these coefficients being set to zero. This is clearly definable in Presburger arithmetic, and we will denote the formula for obtaining $\underline{a}^{i}$ and $\underline{b}^{i}$ from $\underline{a}^{i-1}$ and $\underline{b}^{i-1}$ by $\delta_{i}$, taking $\underline{a}^{0}=\underline{a}$ and $\underline{b}^{0}=\underline{b}$. This also allows us to calculate $d_{1}$ in terms of $\underline{a}^{0}$ and $\underline{b}^{0}$.

Now we get that $\phi^{-1}\Sigma(\phi(\ ),\phi(\ ))$ has graph the set of triples $(\underline{a},\underline{b},\underline{d})$ satisfying: $\gamma(\underline{a},\underline{b})\land\exists\underline{a}^{0}\dots\exists\underline{a}^{k}\exists\underline{b}^{0}\dots\exists\underline{b}^{k}:(\underline{a}^{0}=\underline{a})\land(\underline{b}^{0}=\underline{b})\land\left(\bigwedge_{i\in[k]}d_{i}=\min(a^{i-1}_{i_{T}},b^{i-1}_{i_{B}})\land\delta_{i}\right)$ ∎

With the above lemmas in hand, we can now prove the following result.

###### Proposition 4.9. For any $x\in A$, the map $\mu_{x}$ is definable.

###### Proof. By the discussion before lemma 4.6, $\mu_{x}$ is a composition of the maps $\phi^{-1}B\phi,\ \phi^{-1}T\phi,\ \rho_{1}\phi,\ \rho_{2}\phi,\ \phi^{-1}\Sigma(\phi(\ ),\phi(\ ))$, and multiplication of top tableaux. By lemmas 4.6, 4.7, and 4.8, all five required maps are definable. To define $\underline{a}^{5}=\phi^{-1}(T(\phi(\underline{a}))T(B(\phi(\underline{a}))x^{m}))$, first note that since $\underline{a}^{5}$ denotes a top tableau, we have that $a^{5}_{i}=0$ for each $i\in B_{1}$ as defined in lemma 4.6. Furthermore, for each $i\notin B_{1}$, we have that $a^{5}_{i}$ is determined by the formula $\eta_{n-1}$ applied to $\phi^{-1}(T(\phi(\underline{a})))$ and $\phi^{-1}(T(B(\phi(\underline{a}))x^{m}))$. This determines $\eta_{n}$ by induction, with base case $\eta_{2}$ as detailed in section 3.1. This completes the proof. ∎

###### Theorem 2. For any $n$, the first order theory of $P_{n}$ is decidable.

###### Proof. By proposition 4.9 and corollary 4.5, $\phi^{-1}(\circ)$ is definable. Thus $\phi:S\to P_{n}$ is an interpretation of $P_{n}$ in Presburger arithmetic. This reduces $FOTh(P_{n})$ to $FOTh\left(\mathbb{Z},0,1,+,-,\leq\right)$, which is decidable by lemma 2.6. ∎

By transitivity of interpretations, we have the following corollary.

###### Corollary 4.10. For any $n\in\mathbb{N},\ P_{n}$ is PE (positive existential) interpretable in $P_{2}$.

###### Corollary 4.11. The Diophantine problem (via the positive existential theory) and the problem of checking if a given identity holds (via the positive universal theory) are decidable in $P_{n}$.

## 5 The Diophantine problem in the infinite case

We note that the above interpretations were constructed algorithmically in a uniform way. That is to say, there will exist an effective procedure which, given $n$, will construct the interpreting map for $P_{n}$. The procedure runs as follows:
1. Generate the interpretation for $P_{n-1}$.
2. Given $n$, generate the power set of $[n]$ except the empty set.
3. Enumerate this set by the order $\sqsubseteq$ on columns. Since each column is a decreasing sequence of elements in $[n]$, each column corresponds to a unique element of the power set.
4. Run Schensted's algorithm on each pair of columns. If the output of running Schensted's algorithm on $c_{i}c_{j}$ is not $c_{i}c_{j}$, then $(i,j)$ is an incompatible pair.
5. Generate the formula defining $S$ by conjoining $x_{i}=0\lor x_{j}=0$ for each incompatible pair discovered in step 4.
6. Generate the formula defining equality in terms of the formula defining $S$.
7. Generate the formula for $\mu_{x}$ in terms of the interpretation for $P_{n-1}$.
8. Generate the sequences $\alpha$ and $\beta$ from lemma 4.4.

Steps 7 and 8 yield a formula defining multiplication. Step 1 will repeat recursively until we reach $P_{2}$, which can be written explicitly as in section 3.

### 5.1 The plactic monoid of all tableaux

We consider $A=\mathbb{N}\setminus\\{0\\}$ with $K_{\mathbb{N}}$ the set of Knuth relations for all triples $(x,y,z)\in\mathbb{N}^{3}$. Then the associated plactic monoid $P(\mathbb{N})$ is the monoid of _all_ semistandard Young tableaux. Despite the work in this paper, the question of deciding the theory of $P(\mathbb{N})$ remains open. However, we present an algorithm, by uniformity, for deciding the Diophantine problem for $P(\mathbb{N})$.

###### Lemma 5.1. For any $n\in\mathbb{N}$ the map $\phi:P(\mathbb{N})\to P_{n}$, defined on generators as $\phi(k)=k$ if $k\leq n$ and $\phi(k)=\varepsilon$ if $k>n$ and extended to words in the natural way, is a homomorphism.

###### Proof. Considered as a map from $\mathbb{N}^{*}$ to $[n]^{*}$, $\phi$ is clearly a homomorphism. It only remains to show that $\phi$ is well defined as a map from $P(\mathbb{N})$ to $P_{n}$. We will show this by proving that each Knuth relation in $K_{\mathbb{N}}$ will map to a relation that holds in $P_{n}$. Suppose $u=xzy$ and $v=zxy$, for $x\leq y<z$. If $z\leq n$ then $\phi(u)=u,\ \phi(v)=v,$ and $u=v$ in $P_{n}$ so there is nothing to prove. If $z>n$ then $\phi(u)=\phi(x)\phi(y)=\phi(v)$, as $\phi(z)=\varepsilon$. Thus $\phi(u)=\phi(v)$ will always hold in $P_{n}$. An analogous argument shows $\phi(u)=\phi(v)$ for $u=yxz$ and $v=yzx$ with $x<y\leq z$. ∎

###### Theorem 3. The Diophantine problem for $P(\mathbb{N})$ is decidable.

###### Proof. Suppose we are given some equation $u_{1}X_{1}\dots X_{n}u_{n+1}=v_{1}Y_{1}\dots Y_{m}v_{m+1}$. Denote this equation by $\varphi$. We define the support of $\varphi$ to be all letters appearing in the supports of $u_{i}$ and $v_{j}$: $supp(\varphi)=\bigcup_{i\leq n+1}supp(u_{i})\cup\bigcup_{j\leq m+1}supp(v_{j})$. Let $k=\max(supp(\varphi))$. Then by the above lemma there exists a homomorphism $\phi:P(\mathbb{N})\to P_{k}$. Since each $u_{i}$ and $v_{j}$ is an element of $P_{k}$, $\phi(u_{i})=u_{i}$ and $\phi(v_{j})=v_{j}$. Suppose $\varphi$ has a solution $(x_{1},\dots,x_{n},y_{1},\dots,y_{m})\in P(\mathbb{N})^{m+n}$. Then $(\phi(x_{1}),\dots\phi(x_{n}),\phi(y_{1}),\dots,\phi(y_{m}))\in P_{k}^{n+m}$ will also be a solution to $\varphi$. Conversely, since $P_{k}$ embeds naturally in $P(\mathbb{N})$ (a tableau over $[k]$ is in particular a tableau over $\mathbb{N}$), any solution in $P_{k}$ is also a solution in $P(\mathbb{N})$. Thus $\varphi$ has a solution in $P(\mathbb{N})$ if and only if it has a solution in $P_{k}$. Now, since there is a uniform algorithm for deciding first order sentences in $P_{k}$ for any $k$, we obtain the following procedure for solving Diophantine problems in $P(\mathbb{N})$:
1. Given $\varphi$ as input, calculate $k=\max(supp(\varphi))$.
2. Generate the interpretation of $P_{k}$ into Presburger arithmetic.
3. Interpret the sentence $\exists X_{1}\dots\exists X_{n}\exists Y_{1}\dots\exists Y_{m}:u_{1}X_{1}\dots X_{n}u_{n+1}=v_{1}Y_{1}\dots Y_{m}v_{m+1}$ in Presburger arithmetic using the interpretation of $P_{k}$, and check whether it holds. ∎

### 5.2 A plactic monoid on integers

We need not restrict ourselves to plactic monoids generated by $\mathbb{N}$. Consider instead tableaux with labels taken from $\mathbb{Z}$. By the total order on $\mathbb{Z}$ we obtain a set $K_{\mathbb{Z}}$ of Knuth relations on triples $(x,y,z)\in\mathbb{Z}^{3}$. Define the plactic monoid on integers to be $P(\mathbb{Z})=\langle\mathbb{Z}\ |\ K_{\mathbb{Z}}\rangle$. This is an infinitely generated plactic monoid, but note that $P(\mathbb{N})$ and $P(\mathbb{Z})$ are not isomorphic. Indeed, suppose $\psi:P(\mathbb{Z})\to P(\mathbb{N})$ were an isomorphism. Then for some $y\in P(\mathbb{Z})$ we have $\psi(y)=1$. Since 1 is irreducible, we must have $y\in\mathbb{Z}$. Consider $x<y<z,\ x,y,z\in\mathbb{Z}$. Then by irreducibility, $\psi(x),\psi(z)\in\mathbb{N}$. Thus, applying $\psi$ to the Knuth relation $yxz=yzx$, we have some $a,b\in\mathbb{N}$ with $a\neq b$ (by injectivity of $\psi$) such that $1ab=1ba$. Such an equality cannot hold in $P(\mathbb{N})$.

Given $\varphi$ a Diophantine equation in $P(\mathbb{Z})$, we will have $supp(\varphi)$ a finite totally ordered set. This set will have some smallest element $a\in\mathbb{Z}$ and some largest element $b\in\mathbb{Z}$. Then the interval $[a,b]\subset\mathbb{Z}$ has size $k=b-a+1$, and we can define an order preserving injective map from $supp(\varphi)$ to $[k]$. We will extend this map to a homomorphism.

###### Lemma 5.2. Let $\\{z_{1}<z_{2}<\dots<z_{n}\\}$ be a finite set of integers with their standard order. Then the map $\phi:P(\mathbb{Z})\to P_{k}$, with $k=z_{n}-z_{1}+1$, defined on generators by $\phi(z)=\begin{cases}\varepsilon,\ z<z_{1}\\ z-z_{1}+1,\ z\in[z_{1},z_{n}]\\ \varepsilon,\ z>z_{n}\end{cases}$ and extended to words in the natural way, is a homomorphism.

###### Proof. As in lemma 5.1, consider $u=xzy$ and $v=zxy$, for $x\leq y<z$. If more than one letter in $\\{x,y,z\\}$ is mapped to $\varepsilon$, there is nothing to prove. Likewise if no letters are mapped to $\varepsilon$. If only one letter is mapped to $\varepsilon$, then it is either $x$, yielding $\phi(u)=\phi(z)\phi(y)=\phi(v)$, or $z$, yielding $\phi(u)=\phi(x)\phi(y)=\phi(v)$. (It cannot be $y$ alone: if $y<z_{1}$ then also $x<z_{1}$, and if $y>z_{n}$ then also $z>z_{n}$.) An analogous argument holds for all other Knuth relations. ∎

Thus, as in the case above, any Diophantine equation $\varphi$ is solvable in $P(\mathbb{Z})$ if and only if it has a solution in a fixed finite rank plactic monoid. Therefore, by uniformity of the above algorithm, the Diophantine problem for $P(\mathbb{Z})$ is also decidable.

### 5.3 Two open questions

1. Is the first order theory of $P(\mathbb{N})$ decidable? It is known that this monoid satisfies no identities [18], and the above proof shows it has decidable Diophantine problem. Can this be extended to the whole theory? What about in the $P(\mathbb{Z})$ case?
2. Do infinite rank plactic monoids defined on other generating sets have decidable Diophantine problem? For example, does $P(\mathbb{Q})=\langle\mathbb{Q}\ |\ K_{\mathbb{Q}}\rangle$ have decidable Diophantine problem? What about $P(L)$ for an arbitrary recursive total order?

## Acknowledgements

This research was conducted during my master's study at the University of East Anglia, and will form part of my thesis.
I thank Robert Gray for his support and feedback as supervisor, and Lorna Gregory for her feedback and useful discussion of the model theoretic background. I also thank the PhD student community at the UEA for their support. ## References * [1] Antoine Abram and Christophe Reutenauer “The Stylic Monoid” arXiv, 2021 DOI: 10.48550/ARXIV.2106.06556 * [2] Ronald V Book and Friedrich Otto “String-rewriting systems” Springer, 1993 * [3] Alan J Cain et al. “A note on identities in plactic monoids and monoids of upper-triangular tropical matrices” In _arXiv:1705.04596_ , 2017 * [4] Alan J. Cain, Robert D. Gray and António Malheiro “Crystal monoids & crystal bases: Rewriting systems and biautomatic structures for plactic monoids of types An, Bn, Cn, Dn, and G2” In _Journal of Combinatorial Theory, Series A_ 162, 2019, pp. 406–466 DOI: https://doi.org/10.1016/j.jcta.2018.11.010 * [5] Alan J. Cain, Robert D. Gray and António Malheiro “Finite Gröbner–Shirshov bases for Plactic algebras and biautomatic structures for Plactic monoids” In _Journal of Algebra_ 423, 2015, pp. 37–53 DOI: https://doi.org/10.1016/j.jalgebra.2014.09.037 * [6] Alan J. Cain, Robert D. Gray and António Malheiro “Rewriting systems and biautomatic structures for Chinese, hypoplactic, and sylvester monoids” In _International Journal of Algebra and Computation_ 25.01n02 World Scientific Pub Co Pte Lt, 2015, pp. 51–80 DOI: https://doi.org/10.1142%2Fs0218196715400044 * [7] Alan J. Cain and António Malheiro “Deciding conjugacy in sylvester monoids and other homogeneous monoids” In _International Journal of Algebra and Computation_ 25.05 World Scientific Pub Co Pte Lt, 2015, pp. 899–915 DOI: https://doi.org/10.1142%2Fs0218196715500241 * [8] Alan J. Cain, Marianne Johnson, Mark Kambites and António Malheiro “Representations and identities of plactic-like monoids” In _Journal of Algebra_ 606, 2022, pp. 819–850 DOI: https://doi.org/10.1016/j.jalgebra.2022.04.033 * [9] Laura Ciobanu and Albert Garreta “Group equations with abelian predicates”, 2022 arXiv:2204.13946 [math.GR] * [10] Laure Daviaud, Marianne Johnson and Mark Kambites “Identities in upper triangular tropical matrix semigroups and the bicyclic monoid” In _Journal of Algebra_ 501 Elsevier BV, 2018, pp. 503–525 DOI: https://doi.org/10.1016%2Fj.jalgebra.2017.12.032 * [11] Volker Diekert and Markus Lohrey “Word equations over graph products” In _International Journal of Algebra and Computation_ 18.03 World Scientific, 2008, pp. 493–533 * [12] Gérard Duchamp and Daniel Krob “Plactic-Growth-Like Monoids” In _Words, Languages And Combinatorics Ii: Proceedings Of The International Conference_ , 1994, pp. 124 World Scientific * [13] Benjamin Fine, Anthony Gaglione, Gerhard Rosenberger and Dennis Spellman “The Tarski Problems and Their Solutions” In _Advances in Pure Mathematics_ 5.04 Scientific Research Publishing, 2015, pp. 212 * [14] William Fulton “Young Tableaux: With Applications to Representation Theory and Geometry”, London Mathematical Society Student Texts Cambridge University Press, 1996 DOI: 10.1017/CBO9780511626241 * [15] Albert Garreta and Robert D. Gray “On equations and first-order theory of one-relator monoids” In _Information and Computation_ 281, 2021, pp. 
104745 DOI: https://doi.org/10.1016/j.ic.2021.104745 * [16] Florent Hivert, Jean-Christophe Novelli and Jean-Yves Thibon “An analogue of the plactic monoid for binary search trees” arXiv, 2002 DOI: 10.48550/ARXIV.MATH/0206246 * [17] Wilfrid Hodges “Model theory” Cambridge University Press, 1993 * [18] Marianne Johnson and Mark Kambites “Tropical representations and identities of plactic monoids” In _Transactions of the American Mathematical Society_ 374.6, 2021, pp. 4423–4447 * [19] M. Kashiwara “On crystal bases of the $Q$-analogue of universal enveloping algebras” In _Duke Mathematical Journal_ 63.2 Duke University Press, 1991, pp. 465–516 DOI: 10.1215/S0012-7094-91-06321-0 * [20] Olga Kharlampovich and Alexei Myasnikov “Irreducible affine varieties over a free group. I. Irreducibility of quadratic equations and Nullstellensatz” In _Journal of Algebra_ 200.2 New York: Academic Press,[1964-, 1998, pp. 472–516 * [21] Olga Kharlampovich and Alexei Myasnikov “Tarski’s problem about the elementary theory of free groups has a positive solution” In _Electronic Research Announcements of the American Mathematical Society_ 4.14, 1998, pp. 101–108 * [22] Donald Knuth “Permutations, matrices, and generalized Young tableaux” In _Pacific journal of mathematics_ 34.3 Mathematical Sciences Publishers, 1970, pp. 709–727 * [23] Łukasz Kubat and Jan Okniński “Identities of the plactic monoid” In _Semigroup Forum_ 90.1, 2015, pp. 100–112 Springer * [24] Łukasz Kubat and Jan Okniński “Plactic algebra of rank 3” In _Semigroup Forum_ 84.2, 2012, pp. 241–266 Springer * [25] Alain Lascoux and Marcel-P Schützenberger “Le monoıde plaxique” In _Noncommutative structures in algebra and geometric combinatorics (Naples, 1978)_ 109, 1981, pp. 129–156 * [26] Cédric Lecouvey “Combinatorics of crystal graphs and Kostka–Foulkes polynomials for the root systems Bn,Cn and Dn” In _European Journal of Combinatorics_ 27.4, 2006, pp. 526–557 DOI: https://doi.org/10.1016/j.ejc.2005.01.006 * [27] Cédric Lecouvey “Schensted-Type Correspondence, Plactic Monoid, and Jeu de Taquin for Type Cn” In _Journal of Algebra_ 247.2, 2002, pp. 295–331 DOI: https://doi.org/10.1006/jabr.2001.8905 * [28] Cedric Lecouvey “Schensted type correspondence for type $G_{2}$ and computation of the canonical basis of a finite dimensional $U_{q}(G_{2})$-module” arXiv, 2002 DOI: 10.48550/ARXIV.MATH/0211443 * [29] Cedric lecouvey “Schensted-type correspondences and plactic monoids for types $B_{n}$ and $D_{n}$” arXiv, 2002 DOI: 10.48550/ARXIV.MATH/0211444 * [30] Peter Littelmann “A plactic algebra for semisimple Lie algebras” Citeseer, 1995 * [31] M. Lothaire “Algebraic Combinatorics on Words”, Encyclopedia of Mathematics and its Applications Cambridge University Press, 2002 DOI: 10.1017/CBO9781107326019 * [32] Gennadiy Semenovich Makanin “Equations in a free group” In _Izvestiya Rossiiskoi Akademii Nauk. Seriya Matematicheskaya_ 46.6 Russian Academy of Sciences, Steklov Mathematical Institute of Russian …, 1982, pp. 1199–1273 * [33] Gennady S Makanin “The problem of solvability of equations in a free semigroup” In _Mathematics of the USSR-Sbornik_ 32.2 IOP Publishing, 1977, pp. 129 * [34] D. Marker “Model Theory : An Introduction”, Graduate Texts in Mathematics Springer New York, 2013 * [35] Jean-Christophe Novelli “On the hypoplactic monoid” In _Discrete Mathematics_ 217.1, 2000, pp. 
315–336 DOI: https://doi.org/10.1016/S0012-365X(99)00270-8 * [36] Carl-Fredrik Nyberg-Brodda “A word-hyperbolic special monoid with undecidable Diophantine problem” arXiv, 2022 DOI: 10.48550/ARXIV.2205.01056 * [37] Alf Onshuus and Mariana Vicaría “Definable groups in models of Presburger Arithmetic” In _Annals of Pure and Applied Logic_ 171.6, 2020, pp. 102795 DOI: https://doi.org/10.1016/j.apal.2020.102795 * [38] M. Presburger “Uber die Vollstandigkeiteines gewissen Systems der Arithmetik ganzer Zahlen, in welchen die Addition als einzige Operation hervortritt” In _Comptes-Rendus du ler Congres des Mathematiciens des Pays Slavs_ , 1929 URL: https://cir.nii.ac.jp/crid/1571698599431503232 * [39] W.. Quine “Concatenation as a Basis for Arithmetic” In _Journal of Symbolic Logic_ 11.4 Association for Symbolic Logic, 1946, pp. 105–114 DOI: 10.2307/2268308 * [40] Alexander A Razborov “On systems of equations in a free group” In _Mathematics of the USSR-Izvestiya_ 25.1 IOP Publishing, 1985, pp. 115 * [41] C. Schensted “Longest Increasing and Decreasing Subsequences” In _Canadian Journal of Mathematics_ 13 Cambridge University Press, 1961, pp. 179–191 DOI: 10.4153/CJM-1961-015-3 * [42] M-P Schützenberger “La correspondance de Robinson” In _Combinatoire et représentation du groupe symétrique_ Springer, 1977, pp. 59–113 * [43] Marcel P Schützenberger “Sur une construction de Gilbert de B. Robinson” In _Séminaire Dubreil. Algèbre et théorie des nombres_ 25.1, 1971, pp. 1–4 * [44] Zlil Sela “Diophantine geometry over groups I : Makanin-Razborov diagrams” In _Publications Mathématiques de l’IHÉS_ 93 Institut des Hautes Études Scientifiques, 2001, pp. 31–105 URL: http://www.numdam.org/item/PMIHES_2001__93__31_0/ * [45] Zlil Sela “Word Equations I: Pairs and their Makanin-Razborov Diagrams” arXiv, 2016 DOI: 10.48550/ARXIV.1607.05431 * [46] Ryan Stansifer “Presburger’s Article on Integer Airthmetic: Remarks and Translation”, 1984
# Multi-stage Retrieve and Re-rank Model for Automatic Medical Coding Recommendation

Xindi Wang1,2, Robert E. Mercer1, Frank Rudzicz2,3,4 1 Department of Computer Science, University of Western Ontario, Canada 2 Vector Institute for Artificial Intelligence, Canada 3 Faculty of Computer Science, Dalhousie University, Canada 4 Department of Computer Science, University of Toronto, Canada <EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>

###### Abstract

The International Classification of Diseases (ICD) serves as a definitive medical classification system encompassing a wide range of diseases and conditions. The primary objective of ICD indexing is to allocate a subset of ICD codes to a medical record, which facilitates standardized documentation and management of various health conditions. Most existing approaches have suffered from selecting the proper label subsets from an extremely large ICD collection with a heavy long-tailed label distribution. In this paper, we leverage a multi-stage “retrieve and re-rank” framework as a novel solution to ICD indexing, via a hybrid discrete retrieval method, and re-rank retrieved candidates with contrastive learning that allows the model to make more accurate predictions from a simplified label space. The retrieval model is a hybrid of auxiliary knowledge of the electronic health records (EHR) and a discrete retrieval method (BM25), which efficiently collects high-quality candidates. In the last stage, we propose a label co-occurrence guided contrastive re-ranking model, which re-ranks the candidate labels by pulling together the clinical notes with positive ICD codes. Experimental results show the proposed method achieves state-of-the-art performance on a number of measures on the MIMIC-III benchmark.

## 1 Introduction

Electronic health records111https://www.cms.gov/Medicare/E-Health/EHealthRecords (EHRs) contain a comprehensive repository of essential administrative and clinical data pertinent to a person’s care within a specific healthcare provider setting. In order to conduct meaningful statistical analysis, these EHR data are annotated with structured codes in a classification system known as medical codes. The International Classification of Diseases222https://www.who.int/standards/classifications/classification-of- diseases (ICD) is one of the most widely-used coding systems, and it provides a taxonomy of classes, each uniquely identified by a code assigned to an episode of patient care. The task of medical coding associates ICD codes with EHR documents. The status quo of assigning medical codes is a manual process, which is labour-intensive, time-consuming, and error-prone Xie and Xing (2018). To reduce coding errors and cost, the demand for automated medical coding has become imperative. Previous deep learning approaches regarded medical coding as an extreme multi- label text classification problem Shi et al. (2017); Mullenbach et al. (2018); Baumel et al. (2018); Xie et al. (2019); Yuan et al.
(2022), where an encoder is typically employed to learn the representations of the clinical notes and a label-specific binary classifier is subsequently constructed on top of the encoder for label predictions. However, some remaining difficulties have still posed immense challenges. First, clinical documents are lengthy (containing on average 1596 words in the MIMIC-III dataset) and noisy (including terse abbreviations, symbols, and misspellings). Second, the label set is extremely large and complex; for instance, in the $10^{th}$ ICD edition, there are over 130,000 codes333https://www.cdc.gov/nchs/icd/icd10cm_pcs.htm. Third, the distribution of ICD codes is extremely long-tailed; while some ICD codes occur frequently, many others seldom appear, if at all, because of the rarity of the diseases. For instance, among the 942 unique 3-digit ICD codes in the MIMIC- III dataset Johnson et al. (2016), the ten most common codes account for 26% of all code occurrences and the 437 least common codes account for only 1% of occurrences Bai and Vucetic (2019). Figure 1: An example of a medical record from the MIMIC-III dataset which includes the discharge summary, assigned ICD codes and auxiliary knowledge. We colour each code and its corresponding mentions in the discharge summary and auxiliary knowledge. We use the auxiliary knowledge of the notes to retrieve the candidate subset of the label space. To address the aforementioned challenges, we propose a novel multi-stage retrieve and re-rank framework, where the goal is to first generate a curated ICD list and then provide suggested ICD codes for a given medical record. In contrast to prior approaches, for instance, CAML Mullenbach et al. (2018), MultiResCNN Li and Yu (2020) and KEPTLongformer Yang et al. (2022), that primarily consider ICD indexing as a multi-label text classification task, we introduce a new perspective that conceptualizes the task as a recommendation problem. More precisely, we first conduct a two-stage retrieval process leveraging auxiliary knowledge and BM25 to obtain a small subset of candidate ICD codes from the large number of labels to alleviate issues caused by the label set and imbalanced label distribution. EHR auxiliary knowledge holds significant potential, but it has often been underutilized in prior studies. In addition to clinical texts, our focus centers on two code terminologies: Diagnosis-Related Group codes444https://www.cms.gov/Medicare/Medicare-Fee-for- Service-Payment/AcuteInpatientPPS/MS-DRG-Classifications-and-Software (DRG) and Current Procedural Terminology codes555https://www.ama- assn.org/amaone/cpt-current-procedural-terminology (CPT), as well as patient prescribed medications. These external sources can serve as robust indicators for predicting ICD codes. For instance, within a drug prescription, the presence of a medication like “Namenda” can strongly imply a likelihood of Alzheimer’s disease, as depicted in Figure 1. Subsequently, we design a re- ranking model via co-occurrence guided contrastive learning to refine the candidate set, which can deal with lengthy clinical notes and generate semantically meaningful representations via the pre-trained language model and leverage code co-occurrence to generate co-occurrence-aware label representations. The co-occurrence of codes in clinical texts yields valuable insights into the interconnections among different diseases or conditions. 
As illustrated in Figure 1, the code for “Dementia in conditions classified elsewhere without behavioral disturbance” (294.10) can be easily found in the text; however, inferring the code “Alzheimer’s disease” (331.0) presents a more intricate challenge with less explicit clues. Fortunately, a robust association exists between these two diseases, with “Alzheimer’s disease” serving as a prevalent cause of “dementia”. This linkage can be effectively captured as these two diseases frequently co-occur within the clinical notes. This empowers us to gain a deeper understanding of the contexts, which could mitigate the limitation of long-tailed label distributions as rare labels might be suggested based on these relationships. We train the re-ranking model via contrastive learning as it has strong discriminative power that can extract features uniquely associated with each class, which empowers the model to make more accurate recommendations. To summarize, the major contributions of this paper are: * • We formalize the medical coding task as a recommendation problem and present a novel multi-stage retrieve and re-rank framework to make more accurate predictions by ruling out the irrelevant codes before ranking, rather than making direct predictions on the entire large label set. * • To address the large label set and long-tailed distribution issues, in the two-stage retrieval process we use external knowledge and BM25 to retrieve a subset of candidate labels from the large label space. We further leverage the code co-occurrence in the re-ranking stage to capture the internal connections among the codes. * • We apply contrastive learning in the re-ranking stage. It effectively pulls together the representations of a clinical note and its corresponding golden truth labels, which allows the model to make more accurate predictions. ## 2 Related Work The automatic ICD indexing task is well established in the healthcare domain. Extensive research using deep learning has been dedicated to ICD indexing, including recurrent-based neural networks (RNNs), convolution-based neural networks (CNNs), and their variations Mullenbach et al. (2018); Li and Yu (2020); Shi et al. (2017); Xie and Xing (2018). These architectures are able to extract and categorize semantic features, reducing the need for medical domain expertise during the traditional feature selection stage seen in conventional algorithms Teng et al. (2023). The ICD indexing task is formulated as a multi-label classification problem in these approaches. Mullenbach et al. (2018) introduced a combination of CNN with an attention mechanism to effectively capture pertinent information within clinical texts for each ICD code. Building on this foundation, Xie et al. (2019) enhanced the CNN attention model by integrating a multi-scale feature attention technique. Many CNN variants were subsequently introduced to address the challenges posed by lengthy and noisy clinical texts, including MultiResCNN Li and Yu (2020), DCAN Ji et al. (2020), and EffectiveCAN Liu et al. (2021). RNN-based models, renowned for their capacity to capture contextual information across input texts, have also been widely used for ICD indexing. Shi et al. (2017) proposed a character-aware Long Short-Term Memory (LSTM) recurrent network to learn the underlying representations of clinical texts. Xie and Xing (2018) introduced a tree-of-sequences LSTM architecture alongside adversarial learning to capture hierarchical relationships among ICD codes. Additionally, Baumel et al. 
(2018) presented a Hierarchical Attention-Bidirectional Gated Recurrent Unit (HA-GRU) model, facilitating document labeling by identifying sentences relevant to each ICD code. LAAT Vu et al. (2020) used a bidirectional Long-Short Term Memory (BiLSTM) encoder and a customized label-wise attention mechanism to cultivate label-specific vectors across distinct clinical text fragments. To address the hierarchical relationships intrinsic to ICD codes, Graph Convolutional Neural Networks (GCNNs) Kipf and Welling (2017) have emerged as a powerful tool. Rios and Kavuluru (2018) and Xie et al. (2019) used GCNNs to capture both the hierarchical interplay among ICD codes and the semantic information specific to each code. HyperCore Cao et al. (2020) took a comprehensive approach by considering both code hierarchy and code co- occurrence, employing GCNNs to learn code representations within the co-graph. Incorporating external knowledge beyond ICD code information has also gained traction. Bai and Vucetic (2019) introduced a Knowledge Source Integration (KSI) model that integrates external knowledge from Wikipedia. This integration calculated matching scores between clinical notes and disease- related Wikipedia documents, in order to enrich the available information for ICD predictions. Additionally, Yuan et al. (2022) proposed a Multiple Synonym Matching Network (MSMN) to use synonyms of ICD codes, enhancing the quality of code representation learning. Expanding on this, Yang et al. (2022) integrated a pre-trained language model with three domain-specific knowledge sources: code hierarchy, synonyms, and abbreviations. This fusion of knowledge sources contributes significantly to the performance of ICD classification. ## 3 Method ### 3.1 A Multi-stage Framework Figure 2: Overview of the proposed multi-stage retrieve and re-rank framework. The model first leverages auxiliary knowledge and BM25 to retrieve a candidate list from the full label space, then uses a re-rank model that leverages the code co-occurrence guided contrastive learning to generate the final relevant labels. We formulate the medical coding task as a recommendation task given medical records $\mathcal{D}=\\{d_{1},d_{2},...,d_{N}\\}$ and a set of ICD codes $\mathcal{Y}=\\{y_{1},y_{2},...,y_{L}\\}$ with associated external auxiliary knowledge $\mathcal{K}$. We construct the label information as a graph structure $\mathcal{G}$, using code co-occurrence relations, and we train a multi-stage recommender system $\mathcal{R}$, based on the text information $\mathcal{D}$, constructed label information $\mathcal{G}$, and the external auxiliary knowledge $\mathcal{K}$. The system $\mathcal{R}$ needs to predict the relevant labels given a document $d\notin\mathcal{D}$. In this section, we present a multi-stage retrieve and re-rank framework for ICD indexing, which is shown in Figure 2. Our model is composed of a two-stage retrieval process that uses auxiliary knowledge of the EHR and BM25 to obtain a shortened candidate list, and a re-ranking process that conducts code co- occurrence guided contrastive learning to further improve the recommended ICD list. ### 3.2 The Retrieval Stage #### Using Auxiliary Knowledge To retrieve the candidate list using auxiliary knowledge, we incorporate insights from three external sources of knowledge: diagnosis-related group (DRG) codes, current procedural terminology (CPT) codes, and medications prescribed to patients. 
DRG codes are used by hospitals and healthcare providers to classify patients into groups based on their diagnosis, treatment, and length of stay. These codes are used for reimbursement purposes, and they help determine the amount that healthcare providers are remunerated for their services. DRG codes are further classified into medical DRGs (which exclude operating room procedures) and surgical DRGs. CPT codes are used to describe medical procedures and services provided by healthcare providers. They provide a standardized way of documenting and billing for medical services. CPT codes are used by insurance companies to determine reimbursement rates for healthcare providers. Such code terminologies significantly contribute to the refinement of ICD indexing. Moreover, the medications prescribed to patients offer a wealth of predictive information for ICD codes. These prescriptions often mark the conclusion of a patient’s care episode. As patients approach the conclusion of their treatment, the prescribed medications serve a critical role in managing their conditions. Consequently, these medications emerge as potent indicators of underlying health conditions or diagnoses. Their inclusion in the retrieval process greatly enhances the accuracy and relevance of the corresponding ICD code recommendations. The aforementioned auxiliary knowledge, such as DRG codes, CPT codes, and drug prescriptions, typically appears in the EHR data and is readily accessible. Given a clinical note $d$, we retrieve the candidate ICD list by calculating the auxiliary knowledge and label co-occurrence matrix using conditional probabilities, i.e., $P(y_{i}\,|\,k_{j})$, which denote the probabilities of occurrence of ICD $y_{i}$ when auxiliary knowledge $k_{j}$ appears. $P(y_{i}\,|\,k_{j})=\frac{C_{y_{i}\cap k_{j}}}{C_{k_{j}}},$ (1) where $C_{y_{i}\cap k_{j}}$ denotes the number of co-occurrences of $y_{i}$ and $k_{j}$, and $C_{k_{j}}$ is the number of occurrences of $k_{j}$ in the training set. To avoid the noise of rare co-occurrences, a threshold $\eta$ filters noisy correlations. $\tilde{K}_{j}$ denotes the selected ICD set for auxiliary knowledge $j$. $\tilde{K}_{j}=\\{y_{i}|P(y_{i}|k_{j})>\eta,\;i=1,...,L\\},$ (2) where $L$ is the total number of ICD codes in the label set, and $\eta=0.005$. We then join the ICD codes retrieved from the auxiliary knowledge co- occurrences for the DRG codes, CPT codes and prescribed drugs to form the candidate ICD subset $\mathcal{C}_{\mathrm{auxiliary}}$: $\mathcal{C}_{\mathrm{auxiliary}}(d)=\\\ \tilde{K}_{\mathrm{DRG}}(d)\cup\tilde{K}_{\mathrm{CPT}}(d)\cup\tilde{K}_{\mathrm{drug}}(d),$ (3) where $\mathcal{C}_{\mathrm{auxiliary}}\subseteq\mathcal{Y}$. #### Using BM25 The retrieval stage using auxiliary knowledge incorporates the co-relations between ICD codes and external knowledge, but ignores the relationship between clinical texts and labels. To increase the recall of the retrieval stage, we adopt BM25 Robertson and Walker (1994) to allow lexical matching between the medical documents and labels on the retrieved candidate list $\mathcal{C}_{\mathrm{auxiliary}}$. 
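Before turning to the BM25 scoring, the auxiliary-knowledge retrieval of Eqs. (1)–(3) can be sketched in code. This is a minimal illustration under our own naming, not the released implementation: the record fields `aux_codes` and `icd_codes` and all function names are assumptions, and only the threshold $\eta=0.005$ is taken from the description above.

```python
from collections import Counter, defaultdict

ETA = 0.005  # threshold on P(y | k) in Eq. (2), as stated in the text

def build_aux_cooccurrence(train_records):
    """Count occurrences of auxiliary codes k and joint occurrences with ICD codes y."""
    joint = defaultdict(Counter)   # joint[k][y] = C_{y ∩ k}
    aux_count = Counter()          # aux_count[k] = C_k
    for rec in train_records:      # each record: {"aux_codes": [...], "icd_codes": [...]}
        for k in set(rec["aux_codes"]):
            aux_count[k] += 1
            for y in set(rec["icd_codes"]):
                joint[k][y] += 1
    return joint, aux_count

def retrieve_candidates(record, joint, aux_count, eta=ETA):
    """C_auxiliary(d): union over k of {y : P(y | k) > eta}, Eqs. (1)-(3)."""
    candidates = set()
    for k in set(record["aux_codes"]):
        if aux_count[k] == 0:
            continue
        for y, c_joint in joint[k].items():
            if c_joint / aux_count[k] > eta:   # P(y | k) = C_{y∩k} / C_k, Eq. (1)
                candidates.add(y)
    return candidates
```

The set returned by `retrieve_candidates` plays the role of $\mathcal{C}_{\mathrm{auxiliary}}(d)$, which the BM25 scoring described next reduces further.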
Given a medical record $d$ and an ICD code $y$, the score between $d$ and $y$ is calculated as: $\mathrm{BM25}(d,y)=\\\ \sum_{w\in d\cap t_{y}}\mathrm{IDF}(w)\frac{\scriptstyle\mathrm{TF}(w,t_{y})\cdot(k_{1}+1)}{\scriptstyle\mathrm{TF}(w,t_{y})\cdot k_{1}(1-b+b\frac{|\mathcal{Y}|}{\textit{avgdl}})},$ (4) and $\textit{avgdl}=\frac{1}{|\mathcal{Y}|}\sum_{y\in\mathcal{Y}}|t_{y}|,$ (5) where $t_{y}$ represents the words in the label descriptors, $|\mathcal{Y}|$ is the length of the label descriptors in words, avgdl is the average length of text information in the label. When the BM25 score between $d$ and $y_{i}$ exceeds a certain threshold $\theta$, we add $y_{i}$ as a candidate of $d$: $\mathcal{C}_{\mathrm{BM25}}(d)=\\\ \\{{y_{i}}|\mathrm{BM25}(d,y_{i})>\theta,y_{i}\in\mathcal{C}_{\mathrm{auxiliary}}\\},$ (6) where $\theta=200$. Given a clinical note $d$, its candidate ICD set is first generated by using the auxiliary knowledge in the retrieval stage and then reduced by using BM25, where $\mathcal{C}_{\mathrm{BM25}}\subseteq\mathcal{C}_{\mathrm{auxiliary}}$ and $\mathcal{C}_{\mathrm{auxiliary}}\subseteq\mathcal{Y}$. ### 3.3 The Re-ranking Stage #### Clinical Text Encoder Encouraged by the success of the pre-trained language model Longformer Beltagy et al. (2020) in dealing with longer texts, we use Clinical-Longformer Li et al. (2023), specifically pre-trained in the medical domain, as a text encoder. Given a medical document $d$ as input that consists of a sequence of tokens: $d=\\{[\texttt{CLS}],x_{1},x_{2},...,x_{n-2},[\texttt{SEP}]\\},$ (7) where $[\texttt{CLS}]$ and $[\texttt{SEP}]$ are two special tokens that indicate the beginning and end of the sequence, and $n$ is the sequence length, the Clinical-Longformer encodes the tokens and outputs the hidden representations for each token: $H_{\mathrm{hidden}}=\mathrm{ClinicalLongformer}(d),$ (8) where $H_{\mathrm{hidden}}\in\mathbb{R}^{n\times h_{e}}$, and $h_{e}$ is the hidden size. Following previous work Wang et al. (2022); Yang et al. (2022), we use the hidden state of the $\mathrm{[CLS]}$ token to represent the document, which is the first token of $H_{\mathrm{hidden}}$, denoted as $H_{\mathrm{T}}$. #### Label Encoder The occurrence of two ICD codes together in clinical texts frequently indicates a simultaneous presence or a causal connection between specific diseases. This implies that the codes representing these interconnected diseases often manifest together within clinical notes. We employ a Graphormer Ying et al. (2021) to incorporate the co-occurrence relationships among ICD codes. Unlike the original GNN, Graphormer models graphs using Transformer layers Vaswani et al. (2017) with spatial encoding and edge encoding, which could effectively encode the structural information (i.e., code co-occurrence) of a graph into the model. We create a directed code co-occurrence graph $\mathcal{G}=(\mathcal{Y},\mathcal{E})$, where node set $\mathcal{Y}$ is the labels and edge set $\mathcal{E}$ denotes the co-occurrence relations. This graph is constructed using the code co-occurrence matrix, which has been used as the edge matrix for the graph. We create the code co-occurrence matrix by using the correlated relationship between labels based on conditional probabilities. This approach encapsulates the interdependence between various ICD codes in a quantifiable manner, offering valuable insights into the underlying connections among disease codes within the clinical texts. 
To be more specific, we calculate the probability of occurrence of label $y_{j}$ when label $y_{i}$ appears as follows: $P(y_{j}\,|\,y_{i})=\frac{C_{y_{i}\cap y_{j}}}{C_{y_{i}}}$ (9) where $C_{y_{j}\cap y_{i}}$ denotes the number of co-occurrences of $y_{i}$ and $y_{j}$, and $C_{y_{i}}$ is the number of occurrences of $y_{i}$ in the training set. To facilitate graph construction, we binarize the correlation probability $P(y_{j}\,|\,y_{i})$. This entails converting the probability values into binary values which indicates whether a correlation exists (or not) between two labels. The operation can be written as: $\mathcal{E}_{ij}=\begin{cases}0,\text{if }P(y_{j}\,|\,y_{i})<\lambda\\\ 1,\text{if }P(y_{j}\,|\,y_{i})\geq\lambda,\end{cases}$ (10) where $\mathcal{E}$ is the binary correlation matrix that is used to form the edge set, and $\lambda$ is the hyper-parameter threshold to filter the noise edges. In our experiment, $\lambda=1$, which means that a edge is formed when the two labels in each pair always appear together. To encode the graph $\mathcal{G}$, we first generate the initial node features using the ICD full descriptors for each code $y$ via Clinical-Longformer: $\begin{gathered}\mathrm{y}=\\{[\texttt{CLS}],x_{1},x_{2},...,x_{n-2},[\texttt{SEP}]\\},\\\ H_{v}=\mathrm{ClinicalLongformer}(\mathrm{y}),\end{gathered}$ (11) where $\mathrm{y}$ represents a sequence of words in the label descriptors of label $y$, $H_{v}\in\mathbb{R}^{n\times h_{e}}$, and $h_{e}$ is the hidden size. We use the hidden state of the first token ($\mathrm{[CLS]}$) to represent the initial node feature denoted as $H_{\mathrm{node}}^{i}$ for the $i^{th}$ label. With all initial node features stacked as a matrix $V=\\{H_{\mathrm{node}}^{1},H_{\mathrm{node}}^{2},...,H_{\mathrm{node}}^{L}\\}$, where $V\in\mathbb{R}^{h_{e}\times L}$, a standard self-attention layer is then used for feature migration. To leverage the structural information, a novel spatial encoding method is used to modify the Query-Key product matrix $A^{\mathcal{G}}$ in the self-attention layer: $A^{\mathcal{G}}_{\textit{ij}}=\frac{(H_{\mathrm{node}}^{i}W^{\mathcal{G}}_{Q})(H_{\mathrm{node}}^{j}W^{\mathcal{G}}_{K})^{\intercal}}{\sqrt{h_{e}}}+b_{\phi(y_{i},y_{j})},$ (12) where $W^{\mathcal{G}}_{Q}$ and $W^{\mathcal{G}}_{K}$ are layer-specific weight matrices, and $\phi(y_{i},y_{j})$ is the spatial relation between $y_{i}$ and $y_{j}$ in graph $\mathcal{G}$, and the function $\phi(\cdot)$ is defined as the connectivity between the nodes in $\mathcal{G}$, which is the co-occurrence relation among labels. $b_{\phi(y_{i},y_{j})}$ is a learnable scalar indexed by $\phi(y_{i},y_{j})$, and shared across all layers. The attention score $A^{\mathcal{G}}_{\textit{ij}}$, then, has been used to aggregate the multi-head attention for the final output: $h^{l+1}=\mathrm{MHA}(\mathrm{LN}(h^{l}))+h^{l},$ (13) where $\mathrm{LN}$ denotes the layer normalization, $\mathrm{MHA}$ denotes the multi-head self-attention, $h^{l}$ and $h^{l+1}\in\mathbb{R}^{L\times h_{e}}$ indicate the node representation of the $l^{th}$ and $(l+1)^{th}$ layers. We use the last layer to represent the label feature denoted as $H_{L}$. For more details on the full structure of Graphormer, please refer to the original paper Ying et al. (2021). #### Contrastive Learning for Re-ranking Now, we construct a code co-occurrence guided contrastive learning framework. 
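Before detailing the contrastive objective, the construction of the binarized co-occurrence edge matrix of Eqs. (9)–(10) can be illustrated with a short sketch. It assumes the gold labels of the training documents are available as a multi-hot matrix; the function name and array layout are ours, and $\lambda=1$ is the value reported above.

```python
import numpy as np

def build_edge_matrix(label_matrix, lam=1.0):
    """Binarized label co-occurrence edges E (Eqs. (9)-(10)).

    label_matrix: (num_train_docs, L) multi-hot array of gold ICD codes.
    Returns an (L, L) 0/1 matrix with E[i, j] = 1 iff P(y_j | y_i) >= lam.
    """
    counts = label_matrix.sum(axis=0)            # C_{y_i}
    joint = label_matrix.T @ label_matrix        # C_{y_i ∩ y_j}
    cond = np.divide(joint, counts[:, None],
                     out=np.zeros_like(joint, dtype=float),
                     where=counts[:, None] > 0)  # P(y_j | y_i), Eq. (9)
    edges = (cond >= lam).astype(np.int8)        # Eq. (10)
    np.fill_diagonal(edges, 0)                   # drop self-loops
    return edges
```

With $\lambda=1$, an edge is kept only for label pairs that always co-occur; the resulting connectivity is what the spatial-encoding bias $b_{\phi(y_i,y_j)}$ in the Graphormer layers operates on.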
Unlike supervised learning that aims to understand “what is what”, contrastive learning adopts a different perspective by learning “what is similar or dissimilar to what”. In our problem setting, we focus on the distances between a clinical document and its associated ICD codes, rather than solely between samples themselves. We consider the ground truth labels as positive samples, while the negative samples comprise all the other labels within the label space. Given $H_{\mathrm{T}}$, the representation for a clinical note $d$, and the set of representations of its corresponding ICD codes denoted as $H^{+}_{\mathrm{L}}$, we denote the representations of $N$ negative ICD codes randomly chosen from the ICD codes of the documents in the batch (batch size is $N$), which are not ICD codes of document $d$, as $H^{-}_{\mathrm{L}}$. Contrastive learning aims to learn the effective representations by pulling $d$ and $H^{+}_{\mathrm{L}}$ together while pushing apart $d$ and $H^{-}_{\mathrm{L}}$, represented as $S$ and $D$, respectively, in the equation below. The contrastive loss can be defined as: $\mathcal{L}=-\mathrm{log}\frac{S/\tau}{S/\tau+D/\tau},$ (14) where $S=\exp(\sum_{c\in L^{+}_{\mathrm{L}}}\cos(H_{\mathrm{T}},c)/|H^{+}_{\mathrm{L}}|)$, $D=\exp(\sum_{c^{\prime}\in L^{-}_{\mathrm{L}}}\cos(H_{\mathrm{T}},c^{\prime})/N)$, and $\tau$ is the temperature hyper-parameter. During inference, a comparison is conducted by measuring the distance between the query clinical note and ICD codes in the embedding space, which ranks the ICD codes and then provides recommendations of the potential ICD candidates. ## 4 Experiments ### 4.1 Dataset and Pre-processing We conduct our experiments on the publicly available benchmark MIMIC-III Johnson et al. (2016) dataset that contains a variety of patient data types, including discharge summaries, demographic details, interventions, laboratory results, physiologic measures, and medication information. Following previous work, we are interested in the de-identified discharge summaries with annotated ICD-9 codes. There are 52,722 discharge summaries and 8,922 unique ICD-9 codes in the dataset. We mainly use three major data resources from the dataset: (1) de-identified discharge summaries (from the NOTEEVENTS table); (2) ICD-9 codes (from DIAGNOSES_ICD and PROCEDURES_ICD tables); and (3) auxiliary knowledge including DRG codes, CPT codes and drug prescriptions (from DRGCODES, CPTEVENTS, and PRESCRIPTIONS tables). To preprocess the clinical notes, we first remove all de-identified information, then replace punctuation and atypical alphanumerical character combinations (e.g., ‘3a’, ‘4kg’) with white space, and lowercase every token. We truncate the discharge summaries at a maximum length of 4000 tokens. We follow Mullenbach et al. (2018) to form two settings: full codes (MIMIC-III- full) and top-50 frequent codes (MIMIC-III-top 50). In MIMIC-III-full, there are 47,719 discharge summaries for training, with 1,632 for validation, and with 3,372 for testing. ### 4.2 Implementation and Evaluation We implement our model in PyTorch Paszke et al. (2019) on a single NVIDIA A100 40G GPU. We use the Adam optimizer and early stopping strategies using the Micro-F1 score over the validation set as the stopping criterion to avoid over-fitting. We set the initial learning rate as 5e-5 with batch size 16. We choose a learning rate scheduler which is warmed up with cosine decay, and the warm up ratio is set to 0.1. 
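As a concrete illustration of the training objective, the contrastive loss of Eq. (14) could be implemented roughly as follows in PyTorch. The tensor shapes and names are our assumptions (one note representation, its positive code embeddings, and the $N$ in-batch negative code embeddings), and the default temperature value is a placeholder; note that, exactly as Eq. (14) is written, $\tau$ cancels, so it is kept only for fidelity to the formula.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(h_note, h_pos, h_neg, tau=0.07):
    """Code co-occurrence guided contrastive loss, Eq. (14).

    h_note: (d,)    note representation H_T (the [CLS] state).
    h_pos:  (P, d)  embeddings of the note's gold ICD codes.
    h_neg:  (N, d)  embeddings of negative ICD codes sampled from the batch.
    tau:    temperature (placeholder value; not reported in this section).
    """
    s = torch.exp(F.cosine_similarity(h_note.unsqueeze(0), h_pos, dim=-1).mean())  # S
    d = torch.exp(F.cosine_similarity(h_note.unsqueeze(0), h_neg, dim=-1).mean())  # D
    return -torch.log((s / tau) / (s / tau + d / tau))
```

At inference time, the same cosine distances between a query note and the candidate code embeddings provide the ranking described above.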
Our code is available at https://github.com/xdwang0726/ICD-contrastive-curriculum. For evaluating the performance of our proposed model, we employ three commonly used metrics: F1-score (Micro and Macro), AUC (Micro and Macro), and precision at $K$ ($\mathrm{P@K}$).

## 5 Results and Discussion

| Model | Full: AUC Macro | Full: AUC Micro | Full: F1 Macro | Full: F1 Micro | Full: P@8 | Full: P@15 | Top-50: AUC Macro | Top-50: AUC Micro | Top-50: F1 Macro | Top-50: F1 Micro | Top-50: P@5 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| CAML Mullenbach et al. (2018) | 0.895 | 0.986 | 0.088 | 0.539 | 0.709 | 0.561 | 0.875 | 0.909 | 0.532 | 0.614 | 0.609 |
| DR-CAML Mullenbach et al. (2018) | 0.897 | 0.985 | 0.086 | 0.529 | 0.690 | 0.548 | 0.884 | 0.916 | 0.576 | 0.633 | 0.618 |
| MultiResCNN Li and Yu (2020) | 0.910 | 0.986 | 0.085 | 0.552 | 0.734 | 0.584 | 0.899 | 0.928 | 0.606 | 0.670 | 0.641 |
| LAAT Vu et al. (2020) | 0.919 | 0.988 | 0.099 | 0.575 | 0.738 | 0.591 | 0.925 | 0.946 | 0.666 | 0.715 | 0.675 |
| Joint-LAAT Vu et al. (2020) | 0.921 | 0.988 | 0.107 | 0.575 | 0.735 | 0.590 | 0.925 | 0.946 | 0.661 | 0.716 | 0.671 |
| EffectiveCAN Liu et al. (2021) | 0.915 | 0.988 | 0.106 | 0.589 | 0.758 | 0.606 | 0.915 | 0.938 | 0.644 | 0.702 | 0.656 |
| MSMN Yuan et al. (2022) | 0.950 | 0.992 | 0.103 | 0.584 | 0.752 | 0.599 | 0.928 | 0.947 | 0.683 | 0.725 | 0.680 |
| KEPTLongformer Yang et al. (2022) | - | - | 0.118 | 0.599 | 0.771 | 0.615 | 0.926 | 0.947 | 0.689 | 0.728 | 0.672 |
| Ours | 0.949 | 0.995 | 0.114 | 0.603 | 0.775 | 0.623 | 0.927 | 0.947 | 0.687 | 0.732 | 0.685 |

Table 1: Comparison to previous methods across the three main evaluation metrics on the MIMIC-III dataset (Full = MIMIC-III-full; Top-50 = MIMIC-III-top 50). Bold: the optimal values.

In order to assess the efficacy of our proposed framework, we compare it with the existing state-of-the-art (SOTA) models, as outlined in Table 1. The top score for each metric is denoted in bold. As shown, our model outperforms the previous methods on the majority of evaluation metrics, with the exception of Macro-AUC and Macro-F1 on MIMIC-III-full and MIMIC-III-top 50. Notably, our model achieves comparable performance on Micro-F1 and Micro-AUC, and improves precision at $K$ on both MIMIC-III-full and MIMIC-III-top 50. These results provide solid evidence to validate the efficacy of integrating auxiliary knowledge in the retrieval stage and leveraging code co-occurrence guided contrastive learning in the re-ranking stage.

As the occurrence frequencies of the ICD codes are imbalanced, our focus lies in assessing the efficacy of our model, specifically on the infrequently appearing ICD codes. We categorize the ICD codes into four groups based on their occurrences in the training set: [0, 10), [10, 50), [50, 500), and [500,). Figure 3 illustrates the distribution of ICD codes and their occurrence percentages across the four categorized groups in the training set, which shows that the distribution of ICD frequency is highly biased, conforming to a long-tail distribution. Figures 3b and 3c present the performance of our model on MIMIC-III-full in comparison to the CAML baseline Mullenbach et al. (2018) across the four ICD groups on Macro-AUC and Micro-F1, respectively. Our model demonstrates significant improvements for both frequent and infrequent labels on both metrics.

Figure 3: (a) ICD code distribution. (b) Macro-AUC performance comparison of our model and CAML on ICD codes at different frequencies. (c) Micro-F1 performance comparison of our model and CAML on ICD codes at different frequencies.
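The per-frequency-group analysis in Figure 3 could be computed with a procedure along the following lines; the paper does not spell out this code, so the bucketing helper, the 0.5 decision threshold, and the handling of codes without positive test examples are our assumptions.

```python
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

GROUPS = [(0, 10), (10, 50), (50, 500), (500, np.inf)]  # training-frequency bins used above

def per_group_scores(y_true, y_prob, train_labels, threshold=0.5):
    """Macro-AUC and Micro-F1 restricted to the codes falling in each frequency bin.

    y_true, y_prob: (num_test_docs, L) gold multi-hot labels and predicted scores.
    train_labels:   (num_train_docs, L) multi-hot labels used to count code frequencies.
    """
    freq = train_labels.sum(axis=0)
    results = {}
    for lo, hi in GROUPS:
        cols = np.where((freq >= lo) & (freq < hi))[0]
        if cols.size == 0:
            continue
        t, p = y_true[:, cols], y_prob[:, cols]
        valid = t.sum(axis=0) > 0                 # AUC is undefined for all-negative codes
        auc = roc_auc_score(t[:, valid], p[:, valid], average="macro") if valid.any() else float("nan")
        f1 = f1_score(t, (p >= threshold).astype(int), average="micro", zero_division=0)
        results[f"[{lo}, {hi})"] = {"macro_auc": auc, "micro_f1": f1}
    return results
```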
To confirm the specific contributions of these modules in terms of enhancing both the effectiveness and robustness of the model, we conduct ablation studies with three different settings: (a) we examine the effectiveness of using auxiliary knowledge in the retrieval stage by removing the retrieval stage and ranking the ICD codes on the whole label set; (b) we examine the influence of different embedding methods by replacing the Clinical-Longformer with Clinical-BERT; and (c) we test the effectiveness of label embedding by replacing the encoding of the label with the average of word embeddings in the label descriptors. The experimental results are shown in Table 2. We also conduct case studies to qualitatively understand the effects of incorporating the label co-occurrence and the auxiliary knowledge. Two case studies have been presented in Appendix A.

| Method | F1 Macro | F1 Micro | P@8 | P@15 |
|---|---|---|---|---|
| Full Model | 0.114 | 0.603 | 0.775 | 0.623 |
| w/o auxiliary knowledge | 0.097 | 0.579 | 0.748 | 0.587 |
| embedded w/ Clinical-BERT | 0.083 | 0.548 | 0.711 | 0.546 |
| w/o Graphormer | 0.102 | 0.583 | 0.753 | 0.591 |

Table 2: Ablation experiment results on MIMIC-III-full. Bold: the optimal values.

#### Effectiveness of Using Auxiliary Knowledge for Retrieval

We employ three distinct types of auxiliary knowledge in the retrieval stage: DRG codes, CPT codes, and drug prescriptions. As shown in Table 2, removing auxiliary knowledge leads to a decline in performance, indicating the pivotal role of the retrieval stage. This outcome further provides evidence that external knowledge effectively addresses the challenge presented by a large pool of potential ICD codes. Through integrating external knowledge, the retrieval stage attains the capability to refine the candidate list using the co-occurrence relationships between ICD codes and the auxiliary knowledge, thereby amplifying both the efficiency and accuracy of the re-ranking stage. The selection of an appropriate candidate list for a given medical record hinges upon a hyper-parameter, specifically the threshold $\eta$ governing the co-occurrence between auxiliary knowledge and ICD codes. The choice of $\eta$ determines the number of candidates, which implicitly affects the overall performance of the model. Setting $\eta=0.005$, the candidate list guarantees inclusion of 99.22% of the gold-standard ICD codes, resulting in an average of 1,460 codes in the subset. Notably, this accounts for approximately one-sixth of the complete code set. A further reduction using BM25 limits the candidate list to 1,299 codes on average.

#### Comparison of Clinical-Longformer and Clinical-BERT

Increasing the maximum token limit is important in the context of clinical notes analysis as clinical texts are lengthy. Specifically, in the MIMIC-III dataset, the average length of the discharge summaries is 1,596 words. Given this substantial token volume in the clinical notes, encoding a maximum number of tokens prior to downstream analysis becomes a pivotal requirement, which facilitates robust and meaningful subsequent analysis. To test the effectiveness of using longer sequences, we compare the model performance of Clinical-Longformer and a BERT-based pre-trained language model (i.e., Clinical-BERT) which can encode a maximum of 512 tokens. As shown in Table 2, Clinical-Longformer substantially outperforms Clinical-BERT, indicating the importance of the maximum token limit on language models in the automatic medical coding task.
#### Effectiveness of Learning Label Features Using Code Co-occurrence The graph structure has been shown to be effective in modeling code correlations and Graphormer efficiently learns code representations. The findings presented in Table 2 highlight the affirmative impact of integrating code co-occurrence into label representations. By using Graphormer, the model effectively captures and exploits the intricate connections and interdependencies among the labels, thereby improving the overall performance. This indicates that incorporating code co-occurrence information with Graphormer empowers the model to gain insights from the collaborative behaviours of the labels, consequently facilitating a more holistic comprehension of the underlying label co-relations. ## 6 Conclusion In this paper, we regard the medical coding task as a recommendation problem and present a novel multi-stage retrieve and re-rank framework. The primary objective of the proposed framework is twofold: to construct a curated list of ICD codes and, subsequently, to further refine the candidate list for a given medical record. Specifically, we first conduct a two-step retrieval process, incorporating auxiliary knowledge and the BM25 algorithm. This approach retrieves a concise subset of the candidate list, mitigating the challenges of a very large and imbalanced label distribution. We then use a re-ranking model to refine the previously obtained candidate list, employing code co-occurrence guided contrastive learning. Experimental results demonstrate that our proposed framework outperforms the previous SOTA, which suggests that it provides more precise and contextually grounded ICD recommendations for the given medical records. In the future, our proposed framework may be extended with more external knowledge such as the Unified Medical Language System (UMLS) and code synonymy. ## Limitations Our usage of auxiliary knowledge is limited to external knowledge that includes DRG codes, CPT codes, and drug prescriptions, only. Other knowledge including disease-symptom, disease-lab relations, Unified Medical Language System (UMLS), and others, could also be potentially useful for the auto ICD coding task. We also acknowledge that the auxiliary knowledge we used is labeled by human annotators, which may require some extra effort. We are not quite sure about the workload for annotating different code terminologies, but we believe linking different code terminologies is important. Our study is constrained by its evaluation limited to MIMIC-III-full and MIMIC-III-top 50 datasets, primarily concentrated on common disease. To comprehensively assess the model’s performance on rare diseases, future work could benefit from a curated list of rare diseases validated by domain experts. ## Ethics Statement We are using the publicly available clinical dataset MIMIC-III, which contains de-identified patient information. We do not see any ethics issue here in this paper. ## Acknowledgements We would like to thank all reviewers for their comments, which helped improve this paper considerably. Computational resources used in preparing this research were provided, in part, by Compute Ontario666https://www.computeontario.ca, Digital Research Alliance of Canada777https://ccdb.alliancecan.ca, the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute888https://www.vectorinstitute.ai/partners. 
This research is partially funded by The Natural Sciences and Engineering Research Council of Canada (NSERC) through a Discovery Grant to R. E. Mercer. F. Rudzicz is supported by a CIFAR Chair in AI. ## References * Bai and Vucetic (2019) Tian Bai and Slobodan Vucetic. 2019. Improving medical code prediction from clinical text via incorporating online knowledge sources. _The World Wide Web Conference_ , page 72–82. * Baumel et al. (2018) Tal Baumel, Jumana Nassour-Kassis, Michael Elhadad, and Noémie Elhadad. 2018. Multi-label classification of patient notes a case study on ICD code assignment. In _The Workshops of the Thirty-Second AAAI Conf. on Art. Intell._ , pages 409–416. * Beltagy et al. (2020) Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. _arXiv:2004.05150_. * Cao et al. (2020) Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao, Shengping Liu, and Weifeng Chong. 2020. Hypercore: Hyperbolic and co-graph representation for automatic icd coding. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , page 3105–3114. * Ji et al. (2020) Shaoxiong Ji, Erik Cambria, and Pekka Marttinen. 2020. Dilated convolutional attention network for medical code assignment from clinical text. In _Proceedings of the 3rd Clinical Natural Language Processing Workshop_ , pages 73–78. * Johnson et al. (2016) Alistair E. W. Johnson, Tom J. Pollard, Lu Shen, Li wei H. Lehman, Mengling Feng, Mohammad Mahdi Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G. Mark. 2016. MIMIC-III, a freely accessible critical care database. _Scientific Data_ , 3. * Kipf and Welling (2017) Thomas N. Kipf and Max Welling. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In _Proceedings of the 5th Int. Conference on Learning Representations_. * Li and Yu (2020) Fei Li and Hong Yu. 2020. ICD coding from clinical text using multi-filter residual convolutional neural network. In _Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence_ , pages 8180–8187. * Li et al. (2023) Yikuan Li, Ramsey M Wehbe, Faraz S Ahmad, Hanyin Wang, and Yuan Luo. 2023. A comparative study of pretrained language models for long clinical text. _Journal of the American Medical Informatics Association_ , 30(2):340–347. * Liu et al. (2021) Yang Liu, Hua Cheng, Russell Klopfer, Matthew R. Gormley, and Thomas Schaaf. 2021. Effective convolutional attention network for multi-label clinical document classification. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 5941–5953. * Mullenbach et al. (2018) James Mullenbach, Sarah Wiegreffe, Jon Duke, Jimeng Sun, and Jacob Eisenstein. 2018. Explainable prediction of medical codes from clinical text. In _Proc. of the 2018 Conf. of the North American Chapter of the Ass. for Comp. Ling.: Human Language Technologies, Volume 1 (Long Papers)_ , pages 1101–1111. * Paszke et al. (2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, high-performance deep learning library. In _Advances in Neural Information Processing Systems 32_ , pages 8024–8035. * Rios and Kavuluru (2018) Anthony Rios and Ramakanth Kavuluru. 
2018. Few-shot and zero-shot multi-label learning for structured label spaces. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 3132–3142. * Robertson and Walker (1994) S. E. Robertson and S. Walker. 1994. Some simple effective approximations to the 2-poisson model for probabilistic weighted retrieval. In _Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval_ , page 232–241. * Shi et al. (2017) Haoran Shi, Pengtao Xie, Zhiting Hu, Ming Zhang, and Eric P. Xing. 2017. Towards automated ICD coding using deep learning. _ArXiv_ , abs/1711.04075. * Teng et al. (2023) Fei Teng, Yiming Liu, Tianrui Li, Yi Zhang, Shuangqing Li, and Yue Zhao. 2023. A review on deep neural networks for icd coding. _IEEE Trans. on Knowledge and Data Eng._ , 35(5):4357–4375. * Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In _Proc. of the 31st Int. Conference on Neural Information Processing Systems_ , page 6000–6010. * Vu et al. (2020) Thanh Vu, Dat Quoc Nguyen, and Anthony Nguyen. 2020. A label attention model for ICD coding from clinical text. In _Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20_ , pages 3335–3341. Main track. * Wang et al. (2022) Zihan Wang, Peiyi Wang, Lianzhe Huang, Xin Sun, and Houfeng Wang. 2022. Incorporating hierarchy into text encoder: a contrastive learning approach for hierarchical text classification. In _Proc. of the 60th Annual Meeting of the Assoc. for Computational Linguistics (Volume 1: Long Papers)_ , pages 7109–7119. * Xie and Xing (2018) Pengtao Xie and Eric Xing. 2018. A neural architecture for automated ICD coding. In _Proc. of the 56th Annual Meeting of the Ass. for Comp. Ling. (Vol. 1: Long Papers)_ , pages 1066–1076. * Xie et al. (2019) Xiancheng Xie, Yun Xiong, Philip S. Yu, and Yangyong Zhu. 2019. EHR coding with multi-scale feature attention and structured knowledge graph propagation. In _Proc. of the 28th ACM Int. Conf. on Information and Knowledge Management_ , page 649–658. * Yang et al. (2022) Zhichao Yang, Shufan Wang, Bhanu Pratap Singh Rawat, Avijit Mitra, and Hong Yu. 2022. Knowledge injected prompt based fine-tuning for multi-label few-shot ICD coding. In _Findings of the Assoc. for Computational Linguistics: EMNLP 2022_ , pages 1767–1781. * Ying et al. (2021) Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, and Tie-Yan Liu. 2021. Do transformers really perform badly for graph representation? In _Thirty-Fifth Conference on Neural Information Processing Systems_. * Yuan et al. (2022) Zheng Yuan, Chuanqi Tan, and Songfang Huang. 2022. Code synonyms do matter: Multiple synonyms matching network for automatic ICD coding. In _Proc. of the 60th Annual Meeting of the Assoc. for Comp. Ling. (Vol. 2: Short Papers)_ , pages 808–814. ## Appendix A Case Studies Figure 4: Case study on the effectiveness of incorporating label co- occurrence. Correctly predicted labels are marked in green and the incorrect ones are marked in red. We conducted case studies to qualitatively explore the impacts of integrating label co-occurrence (illustrated in Figure 4) and auxiliary knowledge (depicted in Figure 5). 
We compared the full model with models that did not integrate the label co-occurence and the external knowledge on the predictions of two patient records. For each patient, we present the discharge summary, ground truth ICD codes, label co-occurrence information, and auxiliary knowledge information, along with the predicted ICD codes from the full model and ablated models. In Case 1, the ground truth ICD codes ‘785.51 Cardiogenic shock’ and ‘V49.86 Do not resuscitate status’ are not explicitly mentioned in the discharge summary. The observed label co-occurrence between ‘427.5 Cardiac arrest’ and ‘785.51 Cardiogenic shock’, as well as co-relation between ‘96.71 Continuous invasive mechanical ventilation for less than 96 consecutive hours’ and ‘V49.86 Do not resuscitate status’ provide strong indicators suggesting the presence of the codes ‘785.51’ and ‘V49.86’. Without the label co-occurrence signals, the ablated model missed the predictions of codes ‘785.51’ and ‘V49.86’, indicating a failure to leverage latent label information. In Case 2, the patient has been diagnosed with ‘244.9 Unspecified acquired hypothyroidism’ with less explicit information in the discharge summary. Notably, the presence of the medication ‘Levothyroxine’ in the drug prescription, an element of auxiliary knowledge, suggests that the patient is likely to have acquired hypothyroidism. The ablated model, lacking the auxiliary knowledge, misses the prediction of code ‘244.9’. The aforementioned Cases 1 and 2 highlight the benefits of incorporating label co-occurrence and auxiliary knowledge, respectively. Figure 5: Case study on the effectiveness of incorporating auxiliary knowledge. Correctly predicted labels are marked in green and the incorrect ones are marked in red.
# VOICE CLONING: A MULTI-SPEAKER TEXT-TO-SPEECH SYNTHESIS APPROACH BASED ON TRANSFER LEARNING ###### Abstract Deep learning models are becoming predominant in many fields of machine learning. Text-to-Speech (TTS), the process of synthesizing artificial speech from text, is no exception. To this end, a deep neural network is usually trained using a corpus of several hours of recorded speech from a single speaker. Trying to produce the voice of a speaker other than the one learned is expensive and requires large effort since it is necessary to record a new dataset and retrain the model. This is the main reason why the TTS models are usually single speaker. The proposed approach has the goal to overcome these limitations trying to obtain a system which is able to model a multi-speaker acoustic space. This allows the generation of speech audio similar to the voice of different target speakers, even if they were not observed during the training phase. Index Terms— text-to-speech, deep learning, multi-speaker speech synthesis, speaker embedding, transfer learning ## 1 Introduction Text-to-Speech (TTS) synthesis, the process of generating natural speech from text, remains a challenging task despite decades of investigation. Nowadays there are several TTS systems able to get impressive results in terms of synthesis of natural voices very close to human ones. Unfortunately, many of these systems learn to synthesize text only with a single voice. The goal of this work is to build a TTS system which can generate in a data efficient manner natural speech for a wide variety of speakers, not necessarily seen during the training phase. The activity that allows the creation of this type of models is called Voice Cloning and has many applications, such as restoring the ability to communicate naturally to users who have lost their voice or customizing digital assistants such as Siri. Over time, there has been a significant interest in end-to-end TTS models trained directly from text-audio pairs; Tacotron 2 [1] used WaveNet [2] as a vocoder to invert spectrograms generated by sequence-to-sequence with attention [3] model architecture that encodes text and decodes spectrograms, obtaining a naturalness close to the human one. It only supported a single speaker. Gibiansky et al. [4] proposed a multi-speaker variation of Tacotron able to learn a low-dimensional speaker embedding for each training speaker. Deep Voice 3 [5] introduced a fully convolutional encoder-decoder architecture which supports thousands of speakers from LibriSpeech [6]. However, these systems only support synthesis of voices seen during training since they learn a fixed set of speaker embeddings. Voiceloop [7] proposed a novel architecture which can generate speech from voices unseen during training but requires tens of minutes of speech and transcripts of the target speaker. In recent extensions, only a few seconds of speech per speaker can be used to generate new speech in that speaker’s voice. Nachmani et al. [8] for example, extended Voiceloop to utilize a target speaker encoding network to predict speaker embedding directly from a spectrogram. This network is jointly trained with the synthesis network to ensure that embeddings predicted from utterances by the same speaker are closer than embeddings computed from different speakers. Jia et al. 
[9] proposed a speaker encoder model similar to [8], except that the encoder network was trained independently, exploring transfer learning from a pre-trained speaker verification model towards the synthesis model. This work is similar to [9]; however, it introduces different architectures and uses a new transfer learning technique, still based on a pre-trained speaker verification model but exploiting utterance embeddings rather than speaker embeddings. In addition, we use a different strategy to condition the speech synthesis with the voice of speakers not observed before, and we compare several neural architectures for the speaker encoder model. The paper is organized as follows: Section 2 describes the model architecture and its formal definition; Section 3 reports the experiments and results used to evaluate the proposed solution; finally, conclusions are reported in Section 4. ## 2 Model Architecture Following [9], the proposed system consists of three components: a _speaker encoder_ , which computes a fixed-dimensional embedding vector from a few seconds of reference speech of a target speaker; a _synthesizer_ , which predicts a mel spectrogram from an input text and an embedding vector; a _neural vocoder_ , which infers time-domain waveforms from the mel spectrograms generated by the synthesizer. At inference time, the speaker encoder takes as input a short reference utterance of the target speaker and generates, according to its internal learned speaker characteristics space, an embedding vector. The synthesizer takes as input a phoneme (or grapheme) sequence and generates a mel spectrogram, conditioned by the speaker encoder embedding vector. Finally, the vocoder takes the output of the synthesizer and generates the speech waveform. This is illustrated in Figure 1. ### 2.1 Problem Definition Fig. 1: High level overview of the three components of the system. Consider a dataset of N speakers, each of which has M utterances in the time-domain. Let us denote the j-th utterance of the i-th speaker as uij and the feature extracted from it as xij ($1\leq i\leq N$ and $1\leq j\leq M$). We chose the mel spectrogram as the feature vector xij. The speaker encoder $\mathcal{E}$ has the task of producing meaningful embedding vectors that should characterize the voices of the speakers. It computes the embedding vector eij corresponding to the utterance uij as: $\mathbf{e}_{ij}=\mathcal{E}\left(\mathbf{x}_{ij};\mathbf{w}_{\mathcal{E}}\right)$ (1) where wE represents the encoder model parameters. We refer to it as the _utterance embedding_. In addition to defining an embedding at the utterance level, we can also define the _speaker embedding_ : $\mathbf{c}_{i}=\frac{1}{n}\sum_{j=1}^{n}\mathbf{e}_{ij}$ (2) In [9], the synthesizer $\mathcal{S}$ predicts xij given ci and tij, the transcript of the utterance uij: $\hat{\mathbf{x}}_{ij}=\mathcal{S}\left(\mathbf{c}_{i},\mathbf{t}_{ij};\mathbf{w}_{\mathcal{S}}\right)$ (3) where wS represents the synthesizer model parameters. In our approach, we propose to use the utterance embedding rather than the speaker embedding: $\hat{\mathbf{x}}_{ij}=\mathcal{S}\left(\mathbf{e}_{ij},\mathbf{t}_{ij};\mathbf{w}_{\mathcal{S}}\right)$ (4) We will motivate this choice in Paragraph 2.4. Finally, the vocoder $\mathcal{V}$ generates uij given $\hat{\mathbf{x}}_{ij}$. So we have: $\hat{\mathbf{u}}_{ij}=\mathcal{V}\left(\hat{\mathbf{x}}_{ij};\mathbf{w}_{\mathcal{V}}\right)$ (5) where wV represents the vocoder model parameters.
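As a concrete reading of Equations (1) and (2), the two embedding levels differ only in whether the per-utterance d-vectors are averaged over a speaker's utterances. The sketch below uses a toy stand-in for the encoder $\mathcal{E}$ (the real encoder is the GRU network of Section 2.2), so it only illustrates the bookkeeping, not the model:

```python
import numpy as np

def l2_normalize(v, eps=1e-8):
    return v / (np.linalg.norm(v) + eps)

def utterance_embedding(encoder, mel):
    """Eq. (1): e_ij = E(x_ij; w_E), L2-normalized as in Section 2.2."""
    return l2_normalize(encoder(mel))

def speaker_embedding(encoder, mels):
    """Eq. (2): c_i is the mean of the speaker's utterance embeddings."""
    e = np.stack([utterance_embedding(encoder, m) for m in mels])
    return e.mean(axis=0)

toy_encoder = lambda mel: mel.mean(axis=0)              # stand-in for the GRU encoder
mels = [np.random.randn(160, 40) for _ in range(5)]     # 5 utterances, 160 frames, 40 mel bins
print(speaker_embedding(toy_encoder, mels).shape)       # (40,) for this toy encoder
```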
This system could be trained in an end-to-end mode trying to optimize the following objective function: $\min_{\mathbf{w}_{\mathcal{E}},\mathbf{w}_{\mathcal{S}},\mathbf{w}_{\mathcal{V}}}L_{\mathcal{V}}\left(\mathbf{u}_{ij},\mathcal{V}\left(\mathcal{S}\left(\mathcal{E}\left(\mathbf{x}_{ij};\mathbf{w}_{\mathcal{E}}\right),\mathbf{t}_{ij};\mathbf{w}_{\mathcal{S}}\right);\mathbf{w}_{\mathcal{V}}\right)\right)$ (6) where LV is a loss function in the time-domain. However, it requires training the three models on the same dataset; moreover, the convergence of the combined model could be hard to reach. To overcome this drawback, the synthesizer can be trained independently to directly predict the mel spectrogram xij of a target utterance uij, optimizing the following objective function: $\min_{\mathbf{w}_{\mathcal{S}}}L_{\mathcal{S}}\left(\mathbf{x}_{ij},\mathcal{S}\left(\mathbf{e}_{ij},\mathbf{t}_{ij};\mathbf{w}_{\mathcal{S}}\right)\right)$ (7) where LS is a loss function in the time-frequency domain. A pre-trained speaker encoder model must be available to compute the utterance embedding eij. The vocoder can be trained either directly on the mel spectrograms predicted by the synthesizer or on the groundtruth mel spectrograms: $\min_{\mathbf{w}_{\mathcal{V}}}L_{\mathcal{V}}\left(\mathbf{u}_{ij},\mathcal{V}\left(\mathbf{x}_{ij};\mathbf{w}_{\mathcal{V}}\right)\right)\text{ or }\min_{\mathbf{w}_{\mathcal{V}}}L_{\mathcal{V}}\left(\mathbf{u}_{ij},\mathcal{V}\left(\hat{\mathbf{x}}_{ij};\mathbf{w}_{\mathcal{V}}\right)\right)$ (8) where LV is a loss function in the time-domain. In the second case, a pre-trained synthesizer model is needed. While defining the objective function is straightforward for both the synthesizer and the vocoder, this is unfortunately not the case for the speaker encoder. The encoder has no labels to train on, because its task is only to build the space of characteristics from which the embedding vectors are drawn. The Generalized End-to-End (GE2E) loss [10] solves this problem and allows the speaker encoder to be trained independently. Consequently, we can define the following objective function: $\min_{\mathbf{w}_{\mathcal{E}}}L_{\mathcal{G}}(\mathbf{S};w_{\mathcal{E}})=\sum_{j,i}L\left(\mathbf{e}_{ji}\right)$ (9) where S represents a similarity matrix and LG is the GE2E loss function. ### 2.2 Speaker Encoder Fig. 2: Speaker encoder model architecture. Input is composed of a time sequence of dimension 40. The last linear layer takes the hidden state of the last GRU layer as input. The speaker encoder must be able to produce an embedding vector that meaningfully represents speaker characteristics in the transformed space starting from a target speaker’s utterance. Furthermore, the model should identify these characteristics using a short speech signal, regardless of its phonetic content and background noise. This can be achieved by training a neural network model on a text-independent speaker verification task that optimizes the GE2E loss, so that embeddings of utterances from the same speaker have high cosine similarity, while those of utterances from different speakers are far apart in the embedding space. The network maps a sequence of mel spectrogram frames to a fixed-dimensional embedding vector, known as d-vector [11, 12].
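To make Equation (9) more concrete, a simplified GE2E-style objective can be written down directly from the cosine-similarity matrix between utterance embeddings and speaker centroids. The snippet below is only a hedged stand-in for the loss of [10]: it uses a fixed scale w and offset b and omits the leave-one-out centroid refinement of the original formulation:

```python
import torch
import torch.nn.functional as F

def ge2e_softmax_loss(embeddings, w=10.0, b=-5.0):
    """Simplified GE2E-style loss. embeddings: (N speakers, M utterances, D), L2-normalized."""
    N, M, D = embeddings.shape
    centroids = F.normalize(embeddings.mean(dim=1), dim=-1)   # one centroid per speaker (N, D)
    e = embeddings.reshape(N * M, D)
    sim = w * e @ centroids.T + b                              # similarity matrix S, shape (N*M, N)
    targets = torch.arange(N).repeat_interleave(M)             # true speaker index of each utterance
    return F.cross_entropy(sim, targets)                       # -S_true + logsumexp over speakers

emb = F.normalize(torch.randn(64, 10, 256), dim=-1)           # a batch of 64 speakers x 10 utterances
print(ge2e_softmax_loss(emb))
```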
Input mel spectrograms are fed to a network consisting of one Conv1D [13] layer of 512 units followed by a stack of 3 GRU [14] layers of 512 units, each followed by a linear projection of dimension 256. Following [9], the final embedding dimension is 256 and it is created by L2-normalizing the output of the top layer at the final frame. This is shown in Figure 2. This architecture proved to be the best among the various ones we tried, as we will see in Section 3. During the training phase, all the utterances are split into partial utterances that are 1.6 seconds long (160 frames). Also at inference time, the input utterance is split into segments of 1.6 seconds with 50% overlap and the model processes each segment individually. Following [9, 10], the final utterance-wise d-vector is generated by L2-normalizing the window-wise d-vectors and taking the element-wise average. ### 2.3 Synthesizer and Vocoder The synthesizer component of the system is a sequence-to-sequence model with attention [1, 3] which is trained on pairs of text-derived token sequences and audio-derived mel spectrogram sequences. Furthermore, the network is trained in a transfer learning configuration (see Paragraph 2.4), using an independently-trained speaker encoder to extract embedding vectors used to condition the output of this component. For reproducibility, the adopted vocoder component of the system is a PyTorch GitHub implementation111https://github.com/fatchord/WaveRNN of the neural vocoder WaveRNN [15]. This model is not directly conditioned on the output of the speaker encoder but only on the input mel spectrogram. The multi-speaker vocoder is simply trained by using data from many speakers (see Section 3). ### 2.4 Transfer Learning Modality The conditioning of the synthesizer via the speaker encoder is the fundamental part that makes the system multi-speaker: the embedding vectors computed by the speaker encoder condition the mel spectrograms generated by the synthesizer so that they incorporate the new speaker’s voice. In [9], the embedding vectors are speaker embeddings obtained by Equation 2. We used the utterance embeddings computed by Equation 1. In fact, at inference time only one utterance of the target speaker is fed to the speaker encoder, which therefore produces a single utterance-level d-vector. Thus, in this case, it is not possible to create an embedding at the speaker level since the averaging operation cannot be applied. This implies that only utterance embeddings can be used during the inference phase. In addition, an averaging mechanism could cause some loss of accuracy, because utterances of the same speaker often show large variations in pitch and voice quality, whereas a single utterance has lower internal variation. Following [9], the embedding vectors computed by the speaker encoder are concatenated only with the synthesizer encoder output in order to condition the synthesis. However, we experimented with a new concatenation technique: first we passed the embedding through a single linear layer, and then we concatenated the output of this layer with the synthesizer encoder output. The goal was to exploit the weights of the linear layer to make the embedding vector more meaningful, since the layer was trained together with the synthesizer. We noticed that this method achieved good convergence of training and was about 75% faster than the former vector concatenation.
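A minimal sketch of this projected-embedding conditioning is given below; the encoder width and projection size are assumptions, since the exact synthesizer dimensions are not stated in the text:

```python
import torch
import torch.nn as nn

class ConditioningConcat(nn.Module):
    """Pass the 256-d d-vector through a trainable linear layer, broadcast it over time,
    and concatenate it with the synthesizer encoder outputs (dimensions are assumptions)."""
    def __init__(self, enc_dim=512, emb_dim=256, proj_dim=256):
        super().__init__()
        self.proj = nn.Linear(emb_dim, proj_dim)

    def forward(self, encoder_out, d_vector):
        # encoder_out: (B, T, enc_dim); d_vector: (B, emb_dim)
        e = self.proj(d_vector).unsqueeze(1)              # (B, 1, proj_dim)
        e = e.expand(-1, encoder_out.size(1), -1)         # repeat over the T time steps
        return torch.cat([encoder_out, e], dim=-1)        # (B, T, enc_dim + proj_dim)

cond = ConditioningConcat()
out = cond(torch.randn(2, 120, 512), torch.randn(2, 256))
print(out.shape)   # torch.Size([2, 120, 768])
```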
## 3 Experiments and Results We used different publicly available datasets to train and evaluate the components of the system. For the speaker encoder, different neural network architectures were tested. Each of them was trained using a combination of three public sets: LibriTTS [16] train-other and dev-other; VoxCeleb [17] dev and VoxCeleb2 [18] dev. In this way, we obtained 8,381 speakers and 1,419,192 utterances, not necessarily all clean and noiseless. Furthermore, transcripts were not required. The models were trained using Adam [19] as optimizer with an initial learning rate equal to 0.001. Moreover, we experimented with different learning rate decay strategies. During the evaluation phase, we used a combination of the corresponding test sets, obtaining 191 speakers and 45,132 utterances. Both training and test sets were sampled at 16 kHz, and input mel spectrograms were computed from 25ms STFT analysis windows with a 10ms step and passed through a 40-channel mel-scale filterbank. We separately trained the synthesizer and the vocoder using the same training set given by the combination of the two “clean” sets of LibriTTS, obtaining 1,151 speakers, 149,736 utterances and a total of 245.14 hours of 22.05 kHz audio. We trained the synthesizer using the L1 loss [20] and Adam as optimizer. Moreover, the input texts were converted into phoneme sequences, and target mel spectrogram features were computed on 50 ms signal windows, shifted by 12.5 ms and passed through an 80-channel mel-scale filterbank. The vocoder was trained using groundtruth waveforms rather than the synthesizer outputs. ### 3.1 Baseline System We chose as baseline for our work Corentin Jemine’s real-time voice cloning system [21], a public re-implementation of the Google system [9] available on GitHub222https://github.com/CorentinJ/Real-Time-Voice-Cloning. This system is composed of three components: a recurrent speaker encoder consisting of 3 LSTM [22] layers and a final linear layer, each of which has 256 units; a sequence-to-sequence with attention synthesizer based on [1]; and WaveRNN [15] as vocoder. ### 3.2 Speaker Encoder: Proposed System To evaluate all the speaker encoder models and choose the best one, the Speaker Verification Equal Error Rate (SV-EER) was estimated by pairing each test utterance with each enrollment speaker. The models implemented are: * • _rec_conv network_ : 5 Conv1D layers, 1 GRU layer and a final linear layer; * • _rec_conv_2 network_ : 3 Conv1D layers, 2 GRU layers each followed by a linear projection layer; * • _gru network_ : 3 GRU layers each followed by a linear projection layer; * • _advanced_gru network_ : 1 Conv1D layer and 3 GRU layers each followed by a linear projection layer (Figure 2); * • _lstm network_ : 1 Conv1D layer and 3 LSTM [22] layers each followed by a linear projection layer. All layers have 512 units except the linear ones which have 256. Moreover, a dropout rate of 0.2 was applied between layers, but not before the first layer or after the last one. All the models were trained using a batch size of 64 speakers and 10 utterances for each speaker. The results obtained are shown in Table 1. Table 1: Speaker Verification Equal Error Rates.
| Name | Step Time | Train Loss | SV-EER | LR Decay |
|---|---|---|---|---|
| rec_conv | 0.33s | 0.36 | 0.073 | Reduce on Plateau |
| rec_conv_2 | 0.45s | 0.49 | 0.075 | Reduce on Plateau |
| gru | 1.45s | 0.33 | 0.054 | Every 100,000 steps |
| advanced_gru | 0.86s | 0.14 | 0.040 | Exponential |
| lstm | 1.08s | 0.17 | 0.052 | Exponential |

We designed the advanced_gru network to combine the advantages of convolutional and GRU networks. In fact, as the table shows, this architecture was much faster than the gru network during training, and obtained the best SV-EER on the test set. Figure 3 illustrates the projection into a two-dimensional space of the utterance embeddings computed by the advanced_gru network on the basis of 6 utterances extracted from 12 speakers of the test set. In Figure 4, the 12 speakers are 6 men and 6 women. The projections were made using UMAP [23]. Both figures show that the model has created an internal feature space that is robust with respect to the speakers, forming well-separated clusters of utterances per speaker and nicely separating male speakers from female ones. The SV-EER obtained on the test set by the speaker encoder of the proposed system is 0.040, versus 0.049 for the baseline. Fig. 3: Advanced GRU network: test utterance embeddings projection. Fig. 4: Advanced GRU network: six utterances for each of six male and six female speakers taken from the test set. ### 3.3 Similarity Evaluation To assess how similar the waveforms generated by the system were to the original ones, we transformed the generated audio signals into utterance embeddings (using the advanced_gru speaker encoder) and then projected them into a two-dimensional space together with the utterance embeddings computed from the groundtruth audio. As test speakers, we randomly chose eight target speakers: four speakers (two male and two female) were extracted from the test-set-clean of LibriTTS [16], three (two male and one female) from VCTK [24], and finally a female proprietary voice. For each speaker we randomly extracted 10 utterances and compared them with the utterances generated by the system, computing the cosine similarity. The per-speaker average cosine similarity between the generated and groundtruth utterance embeddings ranges from 0.56 to 0.76. Figure 5 shows that synthesized utterances tend to lie close to real speech from the same speaker in the embedding space. Fig. 5: Groundtruth utterance embeddings vs the corresponding generated ones of the 8 speakers chosen for testing. ### 3.4 Subjective Evaluation Finally, we evaluated how similar, in terms of speech timbre, the generated utterances subjectively sound with respect to the original ones. To do this, we gathered Mean Similarity Scores (MSS) based on a 5-point mean opinion score scale, where 1 stands for “very different” and 5 for “very similar”. Ten utterances of the proprietary female voice were cloned using both the proposed and the baseline system, and then 12 subjects, most of them TTS experts, were asked to listen to the 20 samples, randomly mixed, and rate them. Participants were also provided with an original utterance as reference. The question asked was: “How do you rate the similarity of these samples with respect to the reference audio? Try to focus on vocal timbre and not on content, intonation or acoustic quality of the audio”. The results obtained are shown in Table 2.
Although not conclusive, this experiment provides subjective evidence of the quality of the proposed approach, despite the significant variance of both systems, which is largely due to the low number of test participants. Table 2: MSS of the baseline and the proposed systems.

| System | MSS |
|---|---|
| baseline | $2.59\pm 1.03$ |
| proposed | $3.17\pm 0.97$ |

## 4 Conclusions In this work, our goal was to build a Voice Cloning system which could generate natural speech for a variety of target speakers in a data-efficient manner. Our system combines an independently trained speaker encoder network with a sequence-to-sequence with attention architecture and a neural vocoder model. Using a transfer learning technique from a speaker-discriminative encoder model based on utterance embeddings rather than speaker embeddings, the synthesizer and the vocoder are able to generate good quality speech also for speakers not observed before. Although the experiments showed a reasonable similarity to real speech and improvements over the baseline, the proposed system does not fully reach human-level naturalness, in contrast to the single-speaker results of [1]. Additionally, the system is not able to reproduce the speaker prosody of the target audio. These are consequences of the additional difficulty of generating speech for a variety of speakers with significantly less data per speaker than when training a model on a single speaker. ## 5 Acknowledgements The authors thank Roberto Esposito, Corentin Jemine, Quan Wang, Ignacio Lopez Moreno, Skjalg Lepsøy, Alessandro Garbo and Jürgen Van de Walle for their helpful discussions and feedback. ## References * [1] J. Shen, R. Pang, R. J. Weiss, M. Schuster, N. Jaitly, Z. Yang, Z. Chen, Y. Zhang, Y. Wang, R. Skerry-Ryan, R. A. Saurous, Y. Agiomyrgiannakis, and Y. Wu, “Natural tts synthesis by conditioning wavenet on mel spectrogram predictions,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018, pp. 4779–4783. * [2] Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu, “Wavenet: A generative model for raw audio,” CoRR, vol. abs/1609.03499, 2016. * [3] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio, “Neural machine translation by jointly learning to align and translate,” CoRR, vol. abs/1409.0473, 2015. * [4] Andrew Gibiansky, Sercan Arik, Gregory Diamos, John Miller, Kainan Peng, Wei Ping, Jonathan Raiman, and Yanqi Zhou, “Deep voice 2: Multi-speaker neural text-to-speech,” in Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, Eds., pp. 2962–2970. Curran Associates, Inc., 2017. * [5] Wei Ping, Kainan Peng, Andrew Gibiansky, Sercan O. Arik, Ajay Kannan, Sharan Narang, Jonathan Raiman, and John Miller, “Deep voice 3: 2000-speaker neural text-to-speech,” in International Conference on Learning Representations, 2018. * [6] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, “Librispeech: An asr corpus based on public domain audio books,” in 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015, pp. 5206–5210. * [7] Yaniv Taigman, Lior Wolf, Adam Polyak, and Eliya Nachmani, “Voiceloop: Voice fitting and synthesis via a phonological loop,” in International Conference on Learning Representations, 2018.
* [8] Eliya Nachmani, Adam Polyak, Yaniv Taigman, and Lior Wolf, “Fitting new speakers based on a short untranscribed sample,” CoRR, vol. abs/1802.06984, 2018. * [9] Ye Jia, Yu Zhang, Ron J. Weiss, Quan Wang, Jonathan Shen, Fei Ren, Zhifeng Chen, Patrick Nguyen, Ruoming Pang, Ignacio Lopez-Moreno, and Yonghui Wu, “Transfer learning from speaker verification to multispeaker text-to-speech synthesis,” CoRR, vol. abs/1806.04558, 2018. * [10] Lipeng Wan, Qi shan Wang, Alan Papir, and Ignacio Lopez-Moreno, “Generalized end-to-end loss for speaker verification,” 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4879–4883, 2018. * [11] Georg Heigold, Ignacio Moreno, Samy Bengio, and Noam Shazeer, “End-to-end text-dependent speaker verification,” 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5115–5119, 2016. * [12] Ehsan Variani, Xin Lei, Erik McDermott, Ignacio Lopez Moreno, and Javier Gonzalez-Dominguez, “Deep neural networks for small footprint text-dependent speaker verification,” in Proc. ICASSP, 2014. * [13] Serkan Kiranyaz, Onur Avci, Osama Abdeljaber, Turker Ince, Moncef Gabbouj, and Daniel J. Inman, “1d convolutional neural networks and applications: A survey,” ArXiv, vol. abs/1905.03554, 2019. * [14] Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio, “Empirical evaluation of gated recurrent neural networks on sequence modeling,” in NIPS 2014 Workshop on Deep Learning, December 2014, 2014. * [15] Nal Kalchbrenner, Erich Elsen, Karen Simonyan, Seb Noury, Norman Casagrande, Edward Lockhart, Florian Stimberg, Aäron van den Oord, Sander Dieleman, and Koray Kavukcuoglu, “Efficient neural audio synthesis,” in ICML, 2018. * [16] Heiga Zen, Viet Dang, Rob Clark, Yu Zhang, Ron J. Weiss, Ye Jia, Zhifeng Chen, and Yonghui Wu, “Libritts: A corpus derived from librispeech for text-to-speech,” in INTERSPEECH, 2019. * [17] Arsha Nagrani, Joon Son Chung, and Andrew Zisserman, “Voxceleb: A large-scale speaker identification dataset,” in INTERSPEECH, 2017. * [18] Joon Son Chung, Arsha Nagrani, and Andrew Zisserman, “Voxceleb2: Deep speaker recognition,” in INTERSPEECH, 2018. * [19] Diederik Kingma and Jimmy Ba, “Adam: A method for stochastic optimization,” International Conference on Learning Representations, 12 2014. * [20] Katarzyna Janocha and Wojciech Czarnecki, “On loss functions for deep neural networks in classification,” ArXiv, vol. abs/1702.05659, 2017. * [21] Corentin Jemine, “Master thesis: Automatic multispeaker voice cloning,” 2019, Unpublished master’s thesis, Université de Liège, Liège, Belgique. * [22] Klaus Greff, Rupesh K. Srivastava, Jan Koutnik, Bas R. Steunebrink, and Jurgen Schmidhuber, “Lstm: A search space odyssey,” IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 10, pp. 2222–2232, Oct 2017. * [23] Leland McInnes and John Healy, “Umap: Uniform manifold approximation and projection for dimension reduction,” ArXiv, vol. abs/1802.03426, 2018. * [24] Christophe Veaux, Junichi Yamagishi, and Kirsten MacDonald, “Cstr vctk corpus: English multi-speaker corpus for cstr voice cloning toolkit,” 2018.
Open Communications in Nonlinear Mathematical Physics (ocnmp), Vol. 2 (2022). Letter to the Editors. © The author(s). Distributed under a Creative Commons Attribution 4.0 International License. On fully-nonlinear symmetry-integrable equations with rational functions in their highest derivative: Recursion operators Marianna Euler and Norbert Euler${}^{\,*}$ Centro Internacional de Ciencias, Av. Universidad s/n, Colonia Chamilpa, 62210 Cuernavaca, Morelos, Mexico ∗ Corresponding author<EMAIL_ADDRESS> Abstract: We report a class of symmetry-integrable third-order evolution equations in 1+1 dimensions under the condition that the equations admit a second-order recursion operator that contains an adjoint symmetry (integrating factor) of order six. The recursion operators are given explicitly. ## 1 Introduction We recently reported four fully-nonlinear Möbius-invariant and symmetry-integrable third-order evolution equations, namely [2] $\displaystyle u_{t}=\frac{u_{x}}{(b-S)^{2}},\quad b\neq 0$ (1.1a) $\displaystyle u_{t}=\frac{u_{x}}{S^{2}}$ (1.1b) $\displaystyle u_{t}=-2\frac{u_{x}}{\sqrt{S}}$ (1.1c) $\displaystyle u_{t}=\frac{u_{x}(a_{1}-S)}{(a_{1}^{2}+3a_{2})(S^{2}-2a_{1}S-3a_{2})^{1/2}},\quad a_{1}^{2}+3a_{2}\neq 0,$ (1.1d) where $S$ denotes the Schwarzian derivative $\displaystyle S:=\frac{u_{xxx}}{u_{x}}-\frac{3}{2}\left(\frac{u_{xx}}{u_{x}}\right)^{2}.$ (1.2) This classification was achieved by matching quasi-linear auxiliary symmetry-integrable evolution equations in $S$ for each equation (1.1a) – (1.1d). In [3] we proposed a method to compute the higher members of the hierarchies of (1.1a) – (1.1d) without the knowledge of the equations’ recursion operators. In particular, the proposed method makes use of the recursion operators of the auxiliary quasi-linear evolution equations in the variable $S$. This is an essential point since it is in general rather complicated and tedious to compute recursion operators, especially for fully-nonlinear equations. It is important to point out that the method proposed in [3] to compute the higher-order members of the hierarchies only applies to evolution equations that are Möbius-invariant and symmetry-integrable. Furthermore, we point out that it is not possible to extend the idea of Möbius-invariant evolution equations to systems of evolution equations in a direct sense. This has been investigated in [4]. Inspired by the above-mentioned results, we address here the problem of identifying fully-nonlinear symmetry-integrable evolution equations beyond the Möbius-invariant class, and we do so by requiring the equations to admit a recursion operator of a certain form. In particular, we restrict ourselves to evolution equations that contain rational functions in $u_{xxx}$. Moreover, we assume a recursion operator of order two with an integrating factor of maximum order six. This of course restricts us to a special class of equations, namely equations that admit this type of recursion operator. Nevertheless, we believe that our findings are of interest and that the results reported here are new. We would like to point out that Hernández Heredero [6] classified a type of third-order integrable fully-nonlinear evolution equations that does not include equations with rational functions in $u_{xxx}$.
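The Möbius invariance underlying (1.1a)–(1.1d) rests on the fact that the Schwarzian derivative (1.2) is itself unchanged when $u$ is replaced by $(au+b)/(cu+d)$. A short symbolic check of this standard fact (added here for illustration; not part of the original letter) is:

```python
import sympy as sp

x, a, b, c, d = sp.symbols('x a b c d')
u = sp.Function('u')(x)

def schwarzian(f):
    # S := f_xxx/f_x - (3/2)*(f_xx/f_x)**2, cf. Eq. (1.2)
    return sp.diff(f, x, 3)/sp.diff(f, x) - sp.Rational(3, 2)*(sp.diff(f, x, 2)/sp.diff(f, x))**2

v = (a*u + b)/(c*u + d)                              # Moebius transformation of u
print(sp.simplify(schwarzian(v) - schwarzian(u)))    # expected output: 0
```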
## 2 Notations and conditions To fix the notation and to recall the conditions that are needed in this paper, we consider the general $n$th-order autonomous evolution equation in 1+1 dimensions $\displaystyle E:=u_{t}-F(u,u_{x},u_{xx},u_{xxx},\ldots,u_{nx})=0.$ (2.1) The subscripts of $u$ denote partial derivatives, where partial derivatives of order 4 and higher are indicated by $u_{nx}$, $n\geq 4$. Equation (2.1) is said to be symmetry-integrable if it admits a recursion operator $R[u]$ that generates an infinite number of local Lie-Bäcklund (or generalized) symmetries for the equation. In this paper we consider recursion operators of the following form $\displaystyle R[u]:=\sum_{k=1}^{m}G_{k}[u]D_{x}^{k}+G_{0}[u]+\sum_{j=1}^{p}I_{j}[u]D_{x}^{-1}\circ\Lambda_{j}[u].$ (2.2) The notation $R[u]$ and $G_{j}[u]$ indicates that the operator $R$ and functions $G_{j}$ depend on $u,\ u_{x},\ u_{xx},\ldots$ up to an order that is ab initio not fixed. Here $I_{j}$ are Lie-Bäcklund symmetry coefficients for (2.1), i.e. the coefficients of a symmetry generator $\displaystyle Z=I_{j}[u]\frac{\partial\ }{\partial u}$ (2.3) which satisfies the condition $\displaystyle\left.\vphantom{\frac{DA}{DB}}L_{E}[u]I_{j}[u]\right|_{E=0}=0,$ (2.4) where $L_{E}[u]$ denotes the linear operator $\displaystyle L_{E}[u]:=\frac{\partial E}{\partial u}+\frac{\partial E}{\partial u_{t}}D_{t}+\frac{\partial E}{\partial u_{x}}D_{x}+\frac{\partial E}{\partial u_{xx}}D_{x}^{2}+\cdots+\frac{\partial E}{\partial u_{nx}}D_{x}^{n}.$ (2.5) $\Lambda_{j}[u]$ are integrating factors for conservation laws $\displaystyle\left.\vphantom{\frac{DA}{DB}}D_{t}\Phi^{t}[u]+D_{x}\Phi^{x}[u]\right|_{E=0}=0,$ (2.6) of (2.1), where $\displaystyle\Lambda[u]=\hat{E}[u]\Phi^{t}[u]$ (2.7) and $\Lambda$ must satisfy the condition $\displaystyle\left.\vphantom{\frac{DA}{DB}}\hat{E}[u]\left(\Lambda[u]E\right)\right|_{E=0}=0.$ (2.8) Here $\hat{E}[u]$ is the Euler operator $\displaystyle\hat{E}[u]:=\frac{\partial\ }{\partial u}-D_{t}\circ\frac{\partial\ }{\partial u_{t}}-D_{x}\circ\frac{\partial\ }{\partial u_{x}}+D_{x}^{2}\circ\frac{\partial\ }{\partial u_{xx}}-D_{x}^{3}\circ\frac{\partial\ }{\partial u_{3x}}+\cdots.$ (2.9) Note that condition (2.8) is equivalent to $\displaystyle\left.\vphantom{\frac{DA}{DB}}L^{*}_{E}[u]\Lambda[u]\right|_{E=0}=0$ (2.10a) $\displaystyle\mbox{and}\quad L_{\Lambda}[u]E=L_{\Lambda}^{*}[u]E.$ (2.10b) The first condition (2.10a) requires $\Lambda$ to be an adjoint symmetry for (2.1), whereas the second condition (2.10b) requires $\Lambda$ to be a self-adjoint function (for scalar evolution equations this means even-order). Here $L_{E}^{*}[u]$ denotes the adjoint operator of $L_{E}[u]$, namely $\displaystyle L_{E}^{*}[u]:=\frac{\partial E}{\partial u}-D_{t}\circ\frac{\partial E}{\partial u_{t}}-D_{x}\circ\frac{\partial E}{\partial u_{x}}+D_{x}^{2}\circ\frac{\partial E}{\partial u_{xx}}-D_{x}^{3}\circ\frac{\partial E}{\partial u_{3x}}+\cdots.$ (2.11) The condition on the recursion operator $R[u]$ for (2.1) is $\displaystyle[L_{F}[u],\,R[u]]\varphi=(D_{t}R[u])\varphi,$ (2.12) where $[\ ,\ ]$ denotes the commutator (or Lie bracket). Condition (2.12) is evaluated on the equation (2.1). Moreover, the recursion operator of (2.1) should generate a hierarchy of symmetry coefficients $\eta$ for (2.1), i.e. symmetry generators of the form $\displaystyle Z=\eta[u]\frac{\partial\ }{\partial u},$ (2.13) by applying $R[u]$ repeatedly to $\eta$.
That is $\displaystyle R^{k}[u]\eta_{1}[u]=\eta_{k+1}[u],\qquad k=1,2,\ldots.$ (2.14) For a symmetry-integrable evolution equation we require that all symmetry coefficients $\eta$ generated by $R$ are local, so a recursion operator for that equation would generate a hierarchy of local evolution equations $\displaystyle u_{t_{k}}=R^{k}[u]F[u],\qquad k=1,2,\ldots.$ (2.15) Each evolution equation in the hierarchy (2.15) should share the same set of symmetries that are generated by applying the recursion operator to the first (or seed) member of the hierarchy of (2.1). Those symmetries then span an Abelian Lie algebra and the recursion operator is hereditary for each member of the hierarchy (see [5] and [7] for more details). ## 3 Recursion operators for a class of third-order symmetry-integrable equations Our starting point is the general class of third-order autonomous evolution equations of the form $\displaystyle E:=u_{t}-F(u_{x},u_{xx},u_{xxx})=0.$ (3.1) For the symmetry-integrability of (3.1) we need to establish a recursion operator for the equation. In this paper we consider second-order recursion operators $R[u]$ of the form $\displaystyle R[u]=G_{2}[u]D_{x}^{2}+G_{1}[u]D_{x}+G_{0}[u]+I_{1}[u]D_{x}^{-1}\circ\Lambda_{1}[u]+I_{2}[u]D_{x}^{-1}\circ\Lambda_{2}[u].$ (3.2) The explicit conditions on $G_{j}$, $I_{j}$, $\Lambda_{j}$ and $F$ for (3.1) are given in Appendix A. In order to find equations of the form (3.1) that may admit a recursion operator of the form (3.2), we first establish the most general form of $F$ in terms of its highest derivative $u_{xxx}$. This is achieved by solving the first three equations in the split commutator condition (2.12), namely those conditions on $G_{j}$ and $F$ that do not involve the conditions on the integrating factors $\Lambda_{j}$ or the symmetries $I_{j}$. These are the conditions (A.2a), (A.2b) and (A.2c) given in Appendix A. ###### Proposition 1. In terms of the variable $u_{xxx}$, the most general form of $F(u_{x},u_{xx},u_{xxx})$ for which (3.1) admits a recursion operator of the form (3.2) is given by the following four cases: $\displaystyle F=\frac{Q_{3}(u_{x},u_{xx})\left[\vphantom{\frac{DA}{DB}}u_{xxx}+Q_{2}(u_{x},u_{xx})\right]}{Q_{1}(u_{x},u_{xx})\left[\vphantom{\frac{DA}{DB}}Q_{1}(u_{x},u_{xx})+(u_{xxx}+Q_{2}(u_{x},u_{xx}))^{2}\right]^{1/2}}+Q_{4}(u_{x},u_{xx})$ (3.3a) $\displaystyle F=Q_{1}(u_{x},u_{xx})\,u_{xxx}+Q_{2}(u_{x},u_{xx})$ (3.3b) $\displaystyle F=\frac{Q_{1}(u_{x},u_{xx})}{\left[\vphantom{\frac{DA}{DB}}u_{xxx}+Q_{2}(u_{x},u_{xx})\right]^{2}}+Q_{3}(u_{x},u_{xx})$ (3.3c) $\displaystyle F=\frac{Q_{1}(u_{x},u_{xx})(u_{xxx}+Q_{2}(u_{x},u_{xx}))}{Q_{2}^{2}(u_{x},u_{xx})\left[\vphantom{\frac{DA}{DB}}u_{xxx}^{2}+2Q_{2}(u_{x},u_{xx})u_{xxx}\right]^{1/2}}+Q_{3}(u_{x},u_{xx}).$ (3.3d) The functions $Q_{1},\ Q_{2},\ Q_{3}$ and $Q_{4}$ are arbitrary in their indicated arguments. Proof: Solving (A.2a), (A.2b) and (A.2c) we obtain the following condition on $F(u_{x},u_{xx},$ $u_{xxx})$: $\displaystyle F^{\prime}\left[9(F^{\prime})^{2}F^{(4)}-45F^{\prime}F^{\prime\prime}F^{\prime\prime\prime}+40(F^{\prime\prime})^{3}\right]=0,$ (3.4) where the primes denote partial derivatives with respect to $u_{xxx}$ and $F^{(4)}$ the fourth partial derivative with respect to $u_{xxx}$. The general solution of (3.4) is given by (3.3a), whereby (3.3b), (3.3c) and (3.3d) are singular solutions. $\Box$ ###### Remark 1. We remark that the conditions given in Proposition 1 are consistent with the conditions (2.3), (2.4) and (2.5) reported in [6].
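As an independent check on the proof, the singular solution (3.3c) can be substituted directly into condition (3.4); since only the dependence on $u_{xxx}$ matters, the $Q_i$ can be treated as constants. A short sympy verification (not in the original letter) reads:

```python
import sympy as sp

w = sp.Symbol('u_xxx')                  # only the dependence on u_xxx matters here
Q1, Q2, Q3 = sp.symbols('Q1 Q2 Q3')     # Q_i(u_x, u_xx) act as constants w.r.t. u_xxx
F = Q1/(w + Q2)**2 + Q3                 # candidate form (3.3c)

Fp, Fpp, Fppp, F4 = [sp.diff(F, w, k) for k in (1, 2, 3, 4)]

# Condition (3.4): F' * [ 9 (F')^2 F'''' - 45 F' F'' F''' + 40 (F'')^3 ] = 0
print(sp.simplify(Fp*(9*Fp**2*F4 - 45*Fp*Fpp*Fppp + 40*Fpp**3)))   # expected: 0
```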
The functions $Q_{1},\ Q_{2},\ Q_{3}$ and $Q_{4}$ should now be determined to gain recursion operators of the form (2.2) for the equation (3.1) for each case $F$ listed in Proposition 1. This identifies the exact form of $F$ for the symmetry-integrability of (3.1), which is achieved by solving the remaining conditions (A.2d), (A.2e), (A.2f) and (A.2g) given in Appendix A. In the current paper we restrict ourselves to the case where $F(u_{x},u_{xx},u_{xxx})$ is a rational functions in $u_{xxx}$, namely case (3.3c). This leads to the following ###### Proposition 2. The following equations, in the class $u_{t}=F(u_{x},u_{xx},u_{xxx})$ with $F$ a rational function in $u_{xxx}$, are symmetry-integrable: * • Case I $\displaystyle u_{t}=\frac{u_{xx}^{6}}{(\alpha u_{x}+\beta)^{3}u_{xxx}^{2}}+Q(u_{x}),$ (3.5a) where $\\{\alpha,\ \beta\\}$ are arbitrary constants, not simultaneously zero, and $Q(u_{x})$ needs to satisfy $\displaystyle(\alpha u_{x}+\beta)\frac{d^{5}Q}{du_{x}^{5}}+5\alpha\frac{d^{4}Q}{du_{x}^{4}}=0,$ (3.5b) which admits for $\alpha\neq 0$ the general solution $\displaystyle Q(u_{x})=c_{5}\left(u_{x}+\frac{\beta}{\alpha}\right)^{3}+c_{4}\left(u_{x}+\frac{\beta}{\alpha}\right)^{2}+c_{3}\left(u_{x}+\frac{\beta}{\alpha}\right)$ $\displaystyle\qquad+c_{2}\left(u_{x}+\frac{\beta}{\alpha}\right)^{-1}+c_{1}.$ (3.5c) For $\alpha=0$, the general solution of (3.5b) is $\displaystyle Q(u_{x})=c_{5}u_{x}^{4}+c_{4}u_{x}^{3}+c_{3}u_{x}^{2}+c_{2}u_{x}+c_{1}.$ (3.5d) Here $c_{j}$ are constants of integration. * • Case II $\displaystyle u_{t}=\frac{u_{xx}^{3}\left(\lambda_{1}+\lambda_{2}u_{xx}\right)^{3}}{u_{xxx}^{2}},$ (3.6) where $\\{\lambda_{1},\ \lambda_{2}\\}$ are arbitrary constants but not simultaneously zero. * • Case III $\displaystyle u_{t}=\frac{(\alpha u_{x}+\beta)^{11}}{\left[\vphantom{\frac{DA}{DB}}(\alpha u_{x}+\beta)u_{xxx}-3\alpha u_{xx}^{2}\right]^{2}},$ (3.7) where $\\{\alpha,\ \beta\\}$ are arbitrary constants but not simultaneously zero. * • Case IV $\displaystyle u_{t}=\frac{4u_{x}^{5}}{(2b\,u_{x}^{2}-2u_{x}u_{xxx}+3u_{xx}^{2})^{2}}\equiv\frac{u_{x}}{(b-S)^{2}},$ (3.8) where $b$ is an arbitrary constant and $S$ is the Schwarzian derivative (1.2). The recursion operators for each equation listed in Proposition 2 have been computed and are given in Appendix B. Note that equation (3.8) is identical to the Möbius-invariant equation (1.1a). This recursion operator for equation (1.1b) is obtained by setting $b=0$ in the recursion operator (B.7) of (3.8). For each equation listed in Proposition 2 one can easily remove the nonlinearity in the third derivative by a simple substitution $u_{x}=W(x,t)$ which, in a sense, “unpotentialises” the equations of Proposition 2. 
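The equivalence of the two expressions for Case IV in (3.8), i.e. of the rational form in $u_{xxx}$ and the Schwarzian form $u_{x}/(b-S)^{2}$ of (1.1a), is easy to confirm symbolically (a check added here for convenience, not in the original letter):

```python
import sympy as sp

b, ux, uxx, uxxx = sp.symbols('b u_x u_xx u_xxx')
S = uxxx/ux - sp.Rational(3, 2)*(uxx/ux)**2            # Schwarzian derivative (1.2)

lhs = 4*ux**5/(2*b*ux**2 - 2*ux*uxxx + 3*uxx**2)**2    # rational form of (3.8)
rhs = ux/(b - S)**2                                     # Moebius-invariant form (1.1a)
print(sp.simplify(lhs - rhs))                           # expected: 0
```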
For completeness, we list the equations so obtained here: * • Case I: With $u_{x}=W(x,t)$, (3.5a) takes the form $\displaystyle W_{t}=-\frac{2W_{x}^{6}W_{xxx}}{(\alpha W+\beta)^{3}W_{xx}^{3}}-\frac{3\alpha W_{x}^{7}}{(\alpha W+\beta)^{4}W_{xx}^{2}}+\frac{6W_{x}^{5}}{(\alpha W+\beta)^{3}W_{xx}}$ $\displaystyle\qquad+Q^{\prime}(W)W_{x},$ (3.9a) where $\displaystyle(\alpha W+\beta)Q^{(5)}+5\alpha Q^{(4)}=0,\quad Q=Q(W).$ (3.9b) * • Case II: With $u_{x}=W_{1}(x,t)$, we obtain for (3.6) the following equation: $\displaystyle W_{1,t}=-\frac{2W_{1,x}^{3}(\lambda_{1}+\lambda_{2}W_{1,x})^{3}W_{1,xxx}}{W_{1,xx}^{3}}+\frac{3\lambda_{2}W_{1,x}^{3}(\lambda_{1}+\lambda_{2}W_{1,x})^{2}}{W_{1,xx}}$ $\displaystyle\qquad+\frac{3W_{1,x}^{2}(\lambda_{1}+\lambda_{2}W_{1,x})^{3}}{W_{1,xx}}.$ (3.10) With $W_{1,x}=W_{2}(x,t)$, we obtain for (3.10) the following equation: $\displaystyle W_{2,t}=-\frac{2W_{2}^{2}(\lambda_{1}+\lambda_{2}W_{2})^{3}W_{2,xxx}}{W_{2,x}^{3}}+\frac{6W_{2}^{3}(\lambda_{1}+\lambda_{2}W_{2})^{3}W_{2,xx}^{2}}{W_{2,x}^{4}}$ $\displaystyle\qquad-\frac{9W_{2}^{2}(\lambda_{1}+\lambda_{2}W_{2})^{3}(W_{2}+1)W_{2,xx}}{W_{2,x}^{2}}+\frac{9\lambda_{2}W_{2}^{2}(\lambda_{1}+\lambda_{2}W_{2})^{2}(W_{2,x}+1)}{W_{2,x}}$ $\displaystyle\qquad-6\lambda_{2}^{2}W_{2}^{3}(\lambda_{1}+\lambda_{2}W_{2})+6W_{2}(\lambda_{1}+\lambda_{2}W_{2})^{3}.$ (3.11) * • Case III: With $u_{x}=W(x,t)$, we obtain for (3.7) the following equation: $\displaystyle W_{t}=-\frac{(\alpha W+\beta)^{10}}{\left[\vphantom{\frac{DA}{DB}}(\alpha W+\beta)W_{xx}-3\alpha W_{x}^{2}\right]^{3}}\left[\vphantom{\frac{DA}{DB}}2\alpha WW_{xxx}\left(\alpha W+2\beta\right)+2\beta^{2}W_{xxx}\right.$ $\displaystyle\qquad\qquad\left.\vphantom{\frac{DA}{B}}-21\alpha W_{x}W_{xx}\left(\alpha W+\beta\right)+33\alpha^{2}W_{x}^{3}\right].$ (3.12) * • Case IV: With $u_{x}=W(x,t)$, we obtain for (3.8) the following equation: $\displaystyle W_{t}=\frac{4W^{4}}{\left(\vphantom{\frac{DA}{DB}}2bW^{2}-2WW_{xx}+3W_{x}^{2}\right)^{3}}\left(\vphantom{\frac{DA}{DB}}4W^{2}W_{xxx}-18WW_{x}W_{xx}\right.$ $\displaystyle\qquad\qquad\left.\vphantom{\frac{DA}{DB}}+15W_{x}^{3}+2bW^{2}W_{x}\right).$ (3.13) ## 4 Concluding remarks Our aim has been to construct fully-nonlinear third-order evolution equations in the class $u_{t}=F(u_{x},u_{xx},u_{xxx})$, namely to identify those equations in this class that admit a second-order recursion operator with a sixth-order integrating factor, which are then symmetry-integrable equations. Note that there exists no fully-nonlinear evolution equation in this class that admits a recursion operator of order two where both integrating factors, $\Lambda_{1}$ and $\Lambda_{2}$, are of order less than six. We report here four equations, listed in Proposition 2, namely (3.5a), (3.6), (3.7) and (3.8). Due to the mentioned restrictions on the form of the recursion operator, this is certainly not a complete classification of all fully-nonlinear third-order evolution equations of this form that admit a recursion operator. Nevertheless, we do consider the equations that we have obtained here to be of some interest and worthy of further study. It would, for example, be interesting to find all the potentialisations of the four fully-nonlinear equations (3.5a) to (3.8), as well as the equations (3.9a) to (3.13). This can be investigated by using the adjoint symmetry structure of the equations. Some preliminary calculations have revealed a rich adjoint symmetry structure for these equations, so one can expect to obtain interesting results.
Furthermore, one could apply the multi-potentialisation method which may lead to nonlocal symmetries for the equations (see [1] for details regarding multi-potentialisations). One could also extend this study further, namely to include evolution equations of third order that explicitly depend on $u$ and allow algebraic functions in $u_{xxx}$. ### Acknowledgements We thank the anonymous referee for useful remarks regarding the classification of integrable evolution equations, for point out some misprints, and for clarifying the results in [6]. ## Appendix A Appendix: The general conditions for $R[u]$ of (3.1) For the equation $u_{t}=F(u_{x},u_{xx},u_{xxx})$ we provide here the explicit general conditions on the functions $F$, $G_{j}$, $I_{j}$ and $\Lambda_{j}$ for the existence of a recursion operator $R[u]$ of the form $\displaystyle R[u]=G_{2}[u]D_{x}^{2}+G_{1}[u]D_{x}+G_{0}[u]+I_{1}[u]D_{x}^{-1}\circ\Lambda_{1}[u]+I_{2}[u]D_{x}^{-1}\circ\Lambda_{2}[u].$ (A.1) This is obtained from the commutator condition (2.12) by equating to zero all the derivatives of the free function $\varphi$. For convenience we introduce the following notation: $\displaystyle A_{1}:=\frac{\partial F}{\partial u_{x}},\qquad A_{2}:=\frac{\partial F}{\partial u_{xx}},\qquad A_{3}:=\frac{\partial F}{\partial u_{xxx}}.$ The conditions are as follows: $\displaystyle\frac{\partial^{4}\varphi}{\partial x^{4}}:-2G_{2}D_{x}A_{3}+3A_{3}D_{x}G_{2}=0$ (A.2a) $\displaystyle\frac{\partial^{3}\varphi}{\partial x^{3}}:2A_{2}D_{x}G_{2}-2G_{2}D_{x}A_{2}-G_{2}D_{x}^{2}A_{3}-G_{1}D_{x}A_{3}+3A_{3}D_{x}^{2}G_{2}$ $\displaystyle\qquad\ +3A_{3}D_{x}G_{1}=0$ (A.2b) $\displaystyle\frac{\partial^{2}\varphi}{\partial x^{2}}:3A_{3}D_{x}^{2}G_{1}+A_{3}D_{x}^{3}G_{2}+2A_{2}D_{x}G_{1}+A_{2}D_{x}^{2}G_{2}+A_{1}D_{x}G_{2}+3A_{3}D_{x}G_{0}$ $\displaystyle\qquad\ \left.\vphantom{\frac{DA}{DB}}-2G_{2}D_{x}A_{1}-G_{2}D_{x}^{2}A_{2}-G_{1}D_{x}A_{2}-D_{t}G_{2}\right|_{E=0}=0$ (A.2c) $\displaystyle\frac{\partial\varphi}{\partial x}:A_{3}D_{x}^{3}G_{1}+3A_{3}D_{x}^{2}G_{0}+A_{2}D_{x}^{2}G_{1}+2A_{2}D_{x}G_{0}+A_{1}D_{x}G_{1}$ $\displaystyle\qquad\ +\sum_{j=1}^{2}\left(\vphantom{\frac{DA}{DB}}3A_{3}\Lambda_{j}D_{x}I_{j}+3A_{3}I_{j}D_{x}\Lambda_{j}+I_{j}\Lambda_{j}D_{x}A_{3}\right)$ $\displaystyle\qquad\ \left.\vphantom{\frac{DA}{DB}}-G_{2}D_{x}^{2}A_{1}-G_{1}D_{x}A_{1}-D_{t}G_{1}\right|_{E=0}=0$ (A.2d) $\displaystyle\varphi:A_{3}D_{x}^{3}G_{0}+A_{2}D_{x}^{2}G_{0}+A_{1}D_{x}G_{0}+\sum_{j=1}^{2}\left(\vphantom{\frac{DA}{DB}}-2I_{j}(D_{x}\Lambda_{j})(D_{x}A_{3})-I_{j}\Lambda_{j}D_{x}^{2}A_{3}\right.$ $\displaystyle\qquad\ \left.+I_{j}\Lambda_{j}D_{x}A_{2}-I_{j}D_{x}^{4}\Lambda_{j}+3A_{3}(D_{x}I_{j})(D_{x}\Lambda_{j})+3A_{3}\Lambda_{j}D_{x}^{2}I_{j}\right.$ $\displaystyle\qquad\ \left.\left.\vphantom{\frac{DA}{DB}}+2A_{2}\Lambda_{j}D_{x}I_{j}+2A_{2}I_{j}D_{x}\Lambda_{j}\right)-D_{t}G_{0}\right|_{E=0}=0,$ (A.2e) as well as the symmetry condition $\displaystyle\left.\vphantom{\frac{DA}{DB}}L_{E}[u]\,I_{j}\right|_{E=0}=0,\qquad j=1,2$ (A.2f) and the adjoint symmetry condition $\displaystyle\left.\vphantom{\frac{DA}{DB}}L^{*}_{E}[u]\,\Lambda_{j}\right|_{E=0}=0,\qquad j=1,2.$ (A.2g) ## Appendix B Appendix: The recursion operators for the symmetry-integrable equations of Proposition 2 Recursion operator for Case I: Equation (3.5a) of Proposition 2 viz. 
$\displaystyle u_{t}=\frac{u_{xx}^{6}}{(\alpha u_{x}+\beta)^{3}u_{xxx}^{2}}+Q(u_{x}),$ admits the recursion operator $\displaystyle R[u]=G_{2}[u]D_{x}^{2}+G_{1}[u]D_{x}+G_{0}[u]+(\alpha u_{x}+\beta)D_{x}^{-1}\circ\Lambda_{1}[u],$ (B.1) where $\displaystyle G_{2}[u]=\frac{u_{xx}}{(\alpha u_{x}+\beta)^{2}\,u_{xxx}^{2}}$ (B.2a) $\displaystyle G_{1}[u]=\frac{u_{xx}^{4}u_{4x}}{(\alpha u_{x}+\beta)^{2}\,u_{xxx}^{3}}-\frac{4u_{xx}^{3}}{(\alpha u_{x}+\beta)^{2}\,u_{xxx}}+\frac{\alpha u_{xx}^{5}}{(\alpha u_{x}+\beta)^{3}\,u_{xxx}^{2}}$ (B.2b) $\displaystyle G_{0}[u]=-\frac{u_{xx}^{4}u_{5x}}{(\alpha u_{x}+\beta)^{2}\,u_{xxx}^{3}}+\frac{3u_{xx}^{4}u_{4x}^{2}}{(\alpha u_{x}+\beta)^{2}\,u_{xxx}^{4}}$ $\displaystyle\qquad+\left(-\frac{8u_{xx}^{3}}{(\alpha u_{x}+\beta)^{2}u_{xxx}^{2}}+\frac{6\alpha u_{xx}^{5}}{(\alpha u_{x}+\beta)^{3}\,u_{xxx}^{3}}\right)u_{4x}+\frac{6\alpha^{2}u_{xx}^{6}}{(\alpha u_{x}+\beta)^{4}\,u_{xxx}^{2}}$ $\displaystyle\qquad-\frac{18\alpha u_{xx}^{4}}{(\alpha u_{x}+\beta)^{3}\,u_{xxx}}+\frac{12\alpha u_{xx}^{2}}{(\alpha u_{x}+\beta)^{2}}+\frac{1}{3}(\alpha u_{x}+\beta)\frac{d^{2}Q}{du_{x}^{2}}-\frac{\alpha}{3}\frac{dQ}{du_{x}}$ (B.2c) $\displaystyle\Lambda_{1}=\frac{u_{xx}^{4}u_{6x}}{(\alpha u_{x}+\beta)^{3}\,u_{xxx}^{3}}+\left(\frac{12u_{xx}^{3}}{(\alpha u_{x}+\beta)^{3}\,u_{xxx}^{2}}-\frac{9\alpha u_{xx}^{5}}{(\alpha u_{x}+\beta)^{4}\,u_{xxx}^{3}}\right)u_{5x}$ $\displaystyle\qquad+\left(\frac{24\alpha u_{xx}^{2}}{(\alpha u_{x}+\beta)^{3}\,u_{xxx}}-\frac{72\alpha u_{xx}^{4}}{(\alpha u_{x}+\beta)^{4}\,u_{xxx}^{2}}+\frac{36\alpha^{2}u_{xx}^{6}}{(\alpha u_{x}+\beta)^{5}\,u_{xxx}^{3}}\right)u_{4x}$ $\displaystyle\qquad-\frac{9u_{xx}^{4}u_{4x}u_{5x}}{(\alpha u_{x}+\beta)^{3}\,u_{xxx}^{4}}+\left(\frac{27\alpha u_{xx}^{5}}{(\alpha u_{x}+\beta)^{4}\,u_{xxx}^{4}}-\frac{28u_{xx}^{3}}{(\alpha u_{x}+\beta)^{3}\,u_{xxx}^{3}}\right)u_{4x}^{2}$ $\displaystyle\qquad+\frac{12u_{xx}^{4}u_{4x}^{3}}{(\alpha u_{x}+\beta)^{3}\,u_{xxx}^{5}}-\frac{24u_{xx}u_{xxx}}{(\alpha u_{x}+\beta)^{3}}+\frac{30\alpha^{3}u_{xx}^{7}}{(\alpha u_{x}+\beta)^{6}\,u_{xxx}^{2}}+\frac{108\alpha u_{xx}^{3}}{(\alpha u_{x}+\beta)^{4}}$ $\displaystyle\qquad-\frac{108\alpha^{2}u_{xx}^{5}}{(\alpha u_{x}+\beta)^{5}\,u_{xxx}}-\frac{1}{2}\frac{d^{3}Q}{du_{x}^{3}}\,u_{xx}.$ (B.2d) Here $Q(u_{x})$ needs to satisfy the 5th-order ordinary differential equation (3.5b), viz. $\displaystyle(\alpha u_{x}+\beta)\frac{d^{5}Q}{du_{x}^{5}}+5\alpha\frac{d^{4}Q}{du_{x}^{4}}=0.$ Recursion operator for Case II: Equation (3.6) of Proposition 2 viz. 
$\displaystyle u_{t}=\frac{u_{xx}^{3}\left(\lambda_{1}+\lambda_{2}u_{xx}\right)^{3}}{u_{xxx}^{2}}$ admits the recursion operator $\displaystyle R[u]=G_{2}[u]D_{x}^{2}+G_{1}[u]D_{x}+G_{0}[u]+D_{x}^{-1}\circ\Lambda_{1}[u],$ (B.3) where $\displaystyle G_{2}[u]=\frac{u_{xx}^{2}(\lambda_{1}+\lambda_{2}u_{xx})^{2}}{u_{xxx}^{2}}$ (B.4a) $\displaystyle G_{1}[u]=\frac{u_{xx}^{2}(\lambda_{1}+\lambda_{2}u_{xx})^{2}u_{4x}}{u_{xxx}^{3}}-\frac{4\lambda_{2}u_{xx}^{2}(\lambda_{1}+\lambda_{2}u_{xx})}{u_{xxx}}$ (B.4b) $\displaystyle G_{0}[u]=\frac{u_{xx}^{2}(\lambda_{1}+\lambda_{2}u_{xx})^{2}u_{5x}}{u_{xxx}^{3}}+\frac{3u_{xx}^{2}(\lambda_{1}+\lambda_{2}u_{xx})^{2}u_{4x}^{2}}{u_{xxx}^{4}}$ $\displaystyle\qquad-\frac{2u_{xx}u_{xxx}^{2}(\lambda_{1}+\lambda_{2}u_{xx})(\lambda_{1}+4\lambda_{2}u_{xx})u_{4x}}{u_{xxx}^{4}}+12\lambda_{2}^{2}u_{xx}^{2}+6\lambda_{1}\lambda_{2}u_{xx}$ (B.4c) $\displaystyle\Lambda_{1}[u]=\frac{u_{xx}^{2}(\lambda_{1}+\lambda_{2}u_{xx})^{2}u_{6x}}{u_{xxx}^{3}}+\frac{4u_{xx}(\lambda_{1}+\lambda_{2}u_{xx})(\lambda_{1}+3\lambda_{2}u_{xx})u_{5x}}{u_{xxx}^{2}}$ $\displaystyle\qquad-\frac{9u_{xx}^{2}(\lambda_{1}+\lambda_{2}u_{xx})^{2}u_{4x}u_{5x}}{u_{xxx}^{4}}+\frac{12u_{xx}^{2}(\lambda_{1}+\lambda_{2}u_{xx})^{2}u_{4x}^{3}}{u_{xxx}^{5}}$ $\displaystyle\qquad-\frac{2u_{xx}u_{xxx}^{3}(\lambda_{1}+\lambda_{2}u_{xx})(5\lambda_{1}+14\lambda_{2}u_{xx})u_{4x}^{2}}{u_{xxx}^{5}}$ $\displaystyle\qquad+\frac{2(12\lambda_{2}^{2}u_{xx}^{2}+10\lambda_{1}\lambda_{2}u_{xx}+\lambda_{1}^{2})u_{4x}}{u_{xxx}}-6\lambda_{2}(\lambda_{1}+4\lambda_{2}u_{xx})u_{xxx}.$ (B.4d) Recursion operator for Case III: Equation (3.7) of Proposition 2 viz. $\displaystyle u_{t}=\frac{(\alpha u_{x}+\beta)^{11}}{\left[\vphantom{\frac{DA}{DB}}(\alpha u_{x}+\beta)u_{xxx}-3\alpha u_{xx}^{2}\right]^{2}}$ admits the recursion operator $\displaystyle R[u]=G_{2}[u]D_{x}^{2}+G_{1}[u]D_{x}+G_{0}[u]+(\alpha u_{x}+\beta)D_{x}^{-1}\circ\Lambda_{1}[u]$ (B.5) where $\displaystyle G_{2}[u]=\frac{(\alpha u_{x}+\beta)^{8}}{\left[\vphantom{\frac{DA}{DB}}(\alpha u_{x}+\beta)u_{xxx}-3\alpha u_{xx}^{2}\right]^{2}}$ (B.6a) $\displaystyle G_{1}[u]=\frac{(\alpha u_{x}+\beta)^{7}\,u_{4x}}{\left[\vphantom{\frac{DA}{DB}}(\alpha u_{x}+\beta)u_{xxx}-3\alpha u_{xx}^{2}\right]^{3}}\left[\vphantom{\frac{DA}{DB}}(\alpha u_{x}+\beta)^{2}u_{4x}-13\alpha(\alpha u_{x}+\beta)u_{xx}u_{xxx}\right.$ $\displaystyle\qquad\qquad\left.\vphantom{\frac{DA}{DB}}+24\alpha^{2}u_{xx}^{3}\right]$ (B.6b) $\displaystyle G_{0}[u]=-\frac{(\alpha u_{x}+\beta)^{9}u_{5x}}{\left[\vphantom{\frac{DA}{DB}}(\alpha u_{x}+\beta)u_{xxx}-3\alpha u_{xx}^{2}\right]^{3}}$ $\displaystyle\qquad+\frac{3(\alpha u_{x}+\beta)^{6}}{\left[\vphantom{\frac{DA}{DB}}(\alpha u_{x}+\beta)u_{xxx}-3\alpha u_{xx}^{2}\right]^{4}}\left[\vphantom{\frac{DA}{DB}}(\alpha u_{x}+\beta)^{4}u_{4x}^{2}-\frac{46}{3}\alpha(\alpha u_{x}+\beta)^{3}u_{xx}u_{xxx}u_{4x}\right.$ $\displaystyle\qquad\qquad\ +3\alpha(\alpha u_{x}+\beta)^{3}u_{xxx}^{3}+\frac{184}{3}\alpha^{2}(\alpha u_{x}+\beta)^{2}u_{xx}^{2}u_{xxx}^{2}$ $\displaystyle\qquad\qquad\left.\vphantom{\frac{DA}{DB}}-184\alpha^{3}(\alpha u_{x}+\beta)u_{xx}^{4}u_{xxx}+144\alpha^{4}u_{xx}^{6}\right]$ (B.6c) $\displaystyle\Lambda_{1}[u]=\frac{(\alpha u_{x}+\beta)^{8}\,u_{6x}}{\left[\vphantom{\frac{DA}{DB}}(\alpha u_{x}+\beta)u_{xxx}-3\alpha u_{xx}^{2}\right]^{3}}-\frac{9(\alpha u_{x}+\beta)^{9}\,u_{4x}u_{5x}}{\left[\vphantom{\frac{DA}{DB}}(\alpha u_{x}+\beta)u_{xxx}-3\alpha u_{xx}^{2}\right]^{4}}$ $\displaystyle\qquad-\frac{72\alpha^{2}(\alpha 
u_{x}+\beta)^{7}\,u_{xx}^{3}u_{5x}}{\left[\vphantom{\frac{DA}{DB}}(\alpha u_{x}+\beta)u_{xxx}-3\alpha u_{xx}^{2}\right]^{4}}+\frac{81\alpha(\alpha u_{x}+\beta)^{8}\,u_{xx}u_{xxx}u_{5x}}{\left[\vphantom{\frac{DA}{DB}}(\alpha u_{x}+\beta)u_{xxx}-3\alpha u_{xx}^{2}\right]^{4}}$ $\displaystyle\qquad+\frac{12(\alpha u_{x}+\beta)^{10}\,u_{4x}^{3}}{\left[\vphantom{\frac{DA}{DB}}(\alpha u_{x}+\beta)u_{xxx}-3\alpha u_{xx}^{2}\right]^{5}}$ $\displaystyle\qquad-\frac{45\alpha(\alpha u_{x}+\beta)^{8}\,u_{xx}u_{4x}^{2}}{\left[\vphantom{\frac{DA}{DB}}(\alpha u_{x}+\beta)u_{xxx}-3\alpha u_{xx}^{2}\right]^{5}}\left(5\alpha u_{x}u_{xxx}+5\beta u_{xxx}-3\alpha u_{xx}^{2}\right)$ $\displaystyle\qquad+\frac{5\alpha(\alpha u_{x}+\beta)^{6}\,u_{4x}}{\left[\vphantom{\frac{DA}{DB}}(\alpha u_{x}+\beta)u_{xxx}-3\alpha u_{xx}^{2}\right]^{5}}\left[\vphantom{\frac{DA}{DB}}11(\alpha u_{x}+\beta)^{3}u_{xxx}^{3}+291\alpha(\alpha u_{x}+\beta)^{2}u_{xx}^{2}u_{xxx}^{2}\right.$ $\displaystyle\qquad\qquad\left.\vphantom{\frac{DA}{DB}}-504\alpha^{2}(\alpha u_{x}+\beta)u_{xx}^{4}u_{xxx}+216\alpha^{3}u_{xx}^{6}\right]$ $\displaystyle\qquad-\frac{20\alpha^{2}(\alpha u_{x}+\beta)^{4}\,u_{xx}}{\left[\vphantom{\frac{DA}{DB}}(\alpha u_{x}+\beta)u_{xxx}-3\alpha u_{xx}^{2}\right]^{7}}\left[\vphantom{\frac{DA}{DB}}-\frac{67\alpha^{2}}{3}(\alpha u_{x}+\beta)^{4}u_{xx}^{4}u_{xxx}^{4}\right.$ $\displaystyle\qquad\qquad+148\alpha^{3}(\alpha u_{x}+\beta)^{3}u_{xx}^{6}u_{xxx}^{3}+288\alpha^{5}(\alpha u_{x}+\beta)u_{xx}^{10}u_{xxx}$ $\displaystyle\qquad\qquad-306\alpha^{4}(\alpha u_{x}+\beta)^{2}u_{xx}^{8}u_{xxx}^{2}+\frac{31}{27}(\alpha u_{x}+\beta)^{6}u_{xxx}^{6}$ $\displaystyle\qquad\quad\ \,\left.\vphantom{\frac{DA}{DB}}-\frac{38\alpha}{9}(\alpha u_{x}+\beta)^{5}u_{xx}^{2}u_{xxx}^{5}-108\alpha^{6}u_{xx}^{12}\right].$ (B.6d) Recursion operator for Case VI: Equation (3.8) of Proposition 2 viz. $\displaystyle u_{t}=\frac{4u_{x}^{5}}{(2b\,u_{x}^{2}-2u_{x}u_{xxx}+3u_{xx}^{2})^{2}}\equiv\frac{u_{x}}{(b-S)^{2}},$ admits the recursion operator $\displaystyle R[u]=G_{2}[u]D_{x}^{2}+G_{1}[u]D_{x}+G_{0}[u]+u_{x}D_{x}^{-1}\circ\Lambda_{1}[u]+u_{t}D_{x}^{-1}\circ\Lambda_{2}[u]$ (B.7) where $\displaystyle G_{2}[u]=\frac{1}{4(b-S)^{2}}$ (B.8a) $\displaystyle G_{1}[u]=-\frac{u_{xx}}{2u_{x}(b-S)^{2}}-\frac{S_{x}}{4(b-S)^{3}}$ (B.8b) $\displaystyle G_{0}[u]=\frac{u_{xx}^{2}}{8u_{x}^{2}(b-S)^{2}}+\frac{u_{xx}S_{x}}{4u_{x}(b-S)^{3}}+\frac{S_{xx}}{4(b-S)^{3}}-\frac{2bS^{2}-b^{2}S-3S_{x}^{2}-S^{3}}{4(b-S)^{4}}$ (B.8c) $\displaystyle\Lambda_{1}[u]=-\frac{S_{xxx}}{4u_{x}(b-S)^{3}}-\frac{9S_{x}S_{xx}}{4u_{x}(b-S)^{4}}-\frac{S_{x}(b+3S)}{8u_{x}(b-S)^{3}}-\frac{3S_{x}^{3}}{u_{x}(b-S)^{5}}$ (B.8d) $\displaystyle\Lambda_{2}[u]=-\frac{S_{x}}{8u_{x}}.$ (B.8e) Here $S$ is the Schwarzian derivative (1.2). ## References * [1] Euler M and Euler N, Nonlocal invariance of the multipotentialisations of the Kupershmidt equation and its higher-order hierarchies In: Nonlinear Systems and Their Remarkable Mathematical Structures, N Euler (ed), CRC Press, Boca Raton, 317–351, 2018. * [2] Euler M and Euler N, On Möbius-invariant and symmetry-integrable evolution equations and the Schwarzian derivative, Studies in Applied Mathematics, 143, 139–156, 2019. * [3] Euler M and Euler N, On the hierarchy of fully-nonlinear Möbius-invariant and symmetry-integrable equations of order three, Journal of Nonlinear Mathematical Physics, 27, 521–-528, 2021. 
* [4] Euler M, Euler N and Nucci M C, On differential equations invariant under two-variable Möbius transformations, Open Communications in Nonlinear Mathematical Physics, 2, 173–185, 2022. * [5] Fokas A S and Fuchssteiner B, On the structure of symplectic operators and hereditary symmetries, Lettere al Nuovo Cimento, 28, 299–303, 1980. * [6] Hernández Heredero R, Classification of Fully Nonlinear Integrable Evolution Equations of Third Order, Journal of Nonlinear Mathematical Physics, 12, 567–585, 2005. * [7] Olver P J, Applications of Lie Groups to Differential Equations, Springer, New York, 1986.
# Fragility, Robustness and Antifragility in Deep Learning Chandresh Pravin, University of Reading, United Kingdom; Ivan Martino, KTH Royal Institute of Technology, Sweden; Giuseppe Nicosia, University of Catania, Italy; Varun Ojha, Newcastle University, United Kingdom. Corresponding Author: Varun Ojha, email: <EMAIL_ADDRESS> Cite as: Pravin, Chandresh; Martino, Ivan; Nicosia, Giuseppe; and Ojha, Varun (2023), Artificial Intelligence, Elsevier. ###### Abstract We propose a systematic analysis of deep neural networks (DNNs) based on a signal processing technique for network parameter removal, in the form of synaptic filters, that identifies the fragility, robustness and antifragility characteristics of DNN parameters. Our proposed analysis investigates whether the DNN performance is impacted negatively, invariantly, or positively on both clean and adversarially perturbed test datasets when the DNN undergoes synaptic filtering. We define three filtering scores for quantifying the fragility, robustness and antifragility characteristics of DNN parameters based on the performances for (i) the clean dataset, (ii) the adversarial dataset, and (iii) the difference in performances between the clean and adversarial datasets. We validate the proposed systematic analysis on ResNet-18, ResNet-50, SqueezeNet-v1.1 and ShuffleNet V2 x1.0 network architectures for the MNIST, CIFAR10 and Tiny ImageNet datasets. The filtering scores, for a given network architecture, identify network parameters that are invariant in characteristics across different datasets over learning epochs. Vice versa, for a given dataset, the filtering scores identify the parameters that are invariant in characteristics across different network architectures. We show that our synaptic filtering method improves the test accuracy of ResNet and ShuffleNet models on adversarial datasets when only the robust and antifragile parameters are selectively retrained at any given epoch, thus demonstrating applications of the proposed strategy in improving model robustness. Keywords: Deep Neural Networks; Robustness Analysis; Adversarial Attacks; Parameter Filtering ## 1 Introduction Deep neural networks (DNNs) are extensively used in various tasks and domains, achieving noteworthy performances in both research and real-world applications [1, 2]. It is the critical weaknesses of DNNs, however, that warrant investigation if we are to better understand how they learn abstract relationships between inputs and outputs [3, 4]. We propose to investigate the effects of a systematic analysis on DNNs by using a signal processing technique for network parameter filtering (the terms DNN and network are used interchangeably), in contrast to random filtering methods [5, 6, 7]. Our work analyzes the performance of a DNN under (a) internal stress (i.e., the synaptic filtering of DNN parameters) and (b) external stress (i.e., perturbations of inputs to the DNN). We define internal and external stress within the context of DNNs as a novel concept, taking inspiration from the application of stress to biological systems [8]. Through analyzing the performance of a network under input perturbations (external stress) formed using an adversarial attack [9, 10], we bring the weaknesses of the DNN to the foreground. We simultaneously apply synaptic filtering (internal stress) to the network parameters in order to identify the specific parameters most susceptible to the input perturbations, thus characterizing them as fragile.
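Since the external stress takes the form of FGSM perturbations (introduced below), a standard single-step FGSM implementation is sketched here for reference; the clamp to [0, 1] assumes inputs normalized to that range and is not specified by the paper:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Single-step FGSM: perturb each input pixel by eps along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()   # assumes inputs live in [0, 1]
```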
Similarly, we identify parameters of the DNN that are invariant to both internal and external stress when considering the network performance, thus characterizing them as robust to the applied stress. Following this reasoning, we introduce a novel notion of antifragility [11] in deep learning as the circumstance in which any applied perturbations (internal and external) on a network result in an improvement of the network performance. When considering external stress, such as variations to the network input, we focus our analysis specifically on varying magnitudes of adversarial attack perturbations [9, 10] due to their ability to exploit the learned representations of a network to decrease network performance [12]. In our study, we focus on the fast gradient sign method (FGSM) attack because it computes, in a single step, a perturbation of equal magnitude across the input dimensions that increases the network loss [13]. Our synaptic filtering methodology (see Fig. 1) offers a comparative study of state-of-the-art DNNs using clean and adversarially perturbed datasets, and therefore, the study is relevant for any variation of perturbation introduced to the input space. We apply our methodology to expose the fragility, robustness and antifragility of network parameters over network learning epochs, which subsequently enables us to examine the landscape (performance variations over epochs) of the network learning process. In order to better understand how an adversarial attack is effective in bringing a network to failure [14], we adopt a novel methodology that considers network susceptibility to adversarial perturbations in conjunction with network architecture and the learning process (see Fig. 1). The proposed synaptic filters are considered to be the lenses under which we can characterize the parameters of a network architecture. Introducing an adversarial attack to the methodology in Fig. 1 offers a unique insight into how the characterization of network parameters varies between clean and adversarial inputs. We validated the analysis on the ResNet-18, ResNet-50 [15], SqueezeNet-v1.1 [16], and ShuffleNet V2 x1.0 [17] networks for the MNIST [18], CIFAR10 [19] and Tiny ImageNet [20] datasets.

Fig. 1: Our methodology of parameter filtering and evaluating DNN performances on clean and adversarial datasets. Passing a DNN through parameter filters is equivalent to internal stress, and applying an adversarial attack with various magnitudes on clean data is equivalent to external stress on a DNN. In this methodology, the DNN performances (labeled 1, 2, 3, and 4) are individually compared against a defined baseline DNN performance (solid green line in the illustration shown on the lower left) in order to characterize DNN parameters as fragile (red shaded area), robust (green shaded area), or antifragile (blue shaded area).

The main contributions of this work, therefore, are as follows:

* • We offer a novel methodology based on signal processing techniques that applies internal stress (parameter removal) and external stress (adversarial attack) to DNNs to characterize the network parameters as either fragile, robust, or antifragile.
* • We offer parametric filtering scores that use a defined baseline network performance to quantify the influence of specific parameters on the network performance.
* • We apply internal stress on networks in the form of synaptic filters and use the filtered network performances to show that networks trained on different datasets contain parameter characterizations that are invariant to different datasets throughout the network training process.
* • We apply external stress to networks, in the form of an adversarial attack, to identify the specific parameters targeted by the adversary through a comparison of the synaptic filtering performances on the clean and adversarial test datasets.
* • We show that our synaptic filtering method boosts the test accuracy of ResNet and ShuffleNet models on adversarial datasets when only the robust and antifragile parameters are retrained at any given epoch, thus providing a useful strategy for improving network robustness.

The following Sec. 2 gives insights into the background and related works. Section 3 offers definitions of the terms and concepts introduced in the proposed methodology. Section 4 reports the proposed methodologies. Section 5 shows the experimental results acquired using the proposed methodologies, and Sec. 6 concludes the work.

## 2 Background and related work

We propose evaluating the resilience of DNNs using a physiologically inspired approach, drawing on the resilience of humans to stress on their physiology [8, 21]. Accordingly, we analyze the performance of DNNs under internal and external stress. Within the context of deep learning, we consider internal stress to be perturbations to the network parameters (i.e., synaptic filtering) [22, 23] and we take external stress to be variations to the learning environment of the network (i.e., input perturbations) [24, 25, 26]. There exist various avenues of research when considering an analysis of DNNs under input perturbations [13, 27] and synaptic filtering [23, 6]. The works of Szegedy et al. [9] and Goodfellow et al. [13] drew attention to the vulnerability of DNNs to a particular method of crafting input perturbations in the form of adversarial attacks. The rapid development of new adversarial attacks [28] and equally abundant adversarial defense techniques [29, 30] calls for methods of analyzing the resilience of DNNs to carefully crafted input perturbations designed to bring networks to failure. The scrutiny of DNN resilience to these perturbations can be expanded to incorporate perturbations into network architectures. The study proposed by Han et al. [31] details how network parameters can be filtered out to reduce network size without significantly affecting network performance. However, there may be conditions where filtering parameters leads to improvements in the network performance. Therefore, we use a notion of antifragility to describe an increase in network performance whilst the network is subjected to internal and/or external stress, in the form of synaptic filters [7, 23, 6] and adversarial attacks [14, 28, 30]. Our notion of antifragility in DNNs is in line with the antifragility notion described by Taleb and Douady [11], which refers to a phenomenon whereby a system subjected to stress improves in performance. We describe the related works on internal and external stress as follows:

##### Internal Stress (Parameter Filtering)

Network architecture affects how and what DNNs learn [32, 33, 34, 35]. For instance, the works of Ilyas et al. [36] highlight the presence of robust and non-robust features within networks.
In a similar context, we highlight the presence of fragile, robust and antifragile [11] parameters of different network architectures on both clean and adversarial test datasets. For the characterization of the network parameters, we propose a synaptic filtering methodology (see Fig. 1). Identifying fragile, robust and antifragile parameters informs us about the compressibility of a network based on the variation and degradation in the network performance [37]. A central principle of network compression techniques is to reduce network size whilst retaining network performance [31]. A method of achieving network compression is through using network pruning techniques [38, 23, 6]. Our parameter filtering differs from pruning techniques, whose objective is to reduce DNN size; we instead aim to analyze the characteristics of DNN parameters by systematically filtering them. Our work also differs from the systematic tuning of DNN hyperparameters, such as the number of layers and the number of neurons in a layer, to analyze DNN performance [39]; that is, we systematically filter the internal architecture of the DNN rather than vary its hyperparameters. Siraj et al. [40] proposed a robust sparse regularisation method for network compactness while simultaneously optimizing network robustness to adversarial attacks. Similarly, we use our synaptic filtering methodology (a network parameter removal technique) to study the performances of a DNN on clean and adversarial datasets, which enables us to identify parameters that cause a decrease in network performance on the adversarial dataset [41] compared to the clean dataset; these are characterized as fragile in our work. We characterize parameters that are invariant to synaptic filtering on both clean and adversarial datasets as robust. The parameters that, when filtered, increase the network performance on the adversarial dataset compared to the clean dataset are characterized as antifragile.

##### External Stress (Adversarial Attacks)

There are numerous methods for computing adversarial attacks on DNNs in the literature [25, 26]. The primary objective of adversarial attacks is to deceive a network into misclassifying otherwise correctly classified inputs [9, 13]. The analysis of adversarial attacks on DNNs is important due to the existence of adversarial examples in real-world applications [42, 43]. Similarly, in our work, we analyze the adversarial attack in order to characterize network parameters into those that affect network performance negatively (fragile), invariantly (robust), and positively (antifragile). Adversarial examples are by design created to decrease network performance; however, when simultaneously carrying out synaptic filtering methods [41] it is possible to observe an increase in network performance, even under an adversarial attack, thus requiring the notion of antifragility.

## 3 Definitions

In this Section, we define fragility, robustness, and antifragility within the scope of DNNs. For defining fragility, robustness, and antifragility, we also need to define the internal stress, external stress and baseline network performance of DNNs. Here the stress on a DNN is a systematic perturbation, either internal (synaptic filtering) or external (adversarial attack). The purpose of applying the stress on a DNN is to test the operating conditions of the DNN for both learned and optimized states, when evaluated on unseen datasets. The concepts of network fragility, robustness, antifragility and stress are shown in Fig.
2, where Fig. 2(a) shows the application of stress on a DNN and Fig. 2(b) shows the interpretation of DNN performance for parameter characterization. For detailed definitions of the above-mentioned concepts, we consider the following notations.

(a) Evaluation overview (b) Response characteristics

Fig. 2: (a) Overview of the proposed system evaluation method. (b) Characteristics of fragility, robustness and antifragility, obtained through analysing the performance of a system $\mathcal{F}$ whilst under stress.

Consider a neural network architecture as a set of functions $f(x,\cdot)$ that consists of a configuration of parameters, such as convolutions, batch normalization, pooling layers, activation functions, etc. [38]. We define a parameterized neural network as $f(x,\mathbf{W})$, for specific parameters $\mathbf{W}$ and input $x$. For an $l$-layer network with a $d$-dimensional input $x\in\mathbb{R}^{d}$, the $K$-class classification function is thus $f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{K}$. The prediction of $f(x,\mathbf{W})$ is given by $\hat{y}=\arg\max_{1\leq k\leq K}f_{k}(x,\mathbf{W})$. The network parameters $\mathbf{W}$ are assumed to be optimized, either partially or fully, using back-propagation and a loss function $\mathcal{L}:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}$ given by $\mathcal{L}(\hat{y},y)$ to calculate the network error.

### 3.1 Stress on DNNs

To formulate internal stress on the network, we consider two filtering domains: local (the parameters of any specific layer) and global (the parameters of the whole network). We apply synaptic filtering to filter the parameters of trainable convolutional layers and fully connected layers of the network; the non-trainable parameters, however, remain unaffected by the synaptic filtering procedure. The $l$-th layer network parameters (local parameters) are given as $\mathbf{W}^{(l)}$, while the global network parameters are $\mathbf{W}$. For convenience, we denote the network parameters to be evaluated by the synaptic filtering methods as $\theta$, where $\theta=\mathbf{W}^{(l)}$ is the local parameter analysis [31] and $\theta=\mathbf{W}$ is the global parameter analysis, as mentioned in [44].

###### Definition 1 (Synaptic filtering).

Synaptic filtering involves taking a network $f(x,\theta)$ with parameters $\theta$ as an input and producing a filtered network $f(x,\tilde{\theta}_{\alpha})$ with filtered parameters $\tilde{\theta}_{\alpha}$ as:

$\tilde{\theta}_{\alpha}=B_{\alpha}\odot\theta_{\alpha},\quad B_{\alpha}\in\{0,1\}^{|\theta_{\alpha}|},$ (1)

where $\alpha=\{\alpha_{0},\alpha_{1},\dots,\alpha_{A}\}$ are the normalised synaptic filtering thresholds across the complete parameter range of the evaluated network/layer, with a lower bound $\alpha_{0}=0$, upper bound $\alpha_{A}=1$ and step size $\Delta_{\alpha}$ given by $\alpha_{1}=\alpha_{0}+\Delta_{\alpha}$. For synaptic filtering of a network, we have $\hat{y}=f(x,\theta)$ as the network predictions for the unperturbed network and $\hat{y}_{\alpha}=f(x,\tilde{\theta}_{\alpha})$ as the network predictions for the perturbed network. In Eq.
1, $B_{\alpha}$ is a binary mask for a threshold $\alpha$ that filters parameters, $\theta_{\alpha}$ is the set of parameters to be filtered, which may differ from $\theta$, and $\odot$ is the element-wise product operator. To further constrain the internal stress analysis, we require that the network parameters to be filtered, $\theta$, are not a zero vector prior to the synaptic filtering, i.e., $\theta$ must be a trained network: $\theta\neq\textbf{0}$. If this constraint is not met, the prediction of the network $f(x,\theta)$ will result in output values of zero for all inputs.

###### Definition 2 (Internal stress - synaptic filtering of the DNN parameters).

The internal stress on a DNN is the application of the synaptic filtering method with various magnitudes of $\alpha$, ranging from a minimum filtering threshold $\alpha_{0}$ to the maximum filtering threshold $\alpha_{A}$, in order to obtain a set of $|\alpha|$ filtered networks $\mathcal{S}_{\alpha}$:

$\mathcal{S}_{\alpha}=\{f(x,\tilde{\theta}_{\alpha_{0}}),f(x,\tilde{\theta}_{\alpha_{1}}),\dots,f(x,\tilde{\theta}_{\alpha_{A}})\}.$ (2)

By evaluating a network under internal stress, we examine how the filtering of the learned parameters of a network influences the network performance, thus identifying the specific filtering thresholds required to bring the network to failure. Considering external stress as variations to the input $x$, we introduce $x_{\epsilon}=x+\delta_{\epsilon}$ as the perturbed example of $x$ with an adversarial perturbation $\delta_{\epsilon}\in\mathbb{R}^{d}$, where $\epsilon=\{\epsilon_{0},\epsilon_{1},\dots,\epsilon_{E}\}$ are the perturbation magnitudes with minimum perturbation magnitude $\epsilon_{0}$, maximum perturbation magnitude $\epsilon_{E}$ and step size $\Delta_{\epsilon}$ given by $\epsilon_{1}=\epsilon_{0}+\Delta_{\epsilon}$. Using a single adversarial attack formulation method $\delta$, we define $\hat{y}=f(x,\theta)$ as the network predictions on the clean dataset and $\hat{y}_{\epsilon}=f(x_{\epsilon},\theta)$ as the network predictions on the adversarial dataset [Fig. 1(Left)]. When dealing with external stress only, $\theta$ is taken as the complete set of network parameters $\mathbf{W}$. The performance of $f(x_{\epsilon},\theta)$ can inform us of the ability of the network to remain stable under external stress (input perturbations) applied to the network. This is achieved through a comparison of the network performance on a clean dataset and an adversarially perturbed dataset. There are numerous variations of $\delta$ that can be used to form external stress on the network, from targeting specific features of $x$ to drawing distortions from a different distribution [25, 26]. However, in this work, we only focus on one perturbation method $\delta$, i.e., the FGSM attack, as our objective is only to compare DNN performance on clean and perturbed inputs (Fig. 1). When applying external stress with various magnitudes $\epsilon$, we get a set of perturbed inputs for the network $\mathcal{S}_{\epsilon}$:

###### Definition 3 (External stress - adversarial attack on DNN).
The external stress on a DNN is the application of an adversarial attack with various perturbation magnitudes $\epsilon$, ranging from a minimum perturbation magnitude $\epsilon_{0}$ to the maximum perturbation magnitude $\epsilon_{E}$, in order to obtain a set of $|\epsilon|$ inputs to the network $\mathcal{S}_{\epsilon}$:

$\mathcal{S}_{\epsilon}=[f(x_{\epsilon_{0}},\theta),\dots,f(x_{\epsilon_{E}},\theta)].$ (3)

With external stress on a network, we examine how the variations in the input environment influence the network performance, thus identifying the specific magnitudes of the attack required to bring the network to failure. An important consideration to make when analyzing networks using internal and external stress in Definitions 2 and 3 is that a resultant perturbed network ($\mathcal{S}_{\alpha}$ and $\mathcal{S}_{\epsilon}$) may offer equal performance to the unperturbed network, i.e., for all inputs $x$ in the test set, we observe:

$p(\hat{y}_{\alpha}=y|f(x,\tilde{\theta}_{\alpha}))\approx p(\hat{y}_{\alpha}=y|f(x,\theta))\text{ for internal stress threshold }\alpha,\text{ and}$
$p(\hat{y}_{\epsilon}=y|f(x_{\epsilon},\theta))\approx p(\hat{y}_{\epsilon}=y|f(x,\theta))\text{ for external stress magnitude }\epsilon,$

where $p(\cdot)$ is a function that measures the network accuracy over all inputs $x$. This indicates that even under stress, a DNN may perform equivalently to an unperturbed network. Therefore, in order to evaluate the performance of a network under stress, we must define a baseline network performance against which we can measure the performance of perturbed and unperturbed networks. A baseline network performance can vary for different types of stress (internal or external), as there may arise instances where the response of the baseline network performance, defined as $\hat{f}(x,\theta_{\alpha})$ for internal stress and $\hat{f}(x_{\epsilon},\theta)$ for external stress, is not necessarily the same as the performance of the initially trained network (unperturbed network) $f(x,\theta)$. The baseline network performance for a combination of internal and external stress is defined as $\hat{f}(x_{\epsilon},\theta_{\alpha})$, where the baseline network is a function of $\epsilon$ and $\alpha$. To give context on why the baseline network performance may not necessarily be the same as the performance of an unperturbed network, consider the example where we apply internal stress to a DNN: the result is a set of filtered networks $\mathcal{S}_{\alpha}$. If we define the upper bound of the stress magnitude to equal the total number of network parameters, i.e., $\alpha_{A}=|\theta|$, then we obtain a network with parameter values of zero, $\tilde{\theta}_{\alpha_{A}}=\textbf{0}$. Noticeably, the performance of a maximally perturbed network $f(x,\tilde{\theta}_{\alpha_{A}})$ cannot equal the performance of the unperturbed network $f(x,\theta)$, i.e., $f(x,\tilde{\theta}_{\alpha_{A}})\neq f(x,\theta)$. Thus we require the baseline network performance to be a function of the magnitude of stress applied on the DNN. A detailed description of the baseline network performance is given later in Sec. 4.1.2.

### 3.2 Fragility, Robustness and Antifragility

Here we define the three characterizations of network parameters: fragility, robustness and antifragility. In order to define the different characterizations of network parameters, we must establish the stress under which we evaluate network parameter fragility, robustness and antifragility.
The stress in question may be internal ($\mathcal{S}_{\alpha}$) or external ($\mathcal{S}_{\epsilon}$), or a combination of the two. For simplicity, we consider only internal network stress for the definitions provided below. However, the change of variables from $\mathcal{S}_{\alpha}$ to $\mathcal{S}_{\epsilon}$, from $\hat{f}(x,\tilde{\theta}_{\alpha})$ to $\hat{f}(x_{\epsilon},\theta)$, and from $\Delta_{\alpha}$ to $\Delta_{\epsilon}$ will give the definitions of fragility, robustness and antifragility for external stress.

###### Definition 4 (Fragility).

The parameters of a network are fragile if the performance of the network decreases below a threshold $-\varepsilon$, compared to the baseline network performance, for all magnitudes of the applied stress. Formally, the fragility to internal stress can be defined as:

$\sum_{i=0}^{A}[\mathcal{S}_{\alpha_{i}}-\hat{f}(x,\tilde{\theta}_{\alpha_{i}})]\Delta_{\alpha}<-\varepsilon,$ (4)

where $\Delta_{\alpha}$ is the change in synaptic filtering threshold $\alpha$, $A$ is equal to $|\alpha|$, and $\varepsilon\geq 0$ asserts a variable fragility measure, as shown in Fig. 2(b) (red shaded region). When the threshold $\varepsilon=0$, we have a strict fragility condition. Equation (4) computes the discrete area difference between the stressed network performance and the baseline network performance for all stress magnitudes of $\alpha$.

###### Definition 5 (Robustness).

The parameters of a network are robust if the performance of the network is invariant to within a threshold $\pm\varepsilon$, compared to the baseline network performance, for all magnitudes of the applied stress. Formally, the robustness to internal stress can be defined as:

$-\varepsilon\leq\sum_{i=0}^{A}[\mathcal{S}_{\alpha_{i}}-\hat{f}(x,\tilde{\theta}_{\alpha_{i}})]\Delta_{\alpha}\leq\varepsilon,$ (5)

where $\Delta_{\alpha}$ is the change in synaptic filtering threshold $\alpha$, and $\varepsilon\geq 0$ asserts a variable robustness measure, as shown in Fig. 2(b) (green shaded region). When the threshold $\varepsilon=0$, we have a strict robustness condition. Equation (5) computes the discrete area difference between the stressed network performance and the baseline network performance for all stress magnitudes of $\alpha$.

###### Definition 6 (Antifragility).

The parameters of a network are antifragile if the performance of the network increases above a threshold $\varepsilon$, compared to the baseline network performance, for all magnitudes of the applied stress. Formally, the antifragility to internal stress can be defined as:

$\varepsilon<\sum_{i=0}^{A}[\mathcal{S}_{\alpha_{i}}-\hat{f}(x,\tilde{\theta}_{\alpha_{i}})]\Delta_{\alpha},$ (6)

where $\Delta_{\alpha}$ is the change in synaptic filtering threshold $\alpha$, and $\varepsilon\geq 0$ asserts a variable antifragility measure, as shown in Fig. 2(b) (blue shaded region). When the threshold $\varepsilon=0$, we have a strict antifragility condition. Equation (6) computes the discrete area difference between the stressed network performance and the baseline network performance for all stress magnitudes of $\alpha$.

## 4 Methodology of DNN parameter characterization

In this Section, we present the methodology of DNN parameter characterization that is shown in Fig. 1. Concisely, Fig.
1 shows that this methodology has two major aspects: (a) the application of internal and external stress on a DNN in terms of synaptic filtering and an adversarial attack, and (b) a process to characterize parameters as fragile, robust and antifragile. This section first explains how we apply internal and external stress on DNNs in Sec. 4.1 and then introduces parameter scores that characterize the parameters in Sec. 4.2. Finally, we discuss the experiment setting in Sec. 4.3.

### 4.1 Framework of internal and external stress on DNNs

We systematically apply internal and external stress on DNNs. The process of internal and external stress on DNNs is shown in Fig. 3, which is a three-step framework (adversarial attack on DNNs, synaptic filtering of DNNs, combined network performance) that leads to parameter score calculation for the DNN parameter characterization.

Fig. 3: Synaptic filtering framework. Left block (1) shows the input $x$ at time $t_{0}$; the network $f(x,\textbf{W})$ with parameters $\textbf{W}$; the adversarial attack $\delta$ [this study computes $\delta$ using $f(x,\textbf{W})$]; the perturbation magnitude $\epsilon$ and the resultant adversarial example $x_{\epsilon}$ at time $t_{1}$. The perturbation magnitude $\epsilon$ is bounded by ($\hat{y}_{\epsilon}\approx\hat{y}$) and ($\hat{y}_{\epsilon}K>1$) for $K$ classes; $\hat{y}$ and $\hat{y}_{\epsilon}$ are the clean and adversarial accuracies. Middle block (2) outlines the set of synaptic filters $\textbf{h}$, containing the $h_{1},h_{2}$ and $h_{3}$ filters at each point $\alpha_{i}$, applied to layer $\textbf{W}^{(l)}$, resulting in the network performance under the filters. There are $A$ sets of $\textbf{h}$, one for each $\alpha_{i}\in[\alpha_{0}$, $\alpha_{A}]$. Right block (3) shows $\beta^{(l)}=f(x,\textbf{h}(\theta,\alpha))$ as the system performances for all values of $\alpha$, where $\theta$ is $\textbf{W}^{(l)}$ for a local analysis at layer $l$. The function $g(\cdot)$ combines $\beta^{(l)}_{1},\beta^{(l)}_{2}$ and $\beta^{(l)}_{3}$ into a combined system performance $\hat{\bar{\beta}}^{(l)}$.

#### 4.1.1 Attack on DNNs

In evaluating networks under internal stress, we compare the network performances under the synaptic filtering procedure for clean and adversarial (external stress) datasets (discussed in the following Sec. 4.1.2). In this study, we work primarily with the FGSM attack [13] for the adversarial perturbation formulation; other attack formulation methods would not affect the synaptic filtering described in this section. The synaptic filtering technique is designed to be applied to a network with any variation on the inputs; therefore, the nature of the attack formulation method can be changed without affecting the synaptic filtering technique. In order to experiment with an adversarial dataset, we must define some constraints of the attack [Fig. 3(Left block)], such that the synaptic filtering responses are comparable between different network architectures and datasets. The constraints imposed upon the adversarial attack magnitude $\epsilon$ are as follows:

###### Definition 7 (minimum attack bound $\epsilon_{0}$ – constraint 1).

We limit the adversarial attack to follow $p(\hat{y}_{\epsilon}=y|x+\delta_{\epsilon_{0}})<p(\hat{y}=y|x)$, for all inputs $x$ in the test dataset. This constraint allows us to select a suitable minimum attack magnitude $\epsilon_{0}$, such that otherwise correctly classified inputs are misclassified due to the adversarial attack.
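Constraint 1 can be checked directly while sweeping the attack magnitude. The following minimal sketch (not the authors' implementation; the PyTorch-style classifier `model` and the labelled test batch `(x, y)` are assumed purely for illustration) generates the external-stress set $\mathcal{S}_{\epsilon}$ of Definition 3 with FGSM and screens each magnitude against constraint 1. Definitions 8 and 9 below complete the remaining constraints.

```python
# Hedged sketch: FGSM epsilon-sweep for the external-stress set S_eps (Definition 3),
# keeping only magnitudes that satisfy constraint 1 (Definition 7).
# `model`, `x`, `y` are assumed placeholders, not objects defined in the paper.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps):
    """Single-step FGSM: x_eps = x + eps * sign(grad_x L(f(x), y))."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

def external_stress_set(model, x, y, eps_values):
    """Build the set of perturbed inputs over a range of magnitudes eps."""
    clean_acc = accuracy(model, x, y)
    s_eps = {}
    for eps in eps_values:
        x_eps = fgsm_perturb(model, x, y, eps)
        # constraint 1: the attack must reduce accuracy below the clean accuracy
        if accuracy(model, x_eps, y) < clean_acc:
            s_eps[eps] = x_eps
    return s_eps
```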
###### Definition 8 (maximum attack bound $\epsilon_{E}$ – constraint 2).

We limit the adversarial attack to a suitable maximum attack magnitude $\epsilon_{E}$, such that the network test accuracy remains above a random guess ($\hat{y}_{\epsilon_{E}}K>1$), i.e., we have the constraint $p(\hat{y}_{\epsilon_{E}}=y|x+\delta_{\epsilon_{E}})K>1$, for all inputs $x$ in the test dataset.

###### Definition 9 (relative attack $\epsilon$ – constraint 3).

To compare the performance of different network architectures and datasets under the synaptic filtering procedure, we must consider values of $\epsilon$ for different networks/datasets that reduce the network performance equally. Considering two different networks $f_{1}$ and $f_{2}$, we use a single attack $\delta$, for which $\epsilon_{1}$ and $\epsilon_{2}$ are the relative attack magnitudes for $f_{1}$ and $f_{2}$. Suitable values of $\epsilon_{1}$ and $\epsilon_{2}$ should be chosen such that $f_{1}(x,\theta)-f_{1}(x+\delta_{\epsilon_{1}},\theta)\approx f_{2}(x,\theta)-f_{2}(x+\delta_{\epsilon_{2}},\theta)$, thus ensuring that the adversarial perturbations affect the network performances equally.

#### 4.1.2 Synaptic filtering of DNNs

We investigate a set of synaptic filters $\textbf{h}=\{h_{1},h_{2},h_{3}\}$ containing three different synaptic filters [Fig. 3(Middle block)]: $h_{1}$, the ideal high-pass filter; $h_{2}$, the ideal low-pass filter; and $h_{3}$, the pulse wave filter. The operation of filtering for the three different filters is detailed in Eq. (1). We apply filter $h_{1}$ to the learned (unperturbed) network parameters $\theta$, resulting in perturbed network parameters $\tilde{\theta}_{1,\alpha_{i}}$ for a given threshold $\alpha_{i}$, as per:

$\tilde{\theta}_{1,\alpha_{i}}=h_{1}(\theta,\alpha_{i})=\begin{cases}0&\text{if }\theta\leq\alpha_{i},\\ 1&\text{otherwise},\end{cases}$ (7)

where $\alpha_{i}\in\alpha$ and $\alpha=\{\alpha_{0},\alpha_{1},\dots,\alpha_{A}\}$; we create $|\alpha|$ thresholds between the lower and upper bounds $\alpha_{0}=\min(\theta)$ and $\alpha_{A}=\max(\theta)$. This results in a set of filtered networks $\mathcal{S}_{\alpha}$ with each threshold defined as $\alpha_{i}=\alpha_{i-1}+\Delta_{\alpha}$ for a step length $\Delta_{\alpha}=[\max(\theta)-\min(\theta)]/A$ (i.e. $\Delta_{\alpha}=1/A$ when $\alpha$ is normalised between 0 and 1) and $i\in\{0,1,\dots,A\}$. Similar to filter $h_{1}$, we apply the filter $h_{2}$ to the learned (unperturbed) network parameters $\theta$ from the opposite direction, resulting in perturbed network parameters $\tilde{\theta}_{2,\alpha_{i}}$ for an $\alpha_{i}\in\alpha$:

$\tilde{\theta}_{2,\alpha_{i}}=h_{2}(\theta,\alpha_{i})=\begin{cases}0&\text{if }\theta\geq\alpha_{i},\\ 1&\text{otherwise},\end{cases}$ (8)

where $\alpha_{i}=\alpha_{i-1}-\Delta_{\alpha}$, $\alpha_{0}=\min(-\theta)$, and $\alpha_{A}=\max(-\theta)$. The pulse wave filter $h_{3}$, applied to $\theta$, results in the same filtered parameters $\tilde{\theta}_{3,\alpha_{i}}$ whether the values of $\alpha_{i}$ are increased from $\min(\theta)$ to $\max(\theta)$ or decreased from $\max(\theta)$ to $\min(\theta)$.
The result of filter $h_{3}$ is given by:

$\tilde{\theta}_{3,\alpha_{i}}=h_{3}(\theta,\alpha_{i})=\begin{cases}0&\text{if }\alpha_{i}-\frac{\Delta_{\alpha}}{2}<\theta\leq\alpha_{i}+\frac{\Delta_{\alpha}}{2},\\ 1&\text{otherwise},\end{cases}$ (9)

where $\alpha_{i}=\alpha_{i-1}\pm\Delta_{\alpha}$, $\alpha_{0}=\min(\pm\theta)$, and $\alpha_{A}=\max(\pm\theta)$. In Eq. (9), the threshold window shifts by $\Delta_{\alpha}$, centred at threshold $\alpha_{i}$ with either side having a length $\frac{\Delta_{\alpha}}{2}$. These three filters $h_{1}$, $h_{2}$ and $h_{3}$, each with distinct properties, when applied to a DNN with threshold $\alpha_{i}$ offer three sets of distinct perturbed networks $f(\tilde{\theta}_{1,\alpha_{i}},x)$, $f(\tilde{\theta}_{2,\alpha_{i}},x)$ and $f(\tilde{\theta}_{3,\alpha_{i}},x)$. Therefore, we require three baseline network performances, corresponding to the properties of the respective synaptic filters, against which the three sets of perturbed networks are compared.

##### Baseline Network Performances

We denote $\phi_{1},\phi_{2}$ and $\phi_{3}$ to be the numbers of parameters filtered out by the synaptic filters $h_{1},h_{2}$ and $h_{3}$ corresponding to filtering threshold $\alpha_{i}$. If the synaptic filtering procedure is only applied to a local layer $l$ (e.g., only on a convolutional layer or a linear layer), then $\phi^{(l)}$ is the maximum number of parameters in local layer $l$. For the whole network, $\phi$ denotes the maximum number of parameters in the network. Let us consider $\phi^{(l)}_{1}$ to be the number of parameters filtered out by the filter $h_{1}$ for the layer $l$ at threshold $\alpha_{i}$; then the baseline network performance $\overline{\phi}^{(l)}_{1,\alpha_{i}}$ at threshold $\alpha_{i}$ is given as:

$\overline{\phi}^{(l)}_{1,\alpha_{i}}=1-\frac{\phi^{(l)}_{1,\alpha_{i}}}{\phi^{(l)}}.$ (10)

Similarly, the baseline network performances for filters $h_{2}$ and $h_{3}$ are $\overline{\phi}^{(l)}_{2,\alpha_{i}}$ and $\overline{\phi}^{(l)}_{3,\alpha_{i}}$. In Eq. (10), the fraction $\frac{\phi^{(l)}_{1}}{\phi^{(l)}}$ is the ratio of the number of parameters removed to the total number of parameters in the layer, defining the compactness of the filtered layer. We consider the baseline network performance for all values of $\alpha$, which we use to determine the parameter characteristics under synaptic filtering, as a function that reduces the network performance proportionally to the internal stress (synaptic filtering) applied on the network (see Fig. 4(a)). Using this definition of the baseline network performance, we expect the network performance to decrease proportionally to the number of parameters filtered by the synaptic filtering procedure (see Fig. 4(b)). The underlying assumption of the baseline network performance is that the parameters being filtered out have an overall influence on the network performance. Hence, the baseline network performance represents the expected behaviour of the network, given as the classification accuracy on the test set, whilst the network is subjected to the synaptic filtering procedure for all synaptic filtering threshold values $\alpha$.

(a) Baseline Network Performance. (b) Baseline Network Performance and Scaled Synaptic Filtering Performance.

Fig. 4: (a) Baseline network performance (green dotted line) $\bar{\phi}^{(l)}_{1}$ [Eq. (10)] for ResNet-18 trained for 100 epochs on CIFAR10.
$\phi^{(l)}_{1}$ is the function that contains the number of parameters filtered (teal solid line) for filtering thresholds in $\alpha$ for filter $h_{1}$ on layer $l$. The maximum number of parameters in layer $l$ is denoted by $\phi^{(l)}$ (yellow dot). (b) Comparison of the scaled synaptic filtering performance with the baseline network performance, and the synaptic robustness computation. The network performance under the synaptic filter is $\beta^{(l)}_{1}$ (blue dotted line), which is scaled w.r.t. the unperturbed network accuracy $f(x,\theta)$ (red dot), resulting in $\bar{\beta}^{(l)}_{1}$ (blue solid line). The blue shaded region $r_{x}$, enclosed by the baseline system response $\bar{\phi}^{(l)}_{1}$ (green dotted line), is the area [Eq. (14) and Eq. (15)] of synaptic robustness.

##### Network Compactness

Our synaptic filtering method is a systematic ablation of DNN parameters to analyze variations in network performance caused by parameter filtering. We show that a proportion of the network parameters can be filtered out from a DNN whilst retaining (and occasionally improving) the network performance on both clean and adversarially perturbed test sets [40]. The characteristics of the baseline network performance describe a network with parameters that, when filtered, proportionally influence the network performance. From Eq. (10), the proposed baseline network performance is linked to the compactness of the network/layer; the characteristics of the baseline network performance are inversely proportional to the compactness ratio of the network/layer weights. For a specific non-random synaptic filtering method, the compactness characteristics of a network are constant for different variations to the input (e.g. adversarial attacks). Thus, we can compare the scaled network performances of a network on both clean and adversarial datasets against the baseline network performance.

##### Network vs. adversary

For a network, we define the network performances for all synaptic filtering thresholds $\alpha$ to be an $|\alpha|$-length vector of the network prediction accuracies $p(\hat{y}_{\alpha}=y|f(x,\tilde{\theta}_{\alpha}))$ on the test set $x$. The network performances under the synaptic filters $h_{1}$ [Eq. (7)], $h_{2}$ [Eq. (8)] and $h_{3}$ [Eq. (9)] are given as $\beta_{1}$, $\beta_{2}$ and $\beta_{3}$, respectively. We construct a clean network performance matrix $\beta$ on inputs $x$ by combining $\beta_{1}$, $\beta_{2}$ and $\beta_{3}$ as:

$\beta=\begin{bmatrix}\beta_{1}\\ \beta_{2}\\ \beta_{3}\end{bmatrix}=\begin{bmatrix}p(\hat{y}_{\alpha}=y|f(\tilde{\theta}_{1},x))\\ p(\hat{y}_{\alpha}=y|f(\tilde{\theta}_{2},x))\\ p(\hat{y}_{\alpha}=y|f(\tilde{\theta}_{3},x))\end{bmatrix}.$ (11)

We further apply the synaptic filtering to the network under an adversarial attack $\delta$ with perturbation magnitudes $\epsilon$, resulting in the adversarial network performance matrix $\beta_{\epsilon}$:

$\beta_{\epsilon}=\begin{bmatrix}\beta_{1,\epsilon}\\ \beta_{2,\epsilon}\\ \beta_{3,\epsilon}\end{bmatrix}=\begin{bmatrix}p(\hat{y}_{\alpha}=y|f(\tilde{\theta}_{1},x_{\delta_{\epsilon}}))\\ p(\hat{y}_{\alpha}=y|f(\tilde{\theta}_{2},x_{\delta_{\epsilon}}))\\ p(\hat{y}_{\alpha}=y|f(\tilde{\theta}_{3},x_{\delta_{\epsilon}}))\end{bmatrix}$ (12)

##### Targeted parameters

The matrices $\beta$ and $\beta_{\epsilon}$ are the network performances on clean and adversarial datasets under the synaptic filtering method; they are the two different DNN states to be compared (a sketch of how one such row might be computed is given below).
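As an illustration only (this is not the authors' released implementation), the following hedged sketch sweeps the high-pass filter $h_{1}$ [Eq. (7)] over one layer of a PyTorch-style model, records the baseline of Eq. (10), and accumulates the discrete area difference that anticipates the clean-data score $r_{x}$ of Eq. (14) in Sec. 4.2.1. The helper `evaluate(model, loader)` returning test-set accuracy, the `layer_name`, and the data `loader` are assumed placeholders.

```python
# Hedged sketch: one row of the clean performance matrix beta for filter h1 on one layer.
import copy
import torch

def filter_sweep_h1(model, layer_name, loader, evaluate, A=25):
    clean_acc = evaluate(model, loader)                    # unperturbed accuracy f(x, theta)
    weight = dict(model.named_parameters())[layer_name + ".weight"]
    lo, hi = weight.min().item(), weight.max().item()
    delta = (hi - lo) / A                                  # step length Delta_alpha
    n_total = weight.numel()
    beta_bar, phi_bar = [], []
    for i in range(A + 1):
        alpha_i = lo + i * delta
        filtered = copy.deepcopy(model)
        w = dict(filtered.named_parameters())[layer_name + ".weight"]
        with torch.no_grad():
            mask = (w > alpha_i).float()                   # h1: zero all parameters with theta <= alpha_i
            w.mul_(mask)
        beta_bar.append(evaluate(filtered, loader) / clean_acc)  # response scaled w.r.t. f(x, theta)
        n_removed = n_total - int(mask.sum().item())
        phi_bar.append(1.0 - n_removed / n_total)          # baseline performance, Eq. (10)
    # discrete area between scaled response and baseline over normalised alpha (cf. Eq. (14))
    r_x = sum(b - p for b, p in zip(beta_bar, phi_bar)) / A
    return beta_bar, phi_bar, r_x
```

The rows for $h_{2}$ and $h_{3}$ would follow the same loop with their respective masks, and evaluating the filtered networks on FGSM-perturbed inputs rather than clean ones would yield the corresponding row of $\beta_{\epsilon}$.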
Thus, through a comparison of $\beta$ and $\beta_{\epsilon}$ (see Fig. 5), we expose the specific parameters (targeted parameters) that are either negatively, invariantly or positively affecting the synaptic filtering performances for the adversarial dataset, compared to the clean dataset.

Fig. 5: Learning landscape of layers and the regime change of test accuracy. Targeted parameters of ResNet-18 trained on MNIST using filter $h_{1}$. Showing the combined responses for layer ‘layer3.0.conv2’, measured every 10 epochs up to 100 epochs. The difference between the clean (left) and adversarial (middle) responses results in the targeted parameters (right). Every pixel on the clean and adversarial images represents the network test accuracy, and for the targeted image it is the difference between the former two, over all evaluated epochs and $\alpha$.

#### 4.1.3 Combined network performance of synaptic filters

Different synaptic filtering methods expose different characterisations of the parameters of the network. Thus we combine the network performances of the different synaptic filters using a function $g(\cdot)$ to form a combined network performance $\tilde{\beta}$, as shown in the synaptic filtering framework in Fig. 3(right). In order to combine the performances, let us consider $\beta$ as the network performance to be combined; the procedure is the same for the adversarial network performances under the synaptic filters, $\beta_{\epsilon}$. We take $\bar{\beta}$ as the performance of the perturbed network (synaptic filtering performance) relative to the unperturbed network performance $f(x,\theta)$. Subsequently, we take the mean of the performances of the network over all three different filters, as such:

$\tilde{\beta}_{i}=g(\overline{\beta}_{i})=\frac{1}{|h|}\sum_{j\in h}\overline{\beta}_{j,i}\quad\text{ for }i=1,\ldots,|\alpha|.$ (13)

Fig. 6 shows an example of the network accuracy results of the synaptic filtering procedure applied to layer ‘conv1’ of ResNet-18 trained on CIFAR10. The top row shows the effects of the three different filters on network accuracy; the middle row shows each filter’s epoch-wise effect on network accuracy. The third row is the effect of the combined response [as per Eq. (13)] of the filters. Similarly, the combined adversarial network performance $\tilde{\beta}_{\epsilon}$ is computed by replacing $\bar{\beta}$ with $\bar{\beta}_{\epsilon}$ in Eq. (13), where $\bar{\beta}_{\epsilon}$ is the performance of the perturbed network (synaptic filtering performance) relative to the unperturbed network performance $f(x_{\epsilon},\theta)$ on adversarially perturbed datasets $x_{\epsilon}$. Although the combined network performances $\tilde{\beta}$ and $\tilde{\beta}_{\epsilon}$ offer more descriptive information to examine the network parameters, a single synaptic filter is also able to expose the targeted network parameters. As calculating $\tilde{\beta}$ and $\tilde{\beta}_{\epsilon}$ is computationally expensive for local analysis (as this increases exponentially with the number of local layers in a DNN), we suggest computing $\tilde{\beta}$ and $\tilde{\beta}_{\epsilon}$ for all network parameters (i.e., global analysis). Fig. 7 shows the combined synaptic filtering responses (the local response is in the left three columns and the global response is on the rightmost column of Fig. 7) for ResNet-18 trained on the MNIST, CIFAR10, and Tiny ImageNet datasets (see each row in Fig. 7 for the respective dataset) for every 10 epochs up to 100 epochs. Fig.
6: Example of network accuracy results of the synaptic filtering procedure applied to layer 'conv1' of ResNet-18 trained on CIFAR10, shown to illustrate the combined system response. The bottom-left plot is a combination of the three top-row plots.

Fig. 7: Combined synaptic filtering responses for ResNet-18 trained on the CIFAR10, MNIST and Tiny ImageNet datasets for every 10 epochs up to 100 epochs. (1) Local layer-wise system response to the filtering methods for all $\alpha$ values. (2) Global network responses using the full network for all $\alpha$ values. Pixel intensities on the shown images represent the average network accuracy using the different synaptic filters on the clean test dataset, for each $\alpha_{i}$ in $\alpha$.

### 4.2 Parameter scoring for DNN parameter characterization

To expose the network parameters targeted by the adversary, let us consider the network performance $\beta^{(l)}_{1}$ for synaptic filter $h_{1}$ on layer $l$. We scale $\beta^{(l)}_{1}$ relative to $f(x,\theta)$, resulting in $\overline{\beta}^{(l)}_{1}$; the baseline network performance is $\overline{\phi}^{(l)}_{1}$ [Eq. (10)], and the procedure is captured in Fig. 4. Similarly, we compute $\overline{\beta}_{2}^{(l)}$ and $\overline{\beta}_{3}^{(l)}$ for synaptic filters $h_{2}$ and $h_{3}$. The combined performance of the three different synaptic filters is $\hat{\bar{\beta}}^{(l)}$ [Eq. (13)].

(a) Clean and Adversarial Performances (b) Parameter Score Computation

Fig. 8: (a) Synaptic parameter score computation for clean and adversarial inputs (ResNet-18, CIFAR10, 100 epochs, layer 'conv1'). The scaled network performance on the clean $\bar{\beta}^{(l)}_{1}$ (top) and adversarial $\bar{\beta}^{(l)}_{1,\epsilon}$ (bottom) datasets. The clean and adversarial parameter scores are $r_{x}$ and $r_{x_{\epsilon}}$. (b) The behaviour of the network responses (ResNet-18, CIFAR10, 100 epochs, layer 'conv1') using synaptic filtering on the CIFAR10 clean $\bar{\beta}^{(l)}_{1}$ and adversarial $\bar{\beta}^{(l)}_{1,\epsilon}$ datasets over normalised $\alpha$ (top). The area $r_{\epsilon}$ [Eq. (16)] is the adversarial parameter score (bottom).

#### 4.2.1 Parameter score for clean data

We take the baseline network performance $\bar{\phi}^{(l)}_{1}$ [Eq. (10)] for synaptic filter $h_{1}$ as the reference against which we evaluate the filtered network responses. We take $\bar{\phi}^{(l)}_{1}$ to describe a network/layer that contains neither an excess nor a deficiency of parameters that influence the network performance (i.e. the removal of any parameter affects the network performance). The network performance, on average, will decrease in proportion to the number of network parameters ablated by synaptic filtering. The parameter score under synaptic filtering for a network using a clean dataset is $r_{x}$ [shown in Fig. 8(a) (top)], given as:

$r_{x}=\sum_{i=0}^{A}(\bar{\beta}^{(l)}_{1}-\bar{\phi}^{(l)}_{1})\Delta_{\alpha},$ (14)

where $\Delta_{\alpha}$ is the change in the $\alpha$ threshold window. A parameter score equal to 0 signifies that the network/layer responds, on average, proportionally to synaptic filtering, i.e., proportionally to variations in architecture, and thus is considered robust. Where the score $r_{x}$ is less than 0, this indicates that the network/layer contains fragile parameters w.r.t. the network performance.
Conversely, where the value of $r_{x}$ is greater than $0$, the parameter score indicates that the network/layer contains antifragile parameters, where the removal of parameters from the network/layer results in a network performance that is better than the baseline network performance.

#### 4.2.2 Parameter score for adversarial data

The parameter score under synaptic filtering for a network using an adversarial dataset is $r_{x_{\epsilon}}$, and is calculated using the baseline network performance $\bar{\phi}^{(l)}_{1}$. The baseline network performance is compared with the adversarial dataset performance $\bar{\beta}^{(l)}_{1,\epsilon}$ to give the parameter characterization score $r_{x_{\epsilon}}$, as per:

$r_{x_{\epsilon}}=\sum_{i=0}^{A}(\bar{\beta}^{(l)}_{1,\epsilon}-\bar{\phi}^{(l)}_{1})\Delta_{\alpha},$ (15)

where $\Delta_{\alpha}$ is the change in the $\alpha$ threshold window. A parameter score equal to 0 signifies that the network/layer responds, over all magnitudes of internal stress, proportionally to synaptic filtering, i.e., proportionally to variations in architecture, and thus is considered robust. Where the scores $r_{x}$ and $r_{x_{\epsilon}}$ are less than $0$, this indicates that the network/layer contains fragile parameters w.r.t. the network performance. Conversely, where $r_{x}$ and $r_{x_{\epsilon}}$ are greater than $0$, the scores indicate that the network/layer contains antifragile parameters.

#### 4.2.3 Difference of parameter scores

To compute the effects of the adversarial attack on the parameter characterisation using our proposed synaptic filtering method, we take the baseline network performance to be the synaptic filtering performance on the clean dataset ($\bar{\phi}^{(l)}_{1}=\bar{\beta}^{(l)}_{1}$). The difference between the adversarial dataset performance $\overline{\beta}^{(l)}_{1,\epsilon}$ and the clean dataset performance $\overline{\beta}^{(l)}_{1}$ (the baseline network performance) results in the effects of the adversary on the synaptic filtering performance of the network. We take the area of the residual as the effect of the adversary on the network. The value of $r_{\epsilon}$ is computed by taking the discrete area difference, as shown in Fig. 8(b) (bottom), and expressed as:

$r_{\epsilon}=\sum_{i=0}^{A}(\overline{\beta}^{(l)}_{1,\epsilon}-\overline{\beta}^{(l)}_{1})\Delta_{\alpha}.$ (16)

If the network performs equally on clean and adversarial datasets for all filtering thresholds $\alpha$, the value of $r_{\epsilon}=0$. Where $r_{\epsilon}<0$, the network performance on the adversarial dataset is greater than the network performance on the clean dataset. This signifies that the evaluated network/layer contains parameters that increase the network performance on the adversarial dataset compared to the clean dataset. Conversely, $r_{\epsilon}>0$ signifies that the network performance on the clean dataset is greater than the network performance on the adversarial dataset. This signifies that the evaluated network/layer contains parameters that decrease the network performance on the adversarial dataset compared to the clean dataset. Hence, the magnitude of $r_{\epsilon}$ gives us a scalar value of the difference in clean and adversarial responses to the filtering.

### 4.3 Experimental set-up

Our experiment setting includes standard training of state-of-the-art DNNs on popular benchmark datasets.
##### State-of-the-art datasets used

All experiments in this study (source code: https://github.com/SynapFilter/InferLink) are performed on the MNIST [18], CIFAR10 [19] and Tiny ImageNet [20] datasets. The MNIST and CIFAR10 datasets both contain 80,000 examples in the training set and 10,000 examples in the test set. The Tiny ImageNet dataset contains 80,000 training and 20,000 test examples drawn from the original training set [20].

##### State-of-the-art DNNs studied

On the benchmark datasets, we train ResNet-18, ResNet-50 [15], SqueezeNet v1.1 [16] and ShuffleNet V2 x1.0 [17]. Each network was trained for 100 epochs, and the model at every 10 epochs was stored for analysis with our methodology. We investigated all convolutional and fully connected layers of ResNet-18, ResNet-50, SqueezeNet v1.1 and ShuffleNet V2 x1.0 only; any intermediary functions, such as the batch normalization layers, activation functions and pooling layers, remain unaltered.

##### Training of DNNs on clean datasets

For the training, the parameters of each DNN were initialised using the Kaiming Uniform [45] method. We use a cross-entropy loss function and the Adam optimizer [46] configured with $\gamma=0.001$, $\beta_{1}=0.9$, $\beta_{2}=0.999$, $\lambda=0$ and $\varepsilon=1\times 10^{-08}$ for training the networks. We train the networks using clean datasets only and apply the adversarial attack only to the test datasets for analysis using the synaptic filtering methodology. Networks are saved every 10 epochs during network training up to a maximum of 100 epochs. Saved networks are subsequently evaluated using the proposed synaptic filtering methodology; the results presented in Sec. 5 are shown for these saved networks.

##### Adversarial attack on datasets

For the adversarial attack, we use the single-step FGSM attack [13] and analyze the difference in network performance on the test set under the proposed synaptic filtering methods (Sec. 4). The effectiveness of an adversarial attack on a given dataset can be attributed to the complexity of the dataset the attack has been applied to.

##### Collection of results

We normalize the $r_{x}$ and $r_{x_{\epsilon}}$ parameter score values from Sec. 4.2.1 to be between -0.5 (indicating fragility) and 0.5 (indicating antifragility), with the mid-point being 0 (indicating robustness). We carry out the same normalization procedure independently for all $r_{\epsilon}$ values from Sec. 4.2.3 to be between -0.5 and 0.5. For each network and dataset, the synaptic filtering responses are averaged over three different randomly initialised (as per [47]) and trained networks. In order to satisfy constraint 3 from Sec. 4, we use a line search algorithm to find the optimal $\epsilon$ value for each model and dataset that satisfies $f(x+\delta_{\epsilon},\mathbf{W})\approx 0.5\cdot f(x,\mathbf{W})$. When carrying out the synaptic filtering procedure, we select $A=25$ for all experiments carried out. Therefore, the filtering step size is $\Delta_{\alpha}=0.04$ over the normalised range of parameters in the evaluated network/layer. Where computational resources permit, we recommend using larger values of $A$ for experimentation in order to compute the parameter scores more accurately.

## 5 Results and Analysis

The results of the global (full network parameters) and local (network layer parameters) analysis shown in Figs. 9 and 10 describe the fragility, robustness, and antifragility characteristics of parameters (cf. Sec. 4.2.1 and Figs. 9 and 10).
Furthermore, the results show the adversarially targeted ($r_{\epsilon}$) parameters (cf. Sec. 4.2.3 and Figs. 9 and 10). We identify parameter characteristics that are invariant for clean and adversarial datasets across different datasets and networks.

(a) ResNet-18 (b) ResNet-50 (c) SqueezeNet-v1.1 (d) ShuffleNet V2 x1.0

Fig. 9: Global parameter scores $r_{x},r_{x_{\epsilon}}$ and $r_{\epsilon}$ of (a) ResNet-18, (b) ResNet-50, (c) SqueezeNet-v1.1 and (d) ShuffleNet V2 x1.0 over all datasets, measured every 10 epochs up to 100 epochs for the whole network parameters using synaptic filter $h_{1}$. The parameter score interpretation is given in Sec. 4.2.1 and Sec. 4.2.3.

##### Fragility, robustness, and antifragility

The global parameter scores for the networks on the different datasets are shown in Fig. 9. We find that the ResNet-18 and ResNet-50 networks exhibit parameter characteristics that are invariant across different datasets, particularly for the $r_{x}$ and $r_{x_{\epsilon}}$ values related to the CIFAR10 and ImageNet Tiny performances, over 100 epochs. The adversarial targeting results ($r_{\epsilon}$ values) for the CIFAR10 and ImageNet Tiny responses are comparable with the $r_{\epsilon}$ values for MNIST, suggesting that the clean dataset response is consistently greater than the adversarial dataset performance. From the ShuffleNet V2 x1.0 parameter scores, we find distinctions in $r_{x}$ and $r_{x_{\epsilon}}$ for the MNIST dataset over 100 epochs. We see the ShuffleNet V2 x1.0 parameters as transitioning from fragile to antifragile for the ImageNet Tiny dataset. From the SqueezeNet-v1.1 results for the ImageNet Tiny dataset, we observe a convergence of $r_{x}$ and $r_{x_{\epsilon}}$ to 0 over 100 epochs, indicating that the network performance is robust for both clean and adversarial datasets. The local parameter scores provide a learning landscape with which to examine individual network parameters (Fig. 10). All of the evaluated network and dataset parameter scores exhibit invariant fragility characteristics (marked as ‘Fr’) at the first convolutional layer and the $l$-th linear layer, for both the clean ($r_{x}$) and adversarial ($r_{x_{\epsilon}}$) parameter scores. This is further shown in Fig. 10(a) ImageNet Tiny; Fig. 10(c) MNIST, CIFAR10 and ImageNet Tiny, and Fig. 10(d) ImageNet Tiny. We see the presence of robust parameters (marked as ‘Ro’) in Fig. 10(a) CIFAR10 and ImageNet Tiny; Fig. 10(b) ImageNet Tiny; Fig. 10(c) MNIST and CIFAR10, and Fig. 10(d) CIFAR10 and ImageNet Tiny. Antifragile parameters (marked as ‘Af’) are distinctly visible in Fig. 10(a) MNIST and CIFAR10; Fig. 10(b) MNIST, CIFAR10 and ImageNet Tiny; Fig. 10(d) MNIST and CIFAR10. Furthermore, periodic robustness characteristics are shown in Fig. 10(a) ImageNet Tiny; Fig. 10(b) MNIST, CIFAR10 and ImageNet Tiny; Fig. 10(c) MNIST, CIFAR10, ImageNet Tiny, and Fig. 10(d) CIFAR10 and ImageNet Tiny.

##### Adversarially targeted parameters

In Fig. 5, we present the parameters targeted by an adversarial attack using the combined network response for ResNet-18 trained on MNIST. We further see targeted parameters using the parameter scores $r_{\epsilon}$ (Sec. 4.2.3) from Fig. 9 and Fig. 10. In Fig. 10, we show that the network response is greater for the adversarial dataset than the clean dataset (marked by ‘$r_{x_{\epsilon}}$’), as shown in layers of Fig. 10(c) CIFAR10 and ImageNet Tiny. We find instances where both the adversarial performance and clean performance are equal, indicating that the layer response is robust (marked by ‘Ro’) and shown in Fig.
10(a) MNIST, CIFAR10 and ImageNet Tiny; Fig. 10(b) MNIST, CIFAR10 and ImageNet Tiny; Fig. 10(c) MNIST, and Fig. 10(d) CIFAR10 and ImageNet Tiny. Furthermore, we see instances of the network performance for the clean dataset being greater than that of the adversarial dataset (marked by ‘$r_{x}$’), shown in Fig. 10(a) MNIST and CIFAR10; Fig. 10(b) MNIST, CIFAR10 and ImageNet Tiny, and Fig. 10(d) MNIST and CIFAR10.

(a) ResNet-18 (b) ResNet-50 (c) SqueezeNet-v1.1 (d) ShuffleNet V2 x1.0

Fig. 10: Local parameter scores of (a) ResNet-18, (b) ResNet-50, (c) SqueezeNet-v1.1 and (d) ShuffleNet V2 x1.0 over all datasets. The parameter scores $r_{x},r_{x_{\epsilon}}$ and $r_{\epsilon}$ are measured every 10 epochs up to 100 epochs and for all layers in the network for filter $h_{1}$. The parameter score interpretation is given in Sec. 4.2.1 and Sec. 4.2.3. The fragile, robust, and antifragile parameters of the network are respectively represented by the values 'Fr', 'Ro' and 'Af' (see rightmost colour bar).

##### Effects of batch normalization

(a) Filter $h_{1}$ (b) Filter $h_{2}$

Fig. 11: Synaptic filtering network performances of ResNet-18 trained on CIFAR10 for layers 'layer2.0.conv1', 'layer3.0.conv1' and 'layer4.0.downsample.0'. The results show that, even after filtering all parameters in a layer, the network performs relatively well (shown by the purple dotted lines at $\Phi^{(l)}$, the maximum number of parameters filtered). This is due to features from the following batch normalization layer propagating through the network during the forward pass, thus highlighting the effects of batch normalisation layers on network performance.

We investigated the phenomenon of the network retaining classification performance despite all parameters at layer $l$ being removed (see column $\alpha_{A}$ in Fig. 5). When we investigate the output of layers deeper than $l$, we discover that residual features continue to propagate through the network despite the filtering out of network weights at the $l$-th layer. This is attributed to the batch normalization (BN) layers that follow convolutional layers and are tasked with minimising covariate shift in the network [48]. When implementing a network architecture, we utilise the standard models in accordance with the literature; the functionality of batch normalization layers is also predefined and remains unaltered in our analysis. Consider the condition where a convolutional layer $l$ has been filtered maximally using a synaptic filter; the subsequent batch normalization computation is given as:

$\hat{y}^{(l)}=\frac{x^{(l)}-\mathop{\mathbb{E}}[x]}{\sqrt{\mathrm{Var}[x]+\epsilon}}\cdot\gamma^{(l)}+\beta^{(l)},$ (17)

where $\hat{y}^{(l)}$ is the output of the batch normalization process at the output of convolutional layer $l$, and $\hat{y}^{(l-1)}$ is the output of the previous layer $l-1$, so that the input to the BN layer is $x^{(l)}=f(\tilde{\theta}^{(l)}_{\{1,2,3\}},\hat{y}^{(l-1)})$. The variables $\gamma^{(l)}$ and $\beta^{(l)}$ are learnable parameter vectors and $\epsilon$ is a value added to the denominator for numerical stability (set to $1\times 10^{-5}$). Implementations of networks compute the expectation and variance from Eq. (17) as running statistics during network training; the statistics calculated during training are used during network inference.
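A minimal sketch of this effect (not taken from the authors' code base; it uses PyTorch's `nn.BatchNorm2d` with default settings, and the non-zero shift written into `bias` merely stands in for a learned $\beta^{(l)}$) is:

```python
# Hedged illustration: a BN layer in inference mode produces non-zero output
# even when its input (the output of a maximally filtered convolution) is all zeros.
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(num_features=8)        # BN layer following the filtered convolutional layer
bn.train()
_ = bn(torch.randn(32, 8, 4, 4))           # a "training" batch populates the running statistics
with torch.no_grad():
    bn.bias.fill_(0.1)                     # stand-in for a learned, non-zero shift beta^(l)

bn.eval()                                  # inference: running statistics and affine terms are used
zero_input = torch.zeros(1, 8, 4, 4)       # output of a maximally filtered convolutional layer
out = bn(zero_input)
print(out.abs().max())                     # > 0: features still propagate to deeper layers
```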
In consequence, when the input to the BN layer following convolutional layer $l$ is a zero vector (the case where layer $l$ has been filtered maximally through synaptic filtering), the BN layer still outputs features derived from the training batches, even when evaluating test sets. This is shown by the results in Fig. 11, where filtering the parameters of certain layers results in only a slight decrease in network performance. The ability of the network to retain sufficient performance, despite the filtering out of certain layer parameters, is due to the features propagated during the forward pass by the batch normalization layer that follows the filtered layer.

##### Selective backpropagation on robust and antifragile parameters

Having identified robust, fragile and antifragile parameters using the difference in parameter scores (see Sec. 4.2.3), we consider fragile parameters to be those that, when perturbed, result in a greater degradation of synaptic filtering performance on the clean dataset than on the adversarial dataset. Robust parameters are invariant to both clean and adversarial datasets, and antifragile parameters show an increased network performance on the clean dataset compared to the adversarial dataset. Thus, we consider fragile parameters to be parameters that are important to the network performance on the adversarial dataset. We propose selectively retraining only the robust and antifragile parameters using backpropagation. To carry out this operation during network training, we take a layer-wise approach that considers the parameter characterization scores of individual network layers, and we omit the layers characterized as fragile, i.e. those with negative parameter characterization scores, from network training by zeroing out their update gradients during training. The results of our selective backpropagation method are shown in Fig. 12, where the mean (solid lines) and standard deviation (coloured shaded regions) of the network performances are shown for networks tested from epoch 10 to epoch 100, measured every 10 epochs. We test each network up to a maximum perturbation magnitude (external stress magnitude) of $\epsilon_{E}$, which is selected using Definitions 7, 8 and 9. As can be seen from the results, our proposed method (shown in teal) outperforms the networks trained using regular backpropagation (shown in orange) when considering robustness to adversarial attacks. Our proposed method improves network robustness more on the CIFAR10 (Fig. 12(b)) and ImageNet Tiny (Fig. 12(c)) datasets than on the MNIST (Fig. 12(a)) dataset. The greater effectiveness of selective backpropagation on CIFAR10 and ImageNet Tiny compared to MNIST can be attributed to the complexity of the datasets [49], with MNIST having a lower complexity relative to CIFAR10 and ImageNet Tiny.

Fig. 12: Selective backpropagation re-training of robust and antifragile parameters. Network accuracy of ResNet-18 on (a) MNIST, (b) CIFAR10 and (c) ImageNet Tiny under external stress (adversarial attack) of magnitude $\epsilon$.
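A minimal sketch of this layer-wise gradient masking is given below (assuming a PyTorch training loop; the fragile layer names are hypothetical placeholders, not taken from the paper). Only robust and antifragile parameters receive updates; in practice the fragile set would be refreshed from the layer-wise characterization scores at the epochs where the parameter scores are evaluated.

```python
# Hypothetical illustration: layer prefixes characterized as fragile
# (negative characterization score); the names below are placeholders.
fragile_layers = {"conv1", "layer4.1.conv2"}

def selective_backprop_step(model, loss, optimizer):
    """Backpropagate, then zero the gradients of fragile layers so that only
    robust and antifragile parameters are updated (assumes a PyTorch model)."""
    optimizer.zero_grad()
    loss.backward()
    for name, param in model.named_parameters():
        # Skip the update for any parameter belonging to a fragile layer.
        if param.grad is not None and any(name.startswith(p) for p in fragile_layers):
            param.grad.zero_()
    optimizer.step()
```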
## 6 Conclusions

We can examine deep neural networks using our proposed synaptic filtering technique, which characterizes the parameters of a network as fragile, robust or antifragile using both clean and adversarial inputs as a test bed. When subjected to synaptic filtering and an adversarial attack, fragile parameters are those that cause a decrease in DNN performance, parameters characterized as robust keep the DNN performance within a defined tolerance threshold (e.g. a $\pm 2\%$ change in DNN performance), and parameters characterized as antifragile cause an increase in DNN performance. Such an identification method can be applied to distill a trained network in order to make it usable in resource-constrained applications, such as wearable devices. We offer parameter scores to evaluate the effects of specific parameters on the network performance and to expose parameters targeted by an adversary. We find that there are global and local filtering responses whose features are invariant to different datasets over the learning process of a network. For a given dataset, the filtering scores identify the parameters whose characteristics are invariant across different network architectures. We analyze the performance of DNN architectures through a selective backpropagation technique in which we retrain only the robust and antifragile parameters at a given epoch. We compare the selective backpropagation technique with regular training and show that retraining only robust and antifragile parameters improves DNN robustness to adversarial attacks on all evaluated datasets and network architectures.

## References

* [1] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” _Nature_ , vol. 521, no. 7553, pp. 436–444, 2015. * [2] W. Samek, G. Montavon _et al._ , “Explaining deep neural networks and beyond: A review of methods and applications,” _Proc IEEE_ , vol. 109, no. 3, pp. 247–278, 2021. * [3] N. Papernot, P. McDaniel _et al._ , “The limitations of deep learning in adversarial settings,” in _EuroS&P_, 2016, pp. 372–387. * [4] N. Carlini and D. Wagner, “Towards evaluating the robustness of neural networks,” in _Proc IEEE Symp Secur Priv (SP)_ , 2017. * [5] N. Srivastava, G. Hinton _et al._ , “Dropout: a simple way to prevent neural networks from overfitting,” _JMLR_ , vol. 15, no. 1, pp. 1929–1958, 2014. * [6] R. Yu, A. Li, C.-F. Chen, J.-H. Lai, V. I. Morariu, X. Han, M. Gao, C.-Y. Lin, and L. S. Davis, “Nisp: Pruning networks using neuron importance score propagation,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , June 2018. * [7] Z. Mariet and S. Sra, “Diversity networks: Neural network compression using determinantal point processes,” _arXiv preprint arXiv:1511.05077_ , 2015. * [8] B. S. Oken, I. Chamine, and W. Wakeland, “A systems approach to stress, stressors and resilience in humans,” _Behavioural brain research_ , vol. 282, pp. 144–154, 2015. * [9] C. Szegedy, W. Zaremba _et al._ , “Intriguing properties of neural networks,” in _ICLR_ , 2014. * [10] B. Biggio, I. Corona _et al._ , “Evasion attacks against machine learning at test time,” in _ECML PKDD_ , 2013. * [11] N. N. Taleb and R. Douady, “Mathematical definition, mapping, and detection of (anti) fragility,” _Quantitative Finance_ , vol. 13, no. 11, pp. 1677–1689, 2013. * [12] T. Freiesleben, “The intriguing relation between counterfactual explanations and adversarial examples,” _Minds and Machines_ , vol. 32, no. 1, pp. 77–109, 2022. * [13] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” in _ICLR_ , 2015. * [14] S. Huang, N. Papernot _et al._ , “Adversarial attacks on neural network policies,” in _ICLR_ , 2017. * [15] K. He, X. Zhang _et al._ , “Deep residual learning for image recognition,” in _CVPR_ , 2016. * [16] F. N. Iandola, S.
Han _et al._ , “SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and $<$0.5mb model size,” _arXiv:1602.07360v4_ , 2016. * [17] N. Ma, X. Zhang _et al._ , “ShuffleNet V2: Practical guidelines for efficient CNN architecture design,” in _ECCV_ , 2018. * [18] Y. LeCun and C. Cortes, “MNIST handwritten digit database,” 2010, http://yann.lecun.com/exdb/mnist/. * [19] A. Krizhevsky, G. Hinton _et al._ , “Learning multiple layers of features from tiny images,” 2009. [Online]. Available: https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf * [20] Y. Le and X. Yang, “Tiny imagenet visual recognition challenge,” 2015, stanford CS 231N. * [21] I. N. Karatsoreos and B. S. McEwen, “Psychobiological allostasis: resistance, resilience and vulnerability,” _Trends in cognitive sciences_ , vol. 15, no. 12, pp. 576–584, 2011. * [22] V. Ramanujan, M. Wortsman, A. Kembhavi, A. Farhadi, and M. Rastegari, “What’s hidden in a randomly weighted neural network?” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 11 893–11 902. * [23] P. Molchanov, A. Mallya, S. Tyree, I. Frosio, and J. Kautz, “Importance estimation for neural network pruning,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , June 2019. * [24] X. Gao, R. K. Saha, M. R. Prasad, and A. Roychoudhury, “Fuzz testing based data augmentation to improve robustness of deep neural networks,” in _2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE)_. IEEE, 2020, pp. 1147–1158. * [25] Y. Wang, S. Wu _et al._ , “Demiguise attack: Crafting invisible semantic adversarial perturbations with perceptual similarity,” in _IJCAI_ , 2021\. * [26] H. Xu, Y. Ma _et al._ , “Adversarial attacks and defenses in images, graphs and text: A review,” _Int. J. of Autom. and Comput._ , vol. 17, no. 2, pp. 151–178, 2020. * [27] D. Tsipras, S. Santurkar _et al._ , “Robustness may be at odds with accuracy,” in _ICLR_ , 2019. * [28] N. Akhtar and A. Mian, “Threat of adversarial attacks on deep learning in computer vision: A survey,” _IEEE Access_ , vol. 6, pp. 14 410–14 430, 2018. * [29] P. Samangouei, M. Kabkab, and R. Chellappa, “Defense-GAN: Protecting classifiers against adversarial attacks using generative models,” in _ICLR_ , 2018. * [30] X. Yuan, P. He _et al._ , “Adversarial examples: Attacks and defenses for deep learning,” _IEEE Trans Neural Netw Learn Syst_ , vol. 30, no. 9, pp. 2805–2824, 2019. * [31] S. Han, J. Pool _et al._ , “Learning both weights and connections for efficient neural networks,” in _NIPS_ , 2015. * [32] K. A. Sankararaman, S. De _et al._ , “The impact of neural network overparameterization on gradient confusion and stochastic gradient descent,” in _ICML_ , 2020. * [33] S. Kornblith, M. Norouzi _et al._ , “Similarity of neural network representations revisited,” in _ICML_ , 2019. * [34] P. Nakkiran, G. Kaplun, Y. Bansal, T. Yang, B. Barak, and I. Sutskever, “Deep double descent: Where bigger models and more data hurt,” _Journal of Statistical Mechanics: Theory and Experiment_ , vol. 2021, no. 12, p. 124003, 2021\. * [35] V. Ojha and G. Nicosia, “Backpropagation neural tree,” _Neural Networks_ , vol. 149, pp. 66–83, 2022. * [36] A. Ilyas, S. Santurkar _et al._ , “Adversarial examples are not bugs, they are features,” in _NIPS_ , 2019. * [37] C. Pravin, I. Martino, G. Nicosia, and V. Ojha, “Adversarial robustness in deep learning: attacks on fragile neurons,” in _International Conference on Artificial Neural Networks_. 
Springer, 2021, pp. 16–28. * [38] D. Blalock, J. J. Gonzalez Ortiz, J. Frankle, and J. Guttag, “What is the state of neural network pruning?” in _Proceedings of Machine Learning and Systems_ , I. Dhillon, D. Papailiopoulos, and V. Sze, Eds., vol. 2, 2020, pp. 129–146. * [39] R. Taylor, V. Ojha, I. Martino, and G. Nicosia, “Sensitivity analysis for deep learning: ranking hyper-parameter influence,” in _2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI)_. IEEE, 2021, pp. 512–516. * [40] A. S. Rakin, Z. He, L. Yang, Y. Wang, L. Wang, and D. Fan, “Robust sparse regularization: Simultaneously optimizing neural network robustness and compactness,” 2019. * [41] S. Ye, K. Xu, S. Liu, H. Cheng, J.-H. Lambrechts, H. Zhang, A. Zhou, K. Ma, Y. Wang, and X. Lin, “Adversarial robustness vs. model compression, or both?” in _Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)_ , October 2019. * [42] A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial machine learning at scale,” in _ICLR_ , 2017. * [43] J. Wang, “Adversarial examples in physical world,” in _IJCAI_ , 2021. * [44] J. Frankle and M. Carbin, “The lottery ticket hypothesis: Finding sparse, trainable neural networks,” 2018. * [45] K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,” in _Proceedings of the IEEE International Conference on Computer Vision (ICCV)_ , December 2015. * [46] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in _ICLR_ , 2015. * [47] K. He, X. Zhang _et al._ , “Delving deep into rectifiers: Surpassing human-levelperformance on ImageNet classification,” in _ICCV_ , 2015. * [48] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in _Proceedings of the 32nd International Conference on Machine Learning_ , ser. Proceedings of Machine Learning Research, F. Bach and D. Blei, Eds., vol. 37. PMLR, 2015, pp. 448–456. * [49] F. Branchaud-Charron, A. Achkar, and P.-M. Jodoin, “Spectral metric for dataset complexity assessment,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , June 2019.
# Camera View Adjustment Prediction for Improving Image Composition

Yu-Chuan Su, Raviteja Vemulapalli, Ben Weiss, Chun-Te Chu, Philip Andrew Mansfield, Lior Shapira, Colvin Pitts

Google Research

###### Abstract

Image composition plays an important role in the quality of a photo. However, not every camera user possesses the knowledge and expertise required for capturing well-composed photos. While post-capture cropping can sometimes improve the composition, it does not work in many common scenarios in which the photographer needs to adjust the camera view to capture the best shot. To address this issue, we propose a deep learning-based approach that provides suggestions to the photographer on how to adjust the camera view before capturing. By optimizing the composition before a photo is captured, our system helps photographers to capture better photos. As there is no publicly-available data for this task, we create a view adjustment dataset by repurposing existing image cropping datasets. Furthermore, we propose a two-stage semi-supervised approach that utilizes both labeled and unlabeled images for training a view adjustment model. Experiment results show that the proposed semi-supervised approach outperforms the corresponding supervised alternatives, and our user study results show that the suggested view adjustment improves image composition $79\%$ of the time.

## 1 Introduction

Image composition has a significant effect on the perception of an image. While a good composition can help make a great picture out of the dullest subjects and plainest of environments, a bad composition can easily ruin a photograph no matter how interesting the subject may be. Unfortunately, a typical camera user may lack the knowledge and expertise required to capture images with great composition.

Figure 1: Our goal is to improve the composition of the captured photo by providing view adjustment suggestions when the user is composing the shot.

A commonly used technique for improving image composition is image cropping, and several existing works study how to crop images automatically [38, 5, 35, 29, 4, 49, 23, 11, 31, 40, 10, 7, 12, 44, 6, 42, 47, 26, 39, 48]. However, cropping works only in limited scenarios in which the best composition can be achieved by removing certain portions of the image. It is not suitable in many common scenarios where the photographer needs to adjust the camera view to get the best shot. When evaluated on our view adjustment dataset, the crops predicted by the state-of-the-art GAIC cropping model [48] have an average IoU of 0.61 with the ground truth views, clearly indicating that cropping is not enough. In comparison, the proposed view adjustment model achieves a much higher IoU of 0.75. While there are some existing rules for composing photos, each rule is valid only for specific scenes and requires the detection of various low-level (leading lines, triangles, etc.) and high-level semantic (face/person, foreground objects) cues. Furthermore, it is non-trivial to determine which rule or combination of rules is applicable for a given scene. To address this problem, we introduce a system that provides camera view adjustment suggestions to the photographer when they are composing the shot. Given a view composed by the user, our goal is to suggest a candidate view adjustment and its magnitude such that the photo captured after applying the adjustment will have a better composition (see Fig. 1).
Specifically, we consider the following adjustments in this work: horizontal (left or right), vertical (up or down), zoom (in or out), and rotation (clockwise or counter clockwise) along the principal axis. The adjustment magnitude is represented using a percentage of the image size for all adjustments except rotation, for which we use radians. By adjusting the view prior to capture, we enable generic modifications to image composition. Hence, our system can improve image composition in scenarios where cropping fails (see Fig. 2). Note that this work focuses on static scenes, or more specifically scenes where motion is not the subject (e.g. portraits, nature and urban environment photography). Suggesting view adjustments may not suit dynamic (wildlife, sports, action) scenes where lightning-quick decisions are required from the photographer.

To the best of our knowledge, there is no publicly-available data for evaluating the performance of view adjustment prediction. While a common practice is for human raters to annotate images with ground truth labels, this is difficult for view adjustment because the results of an adjustment are generally not available. The raters would need to infer how the adjustment affects the composition, which may be difficult without professional photography knowledge. Instead, we create a new view adjustment dataset from existing image cropping datasets. The idea is to convert view adjustments into operations on 2D image bounding boxes and use the best crop annotation as the target view for adjustment. The proposed approach allows us to generate samples and view adjustment labels from image cropping datasets automatically, without additional human labor.

Figure 2: View adjustment enables more diverse modifications to image composition, and can improve composition in scenarios where image cropping fails.

We acknowledge that our view adjustment dataset is not ideal in several respects. It ignores perspective distortions while adjusting the camera view, and the adjustment magnitude could be limited depending on the ratios between the best crop and the uncropped image sizes. Despite these limitations, our view adjustment dataset still provides a good starting point for evaluating composition-aware view adjustment prediction models. Furthermore, we address these limitations by also evaluating the view adjustment model on $360\degree$ images, which do not suffer from distortions or limited field-of-view (FOV). Please refer to Sec. 4.3 for details. Another limitation of our view adjustment dataset is that its amount and diversity are inherently limited by the cropping datasets. Because state-of-the-art machine learning models typically require a large and diverse dataset to train well, our view adjustment dataset may not be sufficient for training a good view adjustment model. In light of this problem, we propose a two-stage training approach that makes use of additional unlabeled images. See Fig. 3. Our empirical results show that the additional unlabeled data is important for improving model performance. We evaluate our approach on a view adjustment dataset consisting of 3,026 samples generated from 521 images from the FCDB [6] and GAICD [47] datasets. Quantitative results show that the proposed semi-supervised approach clearly outperforms the corresponding supervised alternatives, and a user study shows that the adjustments suggested by our model improve the composition $79\%$ of the time.
Figure 3: Our two-stage semi-supervised approach leverages both labeled and unlabeled data to learn the view adjustment model. In the first stage, we learn a composition scoring model from both labeled data and unlabeled well-composed images. The scoring model is used to generate pseudo view adjustment labels for unlabeled images. These pseudo-labeled images, along with real labeled images, are used to train the view adjustment model in the second stage.

Our major contributions are as follows. First, we formulate the problem of view adjustment prediction for improving image composition. Second, we introduce a labeled dataset for evaluating view adjustment prediction models. Finally, we propose a two-stage semi-supervised approach that leverages both labeled and unlabeled data to train the view adjustment model. We show that the proposed semi-supervised approach quantitatively outperforms the corresponding supervised alternative and demonstrate the effectiveness of our model through a user study.

## 2 Related Work

#### Image cropping

Cropping is a widely used technique for changing image composition during post-processing, and automatic image cropping algorithms aim to find the best crop within an image. One common approach followed by existing methods is to select the candidate crops using a scoring function, and the research focus has been on designing a good scoring function for cropping. Existing works exploit saliency [38, 5, 35, 29, 4], photography rules [49, 23, 11], or a data-driven approach to learn the scoring function [31, 40, 10, 7, 12, 44, 6, 42, 47, 26, 39, 48, 22]. While early methods learn the cropping model using unsupervised data [7, 12] or data annotated for generic image aesthetic quality [31, 40, 10], recent works show that a large-scale annotated dataset designed specifically for cropping is essential for learning a state-of-the-art image cropping model [42, 47, 26, 39, 48]. Instead of following the scoring-function paradigm, some works try to predict the target crop directly without generating and scoring candidate crops [19, 20, 25, 21, 24]. Our goal is not to produce the best image crop; it is to improve image composition while the photographer is composing the image. Because view adjustment is a more general operation than image cropping, it may improve composition in scenarios where cropping is not suitable. Furthermore, a view adjustment model needs to make a suggestion based on partial information, while the target crop is fully visible to image cropping models. This introduces unique challenges in view adjustment prediction for both data collection and modeling.

#### Photography recommendation

Prior works on photography recommendation provide various types of suggestions. Some of them study person location recommendation [50, 43, 28, 27, 41, 34]. They take a scenic image as input and suggest where a person should stand within the image. Others study photography at popular landmarks and suggest either a view enclosure in the input image or a geo-location for capturing a better photo [16, 45, 51, 46, 32, 33]. These works utilize abundant web photos captured near the target landmark in order to provide landmark-specific suggestions. Our problem differs from the above in that we provide view adjustment suggestions to the photographer. Furthermore, our method is applicable to any photo instead of being restricted to specific image content or geo-locations.
Similar to our approach, Virtual Portraitist [15] utilizes weakly-labeled “positives” and suggests how to adjust the head pose to approximate positive samples, which can be considered as an inverse camera view adjustment. However, it is designed specifically for learning head poses in selfies and is not applicable to other types of photos. While some works also provide view recommendations in arbitrary photos [8, 36], they assume the availability of a wide-angle image and find the target view within it. Therefore, they require specialized hardware and are not applicable to any camera. Finally, some commercial systems provide photography recommendations in the form of a suggested viewfinder center [1, 3], but only as a black-box system. To the best of our knowledge, this is the first work that formulates and introduces an evaluation benchmark for the view adjustment prediction problem.

## 3 Approach

In this section, we introduce our approach for learning view adjustment prediction. We first formalize our problem and provide an overview of the two-stage training approach. Next, we describe the two models and our pseudo-label generation process in more detail.

### 3.1 Problem Formulation

Given an input image, the goal of view adjustment is to answer the following questions: 1) whether there exists a candidate view adjustment that will improve the composition of the image, 2) if the composition can be improved, which candidate view adjustment can lead to the best composition, and 3) given the candidate view adjustment, what is the appropriate magnitude for the adjustment. To create a dataset with the view adjustment labels for the above questions, we propose generating the sample images and ground truth labels from image cropping data. Given an image and the best crop within the image, we perturb the image crop by one of the candidate view adjustments with a random magnitude. We take the perturbed crop as the image sample and use the inverse perturbation as the ground truth label for view adjustment. We also use the best crop as a sample, where the label is that no adjustment is required. The candidate adjustments considered in this work include horizontal (left or right), vertical (up or down), zoom (in or out), and rotation (clockwise or counter clockwise) about the principal axis. We focus on these eight adjustments because they can be presented in the camera viewfinder and executed by the photographer easily. Note that rotations about the yaw and pitch axes may be hard to distinguish from translation on a 2D display, and their effects may be similar depending on the scene. Also, more complex adjustments can be achieved by performing multiple basic adjustments in a sequence. We control the perturbation magnitude such that 1) the perturbed crop has sufficient overlap with the annotated crop, and 2) the perturbed crop is in the original image. See supp. for data generation details.

While the proposed approach allows us to generate training samples for view adjustment from image cropping datasets, the diversity of the generated data is controlled by the source cropping datasets. To overcome this limitation, we propose a two-stage training approach that makes use of additional unlabeled images. In the first stage, we learn a composition scoring model that rates the composition quality of a given image. We then use this scoring model to generate pseudo view adjustment labels for unlabeled images.
This is achieved by simulating candidate view adjustments for a given image and selecting the adjustment that leads to the best-scoring image as the pseudo label. In the second stage, we train a view adjustment model using both the labeled images generated from cropping datasets and the additional pseudo-labeled images. See Fig. 3. The two-stage training approach exploits unlabeled data to increase the amount of training data and improve model performance in both stages. Details are provided in the following sections. Also see supp. for the training pipeline summary.

### 3.2 Composition Scoring Model

We implement the composition scoring model using a deep convolutional neural network (CNN), denoted $M_{c}$. The model takes an RGB image as input and produces a score in the range $[0,1]$ as output. Since the purpose of this model is to compare the results of different view adjustments, we train it using a pairwise ranking loss. Let $(I_{p},I_{n})$ be two images of the same scene where $I_{p}$ has better composition than $I_{n}$. The pairwise ranking loss encourages the score of $I_{p}$ to be more than the score of $I_{n}$ by a margin $\delta$: $\max(0,\ \delta+M_{c}(I_{n})-M_{c}(I_{p})).$ (1) To train this model, we use pairwise comparison data generated from both labeled image cropping data and unlabeled well-composed images.

#### Labeled data

We utilize image cropping datasets in the following formats to generate image pairs $(I_{p},I_{n})$:

* _Scored candidate crops_: in this format, each image comes with a pre-defined set of candidate crops along with their composition scores. In each training iteration, we choose $N$ crops from a random image and generate $\frac{N(N-1)}{2}$ pairs. $I_{p}$ and $I_{n}$ are determined for each pair by comparing the provided scores. We use the pairwise ranking loss averaged over these $\frac{N(N-1)}{2}$ pairs, denoted as $L_{sc}$, for training.
* _Best crop annotation_: this format provides a best crop for each image. We take the best crop as $I_{p}$ and generate $I_{n}$ by randomly perturbing the bounding box of the best crop. We restrict the perturbation magnitude such that $I_{n}$ remains within the original image. See supp. for the perturbation details. In each training step, we generate $K$ pairs from $K$ images and use the pairwise ranking loss averaged over these $K$ pairs, denoted as $L_{bc}$, for training.

#### Unlabeled data

Modern CNNs require a large and diverse dataset for training. While recent image cropping datasets provide a large number of candidate crops [42, 47, 48], they usually come from a relatively small number of images. To increase the amount and diversity of the training data, we exploit unlabeled images following prior works on learning image aesthetics [7, 12]. We first collect well-composed images from a photography website (unsplash.com) where the photos are contributed by experienced photographers. Given a well-composed image, we randomly perturb the image (by cropping, shifting, zooming-out, or rotation) to generate a new image with inferior composition. We use the original image as $I_{p}$ and the perturbed image as $I_{n}$ to form an image pair. See supp. for perturbation details. Unlike the perturbation in the labeled data, we allow the generated image to go beyond the original image. Regions in $I_{n}$ that are not visible in $I_{p}$ are filled with zero pixels. Our method for using unlabeled data differs from prior works in that we use perturbations beyond cropping. This is important because our goal is to provide more general view adjustment, and empirical results show that using cropping alone leads to worse performance.
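A minimal sketch of the pairwise ranking objective in Eq. (1) is given below (assuming a PyTorch implementation; the margin value and the way score batches are formed are illustrative placeholders rather than the exact training configuration).

```python
import torch

def pairwise_ranking_loss(score_p: torch.Tensor,
                          score_n: torch.Tensor,
                          delta: float = 0.1) -> torch.Tensor:
    """Eq. (1): hinge on the margin between better- and worse-composed images.

    score_p: M_c(I_p) for the better-composed images, shape (B,)
    score_n: M_c(I_n) for the worse-composed images, shape (B,)
    delta:   ranking margin (illustrative value)
    """
    return torch.clamp(delta + score_n - score_p, min=0).mean()

# Usage sketch: the scores come from the composition scoring model M_c in [0, 1], e.g.
#   loss = pairwise_ranking_loss(M_c(I_p), M_c(I_n))
# The per-source averages L_sc, L_bc and L_wc are each computed this way and summed.
```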
In each training step, we generate $P$ pairs from $P$ unlabeled images and use the ranking loss averaged over these $P$ pairs, denoted as $L_{wc}$, for training. The total loss function used to train the composition scoring model is given by $L=L_{sc}+L_{bc}+L_{wc}$.

#### Data augmentations

Many of the pairs generated from unlabeled images will have zero-pixel borders in $I_{n}$ but not in $I_{p}$. When trained on such pairs, the composition scoring model may just learn to score images based on the presence or absence of zero-pixel borders instead of focusing on the content. To avoid this, we introduce three types of zero-pixel border augmentations:

* _Shift borders_ selects a certain percentage of columns on either the left or the right side of the image, and a certain percentage of rows at either the top or the bottom of the image. The pixel values in the selected rows and columns are replaced with zeros. This simulates the zero-pixel borders introduced by shift perturbations.
* _Zoom-out borders_ selects a certain percentage of rows and columns on all sides and replaces the pixel values in these rows and columns with zeros. This simulates the zero-pixel borders introduced by the zoom-out perturbation.
* _Rotation borders_ selects a random $\theta$ and applies $R_{-\theta}\circ R_{\theta}$ to the image, where $R_{\theta}$ is the rotation operator. This simulates the borders introduced by a rotation perturbation.

We randomly apply one of these augmentations to image $I_{p}$ in the pairs generated from unlabeled images so that the model learns to focus on the image content rather than on zero-pixel borders. We also add these augmentations to both $I_{p}$ and $I_{n}$ in the pairs generated from labeled data. Our composition scoring model is motivated by existing cropping models [7, 42, 47]. The main difference is that existing cropping models are designed to compare image crops that are fully visible to the model. In contrast, the goal of our model is to compare the results of different view adjustments, many of which are only partially visible. Also, we combine both labeled and unlabeled data to train the model, while existing cropping models are trained with only one of them.

Figure 4: Pseudo label generation. If the composition score improvement is below a given threshold, the label is set to _no adjustment is needed_. Otherwise, we use the best-scoring adjustment as the label.

### 3.3 Pseudo Label Generation

We generate pseudo view adjustment labels using the composition scoring model through simulation. Given an image, we simulate the results of the candidate view adjustments. For each candidate adjustment, we generate the results for nine magnitudes equally spaced in $[\frac{\pi}{36},\frac{\pi}{4}]$ for rotation and $[5,45]$ for the other adjustments (represented as a percentage of the image size). This leads to $8{\times}9{=}72$ different view adjustment results. We use the composition scoring model to rate these candidate views and select the one with the highest score. If the best view leads to a composition score higher than that of the original image by a margin of $\Delta{=}0.2$, the adjustment is taken as the label. Otherwise, the label is that no adjustment is required.
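The sketch below outlines this pseudo-label generation loop (Python/PyTorch pseudocode written for illustration; `apply_adjustment` is a hypothetical helper that simulates a view adjustment and zero-fills regions outside the original image, and the adjustment names are placeholders rather than the paper's released code).

```python
import math
import torch

ADJUSTMENTS = ["left", "right", "up", "down", "zoom_in", "zoom_out",
               "clockwise", "counter_clockwise"]  # the eight candidate adjustments

def pseudo_label(image: torch.Tensor, scoring_model, apply_adjustment,
                 delta: float = 0.2):
    """Return (adjustment, magnitude), or (None, 0.0) if no adjustment is needed."""
    base_score = scoring_model(image.unsqueeze(0)).item()
    best = (None, 0.0, base_score)
    for name in ADJUSTMENTS:
        # Nine equally spaced magnitudes: radians for rotation, % of image size otherwise.
        if name in ("clockwise", "counter_clockwise"):
            magnitudes = torch.linspace(math.pi / 36, math.pi / 4, 9)
        else:
            magnitudes = torch.linspace(5, 45, 9)
        for m in magnitudes:
            candidate = apply_adjustment(image, name, float(m))  # zero-fills unknown regions
            score = scoring_model(candidate.unsqueeze(0)).item()
            if score > best[2]:
                best = (name, float(m), score)
    # Keep the label only if the improvement over the original exceeds the margin.
    if best[0] is not None and best[2] - base_score > delta:
        return best[0], best[1]
    return None, 0.0
```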
Similar to the unlabeled image perturbation in the composition scoring model, unknown regions that come into the image during simulation are filled with zero pixels. Note that we ignore depths when simulating the adjustment results, because depth information is generally unavailable. See Fig. 4.

### 3.4 View Adjustment Model

Given the labeled samples from the view adjustment dataset and the pseudo-labeled images generated with the composition scoring model, we learn a view adjustment model as follows. The view adjustment model is implemented using a CNN with three output heads:

* _Suggestion predictor_ predicts whether a view adjustment should be suggested. The suggestion predictor is a binary classification head trained with cross-entropy loss.
* _Adjustment predictor_ predicts which candidate view adjustment should be suggested when an adjustment is needed. It is a multi-class classification head trained with categorical cross-entropy loss.
* _Magnitude predictor_ predicts the adjustment magnitude given the suggested adjustment. The magnitude predictor consists of eight regressors that regress the adjustment magnitude for each of the candidate adjustments and is trained with $\ell_{1}$ loss.

During training, the gradients of the adjustment predictor are backpropagated only for samples where a suggestion should be provided. Similarly, the gradients of the magnitude predictor are only backpropagated for the adjustment that should be suggested. During inference, we first use the suggestion predictor to decide whether or not the view needs to be adjusted. If the view needs to be adjusted, we use the adjustment predictor to select the candidate adjustment. Finally, we use the output of the magnitude predictor corresponding to the selected adjustment as the suggested magnitude. We also examined other model designs, e.g. combining the suggestion and adjustment predictors into a single multi-class classifier, combining the adjustment and magnitude predictors, etc. However, these alternative models do not work as well as the proposed method. In particular, they fail significantly in suggestion prediction, i.e. determining whether an adjustment is needed.

## 4 Experiments

We evaluate our view adjustment model both objectively and subjectively. The goal is to verify that 1) our two-stage training approach improves the view adjustment prediction performance, and 2) the suggested view adjustments improve the image composition.

#### Training datasets

As described in Sec. 3, the proposed approach exploits an array of datasets for training. _FCDB_ [6] is an image cropping dataset consisting of 1,265 training images annotated with best crops. _GAICD_ [47] is an image cropping dataset consisting of 1,036 training images. Each image comes with 90 candidate crops along with their composition scores, and we take the one with the highest score as the best crop. _CPC_ [42] consists of 10,797 images. Each image comes with 24 candidate crops and their composition scores. _Unsplash_ [2] consists of 2M images shared by 200K photographers on unsplash.com. Since many of these images are contributed by experienced photographers, we assume that they are well-composed and use them as unlabeled samples for training the composition scoring model. _Open Images_ [18] consists of 9M images from Flickr. We generate pseudo labels for a subset of 5.5M images (those containing human-verified labels).
For the composition scoring model, we use the annotated best crops in _FCDB_ and _GAICD_ (with $K{=}16$), the scored candidate crops in _CPC_ and _GAICD_ (with $N{=}16$), and the unlabeled images in _Unsplash_ (with $P{=}16$). For the view adjustment model, we use labeled samples from _FCDB_ and _GAICD_ , and pseudo-labeled samples from _Open Images_. In each training iteration, we use 64 labeled and 64 pseudo-labeled samples.

#### Comparative methods

Since there is no prior published work on view adjustment models, we use the following variants of our approach for comparison. The goal is to demonstrate the effectiveness of the proposed two-stage semi-supervised training approach.

* Supervised – The view adjustment model is trained using only the labeled data from the view adjustment dataset. The unlabeled images from _Open Images_ are not used.
* Aesthetic-scoring – Motivated by existing image cropping approaches [40, 10], we use aesthetic scores to train the image scoring model. Specifically, we train the scoring model to regress the mean opinion scores of 250K images from the _AVA_ [30] dataset, which is a widely-used image aesthetics dataset.
* Supervised-scoring – The composition scoring model is trained using only labeled data from FCDB, GAICD and CPC. The unlabeled images from _Unsplash_ are not used.

The network architecture and training details are shared across all methods.

#### Implementation details

Both the view adjustment and composition scoring models use the MobileNet [14] architecture with input size $299{\times}299$. The last convolutional layer is followed by a spatial pyramid pooling layer ($1{\times}1$, $2{\times}2$ and $5{\times}5$) [13] and two fully-connected layers with 1,024 units and ReLU activation. The composition scoring model has an output layer with a single node that uses the sigmoid activation function, and the view adjustment prediction model has an output layer with three heads as described in Sec. 3.4. All models are trained asynchronously on 5 P100 GPUs starting from ImageNet [9] pretrained weights. We use the ADAM [17] optimizer with a learning rate of $2\times 10^{-5}$ and a weight decay of $5\times 10^{-4}$. The composition scoring models are trained for 250K steps and the view adjustment models are trained for 480K steps. Because the distribution of view adjustments in the wild is unknown, we give equal importance to all adjustments and weight each sample by the inverse frequency of the adjustment predictor label.

### 4.1 Evaluation Dataset and Metrics

A major contribution of this work is introducing a benchmark dataset for evaluating view adjustment models. This section describes the evaluation dataset and the proposed evaluation metrics. The proposed view adjustment evaluation dataset is created from the test splits of the image cropping datasets _FCDB_ and _GAICD_ , following the procedure described in Sec. 3.1. For each image, we generate an evaluation sample for each candidate view adjustment by randomly sampling a magnitude in the range $[5,45]$ for shift and zoom adjustments and $[\frac{\pi}{36},\frac{\pi}{4}]$ for rotation. See Table 1 for statistics. Note that the possible perturbations for an image are determined by the size of its best crop. For example, horizontal shift and zoom-out are not possible for a crop whose width is the same as the image width. Therefore, each candidate view adjustment has a different number of evaluation samples.
Horizontal | Vertical | Zoom | Rotate | Total
---|---|---|---|---
258 (Left) | 370 (Up) | 268 (In) | 255 (Clockwise) | 3,076
277 (Right) | 350 (Down) | 521 (Out) | 256 (Counter) |

Table 1: Number of evaluation samples for each candidate view adjustment.

Method | AUC | TPR | F1: Left | F1: Right | F1: Up | F1: Down | F1: Zoom-in | F1: Zoom-out | F1: Clockwise | F1: Counter | IoU
---|---|---|---|---|---|---|---|---|---|---|---
Supervised | 0.570 | 0.366 | 0.126 | 0.074 | 0.0 | 0.0 | 0.022 | 0.364 | 0.082 | 0.061 | 0.730
Aesthetic-scoring | 0.536 | 0.352 | 0.052 | 0.115 | 0.0 | 0.0 | 0.102 | 0.302 | 0.061 | 0.088 | 0.717
Supervised-scoring | 0.536 | 0.356 | 0.078 | 0.082 | 0.164 | 0.234 | 0.007 | 0.350 | 0.118 | 0.117 | 0.737
Proposed | 0.608 | 0.436 | 0.221 | 0.221 | 0.390 | 0.341 | 0.015 | 0.378 | 0.124 | 0.110 | 0.750

Table 2: Performance of view adjustment models trained with different strategies. The TPR, IoU and F1 scores are measured at 0.3 FPR for the suggestion predictor.

We propose the following objective metrics for evaluating view adjustment models. The goal is to evaluate 1) how accurately a model triggers suggestions, 2) whether the suggested adjustment is correct when the model provides a suggestion, and 3) how well the suggested view approximates the best view.

* _Area under the receiver operating characteristic curve (AUC)_ measures the performance of the suggestion predictor, i.e., how accurately a model triggers suggestions.
* _True positive rate (TPR)_ also measures the performance of the suggestion predictor. More specifically, we measure the TPR at a False Positive Rate (FPR) of $0.3$. We choose a low FPR because suggesting an adjustment when it is not needed has a higher cost (asking the user to do unnecessary work and making the composition worse) compared to not suggesting an adjustment when the composition can be improved. All other metrics are computed at the same $0.3$ FPR operating point.
* _F1-score_ for each candidate adjustment measures the accuracy of the adjustment predictor, i.e., how accurate the suggested adjustment is.
* _Intersection over Union (IoU)_ between the predicted and ground truth views measures how well the suggested view approximates the best view. It evaluates the performance of the suggestion predictor, adjustment predictor, and magnitude predictor jointly.

Besides the objective metrics, we also conduct a subjective evaluation through a user study.

Figure 5: Confusion matrix. We normalize along each row, so each cell shows the precision of adjustment prediction.

### 4.2 Objective Evaluation

Table 2 summarizes the performance of the various methods on the view adjustment evaluation dataset. Overall, the proposed semi-supervised approach achieves the best performance when compared with the supervised alternatives. Our model clearly outperforms the supervised approaches in determining whether a suggestion should be provided: it achieves a $43.6\%$ TPR when the FPR is $30\%$. Similarly, our model achieves the best F1-score for all candidate adjustments except for the _Zoom-in_ and _Counter-clockwise_ rotation adjustments. The absolute F1-scores for the _Zoom-in_ and _Rotation_ adjustments are low because of low recall values. This may be explained by the distribution of adjustment magnitudes, where most zoom-in and rotation samples have small perturbation magnitudes and are by nature more difficult for view adjustment prediction. Finally, our method achieves the best IoU, which means that the suggested views best approximate the ground truth views.
We also compare with off-the-shelf cropping models [42, 48] on our view adjustment dataset, but their performance is significantly worse than that of our method: the IoU between the predicted crops and the ground truth views is below 0.61. These results confirm that cropping is not sufficient for solving the view adjustment problem. To further understand when our model fails, we show the confusion matrix in Fig. 5. The results show that our model rarely makes the mistake of choosing the wrong shift or rotation adjustment. Instead, most of the errors occur between zoom-out and shifting. This is not surprising, because both zoom-out and shifting can resolve the problem where the original view wrongly excludes important content. Comparing the performance of our model with the supervised alternatives, we can clearly see that the two-stage approach helps to improve model performance. The results also show that semi-supervised learning is important not only for the view adjustment model but also for the composition scoring model. If we train the composition scoring model using only supervised data, the results are actually worse than those of the purely supervised view adjustment model. This is potentially due to the poor quality of the pseudo labels generated by the supervised composition scoring model, which is trained on a limited number of images. Finally, the performance of Aesthetic-scoring is worse than that of the other methods. This suggests that a generic aesthetic score is not sufficient to differentiate the changes caused by camera view adjustment.

 | After is better | Before is better | No difference
---|---|---|---
View Adjustment | $79.0\%$ | $12.6\%$ | $8.5\%$
Pano2Vid [37] | $78.9\%$ | $17.1\%$ | $4.1\%$

Table 3: Subjective evaluation results on the view adjustment and the Pano2Vid datasets. The raters are asked whether each image has better composition before or after applying the suggested view adjustment.

Figure 6: Qualitative examples. Each pair shows the original image (left) and the result of the adjustment (right). Our model appears to provide suggestions based on various factors, including objects, the ground plane, or even leading lines in the image. The last two rows show examples where our model predicts the wrong adjustment but still improves the image composition.

### 4.3 Subjective Evaluation

To demonstrate the effectiveness of the proposed model, we conduct a user study that directly evaluates whether the suggested view adjustment improves the composition of the original image. In the user study, we show the raters the image both before and after applying the suggested view adjustment. The raters are asked which image has the better composition, or whether they cannot tell. The order of the two images is chosen randomly to avoid bias. We select $130$ images from our evaluation dataset for the user study, and each image is rated by 3 different raters. The results are in Table 3. They show that our model has a high success rate ($79.0\%$) when a suggestion is provided, and that it wrongly suggests an adjustment only about $12.6\%$ of the time. See Fig. 6 and Fig. 7 for qualitative examples. Besides the view adjustment dataset, we also conduct a user study on the _Pano2Vid_ [37] dataset. _Pano2Vid_ is a $360\degree$ video dataset with viewport annotations. The viewports capture a normal-FOV video created from the $360\degree$ video by raters and cover interesting content with appropriate composition. We sample $200$ frames from the _Pano2Vid_ dataset and apply our model on the user-selected viewport.
The horizontal and vertical adjustments become translations in spherical coordinates, and the zoom-in and zoom-out adjustments become changes in the FOV. We run the model iteratively until it stops suggesting any adjustment or reaches the maximum number of steps (3 steps), and we ask raters whether the final view has a better composition than the initial view. The results on _Pano2Vid_ are in Table 3. Interestingly, the success rate of the model remains high but with a slightly higher failure rate. The results show that our model generalizes well despite the fact that it has never been trained on $360\degree$ images. See Fig. 8 for examples. Note that we perform multi-step adjustment on _Pano2Vid_ , so the results also imply that our model can improve the composition iteratively even though it is trained only for single-step view adjustment. This is important in practice, because the best view may not be reachable from the initial view using a single adjustment when applying the system in the wild. On the other hand, we also observe that our model may drift far from the initial view in some examples. In other words, it suggests a final view that is unrelated to the initial view, which shows a limitation of the single-step adjustment model.

Figure 7: Failure examples. (a) False negatives: our model fails to provide a suggestion when the composition can be improved; each pair shows the input image (left) and the ground truth (right). (b) False positives: the suggested adjustment degrades the composition according to our user study; each pair shows the input image (left) and our prediction (right). Our model fails when the target adjustment is minor or the important content is not visible in the initial view. Also, the bottom-right example shows how a wrong magnitude prediction degrades the composition.

Figure 8: Qualitative examples from $360\degree$ images. We perform multi-step view adjustment on $360\degree$ images, and our model improves the composition iteratively.

## 5 Conclusion

We propose improving image composition through view adjustment prediction. Our system suggests how to improve the composition while the user is still composing the photo. We create a new benchmark dataset for evaluating the performance of view adjustment prediction and propose a two-stage training approach that exploits both labeled and unlabeled data for learning the view adjustment model. Experimental results show that the suggestions provided by our model improve image composition $79\%$ of the time. Future plans are to explore the iterative nature of view adjustment for more accurate and consistent suggestions.

## References

* [1] Camera51–a smarter camera. https://play.google.com/store/apps/details?id=com.camera51.android. Accessed: 2020-09-22. * [2] Unsplash dataset. https://unsplash.com/data. * [3] What is shot suggestions on galaxy? https://www.samsung.com/global/galaxy/what-is/shot-suggestions/. Accessed: 2020-09-22. * [4] Jiansheng Chen, Gaocheng Bai, Shaoheng Liang, and Zhengqin Li. Automatic image cropping: A computational complexity study. In CVPR, 2016. * [5] Li-Qun Chen, Xing Xie, Xin Fan, Wei-Ying Ma, Hong-Jiang Zhang, and He-Qin Zhou. A visual attention model for adapting images on small displays. Multimedia systems, 9(4):353–364, 2003. * [6] Yi-Ling Chen, Tzu-Wei Huang, Kai-Han Chang, Yu-Chen Tsai, Hwann-Tzong Chen, and Bing-Yu Chen. Quantitative analysis of automatic image cropping algorithms: a dataset and comparative study. In WACV, 2017.
* [7] Yi-Ling Chen, Jan Klopp, Min Sun, Shao-Yi Chien, and Kwan-Liu Ma. Learning to compose with professional photographs on the web. In ACM Multimedia, 2017. * [8] Bin Cheng, Bingbing Ni, Shuicheng Yan, and Qi Tian. Learning to photograph. In ACM Multimedia, 2010. * [9] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei-Fei Li. Imagenet: A large-scale hierarchical image database. In CVPR, 2009. * [10] Yubin Deng, Chen Change Loy, and Xiaoou Tang. Aesthetic-driven image enhancement by adversarial learning. In ACM Multimedia, 2018. * [11] Chen Fang, Zhe Lin, Radomir Mech, and Xiaohui Shen. Automatic image cropping using visual composition, boundary simplicity and content preservation models. In ACM Multimedia, 2014. * [12] Hui Fang and Meng Zhang. Creatism: A deep-learning photographer capable of creating professional work. arXiv preprint arXiv:1707.03491, 2017. * [13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In David J. Fleet, Tomás Pajdla, Bernt Schiele, and Tinne Tuytelaars, editors, ECCV, 2014. * [14] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. CoRR, abs/1704.04861, 2017. * [15] Chuan-Shen Hu, Yi-Tsung Hsieh, Hsiao-Wei Lin, and Mei-Chen Yeh. Virtual portraitist: An intelligent tool for taking well-posed selfies. ACM Trans. Multimedia Comput. Commun. Appl., 15(1), Jan. 2019. * [16] Yen-Ta Huang, Kuan-Ting Chen, Liang-Chi Hsieh, Winston Hsu, and Ya-Fan Su. Detecting the directions of viewing landmarks for recommendation by large-scale user-contributed photos. In ACM Multimedia, 2012. * [17] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. * [18] Ivan Krasin, Tom Duerig, Neil Alldrin, Vittorio Ferrari, Sami Abu-El-Haija, Alina Kuznetsova, Hassan Rom, Jasper Uijlings, Stefan Popov, Andreas Veit, Serge Belongie, Victor Gomes, Abhinav Gupta, Chen Sun, Gal Chechik, David Cai, Zheyun Feng, Dhyanesh Narayanan, and Kevin Murphy. Openimages: A public dataset for large-scale multi-label and multi-class image classification. Dataset available from https://github.com/openimages, 2017. * [19] Debang Li, Huikai Wu, Junge Zhang, and Kaiqi Huang. A2-rl: Aesthetics aware reinforcement learning for image cropping. In CVPR, 2018. * [20] Debang Li, Huikai Wu, Junge Zhang, and Kaiqi Huang. Fast a3rl: Aesthetics-aware adversarial reinforcement learning for image cropping. IEEE Transactions on Image Processing, 28(10):5105–5120, 2019. * [21] Debang Li, Junge Zhang, and Kaiqi Huang. Learning to learn cropping models for different aspect ratio requirements. In CVPR, 2020. * [22] Debang Li, Junge Zhang, Kaiqi Huang, and Ming-Hsuan Yang. Composing good shots by exploiting mutual relations. In CVPR, 2020. * [23] Ligang Liu, Renjie Chen, Lior Wolf, and Daniel Cohen-Or. Optimizing photo composition. Computer Graphics Forum, 29(2):469–478, 2010. * [24] Peng Lu, Jiahui Liu, Xujun Peng, and Xiaojie Wang. Weakly supervised real-time image cropping based on aesthetic distributions. In ACM Multimedia, 2020. * [25] Peng Lu, Hao Zhang, Xujun Peng, and Xiaofu Jin. An end-to-end neural network for image cropping by learning composition from aesthetic photos. arXiv preprint arXiv:1907.01432, 2019. * [26] Weirui Lu, Xiaofen Xing, Bolun Cai, and Xiangmin Xu. Listwise view ranking for image cropping. 
IEEE Access, 7:91904–91911, 2019. * [27] Shuang Ma, Yangyu Fan, and Chang Wen Chen. Finding your spot: A photography suggestion system for placing human in the scene. In ICIP, 2014. * [28] Shuang Ma, Yangyu Fan, and Chang Wen Chen. Pose maker: A pose recommendation system for person in the landscape photographing. In ACM Multimedia, 2014. * [29] Luca Marchesotti, Claudio Cifarelli, and Gabriela Csurka. A framework for visual saliency detection with applications to image thumbnailing. In ICCV, 2009. * [30] Naila Murray, Luca Marchesotti, and Florent Perronnin. AVA: A large-scale database for aesthetic visual analysis. In CVPR, 2012. * [31] Masashi Nishiyama, Takahiro Okabe, Yoichi Sato, and Imari Sato. Sensation-based photo cropping. In ACM Multimedia, 2009. * [32] Yogesh Singh Rawat and Mohan S Kankanhalli. Context-aware photography learning for smart mobile devices. ACM Transactions on Multimedia Computing, Communications, and Applications, 12(1s):1–24, 2015. * [33] Yogesh Singh Rawat and Mohan S Kankanhalli. Clicksmart: A context-aware viewpoint recommendation system for mobile photography. IEEE Transactions on Circuits and Systems for Video Technology, 27(1):149–158, 2016. * [34] Y. S. Rawat, M. Song, and M. S. Kankanhalli. A spring-electric graph model for socialized group photography. IEEE Transactions on Multimedia, 20(3):754–766, 2018. * [35] Fred Stentiford. Attention based auto image cropping. In International Conference on Computer Vision Systems: Proceedings (2007), 2007. * [36] Hsiao-Hang Su, Tse-Wei Chen, Chieh-Chi Kao, Winston H. Hsu, and Shao-Yi Chien. Preference-aware view recommendation system for scenic photos based on bag-of-aesthetics-preserving features. IEEE Transactions on Multimedia, 14(3):833–843, 2012. * [37] Yu-Chuan Su, Dinesh Jayaraman, and Kristen Grauman. Pano2vid: Automatic cinematography for watching 360$\degree$ videos. In ACCV, 2016. * [38] Bongwon Suh, Haibin Ling, Benjamin B Bederson, and David W Jacobs. Automatic thumbnail cropping and its effectiveness. In UIST, 2003. * [39] Yi Tu, Li Niu, Weijie Zhao, Dawei Cheng, and Liqing Zhang. Image cropping with composition and saliency aware aesthetic score map. In AAAI, 2020. * [40] Wenguan Wang and Jianbing Shen. Deep cropping via attention box prediction and aesthetics assessment. In ICCV, 2017. * [41] Yinting Wang, Mingli Song, Dacheng Tao, Yong Rui, Jiajun Bu, Ah Chung Tsoi, Shaojie Zhuo, and Ping Tan. Where2stand: A human position recommendation system for souvenir photography. ACM Trans. Intell. Syst. Technol., 7(1), Oct. 2015. * [42] Zijun Wei, Jianming Zhang, Xiaohui Shen, Zhe Lin, Radomir Mech, Minh Hoai, and Dimitris Samaras. Good view hunting: Learning photo composition from dense view pairs. In CVPR, 2018. * [43] Pengfei Xu, Hongxun Yao, Rongrong Ji, Xian-Ming Liu, and Xiaoshuai Sun. Where should i stand? learning based human position recommendation for mobile photographing. Multimedia tools and applications, 69(1):3–29, 2014. * [44] Jianzhou Yan, Stephen Lin, Sing Bing Kang, and Xiaoou Tang. Learning the change for automatic image cropping. In CVPR, 2013. * [45] Wenyuan Yin, Tao Mei, and Chang Wen Chen. Crowdsourced learning to photograph via mobile devices. In ICME, 2012. * [46] Wenyuan Yin, Tao Mei, Chang Wen Chen, and Shipeng Li. Socialized mobile photography: Learning to photograph with social context via mobile devices. IEEE Transactions on Multimedia, 16(1):184–200, 2013. * [47] Hui Zeng, Lida Li, Zisheng Cao, and Lei Zhang. 
Reliable and efficient image cropping: A grid anchor based approach. In CVPR, 2019. * [48] Hui Zeng, Lida Li, Zisheng Cao, and Lei Zhang. Grid anchor based image cropping: A new benchmark and an efficient model. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020\. * [49] Mingju Zhang, Lei Zhang, Yanfeng Sun, Lin Feng, and Weiying Ma. Auto cropping for digital photographs. In ICME, 2005. * [50] Yanhao Zhang, Xiaoshuai Sun, Hongxun Yao, Lei Qin, and Qingming Huang. Aesthetic composition representation for portrait photographing recommendation. In ICIP, 2012. * [51] Y. Zhang and R. Zimmermann. Camera shooting location recommendations for landmarks in geo-space. In International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems, 2013. ## Appendices The supplementary materials consist of: 1. A Perturbation details for the composition scoring model 2. B Perturbation details for the view adjustment model 3. C Data augmentation details 4. D Training pipeline overview 5. E Experiment results analysis 6. F Additional qualitative examples ### A Perturbation for Composition Scoring In this section, we describe the perturbation details for generating the image pairs $(I_{p},I_{n})$ for training the composition scoring model. Let $c_{x}$, $c_{y}$, $w$ and $h$ denote the center $x$, $y$ coordinate, width and height of the best crop bounding box normalized to $[0,1]$ for each image. Given the perturbation ($o_{x}$, $o_{y}$, $o_{z}$, $o_{\alpha}$) where $o_{x}$, $o_{y}$, $o_{z}$, and $o_{\alpha}$ are the magnitudes of horizontal shift, vertical shift, zoom, and rotation respectively, the perturbed bounding box is defined as $\displaystyle c^{\prime}_{x}$ $\displaystyle=c_{x}+w*o_{x}$ (2) $\displaystyle c^{\prime}_{y}$ $\displaystyle=c_{y}+h*o_{y}$ $\displaystyle w^{\prime}$ $\displaystyle=w+w*o_{z}$ $\displaystyle h^{\prime}$ $\displaystyle=h+h*o_{z}$ $\displaystyle\alpha$ $\displaystyle=o_{\alpha}.$ Note that $\alpha$ is the orientation of the bounding box in the image coordinate system and is defined as the counter clockwise angle between the $y$-axis of the image and the bounding box. Therefore, the coordinates of the bounding box corners in the image are given by $u=c+R_{\alpha}v$, where $R_{\alpha}$ is the rotation matrix and $v=[\pm\frac{w}{2},\pm\frac{h}{2}]$. Specifically, we apply four types of perturbation: * $\bullet$ Shifting randomly samples $o_{x}$ and $o_{y}$ in the range $[-0.4,0.4]$ and applies $(o_{x},o_{y},0,0)$ to the bounding box. * $\bullet$ Zooming-out randomly samples $o_{z}$ in the range $[0,0.4]$ and applies $(0,0,o_{z},0)$ to the bounding box. * $\bullet$ Cropping combines zooming-in and shifting. It first samples $o_{z}$ in the range $[\sqrt{0.5},\sqrt{0.8}]$ and then samples $o_{x}$ and $o_{y}$ in the range $[-o_{z}/2,o_{z}/2]$. The perturbation $(o_{x},o_{y},o_{z},0)$ is then applied to the bounding box. In other words, it randomly sample a crop within the original crop where the area is 0.5 to 0.8 times that of the original one and the aspect ratio is the same. * $\bullet$ Rotation randomly samples $o_{\alpha}$ in the range $[-\frac{\pi}{4},\frac{\pi}{4}]$ and applies $(0,0,0,o_{\alpha})$ to the bounding box. We apply the same perturbations for both the labeled crops and unlabeled images. For unlabeled image, the bounding box is set to the entire image, i.e. $c_{x}=c_{y}=0.5$, and $w=h=1.0$. 
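For concreteness, the sketch below re-implements the perturbation parametrization above in Python. It is an illustrative re-implementation under our own naming (`perturb_box` and `sample_shift` are not from the released code), assuming all box coordinates are normalized to $[0,1]$ as stated; the remaining perturbation types follow the same pattern with the ranges listed above.

```python
import numpy as np

def perturb_box(c_x, c_y, w, h, o_x, o_y, o_z, o_alpha):
    """Apply the (o_x, o_y, o_z, o_alpha) perturbation of Eq. (2) to a
    normalized crop box; return the perturbed box and its four corners."""
    cx = c_x + w * o_x          # horizontal shift, scaled by box width
    cy = c_y + h * o_y          # vertical shift, scaled by box height
    w2 = w + w * o_z            # zoom rescales width and height together
    h2 = h + h * o_z
    alpha = o_alpha             # counter-clockwise angle of the box

    # Corner coordinates u = c + R_alpha v, with v the half-extent offsets.
    R = np.array([[np.cos(alpha), -np.sin(alpha)],
                  [np.sin(alpha),  np.cos(alpha)]])
    v = np.array([[ w2 / 2,  h2 / 2],
                  [ w2 / 2, -h2 / 2],
                  [-w2 / 2,  h2 / 2],
                  [-w2 / 2, -h2 / 2]])
    corners = np.array([cx, cy]) + v @ R.T
    return (cx, cy, w2, h2, alpha), corners

def sample_shift(rng):
    # "Shifting": o_x, o_y ~ U[-0.4, 0.4], no zoom or rotation.
    return rng.uniform(-0.4, 0.4), rng.uniform(-0.4, 0.4), 0.0, 0.0
```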
Also, as described in the main paper, we discard samples where the perturbed bounding box goes beyond the original image for supervised samples. ### B Perturbation for View Adjustment This section describes how we generate samples for view adjustment prediction. The generated samples are used for both training and evaluation. Given an image crop represented as $(c_{x},c_{y},w,h)$, we apply the following perturbations: * $\bullet$ Horizontal shift samples $o_{x}$ in either $[0.05,0.45]$ or $[-0.45,-0.05]$ and applies perturbation $(o_{x},0,0,0)$ to the bounding box. * $\bullet$ Vertical shift samples $o_{y}$ in either $[0.05,0.45]$ or $[-0.45,-0.05]$ and applies perturbation $(0,o_{y},0,0)$ to the bounding box. * $\bullet$ Zoom samples $o_{z}$ in either $[-0.048,-0.310]$ or $[0.053,0.818]$ and applies $(0,0,o_{z},0)$ to the bounding box. * $\bullet$ Rotation samples $o_{\alpha}$ in either $[-\frac{\pi}{4},-\frac{\pi}{36}]$ or $[\frac{\pi}{36},\frac{\pi}{4}]$ and applies $(0,0,0,o_{\alpha})$ to the bounding box. Note that the ranges for shift and zoom perturbations are determined such that the inverse perturbation, i.e., the ground truth view adjustment, falls in the range of $[0.05,0.45]$. As described in the main paper, we discard samples where the perturbed bounding box falls outside the image. Result: Composition scoring model $M_{c}$ for _$i\leftarrow 1,\text{MaxStep}$_ do // Scored candidate samples $I_{s}\,,crop_{s}\leftarrow$ $N$ images with scored crops $crop_{p},crop_{n}\leftarrow$ create $\frac{N\times(N-1)}{2}$ pairs of crops $I^{p}_{s}\leftarrow$ ExtractCrop($I_{s}$, $crop_{p}$) $I^{n}_{s}\leftarrow$ ExtractCrop($I_{s}$, $crop_{n}$) // Best crop samples $I_{c}\,,crop_{c}\leftarrow$ $K$ images with best crop $crop^{\prime}_{c}\leftarrow$ Perturb($crop_{c}$) $I^{p}_{c}\leftarrow$ ExtractCrop($I_{c}$, $crop_{c}$) $I^{n}_{c}\leftarrow$ ExtractCrop($I_{c}$, $crop^{\prime}_{c}$) // Unsupervised samples $I^{p}_{u}\leftarrow$ $P$ well-composed images $I^{n}_{u}\leftarrow$ Perturb($I^{p}_{u}$) // Data augmentation $I^{i}_{j}\leftarrow$ Augment($I^{i}_{j}$) $\forall(i,j)\in\\{p,n\\}\times\\{s,c\\}$ $I^{p}_{u}\leftarrow$ Augment($I^{p}_{u}$) // Minimize joint loss $y^{p}\leftarrow M_{c}(I^{p})$ $y^{n}\leftarrow M_{c}(I^{n})$ $M_{c}\leftarrow\operatorname*{arg\,min}_{M_{c}}\,\sum_{x}Loss(y^{p}_{x},y^{n}_{x})$ end for Algorithm 1 Composition scoring model training ### C Data Augmentation This section describes the implementation details for the composition scoring model data augmentation. For _shift borders_ augmentation, we randomly select a percentage $s_{y}$ in $[0,40]$ and select $s_{x}\%$ of columns on either the left or right side of the image and replace the pixel values with zero. Similarly, we randomly select $s_{y}\%$ of rows from either the top or bottom of the image and replace the pixel values. Note that $s_{x}$ and $s_{y}$ are samples independently. For _zoom-out_ augmentation, we randomly select a percentage $s_{z}$ in $[0,40]$ and select $0.5s_{z}\%$ of rows and columns on both sides of the image. For _rotation_ augmentation, we randomly select an angle $\theta\in[-\frac{\pi}{4},\frac{\pi}{4}]$ and apply two rotation operator $R_{-\theta}\circ R_{\theta}$ to the image sequentially. Note that while $R_{-\theta}\circ R_{\theta}=I$ mathematically, if we apply the two operator sequentially on an image with finite size, they introduce pixels not within the original image and therefore simulate the synthetic artifacts introduced by rotation. 
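As a concrete illustration of these augmentations, the following sketch (our own, using NumPy and SciPy; the helper names are not from the paper's code) applies the border-shift augmentation and the $R_{-\theta}\circ R_{\theta}$ rotation augmentation to an image array.

```python
import numpy as np
from scipy import ndimage

def rotation_augment(image, rng):
    """Rotate by theta, then by -theta. Mathematically this is the identity,
    but on a finite image the two rotations introduce border pixels that were
    not in the original frame, simulating the artifacts of a real rotation."""
    theta = rng.uniform(-45.0, 45.0)   # degrees, i.e. [-pi/4, pi/4]
    rotated = ndimage.rotate(image, theta, reshape=False, order=1,
                             mode='constant', cval=0.0)
    return ndimage.rotate(rotated, -theta, reshape=False, order=1,
                          mode='constant', cval=0.0)

def shift_border_augment(image, rng):
    """Zero out up to 40% of the columns on one side and up to 40% of the
    rows on one side, with the two fractions sampled independently."""
    out = image.copy()
    h, w = out.shape[:2]
    sx = int(w * rng.uniform(0.0, 0.4))   # fraction of columns
    sy = int(h * rng.uniform(0.0, 0.4))   # fraction of rows
    if rng.random() < 0.5:
        out[:, :sx] = 0
    else:
        out[:, w - sx:] = 0
    if rng.random() < 0.5:
        out[:sy, :] = 0
    else:
        out[h - sy:, :] = 0
    return out
```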
### D Training Pipeline Overview This section provides an overview of the training pipeline. See Algorithm 1 and Algorithm 2 for the training of the composition scoring and view adjustment model respectively. Result: View adjustment model $M_{v}$ for _$i\leftarrow 1,\text{MaxStep}$_ do // Pseudo-labeled samples $I_{s}\leftarrow$ $K$ images $y_{s}\leftarrow$ PseudoLabel($I_{s}$) // Labeled samples $I_{c}\,,crop_{c}\leftarrow$ $K$ images with best crop $p_{c}\leftarrow$ samples random perturbations $crop^{\prime}_{c}\leftarrow$ Perturb($crop_{c}$, $p_{c}$) $I^{\prime}_{c}\leftarrow$ ExtractCrop($I_{c}$, $crop^{\prime}_{c}$) $y_{c}\leftarrow p^{-1}_{c}$ // Minimize joint loss $y^{\prime}_{s}\leftarrow M_{v}(I_{s})$ $y^{\prime}_{c}\leftarrow M_{v}(I^{\prime}_{c})$ $M_{v}\leftarrow\operatorname*{arg\,min}_{M_{v}}\,Loss(y_{s},y^{\prime}_{s})+Loss(y_{c},y^{\prime}_{c})$ end for Algorithm 2 View adjustment model training ### E Experiment Results | After is better | Before is better | No difference ---|---|---|--- Horizontal | $80.0\%$ | $5.0\%$ | $15.0\%$ Vertical | $94.2\%$ | $0.0\%$ | $5.8\%$ Zoom | $74.7\%$ | $16.0\%$ | $9.3\%$ Rotation | $57.3\%$ | $41.3\%$ | $1.3\%$ Single-step | $81.6\%$ | $14.9\%$ | $3.5\%$ Multi-steps | $72.2\%$ | $22.2\%$ | $5.6\%$ Table 4: Subjective evaluation results on the view adjustment (row1–row4) and the Pano2Vid (row5–row6) dataset. In this section, we expand the experiment results in the main paper by providing more detailed analysis on the subjective evaluation results. In particular, we analyze the user study results w.r.t. 1) different view adjustments being suggested on the view adjustment dataset, and 2) the number of adjustment being performed on the Pano2Vid dataset. The results are in Table 4. Interestingly, the subjective evaluation results have a strong dependency on the adjustment being suggested. Our method achieves the best performance in vertical shift adjustment, where it improves the composition more than $94\%$ of the time without any false positive. In contrast, the rotation suggestion only improves the composition $57.3\%$ of the time. This is primarily caused by the incorrect magnitude prediction, because image composition is very sensitive to the orientation when the horizon is visible. Therefore, even a small error in magnitude prediction may significantly degrade the result. Not surprisingly, our method has a better performance in single-step view adjustment. In contrast, the success rate drops by $9.4\%$ when multiple adjustments are required. The results show the need for considering the iterative nature of view adjustment in future works. ### F Qualitative Results In this section, we show additional qualitative examples similar to Fig. 6 in the main paper. See Fig. 9 and Fig. 10. The results show that our model works in a wide range of scenarios, including both object centric and scenic photos. Also, our model works for different aspect ratios, including photos in both landscape and portrait orientations. Figure 9: Qualitative examples. Each pair shows the original image (left) and our prediction (right). Figure 10: Qualitative examples (cont.). Each pair shows the ground truth (left) and our prediction (right).
# Permutation Testing for Monotone Trend Joseph P. Romano111Supported by NSF Grant MMS-1949845. Departments of Statistics and Economics Stanford University <EMAIL_ADDRESS>Marius A. Tirlea222Supported by NSF Grant MMS-1949845. Department of Statistics Stanford University <EMAIL_ADDRESS> ###### Abstract In this paper, we consider the fundamental problem of testing for monotone trend in a time series. While the term “trend” is commonly used and has an intuitive meaning, it is first crucial to specify its exact meaning in a hypothesis testing context. A commonly used well-known test is the Mann- Kendall test, which we show does not offer Type 1 error control even in large samples. On the other hand, by an appropriate studentization of the Mann- Kendall statistic, we construct permutation tests that offer asymptotic error control quite generally, but retain the exactness property of permutation tests for i.i.d. observations. We also introduce “local” Mann-Kendall statistics as a means of testing for local rather than global trend in a time series. Similar properties of permutation tests are obtained for these tests as well. Key Words:. Hypothesis Testing, Mann-Kendall Test, Permutation Test, Time Series, Trend. ## 1 Introduction In both theoretical and applied time series analysis, the issue of determining whether or not a given sequence of observations exhibits monotone trend is a crucial element of understanding the process under study. An important nonparametric method used for testing monotone trend is the Mann-Kendall trend test (see Mann, (1945) and Kendall, (1990)), which tests the hypothesis $H_{\text{MK}}:X_{1},\,\dots,\,X_{n}\text{ i.i.d. }$ using a rank statistic (defined in (3.1)). Under the null hypothesis $H_{\text{MK}}$, the Mann-Kendall test of nominal level $\alpha$ is exact in finite samples and asymptotically valid. However, while it is the case that an i.i.d. sequence of random variables $\\{X_{i}:i\in[n]\\}$ should intuitively be described as exhibiting no monotone trend, it is also intuitively reasonable that a non-i.i.d. sequence $\\{X_{i}:i\in[n]\\}$ may also exhibit a lack of monotone trend. Indeed, in the time series contexts in which such problems are usually considered, there is an implicit dependence structure in time of the sequence under examination, in which case the null hypothesis $H_{\text{MK}}$ is not the null hypothesis intended to be under consideration. On account of this, significant problems of error control may arise, since the null hypotheses of i.i.d. and “no monotone trend” are quite different. This may lead to issues of Type 1 and Type 3 (or directional) error control analogous to those described in Romano and Tirlea, (2022) for testing autocorrelations. In the context of this paper, the issue is that one can reject the null hypothesis and conclude that there exists a monotone trend not because there does truly exist some monotone trend, but on account of the test only controlling Type 1 error for i.i.d. sequences, and not for weakly dependent sequences exhibiting no monotone trend. We propose two nonparametric testing procedures for the hypothesis $H:\\{X_{i}:i\in\mathbb{N}\\}\text{ exhibits no monotone trend,}$ where several precise definitions of the null hypothesis $H$ are considered. In particular, we define the notions of global and local monotone trend, and consider permutation testing procedures based on the full, or global, Mann- Kendall statistic. 
We also introduce a novel statistic, termed the local Mann- Kendall statistic, which is used for testing the null hypothesis of a lack of local monotone trend. In the broader context of testing for and estimation of monotone trend, Dietz and Killeen, (1981) provide a multivariate test of trend with application to a pharmaceutical setting, Zhao and Woodroofe, (2012) consider isotonic estimators of sequences with monotone trend under stationarity, and Han and Qian, (2018) extend the Mann-Kendall test to the setting of independent, but not necessarily i.i.d. sequences. In a more applied setting, Yue et al., (2002) discuss application of the Mann-Kendall and Spearman’s $\rho$ test to hydrological time series, Hamed, (2008) considers a modification of the Mann- Kendall test under scaling, and Wang et al., (2020) consider the power of different versions of the Mann-Kendall test in a simulation study. Permutation tests of trend based on OLS regression are discussed in Romano and Tirlea, (2024). There has been a resurgence in the use of permutation and randomization tests, as they offer the prospect of valid inference in complex settings. However, they can often fail (even in large samples) without careful implementation. Indeed, this paper is part of a growing body of work that shows how permutation tests can indeed be constructed that offer exact finite-sample validity under a restricted null hypothesis, but also offer asymptotic validity in much more generality. Many such instances of this phenomenon are given in Romano and Tirlea, (2022). Section 2 provides a precise definition of the null hypotheses of a lack of monotone trend under consideration. The main results for the global Mann- Kendall test are given in Section 3, in which we provide conditions for the asymptotic validity of the permutation test when $\\{X_{n}:n\in\mathbb{N}\\}$ is a stationary, absolutely regular sequence, satisfying fairly standard mixing conditions. For instance, the results in this section may be applied freely to a large class of stationary ARMA sequences. Section 4 introduces the local Mann-Kendall statistic, and furnishes the main results for asymptotic validity of the permutation test when $\\{X_{n}:n\in\mathbb{N}\\}$ is a stationary, $\alpha$-mixing sequence. Section 5 provides simulation studies illustrating the results of the previous sections. The majority of the proofs are deferred to the supplement, owing to the length and technical requirements required to prove the results. Most of our results require some notions of stationarity, weak dependence, and absolute regularity, whose definitions we now review. A sequence $\\{X_{n}:n\in\mathbb{N}\\}$ is called stationary if it is strongly stationary, i.e. if, for all $k\in\mathbb{N}$, for all $\\{n_{i}:i\in[k]\\}\subset\mathbb{N}$, and for all $m\in\mathbb{N}$, $(X_{n_{1}},\,\dots,\,X_{n_{k}})\overset{d}{=}(X_{m+n_{1}},\,\dots,\,X_{m+n_{k}})\,\,.$ With these notational conventions, we turn to a brief discussion of the notion of weak dependence in a sequence of random variables. ###### Definition 1.1. Let $\\{X_{n}:n\in\mathbb{N}\\}$ be a sequence of $(\Omega,\,\mathcal{F},\,\mathbb{P})$-measurable random variables. For each $n\in\mathbb{N}$, let $\mathcal{F}_{n}=\sigma(X_{m}:m\leq n)$, and let $\mathcal{G}_{n}=\sigma(X_{m}:m\geq n)$. 
For $n\in\mathbb{N}$, let $\alpha_{X}(n)$ be Rosenblatt’s $\alpha$-mixing coefficient, $\alpha_{X}(n)=\sup_{m\in\mathbb{N}}\sup_{A\in\mathcal{F}_{m},\,B\in\mathcal{G}_{m+n}}\left|\mathbb{P}(A\cap B)-\mathbb{P}(A)\mathbb{P}(B)\right|\,\,.$ We say that the sequence $\\{X_{n}\\}$ is strongly mixing or $\alpha$-mixing if $\alpha_{X}(n)\to 0$ as $n\to\infty$. For $n\in\mathbb{N}$, let $\beta_{X}(n)$ be the $\beta$-mixing or absolute regularity coefficient, defined as $\beta_{X}(n)=\sup_{m\in\mathbb{N}}\mathbb{E}\left[\sup_{B\in\mathcal{G}_{m+n}}\left|\mathbb{P}(B\,|\,\mathcal{F}_{m})-\mathbb{P}(B)\right|\right]\,\,.$ We say that the sequence $\\{X_{n}\\}$ is absolutely regular or $\beta$-mixing if $\beta_{X}(n)\to 0$ as $n\to\infty$. ###### Remark 1.1. As discussed in Bradley, (2005), we have that, for any sequence $\\{X_{n}:n\in\mathbb{N}\\}$ of $\mathbb{R}^{d}$-valued random variables, and for each $n\in\mathbb{N}$, $2\alpha_{X}(n)\leq\beta_{X}(n)\,\,.$ It follows that any $\beta$-mixing sequence is also $\alpha$-mixing. ## 2 Defining monotone trend We turn our attention to establishing the meaning of the expression “monotone trend” used in the remainder of this paper. In Mann, (1945), it is the case that monotone trend is defined solely in contrast to the null of i.i.d., and is not explicitly defined in itself. However, appropriately defining this phrase, or, more pertinently, defining a lack of monotone trend, is essential in order to conduct a hypothesis test of a lack of monotone trend, since otherwise defining the null becomes impossible. In this work, we restrict attention to consideration of weakly dependent sequences, as defined earlier. When considering the monotonicity (or lack thereof) of the sequence $\\{X_{n}:n\in\mathbb{N}\\}$, we provide some examples of the difficulties arising when using traditional notions of monotonicity, such as $\mathbb{E}[X_{i}]<\mathbb{E}[X_{j}]$ for $i<j$. ###### Example 2.1. (Positive local trend, negative global trend) Let $c,\,\epsilon>0$. Consider the sequence $\\{Y_{n}:n\in\mathbb{N}\\}$ such that $Y_{0}=0$, and, for all $n\geq 1$, $Y_{n}=\sum_{i=1}^{n}X_{i}\,\,,$ where the $X_{i}$ are i.i.d. such that $X_{i}=\begin{cases}1,\,\,\text{with probability }1-\epsilon\,\,,\\\ -c,\,\,\text{with probability }\epsilon\,\,.\end{cases}$ Let $\delta>0$, and let $M\in\mathbb{N}$. By letting $\epsilon=(1-\delta)^{1/M}$ and $c$ be sufficiently large, we have that $\\{Y_{n}\\}$ is a (strictly) decreasing sequence in expectation, and so this sequence exhibits a negative global monotone trend with probability 1, since, by the strong law of large numbers, $\frac{1}{n}Y_{n}\overset{a.s.}{\to}\mathbb{E}[X_{1}]<0\,\,,$ from which it follows that, for $i<j$, as $j-i\to\infty$, $\mathbb{P}(Y_{i}>Y_{j})\to 1$, and in fact, for any constant $K\in\mathbb{R}^{+}$, $\mathbb{P}(Y_{i}>Y_{j}+K)\to 1$ as $j-i\to\infty$. --- Figure 2.1: A sample path from the process described in Example 2.1, with $n=10^{5}$, $\epsilon=10^{-4}$, and $M=5\cdot 10^{4}$. 
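A few lines of code suffice to simulate a path of this kind. The sketch below is our own illustration, with the parameters taken from the figure caption and the crash size $c$ chosen only so that $\mathbb{E}[X_{1}]<0$.

```python
import numpy as np

# Simulation sketch for Example 2.1 (n and eps as in Figure 2.1).
rng = np.random.default_rng(0)
n, eps = 10**5, 1e-4
c = 2.0 / eps                          # E[X_1] = (1 - eps) - c * eps < 0
X = np.where(rng.random(n) < eps, -c, 1.0)
Y = np.cumsum(X)                       # long increasing runs, rare large drops
# Almost every step increases, yet the path typically ends far below zero.
print(np.mean(X == 1.0), Y[-1])
```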
However, such a sequence exhibits a local monotone trend in the opposite direction in the following sense: for any $n\in\mathbb{N}$, we have that $\mathbb{P}(X_{n}<X_{n+1}<\dots<X_{n+M-1})=1-\delta\,\,.$ To conclude, since it is possible to construct such a sequence for any choice of $\delta$ and $M$, we may construct a sequence $\\{X_{n}:n\in\mathbb{N}\\}$ with the following property: for any $i\in\mathbb{N}$, the sequence $(X_{i},\,\dots,\,X_{i+M-1})$ is strictly increasing with probability $(1-\delta)$, but $X_{n}\to-\infty$ with probability 1. Example 2.1 indicates that some distinction is required between the notions of local and global monotone trend. In particular, as will be illustrated in Section 3, strictly stationary and weakly dependent (in particular, absolutely regular) sequences cannot, by definition, exhibit global trend, in the sense that, as $|j-i|\to\infty$, $\mathbb{P}(X_{i}\leq X_{j})-\mathbb{P}(X_{i}\geq X_{j})\to 0\,\,.$ On account of this property of strictly stationary and weakly dependent sequences, it follows that any test of monotone trend in this context is, in fact, a test of stationarity, since one cannot disentangle the properties of stationarity and lack of monotone trend (see Remark 3.1). While there is an argument to be made that the restriction of stationarity is too strong in such a setting, this assumption is necessary to ensure some limiting behaviour of functions of the sequence $\\{X_{n}:n\in\mathbb{N}\\}$; indeed, even without considering weakly dependent processes, there exist sequences of independent, uniformly bounded random variables for which, for example, the sample mean does not converge in distribution. In light of this, we therefore consider a test of global monotone trend to test the null hypothesis $H_{0}:\\{X_{n}:n\in\mathbb{N}\\}\text{ is stationary,}$ where, for methods developed later in this paper, we consider the power of such tests against alternatives the form of which is explicitly specified. Fortunately, the issue of appropriately defining the notion of local monotone trend is a simpler one; in particular, local monotonicity is a property which is not directly tied to the stationarity of a sequence. For a simple example of a sequence with no local monotone trend, consider $X_{n}$ being i.i.d. random variables drawn from a continuous distribution $F$. In this setting, which is the null setting of the original test of Mann, (1945), any ordering of any local subsequence of this process $(X_{n},\,\dots,\,X_{n+M-1})$ is equally likely, and any definition of local monotone trend should exclude this example from exhibiting such a trend. In contrast, a sequence for which it should be said that a local monotone trend exists is given in the example below. ###### Example 2.2. (Markov chain with local monotone trend) Let $M\in\mathbb{N}$, and let $\epsilon>0$. Let $\pi_{-M}=\frac{\epsilon}{1-(1-\epsilon)^{2M+1}}\,\,.$ For each $i\in\\{-M+1,\,\dots,\,M\\}$, let $\pi_{i}=\pi_{-M}\cdot(1-\epsilon)^{i+M}$ Consider the Markov chain $\\{X_{n}:n\in\mathbb{N}\\}$ on the state space $\\{-M,\,\dots,\,M\\}$ with initial distribution $\pi$, and transition matrix $P_{i,\,j}=\begin{cases}1-\epsilon,\,\,&j=i+1\text{ and }i\leq M-1\\\ 1-\epsilon,\,\,&i=j=M\\\ \epsilon,&j=-M\,\,.\end{cases}$ This is an irreducible, aperiodic Markov chain on a finite state space with stationary distribution $\pi$. 
In particular, by the weak Markov property, this sequence is strictly stationary, and is absolutely regular, as a consequence of standard convergence results for irreducible, aperiodic Markov chains on a finite state space (see Freedman, (2011)). A sample path from such a process is shown below. --- Figure 2.2: A sample path from the process described in Example 2.2, with $n=10^{5}$, $\epsilon=10^{-4}$, and $M=10^{5}$. For an appropriate choice of $\epsilon$ and $M$, this process exhibits the same local behaviour as the process in Example 2.1; namely, for any arbitrary run length $K\in\mathbb{N}$ and any arbitrarily small $\delta>0$, one can construct such a process for which, for any $n$, $\mathbb{P}(X_{n}<X_{n+1}<\dots<X_{n+K})\geq(1-\delta)\,\,,$ i.e. one may construct a sequence with arbitrarily strong local monotone trend. In light of these examples, we proceed to define the null hypotheses of no global monotone trend and no local monotone trend. In order to test the hypothesis of no global monotone trend in the setting of weakly dependent sequences, we use the null hypothesis $H_{0}^{(g)}:\\{X_{n}\\}\text{ is strictly stationary.}$ In order to test the hypothesis of no local monotone trend of order $M$ in the setting of strictly stationary, weakly dependent sequences, we use the null hypothesis $H^{(l)}_{0,\,M}:\frac{2}{(n-M)M}\sum_{i<j\leq i+M}\left(\mathbb{P}(X_{i}<X_{j})-\mathbb{P}(X_{i}>X_{j})\right)=0\,\,.$ Of course, under strict stationarity, this is equivalent to the null hypothesis $\tilde{H}^{(l)}_{0,\,M}:\sum_{i=2}^{M+1}(\mathbb{P}(X_{1}<X_{i})-\mathbb{P}(X_{1}>X_{i}))=0\,\,.$ With these null hypotheses precisely defined, we may now begin construction of appropriate permutation testing procedures for both global and local monotone trend. ## 3 Testing for global trend In this section, we consider the problem of testing the null hypothesis $H^{(g)}_{0}:\\{X_{i}:i\in[n]\\}\text{ is stationary,}$ in the setting where the sequence $\\{X_{i}:i\in[n]\\}$ is absolutely regular. When the distribution of $(X_{1},\,\dots,\,X_{n})$ is invariant under permutation, i.e. the sequence is exchangeable, the randomization hypothesis holds, and so one may construct permutation tests of the hypothesis $H_{0}^{(g)}$ with exact level $\alpha$. However, in the case of absolutely regular sequences, by Lemma S.3.1 in Romano and Tirlea, (2022), exchangeability and i.i.d. are equivalent conditions. This implies that any permutation testing procedure will retain the property of exactness under the additional assumption that the $X_{i}$ are i.i.d., and this is the only condition under which the randomization hypothesis holds in this setting. However, if the realizations of the sequence $\\{X_{i}:i\in[n]\\}$ are not independent or are not i.i.d., the test may not be valid even asymptotically, i.e. the rejection probability of a permutation test may not be equal to $\alpha$ or even close to $\alpha$ as $n\to\infty$. Our goal is therefore to construct a testing procedure which has asymptotic rejection probability equal to $\alpha$, but which also retains finite sample exactness under the additional assumption that the $X_{i}$ are i.i.d., and so appropriate consideration of the asymptotic properties of the permutation distribution and the test statistic must be undertaken. 
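The permutation tests considered throughout have the following generic Monte Carlo form; the sketch below (a schematic of ours, not code from the paper) makes explicit the random sampling from $S_{n}$ and the p-value construction, which is exact at level $\alpha$ whenever the randomization hypothesis holds. The studentized statistics developed in the following subsections simply plug in as `statistic`.

```python
import numpy as np

def permutation_pvalue(x, statistic, n_perm=999, rng=None):
    """Monte Carlo permutation p-value for a test that rejects for large
    values of `statistic`, using random relabelings drawn from S_n."""
    rng = np.random.default_rng() if rng is None else rng
    t_obs = statistic(x)
    t_perm = np.array([statistic(rng.permutation(x)) for _ in range(n_perm)])
    # The "+1" terms make the test valid in finite samples under exchangeability.
    return (1 + np.sum(t_perm >= t_obs)) / (n_perm + 1)
```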
We wish to consider a permutation test based on the Mann-Kendall statistic, and so we may begin by defining the Mann-Kendall statistic and analyzing the limiting behavior of the permutation distribution $\hat{R}_{n}$ based on this test statistic. ### 3.1 The global Mann-Kendall statistic ###### Definition 3.1. For $X_{1},\,\dots,\,X_{n}$ a sequence of random variables, let $U_{n}=U_{n}(X_{1},\,\dots,\,X_{n})=\binom{n}{2}^{-1}\sum_{i=1}^{n}\sum_{j=i+1}^{n}\left(I\\{X_{j}>X_{i}\\}-I\\{X_{i}>X_{j}\\}\right)$ be the (global) Mann-Kendall statistic. ###### Theorem 3.1. Let $\\{X_{i}:i\in\mathbb{N}\\}$ be a sequence of random variables such that, for all $i\neq j$, $\mathbb{P}(X_{i}=X_{j})=0$. Let $\hat{R}_{n}$ be the permutation distribution, based on the test statistic $\sqrt{n}U_{n}$, with associated group of transformations $S_{n}$, the symmetric group of order $n$. We have that, as $n\to\infty$, $\sup_{t\in\mathbb{R}}\left|\hat{R}_{n}(t)-\Phi(3t/2)\right|\overset{a.s.}{\to}0\,\,,$ where $\Phi$ is the standard Gaussian c.d.f. ###### Proof. Let $\Pi_{n}\sim\text{Unif}(S_{n})$, independent of $\\{X_{i},\,i\in[n]\\}$. Let $U_{\Pi_{n}}=U_{n}(X_{\Pi_{n}(1)},\,\dots,\,X_{\Pi_{n}(n)})\,\,.$ Conditional on the sequence $\\{X_{i},\,i\in[n]\\}$, we observe that, on account of the lack of ties, each ordering of the $X_{\Pi_{n}(i)}$ is equally likely. Since the distribution of $U_{n}$ only depends on the ranks of the sequence $\\{X_{i}:i\in[n]\\}$, the result follows by Mann, (1945). We observe that, other than the assumption of no ties among the $\\{X_{i}\\}$, no other conditions are required for the result of Theorem 3.1. This occurs since $U_{n}$ is a rank statistic, and, under the action of a random permutation $\Pi_{n}$, the ranks of the $\\{X_{i}\\}$ are uniformly distributed over the set of permutations $S_{n}$, and so the permutation distribution $\hat{R}_{n}$ is exactly equal to the distribution of $U_{n}$ when the $X_{i}$ are independent and identically distributed. In order to appropriately assess the asymptotic validity of the permutation test based on this statistic, we must turn our attention to determining the asymptotic distribution of $U_{n}$. ### 3.2 Asymptotic distribution of the Mann-Kendall statistic Having determined the limiting behavior of the permutation distribution based on $U_{n}$, we now consider the true unconditional limiting distribution of the test statistic, since consistency would require these limiting distributions to be the same. Because we allow quite general forms of weak dependence, the asymptotic distribution of the test statistic presents more of a challenge. We proceed as follows: although $U_{n}$ is not a U-statistic, since the corresponding kernel $h$ is antisymmetric, we may apply the same idea behind the projection method used to find the asymptotic distribution of U-statistics as in the original proof due to Hoeffding, (1948), i.e. we linearize the statistic $U_{n}$. Note, however, that the linearization is not based on the true “projection” defined via conditional expectations, but rather on a “pseudo-projection”, which assumes the observations are i.i.d. In order to perform this linearization, we require the following definition, as well as the subsequent lemma. ###### Definition 3.2. For $\\{i_{j}:j\in[k]\\}\subset\mathbb{N}$, let $\\{X_{i_{j}}:j\in[k]\\}$ be a sequence of random variables.
For each $j\in\\{1,\,\dots,\,k-1\\}$, let $\mathbb{P}_{j}^{(k)}$ be the probability measure defined by $\mathbb{P}_{j}^{(k)}(E^{(j)}\times E^{(k-j)})=\mathbb{P}((X_{i_{1}},\,\dots,\,X_{i_{j}})\in E^{(j)})\mathbb{P}((X_{i_{j+1}},\,\dots,\,X_{i_{k}})\in E^{(k-j)})\,\,,$ where $E^{(j)}$ and $E^{(k-j)}$ are Borel sets in $\mathbb{R}^{j}$ and $\mathbb{R}^{k-j}$, respectively. Also, let $\mathbb{P}_{0}^{k}$ be the probability measure defined by $\mathbb{P}_{0}^{k}(E^{(k)})=\mathbb{P}((X_{i_{1}},\,\dots,\,X_{i_{k}})\in E^{(k)})\,\,,$ where $E^{(k)}$ is a Borel set in $\mathbb{R}^{k}$. ###### Lemma 3.1. Let $h:\mathbb{R}^{k}\to\mathbb{R}$ be a Borel function such that $|h|\leq M$, for some $M\in\mathbb{R}^{+}$. Then $\left|\int_{\mathbb{R}^{k}}h(x_{1},\,\dots,\,x_{k}){\rm{d}}\mathbb{P}^{(k)}_{0}-\int_{\mathbb{R}^{k}}h(x_{1},\,\dots,\,x_{k}){\rm{d}}\mathbb{P}_{j}^{(k)}\right|\leq 2M\beta_{X}(i_{j+1}-i_{j})\,\,.$ ###### Proof. Let $\nu$ be the signed measure $\mathbb{P}_{0}^{(k)}-\mathbb{P}_{j}^{(k)}$, and let $|\nu|$ denote its corresponding total variation measure. We have that $\displaystyle\left|\int_{\mathbb{R}^{k}}h(x_{1},\,\dots,\,x_{k}){\rm{d}}\mathbb{P}^{(k)}_{0}-\int_{\mathbb{R}^{k}}h(x_{1},\,\dots,\,x_{k}){\rm{d}}\mathbb{P}_{j}^{(k)}\right|$ $\displaystyle=\left|\int_{\mathbb{R}^{k}}h(x_{1},\,\dots,\,x_{k}){\rm{d}}\nu\right|$ $\displaystyle\leq\int_{\mathbb{R}^{k}}|h(x_{1},\,\dots,\,x_{k})|{\rm{d}}|\nu|$ $\displaystyle\leq 2M\cdot\text{TV}\left(\mathbb{P}_{0}^{(k)},\,\mathbb{P}_{j}^{(k)}\right)$ $\displaystyle=2M\beta_{X}(i_{j+1}-i_{j})\,\,,$ where TV denotes total variation distance, and the final equality follows by Volkonskii and Rozanov, (1961). We utilize this lemma as follows. Let $F$ be the marginal distribution of the $X_{i}$. In essence, we may not perform a true linearization, since, for $i\neq j$, the conditional expectation $\mathbb{P}(X_{i}\leq X_{j}\,|\,X_{j})$ is not equal to $F(X_{j})$, since the $X_{i}$ need not be independent. In spite of this, Lemma 3.1 provides an upper bound on the difference between the true projection $\mathbb{P}(X_{i}\leq X_{j}\,|\,X_{j})$ and the approximate projection $F(X_{j})$, in terms of the $\beta$-mixing coefficients of the sequence $\\{X_{n}:n\in\mathbb{N}\\}$. By requiring appropriate $\beta$-mixing conditions on the sequence $\\{X_{i}\\}$, we may bound the error term of this approximate linearization, and, using the central limit theorem due to Neumann, (2013), we may conclude the following result. ###### Theorem 3.2. Let $\\{X_{n}:n\in\mathbb{N}\\}$ be a strictly stationary, absolutely regular sequence of real-valued random variables with marginal distribution $F$, such that, for all $i\neq j$, $\mathbb{P}(X_{i}=X_{j})=0$. Suppose that the $\beta$-mixing coefficients of $\\{X_{n}\\}$ satisfy $\sum_{n=1}^{\infty}\beta_{X}(n)<\infty\,\,.$ (3.1) For each $i\in[n]$, let $V_{i}=1-2F(X_{i})$. Let $\sigma^{2}=\frac{4}{9}+\frac{8}{3}\sum_{k\geq 1}\text{Cov}(V_{1},\,V_{1+k})\,\,.$ (3.2) Suppose that $\sigma^{2}>0$. As $n\to\infty$, $\sqrt{n}U_{n}\overset{d}{\to}N(0,\,\sigma^{2})\,\,.$ ###### Remark 3.1. Note that, as a consequence of the proof of Theorem 3.2, we have that $\mathbb{E}[\sqrt{n}U_{n}]=o(1)\,\,,$ i.e. for any stationary, absolutely regular sequence satisfying the conditions laid out therein, the limiting mean of the test statistic is 0. 
In particular, this motivates our choice of null hypothesis $H_{0}^{(g)}$, since this will be the case for any stationary sequence $\\{X_{n}:n\in\mathbb{N}\\}$ satisfying the requisite mixing conditions. ###### Remark 3.2. Since, for any sequence $\\{X_{n}:n\in\mathbb{N}\\}$, we have that, for all $n$, $\beta_{X}(n)\geq 2\alpha_{X}(n)\,\,,$ it follows that the condition (3.1) implies that $\sum_{n\geq 1}\alpha_{X}(n)<\infty\,\,.$ This condition is sufficient for Theorem 2.1 of Neumann, (2013) to be applied in the above proof. It also follows that $\sum_{k=1}^{\infty}\beta_{X}(k)<\infty\implies\sum_{k=1}^{n}k\beta_{X}(k)=o(n)\,\,,$ a proof of which is as follows. For $(k,\,l)\in\mathbb{N}\times(\mathbb{N}\cup\\{\infty\\})$ such that $k\leq l$, let $S_{k,\,l}=\sum_{i=k}^{l}\beta_{X}(i)\,\,.$ Let $\epsilon>0$. Without loss of generality, we may assume that $\epsilon<S_{1,\,\infty}$. There exists $N\in\mathbb{N}$ such that, for all $k\geq N$, $S_{k,\,\infty}<\epsilon\,\,.$ We have that, for $n\geq N$, $\displaystyle\sum_{k=1}^{n}k\beta_{X}(k)$ $\displaystyle=\sum_{k=1}^{n}S_{k,\,n}$ $\displaystyle=\sum_{k=1}^{N-1}S_{k,\,n}+\sum_{k=N}^{n}S_{k,\,N}$ $\displaystyle\leq(N-1)S_{1,\,\infty}+(n-N+1)\epsilon$ $\displaystyle\leq 2n\epsilon\,\,,$ for all $n\geq NS_{1,\,\infty}/\epsilon$. However, due to the above two implications, it may be seen that the condition (3.1) may be replaced with the weaker conditions $\displaystyle\sum_{k=1}^{\infty}\alpha_{X}(k)$ $\displaystyle<\infty$ (3.3) $\displaystyle\sum_{k=1}^{n}k\beta_{X}(k)$ $\displaystyle=o(n)\,\,,$ without affecting the result of Theorem 3.2. ###### Remark 3.3. The proof of Theorem 3.2 follows the same structure as the original proof of the limiting distribution of U-statistics found in Hoeffding, (1948). The statistic in question is split into a linearized term, formed by conditioning on one of the entries in the kernel $h$, and a remainder term. It is then shown that the remainder term converges to 0 in probability, and that the linearized term satisfies a central limit theorem. There are two immediate issues with this approach in the case of the Mann- Kendall statistic for dependent data: firstly, the kernel $h$ is antisymmetric, hence the order of projection becomes relevant to the resulting linearized term. In the case of independent, but not necessarily i.i.d. random variables, this has been previously considered in Han and Qian, (2018), with an application to the Mann-Kendall statistic. The second, and more challenging, issue is that, in the case of dependent data, the conditional expectation $\mathbb{E}[h(X_{i},\,X_{j})\,|\,X_{j}]$ (3.4) depends not only on $X_{j}$, but also on the conditional distribution of $X_{i}$ given $X_{j}$. In particular, in the case of a strictly stationary sequence, (3.4) depends on both $X_{j}$ and $j-i$. This suggests that a true projection, formed by conditioning on the variables $\\{X_{i}\\}$, will not lead to a sum of $n$ random variables which is more amenable to the application of a central limit theorem. However, as the proof of Theorem 3.2 demonstrates, one may take the approach of pseudo-projection: namely, we may rewrite the full statistic as a sum of linear terms which would be obtained by projection if the data were i.i.d., in addition to a remainder term. Heuristically, the reasoning behind this approach is as follows. 
By Lemma 3.1, the difference between the true projection and pseudo-projection for any given term in the Mann-Kendall statistic is $\mathbb{E}\left[|(1-2F(X_{j}))-\mathbb{E}[h(X_{i},\,X_{j})\,|\,X_{j}]|\right]=O(\beta_{X}(j-i))\,\,.$ By Markov’s inequality, the sum of the differences is therefore “small in probability” under the given conditions, and so a CLT may be applied to the sum of pseudo-projections without penalty. While we have shown that a central limit theorem holds for the statistic $U_{n}$, we observe, that, in general, the limiting variance (3.2) of the global Mann-Kendall statistic is not, in general, equal to the limiting variance of the permutation distribution, as found in Theorem 3.1. In particular, this implies that a permutation test based on this statistic does not control the probability of a Type 1 error, even asymptotically. We therefore turn our attention to studentization, i.e. we wish to find an appropriate estimator of the variance (3.2). To this end, we provide the following theorem. ###### Theorem 3.3. Let $\\{X_{n}:n\in\mathbb{N}\\}$ be a sequence of random variables satisfying the conditions in Theorem 3.2. Let $\hat{F}_{n}$ be the empirical distribution of $\\{X_{n}:n\in\mathbb{N}\\}$. Let $\\{b_{n}:n\in\mathbb{N}\\}\subset\mathbb{N}$ be a sequence such that $b_{n}<n$ for all $n$, $b_{n}=o(\sqrt{n})$, and, as $n\to\infty$, $b_{n}\to\infty$. Let $\hat{\sigma}_{n}^{2}=\frac{4}{9}+\frac{8}{3n}\sum_{k=1}^{b_{n}}\sum_{j=1}^{n-k}\left(1-2\hat{F}_{n}(X_{j})\right)\left(1-2\hat{F}_{n}(X_{j+k})\right)\,\,.$ (3.5) Let $\sigma^{2}$ be as in Theorem 3.2. Then, as $n\to\infty$, $\hat{\sigma}_{n}^{2}\overset{p}{\to}\sigma^{2}\,\,.$ Having constructed a consistent estimator of the variance, we may now studentize to obtain the following result. ###### Theorem 3.4. Let $\\{X_{n}:n\in\mathbb{N}\\}$ be a strictly stationary, absolutely regular sequence of random variables, with marginal distribution $F$, such that, for all $i\neq j$, $\mathbb{P}(X_{i}=X_{j})=0$. Suppose that the $\beta$-mixing coefficients of $\\{X_{n}\\}$ satisfy $\sum_{n=1}^{\infty}\beta_{X}(n)<\infty\,\,.$ For each $i\in\mathbb{N}$, let $V_{i}=1-2F(X_{i})$. Let $\sigma^{2}$ be as in (3.2), and suppose that $\sigma^{2}>0$. Let $\\{b_{n}:n\in\mathbb{N}\\}\subset\mathbb{N}$ be a sequence such that $b_{n}<n$ for all $n$, $b_{n}=o(\sqrt{n})$, and, as $n\to\infty$, $b_{n}\to\infty$. Let $\hat{\sigma}_{n}^{2}=\frac{4}{9}+\frac{8}{3n}\sum_{k=1}^{b_{n}}\sum_{j=1}^{n-k}\left(1-2\hat{F}_{n}(X_{j})\right)\left(1-2\hat{F}_{n}(X_{j+k})\right)\,\,.$ The following results hold. 1. (i) As $n\to\infty$, $\frac{\sqrt{n}U_{n}}{\hat{\sigma}_{n}}\overset{d}{\to}N(0,\,1)\,\,.$ 2. (ii) Let $\hat{R}_{n}$ be the permutation distribution, with associated group of transformations $S_{n}$, the symmetric group of order $n$, based on the test statistic $\sqrt{n}U_{n}/\hat{\sigma}_{n}$. Then, as $n\to\infty$, $\sup_{t\in\mathbb{R}}\left|\hat{R}_{n}(t)-\Phi(t)\right|\overset{a.s.}{\to}0\,\,,$ where $\Phi$ is the standard Gaussian c.d.f. ###### Proof. The result of (i) follows immediately from Theorem 3.2, Theorem 3.3, and Slutsky’s theorem. We turn our attention to (ii). Let $\Pi_{n},\,\Pi_{n}^{\prime}\sim\text{Unif}(S_{n})$, with $\Pi_{n}$, $\Pi_{n}^{\prime}$, and $(X_{1},\,\dots,\,X_{n})$ independent, and let $\mathbf{X}_{\Pi_{n}}$, $\mathbf{X}_{\Pi_{n}^{\prime}}$ denote the action of $\Pi_{n}$ and $\Pi_{n}^{\prime}$ on $(X_{1},\,\dots,\,X_{n})$, respectively. 
Since $F$ is continuous, the ranks of the sequences $\mathbf{X}_{\Pi_{n}}$ and $\mathbf{X}_{\Pi_{n}^{\prime}}$ are uniformly distributed over the set of permutations of $[n]$ with probability 1. Also, since $\Pi_{n}$ and $\Pi_{n}^{\prime}$ are independent, the ranks of $\mathbf{X}_{\Pi_{n}}$ and $\mathbf{X}_{\Pi_{n}^{\prime}}$ are independent. Furthermore, note that, for each $i\in[n]$, $\hat{F}_{\Pi_{n}}(X_{\Pi_{n}(i)})=\frac{1}{n}r(X_{\Pi_{n}(i)})\,\,,$ for $r(X_{\Pi_{n}(i)})$ the rank of $X_{\Pi_{n}(i)}$, i.e. $\hat{\sigma}_{\Pi_{n}}$ is also a rank statistic. In particular, it follows that, for $U_{\Pi_{n}}=U_{n}(\mathbf{X}_{\Pi_{n}})\,\,,$ and for $U_{\Pi_{n}^{\prime}}$ defined analogously, $U_{\Pi_{n}}/\hat{\sigma}_{\Pi_{n}}$ and $U_{\Pi_{n}^{\prime}}/\hat{\sigma}_{\Pi_{n}^{\prime}}$ are i.i.d. random variables, each having the same distribution as $\frac{U_{n}(\tilde{X}_{1},\,\dots,\,\tilde{X}_{n})}{\hat{\sigma}_{n}(\tilde{X}_{1},\,\dots,\,\tilde{X}_{n})}\,\,,$ where $\\{\tilde{X}_{i}:i\in[n]\\}$ is a sequence of i.i.d. random variables, each having the marginal distribution $F$. It follows that, by (i), as $n\to\infty$, $\left(\frac{\sqrt{n}U_{\Pi_{n}}}{\hat{\sigma}_{\Pi_{n}}},\,\frac{\sqrt{n}U_{\Pi_{n}^{\prime}}}{\hat{\sigma}_{\Pi_{n}^{\prime}}}\right)\overset{d}{\to}N(0,\,I_{2})\,\,,$ where $I_{2}$ is the $2\times 2$ identity matrix. Therefore, by Theorem 15.2.3 of Lehmann and Romano, (2022), the result of (ii) follows. ###### Remark 3.4. As a consequence of Theorem 3.4, a two-sided permutation test of the null hypothesis $H_{0}^{(g)}:\\{X_{n}:n\in\mathbb{N}\\}\text{ is stationary,}$ which rejects for large values of $\sqrt{n}{U_{n}}/\hat{\sigma}_{n}$, is asymptotically valid under the stated conditions. ###### Remark 3.5. Since $\sqrt{n}U_{n}/\hat{\sigma}_{n}$ is a rank statistic, it follows that the permutation distribution $\hat{R}_{n}$ has no dependence on the underlying distribution of the $X_{i}$, as long as the marginal distribution $F$ is continuous. In particular, it follows that $\hat{R}_{n}$ retains a convenient property of the original null distribution of the Mann-Kendall test: namely, for a fixed choice of $n$ and the truncation parameter $b_{n}$ of the studentization factor, the permutation distribution is fixed, and so may be computed in advance and tabulated. Having shown the asymptotic validity of the permutation test based on the studentized global Mann-Kendall statistic, we turn our attention to finding the local limiting power function of this test. ###### Theorem 3.5. Let $\\{X_{i}:i\in[n]\\}$ be a sequence of random variables satisfying the conditions of Theorem 3.2. Suppose further that $F$ is twice differentiable, with density $f$ with respect to Lebesgue measure, such that $f$ has uniformly bounded first derivative $f^{\prime}$, which is measurable (with respect to Lebesgue measure). 
Let $\\{\mu_{n,\,i}:i\in[n]\\}\subset\mathbb{R}$ be such that, as $n\to\infty$, $\displaystyle\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\mu_{n,\,i}^{2}$ $\displaystyle\to 0\,\,.$ (3.6) Let $\nu_{n}=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{n+1-2i}{n-1}\mu_{n,\,i}\,\,.$ For each $i\in[n]$, let $Y_{n,\,i}=X_{i}+\mu_{n,\,i}\,\,.$ Let $\mathbb{P}_{n}$, $\mathbb{Q}_{n}$ be the measures such that, for $A\in\mathcal{B}\left(\mathbb{R}^{n}\right)$, $\displaystyle\mathbb{P}_{n}(A)$ $\displaystyle=\mathbb{P}((X_{1},\,\dots,\,X_{n})\in A)$ $\displaystyle\mathbb{Q}_{n}(A)$ $\displaystyle=\mathbb{P}((Y_{n,\,1},\,\dots,\,Y_{n,\,n})\in A)\,\,.$ Suppose that $\\{\mathbb{Q}_{n}:n\in\mathbb{N}\\}$ is contiguous to $\\{\mathbb{P}_{n}:n\in\mathbb{N}\\}$. Then, under $\mathbb{Q}_{n}$, as $n\to\infty$, $\sqrt{n}U_{n}+\nu_{n}\overset{d}{\to}N(0,\,\sigma^{2})\,\,,$ where $\sigma^{2}$ is as defined in Theorem 3.2. ###### Remark 3.6. Note that the condition on $f^{\prime}$ implies that $f$ is uniformly bounded. Indeed, suppose the contrary. Let $\lVert f^{\prime}\rVert_{\infty}=k<\infty$. Then, for $N>\sqrt{\frac{1}{2k}}$, for $x$ such that $f(x)\geq N+1$, $\displaystyle\int_{x}^{x+2/N}f(t){\rm{d}}t$ $\displaystyle\geq\int_{x}^{x+2/N}(f(x)-kt){\rm{d}}t$ $\displaystyle=\frac{2f(x)}{N}-k\cdot\frac{2}{N^{2}}$ $\displaystyle>2+\frac{2}{N}-1$ $\displaystyle>1\,\,,$ i.e. we have obtained a contradiction. Note also that this implies that the random variable $f(X_{1})$ is bounded, and so, in particular, has finite moments of every order. ###### Corollary 3.1. Under the conditions of Theorem 3.5, for $\hat{\sigma}_{n}^{2}$ as defined in Theorem 3.3, we have that, under $\mathbb{Q}_{n}$, $\frac{\sqrt{n}U_{n}+\nu_{n}}{\hat{\sigma}_{n}}\overset{d}{\to}N(0,\,1)\,\,.$ ###### Proof. The result follows immediately from Theorem 3.5, Theorem 3.3, the definition of contiguity, and Slutsky’s theorem. ###### Remark 3.7. By Corollary 3.1, it follows that, as long as $\nu_{n}\to\nu\in[-\infty,\,\infty]$, for $\phi_{n}=\phi_{n}(Y_{1},\,\dots,\,Y_{n})$ the test function corresponding to the level $\alpha$ permutation test outlined in Theorem 3.4, the limiting power against the sequence of measures $\mathbb{Q}_{n}$ is given by $\mathbb{E}_{\mathbb{Q}_{n}}\left[\phi_{n}(Y_{1},\,\dots,\,Y_{n})\right]\to 1-\Phi\left(z_{1-\alpha}+\frac{\nu}{\sigma}\right)\,\,,$ where $\Phi$ is the standard Gaussian c.d.f., $\sigma$ is as defined in (3.2), and $z_{1-\alpha}$ is the $(1-\alpha)$-quantile of the standard Gaussian distribution. ###### Example 3.1. (White noise process) Let $\\{X_{n}:\,n\in\mathbb{N}\\}$ be a standard Gaussian white noise process, i.e. $X_{n}\sim N(0,\,1)$. Let $h>0$. For each $n\in\mathbb{N}$, for each $i\in[n]$, let $Y_{n,\,i}=X_{i}+\frac{hi}{n^{3/2}}\,\,,$ i.e. in the setting of Theorem 3.5, we take $\mu_{n,\,i}=\frac{hi}{n^{3/2}}\,\,.$ Let $\mathbb{P}_{n}$ and $\mathbb{Q}_{n}$ be as in Theorem 3.5. We begin by showing that $\\{\mathbb{Q}_{n}\\}$ is contiguous to $\\{\mathbb{P}_{n}\\}$. We have that the log-likelihood ratio is given by $\log L_{n}=-\frac{h^{2}}{2n^{3}}\sum_{i=1}^{n}i^{2}+\frac{h}{n^{3/2}}\sum_{i=1}^{n}iX_{i}\,\,.$ Let $\mu_{n}=\frac{h^{2}}{6n^{3}}n(n+1)(2n+1)\,\,.$ (3.7) We have that $\log L_{n}\sim N\left(-\frac{1}{2}\mu_{n},\,\mu_{n}^{2}\right)\,\,,$ and so, by Corollary 12.3.1 of Lehmann and Romano, (2022), $\\{\mathbb{Q}_{n}\\}$ and $\\{\mathbb{P}_{n}\\}$ are mutually contiguous. 
Also, $\displaystyle\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\left(\frac{hi}{n^{3/2}}\right)^{2}$ $\displaystyle=\frac{h^{2}}{n^{7/2}}\sum_{i=1}^{n}i^{2}$ $\displaystyle=\frac{h^{2}n(n+1)(2n+1)}{n^{7/2}}$ $\displaystyle=o(1)\,\,.$ We may therefore apply Theorem 3.5, and so $\sqrt{n}U_{n}+\nu_{n}\overset{d}{\to}N\left(0,\,\frac{4}{9}\right)\,\,.$ By Remark 3.7, in order to compute the limiting power of the one-sided level $\alpha$ permutation test, it remains to compute the limit of the sequence $\\{\nu_{n}\\}$. We have that, as $n\to\infty$, $\displaystyle\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{n+1-2i}{n-1}\cdot\frac{hi}{n^{3/2}}$ $\displaystyle=-\frac{h(n+1)}{6n}$ $\displaystyle\to-\frac{h}{6}\,\,.$ Therefore the local limiting power function of the permutation test is given by $\mathbb{E}_{\mathbb{Q}_{n}}[\phi_{n}]\to 1-\Phi\left(z_{1-\alpha}-\frac{h}{4}\right)\,\,,$ where $\Phi$ is the standard Gaussian c.d.f. ###### Example 3.2. ($AR(1)$ process) For $n\geq 1$, let $\\{X_{n}:\,n\in\mathbb{Z}\\}$ satisfy the equation, for $\rho\in\mathbb{R}$ such that $|\rho|<1$, $X_{n+1}=\rho X_{n}+\epsilon_{n+1}\,\,,$ (3.8) where the $\epsilon_{k}$ are independent and identically distributed, with $\mathbb{E}[\epsilon_{k}]=0$, $\mathbb{E}\left[\epsilon_{k}^{2}\right]=1$, and, for some $\delta>0$, $\mathbb{E}\left[\left|\epsilon_{k}\right|^{2+\delta}\right]<\infty\,\,,$ i.e. $X$ is an $AR(1)$ process. Since $|\rho|<1$, there exists a unique stationary solution to (3.8). By Mokkadem, (1988), Theorem 1, if the distribution of the $\epsilon_{k}$ is absolutely continuous with respect to Lebesgue measure, the conditions of Theorem 3.4 are satisfied. Therefore, asymptotically, the rejection probability of the permutation test applied to such a sequence will be equal to the nominal level $\alpha$. Now, consider the triangular array of sequences given by, for $i\in[n]$, $Y_{n,\,i}=X_{i}+\mu_{n,\,i}\,\,,$ where $\mu_{n,\,i}=\frac{hi}{n^{3/2}}\,\,.$ Let $\mathbb{P}_{n}$ and $\mathbb{Q}_{n}$ be as in Theorem 3.5. We begin by showing that $\\{\mathbb{Q}_{n}\\}$ is contiguous to $\\{\mathbb{P}_{n}\\}$. We have that the log-likelihood ratio is given by $\displaystyle\log L_{n}$ $\displaystyle=-\frac{1}{2\left(1-\rho^{2}\right)}\sum_{i=1}^{n}\mu_{n,\,i}^{2}+\frac{1}{\left(1-\rho^{2}\right)}\sum_{i=1}^{n}\mu_{n,\,i}(X_{i}-\rho X_{i-1})$ $\displaystyle=-\frac{1}{2\left(1-\rho^{2}\right)}\sum_{i=1}^{n}\mu_{n,\,i}^{2}+\frac{1}{\left(1-\rho^{2}\right)}\sum_{i=1}^{n}\mu_{n,\,i}\epsilon_{i}\,\,.$ Let $\mu_{n}$ be as in (3.7). 
We have that $\displaystyle\text{Var}\left(\sum_{i=1}^{n}\mu_{n,\,i}\epsilon_{i}\right)$ $\displaystyle=\mu_{n}^{2}$ $\displaystyle=\frac{h^{2}}{3}\left(1+o(1)\right)\,\,.$ Also, by an integral approximation, $\displaystyle\sum_{i=1}^{n}\mathbb{E}\left[\left|\mu_{n,\,i}\epsilon_{i}\right|^{2+\delta}\right]$ $\displaystyle=\mathbb{E}\left[|\epsilon_{1}|^{2+\delta}\right]\cdot\frac{h^{2+\delta}}{3n^{3(2+\delta)/2}}\sum_{i=1}^{n}i^{2+\delta}$ $\displaystyle=\mathbb{E}\left[|\epsilon_{1}|^{2+\delta}\right]\cdot\frac{h^{2+\delta}\cdot n^{3+\delta}}{3n^{3+3\delta/2}}\cdot\frac{1}{3+\delta}(1+o(1))$ $\displaystyle=o(1)\,\,.$ In particular, it follows that the triangular array $Z_{n,\,i}=\mu_{n,\,i}\epsilon_{i}$ satisfies the Lyapunov condition, and so, by the Lyapunov central limit theorem and Slutsky’s theorem, $\log L_{n}\overset{d}{\to}N\left(-\frac{h^{2}}{6},\,\frac{h^{2}}{3}\right)\,\,.$ Therefore, by Corollary 12.3.1 of Lehmann and Romano, (2022), $\\{\mathbb{Q}_{n}\\}$ and $\\{\mathbb{P}_{n}\\}$ are mutually contiguous. Hence, by the same argument as in Example 3.1, as long as the marginal distribution of the $X_{i}$ satisfies the conditions of Theorem 3.5, the local limiting power of the one-sided level $\alpha$ permutation test is given by $\mathbb{E}_{\mathbb{Q}_{n}}[\phi_{n}]\to 1-\Phi\left(z_{1-\alpha}-\frac{h}{6\sigma}\right)\,\,,$ where $\sigma$ is as in (3.2). ## 4 Tests of local trend In this section, we discuss the problem of conducting a hypothesis test of local trend, i.e. of testing the null hypothesis, for a stationary sequence $\\{X_{n}:n\in\mathbb{N}\\}$ and some $M\in\mathbb{N}$, $H^{(l)}_{0,\,M}:\frac{1}{(n-M)M}\sum_{i=1}^{n-M}\sum_{j={i+1}}^{i+M}\left(\mathbb{P}(X_{i}<X_{j})-\mathbb{P}(X_{i}>X_{j})\right)=0\,\,.$ Note that, if $H_{0,\,M}^{(l)}$ holds for all $M\in\mathbb{N}$, we have that, for all $i\neq j$, $\mathbb{P}(X_{i}<X_{j})=\mathbb{P}(X_{i}>X_{j})\,\,,$ i.e. the intersection of this countable family of null hypotheses results in a stronger condition on the sequence $\\{X_{n}:n\in\mathbb{N}\\}$ than $H_{0}^{(g)}$. In order to construct an appropriate permutation test of $H_{0,\,M}^{(l)}$, we begin by defining a family of statistics, which we term the local Mann-Kendall statistics. ### 4.1 The local Mann-Kendall statistic ###### Definition 4.1. Let $n\in\mathbb{N}$, and let $\\{X_{i}:i\in[n]\\}$ be a sequence of random variables. Let $g(n)\in[n-1]$. Let $V_{n}=V_{n}(X_{1},\,\dots,\,X_{n})=\frac{1}{ng(n)}\sum_{i=1}^{n-g(n)}\sum_{j=i+1}^{i+g(n)}\left(I\\{X_{i}<X_{j}\\}-I\\{X_{i}>X_{j}\\}\right)$ be the local Mann-Kendall statistic of order $g(n)$. ###### Remark 4.1. A heuristic interpretation of the parameter $g(n)$ is that it functions as a choice of local bandwidth, i.e. a choice of larger values of $g(n)$ will increase the consideration of the ordering of values of the $X_{i}$ that are further apart in time. While one may define the local Mann-Kendall statistic of order $g(n)$ for any value of $g(n)$ less than $(n-1)$, its primary use throughout this section will be to test hypotheses of a lack of local monotone trend. In general, the two choices of $g:\mathbb{N}\to\mathbb{N}$ under consideration throughout the remainder of this section will be either $g(n)=M$ for all $n\in\mathbb{N}$, or $g$ being a nondecreasing function of $n$ such that $g(n)/n\to 0$ as $n\to\infty$. With the definition of the local Mann-Kendall statistic in hand, we may begin consideration of a permutation test based on $V_{n}$. 
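For reference, a direct implementation of Definition 4.1 might look as follows; this is our own illustrative sketch (a simple $O(ng)$ loop, not optimized), counting the sign comparisons within each forward window of length $g$.

```python
import numpy as np

def local_mann_kendall(x, g):
    """Local Mann-Kendall statistic of order g (Definition 4.1):
    V_n = (1 / (n g)) * sum_{i=1}^{n-g} sum_{j=i+1}^{i+g}
          (I{x_i < x_j} - I{x_i > x_j})."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    total = 0.0
    for i in range(n - g):
        window = x[i + 1 : i + g + 1]          # the g observations following x_i
        total += np.sum(window > x[i]) - np.sum(window < x[i])
    return total / (n * g)
```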
Under the additional assumption that the sequence $\\{X_{i}:i\in[n]\\}$ is i.i.d., the randomization hypothesis holds, and so the permutation test based on any test statistic will be exact in finite samples at the nominal level $\alpha$. However, in a weakly dependent setting, the randomization hypothesis does not hold, and so the permutation test based on $V_{n}$ may not be exact in finite samples or even asymptotically valid. In order to appropriately assess the validity of such a test, we provide the following result, which describes the limiting permutation distribution based on the test statistic $\sqrt{ng(n)}V_{n}$. ###### Theorem 4.1. Let $\\{X_{i}:i\in\mathbb{N}\\}$ be a sequence of random variables such that, for all $i\neq j$, $\mathbb{P}(X_{i}=X_{j})=0$. Let $g:\mathbb{N}\to\mathbb{N}$ be such that $g(n)=o\left(n^{1/7}\right)$. Let $\hat{R}_{n}$ be the permutation distribution, based on the test statistic $\sqrt{ng(n)}V_{n}$, with associated group of transformations $S_{n}$, the symmetric group of order $n$. We have that, as $n\to\infty$, $\sup_{t\in\mathbb{R}}\left|\hat{R}_{n}(t)-\Phi(t\sqrt{3})\right|\to 0\,\,,$ where $\Phi$ is the distribution of a standard Gaussian random variable. ###### Example 4.1. ($MA(2),\,g(n)\equiv 1$) Consider the setting in which $g(n)=1$ for all $n\in\mathbb{N}$, and, for all $i$, $X_{i}=\phi_{0}\epsilon_{i}+\phi_{1}\epsilon_{i-1}\,\,,$ where the $\epsilon_{i}$ are i.i.d. random variables with a symmetric continuous distribution $F$. We have that $V_{n}=\frac{1}{n}\sum_{i=1}^{n-1}\left(I\\{X_{i+1}>X_{i}\\}-I\\{X_{i+1}<X_{i}\\}\right)\,\,.$ Let $\displaystyle p_{1}$ $\displaystyle=\mathbb{P}\left(X_{3}>X_{2}>X_{1}\right)$ $\displaystyle p_{2}$ $\displaystyle=\mathbb{P}\left(X_{3}<X_{2}<X_{1}\right)\,\,,$ and let $r=2(p_{1}+p_{2})-1$. We have that $\displaystyle\text{Var}(V_{n})$ $\displaystyle=\frac{1}{n^{2}}\left(n-1+2\sum_{i=1}^{n-2}\text{Cov}\left(I\\{X_{i+2}>X_{i+1}\\}-I\\{X_{i+2}<X_{i+1}\\},\,I\\{X_{i+1}>X_{i}\\}-I\\{X_{i+1}<X_{i}\\}\right)\right)$ $\displaystyle=\frac{1}{n^{2}}\left(n-1+2r(n-2)\right)$ $\displaystyle=\frac{1+2r}{n}+o\left(\frac{1}{n}\right)\,\,.$ It follows that the limiting variance of $\sqrt{ng(n)}V_{n}$ is given by $\lim_{n\to\infty}\text{Var}\left(\sqrt{ng(n)}V_{n}\right)=1+2r\,\,.$ In the setting in which the $X_{i}$ are i.i.d. with marginal distribution $F$, we have that $p_{1}=p_{2}=1/6$, and so $r=-1/3$. However, since $r$ is, in general, not equal to $-1/3$ in the setting described above, this is not equal to the variance computed in Theorem 4.1. As illustrated by Example 4.1, a permutation test based on $\sqrt{ng(n)}V_{n}$ will not be asymptotically valid, since the limiting variance of the permutation distribution and the limiting variance of the test statistic will not be equal in general. We may however proceed as usual, and find an appropriate estimator of the limiting variance of $V_{n}$ which is asymptotically consistent, and which, under the action of a random permutation, converges to 1 in probability. ###### Lemma 4.1. Let $\\{X_{i},\,i\in[n]\\}$ be a stationary, $\alpha$-mixing sequence such that, for all $i\neq j$, $\mathbb{P}(X_{i}=X_{j})=0$. Suppose also that $\sum_{n\geq 1}\alpha_{X}(n)<\infty\,\,.$ Let $g(n)\equiv G\in\mathbb{N}$ be constant. For each $i\in[n]$, let $Y_{i}=\sum_{j=(i-G)\vee 1}^{i-1}\left(I\\{X_{i}>X_{j}\\}-I\\{X_{i}<X_{j}\\}\right)\,\,.$ Let $b_{n}\in\mathbb{N}$ be such that $b_{n}=o(\sqrt{n})$ and, as $n\to\infty$, $b_{n}\to\infty$.
Let $\hat{\sigma}_{n}^{2}=\frac{1}{n}\sum_{i=1}^{n}\left(Y_{i}-\bar{Y}_{n}\right)^{2}+\frac{2}{n}\sum_{j=1}^{b_{n}}\sum_{i=1}^{n-j}(Y_{i}-\bar{Y}_{n})(Y_{i+j}-\bar{Y}_{n})\,\,.$ (4.1) Let $\sigma^{2}=\text{Var}(Y_{G+1})+2\sum_{k\geq 1}\text{Cov}(Y_{G+1},\,Y_{G+k+1})\,\,.$ We have that $\sigma^{2}<\infty$, and, as $n\to\infty$, $\hat{\sigma}_{n}^{2}\overset{p}{\to}\sigma^{2}\,\,.$ With the result of Lemma 4.1 in hand, we may now proceed to studentize the test statistic. By an application of Slutsky’s theorem for randomization distributions, this studentization factor will have no effect on the limiting permutation distribution, but will result in the limiting distribution of the test statistic being asymptotically pivotal. Therefore, combining the previous results, we obtain the following. ###### Theorem 4.2. In the setting of Lemma 4.1, let $\mu=\mathbb{E}Y_{1+G}$. Let $\hat{\tau}_{n}^{2}=\frac{1}{G}\hat{\sigma}_{n}^{2}\,\,,$ and let $\nu=\mu/G$. We have the following: 1. (i) As $n\to\infty$, $\frac{\sqrt{nG}(V_{n}-\nu)}{\hat{\tau}_{n}}\overset{d}{\to}N(0,\,1)\,\,.$ 2. (ii) Let $\hat{R}_{n}$ be the permutation distribution, with associated group of transformations $S_{n}$, the symmetric group of order $n$, based on the test statistic $\sqrt{nG}V_{n}/\hat{\tau}_{n}$. We have that, as $n\to\infty$, $\sup_{t\in\mathbb{R}}\left|\hat{R}_{n}(t)-\Phi(t)\right|\overset{a.s.}{\to}0\,\,,$ where $\Phi$ is the distribution of a standard Gaussian random variable. ###### Proof. As in the proof of Lemma 4.1, let $Z_{i}=Y_{i+G}$, for $1\leq i\leq n-G$. Let $m=n-G$. Since the $Y_{i}$ are uniformly bounded, we have that $\displaystyle\sqrt{n}(\bar{Y}_{n}-\mu)$ $\displaystyle=\sqrt{m}(\bar{Z}_{m}-\mu)(1+o(1))+\frac{1}{\sqrt{n}}\sum_{i=1}^{G}Y_{i}$ $\displaystyle=\sqrt{m}(\bar{Z}_{m}-\mu)(1+o(1))+O\left(\frac{1}{\sqrt{n}}\right)\,\,.$ In particular, by Ibragimov’s central limit theorem (Ibragimov, 1962), Slutsky’s theorem, and the result of Lemma 4.1, we have that $\frac{\sqrt{n}\left(\bar{Y}_{n}-\mu\right)}{\hat{\sigma}_{n}}\overset{d}{\to}N(0,\,1)\,\,.$ Since $V_{n}=\frac{1}{g(n)}\bar{Y}_{n}\,\,,$ the result of (i) follows. Turning our attention to (ii), we observe that, since each $Y_{i}$ is a rank statistic, and, conditional on the data, under the action of $\Pi_{n}\sim\text{Unif}(S_{n})$, the ranks of the $X_{i}$ are uniformly distributed with probability 1, the permutation distribution $\hat{R}_{n}$ is exactly equal to the unconditional distribution of the test statistic $\sqrt{ng(n)}V_{n}/\hat{\tau}_{n}$ in the case when the $X_{i}$ are i.i.d. with a continuous marginal distribution $F$. In this case, since $\mu=0$, the result of (ii) follows from that of (i). ###### Remark 4.2. As a result of Theorem 4.2, under the conditions laid out therein, we have that a one-sided permutation test of the null hypothesis $H_{0,\,M}^{(l)}$, based on the test statistic $\sqrt{nM}V_{n}/\hat{\tau}_{n}$, where $V_{n}$ is the local Mann-Kendall statistic of order $M$, is asymptotically valid. ###### Remark 4.3. Let $M,\,r\in\mathbb{N}$ such that $M>r$.
By an entirely analogous argument to the one presented in this section, one may construct permutation tests of the null hypothesis $H_{0,\,M,\,r}^{(l)}:\ \frac{1}{(n-M)\binom{M}{r}}\sum_{i_{1}<i_{2}<\dots<i_{r}\leq i_{1}+M}\left(\mathbb{P}(X_{i_{1}}<\dots<X_{i_{r}})-\mathbb{P}(X_{i_{1}}>\dots>X_{i_{r}})\right)=0\,\,,$ using a studentized version of the test statistic $\frac{1}{(n-M)\binom{M}{r}}\sum_{i_{1}<i_{2}<\dots<i_{r}\leq i_{1}+M}\left(I(X_{i_{1}}<\dots<X_{i_{r}})-I(X_{i_{1}}>\dots>X_{i_{r}})\right)\,\,.$

## 5 Simulation results

In this section, we provide Monte Carlo simulations illustrating our results. Subsection 5.1 provides simulation results on the performance of the global Mann-Kendall permutation test, and Subsection 5.2 illustrates the performance of the local Mann-Kendall permutation test. In both subsections, we consider one-sided tests, i.e. those rejecting for large values of the test statistic, conducted at the nominal level $\alpha=0.05$. The simulation results confirm that the two permutation tests are valid in that, in large samples, the rejection probability under the null hypothesis is approximately equal to $\alpha$.

### 5.1 Testing for global trend

We begin with a comparison of the performance of the studentized global Mann-Kendall permutation test against the classical Mann-Kendall test. As a review, the classical Mann-Kendall test computes the statistic $U_{n}=U_{n}(X_{1},\,\dots,\,X_{n})={\binom{n}{2}}^{-1}\sum_{i<j}\left(I\\{X_{i}<X_{j}\\}-I\\{X_{i}>X_{j}\\}\right)\,\,.$ Under the null hypothesis that the sequence $\\{X_{n}:n\in\mathbb{N}\\}$ is i.i.d. and that $\mathbb{P}(X_{i}=X_{j})=0$ for all $i\neq j$, the statistic $U_{n}$ has a fixed distribution $G_{n}$, and so the test compares $U_{n}$ to the $(1-\alpha)$ quantile of $G_{n}$ in order to reject or not reject the null hypothesis. Note that, by the result of Theorem 3.1, the permutation distribution based on the unstudentized Mann-Kendall statistic converges weakly in probability to the same limiting distribution as the Mann-Kendall statistic under the additional assumption that the sequence is i.i.d. It follows that, for a fixed infinite-dimensional sequence $\\{X_{n}:n\in\mathbb{N}\\}\sim P$, the rejection probability of the classical Mann-Kendall test and the unstudentized global Mann-Kendall permutation test will be asymptotically equal as $n\to\infty$. On account of this, in the following simulations, we present a comparison of the studentized global Mann-Kendall permutation test and the classical Mann-Kendall test.

In these simulations, we consider the following two settings. First, in Table 5.1, we consider processes of the following form. Let $m\in\mathbb{N}$, and let $\\{Z_{n}:n\in\mathbb{N}\\}$ be i.i.d. standard Gaussian random variables. For each $n\in\mathbb{N}$, let $X_{n}=\prod_{j=0}^{m}Z_{n+j}\,\,.$ This sequence is $m$-dependent and stationary, and, for each $i\neq j$, $\mathbb{P}(X_{i}=X_{j})=0$. Therefore $\\{X_{n}:n\in\mathbb{N}\\}$ satisfies the conditions of Theorem 3.4, and so the corresponding permutation test is asymptotically valid at the nominal level $\alpha$. Several distributions other than the standard Gaussian distribution were considered for the distribution of the $Z_{i}$, but these were observed to have little impact on the rejection probabilities of either the studentized global Mann-Kendall permutation test or the classical Mann-Kendall test.
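For concreteness, the following is a minimal, illustrative sketch (in Python, assuming only numpy) of how one may generate the $m$-dependent Gaussian product process above and compute the classical Mann-Kendall statistic $U_{n}$. The function names are ours and are not part of any reference implementation.

```python
import numpy as np

def gaussian_product_sequence(n, m, rng):
    """Generate X_1,...,X_n with X_t = prod_{j=0}^{m} Z_{t+j},
    where the Z's are i.i.d. standard Gaussian; the resulting sequence is
    m-dependent, stationary, and has no ties almost surely."""
    z = rng.standard_normal(n + m)
    # X_t is the product of the m+1 consecutive Z's starting at index t
    return np.array([np.prod(z[t:t + m + 1]) for t in range(n)])

def mann_kendall_U(x):
    """Classical (global) Mann-Kendall statistic
    U_n = C(n,2)^{-1} * sum_{i<j} (1{X_i < X_j} - 1{X_i > X_j})."""
    n = len(x)
    diff = np.sign(x[None, :] - x[:, None])   # entry (i, j) is sign(X_j - X_i)
    upper = np.triu_indices(n, k=1)           # pairs with i < j
    return diff[upper].sum() / (n * (n - 1) / 2)

rng = np.random.default_rng(0)
x = gaussian_product_sequence(n=200, m=10, rng=rng)
print(mann_kendall_U(x))
```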
| $m$ | $n$ | 10 | 50 | 100 | 500 | 1000 |
|---|---|---|---|---|---|---|
| 0 | Stud. Perm. | 0.045 | 0.059 | 0.047 | 0.047 | 0.054 |
| | Classical M-K | 0.042 | 0.038 | 0.047 | 0.049 | 0.044 |
| 10 | Stud. Perm. | 0.054 | 0.061 | 0.062 | 0.057 | 0.055 |
| | Classical M-K | 0.059 | 0.061 | 0.076 | 0.036 | 0.055 |
| 20 | Stud. Perm. | 0.036 | 0.082 | 0.063 | 0.051 | 0.045 |
| | Classical M-K | 0.056 | 0.062 | 0.069 | 0.057 | 0.061 |
| 30 | Stud. Perm. | 0.056 | 0.085 | 0.077 | 0.052 | 0.046 |
| | Classical M-K | 0.066 | 0.083 | 0.071 | 0.052 | 0.060 |

Table 5.1: Monte Carlo simulation results for null rejection probabilities for tests of $H_{0}^{(g)}$, in an $m$-dependent Gaussian product setting.

We also consider more traditional autoregressive settings in Tables 5.2 and 5.3. Namely, we consider the following two examples. In Table 5.2, we consider the following processes: for $\\{\epsilon_{n}:n\in\mathbb{N}\\}$ a sequence of i.i.d. standard Gaussian random variables and some constant $\rho\in(-1,\,1)$, let $X_{1}\sim N(0,\,(1-\rho^{2})^{-1})$, independent of the $\epsilon_{i}$, and for each $n\geq 2$, let $X_{n}=\rho X_{n-1}+\epsilon_{n}\,\,,$ i.e. $\\{X_{n}:n\in\mathbb{N}\\}$ follows the unique distribution of the stationary $AR(1)$ process with standard Gaussian innovations.

| $\rho$ | $n$ | 10 | 50 | 100 | 500 | 1000 |
|---|---|---|---|---|---|---|
| -0.6 | Stud. Perm. | 0.047 | 0.192 | 0.039 | 0.074 | 0.043 |
| | Classical M-K | 0.004 | 0.002 | 0.002 | 0.003 | 0.000 |
| -0.2 | Stud. Perm. | 0.043 | 0.072 | 0.058 | 0.046 | 0.057 |
| | Classical M-K | 0.029 | 0.028 | 0.019 | 0.021 | 0.025 |
| 0.2 | Stud. Perm. | 0.030 | 0.053 | 0.046 | 0.042 | 0.058 |
| | Classical M-K | 0.081 | 0.081 | 0.084 | 0.085 | 0.099 |
| 0.6 | Stud. Perm. | 0.035 | 0.052 | 0.068 | 0.042 | 0.049 |
| | Classical M-K | 0.183 | 0.200 | 0.196 | 0.213 | 0.206 |

Table 5.2: Monte Carlo simulation results for null rejection probabilities for tests of $H_{0}^{(g)}$, in an $AR(1)$ setting.

Similarly, in Table 5.3, we consider processes of the following form: for $\rho\in(-1,\,1)$, let $\\{Z_{n}:n\in\mathbb{N}\\}$ and $\\{Z_{n}^{\prime}:n\in\mathbb{N}\\}$ be independent stationary $AR(1)$ processes with i.i.d. standard Gaussian innovations. For $n\in\mathbb{N}\cup\\{0\\}$, let $\displaystyle X_{2n+1}$ $\displaystyle=Z_{n}$ $\displaystyle X_{2n+2}$ $\displaystyle=Z_{n}^{\prime}\,\,.$ Note that we may also express $\\{X_{n}\\}$ as the unique stationary process satisfying the autoregressive equation, for $n\geq 3$, $X_{n}=\rho X_{n-2}+\eta_{n}\,\,,$ for $\\{\eta_{n}:n\in\mathbb{N}\\}$ a sequence of i.i.d. standard Gaussian random variables, i.e. the $X_{i}$ follow an $AR(2)$ process. As discussed in Example 3.2, the asymptotic rejection probability of the permutation test applied to such sequences is equal to the nominal level $\alpha$. For each of the three situations, 1,000 simulations were performed. Within each simulation, the permutation test was calculated by randomly sampling 1,000 permutations.

| $\rho$ | $n$ | 10 | 50 | 100 | 500 | 1000 |
|---|---|---|---|---|---|---|
| -0.6 | Stud. Perm. | 0.148 | 0.354 | 0.024 | 0.188 | 0.033 |
| | Classical M-K | 0.018 | 0.003 | 0.002 | 0.000 | 0.000 |
| -0.2 | Stud. Perm. | 0.074 | 0.092 | 0.059 | 0.052 | 0.039 |
| | Classical M-K | 0.025 | 0.023 | 0.034 | 0.029 | 0.026 |
| 0.2 | Stud. Perm. | 0.030 | 0.048 | 0.048 | 0.037 | 0.055 |
| | Classical M-K | 0.054 | 0.086 | 0.096 | 0.081 | 0.093 |
| 0.6 | Stud. Perm. | 0.009 | 0.104 | 0.060 | 0.059 | 0.067 |
| | Classical M-K | 0.074 | 0.179 | 0.192 | 0.195 | 0.192 |

Table 5.3: Monte Carlo simulation results for null rejection probabilities for tests of $H_{0}^{(g)}$, in an $AR(2)$ setting.
We observe that, in the $m$-dependent Gaussian product setting of Table 5.1, both the studentized global Mann-Kendall permutation test and the classical Mann-Kendall test perform comparably, both controlling the rejection probability at (close to) the nominal level $\alpha$. In contrast, while the studentized global Mann-Kendall permutation test also exhibits Type 1 error control at the nominal level $\alpha$ in both the $AR(1)$ and $AR(2)$ settings of Tables 5.2 and 5.3, respectively, we observe the following phenomena. For $\rho>0$, the rejection probability of the classical Mann-Kendall test is significantly greater than $\alpha$, i.e. we do not have Type 1 error control. In addition, for $\rho<0$, the performance of the classical Mann-Kendall test is also unsatisfactory; namely, the rejection probabilities obtained are significantly below the nominal level $\alpha$, i.e. the test is overly conservative. These issues may be explained as follows: since the limiting variance $\sigma^{2}$ of the Mann-Kendall statistic is given by (3.2), for positively (negatively) autocorrelated sequences we have that $\sigma^{2}$ is greater than (less than) the limiting variance of the Mann-Kendall statistic under the additional assumption that the sequence is i.i.d. Heuristically, we have the interpretation that the classical Mann-Kendall test “confuses” positive and negative autocorrelation with positive and negative trend, respectively, whereas the studentized global Mann-Kendall test does not succumb to this issue.

There are several computational choices to be made when applying the permutation testing framework in practice. By the results of Theorems 3.3 and 3.4, for large values of $n$, the estimate $\hat{\sigma}_{n}^{2}$, as defined in (3.5), will be strictly positive with high probability. However, for smaller values of $n$, it may be the case that a numerically negative value of $\hat{\sigma}_{n}^{2}$ is observed, either when computing the test statistic or the permutation distribution. A trivial solution to this issue is to truncate the estimate at some sufficiently small fixed lower bound $\epsilon>0$. Note that, for appropriately small choices of $\epsilon$, i.e. $\epsilon<\sigma^{2}$, the results of Theorems 3.3 and 3.4 still hold, i.e. inference based on this choice of studentization is still asymptotically valid. In practice, however, the suitability of a choice of $\epsilon$ for a particular numerical application is affected by the distribution of the $X_{i}$, and, in general, the estimated variance $\hat{\sigma}_{n}^{2}$ is bounded away from zero with high probability, on account of the additive constant $4/9$ in the expression (3.5). For the above simulations, a constant value of $\epsilon=10^{-3}$ was used. A further choice is that of the truncation sequence $\\{b_{n}:\,n\in\mathbb{N}\\}$ used in the definition of $\hat{\sigma}_{n}^{2}$. Any sequence $\\{b_{n}\\}$ such that, as $n\to\infty$, $b_{n}\to\infty$ and $b_{n}=o\left(\sqrt{n}\right)$ is theoretically justified by Theorem 3.4, although, in a specific setting, some choices of $\\{b_{n}\\}$ will lead to more numerical stability than others. In practice, several choices of $\\{b_{n}\\}$ were implemented, but were found to make little difference to the rejection probabilities observed. In the simulations above, $\\{b_{n}\\}$ was taken to be $b_{n}=[n^{1/3}]$, where $[x]$ denotes the integer part of $x$.
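To make the computational choices above concrete, the following is a minimal Python sketch (assuming only numpy) of the studentized permutation test machinery. For definiteness it is instantiated with the local Mann-Kendall statistic of Section 4, whose studentization (4.1) is reproduced above; the global test of this subsection is analogous, with $U_{n}$ in place of $V_{n}$ and the estimator (3.5), which is not reproduced here, in place of (4.1). The function names are ours, and the numerical choices ($\epsilon=10^{-3}$, $b_{n}=[n^{1/3}]$, 1,000 random permutations) follow those described in the text.

```python
import numpy as np

def local_signs(x, G):
    """Y_i = sum_{j=(i-G) v 1}^{i-1} (1{X_i > X_j} - 1{X_i < X_j}), as in Lemma 4.1."""
    n = len(x)
    y = np.zeros(n)
    for i in range(n):
        lo = max(i - G, 0)
        y[i] = np.sign(x[i] - x[lo:i]).sum()
    return y

def studentized_local_mk(x, G, eps=1e-3):
    """Studentized local Mann-Kendall statistic sqrt(nG) V_n / tau_hat_n,
    using the long-run variance estimate (4.1) with b_n = floor(n^(1/3)),
    and truncating the variance estimate below at eps."""
    n = len(x)
    y = local_signs(x, G)
    ybar = y.mean()
    b_n = int(np.floor(n ** (1.0 / 3.0)))
    sigma2 = np.mean((y - ybar) ** 2)
    for j in range(1, b_n + 1):
        sigma2 += 2.0 / n * np.sum((y[: n - j] - ybar) * (y[j:] - ybar))
    tau2 = max(sigma2 / G, eps)
    v_n = ybar / G
    return np.sqrt(n * G) * v_n / np.sqrt(tau2)

def permutation_pvalue(x, G, n_perm=1000, rng=None):
    """One-sided permutation p-value: recompute the studentized statistic on
    randomly permuted data and compare with its value on the original ordering."""
    rng = np.random.default_rng() if rng is None else rng
    t_obs = studentized_local_mk(x, G)
    t_perm = np.array([studentized_local_mk(rng.permutation(x), G)
                       for _ in range(n_perm)])
    # Include the observed value in the count, as is standard for randomization tests.
    return (1 + np.sum(t_perm >= t_obs)) / (n_perm + 1)
```

Equivalently, one may reject when the observed statistic exceeds the $(1-\alpha)$ quantile of the sampled permutation statistics; the two formulations agree up to the usual randomization-test conventions.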
### 5.2 Tests of local trend

We now turn our attention to simulations involving the local Mann-Kendall permutation test of the hypothesis $H_{0,\,M}^{(l)}$. Under the assumption that the sequence $\\{X_{i}:i\in[n]\\}$ is i.i.d., the randomization hypothesis holds, and so the unstudentized local Mann-Kendall permutation test will be exact in finite samples and asymptotically valid. However, in general, the randomization hypothesis does not hold under $H_{0,\,M}^{(l)}$, and so the unstudentized local Mann-Kendall permutation test will not be exact or even asymptotically valid. By the result of Theorem 4.2, however, the studentized local Mann-Kendall permutation test will be asymptotically valid at the nominal level $\alpha$. In order to illustrate this behavior, we provide a comparison of the simulated rejection probabilities of the studentized and unstudentized local Mann-Kendall permutation tests in two different settings. In both of the settings described below, we consider the choice of $M=5$. For both of the following situations, 1,000 simulations were performed. Within each simulation, the permutation test was calculated by randomly sampling 1,000 permutations. In Table 5.4, we compare the simulated rejection probabilities of these two tests in the Gaussian product $m$-dependent setting described in Subsection 5.1.

| $m$ | $n$ | 20 | 50 | 100 | 500 | 1000 |
|---|---|---|---|---|---|---|
| 0 | Stud. Perm. | 0.049 | 0.047 | 0.051 | 0.046 | 0.058 |
| | Unstud. Perm. | 0.060 | 0.062 | 0.061 | 0.057 | 0.058 |
| 1 | Stud. Perm. | 0.050 | 0.066 | 0.050 | 0.023 | 0.032 |
| | Unstud. Perm. | 0.099 | 0.096 | 0.098 | 0.091 | 0.101 |
| 2 | Stud. Perm. | 0.057 | 0.078 | 0.065 | 0.009 | 0.021 |
| | Unstud. Perm. | 0.115 | 0.129 | 0.131 | 0.164 | 0.138 |
| 3 | Stud. Perm. | 0.045 | 0.097 | 0.064 | 0.001 | 0.019 |
| | Unstud. Perm. | 0.141 | 0.160 | 0.167 | 0.150 | 0.180 |

Table 5.4: Monte Carlo simulation results for null rejection probabilities for tests of $H_{0,\,M}^{(l)}$, for $M=5$, in an $m$-dependent Gaussian product setting.

We observe that, in Table 5.4, the studentized permutation test controls the rejection probability at the nominal level $\alpha$. In contrast, the unstudentized permutation test only has a rejection probability approximately equal to $\alpha$ in the case $m=0$, i.e. when the $X_{i}$ are i.i.d. and the randomization hypothesis holds. For $m>0$, the unstudentized permutation test visibly does not control the rejection probability at the nominal level $\alpha$. In Table 5.5, we provide a comparison of the rejection probabilities of the studentized and unstudentized local Mann-Kendall permutation tests in the $AR(1)$ setting of Subsection 5.1.

| $\rho$ | $n$ | 20 | 50 | 100 | 500 | 1000 |
|---|---|---|---|---|---|---|
| -0.6 | Stud. Perm. | 0.048 | 0.182 | 0.050 | 0.055 | 0.044 |
| | Unstud. Perm. | 0.021 | 0.028 | 0.036 | 0.049 | 0.042 |
| -0.2 | Stud. Perm. | 0.062 | 0.070 | 0.053 | 0.038 | 0.045 |
| | Unstud. Perm. | 0.049 | 0.052 | 0.039 | 0.047 | 0.052 |
| 0.2 | Stud. Perm. | 0.037 | 0.029 | 0.033 | 0.078 | 0.061 |
| | Unstud. Perm. | 0.085 | 0.072 | 0.071 | 0.087 | 0.066 |
| 0.6 | Stud. Perm. | 0.015 | 0.024 | 0.007 | 0.009 | 0.030 |
| | Unstud. Perm. | 0.177 | 0.145 | 0.125 | 0.111 | 0.114 |

Table 5.5: Monte Carlo simulation results for null rejection probabilities for tests of $H_{0,\,M}^{(l)}$, for $M=5$, in an $AR(1)$ setting.
We observe that, while the unstudentized permutation test somewhat surprisingly provides Type 1 error control at the nominal level $\alpha$ for negative values of $\rho$, it does not do so for positive values of $\rho$. For all values of $\rho$, the studentized permutation test has rejection probabilities at or below the nominal level $\alpha$ in large samples. As in Subsection 5.1, there are several computational choices to be made in practice. In particular, on account of the same numerical issues involving the studentization factor $\hat{\tau}_{n}^{2}$, we truncate the variance estimate at the lower bound $\epsilon=10^{-3}$. Similarly, we must choose a truncation sequence $\\{b_{n}:n\in\mathbb{N}\\}$ to be used in the definition of $\hat{\tau}_{n}^{2}$. In the above simulations, the choice $b_{n}=[n^{1/3}]$ was used.

## 6 Conclusions

When the fundamental assumption of exchangeability does not necessarily hold, permutation tests are invalid unless strict conditions on underlying parameters of the problem are satisfied. For instance, the permutation test of $H_{0}^{(g)}$ based on the classical Mann-Kendall statistic is asymptotically valid only when $\sigma^{2}$, as defined in (3.2), is equal to $4/9$. Hence rejection of the null must be interpreted correctly: rejection of the null with this permutation test does not necessarily imply that the sequence truly exhibits monotone trend, in the sense that the quantity $\lim_{n\to\infty}\binom{n}{2}^{-1}\sum_{i<j}(\mathbb{P}(X_{i}<X_{j})-\mathbb{P}(X_{i}>X_{j}))$ may be equal to zero while the Mann-Kendall test still rejects the null hypothesis. We provide a testing procedure that allows one to obtain asymptotic rejection probability $\alpha$ in a permutation test setting. A significant advantage of this test is that it retains the finite-sample exactness of the Mann-Kendall test under the assumption of i.i.d. observations, while also achieving asymptotic level $\alpha$ in a much wider range of settings than the aforementioned tests. This test also retains the convenient property of the classical Mann-Kendall test that the permutation distribution is fixed for a given sample size $n$. An analogous permutation testing procedure also permits asymptotically valid inference for the newly-introduced notion of local monotone trend. Correct implementation of a permutation test is crucial if one is interested in confirmatory inference via hypothesis testing; indeed, proper control of Type 1, 2 and 3 errors can be obtained for tests of global or local monotone trend by basing inference on test statistics which are asymptotically pivotal. In this paper, we have defined specific notions of a lack of global monotone trend and a lack of local monotone trend of order $M$, constructed asymptotically valid permutation testing procedures for these null hypotheses, and thereby provided a framework for tests of both local and global monotone trend. Future work will expand on the methods presented herein, in order to provide analogous permutation testing procedures in the context of other commonly-used tests for monotone trend.

## References

* Bradley, (2005) Bradley, R. C. (2005). Basic properties of strong mixing conditions. A survey and some open questions. Probab. Surveys, 2:107–144.
* Dietz and Killeen, (1981) Dietz, E. J. and Killeen, T. J. (1981). A nonparametric multivariate test for monotone trend with pharmaceutical applications. Journal of the American Statistical Association, 76(373):169–174.
* Freedman, (2011) Freedman, D. (2011). Markov Chains.
Springer, New York, NY. * Hamed, (2008) Hamed, K. H. (2008). Trend detection in hydrologic data: The Mann–Kendall trend test under the scaling hypothesis. Journal of Hydrology, 349(3):350–363. * Han and Qian, (2018) Han, F. and Qian, T. (2018). On inference validity of weighted U-statistics under data heterogeneity. Electronic Journal of Statistics, 12(2):2637 – 2708. * Hoeffding, (1948) Hoeffding, W. (1948). A class of statistics with asymptotically normal distribution. The Annals of Mathematical Statistics, 19(3):293 – 325. * Ibragimov, (1962) Ibragimov, I. A. (1962). Some limit theorems for stationary processes. Theory of Probability & Its Applications, 7(4):349–382. * Kendall, (1990) Kendall, M. G. (1990). Rank Correlation Methods. Charles Griffin Book. Oxford University Press, London, England, 5 edition. * Lehmann and Romano, (2022) Lehmann, E. L. and Romano, J. P. (2022). Testing Statistical Hypotheses. Springer texts in statistics. Springer Nature, 4 edition. * Mann, (1945) Mann, H. B. (1945). Nonparametric tests against trend. Econometrica, 13(3):245–259. * Mokkadem, (1988) Mokkadem, A. (1988). Mixing properties of ARMA processes. Stochastic Processes and their Applications, 29(2):309 – 315. * Neumann, (2013) Neumann, M. H. (2013). A central limit theorem for triangular arrays of weakly dependent random variables, with applications in statistics. ESAIM: PS, 17:120–134. * Romano and Tirlea, (2022) Romano, J. P. and Tirlea, M. A. (2022). Permutation testing for dependence in time series. Journal of Time Series Analysis, 43:781 – 807. * Romano and Tirlea, (2024) Romano, J. P. and Tirlea, M. A. (2024). Least squares-based permutation tests in time series. Technical Report, Department of Statistics, Stanford University. * Volkonskii and Rozanov, (1961) Volkonskii, V. A. and Rozanov, Y. A. (1961). Some limit theorems for random functions. ii. Theory of Probability & Its Applications, 6(2):186–198. * Wang et al., (2020) Wang, F., Shao, W., Yu, H., Kan, G., He, X., Zhang, D., Ren, M., and Wang, G. (2020). Re-evaluation of the power of the Mann-Kendall test for detecting monotonic trends in hydrometeorological time series. Frontiers in Earth Science, 8. * Yue et al., (2002) Yue, S., Pilon, P., Phinney, B., and Cavadias, G. (2002). The influence of autocorrelation on the ability to detect trend in hydrological series. Hydrological Processes, 16(9):1807–1829. * Zhao and Woodroofe, (2012) Zhao, O. and Woodroofe, M. (2012). Estimating a monotone trend. Statistica Sinica, 22(1):359–378.
$\max(\mathcal{W}(P,A({\bf x})),\mathcal{W}(Q^{\prime\prime},A({\bf x}^{\prime})))\geq\frac{1}{4C\varepsilon n}\left(q_{1-\frac{1}{C\varepsilon n}}-q_{\frac{1}{C\varepsilon n}}\right)+\frac{1}{4}\mathcal{W}(P,P|_{q_{\frac{1}{C\varepsilon n}},q_{1-\frac{1}{C\varepsilon n}}}).$

We start with some notation. For any distribution $P$ with a density, let $f_{P}$ denote its density function. Throughout this section, we will use $q_{\alpha}$ to represent the $\alpha$-quantile of distribution $P$. Let $L(P)$ be the ‘starting point’ of distribution $P$ (defined as $\inf_{t\in\mathbb{R}}\\{t:F_{P}(t)>0\\}$ if the infimum exists, and $-\infty$ otherwise).

Next, we describe some results on differentially private testing that we will use. We say that a testing algorithm $A_{test}$ distinguishes two distributions $P$ and $Q$ with $n$ samples if, given the promise that a dataset of size $n$ is drawn from either $P^{n}$ or $Q^{n}$, with probability at least $\frac{2}{3}$ it outputs $P$ if the dataset was drawn from $P^{n}$ and $Q$ if it was drawn from $Q^{n}$. We now state a theorem lower bounding the sample complexity of differentially private hypothesis testing.

###### Theorem 6.5 ([CKM+19, Theorem 1.2]).

Fix $n\in\mathbb{N},\varepsilon>0$. For every pair of distributions $P,Q$ over $\mathbb{R}$, if there exists an $\varepsilon$-DP testing algorithm $A_{test}$ that distinguishes $P$ and $Q$ with $n$ samples, then $n=\Omega\left(\frac{1}{\varepsilon\tau(P,Q)+(1-\tau(P,Q))H^{2}(P^{\prime},Q^{\prime})}\right),$ where $\tau(P,Q)=\max\Big{\\{}\int_{\mathbb{R}}\max\\{e^{\varepsilon}f_{P}(t)-f_{Q}(t),0\\}dt,\int_{\mathbb{R}}\max\\{e^{\varepsilon}f_{Q}(t)-f_{P}(t),0\\}dt\Big{\\}},$ and $H^{2}(\cdot,\cdot)$ is the squared Hellinger distance between $P^{\prime}=\frac{\min(e^{\varepsilon}Q,P)}{1-\tau(P,Q)}$, and $Q^{\prime}=\frac{\min(e^{\varepsilon^{\prime}}P,Q)}{1-\tau(P,Q)}$, where $0\leq\varepsilon^{\prime}\leq\varepsilon$ is such that if $\tau(P,Q)=\int_{\mathbb{R}}\max\\{f_{P}(t)-e^{\varepsilon}f_{Q}(t),0\\}dt$, then $\varepsilon^{\prime}$ is the maximum value such that $\tau(P,Q)=\int_{\mathbb{R}}\max\\{f_{Q}(t)-e^{\varepsilon^{\prime}}f_{P}(t),0\\}dt,$ else $\varepsilon^{\prime}$ is the maximum value such that $\tau(P,Q)=\int_{\mathbb{R}}\max\\{f_{P}(t)-e^{\varepsilon^{\prime}}f_{Q}(t),0\\}dt.$ (Footnote: The same bounds, and hence all our results in this subsection, can be extended to $(\varepsilon,\delta)$-DP (with $\delta\leq\varepsilon$) by using an equivalence of pure and approximate DP for identity and closeness testing [ASZ17, Lemma 5].)

We are now ready to start proving our main theorem.

###### Proof.

(of Theorem 6.4) The idea is to construct $Q$ from $P$ by moving mass from the leftmost quantiles to the rightmost quantile. We do this in such a way that $Q$ is statistically close enough to $P$ that the two distributions cannot be distinguished with $n$ samples, but is also far from $P$ in Wasserstein distance. This produces a lower bound of $(1/2)\mathcal{W}(P,Q)$ on how well an algorithm can simultaneously estimate $P$ and $Q$: if there were an algorithm that produced good estimates of both $P$ and $Q$ in Wasserstein distance with $n$ samples, then we could tell the two distributions apart, which would give a contradiction. Let $k$ be a quantity to be set later. Formally, we define $Q$ as the distribution with the following density function.
$f_{Q}(t)=\left\\{\begin{array}[]{lr}\frac{1}{2}f_{P}(t),&\text{for }t<q_{1/k}\\\ f_{P}(t),&\text{for }q_{1/k}\leq t<q_{1-\frac{1}{k}}\\\ \frac{3}{2}f_{P}(t)&\text{for }q_{1-\frac{1}{k}}\leq t\end{array}\right\\}$ Note that by the definition of $Q$, we have that $D_{\infty}(P,Q)\leq 2$. We will prove that the sample complexity of telling apart $P$ and $Q$ under $(\varepsilon,\delta)$-DP is $\Omega(k/\varepsilon)$, using known results on hypothesis testing. Then, we will argue that the Wasserstein distance between $P$ and $Q$ is sufficiently large. Setting $k$ appropriately will complete the proof. Define $SC_{\varepsilon,\delta}(P,Q)$ to be the smallest $n$ such that there exists an $(\varepsilon,\delta)$-DP testing algorithm that distinguishes $P$ and $Q$; called the _sample complexity_ of privately distinguishing $P$ and $Q$. ###### Lemma 6.6. $SC_{\varepsilon,\delta}(P,Q)=\Omega(k/\varepsilon).$ The proof of this lemma is in Appendix F. We next argue that $P$ and $Q$ are sufficiently far away in Wasserstein distance. ###### Lemma 6.7. $\mathcal{W}(P,Q)\geq\frac{1}{2k}(q_{1-\frac{1}{k}}-q_{1/k})+\frac{1}{2}\mathcal{W}(P,P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})$. The proof of this lemma is also in Appendix F. Finally, we are ready to prove the theorem. Assume that with probability larger than $0.75$ over the draw of two datasets ${\bf x}\sim P^{n}$, ${\bf x}^{\prime}\sim Q^{n}$, and the randomness used by invocations of algorithm $A$ we have that $\max(\mathcal{W}(P,A({\bf x})),\mathcal{W}(Q,A({\bf x}^{\prime}))<\frac{1}{2}\mathcal{W}(P,Q)$. Then, given a dataset ${\bf x}^{\prime\prime}$ of size $n$, we can perform the following test: run the differentially private algorithm $A$ on the dataset ${\bf x}^{\prime\prime}$ and compute $\mathcal{W}(P,A({\bf x}^{\prime\prime}))$ and $\mathcal{W}(Q,A({\bf x}^{\prime\prime}))$ and output the distribution with lower distance. Then, note that $\mathcal{W}(P,Q)\leq\mathcal{W}(P,A({\bf x}^{\prime\prime}))+\mathcal{W}(Q,A({\bf x}^{\prime\prime}))$ which implies that with probability at least $0.75$, $\mathcal{W}(Q,A({\bf x}^{\prime\prime}))>\frac{1}{2}\mathcal{W}(P,Q)$ if the dataset ${\bf x}^{\prime\prime}$ was sampled from $P^{n}$ (by the accuracy guarantee). A similar argument shows that with probability at least $0.75$, $\mathcal{W}(P,A({\bf x}^{\prime\prime}))>\frac{1}{2}\mathcal{W}(P,Q)$ if the dataset ${\bf x}^{\prime\prime}$ was sampled from $Q^{n}$. Hence, with $n$ samples we have defined a test that distinguishes $P$ and $Q$. However, for $k=C\varepsilon n$ for some constant $C$, by Lemma 6.6 we get that any differentially private test distinguishing $P$ and $Q$ requires more than $n$ samples, which is a contradiction. Hence, with probability at least $0.25$ over the draw of two datasets ${\bf x}\sim P^{n}$, ${\bf x}^{\prime}\sim Q^{n}$, and the randomness used by invocations of algorithm $A$ we have that $\max(\mathcal{W}(P,A({\bf x})),\mathcal{W}(Q,A({\bf x}^{\prime}))\geq\frac{1}{2}\mathcal{W}(P,Q)\geq\frac{1}{4C\varepsilon n}(q_{1-\frac{1}{C\varepsilon n}}-q_{1/C\varepsilon n})+\frac{1}{4}\mathcal{W}(P,P|_{q_{\frac{1}{C\varepsilon n}},q_{1-\frac{1}{C\varepsilon n}}})$ ,where the last inequality is by invoking Lemma 6.7 with $k=C\varepsilon n$. ∎ #### 6.1.2 Empirical Term In this section, we prove the following result. ###### Theorem 6.8. Fix sufficiently large natural numbers $n,k>0$ and let $C,C^{\prime}>0$ be sufficiently small constants. For all algorithms $A:\mathbb{R}^{n}\to\Delta_{\mathbb{R}}$, the following holds. 
For all continuous distributions $P$ over $\mathbb{R}$ with a density and with bounded expectation, there exists another distribution $Q$ (with $D_{\infty}(P,Q)\leq\ln 2$) that is indistinguishable from $P$ given $O(n)$ samples, such that with probability at least $0.25$ over the draws ${\bf x}\sim P^{n}$, ${\bf x}^{\prime}\sim Q^{n}$, the following holds. $\max(\mathcal{W}(P,A({\bf x})),\mathcal{W}(Q,A({\bf x}^{\prime})))\geq\frac{C^{\prime}}{\sqrt{\log n}}\mathbb{E}_{{\bf x}^{\prime\prime}\sim P^{n}}\left[\mathcal{W}\left(P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}},\hat{P}_{n}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}}\right)\right],$ where $q_{\alpha}$ is the $\alpha$-quantile of $P$.

Before going into the proof, we state the following result on the sample complexity of testing. This is a folklore result; for a proof of the lower bound see [BY02], and for the upper bound see [Can17].

###### Theorem 6.9.

Fix $n\in\mathbb{N},\varepsilon>0$. For every pair of distributions $P,Q$ over $\mathbb{R}$, if there exists a testing algorithm $A_{test}$ that distinguishes $P$ and $Q$ with $n$ samples, then $n=\Omega\left(\frac{1}{H^{2}(P,Q)}\right),$ where $H^{2}(\cdot,\cdot)$ represents the squared Hellinger distance between $P$ and $Q$.

Throughout the proof, we will use $q_{\alpha}$ to represent the $\alpha$-quantile of distribution $P$.

###### Proof of Theorem 6.8.

$Q$ is constructed by adding progressively more mass to $P$ up until $q_{1/2}$ and subtracting proportionate amounts of mass from $P$ afterwards. Intuitively, this is done in such a way that to ‘change’ $P$ to $Q$, for all $i\geq 2$ one has to move roughly $\min\\{\frac{1}{\sqrt{2^{i}n}},\frac{1}{2^{i}}\\}$ mass from $q_{1/2^{i}}$ to $q_{1-1/2^{i}}$. This ensures that the Wasserstein distance between $P$ and $Q$ is comparable to the expected Wasserstein distance between $P$ and its empirical distribution on $n$ samples $\hat{P}_{n}$. This is carefully done to ensure that $P$ is indistinguishable from $Q$. Formally, consider $i$ in the range $[2,\log n-1)$. For all $t\in(q_{1/2^{i}},q_{1/2^{i-1}}]$, we set $f_{Q}(t)=f_{P}(t)\left[1+\sqrt{\frac{2^{i}}{n}}\right]$. For all $t\in(q_{1-1/2^{i-1}},q_{1-1/2^{i}}]$, we set $f_{Q}(t)=f_{P}(t)\left[1-\sqrt{\frac{2^{i}}{n}}\right]$. Next, consider $i$ in the range $[\log n,\infty)$. For all $t\in(q_{1/2^{i}},q_{1/2^{i-1}}]$, we set $f_{Q}(t)=f_{P}(t)\left[1+\frac{1}{2}\right]$. For all $t\in(q_{1-1/2^{i-1}},q_{1-1/2^{i}}]$, we set $f_{Q}(t)=f_{P}(t)\left[1-\frac{1}{2}\right]$. Note that $P$ has bounded expectation by assumption, and hence, so does $Q$. Additionally, note that $D_{\infty}(P,Q)\leq\ln 2$.

There are two key considerations balanced in the design of $Q$. On one hand, we need $Q$ to be indistinguishable from $P$ given $\tilde{O}(n)$ samples. On the other hand, we need $Q$ to be sufficiently far away from $P$ in Wasserstein distance. This ensures that, given an accurate algorithm for estimating the density of the distribution (in Wasserstein distance) using $\tilde{O}(n)$ samples from it, we can design a test distinguishing $P$ and $Q$ with that many samples, thereby contradicting their indistinguishability. Detailed proofs of the claims below can be found in Appendix F. First, we show that $P$ is indistinguishable from $Q$.

###### Lemma 6.10.

$KL(P,Q)=O(\log n/n).$

Next, we establish a lower bound on the Wasserstein distance between $P$ and $Q$.

###### Lemma 6.11.
$\mathcal{W}(P,Q)\geq\frac{1}{4}\left[\sum_{j=2}^{\log n-1}\frac{1}{\sqrt{2^{j}n}}\left[q_{1-1/2^{j}}-q_{1/2^{j}}\right]+\sum_{j=\log n}^{\infty}\frac{1}{2^{j}}\left[q_{1-1/2^{j}}-q_{1/2^{j}}\right]\right].$ Next, we upper bound the expected Wasserstein distance between the distribution $P$ and its empirical distribution on $n$ samples. ###### Lemma 6.12. $\mathbb{E}[\mathcal{W}(P,\hat{P}_{n})]\leq 8\left[\sum_{i=2}^{\log n-1}\frac{1}{\sqrt{2^{i}n}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]+\sum_{i=\log n}^{\infty}\frac{1}{2^{i}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]\right]$ We now prove a simple claim regarding restrictions. ###### Claim 6.13 (Restrictions preserve Wasserstein distance). For all datasets ${\bf x}$, and any natural number $k>1$ we have that $\mathcal{W}(P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}},\hat{P}_{n}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})\leq\mathcal{W}(P,\hat{P}_{n}).$ Finally, we are ready to put the above lemmas together to prove Theorem 6.8. Fix $n^{\prime}=\frac{n}{C\log n}$. Assume, for sake of contradiction, that with probability larger than $0.75$ over the draw of two datasets ${\bf x}\sim P^{n^{\prime}}$, ${\bf x}^{\prime}\sim Q^{n^{\prime}}$, and the randomness used by invocations of algorithm $A$ we have that $\max(\mathcal{W}(P,A({\bf x})),\mathcal{W}(Q,A({\bf x}^{\prime}))\leq\frac{1}{2}W_{1}(P,Q)$. Then, given a dataset ${\bf x^{\prime\prime}}$ of size $n^{\prime}$, we perform the following test: run the differentially private algorithm $A$ on the dataset ${\bf x}^{\prime\prime}$ and compute $\mathcal{W}(P,A({\bf x}^{\prime\prime}))$ and $\mathcal{W}(Q,A({\bf x}^{\prime\prime}))$ and output the distribution with lower distance. Then, note that $\mathcal{W}(P,Q)\leq\mathcal{W}(P,A({\bf x}^{\prime\prime}))+\mathcal{W}(Q,A({\bf x}^{\prime\prime}))$ which implies that with probability at least $0.75$, $\mathcal{W}(Q,A({\bf x}^{\prime\prime}))\geq\frac{1}{2}\mathcal{W}(P,Q)$ if ${\bf x}^{\prime\prime}\sim P^{n^{\prime}}$ (by the accuracy guarantee). A similar argument shows that with probability at least $0.75$, $\mathcal{W}(P,A({\bf x}^{\prime\prime}))\geq\frac{1}{2}\mathcal{W}(P,Q)$ if ${\bf x}^{\prime\prime}\sim Q^{n^{\prime}}$. Hence, with $n^{\prime}$ samples we have defined a test that distinguishes $P$ and $Q$. However, by Lemma 6.10 bounding the $KL$ divergence between $P$ and $Q$, Theorem 6.9 on sample complexity lower bounds for testing, and Lemma A.4 on the relationship between KL and Hellinger distance, we get that any statistical test distinguishing $P$ and $Q$ requires more than $n^{\prime}$ samples, which is a contradiction. 
Hence, with probability at least $0.25$ over the draw of two datasets ${\bf x}\sim P^{n^{\prime}}$, ${\bf x}^{\prime}\sim Q^{n^{\prime}}$, and the randomness used by invocations of algorithm $A$ we must have that $\max(\mathcal{W}(P,A({\bf x})),\mathcal{W}(Q,A({\bf x^{\prime}}))\geq\frac{1}{2}\mathcal{W}(P,Q).$ (9) Next, note that by Lemma 6.12 (with value $n^{\prime}$), we have that $\displaystyle\mathbb{E}[\mathcal{W}(P,\hat{P}_{n^{\prime}})]$ $\displaystyle\leq 8\left[\sum_{i=2}^{\log n^{\prime}-1}\frac{1}{\sqrt{2^{i}n^{\prime}}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]+\sum_{i=\log n^{\prime}}^{\infty}\frac{1}{2^{i}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]\right]$ $\displaystyle=8\left[\sum_{i=2}^{\log\frac{n}{C\log n}-1}\frac{\sqrt{C\log n}}{\sqrt{2^{i}n}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]+\sum_{i=\log\frac{n}{C\log n}}^{\infty}\frac{1}{2^{i}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]\right]$ $\displaystyle=8\Bigg{[}\sqrt{C\log n}\sum_{i=2}^{\log n-\log(C\log n)-1}\frac{1}{\sqrt{2^{i}n}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]+\sum_{i=\log n-\log(C\log n)}^{\log n-1}\frac{1}{2^{i}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]$ $\displaystyle+\sum_{i=\log n}^{\infty}\frac{1}{2^{i}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]\Bigg{]}$ Analyzing the middle term in the above sum, we have that $\displaystyle\sum_{i=\log n-\log(C\log n)}^{\log n-1}\frac{1}{2^{i}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]$ $\displaystyle\leq\sum_{i=\log n-\log(C\log n)}^{\log n-1}\frac{1}{\sqrt{2^{i}}}\frac{1}{\sqrt{2^{\log n-\log(C\log n)}}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]$ $\displaystyle\leq\sum_{i=\log n-\log(C\log n)}^{\log n-1}\frac{1}{\sqrt{2^{i}}}\frac{\sqrt{C\log n}}{\sqrt{n}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]$ $\displaystyle=\sqrt{C\log n}\sum_{i=\log n-\log(C\log n)}^{\log n-1}\frac{1}{\sqrt{2^{i}n}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]$ Substituting this back in the previous sum, we have that $\displaystyle\mathbb{E}[\mathcal{W}(P,\hat{P}_{n^{\prime}})]$ $\displaystyle\leq 8\Bigg{[}\sqrt{C\log n}\sum_{i=2}^{\log n-\log(C\log n)-1}\frac{1}{\sqrt{2^{i}n}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]$ $\displaystyle\hskip 36.135pt+\sqrt{C\log n}\sum_{i=\log n-\log(C\log n)}^{\log n-1}\frac{1}{\sqrt{2^{i}n}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]+\sum_{i=\log n}^{\infty}\frac{1}{2^{i}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]\Bigg{]}$ $\displaystyle\leq 8\sqrt{C\log n}\Bigg{[}\sum_{i=2}^{\log n-1}\frac{1}{\sqrt{2^{i}n}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]+\sum_{i=\log n}^{\infty}\frac{1}{2^{i}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]\Bigg{]}$ $\displaystyle\leq 16\sqrt{C\log n^{\prime}}\Bigg{[}\sum_{i=2}^{\log n-1}\frac{1}{\sqrt{2^{i}n}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]+\sum_{i=\log n}^{\infty}\frac{1}{2^{i}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]\Bigg{]}$ where in the last inequality we use the fact that $n^{\prime}\geq\sqrt{n}$. Hence, by Lemma 6.11 (which gives a lower bound on $\mathcal{W}(P,Q)$) in conjunction with the above equation, we have that $\mathcal{W}(P,Q)\geq\frac{C^{\prime}}{\sqrt{\log n^{\prime}}}\mathbb{E}[P,\hat{P}_{n^{\prime}}]$ for some sufficiently small constant $C^{\prime}$. 
Substituting back in Equation 9, we have that with probability at least $0.25$ over the draw of two datasets ${\bf x}\sim P^{n^{\prime}}$, ${\bf x}^{\prime}\sim Q^{n^{\prime}}$, and the randomness used by invocations of algorithm $A$ we have that $\max(\mathcal{W}(P,A({\bf x})),\mathcal{W}(Q,A({\bf x^{\prime}}))\geq\frac{1}{2}\frac{C^{\prime}}{\sqrt{\log n^{\prime}}}\mathbb{E}[\mathcal{W}(P,\hat{P}_{n^{\prime}})]\geq\frac{1}{2}\frac{C^{\prime}}{\sqrt{\log n^{\prime}}}\mathbb{E}\left[\mathcal{W}(P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}},\hat{P}_{n^{\prime}}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})\right],$ as required. ∎ ### 6.2 Upper Bound In this section, we describe an algorithm that achieves the instance optimal rate described in the previous section (up to polylogarithmic factors in some of the terms). We will be looking at distributions $P$ supported on a discrete, ordered interval $\\{a,a+\gamma,\dots,b-\gamma,b\\}$. Note that by a simple coupling argument, any continuous distribution $P^{cont}$ on $[a,b]$ is at most $\gamma$ away in Wasserstein distance from a distribution on this grid. The dependence on $\gamma$ in our bounds for discrete distributions will be inverse polylogarithmic (or better), and so our algorithms for estimating distributions $P$ in the interval $\\{a,a+\gamma,\dots,b-\gamma,b\\}$ also work to give similar bounds for continuous distributions on $[a,b]$, up to a small additive factor of $\gamma$, which can be set to any inverse polynomial in the dataset size without significantly affecting our bounds. Formally, we will prove the following theorem (See Theorem 6.15 for a more detailed statement). ###### Theorem 6.14. Fix $\varepsilon,\beta\in(0,1]$, $a,b\in\mathbb{R}$, and $\gamma<b-a\in\mathbb{R}$ such that $\frac{b-a}{\gamma}$ is an integer. Let $n\in\mathbb{N}>c_{2}\frac{\log^{4}\frac{b-a}{\beta\gamma}}{\varepsilon}$ for some sufficiently large constant $c_{2}$. There exists an algorithm $A$ that for any distribution $P$ on $\\{a,a+\gamma,a+2\gamma,\dots,b-\gamma,b\\}$ satisfies the following. When run with input a random sample ${\bf x}\sim P^{n}$, $A$ outputs a distribution $P^{DP}$ such that with probability at least $1-\beta$ over the randomness of ${\bf x}$ and the algorithm, $\mathcal{W}(P,P^{DP})=O\left(\frac{1}{k}\left(q_{1-\frac{1}{k}}-q_{\frac{1}{k}}\right)+\mathcal{W}(P,P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})+\sqrt{\log\frac{n}{\beta}}\mathbb{E}\left[\mathcal{W}\left(P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}},\hat{P}_{n}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}}\right)\right]\right),$ where $\hat{P}_{n}$ is the empirical distribution on $n$ samples drawn independently from $P$, $q_{\alpha}$ represents the $\alpha$-quantile of distribution $P$, and $k=\lceil\frac{\varepsilon n}{4c_{3}\log^{3}\frac{b-a}{\beta\gamma}\log\frac{n}{\beta}}\rceil$ for a sufficiently large constant $c_{3}$. Since $k\approx\varepsilon n/\log(n)$, this upper bound matches the lower bound in Theorem 6.3 in its dependence on $\varepsilon$ and its dependence on $n$ (up to logarithmic factors in $n$). The algorithm that we will analyze proceeds by estimating sufficiently many quantiles from the empirical distribution and distributing mass evenly between the chosen quantiles. 
The number of quantiles is chosen carefully to ensure that the estimated $\alpha$-quantiles are also approximately $\alpha$-quantiles for the empirical distribution (and hence also approximately for the true distribution), and to ensure that the CDF of the output distribution closely tracks the CDF of the empirical distribution. Through a careful analysis, we are able to leverage these properties to give instance optimality guarantees for the accuracy of the algorithm.

#### 6.2.1 Algorithm for density estimation

Algorithm 5 is our algorithm for density estimation, and proceeds by differentially privately estimating sufficiently many quantiles of the distribution and placing equal mass on each of them. We argue that a simple CDF-based differentially private quantiles estimator $A_{quant}$ satisfies a specific guarantee that will be key to our analysis. See Appendix E for more details about the quantiles algorithm and formal statements and proofs therein.

Algorithm 5: Algorithm $A$ for estimating a distribution on $\mathbb{R}$

1: Input: ${\bf x}=(x_{1},\dots,x_{n})\sim P^{n}$, privacy parameter $\varepsilon$, interval end-points $a,b$, granularity $\gamma$, access to algorithm $A_{quant}$
2: Output: Distribution $P^{DP}$ on $\mathbb{R}$.
3: Let $k$ be set to $\lceil\frac{\varepsilon n}{4c_{3}\log^{3}\frac{b-a}{\beta\gamma}\log\frac{n}{\beta}}\rceil$ for a sufficiently large constant $c_{3}$.
4: Use Algorithm $A_{quant}$ referenced in Theorem E.2 with inputs interval end points $a,b$, granularity $\gamma$, ${\bf x}=(x_{1},\dots,x_{n})\in\\{a,a+\gamma,\dots,b-\gamma,b\\}^{n}$, and desired quantile values ${\bf\alpha}=\\{1/2k,3/2k,5/2k,\dots,(2k-1)/2k\\}$, and let the outputs be $\tilde{q}_{1},\dots,\tilde{q}_{k}$.
5: for $j\in[k]$ do
6:     Set $P^{DP}(\tilde{q}_{j})=\frac{1}{k}$.
7: Output $P^{DP}$.

Observe that Algorithm 5 inherits the privacy of $A_{quant}$, since it simply postprocesses the quantiles it receives from that subroutine, and hence is also $\varepsilon$-DP. Now, we are in a position to state our main theorem, which bounds the Wasserstein distance between the distribution output by our algorithm and the underlying probability distribution $P$.

###### Theorem 6.15.

Fix $\varepsilon,\beta\in(0,1]$, $a,b\in\mathbb{R}$, and $\gamma<b-a\in\mathbb{R}$ such that $\frac{b-a}{\gamma}$ is an integer. Let $n\in\mathbb{N}>c_{2}\frac{\log^{4}\frac{b-a}{\gamma\beta\varepsilon}}{\varepsilon}$ for some sufficiently large constant $c_{2}$. Let $P$ be any distribution supported on $\\{a,a+\gamma,a+2\gamma,\dots,b-\gamma,b\\}$, and ${\bf x}\sim P^{n}$. Then, Algorithm 5, when given inputs ${\bf x}$, privacy parameter $\varepsilon$, interval end points $a,b$, and granularity $\gamma$, outputs a distribution $P^{DP}$ such that with probability at least $1-O(\beta)$ over the randomness of ${\bf x}$ and the algorithm, $\mathcal{W}(P,P^{DP})\leq\sqrt{c\log n}\cdot\mathbb{E}\left[\mathcal{W}\left(P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}},\hat{P}_{n}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}}\right)\right]+C^{\prime\prime}\mathcal{W}(P,P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})+\frac{2}{k}\left(q_{1-1/k}-q_{1/k}\right),$ where $\hat{P}_{n}$ is the uniform distribution on ${\bf x}$, $q_{\alpha}$ represents the $\alpha$-quantile of distribution $P$, $c,C^{\prime\prime}$ are sufficiently large constants, and $k=\lceil\frac{\varepsilon n}{4c_{3}\log^{3}\frac{b-a}{\beta\gamma}\log\frac{n}{\beta}}\rceil$, where $c_{3}$ is a sufficiently large constant.
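To make the structure of Algorithm 5 concrete, the following is a minimal Python sketch (assuming only numpy). The private quantile subroutine $A_{quant}$ of Theorem E.2 is not reproduced here; `dp_quantiles_placeholder` is a naive stand-in included only so that the sketch runs, and it is neither the paper's mechanism nor a carefully calibrated private release. The constant $c_{3}$ is the unspecified "sufficiently large" constant of the theorem, set to an arbitrary value, and all function names are ours.

```python
import numpy as np

def dp_quantiles_placeholder(x, alphas, a, b, gamma, eps, rng):
    """Stand-in for the private quantile estimator A_quant (Theorem E.2),
    which is NOT reproduced here.  It perturbs the empirical CDF on the grid
    {a, a+gamma, ..., b} with Laplace noise and reads off quantiles; this is
    for illustration only and carries no formal privacy or accuracy guarantee."""
    grid = np.arange(a, b + gamma / 2, gamma)
    cdf = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    noisy = np.clip(cdf + rng.laplace(scale=2.0 / (eps * len(x)), size=len(grid)), 0, 1)
    noisy = np.maximum.accumulate(noisy)                 # enforce monotonicity
    idx = np.minimum(np.searchsorted(noisy, alphas), len(grid) - 1)
    return grid[idx]

def estimate_density(x, eps, a, b, gamma, beta=0.05,
                     dp_quantiles=dp_quantiles_placeholder, rng=None):
    """Sketch of Algorithm 5: estimate k quantiles via the supplied private
    quantile routine and place mass 1/k on each estimated quantile."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    c3 = 1.0                                             # arbitrary stand-in constant
    k = int(np.ceil(eps * n /
                    (4 * c3 * np.log((b - a) / (beta * gamma)) ** 3 * np.log(n / beta))))
    k = max(k, 1)
    alphas = (2 * np.arange(1, k + 1) - 1) / (2 * k)     # 1/2k, 3/2k, ..., (2k-1)/2k
    q_tilde = dp_quantiles(x, alphas, a, b, gamma, eps, rng)
    return q_tilde, np.full(k, 1.0 / k)                  # support points and masses

def wasserstein1(support_p, mass_p, support_q, mass_q):
    """W_1 between two finitely supported distributions on R, computed as the
    integral of |F_P - F_Q| (the CDF formula referred to as Lemma 2.3)."""
    sp, mp = np.asarray(support_p), np.asarray(mass_p)
    sq, mq = np.asarray(support_q), np.asarray(mass_q)
    pts = np.sort(np.concatenate([sp, sq]))
    fp = np.sum(mp[None, :] * (sp[None, :] <= pts[:, None]), axis=1)
    fq = np.sum(mq[None, :] * (sq[None, :] <= pts[:, None]), axis=1)
    return np.sum(np.abs(fp - fq)[:-1] * np.diff(pts))
```

The helper `wasserstein1` can be used in simulation to compare the output of `estimate_density` against a known $P$ via the CDF formula for the Wasserstein distance.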
We note that, using more sophisticated differentially private CDF estimators to estimate quantiles (such as the ones in [BNSV15, CLN+23]), we can also obtain a version of the same theorem for approximate differential privacy, with a better dependence on the size of the domain $\frac{b-a}{\gamma}$ (only $\log^{*}(\frac{b-a}{\gamma})$ as opposed to $poly\log(\frac{b-a}{\gamma})$, where $\log^{*}t$ is the number of times $\log$ has to be applied to $t$ to get it to be $\leq 1$). (Footnote: The theorem would be of the same form as Theorem 6.15, except that Algorithm 5 would be $(\varepsilon,\delta)$-DP, with the lower bound on $n$ instead being $n=\Omega\left(\frac{\operatorname{polylog}^{*}\frac{b-a}{\gamma\varepsilon\delta}\sqrt{\log 1/\delta}\log(1/\beta)}{\varepsilon}\right)$, and $k$ being set instead to $O\left(\frac{\varepsilon n}{\log^{*}\frac{b-a}{\gamma}\operatorname{polylog}\frac{n}{\beta}}\right)$.)

To prove Theorem 6.15, we first relate the Wasserstein distance of interest (between the true distribution $P$ and the algorithm’s output distribution $P^{DP}$) to a quantity related to an appropriately chosen restriction. Let $q_{\alpha}$ represent the $\alpha$-quantile of $P$, let $\hat{q}_{\alpha}$ represent the $\alpha$-quantile of $\hat{P}_{n}$, and let $\tilde{q}_{\alpha}$ represent the $\alpha$-quantile of $P^{DP}$. We also note that all these distributions (and others that will come up in the proof) are bounded distributions over the real line, and so we can freely apply the triangle inequality for Wasserstein distance, and the cumulative distribution formula for Wasserstein distance (Lemma 2.3). The proof of the main theorem will follow from the following lemmas (all proved in Appendix F).

###### Lemma 6.16.

Let $C^{\prime\prime}>0$ be a sufficiently large constant, and let $n>0$ be sufficiently large. With probability at least $1-O(\beta)$ over the randomness in data samples and Algorithm 5, $\mathcal{W}(P,P^{DP})\leq\mathcal{W}(P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}},P^{DP}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})+C^{\prime\prime}\mathcal{W}(P,P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}}).$

###### Lemma 6.17 (Wasserstein in terms of quantiles).

For all datasets ${\bf x}$ (with data entries in $[a,b]$), with probability at least $1-\beta$ over the randomness of Algorithm 5, we have that $\mathcal{W}(\hat{P}_{n}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}},P^{DP}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})\leq\frac{2}{k}\left(q_{1-1/k}-q_{1/k}\right),$ where $\hat{P}_{n}$ is the uniform distribution over ${\bf x}$.

Now, we argue about the concentration of the Wasserstein distance between restrictions of the empirical distribution and restrictions of the true distribution.

###### Claim 6.18.

Fix $\beta\in(0,1)$ and sufficiently large constants $c_{3},c_{6}$. Let $n>0$ be sufficiently large such that $n>\log n/\beta$ (as in Theorem 6.15). For all $k$ such that $\frac{1}{k}>c_{3}\frac{\log\frac{n}{\beta}}{n}$, with probability at least $1-O(\beta)$ over the randomness in the data, $\mathcal{W}(P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}},\hat{P}_{n}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})\leq\sqrt{c_{6}\log\frac{n}{\beta}}\cdot\mathbb{E}[\mathcal{W}(P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}},\hat{P}_{n}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})].$

Now, we give the proof of our main theorem.

###### Proof of Theorem 6.15.
Using Lemma 6.16, Claim 6.18 and the triangle inequality, we have that with probability at least $1-O(\beta)$ over the randomness of the data and the algorithm, $\displaystyle\mathcal{W}(P,P^{DP})$ $\displaystyle\leq\mathcal{W}(P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}},P^{DP}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})+C^{\prime\prime}\mathcal{W}(P,P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})$ $\displaystyle\leq\mathcal{W}(\hat{P}_{n}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}},P^{DP}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})+\mathcal{W}(\hat{P}_{n}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}},P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})+C^{\prime\prime}\mathcal{W}(P,P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})$ $\displaystyle\leq\mathcal{W}(\hat{P}_{n}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}},P^{DP}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})+\sqrt{c_{6}\log\frac{n}{\beta}}\mathbb{E}\left[\mathcal{W}(\hat{P}_{n}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}},P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})\right]+C^{\prime\prime}\mathcal{W}(P,P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})$ Finally, applying Lemma 6.17 and taking a union bound over failure probabilities, we get that with probability at least $1-O(\beta)$ over the randomness of the data and the algorithm, $\displaystyle\mathcal{W}(P,P^{DP})$ $\displaystyle\leq\frac{2}{k}\left(q_{1-1/k}-q_{1/k}\right)+\sqrt{c_{6}\log\frac{n}{\beta}}\mathbb{E}\left[\mathcal{W}(\hat{P}_{n}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}},P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})\right]+C^{\prime\prime}\mathcal{W}(P,P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})$ as required. ∎ ## References * [AAK21] Ishaq Aden-Ali, Hassan Ashtiani, and Gautam Kamath. On the sample complexity of privately learning unbounded high-dimensional gaussians. In Vitaly Feldman, Katrina Ligett, and Sivan Sabato, editors, Algorithmic Learning Theory, 16-19 March 2021, Virtual Conference, Worldwide, volume 132 of Proceedings of Machine Learning Research, pages 185–216. PMLR, 2021. * [AAL23a] Mohammad Afzali, Hassan Ashtiani, and Christopher Liaw. Mixtures of gaussians are privately learnable with a polynomial number of samples. CoRR, abs/2309.03847, 2023. * [AAL23b] Jamil Arbas, Hassan Ashtiani, and Christopher Liaw. Polynomial time and private learning of unbounded gaussian mixture models. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett, editors, International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 1018–1040. PMLR, 2023. * [ABC17] Peyman Afshani, Jérémy Barbay, and Timothy M. Chan. Instance-optimal geometric algorithms. J. ACM, 64(1), mar 2017. * [AD20] Hilal Asi and John C. Duchi. Instance-optimality in differential privacy via approximate inverse sensitivity mechanisms. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. * [ADJ+11] Jayadev Acharya, Hirakendu Das, Ashkan Jafarpour, Alon Orlitsky, and Shengjun Pan. Competitive closeness testing. In Sham M. Kakade and Ulrike von Luxburg, editors, COLT 2011 - The 24th Annual Conference on Learning Theory, June 9-11, 2011, Budapest, Hungary, volume 19 of JMLR Proceedings, pages 47–68. JMLR.org, 2011\. * [ADJ+12] Jayadev Acharya, Hirakendu Das, Ashkan Jafarpour, Alon Orlitsky, Shengjun Pan, and Ananda Theertha Suresh. 
Competitive classification and closeness testing. In Shie Mannor, Nathan Srebro, and Robert C. Williamson, editors, COLT 2012 - The 25th Annual Conference on Learning Theory, June 25-27, 2012, Edinburgh, Scotland, volume 23 of JMLR Proceedings, pages 22.1–22.18. JMLR.org, 2012. * [AJOS13a] Jayadev Acharya, Ashkan Jafarpour, Alon Orlitsky, and Ananda Theertha Suresh. A competitive test for uniformity of monotone distributions. In Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2013, Scottsdale, AZ, USA, April 29 - May 1, 2013, volume 31 of JMLR Workshop and Conference Proceedings, pages 57–65. JMLR.org, 2013. * [AJOS13b] Jayadev Acharya, Ashkan Jafarpour, Alon Orlitsky, and Ananda Theertha Suresh. Optimal probability estimation with applications to prediction and classification. In Shai Shalev-Shwartz and Ingo Steinwart, editors, COLT 2013 - The 26th Annual Conference on Learning Theory, June 12-14, 2013, Princeton University, NJ, USA, volume 30 of JMLR Workshop and Conference Proceedings, pages 764–796. JMLR.org, 2013. * [AKT+23] Daniel Alabi, Pravesh K. Kothari, Pranay Tankala, Prayaag Venkat, and Fred Zhang. Privately estimating a gaussian: Efficient, robust, and optimal. In Barna Saha and Rocco A. Servedio, editors, Proceedings of the 55th Annual ACM Symposium on Theory of Computing, STOC 2023, Orlando, FL, USA, June 20-23, 2023, pages 483–496. ACM, 2023. * [AL22] Hassan Ashtiani and Christopher Liaw. Private and polynomial time algorithms for learning gaussians and beyond. In Po-Ling Loh and Maxim Raginsky, editors, Conference on Learning Theory, 2-5 July 2022, London, UK, volume 178 of Proceedings of Machine Learning Research, pages 1075–1076. PMLR, 2022. * [ALMM19] Noga Alon, Roi Livni, Maryanthe Malliaris, and Shay Moran. Private PAC learning implies finite littlestone dimension. In Moses Charikar and Edith Cohen, editors, Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, STOC 2019, Phoenix, AZ, USA, June 23-26, 2019, pages 852–860. ACM, 2019. * [ASSU24] Maryam Aliakbarpour, Rose Silver, Thomas Steinke, and Jonathan R. Ullman. Differentially private medians and interior points for non-pathological data. In Venkatesan Guruswami, editor, 15th Innovations in Theoretical Computer Science Conference, ITCS 2024, January 30 to February 2, 2024, Berkeley, CA, USA, volume 287 of LIPIcs, pages 3:1–3:21. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2024. * [ASZ17] Jayadev Acharya, Ziteng Sun, and Huanyu Zhang. Differentially private testing of identity and closeness of discrete distributions. CoRR, abs/1707.05128, 2017. * [ASZ20] Jayadev Acharya, Ziteng Sun, and Huanyu Zhang. Differentially private assouad, fano, and le cam. CoRR, abs/2004.06830, 2020. * [ASZ21] Jayadev Acharya, Ziteng Sun, and Huanyu Zhang. Differentially private Assouad, Fano, and Le Cam. In Vitaly Feldman, Katrina Ligett, and Sivan Sabato, editors, Proceedings of the 32nd International Conference on Algorithmic Learning Theory, volume 132 of Proceedings of Machine Learning Research, pages 48–78. PMLR, 16–19 Mar 2021. * [BA20] Victor-Emmanuel Brunel and Marco Avella-Medina. Propose, test, release: Differentially private estimation with high probability. CoRR, abs/2002.08774, 2020. * [Bar96] Yair Bartal. Probabilistic approximations of metric spaces and its algorithmic applications. In 37th Annual Symposium on Foundations of Computer Science, FOCS ’96, Burlington, Vermont, USA, 14-16 October, 1996, pages 184–193. 
IEEE Computer Society, 1996. * [BBDS13] Jeremiah Blocki, Avrim Blum, Anupam Datta, and Or Sheffet. Differentially private data analysis of social networks via restricted sensitivity. In Robert D. Kleinberg, editor, Innovations in Theoretical Computer Science, ITCS ’13, Berkeley, CA, USA, January 9-12, 2013, pages 87–96. ACM, 2013. * [BG14] Emmanuel Boissard and Thibaut Le Gouic. On the mean speed of convergence of empirical and occupation measures in Wasserstein distance. Annales de l’Institut Henri Poincaré, Probabilités et Statistiques, 50(2):539 – 563, 2014. * [BGS+21] Gavin Brown, Marco Gaboardi, Adam D. Smith, Jonathan R. Ullman, and Lydia Zakynthinou. Covariance-aware private mean estimation without private covariance estimation. In Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, editors, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 7950–7964, 2021. * [BHS23] Gavin Brown, Samuel B. Hopkins, and Adam Smith. Fast, sample-efficient, affine-invariant private mean and covariance estimation for subgaussian distributions, 2023. * [BKM+21] Eugene Bagdasaryan, Peter Kairouz, Stefan Mellem, Adrià Gascón, Kallista Bonawitz, Deborah Estrin, and Marco Gruteser. Towards sparse federated analytics: Location heatmaps under distributed differential privacy with secure aggregation. arXiv preprint arXiv:2111.02356, 2021. * [BKSW21] Mark Bun, Gautam Kamath, Thomas Steinke, and Zhiwei Steven Wu. Private hypothesis selection. IEEE Trans. Inf. Theory, 67(3):1981–2000, 2021. * [BL19] Sergey G. Bobkov and Michel Ledoux. One-dimensional empirical measures, order statistics, and kantorovich transport distances. Memoirs of the American Mathematical Society, 2019. * [BM23] Daniel Bartl and Shahar Mendelson. On a variance dependent Dvoretzky-Kiefer-Wolfowitz inequality. arXiv e-prints, page arXiv:2308.04757, August 2023. * [BNNR09] Khanh Do Ba, Huy L. Nguyen, Huy Ngoc Nguyen, and Ronitt Rubinfeld. Sublinear time algorithms for earth mover’s distance. Theory of Computing Systems, 48:428–442, 2009. * [BNS16] Amos Beimel, Kobbi Nissim, and Uri Stemmer. Private learning and sanitization: Pure vs. approximate differential privacy. Theory Comput., 12(1):1–61, 2016. * [BNSV15] Mark Bun, Kobbi Nissim, Uri Stemmer, and Salil P. Vadhan. Differentially private release and learning of threshold functions. CoRR, abs/1504.07553, 2015. * [BS19] Mark Bun and Thomas Steinke. Average-case averages: Private algorithms for smooth sensitivity and mean estimation. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 181–191, 2019. * [BSV22] March Boedihardjo, Thomas Strohmer, and Roman Vershynin. Private measures, random walks, and synthetic data, 2022. * [BUV18] Mark Bun, Jonathan R. Ullman, and Salil P. Vadhan. Fingerprinting codes and the price of approximate differential privacy. SIAM J. Comput., 47(5):1888–1938, 2018. * [BY02] Z. Bar-Yossef. The Complexity of Massive Data Set Computations. University of California, Berkeley, 2002. * [Can17] Clément L. Canonne. A short note on distinguishing discrete distributions., 2017. * [CB22] Graham Cormode and Akash Bharadwaj. 
Sample-and-threshold differential privacy: Histograms and applications. In International Conference on Artificial Intelligence and Statistics, pages 1420–1431. PMLR, 2022. * [CCD+23] Karan Chadha, Junye Chen, John Duchi, Vitaly Feldman, Hanieh Hashemi, Omid Javidbakht, Audra McMillan, and Kunal Talwar. Differentially private heavy hitter detection using federated analytics, 2023. * [CD20] Rachel Cummings and David Durfee. Individual sensitivity preprocessing for data privacy. In Shuchi Chawla, editor, Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms, SODA 2020, Salt Lake City, UT, USA, January 5-8, 2020, pages 528–547. SIAM, 2020. * [CDK17] Bryan Cai, Constantinos Daskalakis, and Gautam Kamath. Priv’it: Private and sample efficient identity testing. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 635–644. PMLR, 2017. * [CKM+19] Clément L. Canonne, Gautam Kamath, Audra McMillan, Adam D. Smith, and Jonathan R. Ullman. The structure of optimal private tests for simple hypotheses. In Moses Charikar and Edith Cohen, editors, Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, STOC 2019, Phoenix, AZ, USA, June 23-26, 2019, pages 310–321. ACM, 2019. * [CL15] T. Tony Cai and Mark G. Low. A framework for estimation of convex functions. Statistica Sinica, 25(2):423–456, 2015. * [CLN+23] Edith Cohen, Xin Lyu, Jelani Nelson, Tamás Sarlós, and Uri Stemmer. Optimal differentially private learning of thresholds and quasi-concave optimization. In Barna Saha and Rocco A. Servedio, editors, Proceedings of the 55th Annual ACM Symposium on Theory of Computing, STOC 2023, Orlando, FL, USA, June 20-23, 2023, pages 472–482. ACM, 2023. * [CR12] Guillermo D. Cañas and Lorenzo Rosasco. Learning probability measures with respect to optimal transport metrics. In Peter L. Bartlett, Fernando C. N. Pereira, Christopher J. C. Burges, Léon Bottou, and Kilian Q. Weinberger, editors, Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012. Proceedings of a meeting held December 3-6, 2012, Lake Tahoe, Nevada, United States, pages 2501–2509, 2012. * [CSS11] T.-H. Hubert Chan, Elaine Shi, and Dawn Song. Private and continual release of statistics. ACM Trans. Inf. Syst. Secur., 14(3):26:1–26:24, 2011. * [CWZ19] T. Tony Cai, Yichen Wang, and Linjun Zhang. The cost of privacy: Optimal rates of convergence for parameter estimation with differential privacy. CoRR, abs/1902.04495, 2019. * [CZ13] Shixi Chen and Shuigeng Zhou. Recursive mechanism: towards node differential privacy and unrestricted joins. In Kenneth A. Ross, Divesh Srivastava, and Dimitris Papadias, editors, Proceedings of the ACM SIGMOD International Conference on Management of Data, SIGMOD 2013, New York, NY, USA, June 22-27, 2013, pages 653–664. ACM, 2013. * [DHS15] Ilias Diakonikolas, Moritz Hardt, and Ludwig Schmidt. Differentially private learning of structured discrete distributions. In Corinna Cortes, Neil D. Lawrence, Daniel D. Lee, Masashi Sugiyama, and Roman Garnett, editors, Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 2566–2574, 2015. * [DKM+06] Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. 
Our data, ourselves: Privacy via distributed noise generation. In International Conference on the Theory and Applications of Cryptographic Techniques, EUROCRYPT ’06, pages 486–503, St. Petersburg, Russia, 2006. * [DKSS23] Travis Dick, Alex Kulesza, Ziteng Sun, and Ananda Theertha Suresh. Subset-based instance optimality in private estimation. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett, editors, International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 7992–8014. PMLR, 2023. * [DL91] David L. Donoho and Richard C. Liu. Geometrizing Rates of Convergence, II. The Annals of Statistics, 19(2):633 – 667, 1991. * [DL09] Cynthia Dwork and Jing Lei. Differential privacy and robust statistics. In Michael Mitzenmacher, editor, Proceedings of the 41st Annual ACM Symposium on Theory of Computing, STOC 2009, Bethesda, MD, USA, May 31 - June 2, 2009, pages 371–380. ACM, 2009. * [DLSV23] Trung Dang, Jasper C.H. Lee, Maoyuan Song, and Paul Valiant. Optimality in mean estimation: Beyond worst-case, beyond sub-gaussian, and beyond $1+\alpha$ moments. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. * [DMNS17] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. Journal of Privacy and Confidentiality, 7(3):17–51, 2017. * [DNPR10] Cynthia Dwork, Moni Naor, Toniann Pitassi, and Guy N. Rothblum. Differential privacy under continual observation. In Leonard J. Schulman, editor, Proceedings of the 42nd ACM Symposium on Theory of Computing, STOC 2010, Cambridge, Massachusetts, USA, 5-8 June 2010, pages 715–724. ACM, 2010. * [DR18] John C. Duchi and Feng Ruan. The right complexity measure in locally private estimation: It is not the fisher information. CoRR, abs/1806.05756, 2018. * [DSS11] Steffen Dereich, Michael Scheutzow, and Reik Schottstedt. Constructive quantization: Approximation by empirical measures. Annales De L Institut Henri Poincare-probabilites Et Statistiques, 49:1183–1203, 2011. * [Dud69] R. M. Dudley. The speed of mean glivenko-cantelli convergence. The Annals of Mathematical Statistics, 40(1):40–50, 1969. * [DY95] Vladimir Dobric and Joseph E. Yukich. Asymptotics for transportation cost in high dimensions. Journal of Theoretical Probability, 8:97–118, 1995. * [FG15] Nicolas Fournier and Arnaud Guillin. On the rate of convergence in Wasserstein distance of the empirical measure. Probability Theory and Related Fields, 162(3-4):707, August 2015\. * [FLN01] Ronald Fagin, Amnon Lotem, and Moni Naor. Optimal aggregation algorithms for middleware. In Proceedings of the Twentieth ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, PODS ’01, page 102–113, New York, NY, USA, 2001. Association for Computing Machinery. * [Fou23] Nicolas Fournier. Convergence of the empirical measure in expected wasserstein distance: non asymptotic explicit bounds in $\mathbb{R}^{d}$, 2023. * [FRT03] Jittat Fakcharoenphol, Satish Rao, and Kunal Talwar. A tight bound on approximating arbitrary metrics by tree metrics. In Proceedings of the Thirty-Fifth Annual ACM Symposium on Theory of Computing, STOC ’03, page 448–455, New York, NY, USA, 2003. Association for Computing Machinery. * [GHK+23] Badih Ghazi, Junfeng He, Kai Kohlhoff, Ravi Kumar, Pasin Manurangsi, Vidhya Navalpakkam, and Nachiappan Valliappan. Differentially private heatmaps. 
In Brian Williams, Yiling Chen, and Jennifer Neville, editors, Thirty-Seventh AAAI Conference on Artificial Intelligence, AAAI 2023, Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence, IAAI 2023, Thirteenth Symposium on Educational Advances in Artificial Intelligence, EAAI 2023, Washington, DC, USA, February 7-14, 2023, pages 7696–7704. AAAI Press, 2023. * [GJK21] Jennifer Gillenwater, Matthew Joseph, and Alex Kulesza. Differentially private quantiles. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 3713–3722. PMLR, 2021. * [GKN20] Tomer Grossman, Ilan Komargodski, and Moni Naor. Instance Complexity and Unlabeled Certificates in the Decision Tree Model. In Thomas Vidick, editor, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020), volume 151 of Leibniz International Proceedings in Informatics (LIPIcs), pages 56:1–56:38, Dagstuhl, Germany, 2020. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. * [HKM22] Samuel B. Hopkins, Gautam Kamath, and Mahbod Majid. Efficient mean estimation with pure differential privacy via a sum-of-squares exponential mechanism. In Stefano Leonardi and Anupam Gupta, editors, STOC ’22: 54th Annual ACM SIGACT Symposium on Theory of Computing, Rome, Italy, June 20 \- 24, 2022, pages 1406–1417. ACM, 2022. * [HKMN23] Samuel B. Hopkins, Gautam Kamath, Mahbod Majid, and Shyam Narayanan. Robustness implies privacy in statistical estimation. In Barna Saha and Rocco A. Servedio, editors, Proceedings of the 55th Annual ACM Symposium on Theory of Computing, STOC 2023, Orlando, FL, USA, June 20-23, 2023, pages 497–506. ACM, 2023. * [HLY21] Ziyue Huang, Yuting Liang, and Ke Yi. Instance-optimal mean estimation under differential privacy. In Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, editors, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 25993–26004, 2021. * [HO19] Yi Hao and Alon Orlitsky. Doubly-competitive distribution estimation. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 2614–2623. PMLR, 2019. * [HVZ23] Yiyun He, Roman Vershynin, and Yizhe Zhu. Algorithmically effective differentially private synthetic data, 2023\. * [KDH23] Rohith Kuditipudi, John C. Duchi, and Saminul Haque. A pretty fast algorithm for adaptive private mean estimation. In Gergely Neu and Lorenzo Rosasco, editors, The Thirty Sixth Annual Conference on Learning Theory, COLT 2023, 12-15 July 2023, Bangalore, India, volume 195 of Proceedings of Machine Learning Research, pages 2511–2551. PMLR, 2023. * [KLM+20] Haim Kaplan, Katrina Ligett, Yishay Mansour, Moni Naor, and Uri Stemmer. Privately learning thresholds: Closing the exponential gap. In Jacob D. Abernethy and Shivani Agarwal, editors, Conference on Learning Theory, COLT 2020, 9-12 July 2020, Virtual Event [Graz, Austria], volume 125 of Proceedings of Machine Learning Research, pages 2263–2285. PMLR, 2020. * [KLSU19] Gautam Kamath, Jerry Li, Vikrant Singhal, and Jonathan R. Ullman. Privately learning high-dimensional distributions. 
In Alina Beygelzimer and Daniel Hsu, editors, Conference on Learning Theory, COLT 2019, 25-28 June 2019, Phoenix, AZ, USA, volume 99 of Proceedings of Machine Learning Research, pages 1853–1902. PMLR, 2019\. * [KMS22a] Gautam Kamath, Argyris Mouzakis, and Vikrant Singhal. New lower bounds for private estimation and a generalized fingerprinting lemma. In NeurIPS, 2022. * [KMS+22b] Gautam Kamath, Argyris Mouzakis, Vikrant Singhal, Thomas Steinke, and Jonathan R. Ullman. A private and computationally-efficient estimator for unbounded gaussians. In Po-Ling Loh and Maxim Raginsky, editors, Conference on Learning Theory, 2-5 July 2022, London, UK, volume 178 of Proceedings of Machine Learning Research, pages 544–572. PMLR, 2022. * [KMV22] Pravesh Kothari, Pasin Manurangsi, and Ameya Velingker. Private robust estimation by stabilizing convex relaxations. In Po-Ling Loh and Maxim Raginsky, editors, Conference on Learning Theory, 2-5 July 2022, London, UK, volume 178 of Proceedings of Machine Learning Research, pages 723–777. PMLR, 2022. * [KNRS13] Shiva Prasad Kasiviswanathan, Kobbi Nissim, Sofya Raskhodnikova, and Adam D. Smith. Analyzing graphs with node differential privacy. In Amit Sahai, editor, Theory of Cryptography - 10th Theory of Cryptography Conference, TCC 2013, Tokyo, Japan, March 3-6, 2013. Proceedings, volume 7785 of Lecture Notes in Computer Science, pages 457–476. Springer, 2013. * [KSS22] Haim Kaplan, Shachar Schnapp, and Uri Stemmer. Differentially private approximate quantiles. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato, editors, International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 10751–10761. PMLR, 2022. * [KSSU19] Gautam Kamath, Or Sheffet, Vikrant Singhal, and Jonathan R. Ullman. Differentially private algorithms for learning mixtures of separated gaussians. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 168–180, 2019. * [KSU20] Gautam Kamath, Vikrant Singhal, and Jonathan R. Ullman. Private mean estimation of heavy-tailed distributions. In Jacob D. Abernethy and Shivani Agarwal, editors, Conference on Learning Theory, COLT 2020, 9-12 July 2020, Virtual Event [Graz, Austria], volume 125 of Proceedings of Machine Learning Research, pages 2204–2235. PMLR, 2020. * [KU20] Gautam Kamath and Jonathan R. Ullman. A primer on private statistics. CoRR, abs/2005.00010, 2020. * [KV18] Vishesh Karwa and Salil P. Vadhan. Finite sample differentially private confidence intervals. In Anna R. Karlin, editor, 9th Innovations in Theoretical Computer Science Conference, ITCS 2018, January 11-14, 2018, Cambridge, MA, USA, volume 94 of LIPIcs, pages 44:1–44:9. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2018. * [Lei20] Jing Lei. Convergence and concentration of empirical measures under Wasserstein distance in unbounded functional spaces. Bernoulli, 26(1):767 – 798, 2020. * [LKO22] Xiyang Liu, Weihao Kong, and Sewoong Oh. Differential privacy and robust statistics in high dimensions. In Po-Ling Loh and Maxim Raginsky, editors, Conference on Learning Theory, 2-5 July 2022, London, UK, volume 178 of Proceedings of Machine Learning Research, pages 1167–1246. PMLR, 2022. 
* [MJT+22] Audra McMillan, Omid Javidbakht, Kunal Talwar, Elliot Briggs, Mike Chatzidakis, Junye Chen, John Duchi, Vitaly Feldman, Yusuf Goren, Michael Hesse, Vojta Jina, Anil Katti, Albert Liu, Cheney Lyford, Joey Meyer, Alex Palmer, David Park, Wonhee Park, Gianni Parsa, Paul Pelzl, Rehan Rishi, Congzheng Song, Shan Wang, and Shundong Zhou. Private federated statistics in an interactive setting. arXiv preprint arXiv:2211.10082, 2022. * [MSU22] Audra McMillan, Adam D. Smith, and Jonathan R. Ullman. Instance-optimal differentially private estimation. CoRR, abs/2210.15819, 2022. * [Nar23] Shyam Narayanan. Better and simpler lower bounds for differentially private statistical estimation. CoRR, abs/2310.06289, 2023. * [NRS07] Kobbi Nissim, Sofya Raskhodnikova, and Adam D. Smith. Smooth sensitivity and sampling in private data analysis. In David S. Johnson and Uriel Feige, editors, Proceedings of the 39th Annual ACM Symposium on Theory of Computing, San Diego, California, USA, June 11-13, 2007, pages 75–84. ACM, 2007. * [NWB19] Jonathan Niles-Weed and Quentin Berthet. Minimax estimation of smooth densities in wasserstein distance. The Annals of Statistics, 2019. * [OS15] Alon Orlitsky and Ananda Theertha Suresh. Competitive distribution estimation: Why is good-turing good. In Corinna Cortes, Neil D. Lawrence, Daniel D. Lee, Masashi Sugiyama, and Roman Garnett, editors, Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 2143–2151, 2015. * [QYL12] Wahbeh Qardaji, Weining Yang, and Ninghui Li. Differentially private grids for geospatial data. Proceedings - International Conference on Data Engineering, 09 2012\. * [Rou21] Tim Roughgarden. Beyond the Worst-Case Analysis of Algorithms. Cambridge University Press, 2021. * [RS16] Sofya Raskhodnikova and Adam D. Smith. Lipschitz extensions for node-private graph statistics and the generalized exponential mechanism. In Irit Dinur, editor, IEEE 57th Annual Symposium on Foundations of Computer Science, FOCS 2016, 9-11 October 2016, Hyatt Regency, New Brunswick, New Jersey, USA, pages 495–504. IEEE Computer Society, 2016. * [Sin23] Vikrant Singhal. A polynomial time, pure differentially private estimator for binary product distributions. CoRR, abs/2304.06787, 2023. * [SP19] Shashank Singh and Barnabás Póczos. Minimax distribution estimation in wasserstein distance, 2019. * [TCK+22] Eliad Tsfadia, Edith Cohen, Haim Kaplan, Yishay Mansour, and Uri Stemmer. Friendlycore: Practical differentially private aggregation. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato, editors, International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 21828–21863. PMLR, 2022. * [vdV97] A. W. van der Vaart. Superefficiency, pages 397–410. Springer New York, New York, NY, 1997. * [Vov09] Vladimir Vovk. Superefficiency from the Vantage Point of Computability. Statistical Science, 24(1):73 – 86, 2009. * [VV16] Gregory Valiant and Paul Valiant. Instance optimal learning of discrete distributions. In Daniel Wichs and Yishay Mansour, editors, Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18-21, 2016, pages 142–155. ACM, 2016. * [WB19] Jonathan Weed and Francis Bach. 
Sharp asymptotic and finite-sample rates of convergence of empirical measures in Wasserstein distance. Bernoulli, 25(4A):2620–2648, 2019. * [Wol65] J. Wolfowitz. Asymptotic efficiency of the maximum likelihood estimator. Theory of Probability & Its Applications, 10(2):247–260, 1965. * [ZKM+20] Wennan Zhu, Peter Kairouz, Brendan McMahan, Haicheng Sun, and Wei Li. Federated heavy hitters discovery with differential privacy. In International Conference on Artificial Intelligence and Statistics, pages 3837–3847. PMLR, 2020. * [ZXX16] Jun Zhang, Xiaokui Xiao, and Xing Xie. Privtree: A differentially private algorithm for hierarchical decompositions. In Proceedings of the 2016 International Conference on Management of Data, SIGMOD ’16, page 155–170, New York, NY, USA, 2016. Association for Computing Machinery.
## Appendix A Preliminaries
### A.1 Distribution Distances
A number of other distances between distributions are used in this work.
###### Definition A.1 ($KL$-divergence).
Given two distributions $P$ and $Q$ with $supp(P)\subseteq supp(Q)$, the KL divergence is $KL(P,Q)=\sum_{t\in supp(P)}P(t)\ln\frac{P(t)}{Q(t)}$ if $P$ and $Q$ are discrete, and $KL(P,Q)=\int_{t\in\mathbb{R}:f_{P}(t)>0}f_{P}(t)\ln\frac{f_{P}(t)}{f_{Q}(t)}dt$ if $P$ and $Q$ are distributions on $\mathbb{R}$ with density functions $f_{P}$ and $f_{Q}$. If $supp(P)\not\subseteq supp(Q)$, then $KL(P,Q)=\infty$.
###### Definition A.2 (Hellinger distance).
Given two distributions $P$ and $Q$, the Hellinger distance is $H(P,Q)=\frac{1}{\sqrt{2}}\|\sqrt{P}-\sqrt{Q}\|_{2}$ if $P$ and $Q$ are discrete (where we think of $P$ and $Q$ as vectors representing the probability masses, and the square root is taken component-wise). If $P$ and $Q$ are distributions on $\mathbb{R}$ with density functions, then $H(P,Q)=\frac{1}{\sqrt{2}}\sqrt{\int_{t\in\mathbb{R}:f_{P}(t)>0}(\sqrt{f_{P}(t)}-\sqrt{f_{Q}(t)})^{2}dt}$. Note that we use $H^{2}(P,Q)$ to represent the squared Hellinger distance. Next, we define the total variation distance, which will come up in our high-dimensional results.
###### Definition A.3 (Total Variation distance).
Given two discrete distributions $P$ and $Q$, the Total Variation distance is $TV(P,Q)=\frac{1}{2}\|P-Q\|_{1}$ (where we think of $P$ and $Q$ as vectors representing the probability masses). More generally, for any two probability measures $P$ and $Q$ defined on $(\Omega,\mathcal{F})$, the total variation distance is defined as $\sup_{A\in\mathcal{F}}|P(A)-Q(A)|$, where $P(A)$ represents the probability of $A$ under measure $P$ and likewise for $Q$. We use the following relationships between the Hellinger distance, the KL divergence, and the total variation distance.
###### Lemma A.4.
For all distributions $P,Q$ such that the KL divergence of $P$ and $Q$ is well defined, we have that $H^{2}(P,Q)\leq KL(P,Q)\quad\text{and}\quad H^{2}(P,Q)\leq TV(P,Q)$ (10)
### A.2 Differential Privacy
###### Lemma A.5 (Post-Processing [DMNS17]).
If Algorithm $\mathcal{A}:\mathcal{X}^{n}\rightarrow\mathcal{Y}$ is $(\varepsilon,\delta)$-differentially private, and $\mathcal{B}:\mathcal{Y}\rightarrow\mathcal{Z}$ is any randomized function, then the algorithm $\mathcal{B}\circ\mathcal{A}$ is $(\varepsilon,\delta)$-differentially private. Secondly, differential privacy is robust to adaptive composition.
###### Lemma A.6 (Composition of $(\varepsilon,\delta)$-differential privacy [DMNS17]).
If $\mathcal{A}$ is an adaptive composition of $m$ differentially private algorithms $\mathcal{A}_{1},\ldots,\mathcal{A}_{m}$, where $\mathcal{A}_{j}$ is $(\varepsilon_{j},\delta_{j})$-differentially private, then $\mathcal{A}$ is $\left(\sum_{j}\varepsilon_{j},\sum_{j}\delta_{j}\right)$-differentially private. Finally, we discuss the Laplace mechanism, which we will use in one of our algorithms.
###### Definition A.7 ($\ell_{1}$-Sensitivity).
The $\ell_{1}$-sensitivity of a function $f:\mathcal{X}^{n}\rightarrow\mathbb{R}^{d}$ is $\Delta_{f}=\max_{{\bf x},{\bf x}^{\prime}\in\mathcal{X}^{n}:\,d_{ham}({\bf x},{\bf x}^{\prime})\leq 1}\|f({\bf x})-f({\bf x}^{\prime})\|_{1}.$
###### Lemma A.8 (Laplace Mechanism).
Let $f:\mathcal{X}^{n}\rightarrow\mathbb{R}^{d}$ be a function with $\ell_{1}$-sensitivity $\Delta_{f}$. Then the Laplace mechanism is the algorithm $\mathcal{A}_{f}({\bf x})=f({\bf x})+(Z_{1},\ldots,Z_{d}),$ where $Z_{i}\sim Lap\left(\frac{\Delta_{f}}{\varepsilon}\right)$ (and $Z_{1},\dots,Z_{d}$ are mutually independent). Algorithm $\mathcal{A}_{f}$ is $\varepsilon$-DP.
## Appendix B Experiment Details
Below we describe the experiment referenced in the introduction.
The distribution: We have taken a distribution on $[0,999]$ which is concentrated on two points, $430$ and $440$, with $p_{430}=\frac{1}{3}$ and $p_{440}=\frac{2}{3}$. Both algorithms have been run with $n=1600$ samples from this distribution.
Minimax Optimal Algorithm: The minimax-optimal algorithm here is the algorithm PSMM from [HVZ23], which considers a fixed partitioning of the interval into $\Omega(m^{\frac{1}{d}})$ equal intervals and places the empirical mass in each interval on an arbitrary point in that interval. Here we consider this algorithm with $\varepsilon=\infty$, so that no noise is added, and we have run it with $K=40$ buckets.
Instance-optimal Algorithm: The instance-optimal algorithm finds $k$ quantiles as in Algorithm 5. In this particular implementation, we used the recursive exponential mechanism of [KSS22], but we expect other quantile algorithms would work similarly. Here we use $k=10$ quantiles with $\varepsilon=1$.
## Appendix C Appendix for Section 5
See 5.6
###### Proof of Theorem 5.6.
Given a distribution $P$, let $\mathcal{A}^{*}_{P}=\arg\min_{\mathcal{A}\text{ is }\varepsilon\text{-DP}}\max_{Q\in\mathcal{N}(P)}\mathbb{E}_{D\sim P^{n}}[\mathcal{W}(P,\mathcal{A}(D))]$ so $\mathcal{R}_{\mathcal{N},n,\varepsilon}(P)=\max_{Q\in\mathcal{N}(P)}\mathbb{E}_{D\sim Q^{n}}[\mathcal{W}(P,\mathcal{A}^{*}_{P}(D))].$ Fix a level $\ell$ of the tree. We want to define an algorithm $\mathcal{A}^{*}_{P_{\ell}}$ on the distributions in $\mathcal{N}_{\ell}(P_{\ell})$ that achieves maximum error at most $\frac{1}{r_{\ell}}\mathcal{R}_{\mathcal{N},n,\varepsilon}(P)$. Define a randomised function $g_{P}$ which, given a node $\nu_{\ell}$ at level $\ell$, returns a sample $g_{P}(\nu_{\ell})$ drawn from the distribution $P$ restricted to the leaf nodes that are children of $\nu_{\ell}$. Given a set $D$ of nodes at level $\ell$, define $g_{P}(D)$ to be the set obtained by applying $g_{P}$ to each element of $D$ individually. Then define $\mathcal{A}^{*}_{P_{\ell}}(D)=(\mathcal{A}^{*}_{P}(g_{P}(D)))_{\ell}$. Since $g_{P}$ is applied individually to each element of $D$, neighbouring datasets remain neighbouring after applying $g_{P}$, and so $\mathcal{A}^{*}_{P_{\ell}}$ is $\varepsilon$-DP.
Given a distribution $Q^{\ell}\in\mathcal{N}_{\ell}(P_{\ell})$, define a distribution $Q$ on the leaves of the tree as follows: $Q(\nu)=\frac{Q^{\ell}(\nu_{\ell})}{P_{\ell}(\nu_{\ell})}*P(\nu),$ where $\nu_{\ell}$ is the parent node of $\nu$ at level $\ell$. Note $Q\in\mathcal{N}(P)$, $g_{P}(Q^{\ell})=Q$ and $Q_{\ell}=Q^{\ell}$. Now, $\displaystyle TV(Q^{\ell},\mathcal{A}^{*}_{P_{\ell}}(D))$ $\displaystyle=TV(Q_{\ell},(\mathcal{A}^{*}_{P}(g_{P}(D))_{\ell})$ $\displaystyle\leq\frac{1}{r_{\ell}}\sum_{\ell^{\prime}\in[0pt]}r_{\ell^{\prime}}TV(Q_{\ell^{\prime}},(\mathcal{A}^{*}_{P}(g_{P}(D))_{\ell^{\prime}})$ $\displaystyle=\frac{1}{r_{\ell}}\mathcal{W}(Q,\mathcal{A}^{*}_{P}(g_{P}(D)))$ where the first inequality follows by definition of $\mathcal{A}^{*}_{P_{\ell}}$ and the fact $Q_{\ell}=Q^{\ell}$. Since $g_{P}(Q^{\ell})=Q$, this implies that for all distributions in $\mathcal{N}_{\ell}(P_{\ell})$, $\mathbb{E}_{D\sim Q^{\ell}}\left[TV(Q^{\ell},\mathcal{A}^{*}_{P_{\ell}}(D))\right]\leq\mathbb{E}_{D\sim Q}\left[\frac{1}{r_{\ell}}\mathcal{W}(Q,\mathcal{A}^{*}_{P}(D))\right]\leq\frac{1}{r^{\ell}}\mathcal{R}_{\mathcal{N},n,\varepsilon}(P),$ which implies for all levels $\ell$, $\mathcal{R}_{\mathcal{N}_{\ell},n,\varepsilon}(P_{\ell})\leq\frac{1}{r^{\ell}}\mathcal{R}_{\mathcal{N},n,\varepsilon}(P)$ and so we are done. ∎ See 5.8 ###### Proof of Lemma 5.8. We will follow the proof of Theorem 3 in [ASZ21]. Given an estimator $\mathcal{A}$, define a classifier $\mathcal{A}^{*}$ by projecting on the product of hypercubes so $\mathcal{A}^{*}(X)=\arg\min_{u\in(\mathcal{E}_{k_{0}}\times\mathcal{E}_{k_{1}}\times\cdots)}d(\mathcal{A}(X),\theta(p_{u})).$ By the triangle inequality and the definition of $\mathcal{A}^{*}$, for any $p\in\mathcal{V}$, $d(\theta(p_{\mathcal{A}^{*}(X)}),\theta(p))\leq d(\mathcal{A}(X),\theta(p_{\mathcal{A}^{*}(X)}))+d(\mathcal{A}(X),\theta(p))\leq 2d(\mathcal{A}(X),\theta(p)).$ Therefore, we can restrict to a lower bound on the performance of DP classifiers: $\min_{\mathcal{A}\text{ is }(\varepsilon,\delta)\text{-DP}}\max_{p\in\mathcal{V}}\mathcal{R}_{\mathcal{A},n}(p)\geq\frac{1}{2}\min_{\mathcal{A^{*}}\text{ is }(\varepsilon,\delta)\text{-DP}}\max_{p\in\mathcal{V}}\mathbb{E}_{X\sim p^{n}}[d(\theta(p_{\mathcal{A}^{*}(X)}),\theta(p))].$ (11) Also, for any $(\varepsilon,\delta)$-DP classifier $\mathcal{A}^{*}$, $\displaystyle\max_{p\in\mathcal{V}}\mathbb{E}_{X\sim p^{n}}[d(\theta(p_{\mathcal{A}^{*}(X)}),\theta(p))]$ $\displaystyle\geq\frac{1}{|\mathcal{V}|}\sum_{u\in(\mathcal{E}_{k_{0}}\times\mathcal{E}_{k_{1}}\times\cdots)}\mathbb{E}_{X\sim p_{u}^{n}}[d(\theta(p_{\mathcal{A}^{*}(X)}),\theta(p_{u}))]$ $\displaystyle\geq\frac{2}{|\mathcal{V}|}\sum_{s}\tau_{s}\sum_{j=1}^{k_{s}}\sum_{u\in\mathcal{E}_{k_{0}}\times\mathcal{E}_{k_{1}}\times\cdots}\Pr_{X\sum p_{u}^{n}}(\mathcal{A}^{*}(X)_{j}^{s}\neq u_{j}^{s}),$ where the first inequality follows from the fact that the max is greater than the average, and the second follows from assumption (5). 
For each $(s,j)$ pair, we divide $\mathcal{E}_{k_{0}}\times\mathcal{E}_{k_{1}}\times\cdots$ into two groups; $\displaystyle\max_{p\in\mathcal{V}}$ $\displaystyle\mathbb{E}_{X\sim p^{n}}[d(\theta(p_{\mathcal{A}^{*}(X)}),\theta(p))]$ $\displaystyle\geq\frac{2}{|\mathcal{V}|}\sum_{s}\tau_{s}\sum_{j=1}^{k_{s}}\left[\sum_{u\in(\mathcal{E}_{k_{0}}\times\mathcal{E}_{k_{1}}\times\cdots)\;|\;u_{j}^{s}=+1}\Pr_{X\sum p_{u}^{n}}(\mathcal{A}^{*}(X)_{j}^{s}\neq u_{j}^{s})+\sum_{u\in(\mathcal{E}_{k_{0}}\times\mathcal{E}_{k_{1}}\times\cdots)\;|\;u_{j}^{s}=-1}\Pr_{X\sim p_{u}^{*}}(\mathcal{A}^{*}(X)_{j}^{s}\neq u_{j}^{s})\right]$ $\displaystyle\geq\frac{2}{|\mathcal{V}|}\sum_{s}\tau_{s}\sum_{j=1}^{k_{s}}\left[\sum_{u\in(\mathcal{E}_{k_{0}}\times\mathcal{E}_{k_{1}}\times\cdots)\;|\;u_{j}^{s}=+1}\Pr_{X\sim p_{u}^{n}}(\mathcal{A}^{*}(X)_{j}^{s}\neq u_{j}^{s})+\sum_{u\in(\mathcal{E}_{k_{0}}\times\mathcal{E}_{k_{1}}\times\cdots)\;|\;u_{j}^{s}=-1}\Pr_{X\sim p_{u}^{n}}(\mathcal{A}^{*}(X)_{j}^{s}\neq u_{j}^{s})\right]$ $\displaystyle\geq\sum_{s}\tau_{s}\sum_{j=1}^{k_{s}}(\Pr_{X\sim p_{+(s,j)}^{n}}(\mathcal{A}^{*}(X)\neq+1)+\Pr_{X\sim p_{-(s,j)}^{n}}(\mathcal{A}^{*}(X)\neq-1))$ $\displaystyle\geq\sum_{s}\tau_{s}\sum_{j=1}^{k_{s}}(\Pr_{X\sim p_{+(s,j)}^{n}}(\phi_{s,j}(X)\neq+1)+\Pr_{X\sim p_{-(s,j)}^{n}}(\phi_{s,j}(X)\neq-1)).$ Combining with eqn 11 we have the first statement. Next, since for each pair $(s,j)$, there exists a coupling $(X,Y)$ between $p_{+(s,j)}$ and $p_{-(s,j)}$ such that $\mathbb{E}[d_{Ham}(X,Y)]\leq D_{s}$, we can use the DP version of Le Cam’s method from [ASZ21] to give for any classifier $\phi_{s,j}$, $\Pr_{X\sim p_{+(s,j)}^{n}}(\phi_{s,j}(X)\neq+1)+\Pr_{X\sim p_{-(s,j)}^{n}}(\phi_{s,j}(X)\neq-1)\geq\frac{1}{2}(0.9e^{-10\varepsilon D_{s}}-10D_{s}\delta),$ which implies the final result. ∎ See 5.12 ###### Proof of Lemma 5.12. A standard result in the statistics literature states that for any pair of distributions $P$ and $Q$, $\min_{\phi}\left(\Pr_{X\sim P^{n}}(\phi(X)=1)+\Pr_{X\sim Q^{n}}(\phi(X)=-1)\right)=\frac{1}{2}(1-\text{\rm TV}(P^{n},Q^{n}))\geq\frac{1}{2}(1-\sqrt{n\text{\rm KL}(P,Q)}),$ where the minimum is over all binary classifiers. If $P=\texttt{Bernoulli}(p-\alpha)$ and $Q=\texttt{Bernoulli}(p+\alpha)$ where $0\leq\alpha\leq\frac{1}{2}L(p)$ then $\displaystyle\text{\rm KL}(Q,P)$ $\displaystyle=(p+\alpha)\ln\frac{p+\alpha}{p-\alpha}+(1-p-\alpha)\ln\frac{1-p-\alpha}{1-p+\alpha}$ $\displaystyle=(p+\alpha)\ln\left(1+\frac{2\alpha}{p-\alpha}\right)+(1-p-\alpha)\ln\left(1-\frac{2\alpha}{1-p+\alpha}\right)$ $\displaystyle\leq(p+\alpha)\frac{2\alpha}{p-\alpha}-(1-p-\alpha)\frac{2\alpha}{1-p+\alpha}$ $\displaystyle=\frac{4\alpha^{2}}{p-\alpha}+\frac{4\alpha^{2}}{1-p+\alpha}$ $\displaystyle=\frac{\alpha^{2}}{(p-\alpha)(1-p+\alpha)}$ $\displaystyle\leq\frac{1}{4n}.$ where the first inequality holds since $\ln(1+x)<x$ for $x\in[-1,1]$ and by assumption $2\alpha/(p-\alpha),2\alpha/(1-p+\alpha)\in[0,1]$ and the second follows again because of the constraint on $\alpha$. ∎ See 5.14 Lemma 5.14 is an immediate corollary of the following lemma. ###### Lemma C.1. For any distribution $P$, if $\log(n/\beta)>1$ then with probability $1-30pt\beta$, $\mathfrak{W}(\widehat{\mathfrak{G}_{P}},\mathfrak{G}_{P})\leq\sum_{\ell\in[0pt]}\sum_{x\in[N_{\ell}]}\min\left\\{P_{\ell}(x),1-P_{\ell}(x),4\sqrt{3\frac{P_{\ell}(x)\log(n/\beta)}{n}},4\sqrt{3\frac{(1-P_{\ell}(x))\log(n/\beta)}{n}}\right\\}$ ###### Proof of Lemma 5.14. 
We’ll consider each level of the tree individually then use a union bound over all the levels to obtain our final bound. Let $(\hat{P}_{\ell})_{n}$ be the empirical distribution without truncation. The following conditions are sufficient to ensure that the bounds hold for a single level $\ell$: $\sup_{\nu\text{ s.t. }P_{\ell}(\nu)\leq\frac{3\ln(n/\beta)}{n}}\hat{(P_{\ell})}_{n}(\nu)\leq\frac{7\ln(n/\beta)}{n}$ $\sup_{\nu\text{ s.t. }P_{\ell}(\nu)\geq 1-\frac{3\ln(n/\beta)}{n}}\hat{(P_{\ell})}_{n}(\nu)\geq 1-\frac{7\ln(n/\beta)}{n}$ $\forall\left(\nu\text{ s.t. }P_{\ell}(\nu)\in\left[\frac{3\ln(n/\beta)}{n},1-\frac{3\ln(n/\beta)}{n}\right]\right),\hskip 144.54pt$ $\hskip 72.26999pt|\hat{(P_{\ell})}_{n}(x)-P_{\ell}(\nu)|\leq\min\left\\{\sqrt{\frac{3P_{\ell}(\nu)\ln(n/\beta)}{n}},\sqrt{\frac{3(1-P_{\ell}(x))\ln(n/\beta)}{n}}\right\\}$ We will begin by showing these conditions are sufficient. If $P_{\ell}(\nu)\notin[\frac{3\ln(n/\beta)}{n},1-\frac{3\ln(n/\beta)}{n}]$ then these conditions imply that the empirical density for node $\nu$ is truncated, and hence the error that that node is either $P_{\ell}(\nu)$ or $1-P_{\ell}(\nu)$ (when $P_{\ell}(\nu)<1/2$ and $P_{\ell}(\nu)>1/2$, respectively), as required. If $P_{\ell}(\nu)\in[\frac{3\ln(n/\beta)}{n},1-\frac{3\ln(n/\beta)}{n}]$ then either the estimate is not truncated and the error is less than $\min\left\\{\sqrt{\frac{3P_{\ell}(\nu)\ln(2n/\beta)}{n}},\sqrt{\frac{3(1-P_{\ell}(x))\ln(2n/\beta)}{n}}\right\\}\leq\min\\{P_{\ell}(\nu),1-P_{\ell}(\nu)\\}$, as required. Or the estimate is truncated and the error is $\min\\{P_{\ell}(\nu),1-P_{\ell}(\nu)\\}$. Under the above conditions, if $P_{\ell}(\nu)\leq 1/2$ then truncation will only occur if $P_{\ell}(\nu)-\sqrt{\frac{3p\ln(2n/\beta)}{n}}\leq\frac{7\ln(n/\beta)}{n}\leq\sqrt{\frac{7\ln(n/\beta)}{n}\frac{7}{3}\frac{3\ln(n/\beta)}{n}}\leq\sqrt{\frac{7\ln(n/\beta)}{n}\frac{7}{3}p}=\frac{7}{3}\sqrt{\frac{3\ln(n/\beta)}{n}p},$ in which case $P_{\ell}(nu)\leq 4\sqrt{\frac{3P_{\ell}(\nu)\ln(n/\beta)}{n}}$, as required. Similarly, if $P_{\ell}(\nu)>1/2$ then truncation will only occur if $1-P_{\ell}(\nu)\leq 4\sqrt{\frac{3(1-P_{\ell}(\nu))\ln(n/\beta)}{n}}$, as required. We will now show that these conditions hold simultaneously with probability at least $1-3\beta$ for all the nodes at level $\ell$. If $P_{\ell}(\nu)\leq\frac{1}{en}$ then using the multiplicative form of Chernoff bound, $\displaystyle\Pr((\hat{P_{\ell}})_{n}(\nu)\geq\frac{3\ln(n/\beta)}{n})$ $\displaystyle=\Pr((\hat{P_{\ell}})_{n}(\nu)\geq\left(1+\frac{3\ln(n/\beta)}{P_{\ell}(\nu)n}-1\right)P_{\ell}(\nu))$ $\displaystyle\leq\left(\frac{e^{\frac{3\ln(n/\beta)}{nP_{\ell}(\nu)}-1}}{(\frac{3\ln(n/\beta)}{nP_{\ell}(\nu)})^{\frac{3\ln(n/\beta)}{nP_{\ell}(\nu)}}}\right)^{P_{\ell}(\nu)n}$ $\displaystyle\leq\left(\frac{enP_{\ell}(\nu)}{3\ln(n/\beta)}\right)^{3\ln(n/\beta)}$ $\displaystyle\leq P_{\ell}(\nu)n\left(\frac{e}{3\ln(n/\beta)}\right)^{3\ln(n/\beta)}(nP_{\ell}(\nu))^{3\ln(n/\beta)-1}$ Firstly, since $\ln(n/\beta)\geq 1$, $\left(\frac{e}{3\ln(n/\beta)}\right)^{3\ln(n/\beta)}\leq 1$. Further, $nP_{\ell}(\nu)\leq 1/e$ and $3\ln(n/\beta)-1\geq\ln(n/\beta)$ so $(nP_{\ell}(\nu))^{3\ln(n/\beta)-1}\leq(1/e)^{\ln(n/\beta)}=\beta/n$. Therefore, $\Pr((\hat{P_{\ell}})_{n}(\nu)\geq\frac{3\ln(n/\beta)}{n})\leq P_{\ell}(\nu)\beta.$ (12) Let $\mathcal{S}=\\{x\in[N_{\ell}]\;|\;P_{\ell}(x)<1/(en)\\}$ then using a union bound and Eqn (12) we have $\displaystyle\Pr(\exists x\in\mathcal{S}\text{ s.t. 
}\hat{(P_{\ell})}_{n}(x)\geq\frac{2\sqrt{2}\log(n/\beta)}{n})$ $\displaystyle\leq\sum_{x\in\mathcal{S}}P_{\ell}(\nu)\beta\leq\beta$ There exist at most $n$ elements in $[N_{\ell}]$ that do not belong in $\mathcal{S}$. We will prove that, independently, each of these elements satisfy the required condition with probability $\leq 2\beta/n$ then a union bound proves the final result. If $P_{\ell}(\nu)\in[\frac{3\ln(n/\beta)}{n},1-\frac{3\ln(n/\beta)}{n}]$ then using the multiplicative form of Chernoff bound (If $X_{i}$ are all i.i.d. and $0<\delta<1$, then $\Pr(|\sum_{i=1}^{n}X_{i}-n\mathbb{E}[X_{1}]|\geq\delta n\mathbb{E}[X_{1}])\leq 2e^{-\delta^{2}n\mathbb{E}[X_{1}]/3}$), $\displaystyle\Pr(|\hat{(P_{\ell})}_{n}(x)-P_{\ell}(x)|\geq\sqrt{\frac{3P_{\ell}(x)\log(n/\beta)}{n}})$ $\displaystyle=\Pr(|\hat{(P_{\ell})}_{n}(x)-P_{\ell}(x)|\geq\sqrt{\frac{3\log(n/\beta)}{P_{\ell}(x)n}}P_{\ell}(x))$ $\displaystyle\leq 2e^{\frac{-\left(\frac{3\log(n/\beta)}{P_{\ell}(x)n}\right)P_{\ell}(x)n}{3}}$ $\displaystyle=2\beta/n.$ Next, if $P_{\ell}(\nu)\leq\frac{3\ln(n/\beta)}{n}$ then using the additive form of Chernoff bound (If $X_{i}$ are all i.i.d. and $\varepsilon\geq 0$, then $\Pr(\frac{1}{n}\sum_{i=1}^{n}X_{i}\geq\mathbb{E}[X_{1}]+\varepsilon)\leq e^{-\varepsilon^{2}n/(2(p+\varepsilon))}$) $\displaystyle\Pr((\hat{P_{\ell}})_{n}(\nu)\geq\frac{7\ln(n/\beta)}{n})$ $\displaystyle\leq\Pr((\hat{P_{\ell}})_{n}(\nu)\geq p+(7\frac{\ln(n/\beta)}{n}-p))$ $\displaystyle\leq e^{-\frac{(7\frac{\ln(n/\beta)}{n}-p)^{2}n}{14\frac{\ln(n/\beta)}{n}}}$ $\displaystyle\leq e^{-\frac{(4\frac{\ln(n/\beta)}{n})^{2}n}{14\frac{\ln(n/\beta)}{n}}}$ $\displaystyle\leq e^{-\ln(n/\beta)}$ $\displaystyle=\beta/n.$ By symmetry, if $P_{\ell}(\nu)\geq 1-\frac{3\ln(n/\beta)}{n}$ then $\Pr((\hat{P_{\ell}})_{n}(\nu)\leq 1-\frac{7\ln(n/\beta)}{n})\leq\beta/n.$ ∎ See 5.15 ###### Proof of Lemma 5.15. First notice that if a node $\nu$ is an $\alpha$-active node, then all of it’s ancestor nodes are also $\alpha$-active. So, it suffices to show that (with high probability) if at any stage a node makes to it Line 7 of Algorithm 3, then if $\nu\notin\gamma_{{P}}\left({2\kappa}\right)$ then $\widehat{\mathfrak{G}_{P}}(\nu)+\mathsf{Lap}(\frac{1}{\varepsilon n})\leq 2\kappa+\frac{\log(2/\beta)}{\varepsilon n}$ and if $\nu\in\gamma_{{P}}\left({\max\left\\{\frac{2}{\varepsilon n}+4\frac{\log(2/\beta)}{\varepsilon n},\frac{\log(n/\beta)}{n}\right\\}}\right)$ then $\widehat{\mathfrak{G}_{P}}(\nu)+\mathsf{Lap}(\frac{1}{\varepsilon n}))>2\kappa+\frac{\log(2/\beta)}{\varepsilon n}$. By Lemma 5.14, with probability $1-30pt\beta$, all nodes $\nu$ satisfy $|\widehat{\mathfrak{G}_{P}}(\nu)-\mathfrak{G}_{P}(\nu)|\leq\min\left\\{\mathfrak{G}_{P}(\nu)(1-\mathfrak{G}_{P}(\nu)),4\sqrt{\frac{3\mathfrak{G}_{P}(\nu)(1-\mathfrak{G}_{P}(\nu))\log(n/\beta)}{n}}\right\\}$ Further, if one samples $X$ independent samples from $\mathsf{Lap}(\frac{1}{\varepsilon n})$ then with probability $1-X\beta$, $\sup|\mathsf{Lap}(\frac{1}{\varepsilon n})|\leq\frac{\ln(2/\beta)}{\varepsilon n}.$ So conditioning on both these events if $x\notin\gamma_{{P}}\left({\frac{1}{2\varepsilon n}}\right)$, $\widehat{\mathfrak{G}_{P}}(\nu)+\mathsf{Lap}(\frac{1}{\varepsilon n})\leq\mathfrak{G}_{P}(\nu)+\frac{\ln(2/\beta)}{\varepsilon n}\leq\frac{1}{2\varepsilon n}+\frac{\ln(2/\beta)}{\varepsilon n},$ so they will not survive Line 7 of Algorithm 3. 
If $x\in\gamma_{{P}}\left({\max\\{\frac{2}{\varepsilon n}+4\frac{\log(2/\beta)}{\varepsilon n},\frac{192\log(n/\beta)}{n}\\}}\right)$ then $\displaystyle\widehat{\mathfrak{G}_{P}}(\nu)+\mathsf{Lap}(\frac{1}{\varepsilon n})$ $\displaystyle\geq\mathfrak{G}_{P}(\nu)-4\sqrt{3\frac{\mathfrak{G}_{P}(\nu)\log(n/\beta)}{n}}-\frac{\ln(2/\beta)}{\varepsilon n}$ $\displaystyle\geq\frac{1}{2}\mathfrak{G}_{P}(\nu)-\frac{\log(2/\beta)}{\varepsilon n}$ $\displaystyle\geq\frac{1}{\varepsilon n}+\frac{\log(2/\beta)}{n}$ Each level has at most $2\varepsilon n$ in $\gamma_{{P}}\left({\frac{1}{2\varepsilon n}}\right)$ so we query at most $4\varepsilon n$ nodes in the tree when running LocateActiveNodes since each node has at most 2 children. Therefore, we can set $X=4\varepsilon n0pt$. ∎ See 5.16 ###### Proof of Lemma 5.16. The key component of this proof is that any discrepancy between the weight of the nodes on $P$ and that assigned by $\widehat{\mathfrak{G}_{P}}$ was already paid for in $\mathcal{W}(P,\widehat{\mathfrak{G}_{P}})$. $\displaystyle\mathfrak{W}(\widehat{\mathfrak{G}_{P}},\widehat{\mathfrak{G}_{P}}|_{\hat{\gamma}_{\varepsilon}})$ $\displaystyle=\sum_{\nu\notin\hat{\gamma}_{\varepsilon}}r_{\nu}|\widehat{\mathfrak{G}_{P}}(\nu)|$ $\displaystyle\leq\sum_{\nu\notin\gamma_{{P}}\left({\max\\{\frac{2}{\varepsilon n}+4\frac{\log(2/\beta)}{\varepsilon n},\frac{192\log(n/\beta)}{n}\\}}\right)}r_{\nu}|\widehat{\mathfrak{G}_{P}}(\nu)|$ $\displaystyle=\sum_{\nu\notin\gamma_{{P}}\left({\max\\{\frac{2}{\varepsilon n}+4\frac{\log(2/\beta)}{\varepsilon n},\frac{192\log(n/\beta)}{n}\\}}\right)}r_{\nu}|\widehat{\mathfrak{G}_{P}}(\nu)-\mathfrak{G}_{P}(\nu)+\mathfrak{G}_{P}(\nu)|$ $\displaystyle\leq\sum_{\nu\notin\gamma_{{P}}\left({\max\\{\frac{2}{\varepsilon n}+4\frac{\log(2/\beta)}{\varepsilon n},\frac{192\log(n/\beta)}{n}\\}}\right)}r_{\nu}|\widehat{\mathfrak{G}_{P}}(\nu)-\mathfrak{G}_{P}(\nu)|$ $\displaystyle\hskip 144.54pt+\sum_{\nu\notin\gamma_{{P}}\left({\max\\{\frac{2}{\varepsilon n}+2\frac{\log(2/\beta)}{n},\frac{192\log(n/\beta)}{n}\\}}\right)}r_{\nu}|\mathfrak{G}_{P}(\nu)|$ $\displaystyle\leq\mathfrak{W}(\mathfrak{G}_{P},\widehat{\mathfrak{G}_{P}})+\mathfrak{W}(\mathfrak{G}_{P},\mathfrak{G}_{P}|_{\gamma_{{P}}\left({\max\\{\frac{2}{\varepsilon n}+4\frac{\log(2/\beta)}{\varepsilon n},\frac{192\log(n/\beta)}{n}\\}}\right)})$ as required. ∎ See 5.17 ###### Proof of Lemma 5.17. We first note that for any pair of sequences of real values $a_{1},\cdots,a_{k}$ and $b_{1},\cdots,b_{k}$, and constant $A$ such that $\sum_{i}a_{i}\neq 0$, $\sum|\frac{A}{\sum a_{i}}a_{i}-b_{i}|\leq\sum|\frac{A}{\sum a_{i}}a_{i}-a_{i}|+|a_{i}-b_{i}|=|A-\sum a_{i}|+\sum|a_{i}-b_{i}|\leq|A-\sum b_{i}|+2\sum|a_{i}-b_{i}|.$ Also if $\sum a_{i}=0$ then $\sum|\frac{A}{k}-b_{i}|\leq\sum|\frac{A}{k}-\frac{\sum_{i}b_{i}}{k}|+|\frac{\sum_{i}b_{i}}{k}-b_{i}|=|A-\sum b_{i}|+2\sum|b_{i}|=|A-\sum b_{i}|+2\sum|a_{i}-b_{i}|$ Let $\bar{\mathfrak{G}}^{\ell}$ be the function $\bar{\mathfrak{G}}$ after only levels $0,\cdots,\ell$ have been updated. So $\bar{\mathfrak{G}}^{\ell}$ matches $\bar{\mathfrak{G}}^{\ell-1}$ on all levels except $\ell$. Let $\nu$ be a node in the $\ell$th level of the HST. 
If we suppose the sum is over the normalised children of a node $\nu$, $A=\bar{\mathfrak{G}}^{\ell-1}(\nu)$, and for all the children $\nu^{\prime}$ of $\nu$, $a_{i}=\mathfrak{G}(\nu^{\prime})$ and $b_{i}=\mathfrak{G}_{P}(\nu^{\prime})$, we can see that the contribution to the Wasserstein distance by the children increases by an additive factor of $|\mathfrak{G}^{\ell-1}(\nu)-\mathfrak{G}_{P}(\nu)|$. Iterating, we can see that $\mathcal{W}(P,\texttt{Projection}(\mathfrak{G}))\leq 2\sum_{\ell=0}^{0pt}\sum_{\nu\text{ at level }\ell}(r_{\ell}+r_{\ell+1}\cdots r_{0pt})|\mathfrak{G}(\nu)-\mathfrak{G}_{P}(\nu)|\leq 4\sum_{\ell=0}^{0pt}\sum_{\nu\text{ at level }\ell}r_{\ell}|\mathfrak{G}(\nu)-\mathfrak{G}_{P}(\nu)|,$ which is 4 times the wasserstein distance. ∎ ## Appendix D Local Minimality in the High Dimensional Setting ###### Theorem D.1. Given any $\varepsilon>0$, and a distribution $P$, and let $n^{\prime}=\frac{5}{4\min\\{W(\frac{0.45\varepsilon}{\delta}),0.6\\}}n$, then for all $(\varepsilon,\delta)$-DP algorithms $\mathcal{A}^{\prime}$, there exists a distribution $Q\in\mathcal{N}(P)$ such that with probability $1-(0pt\log n+40pt\varepsilon n)\beta$, $\mathcal{W}(Q,\hat{Q_{\varepsilon,n^{\prime}}})\leq\tilde{O}(\mathbb{E}_{X\sim Q^{n},\mathcal{A^{\prime}}}(\mathcal{W}(\mathcal{A^{\prime}}(X),Q))),$ where $\hat{Q_{\varepsilon,n^{\prime}}}$ is the output of $\texttt{PrivDensityEstTree}(Q)$ with $n^{\prime}$ samples. ###### Proof. First, let us obtain a slightly simpler upper bound on $\mathcal{W}(P,\hat{P}_{\varepsilon})$. From eqn (8) in the proof of Theorem 5.13 we have that for each level $\ell$, $\mathcal{W}(P_{\ell},(\hat{P}_{\varepsilon})_{\ell})\leq 2\left(\mathfrak{W}((\mathfrak{G}_{P})_{\ell},(\widehat{\mathfrak{G}_{P}})_{\ell})+\mathfrak{W}((\widehat{\mathfrak{G}_{P}})_{\ell},(\widehat{\mathfrak{G}_{P}}|{\hat{\gamma}_{\varepsilon}})_{\ell})+\mathfrak{W}((\widehat{\mathfrak{G}_{P}}|{\hat{\gamma}_{\varepsilon}})_{\ell},(\widetilde{\mathfrak{G}_{\hat{P}_{n},\hat{\gamma}_{\varepsilon}}})_{\ell})\right),$ from Lemma 5.15 we have that with probability $1-0pt(\log n+4\varepsilon n)\beta$, $\gamma_{{P}}\left({\max\left\\{\frac{2}{\varepsilon n}+4\frac{\log(2/\beta)}{\varepsilon n},\frac{192\log(n/\beta)}{n}\right\\}}\right)\subset\hat{\gamma}_{\varepsilon}\subset\gamma_{{P}}\left({\frac{1}{2\varepsilon n}}\right),$ and if one samples $4\varepsilon n0pt$ independent samples from $\mathsf{Lap}(\frac{1}{\varepsilon n})$ then we have that with probability $1-4\varepsilon n0pt\beta$, $\sup|\mathsf{Lap}(\frac{1}{\varepsilon n})|\leq\frac{\ln(2/\beta)}{\varepsilon n}.$ Therefore, for all $\nu\notin\hat{\gamma}_{\varepsilon}$ we have $P_{\ell}(\nu)\leq\max\left\\{\frac{2}{\varepsilon n}+4\frac{\log(2/\beta)}{\varepsilon n},\frac{192\log(n/\beta)}{n}\right\\}\leq C\frac{\ln(n/\beta)}{\varepsilon n}$ for some constant $C$ therefore, $\displaystyle\mathfrak{W}((\widehat{\mathfrak{G}_{P}})_{\ell},(\widehat{\mathfrak{G}_{P}}|{\hat{\gamma}_{\varepsilon}})_{\ell})+\mathfrak{W}((\widehat{\mathfrak{G}_{P}}|{\hat{\gamma}_{\varepsilon}})_{\ell},(\widetilde{\mathfrak{G}_{\hat{P}_{n},\hat{\gamma}_{\varepsilon}}})_{\ell})$ $\displaystyle\leq\sum_{\nu\notin\hat{\gamma}_{\varepsilon}}P_{\ell}(\nu)+\sum_{\nu\in\hat{\gamma}_{\varepsilon}}\frac{\ln(2/\beta)}{\varepsilon n}$ $\displaystyle\leq\sum_{\nu\notin\gamma_{{P}}\left({\frac{1}{2\varepsilon n}}\right)}P_{\ell}(\nu)+\sum_{\nu\in\gamma_{{P}}\left({\frac{1}{2\varepsilon n}}\right)\backslash\hat{\gamma_{\varepsilon}}}C\frac{\ln(n/\beta)}{\varepsilon 
n}+\sum_{\nu\in\hat{\gamma}_{\varepsilon}}\frac{\ln(2/\beta)}{\varepsilon n}$ $\displaystyle\leq\sum_{\nu\notin\gamma_{{P}}\left({\frac{1}{2\varepsilon n}}\right)}P_{\ell}(\nu)+C\ln(n/\beta)\sum_{\nu\in\gamma_{{P}}\left({\frac{1}{2\varepsilon n}}\right)}\frac{1}{\varepsilon n}.$ For the same reason as in the proof of Theorem 5.13, we can upper bound $\sum_{\nu\in\gamma_{{P}}\left({\frac{1}{2\varepsilon n}}\right)}\frac{1}{\varepsilon n}$ by $(|\gamma_{{P}}\left({\frac{1}{2\varepsilon n}}\right)|-1)\frac{1}{\varepsilon n}$ by dealing with the $|\gamma_{{P}}\left({\frac{1}{2\varepsilon n}}\right)|=1$ case separately. Therefore, $\displaystyle\mathcal{W}(P,\hat{P}_{\varepsilon})$ $\displaystyle\leq 2C\ln(n/\beta)\left(\sum_{\nu}\min\left\\{\mathfrak{G}_{P}(\nu)(1-\mathfrak{G}_{P}(\nu)),\sqrt{\frac{\mathfrak{G}_{P}(\nu)(1-\mathfrak{G}_{P}(\nu))}{n}}\right\\}+\sum_{\nu\notin\gamma_{{P}}\left({\frac{1}{2\varepsilon n}}\right)}\mathfrak{G}_{P}(\nu)+(|\gamma_{{P}}\left({\frac{1}{2\varepsilon n}}\right)|-1)\frac{1}{\varepsilon n}\right),$ Further, by Theorem 5.7 and Theorem 5.6, given $\varepsilon>0$ and $\delta\in[0,1]$, let $\kappa=\frac{1}{10\varepsilon n}\min\\{W\left(\frac{0.45\varepsilon}{\delta}\right),0.6\\}$ where $W(x)$ is the Lambert W function so $W(x)e^{W(x)}=x$. Given a distribution $P$, there exists a constant $C^{\prime}$ such that $\mathcal{R}_{\mathcal{N},n,\varepsilon}(P)\geq\frac{C^{\prime}}{D_{T}}\left(\sum_{\nu}\min\left\\{\mathfrak{G}_{P}(\nu)(1-\mathfrak{G}_{P}(\nu)),\sqrt{\frac{\mathfrak{G}_{P}(\nu)(1-\mathfrak{G}_{P}(\nu))}{n}}\right\\}+\sum_{\nu\notin\gamma_{{P}}\left({2\kappa}\right)}\mathfrak{G}_{P}(\nu)+(|\gamma_{{P}}\left({2\kappa}\right)|-1)\kappa\right)$ Let $Q\in\mathcal{N}(P)$, then $\gamma_{{P}}\left({\frac{1}{\varepsilon n}}\right)\subset\gamma_{{Q}}\left({\frac{1}{2\varepsilon n}}\right)\subset\gamma_{{P}}\left({\frac{1}{4\varepsilon n}}\right)$ so $\displaystyle\sum_{\nu\notin\gamma_{{Q}}\left({\frac{1}{2\varepsilon n}}\right)}\mathfrak{G}_{Q}(\nu)+(|\gamma_{{Q}}\left({\frac{1}{2\varepsilon n}}\right)|-1)\frac{1}{\varepsilon n}$ $\displaystyle=\sum_{\nu\notin\gamma_{{P}}\left({\frac{1}{4\varepsilon n}}\right)}\mathfrak{G}_{Q}(\nu)+\sum_{\nu\in\gamma_{{P}}\left({\frac{1}{4\varepsilon n}}\right)\backslash\gamma_{{Q}}\left({\frac{1}{2\varepsilon n}}\right)}\mathfrak{G}_{Q}(\nu)+(|\gamma_{{Q}}\left({\frac{1}{2\varepsilon n}}\right)|-1)\frac{1}{\varepsilon n}$ $\displaystyle\leq\sum_{\nu\notin\gamma_{{P}}\left({\frac{1}{4\varepsilon n}}\right)}2\mathfrak{G}_{P}(\nu)+\sum_{\nu\in\gamma_{{P}}\left({\frac{1}{4\varepsilon n}}\right)\backslash\gamma_{{Q}}\left({\frac{1}{2\varepsilon n}}\right)}\frac{1}{\varepsilon n}+(|\gamma_{{Q}}\left({\frac{1}{2\varepsilon n}}\right)|-1)\frac{1}{\varepsilon n}$ $\displaystyle\leq\sum_{\nu\notin\gamma_{{P}}\left({\frac{1}{4\varepsilon n}}\right)}2\mathfrak{G}_{P}(\nu)+(|\gamma_{{P}}\left({\frac{1}{4\varepsilon n}}\right)|-1)\frac{1}{\varepsilon n}.$ Now, let $n^{\prime}=\frac{5}{4\min\\{W(\frac{0.45\varepsilon}{\delta}),0.6\\}}n\geq n$ so for all $Q\in\mathcal{N}(P)$, $\displaystyle\mathcal{W}(Q,\hat{Q_{\varepsilon,n^{\prime}}})$ $\displaystyle\leq\tilde{O}\left(\sum_{\nu}\min\left\\{\mathfrak{G}_{Q}(\nu)(1-\mathfrak{G}_{Q}(\nu)),\sqrt{\frac{\mathfrak{G}_{Q}(\nu)(1-\mathfrak{G}_{Q}(\nu))}{n^{\prime}}}\right\\}+\sum_{\nu\notin\gamma_{{Q}}\left({\frac{1}{2\varepsilon n^{\prime}}}\right)}\mathfrak{G}_{Q}(\nu)+(|\gamma_{{Q}}\left({\frac{1}{2\varepsilon n^{\prime}}}\right)|-1)\frac{1}{\varepsilon n^{\prime}}\right)$ 
$\displaystyle\leq\tilde{O}\left(\sum_{\nu}2\min\left\\{\mathfrak{G}_{P}(\nu)(1-\mathfrak{G}_{P}(\nu)),\sqrt{\frac{\mathfrak{G}_{P}(\nu)(1-\mathfrak{G}_{P}(\nu))}{n^{\prime}}}\right\\}+\sum_{\nu\notin\gamma_{{P}}\left({\frac{1}{4\varepsilon n^{\prime}}}\right)}2\mathfrak{G}_{P}(\nu)+(|\gamma_{{P}}\left({\frac{1}{4\varepsilon n^{\prime}}}\right)|-1)\frac{1}{\varepsilon n^{\prime}}\right)$ $\displaystyle=\tilde{O}\left(\sum_{\nu}2\min\left\\{\mathfrak{G}_{P}(\nu)(1-\mathfrak{G}_{P}(\nu)),\sqrt{\frac{\mathfrak{G}_{P}(\nu)(1-\mathfrak{G}_{P}(\nu))}{n}}\right\\}+\sum_{\nu\notin\gamma_{{P}}\left({2\kappa}\right)}2\mathfrak{G}_{P}(\nu)+(|\gamma_{{P}}\left({2\kappa}\right)|-1)\frac{1}{\varepsilon n}\right)$ $\displaystyle\leq\tilde{O}\left(\min_{\mathcal{A}^{\prime}}\max_{Q^{\prime}\in\mathcal{N}(P)}\mathbb{E}_{X\sim Q^{\prime n},\mathcal{A^{\prime}}}(\mathcal{W}(\mathcal{A^{\prime}}(X),Q^{\prime}))\right).$ As in Proposition 3.5, since $\mathcal{N}(P)$ is compact, for all $\mathcal{A}^{\prime}$, there exists a specific $Q^{*}\in\mathcal{N}(P)$ such that $\mathcal{W}(Q^{*},\hat{Q^{*}_{\varepsilon,n^{\prime}}})\leq\tilde{O}(\mathbb{E}_{X\sim(Q^{*})^{n},\mathcal{A^{\prime}}}(\mathcal{W}(\mathcal{A^{\prime}}(X),Q^{*})))$ ∎ ## Appendix E Differentially Private Quantiles Estimating appropriately chosen quantiles is the main part of our algorithm for approximating the distribution over $\mathbb{R}$ in Wasserstein distance, and so in this section, we describe some known differentially private algorithms for this task and derive some corollaries that we use extensively in our application. We will use $F$ to represent CDF functions, with $F_{P}$ representing the CDF of distribution $P$. We start by stating an important theorem on private CDF estimation. This follows from a use of the binary tree mechanism [CSS11, DNPR10]. A version of this theorem for approximate differential privacy is described in a survey by Kamath and Ullman [KU20, Theorem 4.1]. The version presented here for pure differential privacy follows from a very similar argument, except using Laplace Noise instead of Gaussian noise (and basic composition instead of advanced composition to analyze privacy). Their accuracy was also in expectation, but a similar analysis yields a high probability bound, as in the theorem below. ###### Theorem E.1. [KU20, Theorem 4.1] Let $\varepsilon,\beta\in(0,1]$, let $D$ be an ordered, finite domain, and let ${\bf x}\in D^{n}$ be a dataset. Let $\hat{P}_{n}$ be the uniform distribution on ${\bf x}$. Then, there exists an $\varepsilon$-DP algorithm $A^{CDF}$ that on input ${\bf x}$ and the domain $D$ outputs a vector $G$ over $D$ such that with probability at least $1-\beta$ over the randomness of $A^{CDF}$: $\|G-F_{\hat{P}_{n}}\|_{\infty}=O\left(\frac{\log^{3}\frac{|D|}{\beta}}{\varepsilon n}\right).$ CDF estimation is intimately related to quantile estimation, and we use the following quantitative statement that will follow from a simple application of Theorem E.1. ###### Theorem E.2. Fix any $n>0$, $\varepsilon,\beta\in(0,1]$, $a,b\in\mathbb{R}$, and $\gamma<b-a\in\mathbb{R}$ such that $\frac{b-a}{\gamma}$ is an integer. Let $C$ be a sufficiently large constant. 
Then, there exists an $\varepsilon$-DP algorithm $A_{quant}$ that, on input interval end points $a,b$, granularity $\gamma$, ${\bf x}=(x_{1},\dots,x_{n})\in\\{a,a+\gamma,\dots,b-\gamma,b\\}^{n}$, and desired quantile values ${\bf\alpha}\in(0,1)^{k}$, outputs quantiles $\tilde{q}\in\\{a,a+\gamma,\dots,b-\gamma,b\\}^{k}$ such that with probability at least $1-\beta$ over the randomness of $A_{quant}$, for all $r\in[k]$, $\alpha_{r}-F_{\hat{P}_{n}}(\tilde{q}_{r})\leq C\frac{\log^{3}\frac{b-a}{\beta\gamma}}{\varepsilon n},$ and $\Pr_{y\sim\hat{P}_{n}}(y<\tilde{q}_{r})<\alpha_{r}+C\frac{\log^{3}\frac{b-a}{\beta\gamma}}{\varepsilon n},$ where $\hat{P}_{n}$ is the uniform distribution on the entries of ${\bf x}$.
###### Proof.
Algorithm $A_{quant}$ operates by running the algorithm $A^{CDF}$ referenced in Theorem E.1 on ${\bf x}$ and domain $\\{a,a+\gamma,\dots,b-\gamma,b\\}$, and post-processing its output to get quantile estimates as follows. For every quantile $\alpha_{r}$ that we are asked to estimate, $A_{quant}$ simply scans the vector $G$ output by algorithm $A^{CDF}$ in order, and outputs the first domain element whose CDF estimate in $G$ crosses $\alpha_{r}$. Conditioned on the accuracy of the CDF estimate $G$, we have that this output $\tilde{q}_{r}$ satisfies $\alpha_{r}-F_{\hat{P}_{n}}(\tilde{q}_{r})\leq C\frac{\log^{3}\frac{b-a}{\beta\gamma}}{\varepsilon n}.$ Additionally, since $\tilde{q}_{r}$ is the first domain element whose estimate in $G$ crosses $\alpha_{r}$, we also have that $\Pr_{y\sim\hat{P}_{n}}(y<\tilde{q}_{r})<\alpha_{r}+C\frac{\log^{3}\frac{b-a}{\beta\gamma}}{\varepsilon n}.$ Hence, with probability at least $1-\beta$, we have this property for all $r\in[k]$. ∎ We now state a corollary of this theorem that we will use extensively in our presentation.
###### Corollary E.3.
Fix any $\varepsilon,\beta\in(0,1]$, $a,b\in\mathbb{R}$, and $\gamma<b-a\in\mathbb{R}$ such that $\frac{b-a}{\gamma}$ is an integer. Let $n\in\mathbb{N}$ with $n>\frac{4c_{2}\log^{4}(\frac{b-a}{\beta\gamma\varepsilon})}{\varepsilon}$ be such that $k$, set to $\lceil\frac{\varepsilon n}{4c_{3}\log^{3}\frac{b-a}{\beta\gamma}\log\frac{n}{\beta}}\rceil$, is an integer greater than or equal to $1$, where $c_{2}$ and $c_{3}$ are sufficiently large constants. ($k$ is set to be sufficiently small in order to relate the accuracy of the quantiles algorithm to a parameter depending on $k$, and $n$ is set sufficiently large that $k$ is not less than $1$; the dependence on $\beta$ comes up in the proof of 6.18.) Then, there exists an $\varepsilon$-DP algorithm $A_{quant}$ (the same one referenced in Theorem E.2) that, on input interval end points $a,b$, granularity $\gamma$, ${\bf x}=(x_{1},\dots,x_{n})\in\\{a,a+\gamma,\dots,b-\gamma,b\\}^{n}$, and desired quantile values ${\bf\alpha}=\\{1/2k,3/2k,5/2k,\dots,(2k-1)/2k\\}$, outputs quantiles $\tilde{q}\in\\{a,a+\gamma,\dots,b-\gamma,b\\}^{k}$ such that with probability at least $1-\beta$, for all $r\in[k]$, $\hat{q}_{\frac{2r-1}{2k}-\frac{1}{4k}}\leq\tilde{q}_{r}\leq\hat{q}_{\frac{2r-1}{2k}+\frac{1}{4k}},$ where $\hat{P}_{n}$ is the uniform distribution on the entries of ${\bf x}$ and, for all $p\in(0,1)$, $\hat{q}_{p}$ is the $p$-quantile of $\hat{P}_{n}$.
###### Proof.
First, note that $k$ is set such that $\frac{1}{4k}\geq C\frac{\log^{3}\frac{b-a}{\beta\gamma}}{\varepsilon n}$.
Hence, by Theorem E.2, we have that with probability at least $1-\beta$, for all $r\in[k]$, $\frac{2r-1}{2k}-F_{\hat{P}_{n}}(\tilde{q}_{r})\leq\frac{1}{4k},$ and $\Pr_{y\sim\hat{P}_{n}}(y<\tilde{q}_{r})<\frac{2r-1}{2k}+\frac{1}{4k}.$ Condition on the event above for the rest of the proof. Note that the first equation implies that for all $r\in[k]$, $\Pr_{y\sim\hat{P}_{n}}(y\leq\tilde{q}_{r})\geq\frac{2r-1}{2k}-\frac{1}{4k},$ which implies that $\tilde{q}_{r}\geq\hat{q}_{\frac{2r-1}{2k}-\frac{1}{4k}}$. Next, note that we also have that for all $r\in[k]$, $\Pr_{y\sim\hat{P}_{n}}(y<\tilde{q}_{r})<\frac{2r-1}{2k}+\frac{1}{4k}.$ This implies that for all $r\in[k]$, $\tilde{q}_{r}\leq\hat{q}_{\frac{2r-1}{2k}+\frac{1}{4k}}$. ∎
## Appendix F Proofs in Section 6
### F.1 Omitted Proofs in Section 6.1.1
###### Proof of Lemma 6.6.
We evaluate the various terms in Theorem 6.5. We start by evaluating $\tau(P,Q)=\max\\{\int_{\mathbb{R}}\max(f_{P}(t)-e^{\varepsilon}f_{Q}(t),0)dt,\int_{\mathbb{R}}\max(f_{Q}(t)-e^{\varepsilon}f_{P}(t),0)dt\\}$. Consider the first term in the outer maximum. For all $t\in[L(P),q_{1/k})$, we have that $f_{Q}(t)=\frac{1}{2}f_{P}(t)$. For all other $t$, one can see that the value of the integrand is $0$. Hence, the value of the first term is $\int_{L(P)}^{q_{1/k}}\max(f_{P}(t)-\frac{e^{\varepsilon}}{2}f_{P}(t),0)dt=\max\\{\left(1-\frac{e^{\varepsilon}}{2}\right)\frac{1}{k},0\\}\leq\frac{1}{2k}$. Now, consider the second term in the outer maximum. For all $t<q_{1-\frac{1}{k}}$, the value of the integrand is $0$. For all $q_{1-\frac{1}{k}}\leq t\leq q_{1}$, the value of the integrand is $\max\\{\left(\frac{3}{2}-e^{\varepsilon}\right)f_{P}(t),0\\}$. Hence, the second term is $\max\\{\left(\frac{3}{2}-e^{\varepsilon}\right)\frac{1}{k},0\\}\leq\frac{1}{2k}$. Putting these together, we get that $\tau(P,Q)\leq\frac{1}{2k}$. When $\varepsilon\geq\ln 2$, we have that $1-\frac{e^{\varepsilon}}{2}\leq 0$, and so the largest value of $\varepsilon^{\prime}\in[0,\varepsilon]$ that makes $\int_{\mathbb{R}}\max(f_{Q}(t)-e^{\varepsilon^{\prime}}f_{P}(t),0)dt=\tau(P,Q)=0$ is $\varepsilon^{\prime}=\varepsilon$. When $\varepsilon<\ln 2$, the value of $\varepsilon^{\prime}$ that makes $\int_{\mathbb{R}}\max(f_{Q}(t)-e^{\varepsilon^{\prime}}f_{P}(t),0)dt=\max\\{\left(\frac{3}{2}-e^{\varepsilon^{\prime}}\right)\frac{1}{k},0\\}$ equal to $\left(1-\frac{e^{\varepsilon}}{2}\right)\frac{1}{k}$ is $\varepsilon^{\prime}=\ln\left(\frac{1+e^{\varepsilon}}{2}\right)$. Finally, we describe the distributions $P^{\prime}$ and $Q^{\prime}$ and compute the squared Hellinger distance between them. There are two cases, based on the range of $\varepsilon$. First, consider $\varepsilon\geq\ln 2$. We first calculate $\tilde{P}\equiv\min\\{e^{\varepsilon}Q,P\\}$. This value is equal to $\min\\{e^{\varepsilon}/2,1\\}f_{P}(t)=f_{P}(t)$ for $t<q_{\frac{1}{k}}(P)$, and is also equal to $f_{P}(t)$ for $q_{\frac{1}{k}}\leq t\leq q_{1}$. Similarly, consider $\tilde{Q}\equiv\min\\{e^{\varepsilon^{\prime}}P,Q\\}=\min\\{e^{\varepsilon}P,Q\\}$; it is equal to $\frac{f_{P}(t)}{2}$ for $t<q_{\frac{1}{k}}(P)$, and is equal to $f_{P}(t)$ for $q_{\frac{1}{k}}\leq t\leq q_{1-\frac{1}{k}}$. It is also equal to $\min(e^{\varepsilon},\frac{3}{2})f_{P}(t)=\frac{3}{2}f_{P}(t)$ for $q_{1-\frac{1}{k}}\leq t\leq q_{1}$. Since $\tau(P,Q)=0$, and by the above calculations, we have that $P^{\prime}=P$, and $Q^{\prime}=Q$.
Upper bounding the squared Hellinger distance between $P^{\prime}$ and $Q^{\prime}$ by the TV distance (See Lemma A.4), we get that $H^{2}(P^{\prime},Q^{\prime})=H^{2}(P,Q)\leq TV(P,Q)=\frac{1}{2k}\leq\frac{\varepsilon}{2(\ln 2)k}$ (where we have used that $\varepsilon\geq\ln 2$). Next, consider $\varepsilon<\ln 2$. First, consider $\tilde{P}\equiv\min\\{e^{\varepsilon}Q,P\\}$. This value is equal to $\min\\{e^{\varepsilon}/2,1\\}f_{P}(t)=\frac{e^{\varepsilon}}{2}f_{P}(t)$ for $t<q_{\frac{1}{k}}(P)$, and is also equal to $f_{P}(t)$ for $q_{\frac{1}{k}}\leq t\leq q_{1}$. Similarly, consider $\tilde{Q}\equiv\min\\{e^{\varepsilon^{\prime}}P,Q\\}=\min\\{\frac{1+e^{\varepsilon}}{2}P,Q\\}$; it is equal to $\frac{1}{2}f_{P}(t)$ for $t<q_{\frac{1}{k}}(P)$, and is equal to $f_{P}(t)$ at $q_{\frac{1}{k}}\leq t\leq q_{1-\frac{1}{k}}$. It is also equal to $\min\\{\frac{1+e^{\varepsilon}}{2},\frac{3}{2}\\}f_{P}(t)=\frac{1+e^{\varepsilon}}{2}f_{P}(t)$ at $q_{1-\frac{1}{k}}\leq t\leq q_{1}$. Note that $\tau(P,Q)=\left(1-\frac{e^{\varepsilon}}{2}\right)\frac{1}{k}$. $P^{\prime}$ and $Q^{\prime}$ are the distributions created by normalizing $\tilde{P}$ and $\tilde{Q}$ by dividing by a factor of $1-\tau(P,Q)$. Now, we upper bound the squared Hellinger distance between $P^{\prime}$ and $Q^{\prime}$ by the TV distance (See Lemma A.4), to get that $H^{2}(P^{\prime},Q^{\prime})\leq TV(P^{\prime},Q^{\prime})=O(\frac{\varepsilon}{k})$. Substituting into the lower bound for sample complexity of distinguishing $P$ and $Q$, this tells us that for all $\varepsilon\in(0,1]$, $SC_{\varepsilon}(P,Q)=\Omega\left(\frac{1}{\varepsilon\cdot\frac{1}{k}}\right)=\Omega(k/\varepsilon)$. ∎ ###### Proof of Lemma 6.7. Note that $P$ has bounded expectation (and hence, so does $Q$). Hence, we can use the following form of the Wasserstein distance: $\mathcal{W}(P,Q)=\int_{\mathbb{R}}|F_{P}(t)-F_{Q}(t)|dt.$ Now, given the settings of $P$ and $Q$, we can precisely write the forms of their cumulative distribution function as follows. Note that for $L(P)\leq t<q_{1/k}(P)$, we have that $|F_{P}(t)-F_{Q}(t)|=\frac{1}{2}F_{p}(t)$. For $q_{1/k}\leq t\leq q_{1-\frac{1}{k}}$, we have $|F_{P}(t)-F_{Q}(t)|=\frac{1}{2k}$. Finally, for $q_{1-\frac{1}{k}}\leq t\leq q_{1}$, we have that $F_{P}(t)=1-\frac{1}{k}+\int_{q_{1-1/k}}^{t}f_{P}(t)dt$ and $F_{Q}(t)=1-\frac{3}{2k}+\frac{3}{2}\int_{q_{1-1/k}}^{t}f_{P}(t)dt$, which gives us that $F_{P}(t)-F_{Q}(t)=\frac{1}{2k}-\frac{1}{2}\int_{q_{1-1/k}}^{t}f_{P}(t)dt=\frac{1}{2}[1-F_{P}(t)]$. Hence, we have that $\displaystyle\mathcal{W}(P,Q)$ $\displaystyle=\int_{\mathbb{R}}|F_{P}(t)-F_{Q}(t)|dt$ $\displaystyle=\frac{1}{2}\int_{L(P)}^{q_{1/k}}F_{P}(t)dt+\int_{q_{1/k}}^{q_{1-\frac{1}{k}}}|F_{P}(t)-F_{Q}(t)|dt+\int_{q_{1-\frac{1}{k}}}^{q_{1}}|F_{P}(t)-F_{Q}(t)|dt$ $\displaystyle\geq\frac{1}{2}\int_{L(P)}^{q_{1/k}}F_{P}(t)dt+\frac{1}{2}\int_{q_{1-\frac{1}{k}}}^{q_{1}}[1-F_{P}(t)]dt+\frac{1}{2k}(q_{1-\frac{1}{k}}-q_{\frac{1}{k}})$ $\displaystyle=\frac{1}{2}\int_{q_{1-\frac{1}{k}}}^{q_{1}}\Big{|}F_{P}(t)-F_{P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}}}(t)\Big{|}dt+\frac{1}{2}\int_{L(P)}^{q_{1/k}}\Big{|}F_{P}(t)-F_{P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}}}(t)\Big{|}dt+\frac{1}{2k}(q_{1-\frac{1}{k}}-q_{\frac{1}{k}})$ $\displaystyle=\frac{1}{2k}(q_{1-\frac{1}{k}}-q_{1/k})+\frac{1}{2}\mathcal{W}(P,P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})$ ∎ ### F.2 Omitted proofs in Section 6.1.2 ###### Proof of Lemma 6.10. The KL divergence is defined as $\int_{t:f_{Q}(t)>0}f_{P}(t)\log f_{P}(t)/f_{Q}(t)dt$. 
This can be broken up into a sum over the dyadic quantiles as: $\displaystyle KL(P,Q)$ $\displaystyle=\sum_{i=2}^{\log n-1}\int_{q_{1/2^{i}}}^{q_{1/2^{i-1}}}f_{P}(t)\log\frac{f_{P}(t)}{f_{Q}(t)}dt+\int_{q_{1-1/2^{i-1}}}^{q_{1-1/2^{i}}}f_{P}(t)\log\frac{f_{P}(t)}{f_{Q}(t)}dt$ $\displaystyle+\sum_{i=\log n}^{\infty}\int_{q_{1/2^{i}}}^{q_{1/2^{i-1}}}f_{P}(t)\log\frac{f_{P}(t)}{f_{Q}(t)}dt+\int_{q_{1-1/2^{i-1}}}^{q_{1-1/2^{i}}}f_{P}(t)\log\frac{f_{P}(t)}{f_{Q}(t)}dt$ $\displaystyle=\sum_{i=2}^{\log n-1}\int_{q_{1/2^{i}}}^{q_{1/2^{i-1}}}f_{P}(t)\log\frac{1}{1+\sqrt{\frac{2^{i}}{n}}}dt+\int_{q_{1-1/2^{i-1}}}^{q_{1-1/2^{i}}}f_{P}(t)\log\frac{1}{1-\sqrt{\frac{2^{i}}{n}}}dt$ $\displaystyle+\sum_{i=\log n}^{\infty}\int_{q_{1/2^{i}}}^{q_{1/2^{i-1}}}f_{P}(t)\log\frac{1}{1+\frac{1}{2}}dt+\int_{q_{1-1/2^{i-1}}}^{q_{1-1/2^{i}}}f_{P}(t)\log\frac{1}{1-\frac{1}{2}}dt$ $\displaystyle=\sum_{i=2}^{\log n-1}\frac{1}{2^{i}}\left[\log\frac{1}{1+\sqrt{\frac{2^{i}}{n}}}+\log\frac{1}{1-\sqrt{\frac{2^{i}}{n}}}\right]+\sum_{i=\log n}^{\infty}\frac{1}{2^{i}}\left[\log\frac{1}{1+\frac{1}{2}}+\log\frac{1}{1-\frac{1}{2}}\right]$ $\displaystyle\leq\sum_{i=2}^{\log n-1}\frac{1}{2^{i}}\log\frac{1}{1-\frac{2^{i}}{n}}+O\left(\frac{1}{n}\right)$ $\displaystyle\leq\sum_{i=2}^{\log n-1}\frac{1}{2^{i}}2\frac{2^{i}}{n}+O\left(\frac{1}{n}\right)$ $\displaystyle=O\left(\frac{\log n}{n}\right),$ where the first inequality uses the identity $\log\frac{1}{1+y}+\log\frac{1}{1-y}=\log\frac{1}{1-y^{2}}$ together with the fact that the geometric series $\sum_{i=\log n}^{\infty}\frac{1}{2^{i}}$ converges to $O(\frac{1}{n})$, and the second inequality follows from the facts that $\frac{2^{i}}{n}<1/2$ and $\log(1/(1-y))<2y$ for $0<y<1/2$. ∎ ###### Proof of Lemma 6.11. First, we recall the definition of the 1-Wasserstein distance in terms of the cumulative distribution function. $\displaystyle\mathcal{W}(P,Q)=\int_{\mathbb{R}}|F_{P}(t)-F_{Q}(t)|dt$ Fix any $2\leq i<\log n-1$. Observe that by construction, for all $t\in[q_{1/2^{i}},q_{1/2^{i-1}})$ and for all $t\in[q_{1-1/2^{i-1}},q_{1-1/2^{i}})$, $|F_{P}(t)-F_{Q}(t)|\geq\sum_{j=i+1}^{\log n-1}\frac{1}{\sqrt{2^{j}n}}+\frac{1}{2}\sum_{j=\log n}^{\infty}\frac{1}{2^{j}}$. Similarly, fix any $\log n-1\leq i<\infty$. Observe that for all $t\in[q_{1/2^{i}},q_{1/2^{i-1}})$, and for all $t\in[q_{1-1/2^{i-1}},q_{1-1/2^{i}})$, we have that $|F_{P}(t)-F_{Q}(t)|\geq\frac{1}{2}\sum_{j=i+1}^{\infty}\frac{1}{2^{j}}$.
Substituting the above bounds in the formula for the Wasserstein distance, we get that $\displaystyle\mathcal{W}(P,Q)$ $\displaystyle\geq\sum_{i=2}^{\log n-2}\int_{q_{1/2^{i}}}^{q_{1/2^{i-1}}}\left[\sum_{j=i+1}^{\log n-2}\frac{1}{\sqrt{2^{j}n}}+\frac{1}{2}\sum_{j=\log n-1}^{\infty}\frac{1}{2^{j}}\right]dt+\int_{q_{1-1/2^{i-1}}}^{q_{1-1/2^{i}}}\left[\sum_{j=i+1}^{\log n-2}\frac{1}{\sqrt{2^{j}n}}+\frac{1}{2}\sum_{j=\log n-1}^{\infty}\frac{1}{2^{j}}\right]dt$ $\displaystyle+\sum_{i=\log n-1}^{\infty}\int_{q_{1/2^{i}}}^{q_{1/2^{i-1}}}\frac{1}{2}\sum_{j=i+1}^{\infty}\frac{1}{2^{j}}dt+\int_{q_{1-1/2^{i-1}}}^{q_{1-1/2^{i}}}\frac{1}{2}\sum_{j=i+1}^{\infty}\frac{1}{2^{j}}dt$ Pulling the summation over $j$ outside the integral and grouping terms, $\displaystyle\mathcal{W}(P,Q)$ $\displaystyle\geq\sum_{i=2}^{\log n-2}\Bigg{[}\sum_{j=i+1}^{\log n-2}\int_{q_{1/2^{i}}}^{q_{1/2^{i-1}}}\frac{1}{\sqrt{2^{j}n}}dt+\int_{q_{1-1/2^{i-1}}}^{q_{1-1/2^{i}}}\frac{1}{\sqrt{2^{j}n}}dt+\frac{1}{2}\sum_{j=\log n-1}^{\infty}\int_{q_{1/2^{i}}}^{q_{1/2^{i-1}}}\frac{1}{2^{j}}dt+\int_{q_{1-1/2^{i-1}}}^{q_{1-1/2^{i}}}\frac{1}{2^{j}}dt\Bigg{]}$ $\displaystyle+\frac{1}{2}\sum_{i=\log n-1}^{\infty}\sum_{j=i+1}^{\infty}\Bigg{[}\int_{q_{1/2^{i}}}^{q_{1/2^{i-1}}}\frac{1}{2^{j}}dt+\int_{q_{1-1/2^{i-1}}}^{q_{1-1/2^{i}}}\frac{1}{2^{j}}dt\Bigg{]}$ $\displaystyle=\sum_{i=2}^{\log n-2}\left[({q_{1/2^{i-1}}}-q_{1/2^{i}})+(q_{1-1/2^{i}}-{q_{1-1/2^{i-1}}})\right]\left[\sum_{j=i+1}^{\log n-2}\frac{1}{\sqrt{2^{j}n}}+\frac{1}{2}\sum_{j=\log n-1}^{\infty}\frac{1}{2^{j}}\right]$ $\displaystyle+\sum_{i=\log n-1}^{\infty}\left[({q_{1/2^{i-1}}}-q_{1/2^{i}})+(q_{1-1/2^{i}}-{q_{1-1/2^{i-1}}})\right]\frac{1}{2}\sum_{j=i+1}^{\infty}\frac{1}{2^{j}}$ Switching the order of summation (summing over $j$ first), and grouping terms, we get $\displaystyle\mathcal{W}(P,Q)$ $\displaystyle\geq\sum_{j=3}^{\log n-2}\frac{1}{\sqrt{2^{j}n}}\sum_{i=2}^{j-1}\left[({q_{1/2^{i-1}}}-q_{1/2^{i}}+(q_{1-1/2^{i}}-{q_{1-1/2^{i-1}}})\right]$ $\displaystyle+\frac{1}{2}\sum_{j=\log n-1}^{\infty}\frac{1}{2^{j}}\sum_{i=2}^{j-1}\left[({q_{1/2^{i-1}}}-q_{1/2^{i}}+(q_{1-1/2^{i}}-{q_{1-1/2^{i-1}}})\right]$ Telescoping the inner sums over $i$ we get that $\displaystyle\mathcal{W}(P,Q)$ $\displaystyle\geq\sum_{j=3}^{\log n-2}\frac{1}{\sqrt{2^{j}n}}\left[q_{1-1/2^{j-1}}-q_{1/2^{j-1}}\right]+\frac{1}{2}\sum_{j=\log n-1}^{\infty}\frac{1}{2^{j}}\left[q_{1-1/2^{j-1}}-q_{1/2^{j-1}}\right]$ A change of variables (where we now set $j$ to $j-1$) then gives $\displaystyle\mathcal{W}(P,Q)$ $\displaystyle\geq\frac{1}{\sqrt{2}}\sum_{j=2}^{\log n-3}\frac{1}{\sqrt{2^{j}n}}\left[q_{1-1/2^{j}}-q_{1/2^{j}}\right]+\frac{1}{4}\sum_{j=\log n-2}^{\infty}\frac{1}{2^{j}}\left[q_{1-1/2^{j}}-q_{1/2^{j}}\right]$ $\displaystyle\geq\frac{1}{4}\sum_{j=2}^{\log n-1}\frac{1}{\sqrt{2^{j}n}}\left[q_{1-1/2^{j}}-q_{1/2^{j}}\right]+\frac{1}{4}\sum_{j=\log n}^{\infty}\frac{1}{2^{j}}\left[q_{1-1/2^{j}}-q_{1/2^{j}}\right],$ where the last inequality is by pulling the first two terms from the summation in second term to the summation in the first term, and using the fact that for $j=\log n-2,j=\log n-1$, we have that $\frac{1}{4\cdot 2^{j}}\geq\frac{1}{2\sqrt{2}}\frac{1}{\sqrt{2^{j}n}}$ ∎ ###### Proof of Lemma 6.12. We first state a theorem of Bobkov and Ledoux [BL19]. ###### Theorem F.1 (Theorem 3.5, [BL19]). 
There is an absolute constant $c>0$, such that for all distributions $P$ over $\mathbb{R}$, for every $n\geq 1$, $c(A_{n}+B_{n})\leq\mathbb{E}[\mathcal{W}(P,\hat{P}_{n})]\leq A_{n}+B_{n},$ where $A_{n}=2\int_{F(t)[1-F(t)]\leq\frac{1}{4n}}F(t)[1-F(t)]dt,$ and $B_{n}=\frac{1}{\sqrt{n}}\int_{F(t)[1-F(t)]\geq\frac{1}{4n}}\sqrt{F(t)[1-F(t)]}dt.$ Now, we are ready to prove the lemma. Fix a natural number $i\geq 2$. Restricted to $t\leq q_{1/2}$, $F_{P}(t)(1-F_{P}(t))$ is an increasing function, and hence for $t\in[q_{1/2^{i}},q_{1/2^{i-1}}]$, we have that $F_{P}(t)(1-F_{P}(t))\leq\frac{1}{2^{i-1}}[1-\frac{1}{2^{i-1}}]$. Similarly, restricted to $t>q_{1/2}$, $F_{P}(t)(1-F_{P}(t))$ is a decreasing function, and hence for $t\in[q_{1-1/2^{i-1}},q_{1-1/2^{i}}]$, we have that $F_{P}(t)(1-F_{P}(t))\leq\frac{1}{2^{i-1}}[1-\frac{1}{2^{i-1}}]$. Using this, we can now upper bound the expected Wasserstein distance between $P$ and its empirical distribution using Theorem F.1. Hence, we upper bound the terms $B_{n}$ and $A_{n}$. We start by upper bounding $B_{n}$. Note that for all $t\not\in[q_{\frac{1}{4n}},q_{1-\frac{1}{4n}}]$, we have that $F_{P}(t)(1-F_{P}(t))\leq\frac{1}{4n}$. Hence, $\displaystyle B_{n}$ $\displaystyle=\frac{1}{\sqrt{n}}\int_{F_{P}(t)[1-F_{P}(t)]\geq\frac{1}{4n}}\sqrt{F_{P}(t)[1-F_{P}(t)]}dt$ $\displaystyle\leq\frac{1}{\sqrt{n}}\int_{q_{\frac{1}{4n}}}^{q_{1-\frac{1}{4n}}}\sqrt{F_{P}(t)[1-F_{P}(t)]}dt$ $\displaystyle\leq\sum_{i=2}^{\log 4n}\frac{1}{\sqrt{n}}\left[\int_{q_{1/2^{i}}}^{q_{1/2^{i-1}}}\sqrt{F_{P}(t)[1-F_{P}(t)]}dt+\int_{q_{1-1/2^{i-1}}}^{q_{1-1/2^{i}}}\sqrt{F_{P}(t)[1-F_{P}(t)]}dt\right]$ $\displaystyle\leq\sum_{i=2}^{\log 4n}\frac{1}{\sqrt{n}}\left[\int_{q_{1/2^{i}}}^{q_{1/2^{i-1}}}\sqrt{\frac{1}{2^{i-1}}\left[1-\frac{1}{2^{i-1}}\right]}dt+\int_{q_{1-1/2^{i-1}}}^{q_{1-1/2^{i}}}\sqrt{\frac{1}{2^{i-1}}\left[1-\frac{1}{2^{i-1}}\right]}dt\right]$ $\displaystyle=\sum_{i=2}^{\log 4n}\frac{1}{\sqrt{n}}\sqrt{\frac{1}{2^{i-1}}\left[1-\frac{1}{2^{i-1}}\right]}\left[q_{1/2^{i-1}}-q_{1/2^{i}}+q_{1-1/2^{i}}-q_{1-1/2^{i-1}}\right]$ $\displaystyle\leq\sum_{i=2}^{\log 4n}\frac{2}{\sqrt{2^{i}n}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]$ $\displaystyle=\sum_{i=2}^{\log n-1}\frac{2}{\sqrt{2^{i}n}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]+\sum_{i=\log n}^{\log 4n}\frac{2}{\sqrt{2^{i}n}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]$ $\displaystyle\leq\sum_{i=2}^{\log n-1}\frac{2}{\sqrt{2^{i}n}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]+\sum_{i=\log n}^{\log 4n}\frac{4}{2^{i}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right],$ where the last inequality is because for $i\leq\log(4n)$ we have that $\frac{1}{n}\leq\frac{4}{2^{i}}$, and hence $\frac{2}{\sqrt{2^{i}n}}\leq\frac{4}{2^{i}}$. Next, we bound $A_{n}$. Note that for all $t$ with $q_{\frac{1}{2n}}\leq t\leq q_{1-\frac{1}{2n}}$, we have that $F_{P}(t)(1-F_{P}(t))\not\leq\frac{1}{4n}$.
Hence, $\displaystyle A_{n}$ $\displaystyle=2\int_{F_{P}(t)[1-F_{P}(t)]\leq\frac{1}{4n}}F_{P}(t)[1-F_{P}(t)]dt$ $\displaystyle\leq 2\left[\int_{-\infty}^{q_{\frac{1}{2n}}}F_{P}(t)[1-F_{P}(t)]dt+\int_{q_{1-\frac{1}{2n}}}^{\infty}F_{P}(t)[1-F_{P}(t)]dt\right]$ $\displaystyle=\sum_{i=1+\log 2n}^{\infty}2\left[\int_{q_{1/2^{i}}}^{q_{1/2^{i-1}}}F_{P}(t)[1-F_{P}(t)]dt+\int_{q_{1-1/2^{i-1}}}^{q_{1-1/2^{i}}}F_{P}(t)[1-F_{P}(t)]dt\right]$ $\displaystyle\leq\sum_{i=1+\log 2n}^{\infty}2\left[\int_{q_{1/2^{i}}}^{q_{1/2^{i-1}}}\frac{1}{2^{i-1}}\left[1-\frac{1}{2^{i-1}}\right]dt+\int_{q_{1-1/2^{i-1}}}^{q_{1-1/2^{i}}}\frac{1}{2^{i-1}}\left[1-\frac{1}{2^{i-1}}\right]dt\right]$ $\displaystyle=\sum_{i=1+\log 2n}^{\infty}\frac{2}{2^{i-1}}\left[1-\frac{1}{2^{i-1}}\right]\left[q_{1/2^{i-1}}-q_{1/2^{i}}+q_{1-1/2^{i}}-q_{1-1/2^{i-1}}\right]$ $\displaystyle\leq\sum_{i=1+\log 2n}^{\infty}\frac{4}{2^{i}}\left[q_{1-1/2^{i}}-q_{1/2^{i}}\right]$ Then, using the upper bound in Theorem F.1, substituting in the bounds for $A_{n}$ and $B_{n}$, and simplifying, we get the claim. ∎ ###### Proof of Claim 6.13. By the definition of Wasserstein distance and restrictions of distributions, we have that $\displaystyle\mathcal{W}(\hat{P}_{n}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}},P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})$ $\displaystyle=\int_{a}^{b}\left|F_{\hat{P}_{n}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}}}(t)-F_{P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}}}(t)\right|dt$ $\displaystyle=\int_{q_{\frac{1}{k}}}^{q_{1-\frac{1}{k}}}\left|F_{\hat{P}_{n}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}}}(t)-F_{P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}}}(t)\right|dt$ $\displaystyle=\int_{q_{\frac{1}{k}}}^{q_{1-\frac{1}{k}}}\left|F_{\hat{P}_{n}}(t)-F_{P}(t)\right|dt\leq\mathcal{W}(P,\hat{P_{n}})$ ∎ ### F.3 Omitted Proofs in Section 6.2 Before going into the proofs, we state the standard Chernoff concentration bound that we will use multiple times. ###### Theorem F.2 (Binomial Concentration). Let $X\sim Bin(n,p)$ with expectation $\mu=np$, and $0<\delta<1$. Then, $\Pr(|X-\mu|\geq\delta\mu)\leq 2e^{\frac{-\delta^{2}\mu}{3}}.$ ###### Proof of Lemma 6.16. $\displaystyle\mathcal{W}(P,P^{DP})$ $\displaystyle=\int_{t}|F_{P}(t)-F_{P^{DP}}(t)|dt$ (13) $\displaystyle\leq\int_{t=a}^{q_{1/k}}|F_{P}(t)-F_{P^{DP}}(t)|dt+\int_{t=q_{1/k}}^{q_{1-1/k}}|F_{P}(t)-F_{P^{DP}}(t)|dt+\int_{t=q_{1-1/k}}^{b}|F_{P}(t)-F_{P^{DP}}(t)|dt$ (14) Note that for all $t\in[q_{1/k},q_{1-1/k}]$, we have that the cumulative distribution functions of $P$ and its restricted version are identical and likewise for $P^{DP}$. Additionally, the cumulative density functions for the restricted versions of the two distributions are identical to each other outside of this interval. Hence, we can simplify the middle term in the RHS of the inequality above as follows: $\displaystyle\int_{t=q_{1/k}}^{q_{1-1/k}(P)}|F_{P}(t)-F_{P^{DP}}(t)|dt=\mathcal{W}(P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}},P^{DP}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})$ Next, we reason about the remaining terms. Consider the term $\int_{t=a}^{q_{1/k}}|F_{P}(t)-F_{P^{DP}}(t)|dt$. First, condition on the event in Theorem E.3 (on the accuracy of the private quantiles for the empirical distribution), which tells us that with probability at least $1-\beta$, we have for all $r\in[k]$, that $\hat{q}_{\frac{2r-1}{2k}-\frac{1}{4k}}\leq\tilde{q}_{\frac{2r-1}{2k}}\leq\hat{q}_{\frac{2r-1}{2k}+\frac{1}{4k}},$ (15) which implies in particular that $\hat{q}_{1/4k}\leq\tilde{q}_{1/2k}\leq\hat{q}_{3/4k}$. 
Next, we argue that $\hat{q}_{1/4k}\geq q_{1/8k}$ with high probability. By the definition of quantiles, we have that $Pr_{y\sim P}(y<q_{1/8k})<\frac{1}{8k}$. The number of entries in the dataset ${\bf x}$ less than $q_{1/8k}$ is hence a Binomial with mean less than $\frac{n}{8k}$, and hence, we have by Theorem F.2 (with $\delta$ set to $0.9)$ that with probability at least $1-\beta$, the number of entries in the dataset less than $q_{1/8k}$ is at most $1.9\frac{n}{8k}<\frac{n}{4k}$, which means the total mass less than $q_{\frac{1}{8k}}$ in the empirical distribution is less than $\frac{1}{4k}$. This implies that $\hat{q}_{1/4k}\geq q_{1/8k}$ by the definition of quantiles. Additionally, note that for all $t<q_{1/k}$, $F_{P}(t)<\frac{1}{k}$. The number of entries in the dataset ${\bf x}$ that are less than $q_{1/k}$ is hence a Binomial with success probability less than $\frac{1}{k}$. By Theorem F.2, we can again argue that with probability at least $1-\beta$, there is a constant $c^{\prime}$ such that the total mass of the empirical distribution on values less than $q_{1/k}$ is less than $\frac{c^{\prime}}{k}$. Hence, $q_{1/k}\leq\hat{q}_{c^{\prime}/k}$. This implies by Equation 15, that $q_{1/k}\leq\tilde{q}_{c/k}$ for some constant $c$. Hence, for all $t<q_{1/k}$, we have that $F_{P^{DP}}(t)\leq\frac{c}{k}$. Hence, taking a union bound, with probability at least $1-O(\beta)$, $\displaystyle\int_{t=a}^{q_{1/k}}|F_{P}(t)-F_{P^{DP}}(t)|dt$ $\displaystyle=\int_{t=a}^{\tilde{q}_{1/2k}}|F_{P}(t)-F_{P^{DP}}(t)|dt+\int_{\tilde{q}_{1/2k}}^{q_{1/k}}|F_{P}(t)-F_{P^{DP}}(t)|dt$ $\displaystyle\leq\int_{t=a}^{\tilde{q}_{1/2k}}|F_{P}(t)-F_{P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}}}(t)|dt+\int_{q_{1/8k}}^{q_{1/k}}|F_{P}(t)-\frac{c}{k}|dt$ $\displaystyle\leq\int_{t=a}^{\tilde{q}_{1/2k}}|F_{P}(x)-F_{P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}}}(t)|dt+\int_{q_{1/8k}}^{q_{1/k}}|F_{P}(t)-8cF_{P}(t)|dt$ $\displaystyle\leq(1-8c)\left[\int_{t=a}^{\tilde{q}_{1/2k}}|F_{P}(t)-F_{P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}}}(t)|dt+\int_{q_{1/8k}}^{q_{1/k}}|F_{P}(t)|dt\right]$ $\displaystyle\leq(1-8c)\left[\int_{t=a}^{\tilde{q}_{1/2k}}|F_{P}(t)-F_{P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}}}(t)|dt+\int_{q_{1/8k}}^{q_{1/k}}|F_{P}(t)-F_{P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}}}(t)|dt\right]$ $\displaystyle\leq 2(1-8c)\mathcal{W}(P,P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})$ By a symmetric argument, we also have that with probability at least $1-O(\beta)$, $\int_{t=q_{1-1/k}}^{b}|F_{P}(t)-F_{P^{DP}}(t)|dt\leq 2(1-8c)\mathcal{W}(P,P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}}).$ Taking a union bound to ensure that all terms in Equation 14 are bounded as required, the proof is complete. ∎ ###### Proof of Lemma 6.17. First, we condition on the event in Corollary E.3 (on the accuracy of differentially private quantile estimates) that for all $r\in[k]$, $\hat{q}_{\frac{2r-1}{2k}-\frac{1}{4k}}\leq\tilde{q}_{r}\leq\hat{q}_{\frac{2r-1}{2k}+\frac{1}{4k}},$ note that this event happens with probability at least $1-\beta$ over the randomness of the algorithm. Observe that this implies that $F_{DP}$ increases by $\frac{1}{k}$ somewhere in the range $[\hat{q}_{\frac{2r-1}{2k}-\frac{1}{4k}},\hat{q}_{\frac{2r-1}{2k}+\frac{1}{4k}}]$ (for all $r\in[k]$) and remains constant outside these intervals. Now, we show that for all $t\in[a,b]$, we have that $|F_{P^{DP}}(t)-F_{\hat{P}_{n}}(t)|\leq\frac{2}{k}$. 
If there exists $t\in[a,\hat{q}_{\frac{1}{4k}})$, we have that $F_{P_{DP}}(t)=0$, and $F_{\hat{P}_{n}}(t)\leq\frac{1}{4k}$, which implies that $|F_{DP}(t)-F_{\hat{P}_{n}}(t)|\leq\frac{1}{4k}$. If there exists no such $t$, then we have that $a=\hat{q}_{\frac{1}{4k}}$, and the corresponding interval collapses to a single point (which will fall in another interval considered below). Next, fix any $r\in[k-1]$. Note that if there exists $t\in[\hat{q}_{\frac{2r-1}{2k}-\frac{1}{4k}},\hat{q}_{\frac{2r+1}{2k}-\frac{1}{4k}})$, we have for all such $t$ that $\frac{r-1}{k}\leq F_{DP}(t)<\frac{r}{k}$, and $\frac{2r-1}{2k}-\frac{1}{4k}\leq F_{\hat{P}_{n}}(t)\leq\frac{2r+1}{2k}+\frac{1}{4k}$. This implies that for all such $t$, $|F_{DP}(t)-F_{\hat{P}_{n}}(t)|\leq\frac{2}{k}$. If there exists no such $t$, then we have that $\hat{q}_{\frac{2r-1}{2k}-\frac{1}{4k}}=\hat{q}_{\frac{2r+1}{2k}-\frac{1}{4k}}$, and this $r$ is not relevant since the corresponding interval collapses to a single point (that is considered in another interval). Finally, for $t\in[\hat{q}_{\frac{2k-1}{2k}},b]$, we have that $F_{P_{DP}}(t)\geq 1-\frac{1}{k}$, and $F_{\hat{P}_{n}}(t)\geq 1-\frac{1}{2k}$, so we have that $|F_{DP}(t)-F_{\hat{P}_{n}}(t)|\leq\frac{1}{k}$. Note that every $t\in[a,b]$ is considered in some interval above and hence we have shown that for all $t\in[a,b]$, we have that $|F_{P^{DP}}(t)-F_{\hat{P}_{n}}(t)|\leq\frac{2}{k}$. Finally, using the formula for Wasserstein distance (and the definition of a restriction), we have that $\displaystyle\mathcal{W}(\hat{P}_{n}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}},P^{DP}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})$ $\displaystyle=\int_{a}^{b}\left|F_{\hat{P}_{n}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}}}(t)-F_{P^{DP}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}}}(t)\right|dt$ (16) $\displaystyle=\int_{q_{\frac{1}{k}}}^{q_{1-\frac{1}{k}}}\left|F_{P}(t)-F_{P^{DP}}(t)\right|dt$ (17) $\displaystyle\leq\int_{q_{\frac{1}{k}}}^{q_{1-\frac{1}{k}}}\frac{2}{k}dt$ (18) $\displaystyle\leq\frac{2}{k}\left(q_{1-1/k}-q_{1/k}\right)$ (19) ∎ Before the proof of Claim 6.18, we state the following variance-dependent version of the DKW inequality that uniformly bounds the absolute difference in CDFs between the true and empirical distribution. ###### Theorem F.3 (See for example Theorem 1.2 in [BM23]). Fix $n>0$. There are absolute constants $c_{0},c_{1}$ such that for all $\Delta\geq\frac{c_{0}\log\log n}{n}$, $\displaystyle\Pr\left[\sup_{t:F_{P}(t)(1-F_{P}(t))\geq\Delta}\Big{|}F_{P}(t)-F_{\hat{P}_{n}}(t)\Big{|}\geq\sqrt{\Delta\cdot{F(t)(1-F(t)}}\right]\leq 2e^{-c_{1}\Delta n}$ We also state the following lemma on Binomial random variables, which is a simple consequence of a Lemma by Bobkov and Ledoux [BL19]. ###### Lemma F.4 (Lemma 3.8 in [BL19]). Let $S_{n}=\sum_{i=1}^{n}\eta_{i}$ be the sum of $n$ independent Bernoulli random variables with $\Pr[\eta_{i}=1]=p$ and $\Pr[\eta_{i}=0]=q=1-p$ (for all $i$). Also assume $p\in[\frac{1}{n},1-\frac{1}{n}]$. Then, for some sufficiently small constant $c$, $\displaystyle c\sqrt{npq}\leq\mathbb{E}[|S_{n}-np|]\leq\sqrt{npq}$ ###### Proof of Claim 6.18. 
Now, by the formula for Wasserstein distance, the definition of restriction, and Fubini’s theorem, we have that $\displaystyle\mathbb{E}[\mathcal{W}(P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}},\hat{P}_{n}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})]=\mathbb{E}\Big{[}\int_{q_{\frac{1}{k}}}^{q_{1-\frac{1}{k}}}\Big{|}F_{P}(t)-F_{\hat{P}_{n}}(t)\Big{|}dt\Big{]}=\int_{q_{\frac{1}{k}}}^{q_{1-\frac{1}{k}}}\mathbb{E}\Big{[}\Big{|}F_{P}(t)-F_{\hat{P}_{n}}(t)\Big{|}\Big{]}dt$ By Lemma F.4, using the fact that $F_{\hat{P}_{n}}(t)=\sum_{i=1}^{n}\mathrm{1}[x_{i}\leq t]$, where each term in the sum is an independent Bernoulli random variable with expectation $F_{P}(t)$, with $q_{\frac{1}{k}}\leq t<q_{1-\frac{1}{k}}$ (ensuring that the conditions of the lemma are met), we get that $\mathbb{E}\Big{[}\Big{|}F_{P}(t)-F_{\hat{P}_{n}}(t)\Big{|}\Big{]}\geq c\sqrt{\frac{F_{P}(t)[1-F_{P}(t)]}{n}}$, which gives $\displaystyle\mathbb{E}[\mathcal{W}(P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}},\hat{P}_{n}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})]$ $\displaystyle\geq c\int_{q_{\frac{1}{k}}}^{q_{1-\frac{1}{k}}}\sqrt{\frac{F_{P}(t)[1-F_{P}(t)]}{n}}dt$ Now, consider the random variable $\mathcal{W}(P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}},\hat{P}_{n}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})$. Note that $\frac{1}{k}\geq\frac{c_{3}\log\frac{n}{\beta}}{n}$ (for an appropriately chosen $c_{3}$), and so we are in the regime where we can apply Theorem F.3 for an appropriately chosen $\Delta$. In particular, we have that for $t\in[q_{\frac{1}{k}},q_{1-\frac{1}{k}})$, $F_{P}(t)\in[\frac{1}{k},1-\frac{1}{k})$. Setting $\Delta=\frac{\log\frac{n}{\beta}}{c_{1}n}$, we have that $\Delta\geq c_{0}\frac{\log\log n}{n}$, and $\Delta\leq\frac{1}{2k}$ (the second inequality for sufficiently large $c_{3}$). In particular, this implies for $t\in[q_{\frac{1}{k}},q_{1-\frac{1}{k}})$, $F_{P}(t)\in[2\Delta,1-2\Delta)$, which implies that $F_{P}(t)(1-F_{P}(t))\geq\Delta$, as long as $n>c_{4}\log\frac{n}{\beta}$ for some sufficiently large constant $c_{4}$. Now, using Theorem F.3, we have that with probability at least $1-2e^{-c_{1}\frac{\log\frac{n}{\beta}}{c_{1}n}n}\geq 1-O(\beta)$, $\sup_{t\in[q_{\frac{1}{k}},q_{1-\frac{1}{k}})}\Big{|}F_{P}(t)-F_{\hat{P}_{n}}(t)\Big{|}\leq\sqrt{\frac{\log\frac{n}{\beta}}{c_{1}n}{F_{P}(t)(1-F_{P}(t))}}$ Condition on this for the rest of the proof. Then, we can write the following set of equations. $\displaystyle\mathcal{W}(P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}},\hat{P}_{n}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})$ $\displaystyle=\int_{q_{\frac{1}{k}}}^{q_{1-\frac{1}{k}}}|F_{P}(t)-F_{\hat{P}_{n}}(t)|dt$ $\displaystyle\leq\int_{q_{\frac{1}{k}}}^{q_{1-\frac{1}{k}}}\sqrt{\frac{\log\frac{n}{\beta}}{c_{1}n}{F_{P}(t)(1-F_{P}(t))}}dt$ $\displaystyle\leq\sqrt{c_{5}\log\frac{n}{\beta}}\int_{q_{\frac{1}{k}}}^{q_{1-\frac{1}{k}}}\sqrt{\frac{F_{P}(t)(1-F_{P}(t))}{n}}dt$ $\displaystyle\leq\sqrt{c_{6}\log\frac{n}{\beta}}\mathbb{E}[\mathcal{W}(P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}},\hat{P}_{n}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})]$ as required. ∎ ### F.4 Local Minimality in the One-Dimensional Setting In this subsection, we argue that the instance-optimal algorithm discussed in Section 6.2 is also locally-minimal (See Section 3.2 for a discussion of local minimality). First, we state a corollary of our upper bound for continuous distributions, Theorem 6.14. This corollary follows by discretizing the distribution and applying the previous upper bound to the discretized distribution. 
The parameters of the discretized distribution are related to that of the original distribution via simple coupling arguments. ###### Corollary F.5. Fix $\varepsilon,\beta\in(0,1]$, $a,b\in\mathbb{R}$, $n\in\mathbb{N}$. Let $P$ be any continuous distribution supported on $[a,b]$. Consider any $\gamma<b-a\in\mathbb{R}$ (such that $\gamma$ divides $b-a$), and let $n>c_{2}\frac{\log^{4}\frac{b-a}{\gamma\beta\varepsilon}}{\varepsilon}$ for some sufficiently large constant $c_{2}$. Then, there exists an algorithm, that when given inputs ${\bf x}\sim P^{n}$, privacy parameter $\varepsilon$, interval end points $a,b$, granularity $\gamma$, and access to algorithm $A_{quant}$, outputs a distribution $P^{DP}$ such that with probability at least $1-O(\beta)$ over the randomness of ${\bf x}$ and the algorithm, $\mathcal{W}(P,P^{DP})=O\left(\sqrt{\log n}\mathbb{E}\left[\mathcal{W}(P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}},\hat{P}_{n}|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}}\right]+\mathcal{W}(P,P|_{q_{\frac{1}{k}},q_{1-\frac{1}{k}}})+\frac{1}{k}\left(q_{1-1/k}-q_{1/k}\right)\right)+\gamma$ where $\hat{P}_{n}$ is the uniform distribution on ${\bf x}$, $q_{\alpha}$ represents the $\alpha$-quantile of distribution $P$, and $k=\lceil\frac{\varepsilon n}{4c_{3}\log^{3}\frac{b-a}{\beta\gamma}\log\frac{n}{\beta}}\rceil$, where $c_{3}$ is a sufficiently large constant. We state a lemma of Ledoux and Bobkov that we will use in the main proof of this section. ###### Lemma F.6 (Lemma 3.8 in [BL19]). Let $S_{n}=\sum_{i=1}^{n}\eta_{i}$ be the sum of $n$ independent Bernoulli random variables with $\Pr[\eta_{i}=1]=p$ and $\Pr[\eta_{i}=0]=q=1-p$ (for all $i$). Then, for some sufficiently small constant $c$, $\displaystyle c\min\\{2npq,\sqrt{npq}\\}\leq\mathbb{E}[|S_{n}-np|]\leq\min\\{2npq,\sqrt{npq}\\}$ Now, we are ready to state and prove the local minimality result. Note that the statement will reference the rates defined by Equation 1 in the introduction. ###### Theorem F.7. Let $a,b\in\mathbb{R}$, $\gamma\in\mathbb{R}$. For any continuous distribution $P$ over $[a,b]$ with a density, let $N(P)=\\{Q:D_{\infty}(P,Q)\leq\log 2\\}$. Fix $\beta,\gamma,\varepsilon\in(0,1]$, and let $n=\Omega\left(\frac{\log^{4}\frac{b-a}{\gamma\varepsilon}}{\varepsilon}\right)$, with $n^{\prime}=\frac{n}{c_{7}\log n\log^{3}\frac{b-a}{\gamma\varepsilon}}$ for some constant $c_{7}$. There exists an algorithm $\mathcal{A}$ such that for all continuous distributions $P$, for all algorithms $\mathcal{A}^{\prime}$, there exists a distribution $Q\in N(P)$ such that $R_{\mathcal{A},n}(Q)\leq O(\operatorname{polylog}n)\cdot\max\\{R_{\mathcal{A}^{\prime},\lceil n^{\prime}\rceil}(Q),R_{\mathcal{A}^{\prime},\lfloor n^{\prime}/4\rfloor}(Q)\\}+\gamma,$ ###### Proof. Let $k=\lceil\frac{\varepsilon n}{4c_{3}\log^{3}\frac{b-a}{\beta\gamma}\log\frac{n}{\beta}}\rceil$, and set $n^{\prime}=\frac{2n}{c_{4}\log^{3}\frac{b-a}{\beta\gamma}\log\frac{n}{\beta}}$ for a sufficiently large constant $c_{4}$. 
Then, by Corollary F.5 with appropriately chosen $\beta$ we have that with probability at least 0.95, for any distribution $Q$ (and hence particularly any distribution $Q\in N(P)$, $\displaystyle\mathcal{W}(Q,\mathcal{A}(\hat{Q}_{n})$ $\displaystyle=O\Bigg{(}\frac{1}{\varepsilon n^{\prime}}\left(q_{1-\frac{2}{C\varepsilon n^{\prime}}}(Q)-q_{\frac{2}{C\varepsilon n^{\prime}}}(Q)\right)+\mathcal{W}(Q,Q|_{q_{\frac{2}{C\varepsilon n^{\prime}}(Q)},q_{1-\frac{2}{C\varepsilon n^{\prime}}}(Q)})$ $\displaystyle+\sqrt{\log n}\mathbb{E}\left[\mathcal{W}\left(Q|_{q_{\frac{2}{C\varepsilon n^{\prime}}}(Q),q_{1-\frac{2}{C\varepsilon n^{\prime}}}(Q)},\hat{Q}_{n}|_{q_{\frac{2}{C\varepsilon n^{\prime}}}(Q),q_{1-\frac{2}{C\varepsilon n^{\prime}}}(Q)}\right)\right]\Bigg{)}+\gamma,$ where $C$ is the constant referenced in Theorem 6.3. We will show that for distribution $P$, each of the corresponding distribution-dependent terms is closely related to the terms for $Q$. First, consider $\frac{1}{\varepsilon n^{\prime}}\left(q_{1-\frac{2}{C\varepsilon n^{\prime}}}(Q)-q_{\frac{2}{C\varepsilon n^{\prime}}}(Q)\right)$. Firstly, note that for all $\alpha\in(0,1)$, $q_{\alpha}(P)\geq q_{\alpha/2}(Q)$, and $q_{\alpha}(P)\leq q_{2\alpha}(Q)$, since $D_{\infty}(P,Q)\leq\ln 2$, which implies that $\frac{1}{2}F_{Q}(t)\leq F_{P}(t)\leq 2F_{Q}(t)$ for all $t\in\mathbb{R}$. Similarly, note that for all $\alpha\in(0,1)$, $q_{1-\alpha}(P)\geq q_{1-2\alpha}(Q)$, and $q_{1-\alpha}(P)\leq q_{1-\frac{1}{2}\cdot\alpha}(Q)$. Hence, we have that $\displaystyle\frac{1}{\varepsilon n^{\prime}}\left(q_{1-\frac{2}{C\varepsilon n^{\prime}}}(Q)-q_{\frac{2}{C\varepsilon n^{\prime}}}(Q)\right)$ $\displaystyle\leq\frac{1}{\varepsilon n^{\prime}}\left(q_{1-\frac{1}{C\varepsilon n^{\prime}}}(P)-q_{\frac{1}{C\varepsilon n^{\prime}}}(P)\right)$ Next, consider $\mathcal{W}(P,P|_{q_{\frac{1}{C\varepsilon n}}(P),q_{1-\frac{1}{C\varepsilon n}}(P)})$. Recall that $q_{\frac{1}{C\varepsilon n}}(P)\leq q_{\frac{2}{C\varepsilon n}}(Q)$, and $q_{1-\frac{1}{C\varepsilon n}}(P)\geq q_{1-\frac{2}{C\varepsilon n}}(Q)$. Then, (noting that $L(P)=L(Q)$ and $q_{1}(P)=q_{1}(Q)$), we have that $\displaystyle\mathcal{W}(Q,Q|_{q_{\frac{2}{C\varepsilon n^{\prime}}}(Q),q_{1-\frac{2}{C\varepsilon n^{\prime}}}(Q)})$ $\displaystyle=\int_{L(Q)}^{q_{\frac{2}{C\varepsilon n^{\prime}}}(Q)}F_{Q}(t)dt+\int_{q_{1-\frac{2}{C\varepsilon n^{\prime}}}(Q)}^{q_{1}(Q)}|1-F_{Q}(t)|dt$ $\displaystyle\leq 2\int_{L(Q)}^{q_{\frac{2}{C\varepsilon n^{\prime}}}(Q)}F_{P}(t)dt+2\int_{q_{1-\frac{1}{C\varepsilon n^{\prime}}}(Q)}^{q_{1}(Q)}|1-F_{P}(t)|dt$ $\displaystyle\leq 2\int_{L(P)}^{q_{\frac{4}{C\varepsilon n^{\prime}}}(P)}F_{P}(t)dt+2\int_{q_{1-\frac{1}{4C\varepsilon n^{\prime}}}(P)}^{q_{1}(P)}|1-F_{P}(t)|dt$ $\displaystyle=2\mathcal{W}(P,P|_{q_{\frac{4}{C\varepsilon n^{\prime}}}(P),q_{1-\frac{4}{C\varepsilon n}}(P)})$ Finally, consider $\frac{1}{\sqrt{\log n}}\mathbb{E}\left[\mathcal{W}(P|_{q_{\frac{1}{C\varepsilon n}(P)},q_{1-\frac{1}{C\varepsilon n}}(P)},\hat{P}_{n}|_{q_{\frac{1}{C\varepsilon n}(P)},q_{1-\frac{1}{C\varepsilon n}}(P)})\right]$. 
By Fubini’s theorem and applying both inequalities in Lemma F.6, we have that $\displaystyle\mathbb{E}\left[\mathcal{W}(Q|_{q_{\frac{2}{C\varepsilon n^{\prime}}(Q)},q_{1-\frac{2}{C\varepsilon n^{\prime}}}(Q)},\hat{Q}_{n}|_{q_{\frac{2}{C\varepsilon n^{\prime}}(Q)},q_{1-\frac{2}{C\varepsilon n^{\prime}}}(Q)})\right]$ $\displaystyle=\int_{q_{\frac{2}{C\varepsilon n^{\prime}}(Q)}}^{q_{1-\frac{2}{C\varepsilon n^{\prime}}}(Q)}\mathbb{E}[|F_{Q}(t)-F_{\hat{Q}_{n}}(t)|]dt$ $\displaystyle\leq\int_{q_{\frac{1}{C\varepsilon n^{\prime}}(P)}}^{q_{1-\frac{1}{C\varepsilon n^{\prime}}}(P)}\mathbb{E}[|F_{Q}(t)-F_{\hat{Q}_{n}}(t)|]dt$ $\displaystyle\leq\int_{q_{\frac{1}{C\varepsilon n^{\prime}}(P)}}^{q_{1-\frac{1}{C\varepsilon n^{\prime}}}(P)}\min\left\\{2F_{Q}(t)[1-F_{Q}(t)],\sqrt{\frac{F_{Q}(t)[1-F_{Q}(t)]}{n}}\right\\}dt$ $\displaystyle\leq\int_{q_{\frac{1}{C\varepsilon n^{\prime}}(P)}}^{q_{1-\frac{1}{C\varepsilon
# Anderson localization of a Rydberg electron Matthew T. Eiles<EMAIL_ADDRESS>Max-Planck-Institut für Physik komplexer Systeme, Nöthnitzer Str. 38, D-01187 Dresden, Germany Alexander Eisfeld <EMAIL_ADDRESS>Max-Planck-Institut für Physik komplexer Systeme, Nöthnitzer Str. 38, D-01187 Dresden, Germany Jan M. Rost<EMAIL_ADDRESS>Max-Planck-Institut für Physik komplexer Systeme, Nöthnitzer Str. 38, D-01187 Dresden, Germany ###### Abstract Highly excited Rydberg atoms inherit their level structure, symmetries, and scaling behavior from the hydrogen atom. We will demonstrate that these fundamental properties enable a thermodynamic limit of a single Rydberg atom subjected to interactions with nearby ground state atoms. The limit is reached by simultaneously increasing the number of ground state atoms and the level of excitation of the Rydberg atom, for which the Coulomb potential supplies infinitely many and highly degenerate excited states. Our study reveals a surprising connection to an archetypal concept of condensed matter physics, Anderson localization, facilitated by a direct mapping between the Rydberg atom’s electronic spectrum and the spectrum of a tight-binding Hamiltonian. The hopping amplitudes of this tight-binding system are determined by the arrangement of ground state atoms and can range from nearest-neighbor to power-law-tailed to effectively infinite-range, giving rise to different localization scenarios. For arrangements yielding nearest-neighbor hopping amplitudes we identify clear signatures of the Anderson localization of the Rydberg electron. The origin of quantum mechanics is inextricably linked to the bound state spectrum of hydrogen bohrConstitution1913 ; pauliUeber1926 ; schrodingerUndulatory1926 . The Coulomb potential supports an infinite series of discrete energy levels labeled by an integer-valued principal quantum number $\nu$. Because of hydrogen’s underlying $SO(4)$ symmetry, these levels are $\nu^{2}$-fold degenerate banderGroup1966 ; gallagherRydberg2005 . This enhances the effect of external perturbations, as evinced by the response of hydrogen atoms to electric and magnetic fields friedrichHydrogen1989a or to electron scattering gailitisInfluence1963 ; sadeghpourDominant1990 . The study of these aspects exposes deep connections between the excited electronic structure of hydrogen and seemingly disparate physical arenas. Compelling examples include the hydrogen atom in a strong magnetic field, which is fundamental to quantum chaos and non-linear dynamics delandeQuantum1986 ; wintgenIrregular1989 , and the organization of doubly-excited H- states into multiplets, a phenomenon akin to the symmetry classifications ubiquitous in elementary particle physics tannerTheory2000a . In this article we forge a connection between the hydrogen atom and condensed matter via the concept of Anderson localization. Hydrogen’s properties are shared by the highly excited Rydberg states of more complicated atomic species since the influence of the multielectron core essentially vanishes for these exaggerated states characterized by large $\nu$ values, almost millisecond lifetimes, and nearly micron-scale orbits seatonQuantum1983 ; gallagherRydberg2005 . Localized perturbations to the Rydberg atom, caused by the scattering of its electron off of one or more ground state atoms – denoted scatterers in the following – can be described via an effective interaction encapsulated by Fermi’s zero-range contact pseudopotential fermiSopra2008 .
By mixing the degenerate states within each $\nu$ manifold, the otherwise weak interaction of the scatterers can have a surprisingly strong effect greeneCreation2000 ; shafferUltracold2018a . Recently, nearly arbitrarily shaped optical tweezer arrays have become available bernienProbing2017 ; browaeysManybody2020 ; bluvsteinControlling2021a . These allow for the possibility of perturbing a Rydberg atom with a predetermined configuration of point-like impurities hunterRydberg2020a . Figure 1a illustrates the modified level structure resulting from the immersion of $M$ scatterers within the Rydberg wave function. Many states in each $\nu$ manifold are not perturbed, but a subspace of dimension $M$ splits away and possesses a density of states which depends non-trivially on the scatterer arrangement hunterRydberg2020a . The spectrum of this perturbed subspace coincides identically with that of a tight-binding Hamiltonian eilesRing2020 $\hat{H}=\sum_{q=1}^{M}E_{q}|{q}\rangle\langle{q}|+\sum_{q=1}^{M}\sum_{q^{\prime}\neq q}^{M}V_{qq^{\prime}}|{q}\rangle\langle{q^{\prime}}|,$ (1) where $\\{|{q}\rangle\\}$ is a basis of wave functions localized on individual scatterers (sites). The on-site potentials $E_{q}$ and hopping amplitudes $V_{qq^{\prime}}$ arise from the Rydberg electron’s motion in the confluence of infinite-ranged Coulomb and zero-ranged electron-scatterer potentials. Equation 1 creates an unexpected conceptual link between a Rydberg atom interacting with many ground state atoms and the dynamics of a particle hopping through a lattice. Not only are the spectra identical, but the eigenstates in both representations share many important features, as illustrated in Fig. 1.
Figure 1: Energy landscape and wave functions of the perturbed Rydberg atom. a, The level structure of the perturbed Rydberg atom. The Coulomb potential $V(r)=-1/r$ supports an infinite bound spectrum $E_{\nu}=-1/2\nu^{2}$, denoted with blue lines. The length of each line represents the level degeneracy $D_{\nu}=\nu^{2}$ and similarly the typical size of the electronic states, $\langle r\rangle\sim\nu^{2}$. The inset highlights the densities of states (DoS) of three Rydberg levels when the atom is perturbed by a ring of $M$ scatterers with radius $2\nu^{2}$. A highly structured DoS consisting of $M$ perturbed states (orange) forms, shifted away from the unaffected $\nu^{2}-M$ states (blue). In the thermodynamic limit, $M,\nu\to\infty$, the bandwidth and center of mass of the shifted DoS are (within an overall scaling factor) independent of $M$ and $\nu$. b,c, The absolute values of the amplitudes of the electronic state (blue) and site representation (black spheres) for eigenstates located at the marked positions in the DoS for both periodic and disordered arrays. The scatterers are shown as orange spheres. Both representations exhibit the same behavior in the vicinity of the scatterers. d, An exemplary Rydberg basis state, which is spherically symmetric and delocalized over the scatterers. e, An exemplary trilobite state $|{T_{q}}\rangle$ associated with the scatterer $|{q}\rangle$, marked in red. The trilobite’s amplitude at $\vec{R}_{q}$ determines the on-site potential $E_{q}$, while its amplitude at $\vec{R}_{q^{\prime}}$ determines the hopping amplitude $V_{qq^{\prime}}$.
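For readers who wish to experiment with the tight-binding form of equation 1, the following minimal sketch (plain Python/NumPy, with placeholder on-site energies and hopping amplitudes rather than the Rydberg-derived ones discussed below) assembles such a Hamiltonian for a ring of $M$ sites and diagonalizes it.

```python
import numpy as np

def tight_binding_hamiltonian(onsite, hopping):
    """Dense matrix for Eq. (1): H = sum_q E_q |q><q| + sum_{q != q'} V_{qq'} |q><q'|.

    onsite  : array of shape (M,) with the on-site potentials E_q
    hopping : array of shape (M, M) with the hopping amplitudes V_{qq'}
              (its diagonal is overwritten by the on-site terms)
    """
    H = np.array(hopping, dtype=float)
    np.fill_diagonal(H, onsite)
    return H

# Illustrative example: M sites on a ring with nearest-neighbour hopping V only.
M, V = 60, -1.0
hopping = np.zeros((M, M))
for q in range(M):
    hopping[q, (q + 1) % M] = hopping[(q + 1) % M, q] = V
H = tight_binding_hamiltonian(np.zeros(M), hopping)
energies, states = np.linalg.eigh(H)   # eigenvalues and site-basis eigenvectors c_q^(k)
```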
We exploit this link to demonstrate that the Rydberg electron can undergo Anderson localization: the entire spectrum of electron eigenstates exponentially localizes in the presence of arbitrarily weak disorder in the thermodynamic limit of infinite system size andersonAbsence1958 ; thoulessElectrons1974 ; leeAnderson1981 ; abrahamsScaling1979 ; eversAnderson2008 . To realize the thermodynamic limit in the Rydberg system we determine a relationship between $M$ and $\nu$ such that increasing them in tandem – relying on the infinite series and scaling relations of Rydberg levels – leads to a well-defined Hamiltonian whose matrix elements are independent of its size. We study effectively one-dimensional localization by placing the scatterers on a ring around the Rydberg atom’s core, and then randomly disordering either their radial or angular positions. We show that different ring radii lead to different hopping amplitudes, ranging from the nearest-neighbor interactions conventionally studied to more unusual long-range and sign-changing interactions. This flexibility gives rise to a variety of Anderson models and provides insight into the circumstances under which the Rydberg electron localizes fully.
## The Perturbed Rydberg Atom
To demonstrate the equivalence between the spectrum of equation 1 and the Rydberg spectrum, we investigate the Hamiltonian of the Rydberg atom perturbed by $M$ scatterers. The perturbation is too weak to couple different $\nu$ manifolds. Thus, we consider only the $\nu^{2}$ degenerate states of a given manifold, labeling these with a collective index $i=\\{l,m\\}$, where $0\leq l\leq\nu-1$ and $|m|\leq l$ are the angular momentum quantum numbers. The Hamiltonian matrix elements $\mathcal{H}_{ii^{\prime}}=-\frac{1}{2\nu^{2}}\delta_{ii^{\prime}}+2\pi\sum_{q=1}^{M}\mathcal{W}_{iq}\mathcal{W}^{\dagger}_{qi^{\prime}}$ (2) are obtained by summing the Rydberg energy of the isolated atom and the interaction potential, a sum over the $M$ contact pseudopotentials. Here, and throughout, we use atomic units. The $\nu^{2}\times M$ rectangular matrix $\mathcal{W}$ is composed of Rydberg wave functions evaluated at each of the scatterers, i.e. $\mathcal{W}_{iq}=\phi^{*}_{\nu i}(\vec{R}_{q})$. More details about the derivation and validity of this Hamiltonian are provided in Supplementary Section 1. We have set the $s$-wave atom-electron scattering length to unity since it is identical for all atoms on the ring. To make the relationship between the perturbed Rydberg Hamiltonian $\mathcal{H}$ and the tight-binding Hamiltonian of equation 1 apparent, we introduce the so-called “trilobite” states greeneCreation2000 ; boothProduction2015a . The trilobite $|{T_{q}}\rangle=\sum_{i=1}^{\nu^{2}}\mathcal{W}_{qi}^{\dagger}|{i}\rangle$ is the perturbed eigenstate of the system having only one scatterer at $\vec{R}_{q}$ liuPolyatomic2006 ; eilesUltracold2016 ; eilesTrilobites2019 . Unlike the individual Rydberg eigenstates $|{i}\rangle$, which are spherically symmetric and extend over the entire scatterer array (Fig. 1d), the state $|{T_{q}}\rangle$ is peaked – but not completely localized – at the scatterer’s position $\vec{R}_{q}$ (Fig. 1e). We expand $\mathcal{H}$ into these trilobite states, and after simplifying the resulting generalized eigenvalue equation accounting for their non-orthogonality, we obtain a standard eigenvalue equation $H|{\Psi_{k}}\rangle=E_{k}|{\Psi_{k}}\rangle$.
The matrix elements of $H$, $H_{qq^{\prime}}=-\frac{1}{2\nu^{2}}\delta_{qq^{\prime}}+2\pi\sum_{i=1}^{\nu^{2}}\mathcal{W}_{qi}^{\dagger}\mathcal{W}_{iq^{\prime}},$ (3) exactly correspond to those in the Hamiltonian represented in the site basis given by equation 1. Fig. 1b,c displays, for both a periodic and a disordered system, exemplary eigenstates using this site basis (black spheres) and the full position-space representation of the electronic eigenstate (surface plots). This “trilobite” representation has additional conceptual and numerical advantages. Since the matrix element $H_{qq^{\prime}}/(2\pi)$ is the amplitude of the trilobite state $|{T_{q}}\rangle$ at the position of a different scatterer, $\vec{R}_{q^{\prime}}$, it can be estimated pictorially. For example, the trilobite in Fig. 1e shows that hopping is restricted to nearest-neighbor sites in this ring configuration. A further advantage is that the trilobite functions can be expressed using only $s$-wave Rydberg wave functions eilesUltracold2016 ; eilesTrilobites2019 . This simplifies calculations at large $\nu$ values and facilitates asymptotic expansions.
## Scaling to the thermodynamic limit
In a typical solid-state system described by a tight-binding Hamiltonian (eq. 1), the elements $E_{q}$ and $V_{qq^{\prime}}$ are independent of $M$ and the thermodynamic limit is reached by increasing the system’s size, i.e. $M\to\infty$. However, the matrix elements $H_{qq^{\prime}}$ of equation 3 depend strongly on both $\nu$ and $M$: the Rydberg atom’s size and energy scales are $\nu$-dependent, and the hopping amplitudes depend on the distance, inversely proportional to $M$, between scatterers. As an initial step in separating these scales, we accommodate the overall size of the Rydberg wave function by parameterizing the ring’s radius as $2\nu^{2}R$, where $R\in[0,1]$. This parametrization ensures that systems with different $\nu$ but identical $R$ values have similar properties eilesRing2020 , and the range of $R$ keeps the scatterers within the classically allowed region. We will discuss the specific cases $R=1$, $R=0.75$, and $R=0.5$ in detail. In a subsequent step, we eliminate the $M$-dependence at a coarse-graining level by fixing $M$ as a function of $\nu$ such that the inter-scatterer distance, and hence the hopping amplitudes, are invariant with respect to changes in $\nu$. The functional form of $M(\nu)$ hinges on the resolving power of the Rydberg wave functions. A useful heuristic is that Rydberg states can resolve as many in-plane scatterers as they have available azimuthal nodes, requiring a linear relationship $M(\nu)=\nu$ for most allowed $R$ values. We use this for $R=0.5$ and $R=0.75$. For $R\to 1$ those Rydberg states possessing the many azimuthal nodes needed to resolve scatterers become exponentially small. Thus, fewer scatterers can be resolved and a sublinear relationship is required. In particular, for the case $R=1$, we set $M(\nu)\sim\nu^{2/3}$ (specifically, $M=\lfloor 3\nu^{2/3}\rfloor$, where $\lfloor x\rfloor$ is the integer part of $x$). Finally, we extract the residual $\nu$-dependence of the matrix elements $H_{qq^{\prime}}$, which also depends on $R$. For $R=1$ we find that the matrix elements are proportional to $\nu^{-13/3}$, but for $R=0.75$ they are proportional to $\nu^{-4}$. The matrix elements of the $R=0.5$ case do not simultaneously possess a global $\nu$-dependence, as discussed in further detail below. This gives rise to interesting finite size effects.
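The parameterization described in this section can be written down compactly. The helper below is an illustrative sketch that follows the choices quoted in the text (ring radius $2\nu^{2}R$ in atomic units, $M(\nu)=\nu$ for $R=0.5$ and $R=0.75$, and $M(\nu)=\lfloor 3\nu^{2/3}\rfloor$ for $R=1$) and optionally adds angular disorder; the $\nu$-dependent rescaling of the radial disorder discussed later in the text is not reproduced here.

```python
import numpy as np

def ring_scatterers(nu, R, angular_jitter=0.0, rng=None):
    """Scatterer positions (x, y, z) on a ring of radius 2*nu^2*R (atomic units).

    M(nu) follows the text: M = nu for R = 0.5, 0.75 and M = floor(3*nu^(2/3)) for R = 1.
    angular_jitter sets the uniform angular displacement of each scatterer in units
    of the site spacing (illustrative disorder model only).
    """
    rng = np.random.default_rng() if rng is None else rng
    M = int(np.floor(3 * nu ** (2.0 / 3.0))) if np.isclose(R, 1.0) else int(nu)
    radius = 2.0 * nu ** 2 * R
    spacing = 2.0 * np.pi / M
    theta = spacing * (np.arange(M) + angular_jitter * rng.uniform(-0.5, 0.5, size=M))
    return radius * np.column_stack([np.cos(theta), np.sin(theta), np.zeros(M)])

positions = ring_scatterers(nu=30, R=1.0, angular_jitter=0.05)   # M = 28 sites here
```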
Figure 2: Characteristic energies and scaling laws for R = 1. a, Hopping amplitudes as a function of angle around the ring. The angular positions of the scatterers are marked with points. b,c, Dispersion relations for $30\leq\nu\leq 500$ in increments of 5. As $\nu$ increases these discrete spectra tend towards the continuous analytic dispersion relation obtained from the model Hamiltonian, shown as the dashed black curve in b. Figure 3: Localization behavior of the R = 1 scenario. a,b, the minimum, mean, and maximum values of the normalized participation ratios for radial and angular disorder, respectively. The dashed lines show the asymptotic behavior $\mathcal{P}\sim\nu^{\gamma}$, labeled by the numerical fit values for $\gamma$. c,d, the energy-resolved normalized participation ratio values for $30\leq\nu\leq 500$, using the exact model, and $10^{3}\leq\nu\leq 10^{5}$ (blue curves) using the asymptotic model. Note that the equivalent $M$ values are used as labels in d. Now, we are in a position to factor out an overall $\nu$-dependence such that the matrix elements $H_{qq^{\prime}}$, for fixed $R$ and $M(\nu)$, are independent of $\nu$. Taking advantage of the infinite series of Rydberg levels, the thermodynamic limit of a Rydberg atom is realized with $\nu\to\infty$. In Fig. 2 we illustrate this analysis for $R=1$. Fig. 2 (a) shows the angular dependence of the trilobite state for three different $\nu$ values. The appropriately scaled eigenspectra, shown in Fig. 2(b) for $\nu\in[30,500]$, are independent of $\nu$. Here, we have shown the eigenspectra for periodic and disordered scatterer arrangements, where disorder was introduced by random variation in the radial positions of the scatterers. The disorder scaling requires additional analysis since it is not clear a priori that the disorder in position has the same $\nu$-dependence as the resulting disorder in the matrix elements. For example, although angular disorder leads to first-order energy disorder shifts with the same $\nu$-scaling for all considered $R$ values, in the $R=1$ and $R=0.5$ cases radial disorder leads to additional $\nu$-dependencies that must be removed by scaling the positional disorder with $\nu$. For $R=1$ the radial disorder strength must be diminished as $\nu^{-2/3}$. These details are discussed further in Supplementary Section 6. Figure 4: Characteristics of the perturbed Rydberg atom at R = 0.75 and R = 0.5. a, Eigenspectra of the $R=0.75$ ring for $30\leq\nu\leq 500$ plotted as a function of wave number (the mirror-image $k>0$ spectra are not shown). The black curve shows the approximate spectrum obtained in the $\nu\to\infty$ limit, taking the asymptotic form of the hopping to be $V_{qq^{\prime}}\sim\nu^{-4}\text{sinc}[\pi\sqrt{3}(q-q^{\prime})]$. b, The $R=0.75$, $\nu=30$ trilobite state. c, $\mathcal{P}$ distributions for $\nu=200$, $350$, and $500$ with fixed radial disorder. d, Eigenspectra of the $R=0.5$ ring for $30\leq\nu\leq 500$, plotted as a function of wave number. Since the spectra are symmetric about $k=0$, spectra for even $\nu$ are plotted only for negative $k$ values and odd $\nu$ values are plotted only for positive $k$ values. The black lines give the flat band and on-site energies for $\nu=500$. e, The $R=0.5$ trilobite state for $\nu=30$ and two exemplary eigenstates of the disordered system, in both cases using $\nu=30$. f, $\mathcal{P}$ distributions for several $\nu$ values with fixed angle disorder. 
## Transition from extended to localized states
To quantify the extent of localization and systematically show that all eigenstates localize in the thermodynamic limit, one typically examines statistical properties of the eigenspectrum oganesyanLocalization2007 ; shklovskiiStatistics1993 or, as we do here, the eigenstates directly kramerLocalization1993 ; mirlinStatistics2000 . The normalized participation ratio, defined for the eigenstate $|{\Psi_{k}}\rangle=\sum_{q=1}^{M}c_{q}^{(k)}|{q}\rangle$ as $\mathcal{P}(k)=\left(M\sum_{q=1}^{M}|c_{q}^{(k)}|^{4}\right)^{-1},$ (4) is a good indicator of the localization length. In a maximally localized (delocalized) state, $\mathcal{P}\to 1/M$ ($\mathcal{P}\to 1$). Perfectly delocalized states with strictly real coefficients are characterized by $\mathcal{P}(k)=2/3$, and therefore we consider states with $\mathcal{P}\geq 2/3$ to be extended. As we show in Supplementary Section 3, the participation ratio computed in the site basis is equivalent to a spatial participation ratio measured at the scatterer positions, and thus localization occurs simultaneously in both representations.
## Anderson localization in the R = 1 ring
As implied by the nearest-neighbor hopping terms revealed by Fig. 2, the $R=1$ case allows for a direct comparison with the standard Anderson model. Using exact diagonalization, we compute the eigenspectrum and participation ratios for both radial and angular disorder, averaged over $\mathcal{N}$ disorder realizations (see Supplementary Section 9 for details). To extrapolate our numerical results to the thermodynamic limit it is necessary to study very high $\nu$. The tight-binding formalism provides a clear numerical advantage over brute-force diagonalization of the Rydberg Hamiltonian (Eq. 2), since the matrix dimension $M(\nu)$ is always smaller than the $\nu^{2}$ size of the Rydberg subspace. For the largest $\nu$ studied here we diagonalize a matrix of dimension $10^{5}$, which in the Rydberg representation has dimension $10^{11}$. In addition to the exact spectrum of $H$ computed for $\nu\leq 500$, we also compute the spectrum of a model Hamiltonian containing only nearest and next-nearest-neighbor hopping amplitudes extracted from the asymptotic $\nu\to\infty$ limit of $H$. These spectra are in excellent agreement, even for the smallest considered $\nu$ values. Based on this favorable comparison, we use the model Hamiltonian for $\nu>500$ where the exact Hamiltonian becomes numerically cumbersome. The key results of this study are displayed in Fig. 3. In Fig. 3 (a) and (b) we characterize the extent of localization by using the minimum, mean, and maximum values of $\mathcal{P}$ as a function of $\nu$. The fixed disorder strength is sufficiently weak such that extended states having $\mathcal{P}>2/3$ are still present for the lowest $\nu$ values. Numerical power-law fits of this data show that $\langle\mathcal{P}\rangle\sim\nu^{-2/3}\sim M^{-1}$, where $\langle\cdot\rangle$ denotes an average over the entire spectrum and disorder realizations. This numerical evidence clearly indicates that all eigenstates of the $R=1$ ring localize in the thermodynamic limit, confirming the existence of Anderson localization. The energy-resolved participation ratios provide insight into the role of correlations in the disorder distributions and the distinction between on- and off-diagonal disorder titovNonuniversality2005 ; kuhlEnhancement2008 ; izrailevAnomalous2012 ; soukoulisOffdiagonal1981 .
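As an illustrative aside, the workflow used in this section (diagonalize a disordered ring Hamiltonian, evaluate the normalized participation ratio of equation 4 for every eigenstate, and average over disorder realizations) can be sketched in a few lines. The example below uses a generic one-dimensional Anderson model with nearest-neighbor hopping and uncorrelated on-site disorder; the parameters are placeholders and are not the $\nu$-scaled Rydberg matrix elements.

```python
import numpy as np

def participation_ratios(M, W, V=-1.0, realizations=50, seed=0):
    """Normalized participation ratio, Eq. (4), for a 1-D Anderson ring.

    Nearest-neighbour hopping V and on-site energies drawn uniformly from
    [-W/2, W/2] (illustrative parameters). Returns an array of shape
    (realizations, M) containing P(k) for every eigenstate.
    """
    rng = np.random.default_rng(seed)
    P = np.empty((realizations, M))
    for r in range(realizations):
        H = np.zeros((M, M))
        np.fill_diagonal(H, rng.uniform(-W / 2, W / 2, size=M))
        for q in range(M):
            H[q, (q + 1) % M] = H[(q + 1) % M, q] = V
        _, c = np.linalg.eigh(H)                       # columns are eigenvectors c_q^(k)
        P[r] = 1.0 / (M * np.sum(np.abs(c) ** 4, axis=0))
    return P

P = participation_ratios(M=200, W=0.5)
print(P.min(), P.mean(), P.max())   # at fixed W, all states localize (P -> 1/M) as M grows
```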
The positively correlated off-diagonal radial disorder manifests itself in the pronounced asymmetry in the distribution in Fig. 3 (c), especially in contrast to the case in Fig. 3 (d) which has anti-correlated off-diagonal disorder. The residual asymmetry still visible in Fig. 3 arises from the negative next-nearest-neighbor hopping term. A sharp feature in the middle of the band depends on the parity of $M$: when $M$ is odd (even) there is a minimum (maximum). A state with infinite localization length is predicted to occur at the exact band middle in one-dimensional models with off-diagonal disorder soukoulisOffdiagonal1981 ; theodorouExtended1976 ; brouwerDensity2000 ; this could be the source of this feature, which is further modified by the correlated disorder.
## Long-ranged hopping when R < 1
To illustrate the diversity of localization scenarios possible with a perturbed Rydberg atom, we briefly discuss two other ring sizes, $R=0.75$ and $R=0.5$. As seen in the trilobite functions plotted in Fig. 4 (b) and (e), the hopping terms for these cases extend over some ($R=0.75$) and all ($R=0.5$) sites. We will first contrast the disorder-free properties of these two systems before discussing their responses to the presence of disorder. For $R=0.75$, the hopping terms oscillate as a function of $|q-q^{\prime}|$ before decaying rapidly around $|q-q^{\prime}|\approx M/10$. As $\nu\to\infty$, the continuous form of the hopping amplitudes tends asymptotically toward a sinc function truncated at a finite range, as detailed in Supplementary Section 7. As shown in Fig. 4(a) this results in an eigenspectrum closely approximated by a box function, whose flat bands are broadened by the deviations from the asymptotic form of the hopping amplitudes. Note that the spectra are only shown for half the range of allowed wave numbers, since they are symmetric about $k=0$. The $R=0.5$ hopping amplitudes oscillate over the entire ring, rising to a maximum at the opposite side (see Fig. 4(e)). The effect is particularly strong for even values of $\nu$, leading to a dimerization of the system phillipsLocalization1991 and strongly impacting the observed disorder-free eigenspectra shown in Fig. 4(d). They condense into two relatively flat bands separated by a wide band gap when $\nu$ is even, or a single band when $\nu$ is odd. We find that the dominant hopping amplitude $V_{q,q+M/2}$ scales as $\nu^{-13/3}$, while the other hopping amplitudes scale as $\nu^{-5}$. When $\nu$ is even, a coupled-dimer model shows that the width of the band gap scales as $\nu^{-1/3}$ and thus closes in the thermodynamic limit. The strongly split levels around $k=0$ are manifestations of the all-to-all coupling, and survive in the thermodynamic limit, as shown for a simplex model ossipovAnderson2013 . Finally, we analyze which phenomena in the disordered system arise because of the different properties described above. Fig. 4(c) shows three $\mathcal{P}$ distributions for the radial-disordered $R=0.75$ system. The regions with nearly flat bands localize uniformly. The levels lying in the band gap are well-separated in energy, impeding localization, but as $\nu$ increases the gaps between these levels are found numerically to close approximately as $\nu^{-0.27}$. This causes the band of extended states visible in Fig. 4(c) around $i/M=0.75$ to shrink as $\nu$ increases, suggesting that the boundaries of this region are not mobility edges but rather finite size effects.
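The box-like clean spectrum described above for $R=0.75$ can be traced to the circulant structure of the disorder-free ring: the dispersion is the discrete Fourier transform of the hopping sequence, so a truncated sinc-shaped hopping yields nearly flat plateaus of eigenvalues. A minimal numerical check, with an illustrative truncation range and without the overall $\nu^{-4}$ prefactor:

```python
import numpy as np

M = 300
sep = np.arange(M)
sep = np.minimum(sep, M - sep)                  # ring separation |q - q'|
hop = np.sinc(np.sqrt(3.0) * sep)               # np.sinc(x) = sin(pi*x)/(pi*x)
hop[0] = 0.0                                    # drop the on-site term
hop[sep > M // 10] = 0.0                        # truncate at ~M/10, as described above
eigvals = np.sort(np.real(np.fft.fft(hop)))     # circulant eigenvalues = DFT of first row
# eigvals clusters into nearly flat plateaus, echoing the box-like dispersion of Fig. 4(a).
```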
The $\mathcal{P}$ distributions for the angular-disordered $R=0.5$ system are shown in Fig. 4(f). As in the previous cases, localization occurs most rapidly at band edges: the band gap present in the even-$\nu$ spectrum leads to a pronounced valley in the participation ratio that is absent in the odd $\nu$ case. Fig. 4(e) shows two exemplary $\nu=30$ eigenstates from this valley. These are approximately symmetric under reflection and localize on two opposite sites due to the dominant opposite-neighbor coupling. Although the overall $\mathcal{P}$ distributions shrink to lower values as $\nu$ increases, we find that states near $k=0$, for this disorder strength and range of $\nu$, appear to remain extended. This is akin to the behavior of systems with sufficiently long-range power-law interactions, which have an extended state at the band edge in the thermodynamic limit rodriguezAnderson2003a ; demouraLocalization2005a ; mirlinTransition1996 . However, these results cannot be applied so simply to the Rydberg system for which long-range correlation and off-diagonal disorder can enhance localization nosovCorrelationinduced2019a .
## Outlook
By uncovering and exploiting the surprising relationship between the electronic eigenstates of a perturbed Rydberg atom and those of a tight-binding Hamiltonian, we have connected two paradigmatic concepts in atomic and condensed matter physics, showing that the Rydberg electron of a hydrogen-like atom can undergo Anderson localization. This mapping is contingent on two atypical conditions in a single-particle system: high degeneracy and an infinite spectrum of bound states. Bertrand’s theorem states that the only central force potentials in which all bound orbits are closed are the Coulomb and harmonic oscillator potentials bertrand1873theoreme ; quantum mechanically, this implies that these are unique in providing both the requisite degeneracy and infinite spectrum. We expect that the states of a quantum harmonic oscillator will localize under similar conditions as discussed here, which may also further elucidate the supersymmetric links between these systems alankosteleckySupersymmetry1985 . The study of the two-dimensional hydrogen atom or elliptical harmonic oscillators could reveal the role of inherent symmetry properties of the underlying structure in the localization properties keski-rahkonenQuantum2019 . The ring of ground state atoms is not the only interesting implementation of a perturbed Rydberg atom. A variety of quasi two-dimensional systems could be constructed with scatterers arranged into a spherical shell, staggered, stacked, or intersecting rings, or a helix. It is impossible to realize an analogue of a three-dimensional lattice since scatterers at different distances from the radial core would have wildly different on-site energies and hopping amplitudes. A random three-dimensional system, such as an ultracold gas, corresponds to a tight-binding Hamiltonian characterized by strong on-site disorder and a complicated set of strongly disordered hopping amplitudes. Such a system will exhibit both localized states arising from strongly coupled spatial clusters of scatterers and delocalized states resulting from the very long-ranged coupling between sites luukkoPolyatomic2017 ; abumwisExtended2020 . The experimental realization of this concept involves tradeoffs between the challenges of preparing and manipulating very highly excited atoms and the difficulty of positioning ground state atoms.
Experimental signatures of the localization length are provided by observable properties such as the photoionization rate or dipole moments of the eigenstates, which will differ dramatically depending on the wave function extent. ###### Acknowledgements. Acknowledgements: The authors are grateful for numerous valuable discussions with P. Giannakeas and A. Hunter. M.T.E. and A.E. thank I. Khaymovich for useful discussions regarding long-range hopping. M.T.E acknowledges partial support from the Alexander von Humboldt Stiftung. A.E. acknowledges support from the DFG via a Heisenberg fellowship (Grant No EI 872/5-1) ## References * (1) Bohr, N. I. On the constitution of atoms and molecules. _Phil. Mag._ 26, 1–25 (1913). * (2) Pauli, W. Über das Wasserstoffspektrum vom Standpunkt der neuen Quantenmechanik. _Zeitschrift für Physik A Hadrons and nuclei_ 36, 336–363 (1926). * (3) Schrödinger, E. An Undulatory theory of the mechanics of atoms and molecules. _Physical Review_ 28, 1049–1070 (1926). * (4) Bander, M. & Itzykson, C. Group theory and the hydrogen atom (I). _Reviews of Modern Physics_ 38, 330–345 (1966). * (5) Gallagher, T. F. _Rydberg Atoms_ (Cambridge University Press, 2005). * (6) Friedrich, H. & Wintgen, H. The hydrogen atom in a uniform magnetic field — An example of chaos. _Physics Reports_ 183, 37–79 (1989). * (7) Gailitis, M. & Damburg, R. The influence of close coupling on the threshold behaviour of cross sections of electron-hydrogen scattering. _Proceedings of the Physical Society_ 82, 192–200 (1963). * (8) Sadeghpour, H. R. & Greene, C. H. Dominant photodetachment channels in H-. _Physical Review Letters_ 65, 313–316 (1990). * (9) Delande, D. & Gay, J. C. Quantum chaos and statistical properties of energy levels: Numerical study of the hydrogen atom in a magnetic field. _Physical Review Letters_ 57, 2006–2009 (1986). * (10) Wintgen, D. & Hönig, A. Irregular wave functions of a hydrogen atom in a uniform magnetic field. _Physical Review Letters_ 63, 1467–1470 (1989). * (11) Tanner, G., Richter, K. & Rost, J.-M. The theory of two-electron atoms: Between ground state and complete fragmentation. _Reviews of Modern Physics_ 72, 497–544 (2000). * (12) Seaton, M. J. Quantum defect theory. _Reports on Progress in Physics_ 46, 167–257 (1983). * (13) Fermi, E. Sopra lo Spostamento per Pressione delle Righe Elevate delle Serie Spettrali. _Il Nuovo Cimento (1924-1942)_ 11, 157 (2008). * (14) Greene, C. H., Dickinson, A. S. & Sadeghpour, H. R. Creation of polar and nonpolar ultra-long-range Rydberg molecules. _Physical Review Letters_ 85, 2458–2461 (2000). * (15) Shaffer, J. P., Rittenhouse, S. T. & Sadeghpour, H. R. Ultracold Rydberg molecules. _Nature Communications_ 9, 1965 (2018). * (16) Bernien, H. _et al._ Probing many-body dynamics on a 51-atom quantum simulator. _Nature_ 551, 579–584 (2017). * (17) Browaeys, A. & Lahaye, T. Many-body physics with individually controlled Rydberg atoms. _Nature Physics_ 16, 132–142 (2020). * (18) Bluvstein, D. _et al._ Controlling quantum many-body dynamics in driven Rydberg atom arrays. _Science_ 371, 1355–1359 (2021). * (19) Hunter, A. L., Eiles, M. T., Eisfeld, A. & Rost, J. M. Rydberg Composites. _Physical Review X_ 10, 031046 (2020). * (20) Eiles, M. T., Hunter, A. L. & Rost, J. M. Ring Rydberg composites. _Journal of Physics B: Atomic, Molecular and Optical Physics_ 53, 054001 (2020). * (21) Anderson, P. W. Absence of diffusion in certain random lattices. _Physical Review_ 109, 1492–1505 (1958). * (22) Thouless, D. J. 
Electrons in disordered systems and the theory of localization. _Physics Reports_ 13, 93–142 (1974). * (23) Lee, P. A. & Fisher, D. S. Anderson localization in two dimensions. _Physical Review Letters_ 47, 882–885 (1981). * (24) Abrahams, E., Anderson, P. W., Licciardello, D. C. & Ramakrishnan, T. V. Scaling theory of localization: Absence of quantum diffusion in two dimensions. _Physical Review Letters_ 42, 673–676 (1979). * (25) Evers, F. & Mirlin, A. D. Anderson transitions. _Reviews of Modern Physics_ 80, 1355–1417 (2008). * (26) Booth, D., Rittenhouse, S. T., Yang, J., Sadeghpour, H. R. & Shaffer, J. P. Production of trilobite Rydberg molecule dimers with kilo-Debye permanent electric dipole moments. _Science_ 348, 99–102 (2015). * (27) Liu, I. C. & Rost, J. M. Polyatomic molecules formed with a Rydberg atom in an ultracold environment. _The European Physical Journal D - Atomic, Molecular, Optical and Plasma Physics_ 40, 65–71 (2006). * (28) Eiles, M. T., Pérez-Ríos, J., Robicheaux, F. & Greene, C. H. Ultracold molecular Rydberg physics in a high density environment. _Journal of Physics B: Atomic, Molecular and Optical Physics_ 49, 114005 (2016). * (29) Eiles, M. T. Trilobites, butterflies, and other exotic specimens of long-range Rydberg molecules. _Journal of Physics B: Atomic, Molecular and Optical Physics_ 52, 113001 (2019). * (30) Oganesyan, V. & Huse, D. A. Localization of interacting fermions at high temperature. _Physical Review B_ 75, 155111 (2007). * (31) Shklovskii, B. I., Shapiro, B., Sears, B. R., Lambrianides, P. & Shore, H. B. Statistics of spectra of disordered systems near the metal-insulator transition. _Physical Review B_ 47, 11487–11490 (1993). * (32) Kramer, B. & MacKinnon, A. Localization: Theory and experiment. _Reports on Progress in Physics_ 56, 1469–1564 (1993). * (33) Mirlin, A. D. Statistics of energy levels and eigenfunctions in disordered systems. _Physics Reports_ 326, 259–382 (2000). * (34) Titov, M. & Schomerus, H. Nonuniversality of Anderson localization in short-range correlated disorder. _Physical Review Letters_ 95, 126602 (2005). * (35) Kuhl, U., Izrailev, F. M. & Krokhin, A. A. Enhancement of localization in one-dimensional random potentials with long-range correlations. _Physical Review Letters_ 100, 126402 (2008). * (36) Izrailev, F. M., Krokhin, A. A. & Makarov, N. M. Anomalous localization in low-dimensional systems with correlated disorder. _Physics Reports_ 512, 125–254 (2012). * (37) Soukoulis, C. M. & Economou, E. N. Off-diagonal disorder in one-dimensional systems. _Physical Review B_ 24, 5698–5702 (1981). * (38) Theodorou, G. & Cohen, M. H. Extended states in a one-demensional system with off-diagonal disorder. _Physical Review B_ 13, 4597–4601 (1976). * (39) Brouwer, P. W., Mudry, C. & Furusaki, A. Density of states in coupled chains with off-diagonal disorder. _Physical Review Letters_ 84, 2913–2916 (2000). * (40) Phillips, P. & Wu, H.-L. Localization and its absence: A new metallic state for conducting polymers. _Science_ 252, 1805–1812 (1991). * (41) Ossipov, A. Anderson localization on a simplex. _Journal of Physics A: Mathematical and Theoretical_ 46, 105001 (2013). * (42) Rodríguez, A. _et al._ Anderson transition in low-dimensional disordered systems driven by long-range nonrandom hopping. _Physical Review Letters_ 90, 027404 (2003). * (43) de Moura, F. A. B. F., Malyshev, A. V., Lyra, M. L., Malyshev, V. A. & Domínguez-Adame, F. 
Localization properties of a one-dimensional tight-binding model with nonrandom long-range intersite interactions. _Physical Review B_ 71, 174203 (2005). * (44) Mirlin, A. D., Fyodorov, Y. V., Dittes, F.-M., Quezada, J. & Seligman, T. H. Transition from localized to extended eigenstates in the ensemble of power-law random banded matrices. _Physical Review E_ 54, 3221–3230 (1996). * (45) Nosov, P. A., Khaymovich, I. M. & Kravtsov, V. E. Correlation-induced localization. _Physical Review B_ 99, 104203 (2019). * (46) Bertrand, J. Théoreme relatif au mouvement d’un point attiré vers un centre fixe. _CR Acad. Sci_ 77, 2 (1873). * (47) Alan Kostelecký, V., Martin Nieto, M. & Rodney Truax, D. Supersymmetry and the relationship between the Coulomb and oscillator problems in arbitrary dimensions. _Physical Review D_ 32, 2627–2633 (1985). * (48) Keski-Rahkonen, J., Ruhanen, A., Heller, E. J. & Räsänen, E. Quantum Lissajous scars. _Physical Review Letters_ 123, 214101 (2019). * (49) Luukko, P. J. J. & Rost, J.-M. Polyatomic trilobite Rydberg molecules in a dense random gas. _Physical Review Letters_ 119, 203001 (2017). * (50) Abumwis, G., Eiles, M. T. & Eisfeld, A. Extended coherently delocalized states in a frozen Rydberg gas. _Physical Review Letters_ 124, 193401 (2020). * (51) Omont, A. On the theory of collisions of atoms in Rydberg states with neutral particles. _Journal de Physique_ 38, 1343–1359 (1977). ## I Introduction and contents This supplementary material aims at facilitating access to details of the main paper on several issues and is accordingly structured into different parts. Section 1 provides more detailed background for the Hamiltonian and its matrix elements in the Rydberg state representation. In particular, various approximations used in defining this Hamiltonian as well as its matrix elements (equation 2) are addressed. Section 2 reveals the explicit steps regarding the transformation into the trilobite basis, leading from 2 in the main text to 3. Section 3 derives several important relationships between the eigenvectors in the two representations and measurables constructed from these eigenvectors, proving that the localization of the wave function in the site basis corresponds to localization in position space. Section 4 re-examines the matrix elements, specializing to the ring geometry studied here, and Section 5 derives the eigenspectrum of that Hamiltonian for the reader’s convenience. Sections 6-8 provide additional details on the properties of the $R=1,R=0.75$ and $R=0.5$ systems, respectively, in each case with and without disorder. Section 9 contains some numerical details. ## II (1) Hamiltonian in the Rydberg wave function basis We write the Hamiltonian of the perturbed Rydberg Hamiltonian in anticipation of its expansion into the basis of Rydberg states: $H=-\sum_{lm}\frac{|{\nu lm}\rangle\langle{\nu lm}|}{2(\nu-\mu_{l})^{2}}+2\pi\sum_{q=1}^{M}a_{s}[k(R_{q})]|{\vec{R}_{q}}\rangle\langle{\vec{R}_{q}}|.$ (5) The first term of Eq. 5 is the energy spectrum of the isolated Rydberg atom, here including also the effect of short-range non-hydrogenic interactions via the quantum defects $\mu_{l}$ These are typically non-zero only for $l<3$ in alkali atoms, and for simplicity we just set all $\mu_{l}=0$. The effect of non-zero quantum defects can be included by using the protocol of Ref. eilesTrilobites2019 or approximately neglected by removing states with $l<3$ from the basis $\\{i\\}=\\{l,m\\}$ in the text. The second term of Eq. 
5 describes the electron-scatterer interaction using the Fermi pseudopotential fermiSopra2008 . We include only $s$-wave scattering for two reasons. First, the low kinetic energies characteristic of the Rydberg electron suppress the influence of higher order partial waves. Second, the angular momentum quantum numbers $L$ characterizing these partial waves are approximately good quantum numbers of the whole system over a wide range of $\nu$ and $R$ values eilesTrilobites2019 , and thus to a good approximation the effects of higher order scattering terms can be treated independently. If desired, higher order partial waves can be included via the incorporation of terms developed by Omont omontTheory1977 and again using the protocol developed in Ref. eilesTrilobites2019 . The partial wave expansion used in deriving these pseudopotentials requires that energy-dependent phases be used. The $s$-wave scattering length $a_{s}[k(R_{q})]$ thus has an implicit dependence on the Rydberg-scatterer distance $R_{q}$ via the semi-classical momentum, $k^{2}=-\frac{1}{\nu^{2}}+\frac{2}{R_{q}}$. In the ring geometry, the scattering length is identical for all scatterers, and scales out of the problem completely. When the ring is not perfect, this no longer holds strictly, but the scattering length varies very slowly as a function of $R$. It is therefore a fine approximation to continue to assume that it is constant. As a result we have set this to unity in the main text of the paper. However, in Section II, we show how to include different scattering lengths within the same formalism. Starting in Section III, we again set it to unity. Since the effect of these Fermi pseudopotentials is generally weak, we truncate the Hilbert space to a single $\nu$ manifold. Note that, in the thermodynamic limit and with the geometries considered in the main text, this is guaranteed since the energy separation between Rydberg manifolds scales as $\nu^{-3}$ while the width and center of mass of the perturbed subspace scale, at most, as $\nu^{-4}$. Truncation of the basis to a single degenerate manifold implies that the first term of equation 5 contributes only an irrelevant energy offset, which we set to zero in the following. It also determines the set of Rydberg basis states having the same principal quantum number $\nu$ but different angular momentum quantum numbers $l$ and $m$ with which to represent the perturbation potential. We utilize the shorthand $|{i}\rangle=|{\nu lm}\rangle$ and $|{i^{\prime}}\rangle=|{\nu l^{\prime}m^{\prime}}\rangle$ to describe the Rydberg basis, whose position-space representation is given by the three-dimensional hydrogenic wave functions $\langle{\vec{r}}|{\nu lm}\rangle=\phi_{\nu lm}(\vec{r})=\frac{u_{\nu l}(r)}{r}Y_{lm}(\hat{r}).$ (6) The matrix elements of the Hamiltonian in the Rydberg basis $|{i}\rangle$ are $H_{ii^{\prime}}=2\pi\sum_{q=1}^{M}a_{s}[k(R_{q})]\phi_{i}^{*}(\vec{R}_{q})\phi_{i^{\prime}}(\vec{R}_{q}).$ (7) As stated in the text, diagonalization of this $\nu^{2}\times\nu^{2}$ Hamiltonian leads to $M$ shifted eigenvalues and $\nu^{2}-M$ unshifted and degenerate levels. ## III (2) Transformation into the trilobite basis In this section, we see how to obtain only the $M$ non-zero eigenvalues directly. 
We absorb a factor $\sqrt{a_{s}[k(R_{q})]}$ into each wave function $\phi_{i}(\vec{R}_{q})$, and define the rectangular $\nu^{2}\times M$ matrix $\mathcal{W}_{iq}=\sqrt{a_{s}[k(R_{q})]}\phi_{i}^{*}(\vec{R}_{q}).$ (8) Note that care must be taken in the following equations if $a_{s}[k(R)]<0$, since then it is vital to not complex conjugate $\sqrt{a_{s}[k(R_{q})]}$ in $\mathcal{W}^{\dagger}$. To avoid this minor complication, in the following we simply assume $a_{s}[k(R_{q})]>0$, i.e. the system is never constructed such that any of the perturbers sits at a distance from the Rydberg core where the Ramsauer-Townsend minimum energy condition is met. The $\mathcal{W}$ matrices allow us to write the Hamiltonian as a matrix product, using Einstein notation when summing over repeated indices. Keep in mind that sums over $q$ or $p$ range from $1$ to $M$ while sums over $i$ range from $1$ to $\nu^{2}$. The Hamiltonian in the Rydberg basis representation is, in this more compact notation, $H_{ii^{\prime}}=2\pi\mathcal{W}_{iq}\mathcal{W}^{\dagger}_{qi^{\prime}}.$ (9) This is a separable matrix. Its rank is not equal to its dimension $\nu^{2}$; in fact, it has the same rank and eigenspectrum as the $M\times M$ matrix $\tilde{H}_{qq^{\prime}}=2\pi\mathcal{W}_{qi}^{\dagger}\mathcal{W}_{iq^{\prime}}.$ (10) We can also derive this by considering the eigenfunctions of a single scatterer, known as trilobite functions. In position space, the trilobite associated with the scatterer at position $\vec{R}_{q}$ is eilesTrilobites2019 $\langle\vec{r}|T_{q}\rangle=\phi_{i}(\vec{r})\mathcal{W}_{iq}.$ (11) Note that this is not normalized, nor is it orthogonal to other trilobites. The overlap of a trilobite at $R_{q}$ with one at $R_{q^{\prime}}$ is $\langle T_{q}|T_{q^{\prime}}\rangle=\mathcal{W}^{\dagger}_{qi}\left(\int\phi_{i}^{*}(\vec{r})\phi_{i^{\prime}}(\vec{r})\mathrm{d^{3}}{r}\right)\mathcal{W}_{i^{\prime}q^{\prime}}=\mathcal{W}^{\dagger}_{qi}\delta_{ii^{\prime}}\mathcal{W}_{i^{\prime}q^{\prime}}=\mathcal{W}^{\dagger}_{qi}\mathcal{W}_{iq^{\prime}}.$ (12) Now, let us define a set of trilobite quasiparticles $\\{|{T_{q}}\rangle\\}$ associated with the set of scatterer positions $\\{\vec{R}_{q}\\}$ into which we can expand the $M$-scatterer Hamiltonian. We have (in the first line the sum over $p$ is included explicitly) $\displaystyle\langle{T_{q}}|H|{T_{q^{\prime}}}\rangle$ $\displaystyle=\sum_{p=1}^{M}\int\phi_{i}^{*}(\vec{r})\mathcal{W}_{qi}^{\dagger}2\pi\sqrt{a_{s}[k(R_{p})]}\delta^{3}(\vec{r}-\vec{R}_{p})\sqrt{a_{s}[k(R_{p})]}\phi_{i^{\prime}}(\vec{r})\mathcal{W}_{i^{\prime}q^{\prime}}\mathrm{d^{3}}{r}$ $\displaystyle=2\pi\mathcal{W}^{\dagger}_{qi}\mathcal{W}_{ip}\mathcal{W}_{pi^{\prime}}^{\dagger}\mathcal{W}_{i^{\prime}q^{\prime}}.$ (13) From this we obtain a generalized eigenvalue equation, $\displaystyle 2\pi\mathcal{W}^{\dagger}_{qi}\mathcal{W}_{ip}\mathcal{W}_{pi^{\prime}}^{\dagger}\mathcal{W}_{i^{\prime}q^{\prime}}{v_{q^{\prime}}^{(k)}}=\epsilon^{(k)}\mathcal{W}^{\dagger}_{qi}\mathcal{W}_{iq^{\prime}}{v_{q^{\prime}}^{(k)}}.$ (14) Multiplying both sides by the inverse of the overlap matrix leads to $2\pi\mathcal{W}_{qi}^{\dagger}\mathcal{W}_{iq^{\prime}}{v_{q^{\prime}}^{(k)}}=\epsilon^{(k)}{v_{q}^{(k)}},$ (15) which is the same as when using Eq. (10). We therefore see that this transformation of the original Hamiltonian is equivalent to expanding the Hamiltonian in the basis of “trilobite” wave functions (Eq. 
14), followed by reducing the problem from a generalized eigenvalue equation to a standard eigenvalue equation by inverting the overlap matrix. The Hamiltonian can then be written in a transparent way as $\displaystyle\tilde{H}$ $\displaystyle=\sum_{q}E_{q}|{q}\rangle\langle{q}|+\sum_{q}\sum_{q^{\prime}\neq q}V_{qq^{\prime}}|{q}\rangle\langle{q^{\prime}}|,$ (16) where $E_{q}$ and $V_{qq^{\prime}}$ are the on-site energy of the trilobite state and the overlap between different trilobite states, respectively. This overlap between different trilobites $|{T_{q}}\rangle$ and $|{T_{q^{\prime}}}\rangle$ is, in turn, equal to the trilobite state $|{T_{q}}\rangle$ evaluated at the position of scatterer $q^{\prime}$. These overlaps can be evaluated using $\displaystyle V_{qq^{\prime}}$ $\displaystyle=2\pi\sum_{lm}\phi_{\nu lm}^{*}(\vec{R}_{q})\phi_{\nu lm}(\vec{R}_{q^{\prime}})$ (17) $\displaystyle E_{q}$ $\displaystyle=V_{qq}.$ (18) This trilobite transformation has therefore transformed the Hamiltonian $H$ into a type of tight-binding Hamiltonian $\tilde{H}$ describing a particle moving in a lattice of sites $q$ with on-site potentials $E_{q}$ and hopping amplitudes $V_{qq^{\prime}}$. Note that $\langle{q}|{q^{\prime}}\rangle=\delta_{qq^{\prime}}$, although $\langle{T_{q}}|{T_{q^{\prime}}}\rangle\neq\delta_{qq^{\prime}}$. ## IV (4) Comparison of observables computed in the two representations Clearly, no matter which Hamiltonian representation (Rydberg or trilobite) we diagonalize, the eigenspectrum is identical. However, the eigenvectors differ; the eigenvectors of Eq. 9 give the coefficients needed to build the position-space wave function as a linear combination of degenerate Rydberg states, while the eigenvectors of Eq. 10 give the decomposition into the tight-binding site basis. To connect these eigenvectors, which for the eigenvalue $\epsilon^{(k)}$ are denoted $v_{i}^{(k)}$ in the Rydberg basis and $\tilde{v}_{q}^{(k)}$ in the trilobite basis, we explicitly transform from the Rydberg Hamiltonian to the trilobite Hamiltonian in the eigenvalue equation, $\displaystyle 2\pi\mathcal{W}_{iq}\mathcal{W}^{\dagger}_{qi^{\prime}}v_{i^{\prime}}^{(k)}$ $\displaystyle=\epsilon^{(k)}v_{i}^{(k)}$ $\displaystyle\implies 2\pi\mathcal{W}^{\dagger}_{pi}\mathcal{W}_{iq}\left[\mathcal{W}^{\dagger}_{qi^{\prime}}v_{i^{\prime}}^{(k)}\right]$ $\displaystyle=\epsilon^{(k)}\mathcal{W}^{\dagger}_{pi}v_{i}^{(k)}$ $\displaystyle\implies 2\pi\mathcal{W}^{\dagger}_{pi}\mathcal{W}_{iq}\tilde{v}_{q}^{(k)}$ $\displaystyle=\epsilon^{(k)}\tilde{v}_{p}^{(k)},$ in other words, $\tilde{v}_{p}^{(k)}=\mathcal{W}^{\dagger}_{pi}v_{i}^{(k)}$. Going the other direction, we have $\displaystyle\mathcal{W}_{i^{\prime}p}\tilde{v}_{p}^{(k)}$ $\displaystyle=\mathcal{W}_{i^{\prime}p}\mathcal{W}^{\dagger}_{pi}v_{i}^{(k)}$ $\displaystyle\implies 2\pi\mathcal{W}_{i^{\prime}p}\tilde{v}_{p}^{(k)}$ $\displaystyle=\epsilon^{(k)}v_{i^{\prime}}^{(k)},$ since on the right hand side we have the eigenvalue equation for $v_{i}^{(k)}$. Therefore, $v_{i}^{(k)}=\frac{2\pi\mathcal{W}_{ip}\tilde{v}_{p}^{(k)}}{\epsilon^{(k)}}$. In practical calculations, we obtain the eigenvectors $\tilde{v}$ via numerical diagonalization of Eq. 10. These are normalized. To obtain the position representation of the wave function we need the transformation between the trilobite and Rydberg representations developed above as well as the normalization. 
This is given by evaluating $\displaystyle\delta_{kk^{\prime}}=\tilde{v}_{q}^{(k)*}\tilde{v}_{q}^{(k^{\prime})}$ $\displaystyle=\frac{v_{i^{\prime}}^{(k)*}\mathcal{W}_{i^{\prime}q}\mathcal{W}^{\dagger}_{qi}v_{i}^{(k)^{\prime}}}{N_{k}^{2}}$ $\displaystyle=\frac{\delta_{kk^{\prime}}\epsilon^{(k)}}{2\pi N_{k}^{2}},$ where $N_{k}$ is the desired normalization constant. We therefore see that the normalization constant is $N_{k}=\sqrt{\epsilon^{(k)}/2\pi}$. From the normalized eigenvectors $\tilde{v}_{p}^{(k)}$ obtained numerically, we get the normalized eigenvectors in the Rydberg basis, $v_{i}^{(k)}=(\epsilon^{(k)}/2\pi)^{-1/2}\mathcal{W}_{ip}\tilde{v}_{p}^{(k)}$. In addition to the wave functions and eigenvalues, which we use to analyze the structure and behavior of the system, we will study the normalized participation ratio (NPR, denoted $\mathcal{P}$), which characterizes the localization of the sytem. Since the wave functions in the trilobite basis give the close connection between our system and the tight-binding hamiltonian of the Anderson model, we will primarily study localization in this representation. Here, the normalized participation ratio of the $k$th eigenstate is defined $\mathcal{P}(k)=\left(M\sum_{q}^{M}|\tilde{v}_{q}^{(k)}|^{4}\right)^{-1}.$ (19) When $\tilde{v}_{q}^{(k)}$ is localized on a single scatterer $\vec{R}_{p}$, or in a single trilobite state $|{p}\rangle$, then $\tilde{v}_{q}^{(k)}=\delta_{qp}$ and $\mathcal{P}(k)=M^{-1}$. However, when $\tilde{v}_{q}^{(k)}$ is localized in a plane wave with amplitudes $e^{2\pi ikq}/\sqrt{M}$, then $\mathcal{P}(k)=(M\cdot M\cdot M^{-2})^{-1}=1$. Finally, if a real representation of the delocalized Bloch states is used (as sometimes occurs in the presence of degenerate states, as on the ring), then $\mathcal{P}(k)=(4M^{-1}\sum_{q}\cos^{4}(2\pi kq))^{-1}=(4M^{-1}(3M/8))^{-1}=2/3$. A necessary condition is that localization in this representation remains physically meaningful. We therefore should compute also the NPR in real space, in order to see if the electron localizes spatially on the scatterers, according to the eigenvector $\tilde{v}^{(k)}$. We define the spatial NPR, $\mathcal{P}_{spatial}(k)$, as a participation ratio of the electronic wave function evaluated at and summed only over scatterer positions, $\mathcal{P}_{spatial}(k)=\left(M\sum_{P}\left|\sum_{i}v_{i}^{(k)}\phi_{i}(\vec{R}_{P})\right|^{4}\right)^{-1}=\left(M\sum_{P}\left|\sum_{i}\mathcal{W}^{\dagger}_{Pi}v_{i}^{(k)}\right|^{4}\right)^{-1}.$ (20) We next write this in terms of the trilobite eigenvector, $\mathcal{P}_{spatial}(k)=\left(M\sum_{P}\left|\sum_{i,q}\mathcal{W}^{\dagger}_{Pi}\frac{\mathcal{W}_{iq}\tilde{v}_{q}^{(k)}}{[\epsilon^{(k)}]^{1/2}}\right|^{4}\right)^{-1}=\left(M\sum_{P}\left|\frac{\epsilon^{(k)}\tilde{v}_{P}^{(k)}}{[\epsilon^{(k)}]^{1/2}}\right|^{4}\right)^{-1}.$ (21) Thus, we find that $\mathcal{P}_{spatial}(k)=[\epsilon^{(k)}]^{-2}\mathcal{P}(k)$; localization in the trilobite basis implies spatial localization, albeit with a normalization factor given by the eigenenergy. This normalization factor can be removed by considering relative spatial probabilities in the formulation of the spatial participation ratio, since the most relevant localization measure is not localization relative to the entire allowed volume (which our previous measure characterizes) but instead localization within the spatial volume of interest, namely the 1D line. 
The probability of finding the electron on site $p$ is $\text{Prob}(p)=\left|\sum_{i}v_{i}^{(k)}\phi_{i}(\vec{R}_{p})\right|^{2}.$ (22) The probability of finding the electron at the position of one scatterer relative to the total probability of finding it at any perturber is then $P(p)=\frac{\text{Prob}(p)}{\sum_{P}\text{Prob}(p)}.$ (23) Note that this probability is normalized so that there is unit probability to find the electron on the ring of scatterers, i.e. $\sum_{p}P(p)=1$. Prob$(p)$ can be rewritten in terms of $\mathcal{W}$, which facillitates a series of simplifications, $\text{Prob}(p)=\left|\sum_{i}\mathcal{W}_{Pi}^{\dagger}v_{i}^{(k)}\right|^{2}=\left|\sum_{i,q}\mathcal{W}^{\dagger}_{Pi}\frac{\mathcal{W}_{iq}\tilde{v}_{q}^{(k)}}{[\epsilon^{(k)}]^{1/2}}\right|^{2}=\left|\frac{\epsilon^{(k)}\tilde{v}_{P}^{(k)}}{[\epsilon^{(k)}]^{1/2}}\right|^{2}=|\epsilon^{(k)}|\left|\tilde{v}_{P}^{(k)}\right|^{2},$ (24) where in the second step we transformed the eigenvector into the trilobite representation, and in the third step we recognized the appearance of the Hamiltonian matrix acting on the trilobite eigenvector. Using this in the spatial participation ratio definition gives $\mathcal{P}_{spatial}(k)=\left(M\sum_{P}\left|\frac{\text{Prob}(P)}{\sum_{P^{\prime}}\text{Prob}(P^{\prime})}\right|^{2}\right)^{-1}=\left(M\frac{\sum_{P}\left|\tilde{v}_{P}^{(k)}\right|^{4}}{\left|\sum_{P^{\prime}}|\tilde{v}_{P^{\prime}}^{(k)}|^{2}\right|^{2}}\right)^{-1}=\left(M\sum_{P}|\tilde{v}_{P}^{(k)}|^{4}\right)^{-1}=\mathcal{P}(k)$ (25) Thus, we see that the two participation ratios are equivalent. ## V IV. Matrix elements in the ring geometry Here we return to the matrix elements of the Rydberg hamiltonian in the trilobite representation, i.e. equation 17. In the ring geometry we study here, where the scatterers all lie in a plane and are equidistantly spaced, the on-site potentials $E_{q}$ and hopping amplitudes $V_{qq^{\prime}}$ can also be written $\displaystyle E_{q}$ $\displaystyle=\sum_{lm}\mathcal{R}_{lm}(R_{q},R_{q})$ (26) $\displaystyle V_{qq^{\prime}}$ $\displaystyle=\sum_{lm}\mathcal{R}_{lm}(R_{q},R_{q^{\prime}})e^{-\frac{im2\pi}{M}(q-q^{\prime})},$ (27) where $\mathcal{R}_{lm}(R_{q},R_{q^{\prime}})=\frac{(l+\frac{1}{2})(l-m)!(l+m)!}{\left[\left(\frac{l+m}{2}\right)!\left(2^{(l+1)}\frac{l-m}{2}\right)!\right]^{2}}\left[\frac{u_{\nu l}(2\nu^{2}R_{q})}{R_{q}\nu^{2}}\right]\left[\frac{u_{\nu l}(2\nu^{2}R_{q^{\prime}})}{R_{q^{\prime}}\nu^{2}}\right]$ (28) The expressions in 26 can be analytically summed, yielding $\displaystyle E_{q}$ $\displaystyle=\frac{(R_{q}^{-1}-1)[u_{\nu 0}(2\nu^{2}R_{q})]^{2}+\nu^{2}[u_{\nu 0}^{\prime}(2\nu^{2}R_{q})]^{2}}{2\nu^{2}}$ (29) $\displaystyle V_{qq^{\prime}}$ $\displaystyle=\frac{u_{\nu 0}^{\prime}(t_{-})u_{\nu 0}(t_{+})-u_{\nu 0}(t_{-})u_{\nu 0}^{\prime}(t_{+})}{2(t_{+}-t_{-})},$ (30) with $\displaystyle t_{\pm}$ $\displaystyle=\nu^{2}\left(R_{q}+R_{q^{\prime}}\pm\sqrt{R_{q}^{2}+R_{q^{\prime}}^{2}-2R_{q}R_{q^{\prime}}\cos(2\pi(q-q^{\prime})/M)}\right).$ (31) and where $u_{\nu l}(r)$ are the reduced hydrogen radial functions and $u^{\prime}_{\nu l}(r)=\frac{du_{\nu l}(r)}{dr}$. It is curious to note that only the $s$-wave radial wave function needs to be evaluated, making these expressions useful computationally, since only one (out of $\nu$ possible radial wave functions) function, along with its derivative, must be evaluated. 
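As an illustration of this computational convenience, the following minimal sketch evaluates $E_{q}$ and $V_{qq^{\prime}}$ directly from Eqs. (29)-(31). It assumes SciPy's generalized Laguerre polynomials for the reduced hydrogen $s$-wave radial function and a simple finite-difference derivative; the values of $\nu$, $M$, and $R$ are illustrative only, and $\nu$ is kept moderate to avoid loss of precision in the polynomial evaluation.

```python
import numpy as np
from scipy.special import eval_genlaguerre

def u_nu0(r, nu):
    """Reduced hydrogen radial function u_{nu,0}(r) = r * R_{nu,0}(r), atomic units."""
    # R_{n,0}(r) = (2 / n^{5/2}) * exp(-r/n) * L^{1}_{n-1}(2r/n)
    return r * (2.0 / nu**2.5) * np.exp(-r / nu) * eval_genlaguerre(nu - 1, 1, 2.0 * r / nu)

def u_nu0_prime(r, nu, h=1e-3):
    """Numerical derivative du/dr via central differences."""
    return (u_nu0(r + h, nu) - u_nu0(r - h, nu)) / (2.0 * h)

def onsite_energy(Rq, nu):
    """Eq. (29): E_q from u and u' evaluated at r = 2*nu^2*R_q."""
    r = 2.0 * nu**2 * Rq
    return ((1.0 / Rq - 1.0) * u_nu0(r, nu)**2 + nu**2 * u_nu0_prime(r, nu)**2) / (2.0 * nu**2)

def hopping(Rq, Rqp, dq, nu, M):
    """Eqs. (30)-(31): V_{qq'} for scatterers dq = q - q' sites apart on the ring."""
    chord = np.sqrt(Rq**2 + Rqp**2 - 2.0 * Rq * Rqp * np.cos(2.0 * np.pi * dq / M))
    t_plus = nu**2 * (Rq + Rqp + chord)
    t_minus = nu**2 * (Rq + Rqp - chord)
    return (u_nu0_prime(t_minus, nu) * u_nu0(t_plus, nu)
            - u_nu0(t_minus, nu) * u_nu0_prime(t_plus, nu)) / (2.0 * (t_plus - t_minus))

# Illustrative parameters (not the values used in the figures of the main text):
nu, M, R = 10, 14, 1.0
print("E_q       =", onsite_energy(R, nu))
print("V_{q,q+1} =", hopping(R, R, 1, nu, M))
```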
It is also useful in determining asymptotic properties, since these are determined by the behavior of only a single function and its derivative. ## VI (5) Eigenspectrum in the ring geometry Due to the periodicity of the ring, the trilobite representation eigenvectors are $\tilde{v}_{q}^{(k)}=\frac{1}{\sqrt{M}}e^{-\frac{2\pi ikq}{M}},$ (32) where $q=1,...,M$ and $k=-M/2,-M/2+1,...,0,...,M/2-1$ when $M$ is even and $k=-(M-1)/2,...,0,...,(M-1)/2$ when $M$ is odd. The corresponding eigenvalues are $\displaystyle\epsilon(k)$ $\displaystyle=E+\cos\left(\pi k\right)V_{1\frac{M}{2}+1}\delta_{0,M(\text{mod}2)}+2\sum_{q^{\prime}=1}^{\tilde{M}}\cos\left(\frac{2\pi kq^{\prime}}{M}\right)V_{1q^{\prime}+1},$ (33) where $E$ is the on-site energy, $\tilde{M}$ is $M/2-1$ for even $M$ and $(M-1)/2$ for odd $M$. Alternatively, we can compute the eigenvalues directly from the Rydberg functions, $\displaystyle c_{kj}H_{jj^{\prime}}c_{k^{\prime}j^{\prime}}$ $\displaystyle=\sum_{jj^{\prime}}\sum_{lm}\frac{\mathcal{R}_{lm}(R_{1},R_{1})}{M}e^{\frac{2\pi i}{M}\left(kj-m(j-j^{\prime})-k^{\prime}j^{\prime}\right)}.$ (34) After rearranging the sums, $\displaystyle c_{kj}H_{jj^{\prime}}c_{k^{\prime}j^{\prime}}$ $\displaystyle=\sum_{lm}\frac{\mathcal{R}_{lm}(R_{1},R_{1})}{M}\sum_{jj^{\prime}}e^{\frac{2\pi i}{M}\left(kj-m(j-j^{\prime})-k^{\prime}j^{\prime}\right)},$ (35) we can compute the sum over $j,j^{\prime}$, which is only over the complex exponential, $\displaystyle\sum_{jj^{\prime}}\dots$ $\displaystyle=\exp\left(\frac{2\pi i}{M}(k+m-k^{\prime}M-mM)\right)$ (36) $\displaystyle\times\frac{(e^{2ik\pi}-e^{2im\pi})(e^{2im\pi}-e^{2ik^{\prime}\pi})}{(e^{2ik\pi/M}-e^{2im\pi/M})(e^{2im\pi/M}-e^{2ik^{\prime}\pi/M})}.$ This gives $M^{2}\delta_{kk^{\prime}}\delta_{(k-m)\text{mod}M,0}$. Thus, $\displaystyle\epsilon(k)$ $\displaystyle=M(\nu)\sum_{lm}\mathcal{R}_{lm}(R_{1},R_{1})\delta_{(m-k)\text{mod}M,0}.$ (37) ## VII (6) Properties of the $R=1$ system The $R=1$ system is the simplest and most straightforward to connect to known theoretical results, as its hopping terms die off rapidly with $|q-q^{\prime}|$, so it can be very well approximated by a nearest-neighbor model. Two important asymptotic limits of the $s$-wave radial functions are $\displaystyle\lim_{\nu\to\infty}u_{\nu 0}(2\nu^{2})$ $\displaystyle=a\nu^{-5/6},\,\,a\approx-0.56355$ (38) $\displaystyle\lim_{\nu\to\infty}u^{\prime}_{\nu 0}(2\nu^{2})$ $\displaystyle={b}{\nu^{-13/6}},\,\,b\approx 0.326.$ (39) The on-site potentials are therefore $\lim_{\nu\to\infty}E_{q}=\frac{[u^{\prime}_{\nu 0}(2\nu^{2})]^{2}}{2}\approx\frac{b^{2}\nu^{-13/3}}{2}=a_{1}\nu^{-13/3},\,\,a_{1}\approx 0.053138.$ (40) From a numerical experiment, we find that setting $M=\text{Floor}(3\nu^{2/3})$ results in a consistent scaling of the hopping terms with an overall factor $\nu^{-13/3}$. The hopping terms can be determined numerically: $\displaystyle\lim_{\nu\to\infty}V_{qq+1}$ $\displaystyle=b_{1}{\nu^{-13/3}},\,\,b_{1}\approx{0.01355}$ (41) $\displaystyle\lim_{\nu\to\infty}V_{qq+2}$ $\displaystyle=-c_{1}\nu^{-13/3},\,\,c_{1}\approx{0.0004}.$ (42) With these, we can construct a model Hamiltonian which very closely matches the numerical Rydberg results. The small and negative next-nearest-neighbor hopping terms result in a slight asymmetry in the eigenspectrum. To analytically treat the influence of disorder, we expand the Hamiltonian matrix elements to first order in the positional disorder and perform the same asymptotic analysis as in the periodic case. 
This gives $\displaystyle\nu^{13/3}E_{q}$ $\displaystyle\approx a_{1}-g_{1}\nu^{2/3}\delta_{q}$ (43) $\displaystyle\nu^{13/3}V_{qq+1}$ $\displaystyle\approx b_{1}-f_{1}(\overline{\delta}_{q^{\prime}}-\overline{\delta}_{q})-e_{1}\nu^{2/3}(\delta_{q}+\delta_{q^{\prime}})$ (44) $\displaystyle\nu^{13/3}V_{qq+2}$ $\displaystyle\approx-c_{1},$ (45) where $a_{1}$, $b_{1}$, and $c_{1}$ were reported above, and $e_{1}\approx 0.015$, $f_{1}\approx 0.04$, and $g_{1}\approx 0.1519$. From this we make three conclusions: * • The radial positional disorder in the $R=1$ case must be rescaled by a factor $\nu^{-2/3}$ in order to provide a constant energetic disorder as $\nu$ increases; this is a necessary step for obtaining the proper thermodynamic limit. * • Radial disorder leads to on-site disorder and, roughly an order of magnitude smaller, positively correlated hopping disorder. * • Angle disorder leads to anti-correlated off-diagonal disorder; there is no on-site disorder. ## VIII (7) Properties of the $R=0.75$ system This $R$ value is more challenging to study since there are more non-zero hopping elements to consider. We therefore proceed primarily numerically. We find that the diagonal elements are $E_{q}=\frac{1}{2\nu^{2}}\left[\frac{1}{3}\left(u_{\nu 0}(3\nu^{2}/2)\right)^{2}+\nu^{2}\left(u^{\prime}_{\nu 0}(3\nu^{2}/2)\right)^{2}\right]\approx\frac{a_{0.75}}{\nu^{4}},\,\,a_{0.75}=\frac{0.3667}{2}.$ (46) In the asymptotic limit, $\nu\to\infty$, the hopping terms for relatively small $|q-q^{\prime}|<\nu/20$ approach the surprisingly simple functional form $V_{qq^{\prime}}\sim\frac{a_{0.75}}{\nu^{4}}\frac{\sin(\omega(q-q^{\prime}))}{\omega|q-q^{\prime}|},$ (47) where $\omega=\pi\sqrt{3}$. For $\nu/20<|q-q^{\prime}|<\nu/10$ (approximately), the hopping terms continue to oscillate, but with mostly constant amplitude. At $|q-q^{\prime}|\approx\nu/10$, the hopping terms rapidly decay to zero. Figure 5: Hopping amplitudes for $\nu=1000$ for the $R=0.75$ system (blue) compared with the asymptotic form of equation 47 (red). Taking a ‘truncated’ sinc function for the hopping gives, despite its only asymptotic and approximate character, a very good qualitative prediction of the eigenspectrum. The cutoff length (around $\nu/10$) affects the bandwidth of the two individual bands, which become flatter and narrower as the cutoff length increases. The proportion of states in the lower and upper band depends on the frequency of the oscillations. Performing the same disorder analysis as in the $R=1$ case, we find that the energy disorder stemming from radial disorder is proportional to $\overline{\delta}\nu^{-4}$, and therefore the radial disorder in this case needs no further scaling as in the $R=1$ case. The same holds for the angle disorder, which remains purely off-diagonal, and the correlation (anti-correlation) of the hopping disorder in the radial (angular) disorder cases remains as well. ## IX (8) Properties of the $R=0.5$ system When the ring is positioned at half the radius of the Rydberg orbit, $R=0.5$, the Hamiltonian’s diagonal elements are straightforward to evaluate: $E_{q}=\frac{1}{2\nu^{2}}\left[[u_{\nu 0}(\nu^{2})]^{2}+\nu^{2}[u^{\prime}_{\nu 0}(\nu^{2})]^{2}\right]\approx\frac{0.6366}{2\nu^{4}}=a_{0.5}\nu^{-4}.$ (48) The largest hopping element, due to the shape of the trilobite orbitals at this ring radius, is between site $q$ and site $q+M/2$, or, if $M$ is odd, between site $q$ and sites $q+(M\pm 1)/2$. 
The parity of $M$ therefore plays a key role in overall form of the eigenspectrum, in contrast to the previous $R$ values where it was irrelevant. When $M$ is even, the hopping term between $q$ and $q+M/2$ is $V_{q,q+M/2}=\frac{u_{\nu 0}^{\prime}(0)u_{\nu 0}(2\nu^{2})}{4\nu^{2}}=\frac{u_{\nu 0}(2\nu^{2})}{2\nu^{7/2}}\approx\frac{-0.5635}{2\nu^{13/3}}=-c_{0.5}\nu^{-13/3}$ (49) When $M$ is odd, the two neighboring particles on the opposite side of the ring have identical amplitudes, also scaling like $\nu^{-13/3}$. Note that the appearance of fractional exponents in these scaling relations stems again from the peculiar behavior of the $R=1$ radial wave function. For other hopping terms the expression $V_{qq^{\prime}}$ no longer depends on the exceptional scaling of the radial wave function at $R=1$ and we can use the naive scaling behavior $u_{\nu 0}(R)\sim\nu^{-1}$ and $u^{\prime}_{\nu 0}(R)\sim\nu^{-2}$ safely. Together this gives a $\nu^{-5}$ scaling for the off-diagonal elements. This case is therefore distinct from the other two that we have considered in that its matrix elements scale with $\nu$ in different ways. To understand the resulting eigenspectrum in the even parity case, we construct an approximate Hamiltonian. This has $a_{0.5}$ on the diagonal and a constant hopping $b_{0.5}/\nu$ to all sites except for the opposite site, which has a hopping amplitude of $-c_{0.5}/\nu^{1/3}$. The resulting matrix can be diagonalized analytically to gain some insight into the Rydberg composite’s spectrum. Its eigenvalues are $a_{0.5}+c_{0.5}\nu^{-1/3}$ ($\times\nu/2$ degeneracy), $a_{0.5}-2b_{0.5}\nu^{-1}-c_{0.5}\nu^{-1/3}$ ($\times\nu/2-1$ degeneracy), and a single eigenvalue $a_{0.5}+b_{0.5}(\nu-2)\nu^{-1}-c_{0.5}\nu^{-1/3}$. For large $\nu$ the eigenspectrum consists of two flat bands separated from $a_{0.5}$ by $c_{0.5}\nu^{-1/3}$, and a single state lying at $b_{0.5}$. In the thermodynamic limit the band gap closes completely and the system condenses to a flat band with a single shifted state. In this limit, the system qualitatively resembles the odd-parity $M$ state. Although this is a highly simplified qualitative picture of the $R=0.5$ Rydberg eigenspectra, the basic features exist also in the real case. The dominant hopping term, connecting opposite sites on the ring, scales as $\nu^{-13/3}$, as seen above. Angle disorder shifts this hopping term by a term second order in the positional disorder strength, but overall having a $\nu^{-13/3}$ dependence as well. Under radial disorder, this hopping term shifts by a term first order in the positional disorder strength, but having a $\nu^{-11/3}$ scaling. Like the $R=1$ case, this also requires a rescaling of the positional disorder, $\overline{\delta}\to\nu^{-2/3}\overline{\delta}$, to obtain the proper scaling. In turn, this means that the diagonal disorder decreases with larger $\nu$, since this can be shown to scale in the normal way, as $\nu^{-4}$. Due to the reduction in on-site disorder necessary in order to avoid increasingly strong off-diagonal disorder in this system in the thermodynamic limit, in the main text we consider only angle disorder. ## X (9) numerical details Positions of the scatterers In the ring geometry the scatterers are placed at angles $\phi_{q}=\frac{2\pi q}{M}$ in a plane centered around the Rydberg core. We introduce disorder either by shifting the angles of the scatterers, $\phi_{q}\to\frac{2\pi}{M}\left[q+\delta_{q}(\nu)\right]$, or their positions, $R_{q}\to[1+\overline{\delta}_{q}(\nu)]R_{q}$. 
The former case leads to anti-correlated off-diagonal disorder (i.e. the energetic disorder in a hopping term is proportional to $\delta_{q}-\delta_{q^{\prime}}$), while the latter leads to uncorrelated diagonal disorder and a weaker, correlated off-diagonal disorder (i.e. the energetic disorder in a hopping term is proportional to $\overline{\delta}_{q}+\overline{\delta}_{q^{\prime}}$). We take $\delta_{q},\overline{\delta}_{q}$ to be independent Gaussian random variables with variance $\sigma^{2}$ and mean zero. Details of the disorder averaging: In the text, we use several different disorder strengths. For the $R=1$ angle disorder case, $\sigma=17\times 10^{-3}$. For the $R=1$ radial disorder case, $\sigma=(2\times 10^{-3})\cdot(30^{2/3})\approx 0.01931$. Each random number $\overline{\delta}_{q}$ drawn from this distribution was divided by $\nu^{2/3}$ to ensure that the energy disorder remains constant as $\nu$ increases. For the $R=0.75$ radial disorder case, $\sigma=1.33\times 10^{-3}$. For the $R=0.5$ angle disorder case, $\sigma=22\times 10^{-3}$ was used. In all cases we averaged over 1000 disorder realizations.
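For concreteness, the following is a minimal sketch of this disorder-averaging protocol. Rather than the full Rydberg matrix elements, it uses the asymptotic $R=1$ nearest-neighbor model with the coefficients quoted in Eqs. (40)-(45), applies angle disorder with the $\sigma$ listed above, and collects the normalized participation ratio of Eq. (19). The identification of the $f_{1}$ term with the anti-correlated angle-disorder contribution follows the conclusions stated above, and the number of realizations is reduced here purely for speed.

```python
import numpy as np

# Asymptotic R = 1 coefficients (all matrix elements quoted in units of nu^{-13/3});
# e1 and g1 enter only for radial disorder and are omitted in this angle-disorder sketch.
a1, b1, c1, f1 = 0.053138, 0.01355, 0.0004, 0.04

def npr(v):
    """Normalized participation ratio, Eq. (19), for each column of the eigenvector matrix v."""
    M = v.shape[0]
    return 1.0 / (M * np.sum(np.abs(v)**4, axis=0))

def angle_disordered_npr(nu=30, sigma=17e-3, realizations=100, seed=0):
    """Effective R = 1 tight-binding ring with angle disorder; returns NPRs of all eigenstates."""
    rng = np.random.default_rng(seed)
    M = int(np.floor(3 * nu**(2.0 / 3.0)))        # ring size used for R = 1 in the text
    all_npr = []
    for _ in range(realizations):
        delta = rng.normal(0.0, sigma, M)          # angular disorder delta_q
        H = np.zeros((M, M))
        np.fill_diagonal(H, a1)                    # angle disorder leaves the diagonal unchanged
        for q in range(M):
            qp = (q + 1) % M                       # nearest neighbors on the periodic ring
            H[q, qp] = H[qp, q] = b1 - f1 * (delta[qp] - delta[q])   # anti-correlated hopping disorder
            qpp = (q + 2) % M                      # small, negative next-nearest-neighbor hopping
            H[q, qpp] = H[qpp, q] = -c1
        _, vecs = np.linalg.eigh(H)
        all_npr.append(npr(vecs))
    return np.concatenate(all_npr)

print("mean NPR over disorder realizations:", angle_disordered_npr().mean())
```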
# FedSpectral+: Spectral Clustering using Federated Learning Janvi Thakkar*, Devvrat Joshi ###### Abstract Clustering in graphs has been a well-known research problem, particularly because most Internet and social network data is in the form of graphs. Organizations widely use spectral clustering algorithms to find clusters in graph datasets. However, applying spectral clustering to a large dataset is challenging due to computational overhead. While distributed spectral clustering algorithms exist, they face the problems of data privacy and increased communication cost between clients. Thus, in this paper, we propose a spectral clustering algorithm using federated learning (FL) to overcome these issues. FL is a privacy-protecting approach that accumulates model parameters from each local learner rather than collecting users’ raw data, thus providing both scalability and data privacy. We developed two approaches: FedSpectral and FedSpectral+. FedSpectral is a baseline approach that uses local spectral clustering labels to aggregate the global spectral clustering by creating a similarity graph. FedSpectral+, a state-of-the-art approach, uses the power iteration method to learn the global spectral embedding by incorporating the entire graph data without access to the raw information distributed among the clients. We further designed our own similarity metric to compare the clustering quality of the distributed approach with that of the original/non-FL clustering. The proposed approach FedSpectral+ obtained a similarity of 98.85% and 99.8%, comparable to that of global clustering, on the ego-Facebook and email-Eu-core datasets. ## Introduction Organizations worldwide actively use graph clustering algorithms to map data into communities to infer their relationships. In particular, spectral clustering (Ng, Jordan, and Weiss 2001) (Von Luxburg 2007) is one of the most widely used algorithms to cluster graphs. It employs graph eigenvector embedding, preserving node relationships in Euclidean space. However, applying spectral clustering on a graph with billions of nodes is challenging as it requires a large amount of computation and storage space. Researchers in the past have tried to distribute the graph and train the model in a distributed manner while achieving comparable accuracy. Nevertheless, in this case, the communication cost increases exponentially (Macey and Zomaya 1998). In addition, (Chen et al. 2010) developed parallel spectral clustering based on a sparse similarity matrix. However, the major drawback of this work is that it assumes that any data in the parallel system can be accessed by any client. But this raises concerns for data privacy if we share the information with different clients. In this work, we remove this assumption by only sharing the representations learned at the local client with the global server instead of directly communicating with neighboring clients. * Authors contributed equally. Data privacy in distributed settings was the very reason for the introduction of federated learning. Machine learning algorithms were becoming complex and required a large amount of data to generalize on a task. However, in reality, data is distributed over many organizations, and sharing it is difficult as it concerns data privacy. Thus, Google coined the term ‘Federated Learning (FL)’ (McMahan et al. 2017) and was the first to propose an algorithm that combines intelligence across clients without compromising data security. 
There are two key advantages of using FL over traditional machine learning algorithms. First, it allows training the models on local devices and then sending the trained parameters to the central server; this provides privacy to local learners as there is no exchange of raw data among clients. Second, models are trained over different local learners and thus do not need a large dataset to be present in the central cloud. Researchers around the globe have proposed several federated learning algorithms for different machine learning models. They have also tried to integrate differential privacy-based approaches in the distributed setting. Past works include logistic regression (Chen, Rezapour, and Tzeng 2018) (Nikolaenko et al. 2013), deep neural networks (McMahan et al. 2017) (Yang et al. 2019) (Bonawitz et al. 2019), support vector machines (Smith et al. 2017) and gradient boosted decision trees (Cheng et al. 2021) (Li, Wen, and He 2020). Thus, we propose FedSpectral+, a novel algorithm for federated spectral clustering. The approach integrates the power iteration method (Booth 2006) and federated learning to enhance spectral clustering by decreasing the communication cost between clients and providing data privacy. Our algorithm takes advantage of the FL technique by learning the representations locally and aggregating them at the server. First, the server creates a single matrix and sends it to all the clients in parallel. The clients run power iterations over this matrix using their own data and send it back to the server. This is repeated for a fixed number of rounds until the matrix converges to the global graph embedding. The proposed approach was tested on the ego-Facebook and email-Eu-core datasets. We obtained a similarity of 98.85% and 99.8%, comparable to that of global clustering. Our main contributions include: 1. We propose a state-of-the-art FL framework for spectral clustering, namely FedSpectral+. The approach uses the power iteration method to iteratively learn the eigenvector embedding while maintaining data privacy. 2. We experimentally prove the effectiveness of our algorithm on the ego-Facebook and email-Eu-core datasets and give a detailed study of factors influencing the results of FedSpectral+. ## Related Work The concept of federated learning (FL) has inspired a plethora of interdisciplinary studies, primarily due to its broad applicability in privacy-constrained scenarios (Li et al. 2020). The Federated Averaging (FedAvg) algorithm (McMahan et al. 2017) is a simple baseline that aggregates the local client updates to obtain the global model without exchanging the raw data. (Lalitha et al. 2019) proposed employing peer-to-peer FL for graph-structured data. Further, for the link prediction task, (Chen et al. 2021) proposed FedE, which exploits centralized aggregation to utilize FL over knowledge graphs. However, it falls short in training multiple knowledge graphs together. Moreover, several works have also explored FL on graph neural networks (Wang et al. 2020) (Zhang et al. 2021). However, the use of FL in spectral clustering still remains unexplored. The closest work is that of (Chen et al. 2010), which proposes parallel spectral clustering built around a sparse similarity matrix. However, the major drawback of this work is that it assumes that any data in the parallel system can be accessed from any node. 
However, if data is shared among nodes, this compromises individual information, raising concerns about data privacy. Thus, in this work, we propose FedSpectral+, a federated spectral clustering approach that globally clusters the data shared across different clients without sharing raw data. This approach handles the problem of data privacy by taking advantage of the FL technique, which only aggregates the representations learned by the clients and thus involves no direct exchange of raw data between the clients and the server. In addition, this federated spectral setting also allows one to use large datasets without worrying about the computational cost, as the data is distributed over several clients. ## Problem Statement and Real World This section explains the rationale for recommending the FedSpectral+ strategy and provides a concrete example of how it can be applied in practice. Let there be different social network platforms. Each platform has information about the unique phone number or email address of every person registered in the network, which is the primary key in their database. For every person (represented by a node), there will be relationships between pairs of nodes (represented by edges) within the master set of nodes. Here, the master set of nodes refers to all the phone numbers/unique ids available worldwide. If a number is not registered on a particular platform, the social network platform still takes it into consideration by assigning zero-weight edges between that id and other ids. Furthermore, we can consider edges to hold a weight equivalent to the amount of interaction between two nodes (number of messages exchanged). There is a possibility that a person interacting with another person on a particular social network might not interact with the same person on another. If a person interacts with a large number of people and has a more than average information exchange, drawing that person’s attention to a product can help reduce advertisement costs. This is because if that person likes a product, he/she will advertise it to his/her friends for free. Each social network platform privately stores the interaction information due to privacy concerns. However, without knowledge of the entire graph, it is hard to learn the amount of interaction among individuals over the global graph. Thus, our algorithm solves this problem using the federated learning approach. ## Proposed Approach In this paper, we eliminate the drawback of privacy concerns in distributed spectral clustering by generating the eigenvector embedding of the global graph without revealing raw data to the server. Our first approach is the baseline FedSpectral algorithm. Here, the clients create the spectral clustering labels of their local graph data. The server receives these clustering labels and creates a similarity graph by aggregating the clusterings. While there is no exchange of raw information between any clients, the algorithm does not consider the relation of a particular pair of nodes across two different clients. Due to this, the output clustering significantly deviates from the global clustering. To overcome this, we propose the FedSpectral+ approach, which uses the power iteration method to generate the clients’ eigenvector embeddings while also incorporating the information from all other local clients by utilizing the aggregated result of the server. This global aggregation helps us to learn the overall graph structure. 
Thus this approach preserves privacy and produces significantly better clustering than the baseline. ### Preliminaries 1. 1. global clustering: It is the spectral clustering of the entire graph. 2. 2. global aggregated clustering: It is the clustering resulting from the federated server-side algorithms described in this section. 3. 3. Power iteration: A method to iteratively find the approximate eigen embedding of a matrix. 4. 4. Similarity Graph: The similarity graph is constructed by assigning a value to each pair of nodes by using equation 1. 5. 5. Overlap of graph: The overlap of the graph is the fraction of times the same edge is repeated over the clients. 6. 6. global embedding: Actual eigenvector embedding constructed using spectral clustering on the entire graph. ### Setting Here, we discuss how the graph is distributed among clients for experimental purposes. We create a null adjacency matrix for each client with the size of the total number of nodes in the graph. Then each edge is randomly assigned to the client uniformly. To regulate the overlap of a graph, i.e., the number of clients that will hold a particular edge, we assign the edge to a fixed number of random clients. Thus, creating our dataset for experimental purposes. Input: client: client identifier numOfClusters: Number of clusters to be formed in clustering the graph data Output: S: List of labels of N nodes after clustering Internal Variables inside client’s memory: numOfNodes: Number of nodes in graph L: Normalized Laplacian matrix of size $N^{2}$ where $N$ is number of nodes in graph $eigenVectors$ = $computeEigenvectors(L)$ Sort eigenVectors based on eigenvalues $vectorEmbedding$ = Transpose of $eigenVectors$ matrix $labels$ = $KMeansClustering(vectorEmbedding)$ Return: $labels$ function name: getClientLabels Algorithm 1 FedSpectral, Client Code function name: getClientLabels Input: clientList: client list numOfClients: Number of clients numOfNodes: Number of nodes in graph numOfClusters: Number of clusters Output: S: List of labels of N nodes after global clustering similarityGraph: initialize a similarity matrix of size $N^{2}$ with zeros as the similarity matrix of graph for _client in clientList_ do labels = $getClientLabels(client,numOfClusters)$ for _i in range(N)_ do for _j in range(N)_ do if $labels[i]==labels[j]$ $similarityGraph[i][j]$ $+=$ $1/numOfClients$ end for end for end for $L$ = $createLaplacian(similarityGraph)$ $eigenVectors$ = $computeEigenvectors(L)$ Sort eigenVectors based on eigenvalues $vectorEmbedding$ = Transpose of $eigenVectors$ matrix $globalLabels$ = $KMeansClustering(vectorEmbedding)$ Return: $globalLabels$ Algorithm 2 FedSpectral, Server Code ### FedSpectral Algorithm (Baseline) Spectral clustering in distributed settings faces the problem of data privacy because it requires sharing the edge data of graphs to create the embedding. As a result, it reveals the relationship between nodes in the graph of a particular client. We propose our baseline - FedSpectral approach, which does not share the graph’s edge information between the clients or the server. In this approach, each client calculates the spectral clustering of its local graph. The labels of all nodes are then delivered to the server by each client. The server then creates a similarity graph using the clustering information received from all clients. The similarity graph is constructed by assigning a value to each pair of nodes using equation 1. 
Finally, the server performs spectral clustering on the resulting similarity graph to calculate global clustering. Algorithm 1 provides pseudo-code for the client side of this approach. It calculates the cluster labels of the graph data that is local to the client using spectral clustering. Algorithm 2 provides pseudo-code for the server side of this approach. It gets the local spectral clustering labels from each client in the list clientList. It then uses the labels received from the client to create a similarity graph using the rule : $similarityGraph[node_{i}][node_{j}]=\\\ \frac{\\#\>of\>clients\>implying\>nodes(i,j)\>have\>same\>labels}{\\#\>of\>clients}$ (1) Finally, the server applies spectral clustering on this $similarityGraph$ and returns the global aggregated clustering. ### FedSpectral+ Algorithm Spectral clustering needs the entire graph to be kept in a single memory, which limits the method’s use. However, sharing raw graph data with the global server and other clients raises privacy problems. The baseline - FedSpectral technique ignores the primary idea of spectral clustering that an embedding should be constructed using the knowledge of the entire graph rather than sections of graphs within individual clients. This is because a particular node in two separate client datasets can have a different vector embedding. As a result, we are unable to employ the method of calculating spectral embedding individually for each client. We need to find a way to aggregate the eigenvector embedding of clients to learn the representation of the entire graph. In addition, we also need the approach to achieve the former without sharing information about the presence of an edge within the data of a local client. Therefore, in this approach, we use the power iteration method to iteratively learn the eigenvector embedding of the global graph while preventing the exchange of edge data amongst the clients and server. Input: client: client identifier iters: Number of power iterations eigenVectors: A list of smallest $K$ approximate eigenvectors of global graph, dimension: $N\times K$ Output: eigenVectors: Modified list of smallest $K$ approximate eigenvectors of global graph, dimension: $N\times K$ Internal Variables inside client’s memory: numOfNodes: Number of nodes in graph L: Normalized Laplacian matrix of size $numOfNodes^{2}$ of client I: Identity matrix of size $numOfNodes^{2}$ $L\>=\>I\>-\>L$ for _iter in range( $iters$)_ do $eigenVectors$ = $L\times eigenVectors$ end for Return: $eigenVectors$ function name: getPowerIterationClient Algorithm 3 FedSpectral+, Client Code function name: getPowerIterationClient In this approach, we want to learn the embedding as close as possible to the global embedding. The clients initially learn the local embedding using the power iteration method over their Laplacian matrix and send it to the server. Now, the server aggregates(averages) all the embeddings from the local clients and calls this the approximate embedding. The server then sends this approximate embedding to the clients. The client repeats the power iteration method using this learned approximate embedding to incorporate the local graph structure. Here, each node relates to the neighboring nodes of the other clients using approximate embedding. Thus, it gives the impression that we are learning the embedding by incorporating the features of the entire graph. 
As the number of aggregation rounds increases, the approximate embedding becomes nearer to the global embedding since the global graph structure gets clearer at every step. In the proposed approach, the only exchange of information includes sending the approximate embedding between the server and the clients. Therefore, there is no direct sharing of raw information about the local graph data, maintaining privacy in our federated approach. Input: clientList: client list numOfClients: Number of clients numOfNodes: Number of nodes in graph numOfClusters: Number of clusters iters: Number of power iteration per client globalRounds: Number of global aggregation rounds Output: S: List of labels of N nodes after global clustering v: initialize a $numOfNodes\times numOfCluster$ matrix randomly for _round in range( $globalRounds$)_ do p = $numOfClients$ copies of $v$ for _client in range(numOfClients)_ do $clientId$ = $clientList[client]$ $p[client]$ = $getPowerIterationClient($ $clientId,numOfClusters,iters,p[client])$ end for $v$ = average of all matrices in $p$ $v,r$ = $qrDecomposition(v)$ end for $globalLabels$ = $KMeansClustering(v)$ Return: $globalLabels$ Algorithm 4 FedSpectral+, Server Code Algorithmically, we randomly initialize a matrix $v$ of size $N\times K$. Given the nature of spectral clustering, if there are $K$ clusters, then the first $K$ eigenvectors of the graph’s Laplacian matrix should be used to generate the nodes’ vector embeddings. Therefore, after completion of the algorithm, each of the columns of $v$ will represent one of the first $K$ eigenvectors of the global graph. Let $C$ be the number of clients contributing to the clustering. We create $C$ copies of $v$ and call it $p$. Then we pass these matrices to their corresponding clients. Let $p[i]$ be the $v$ matrix of client $i$. Clients will now run the power iteration approach to calculate the first $K$ eigenvectors simultaneously on their local graphs. After completing the power iterations, all clients will send their $p[i]$ matrix to the global server. The server will compute the average of all the $p[i]$ matrices of clients and then orthogonalize the resulting matrix using a QR decomposition. It will assign the $q$ of the QR decomposition to $v$. Again the server will share $C$ copies of this aggregated matrix $v$ to clients, and then the same process will be repeated for a number of $globalRounds$. After completing all the global rounds, we get the approximate first $K$ eigenvectors in the form of the $N\times K$ matrix, with each column representing one eigenvector. Since $v$ matrix is the eigenvector embedding of the global graph, we directly apply $KMeans$ clustering algorithm over this vector embedding and return the $globalLabels$. Algorithm 3 provides the pseudo-code for the client side of this approach. Since the power iteration method calculates the eigenvectors corresponding to the largest eigenvalues, each client applies this function to the normalized local Laplacian matrix: $L\>=\>I\>-\>L$. It then runs the power iterations over $eigenVectors$ with $L$ as the matrix multiplier. Finally, it returns the modified $eigenVectors$. For the pseudo-code of server-side of this algorithm, refer Algorithm 4. The QR decomposition used in this approach is taken from the NumPy library of python (numpy.linalg.qr()). 
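To make the data flow concrete, the following is a compact NumPy sketch of the FedSpectral+ loop described above (Algorithms 3 and 4). It simulates the clients in a single process, so the function and parameter names (e.g. client_adjacencies, iters, global_rounds) are illustrative rather than the authors' released code, and scikit-learn's KMeans is assumed for the final clustering step.

```python
import numpy as np
from sklearn.cluster import KMeans

def normalized_laplacian(A):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2} of an adjacency matrix."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d, dtype=float)
    nz = d > 0
    d_inv_sqrt[nz] = 1.0 / np.sqrt(d[nz])
    return np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def client_power_iteration(L, v, iters):
    """Client side (Algorithm 3): power iterations with I - L as the multiplier."""
    B = np.eye(L.shape[0]) - L
    for _ in range(iters):
        v = B @ v
    return v

def fedspectral_plus(client_adjacencies, num_clusters, iters=6, global_rounds=20, seed=0):
    """Server side (Algorithm 4): aggregate client embeddings, orthogonalize, and cluster."""
    rng = np.random.default_rng(seed)
    n = client_adjacencies[0].shape[0]
    laplacians = [normalized_laplacian(A) for A in client_adjacencies]
    v = rng.standard_normal((n, num_clusters))           # random initial N x K embedding
    for _ in range(global_rounds):
        updates = [client_power_iteration(L, v.copy(), iters) for L in laplacians]
        v = np.mean(updates, axis=0)                      # average the client embeddings
        v, _ = np.linalg.qr(v)                            # re-orthogonalize (numpy.linalg.qr)
    return KMeans(n_clusters=num_clusters, n_init=10, random_state=seed).fit_predict(v)
```

In this sketch, each entry of client_adjacencies is that client's adjacency matrix over the common master set of nodes, with zero rows and columns for ids that hold no edges at that client, so that all client embeddings share the same row indexing when they are averaged at the server.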
Input: numOfNodes: number of nodes in graph aggregatedLabels: labels from the federated algorithm globalLabels: labels from the non-distributed setting Output: $clusterSimilarity$ $misMatch$ = 0 for _i in range(N)_ do for _j in range(N)_ do if $globalLabels[i]==globalLabels[j]$ if $aggregatedLabels[i]!=aggregatedLabels[j]$ $misMatch$ $+=$ $1$ end for end for $clusterSimilarity$ = $1-\frac{misMatch}{numOfNodes^{2}}$ $Return:$ $clusterSimilarity$ Algorithm 5 clusterSimilarityMetric Figure 1: The figure a) on the left shows the plot of clustering similarity vs. iterations per client on two datasets. The figure b) on the right shows the plot of clustering similarity vs. global rounds on two datasets. ## Evaluation ### Metric We have created our own metric for comparing the similarity between two different clusterings of a graph. Our metric penalizes the similarity score for each pair of nodes that lies in the same cluster in the global clustering but in different clusters in the global aggregated clustering. The similarity score ranges from 0 to 1. A higher similarity score indicates that the two clusterings are more similar. Refer to Algorithm 5 for the pseudo-code of the similarity score. Please note: the y-axis in the plots 1, 2 and 3 is the similarity metric defined above. ### Datasets 1. ego-Facebook (Leskovec and Krevl 2014): It is an undirected social network graph dataset consisting of 4039 nodes and 88234 edges. The data is made up of anonymized Facebook social circles. 2. email-Eu-core (Leskovec and Krevl 2014): It is a directed network graph dataset consisting of 1005 nodes and 25571 edges. It is an email communication link dataset among the members of an organization. We converted this graph to undirected for experimental purposes. Figure 2: The figure a) on the left shows the plot of how the clustering similarity vs. iterations per client changes with respect to global rounds on the ego-Facebook dataset. The figure b) on the right shows the plot of how the clustering similarity vs. iterations per client changes with respect to the number of clusters on the ego-Facebook dataset. ### Experimental Setting In the FedSpectral approach: * • We used 5 clients and 10 clusters as the parameters. In the FedSpectral+ approach: * • For both datasets, we used 5 clients and 10 clusters as the parameters. * • For the ego-Facebook dataset, we set $iters$ as 6 and $globalRounds$ as 20 for the best results. * • For the email-Eu-core dataset, we set $iters$ as 1 and $globalRounds$ as 1 for the best results. ### Results and Analysis We experimented with our FedSpectral and FedSpectral+ approaches on the ego-Facebook and email-Eu-core datasets. * • The FedSpectral approach achieved a clustering similarity of 77.63% on the ego-Facebook dataset and 87.05% on the email-Eu-core dataset, respectively. * • The FedSpectral+ approach achieved a clustering similarity of 98.85% on the ego-Facebook dataset and 99.8% on the email-Eu-core dataset, respectively. Figure 1 a) shows the variation in similarity with the varying number of iterations for the ego-Facebook and email-Eu-core datasets, with a constant 20 and 1 global rounds respectively, for the FedSpectral+ algorithm with the graph distributed using 40% overlap. * • We can observe from the plot that as the number of iterations increases for ego-Facebook, the similarity increases and then drops again. The initial increase in similarity occurs because, at small iteration counts, the number of iterations is not yet sufficient to converge the eigenvectors. 
However, the similarity eventually drops because the aggregation of eigenvectors is then performed only after a larger number of power iterations, so the embedding reflects the local graph more than the global graph.

* • We can observe from the plot that, as the number of iterations increases for the email-Eu-core dataset, the similarity remains almost constant. Here, the number of edges is very high relative to the number of nodes compared with ego-Facebook; moreover, it is a small dataset, so it takes fewer iterations to converge. Since, for each client, the local graph requires only one iteration to converge to the local embedding, increasing the iterations does not affect the similarity when the number of global rounds is fixed to 1.

Figure 3: Figure a), on the left, shows how the clustering similarity vs. iterations per client changes with the overlap of the graph on the ego-Facebook dataset. Figure b), on the right, shows how the FedSpectral and FedSpectral+ approaches perform in terms of clustering similarity when the number of clients is varied on the ego-Facebook dataset.

Figure 1 b) shows the variation in similarity with the number of global rounds for the ego-Facebook and email-Eu-core datasets, with 10 iterations per client held constant, for the FedSpectral+ algorithm with the graph distributed using 40% overlap.

* • For the ego-Facebook dataset, we can observe from the plot that the similarity increases as the number of global rounds increases: the more global rounds, the better the convergence of the FedSpectral+ method.

* • For the email-Eu-core dataset, we can observe from the plot that the similarity remains constant as the number of global rounds increases, since the algorithm converges in the first global round.

Figure 2 a) shows how the similarity curve changes with the number of global rounds for FedSpectral+. We can observe that, for a fixed number of iterations, the similarity increases as the number of global rounds increases, because the method converges better with more global rounds. Figure 2 b) shows how the similarity curve changes with the number of clusters for the FedSpectral+ algorithm. Since the ego-Facebook dataset has 10 communities, we observe that, for a fixed number of iterations, the similarity is generally highest for 10 clusters.

Figure 3 a) shows how the similarity curve changes with the overlap of the graph data distribution for the FedSpectral+ algorithm. We observe that, with higher graph overlap, the similarity generally increases. This is because the local embedding becomes more informative when more information about the global graph is available; consequently, in the aggregation step at the global server, the deviation among the local embeddings from different clients is smaller, resulting in better similarity.

Figure 3 b) shows the comparison of the FedSpectral approach with the FedSpectral+ method. We can see that, as the number of clients increases with the overlap of graph data across clients kept fixed, the similarity drops. This is because the graph structure held by any particular client becomes sparser as the edges are distributed over more clients. We can also observe that the FedSpectral+ approach works significantly better than the baseline, FedSpectral.

Figure 4: Visual representation of the ego-Facebook dataset using the FedSpectral approach on the right, and the global clustering on the left.
Figure 5: Visual representation of the ego-Facebook dataset using the FedSpectral+ approach on the right, and the global clustering on the left.

For the visual comparison, we used the ego-Facebook dataset. We can observe in Figure 4 that the FedSpectral approach cannot distinguish all the clusters found by the global clustering. This is because the FedSpectral approach disregards the fact that a particular node held by one client may lack information about neighboring nodes that are spread over other clients. In Figure 5, however, the FedSpectral+ approach recovers the finest cluster structure of the ego-Facebook dataset. This is because the embedding is constructed using the power iteration method on each client and then aggregated by the server to learn the embedding of the global graph.

## Conclusion

To conclude, in this work we proposed FedSpectral+, a novel algorithm for federated spectral clustering. We used the power iteration method to iteratively learn the eigenvector embedding of the global graph. As there is no exchange of raw data between clients and the server, the proposed approach overcomes the main obstacle to spectral clustering in the distributed setting by providing data privacy. We also give a detailed analysis of the factors impacting FedSpectral+. As validated by the experiments, the similarity of the clustering obtained using the proposed approach is comparable to that of global clustering, demonstrating the efficacy of our approach.

## Future Work

The suggested approach was only evaluated for its performance. In the future, we plan to test how resilient our algorithm is against different adversarial attacks, such as inference and poisoning attacks (Shen et al. 2020). We also intend to assess the possibility of a data breach while transmitting local updates to the server. To further strengthen data privacy, we will improve our method by combining it with differentially private algorithms and secure multiparty computation.

## Acknowledgement

We would like to thank Prof. Anirban Dasgupta (IIT Gandhinagar) for his continuous support and guidance throughout the research.

## References

* Bonawitz et al. (2019) Bonawitz, K.; Eichner, H.; Grieskamp, W.; Huba, D.; Ingerman, A.; Ivanov, V.; Kiddon, C.; Konečnỳ, J.; Mazzocchi, S.; McMahan, B.; et al. 2019. Towards federated learning at scale: System design. _Proceedings of Machine Learning and Systems_ , 1: 374–388.
* Booth (2006) Booth, T. E. 2006. Power iteration method for the several largest eigenvalues and eigenfunctions. _Nuclear science and engineering_ , 154(1): 48–62.
* Chen et al. (2021) Chen, M.; Zhang, W.; Yuan, Z.; Jia, Y.; and Chen, H. 2021. Fede: Embedding knowledge graphs in federated setting. In _The 10th International Joint Conference on Knowledge Graphs_ , 80–88.
* Chen et al. (2010) Chen, W.-Y.; Song, Y.; Bai, H.; Lin, C.-J.; and Chang, E. Y. 2010. Parallel spectral clustering in distributed systems. _IEEE transactions on pattern analysis and machine intelligence_ , 33(3): 568–586.
* Chen, Rezapour, and Tzeng (2018) Chen, Y.-R.; Rezapour, A.; and Tzeng, W.-G. 2018. Privacy-preserving ridge regression on distributed data. _Information Sciences_ , 451: 34–49.
* Cheng et al. (2021) Cheng, K.; Fan, T.; Jin, Y.; Liu, Y.; Chen, T.; Papadopoulos, D.; and Yang, Q. 2021. Secureboost: A lossless federated learning framework. _IEEE Intelligent Systems_ , 36(6): 87–98.
* Lalitha et al. (2019) Lalitha, A.; Kilinc, O. C.; Javidi, T.; and Koushanfar, F. 2019.
Peer-to-peer federated learning on graphs. _arXiv preprint arXiv:1901.11173_. * Leskovec and Krevl (2014) Leskovec, J.; and Krevl, A. 2014. SNAP Datasets: Stanford Large Network Dataset Collection. http://snap.stanford.edu/data. * Li, Wen, and He (2020) Li, Q.; Wen, Z.; and He, B. 2020. Practical federated gradient boosting decision trees. In _Proceedings of the AAAI conference on artificial intelligence_ , volume 34, 4642–4649. * Li et al. (2020) Li, T.; Sahu, A. K.; Talwalkar, A.; and Smith, V. 2020. Federated learning: Challenges, methods, and future directions. _IEEE Signal Processing Magazine_ , 37(3): 50–60. * Macey and Zomaya (1998) Macey, B. S.; and Zomaya, A. Y. 1998. A performance evaluation of CP list scheduling heuristics for communication intensive task graphs. In _Proceedings of the First Merged International Parallel Processing Symposium and Symposium on Parallel and Distributed Processing_ , 538–541. IEEE. * McMahan et al. (2017) McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; and y Arcas, B. A. 2017. Communication-efficient learning of deep networks from decentralized data. In _Artificial intelligence and statistics_ , 1273–1282. PMLR. * Ng, Jordan, and Weiss (2001) Ng, A.; Jordan, M.; and Weiss, Y. 2001. On spectral clustering: Analysis and an algorithm. _Advances in neural information processing systems_ , 14. * Nikolaenko et al. (2013) Nikolaenko, V.; Weinsberg, U.; Ioannidis, S.; Joye, M.; Boneh, D.; and Taft, N. 2013\. Privacy-preserving ridge regression on hundreds of millions of records. In _2013 IEEE symposium on security and privacy_ , 334–348. IEEE. * Shen et al. (2020) Shen, S.; Zhu, T.; Wu, D.; Wang, W.; and Zhou, W. 2020. From distributed machine learning to federated learning: In the view of data privacy and security. _Concurrency and Computation: Practice and Experience_. * Smith et al. (2017) Smith, V.; Chiang, C.-K.; Sanjabi, M.; and Talwalkar, A. S. 2017. Federated multi-task learning. _Advances in neural information processing systems_ , 30. * Von Luxburg (2007) Von Luxburg, U. 2007. A tutorial on spectral clustering. _Statistics and computing_ , 17(4): 395–416. * Wang et al. (2020) Wang, B.; Li, A.; Li, H.; and Chen, Y. 2020. Graphfl: A federated learning framework for semi-supervised node classification on graphs. _arXiv preprint arXiv:2012.04187_. * Yang et al. (2019) Yang, Q.; Liu, Y.; Chen, T.; and Tong, Y. 2019. Federated machine learning: Concept and applications. _ACM Transactions on Intelligent Systems and Technology (TIST)_ , 10(2): 1–19. * Zhang et al. (2021) Zhang, K.; Yang, C.; Li, X.; Sun, L.; and Yiu, S. M. 2021. Subgraph federated learning with missing neighbor generation. _Advances in Neural Information Processing Systems_ , 34: 6671–6682.
# The advance of Mercury’s perihelion

Bertrand Berche1, Ernesto Medina2 (ORCIDs: 0000-0002-4254-807X, 0000-0002-1566-0170)

1 Laboratoire de Physique et Chimie Théoriques, Université de Lorraine - CNRS, Nancy, France
2 Departamento de Física, Colegio de Ciencias e Ingeniería, Universidad San Francisco de Quito, Diego de Robles y Vía Interoceánica, Quito, 170901, Ecuador
<EMAIL_ADDRESS>

###### Abstract

A very famous “test” of the General Theory of Relativity (GTR) is the advance of Mercury’s perihelion (and of other planets too). To be more precise, this is not a prediction of General Relativity, since the anomaly was already known in the XIXth century, but no consistent explanation had yet been found by the time GTR was elaborated. Einstein came up with a solution to the problem in 1915. In the case of Mercury, the closest planet to the Sun, the effect is more pronounced than for the other planets and, as observed from Earth, there is an advance of the perihelion of Mercury of about 5600 arc seconds per century (as/cy). Of these, about $5000$ are due to the precession of the equinoxes (the precise value is $5025.645$ as/cy) and about $500$ ($531.54$) to the influence of the external planets. The remaining roughly $50$ as/cy ($42.56$) are not understood within Newtonian mechanics. Here, we revisit the problem in some detail for a presentation at the undergraduate level.

Journal: Eur. J. Phys.

Keywords: Mercury, perihelion, Kepler problem, General Relativity

## 1 Introduction

The problem of the advance of Mercury’s perihelion is a well-known example of a phenomenon that remained unexplained for a long time and which attracted the attention of many physicists in the 19th century before finding an interpretation within the framework of the theory of General Relativity. It is now a textbook case that played an important role in the acceptance of this theory and is taught to illustrate the successes of General Relativity. The study of this problem is very rich: it allows us to illustrate the usefulness of perturbative calculations in classical Newtonian dynamics to understand the largest part of the observed effect, due to the influence of the external planets on the motion of Mercury, before turning to the relativistic treatment of the gravitational effect of the Sun, which fully explains the extremely small residual advance with remarkable precision. However, the literature generally does not present what could be considered the ultimate verification: that the influence of the external planets, when treated in a relativistic framework, does not bring an additional correction that could compete with the exceptional agreement between General Relativity and the observations. The purpose of this article is to present this entire approach at an undergraduate level.

## 2 The Kepler problem in Newton dynamics and the statement of the problem

As a starting point, let us consider the Newtonian approach to the motion of the planets around the Sun. This is an undergraduate problem that can be found in all textbooks on classical mechanics [1, 2]. We look for bound orbits in the Newtonian gravitational potential of the Sun. The gravitational force is central, therefore the angular momentum ${\bm{\sigma}}={\bf r}\times m{\bf v}=mr^{2}\dot{\theta}\,{\bf u}_{\varphi}-mr^{2}\sin\theta\,\dot{\varphi}\,{\bf u}_{\theta}$ is conserved (we use spherical coordinates, and the dot denotes a derivative w.r.t. time). The planet’s motion is thus confined to a plane perpendicular to ${\bm{\sigma}}$, w.r.t.
which the angle $\theta$ is measured ($\theta$ is fixed to $\pi/2$), and ${\bm{\sigma}}=-mr^{2}\dot{\varphi}\,{\bf u}_{\theta}(\pi/2)=+mr^{2}\dot{\varphi}\,{\bf u}_{z}$ while ${\bf v}=\dot{r}\,{\bf u}_{r}+r\dot{\varphi}\,{\bf u}_{\varphi}$. The expression of the square of the velocity, $|{\bf v}|^{2}=\dot{r}^{2}+|{\bm{\sigma}}|^{2}/(m^{2}r^{2})$, follows and is used in the second constant of motion, the total energy,

$E={\textstyle\frac{1}{2}}m|{\bf v}|^{2}-\frac{GM_{\odot}m}{r}=\frac{|{\bm{\sigma}}|^{2}}{2{m}r^{2}}\Bigl(\frac{1}{r^{2}}\Bigl(\frac{dr}{d\varphi}\Bigr)^{2}+1\Bigr)-\frac{GM_{\odot}m}{r}$ (1)

where we also used $\dot{r}=\dot{\varphi}\,dr/d\varphi$. Here, $G=6.67430\times 10^{-11}\,{\rm m}^{3}\,{\rm kg}^{-1}{\rm s}^{-2}$ is Newton’s gravity constant and $M_{\odot}=1.9891\times 10^{30}$ kg the mass of the Sun, while $m=M_{\hbox{\mercury}}=3.285\times 10^{23}$ kg is the mass of Mercury (data for the planets of the solar system are listed in table 1). We rewrite the energy equation in the form

$\frac{1}{r^{2}}\Bigl(\frac{dr}{d\varphi}\Bigr)^{2}+1=\frac{2mr^{2}}{|{\bm{\sigma}}|^{2}}\Bigl(E+\frac{GM_{\odot}m}{r}\Bigr)$ (2)

and perform the change of variable $u=1/r$ to get

$\Bigl(\frac{du}{d\varphi}\Bigr)^{2}+u^{2}=\frac{2m}{|{\bm{\sigma}}|^{2}}(E+{GM_{\odot}m}\,u)$ (3)

which is then differentiated w.r.t. $\varphi$ to yield a harmonic equation:

$\frac{d^{2}u}{d\varphi^{2}}+u=\frac{GM_{\odot}m^{2}}{|{\bm{\sigma}}|^{2}}.$ (4)

Adding a particular solution of the complete equation, $u_{0}=\frac{GM_{\odot}m^{2}}{|{\bm{\sigma}}|^{2}}$, to a general solution $A\cos(\varphi-\alpha)$ of the homogeneous equation, and fixing the initial conditions so that $\alpha=0$, we obtain the solution

$u(\varphi)=\frac{GM_{\odot}m^{2}}{|{\bm{\sigma}}|^{2}}(1+e\cos\varphi)$ (5)

where $e$ is called the eccentricity. The value of $e$ is fixed in terms of the constants of motion by the energy equation (3), where inserting $u(0)=u_{0}(1+e)$, $u^{\prime}(0)=0$ provides the second-order equation $u_{0}^{2}(1+e)^{2}-2u_{0}^{2}(1+e)-2mE/|{\bm{\sigma}}|^{2}=0.$ We get closed ellipses when $E<0$, and the parameters of the trajectory are given by

$\frac{1}{r}=\frac{1}{p}(1+e\cos\varphi),\quad\frac{1}{p}=u_{0}=\frac{GM_{\odot}m^{2}}{|{\bm{\sigma}}|^{2}},\quad e=\sqrt{1+\frac{2E|{\bm{\sigma}}|^{2}}{G^{2}M_{\odot}^{2}m^{3}}}.$ (6)

In the case of Mercury, $e=0.205630$ and $p=a(1-e^{2})=55.46\times 10^{9}$ m, with $a=57.91\times 10^{9}$ m the semi-major axis. Since $E\propto m$ and $|{\bm{\sigma}}|\propto m$, $p$ and $e$ do not depend on $m$; the motion is therefore completely fixed by the initial conditions and does not depend on the planet mass $m$ (hence the name universal theory of gravitation).

Planet | Mass (kg) | perihelion (km) | aphelion (km)
---|---|---|---
Mercury, ☿ | $3.302\times 10^{23}$ | $46.0\times 10^{6}$ | $69.8\times 10^{6}$
Venus, ♀ | $4.868\times 10^{24}$ | $107.5\times 10^{6}$ | $108.9\times 10^{6}$
Earth, ♁ | $5.974\times 10^{24}$ | $147.1\times 10^{6}$ | $152.1\times 10^{6}$
Mars, ♂ | $6.418\times 10^{23}$ | $206.6\times 10^{6}$ | $249.2\times 10^{6}$
Jupiter, ♃ | $1.899\times 10^{27}$ | $740.7\times 10^{6}$ | $816.1\times 10^{6}$
Saturn, ♄ | $5.685\times 10^{26}$ | $1.349\times 10^{9}$ | $1.504\times 10^{9}$
Uranus, ⛢ | $8.683\times 10^{25}$ | $2.735\times 10^{9}$ | $3.006\times 10^{9}$
Neptune, ♆ | $1.024\times 10^{26}$ | $4.459\times 10^{9}$ | $4.537\times 10^{9}$
Sun, ☉ | $1.989\times 10^{30}$ | |

Table 1: Masses and distances in the solar system.
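As a quick consistency check of the orbital elements quoted above against Table 1, the short script below (a sketch; the input numbers are simply the values quoted in the text) recovers Mercury's focal parameter, perihelion and aphelion distances from $a$ and $e$.

```python
# Sanity check of the ellipse geometry: p = a(1-e^2), r_per = a(1-e), r_ap = a(1+e)
a = 57.91e9      # semi-major axis of Mercury (m), as quoted in the text
e = 0.205630     # eccentricity of Mercury

p = a * (1 - e**2)
r_per = a * (1 - e)
r_ap = a * (1 + e)

print(f"p          = {p:.4g} m   (text:    55.46e9 m)")
print(f"perihelion = {r_per:.4g} m   (Table 1: 46.0e6 km)")
print(f"aphelion   = {r_ap:.4g} m   (Table 1: 69.8e6 km)")
```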
In conclusion, the previous approach assumes point-like masses for Mercury and the Sun, with only the Newtonian central force acting; the elliptical orbit of Mercury is then closed, so the perihelion always occurs at the same place along the orbit. In the literature, this fixed orientation of the orbit is characterized by the conserved (fixed) Runge-Lenz vector ${\bf A}={\bf p}\times{\bf L}-mk\hat{\bf r}$. The observations show that the motion does indeed stay in a plane, but that the ellipse is not closed in reality: its perihelion, the point of closest approach to the Sun, rotates (more precisely, it precesses). In the case of Mercury, the closest planet to the Sun, this effect is more pronounced than for the other planets. Observed from the Earth, there is a known advance of the perihelion of Mercury of $5599.745$ arc seconds per century (as/cy) (the data are from Ref. [3]), less than two degrees per century, which does not seem like a very large amount, but astronomical observations were already very accurate long ago! Of these, the precession of the equinoxes is responsible for most of the observed value, $5025.645$ as/cy. This is the first and largest contribution to the perihelion advance. It arises because the Earth (from which we observe the phenomenon) is not perfectly spherical, and the combined effect of the Sun and the Moon exerts a torque on the Earth which produces a precession of its daily rotation axis with a period of about $26000$ years, first observed by Hipparchus in 125 BC. The corresponding order of magnitude is thus $360\times 3600/260\simeq 5000$ as/cy. As said above, this is only an order of magnitude; the measured value is $5025.645$ as/cy. This is not an anomaly of Mercury’s motion but a relative effect due to the motion of the observer located on Earth. The question we address now concerns the remaining differences and the relative importance of the various causes we may invoke.

## 3 The influence of external planets within the Newtonian theory of gravitation

The second effect is more difficult to analyze, even in Newtonian dynamics. We consider the effect on Mercury’s motion of an external planet P of mass $M_{\rm P}$, whose gravitational influence is approximated by that of a ring of matter of radius $R_{\rm P}$, i.e., the mean radius of the orbit of P [4, 5], centered on the position of the Sun. This is obviously an approximation, but we will see that its predictions are consistent, in terms of orders of magnitude, with more sophisticated calculations found in the literature. This model introduces a force that deviates from the Newtonian $1/r^{2}$ law, which breaks the conservation of the Runge-Lenz vector and makes it rotate, producing a precession of the perihelion. The circular ring of matter carries a linear mass density $\lambda=M_{\rm P}/(2\pi R_{\rm P})$. An important property here is that all the planets have their orbits almost in the same plane, so Mercury’s position lies in the plane of planet P’s orbit. If one considers two symmetric positions on the “ring”, they both carry a mass element $dm=\lambda R_{\rm P}d\theta$ and produce at Mercury’s position a gravitational field contribution which, after projection on the direction from the Sun to Mercury, is written as

$|d{\bf G}_{\rm P}|=G\lambda R_{\rm P}d\theta\Bigl(\frac{\cos\phi_{1}}{r_{1}^{2}}-\frac{\cos\phi_{2}}{r_{2}^{2}}\Bigr)\simeq G\lambda\,d\theta\Bigl(\frac{1}{r_{1}}-\frac{1}{r_{2}}\Bigr).$ (7)

The angles and distances are defined in figure 1.
The last equality comes from the approximation $R_{\hbox{\mercury}}\simeq p<R_{\rm P}$ (for example in the case of Venus, $R_{\rm P}$, of the order of the semi-major axis, is about $a_{\hbox{\venus}}=108.209\times 10^{9}$ m), in which case $\phi_{1}\simeq\phi_{2}\simeq\theta$. Then, with

$r_{1}=-r\cos\theta+\sqrt{r^{2}\cos^{2}\theta-(r^{2}-R_{\rm P}^{2})},$ (8)
$r_{2}=+r\cos\theta+\sqrt{r^{2}\cos^{2}\theta-(r^{2}-R_{\rm P}^{2})},$ (9)

we get a (repulsive) central gravitational field after integration over $\theta$,

$\delta{\bf G}_{\rm P}=+\frac{GM_{\rm P}}{2R_{\rm P}}\frac{r}{R_{\rm P}^{2}-r^{2}}\,{\bf u}_{r}\simeq\frac{GM_{\rm P}}{2R_{\rm P}^{3}}\Bigl[r+\frac{r^{3}}{R_{\rm P}^{2}}\Bigr]{\bf u}_{r},$ (10)

where the expansion is allowed because the radial distance of Mercury to the Sun satisfies $r<R_{\rm P}$, the radius of the orbits of the external planets (typically, for Mercury and Venus, the ratio $r/R_{\rm P}$ is about $0.5$ – it could be worth taking the second-order expansion into account – and the correction is smaller for the other planets).

Figure 1: Gravitational influence of an external planet P on the motion of Mercury (internal trajectory) in the solar system.

It is convenient to write the gravitational potential due to the planet P,

$\delta\phi_{\rm P}=-\int\delta{\bf G}_{\rm P}\cdot d{\bf r}=\frac{GM_{\rm P}}{4R_{\rm P}}\ln\Bigl(1-\frac{r^{2}}{R_{\rm P}^{2}}\Bigr)\simeq-\frac{GM_{\rm P}}{4R_{\rm P}^{3}}\Bigl[r^{2}+\frac{r^{4}}{2R_{\rm P}^{2}}\Bigr]$ (11)

and the total gravitational potential energy (Sun plus planet P) as a correction to the simple effect of the Sun,

$V(r)=-\frac{GM_{\odot}m}{r}+\delta V_{\rm P}(r)\simeq-\frac{GM_{\odot}m}{r}-\frac{GM_{\rm P}m}{4R_{\rm P}^{3}}\Bigl[r^{2}+\frac{r^{4}}{2R_{\rm P}^{2}}\Bigr]\quad(\hbox{for }r<R_{\rm P})$
$\phantom{V(r)}=-\frac{GM_{\odot}m}{r}\Bigl(1+\frac{M_{\rm P}}{M_{\odot}}\frac{r^{3}}{4R_{\rm P}^{3}}\Bigl[1+\frac{r^{2}}{2R_{\rm P}^{2}}\Bigr]\Bigr).$ (12)

With this analytic form of the correction $\delta V_{\rm P}(r)$ to the purely Newtonian gravitational potential energy, it is possible to estimate the effect on the trajectory of Mercury using a perturbation analysis. In Section 2, we established the equation of motion (4) for the Kepler problem, i.e. the motion of $m$ in the gravitational field exerted by the Sun. For the more general central potential corresponding now to the Sun plus planet P,

$V(u)=-GM_{\odot}mu+\delta V_{\rm P}(u),$ (13)

the equation of motion becomes

$\frac{d^{2}u}{d\varphi^{2}}+u=-\frac{m}{|{\bm{\sigma}}|^{2}}\frac{dV(u)}{du}=\frac{GM_{\odot}m^{2}}{|{\bm{\sigma}}|^{2}}-\frac{m}{|{\bm{\sigma}}|^{2}}\frac{d\delta V_{\rm P}(u)}{du}.$ (14)

If we denote by $u_{\rm K}$ the solution of the Kepler problem and seek a perturbative solution, setting $u(\varphi)=u_{\rm K}(\varphi)+u_{1}(\varphi)$, the derivative of $\delta V_{\rm P}(u)$ w.r.t. $u$ on the r.h.s.
is expanded in the vicinity of the Kepler solution, and the equation of motion becomes

$\frac{d^{2}u_{\rm K}}{d\varphi^{2}}+u_{\rm K}+\frac{d^{2}u_{1}}{d\varphi^{2}}+u_{1}=\frac{1}{p}-\frac{m}{|{\bm{\sigma}}|^{2}}\left.\frac{d\delta V_{\rm P}(u)}{du}\right|_{u_{\rm K}}-\frac{m}{|{\bm{\sigma}}|^{2}}\left.\frac{d^{2}\delta V_{\rm P}(u)}{du^{2}}\right|_{u_{\rm K}}u_{1}+\dots$

Simplifying this equation at the level of the Kepler problem, we obtain an equation for the correction

$\frac{d^{2}u_{1}}{d\varphi^{2}}+\Bigl(1+\frac{m}{|{\bm{\sigma}}|^{2}}\left.\frac{d^{2}\delta V_{\rm P}(u)}{du^{2}}\right|_{u_{\rm K}}\Bigr)u_{1}=-\frac{m}{|{\bm{\sigma}}|^{2}}\left.\frac{d\delta V_{\rm P}(u)}{du}\right|_{u_{\rm K}}$ (16)

from which we read off the angular frequency of the perturbed motion,

$\Omega^{2}=1+2\omega_{1}=1+\frac{m}{|{\bm{\sigma}}|^{2}}\left.\frac{d^{2}\delta V_{\rm P}(u)}{du^{2}}\right|_{u_{\rm K}},$ (17)

hence the solution $u_{1}(\varphi)=A\cos(\Omega\varphi)+B\sin(\Omega\varphi)$. The perihelion (the smallest value of $r(\varphi)=1/u(\varphi)$) corresponds to the largest value of $u(\varphi)$. It is reached at $\varphi=0$ and is equal to $u_{\rm max}=(1+e)/p+A$ (with $A>0$). The same value is recovered slightly after one revolution, at $\varphi=2\pi+\Delta\varphi$ (it will turn out that $\Delta\varphi>0$), such that $u_{\rm K}(2\pi+\Delta\varphi)=(1+e)/p+O(\Delta\varphi^{2})$ and $u_{1}(2\pi+\Delta\varphi)=A+B(2\pi(\Omega-1)+\Omega\Delta\varphi)+O(\Delta\varphi^{2})$. To linear order in $\Delta\varphi$, the perihelion is recovered if the coefficient of $B$ vanishes. It follows that the advance (which will turn out to be positive) of the perihelion per revolution due to the gravitational force exerted by the planet P is

$\Delta\varphi_{\rm P}=\frac{2\pi(1-\Omega)}{\Omega}=-2\pi\omega_{1}=-2\pi\frac{1}{2}\frac{m}{|{\bm{\sigma}}|^{2}}\left.\frac{d^{2}\delta V_{\rm P}(u)}{du^{2}}\right|_{u_{\rm K}}.$ (18)

Using the expression of $\delta V_{\rm P}(u)$ we get, to leading order,

$\Delta\varphi_{\rm P}=2\pi\frac{3M_{\rm P}}{4M_{\odot}}\frac{p^{3}}{R_{\rm P}^{3}}=\pi\frac{3}{2}\frac{M_{\rm P}}{M_{\odot}}\Bigl(\frac{a(1-e^{2})}{R_{\rm P}}\Bigr)^{3}.$ (19)

In the case of the planet Venus (for $M_{\hbox{\venus}}/M_{\odot}={2.447}\times 10^{-6}$ and $p/R_{\hbox{\venus}}={0.495}$, where $R_{\hbox{\venus}}$ is taken as the arithmetic mean of the distances to the aphelion and the perihelion), we get

$\Delta\varphi_{\hbox{\venus}}\simeq{1.399}\times 10^{-6}\Bigl(\frac{{\rm rad}}{{\rm rev}}\Bigr)\frac{360}{2\pi}\Bigl(\frac{\rm deg}{\rm rad}\Bigr)\frac{3600}{1}\Bigl(\frac{\rm sec}{\rm deg}\Bigr)\frac{1}{0.240}\Bigl(\frac{\rm rev}{\rm year}\Bigr)\frac{100}{1}\Bigl(\frac{\rm year}{\rm century}\Bigr)\simeq{120}\hbox{ arcsec}/\hbox{century}.$ (20)

The next term in the potential (12) adds a further correction of ${0.571}\times 10^{-6}$ rad/rev, or $49.087$ as/cy, hence a total first-order contribution from Venus of $169$ as/cy. This is the strongest correction among the planets, the next one being due to Jupiter, which is more distant but far more massive. Another method, in terms of forces, is used e.g. in Ref. [6], but we have adapted it here in terms of potential energies. More precise values have been determined numerically in an Am. J. Phys. article [5], where more accurate data can be found.
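The numerical evaluation of Eq. (19) for Venus, together with the unit-conversion chain of Eq. (20), is easy to reproduce. The Python sketch below uses only the ratios quoted in the text; the factor $(5/3)(p/R_{\rm P})^{2}$ for the next-order term is our own reconstruction, obtained by repeating the step leading to Eq. (18) with the $r^{4}$ piece of the potential (12), and it reproduces the values quoted above.

```python
import math

# Values quoted in the text for Venus
mass_ratio = 2.447e-6        # M_Venus / M_Sun
p_over_R = 0.495             # a(1 - e^2) of Mercury divided by R_Venus
rev_per_year = 1 / 0.240     # Mercury revolutions per year, as in Eq. (20)
rad_to_arcsec = 180 / math.pi * 3600

# Leading term, Eq. (19)
dphi_lead = 1.5 * math.pi * mass_ratio * p_over_R ** 3            # rad/rev
# Next-order term from the r^4 piece of (12): extra factor (5/3)(p/R)^2 (our reconstruction)
dphi_next = dphi_lead * (5 / 3) * p_over_R ** 2                   # rad/rev

to_as_per_century = rad_to_arcsec * rev_per_year * 100
print(f"leading term : {dphi_lead:.3e} rad/rev = {dphi_lead * to_as_per_century:6.1f} as/cy")
print(f"next term    : {dphi_next:.3e} rad/rev = {dphi_next * to_as_per_century:6.1f} as/cy")
print(f"total (Venus): {(dphi_lead + dphi_next) * to_as_per_century:6.1f} as/cy")
```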
The largest contributions are from Venus (the closest planet) and Jupiter (the heaviest planet), and in Ref. [5] Davies reports the numerical estimates for each planet, e.g., for Venus $\Delta\varphi_{\hbox{\venus}}={273.30}$ as/cy and for Jupiter $\Delta\varphi_{\hbox{\jupiter}}={156.75}$ as/cy, while for the Earth $\Delta\varphi_{\hbox{\earth}}={91.49}$ as/cy and for Uranus $\Delta\varphi_{\hbox{\uranus}}={0.14}$ as/cy. These results are in better agreement with the specialized literature (see table 2) than ours are, but the method we employed is suitable for a presentation at the undergraduate level.

Origin | $\Delta\varphi$ (as/cy) | Ref. [7]
---|---|---
Venus, ♀ | $277.856\pm 0.68$ |
Earth, ♁ | $90.038\pm 0.08$ |
Mars, ♂ | $2.536\pm 0.00$ |
Jupiter, ♃ | $153.584\pm 0.00$ |
Saturn, ♄ | $7.302\pm 0.01$ |
Uranus, ⛢ | $0.141\pm 0.00$ |
Neptune, ♆ | $0.042\pm 0.00$ |
Sun (☉) asphericity | $0.010\pm 0.02$ |
general precession of the equinoxes | $5025.645\pm 0.50$ |
Sum | $5557.18\pm 0.85$ |
observed advance | $5599.74\pm 0.41$ |
remaining difference | $42.56\pm 0.94$ |
GTR effect | $43.03\pm 0.03$ | $42.98$

Table 2: Various contributions to the advance of the perihelion of Mercury, from G.M. Clemence [3]. The main GTR contribution (final value, according to Eq. (53)) was later corrected by Nobili and Will [7].

When one sums up all these contributions, together with a tiny effect due to the departure of the Sun’s gravitational potential from the exact Newtonian form caused by its slight asphericity, there remains a very small difference with the observations. That difference is not explained by the classical theory of gravitation, as one can read in the results [3] given in table 2.

## 4 Looking for possible explanations of the remaining $43$ arc seconds per century

This tiny number, about $43$ arc seconds per century (compared to the $\sim 5599$ as/cy observed), still requires an explanation. A slight modification of Newton’s law of gravitation was proposed, as well as the idea of a still unknown celestial object (which was even given a name, Vulcan) whose influence would add to that of the other planets to produce the desired $43$ arc seconds per century, but no possible candidate was ever discovered. A similar hypothesis had been put forward earlier to explain the anomalies in the motion of the planet Uranus. It was Le Verrier’s great achievement to determine by calculation the mass and position of the new planet, Neptune, which was then observed by Galle in Berlin (the planet was within $1^{\circ}$ of where Le Verrier had predicted, and $10^{\circ}$ of where Adams had predicted earlier). In the XIXth century, carrying out perturbative calculations was not an easy task. A page of Le Verrier’s calculations is shown in figure 2.

Figure 2: One of the 200 pages of calculation of Le Verrier. From J.-P. Verdet, Astronomie et astrophysique, Larousse, Paris 1993, p.731.

It is also instructive to read Le Verrier himself about the motion of Uranus [8]:

> A few years ago, we had barely begun to suspect that the movement of Uranus was modified by some unknown cause when all possible hypotheses were already hazarded on the nature of this cause. It is true that everyone simply followed the inclination of their imagination without providing any consideration to support their assertion.
We thought of the resistance of > the ether, we spoke of a large satellite which accompanied Uranus, or of a > still unknown planet whose disturbing force should be taken into > consideration, we even went so far as to suppose that at this enormous > distance from the Sun, the law of gravitation could lose something of its > rigour. A modification of the law of gravitation at large distances is still a very current debate today. In the context of Mercury’s anomalies, this hypothesis was proposed as a correction to Newton’s gravitation, considered by Hall [9, 10], who had shown that any $n>2$ in a gravitational force of the form $GMm/r^{n}$ would result in an advance of the perihelion, and that $n=2.000\ \\!000\ \\!16$ was enough to explain the mysterious $43$ as/cy of Mercury. But then, the same $n$ was spoiling the results concerning the other planets in the solar system, which was not acceptable. The assumptions mentioned by Le Verrier to explain Uranus anomalies are still among the most popular possible causes introduced in cosmology to explain the deviations observed in the evolution of the scale parameter of the metric of the Universe when only observable sources of gravity are considered. Dark matter is indeed similar to the introduction of a supplementary planet, unknown at the time, and dark energy can be considered the analogue of a modification of the law of gravitation. Eventually, the route of a modification of gravity will appear successful for Mercury’s anomaly. Indeed, there is no escape and a relativistic approach has to be used to try to solve the “tiny $43$ arc seconds per century”. ## 5 The resort to Special Relativity First the contribution of Special Relativity should be considered, and there has been a controversy on the role of the purely special relativistic contribution, as one can see in this “ironic” quotation [11]222We keep the reference numbering of the original quotation.: > The question arises as to what is the prediction from “Special Relativity”. > The literature on this is rather erratic. > > Early work (1906-1911) by Poincaré [21, 22], Lorentz [23], de Sitter [24] > and others (…) inferred that the result from Special Relativity for the > precession of the perihelion of Mercury is only $1/6$ that of the observed > value. An effort by Nordström in 1912 [25] predicted precession $-1/6$ of > the observed value. > > In 1917, Lodge [26] claimed to be inspired by Special Relativity to consider > velocity- dependent corrections to the precession of the perihelion, but > actually reverted to Newton’s analysis of precession in case of a force law > $1/r^{n}$ for $n$ different than $2$ [27], as extended by [29, 30]. A debate > followed between Eddington and Lodge [31, 32, 33, 34, 35]. > > In 1929, Kennedy [36] gave two analyses of Newtonian precession of the > perihelion, with corrections for retardation and for Special Relativity, > claiming negligible effects in both cases. It was stated by Goldstein [37] > (1950) that the result from Special Relativity is $1/6$ that of General > Relativity (…). > > In 1984, Phipps [39] claimed that the result of Special Relativity is the > same as that of General Relativity. > > In 1986, Peters [40] noted that Phipps made a computational error, and > claimed the correct result of Phipps’ model is $1/2$ that of General > Relativity (…). > > In 1987, Biswas [41] claimed that the result of (his interpretation of) > Special Relativity is the same as that of General Relativity. 
> In 1988, Frisch [42] discussed “post-Newtonian” approximations, claiming that use of “relativistic momentum” but Newtonian gravity gives the result of Goldstein [37], $1/6$ of the observed precession of the perihelion of Mercury, while including the gravitation due to gravitational field energy doubles the result, to $1/3$ of the observed precession of the perihelion of Mercury.
>
> In 1989, Peters [43] argued that Biswas’ calculation was in error.
>
> In 2006, Jefimenko proposed a theory of “cogravitation”, and claimed it predicted $1/3$ of the observed precession of the perihelion of Mercury (…).
>
> In 2015, Wayne [46] claimed that Special Relativity can explain the precession of the perihelion of Mercury.
>
> In 2016, Lemmon and Mondragon [47] argued that Special Relativity predicts $1/3$ of the rate of the precession of the perihelion according to General Relativity.
>
> In 2020, Corda [48] claimed that Newtonian gravity completely explains the precession of the perihelion of Mercury (without consideration of relativity), but not that of other planets. Then, he argued that General Relativity also explains the precession, but only if one includes the effect of “rotational time dilation”.
>
> In 2022, D’Abramo [49] claimed that Corda [48] was wrong.
>
> What is going on here?

To understand the controversy, let us first look at the purely kinematic contribution of Special Relativity (neglecting the influence of external planets). The Lagrangian of a particle in a potential $V(r)=-GM_{\odot}m/r$ is

$L=-\frac{mc^{2}}{\gamma}+\frac{GM_{\odot}m}{r}$ (21)

where $\gamma^{-1}=\sqrt{1-|{\bf v}|^{2}/c^{2}}$ and $|{\bf v}|^{2}=\dot{r}^{2}+r^{2}\dot{\theta}^{2}+r^{2}\sin^{2}\theta\,\dot{\varphi}^{2}$. The Lagrange equations are therefore

$\gamma r\dot{\theta}^{2}+\gamma r\sin^{2}\theta\,\dot{\varphi}^{2}-\frac{GM_{\odot}}{r^{2}}=\gamma\ddot{r}+\dot{\gamma}\dot{r},$
$\gamma r^{2}\sin\theta\cos\theta\,\dot{\varphi}^{2}=\frac{d}{dt}(\gamma r^{2}\dot{\theta}),$
$\frac{d}{dt}(\gamma r^{2}\sin^{2}\theta\,\dot{\varphi})=0.$ (22)

Using the Lagrange equations is an option, but we can also use the first integrals, the conservation of angular momentum and of energy, as we did in Newtonian mechanics. The same approach works here and leads to a more direct derivation. The angular momentum is written as ${\bm{\sigma}}={\bf r}\times m\gamma{\bf v}={\bf r}\times{\bf p}$, so that $\frac{d{\bm{\sigma}}}{dt}={\bf v}\times{\bf p}+{\bf r}\times\frac{d{\bf p}}{dt}$. The first term obviously vanishes, and the second vanishes for central potentials, for which $\frac{d{\bf p}}{dt}={\bf F}=F(r){\bf r}/r$. Now, since ${\bm{\sigma}}$ is conserved, we deduce again that the motion stays within a plane, and we choose $\theta=\pi/2$ ($\dot{\theta}=0$), measuring the angle $\theta$ w.r.t.
the direction of the angular momentum, the kinematic factor $\gamma$ takes the form $\gamma^{2}=\Bigl{[}1-\frac{1}{c^{2}}\left(\frac{dr}{d\varphi}\Bigr{)}^{2}\dot{\varphi}^{2}-\frac{r^{2}}{c^{2}}\dot{\varphi}^{2}\right]^{-1}.$ (23) A first constant of motion follows from the fact that $L$ doesn’t depend on $\varphi$, hence $|{\bm{\sigma}}|=p_{\varphi}=\frac{\partial L}{\partial\dot{\varphi}}=m\gamma r^{2}\dot{\varphi}.$ (24) The second constant of motion is obviously the energy, $E=p_{r}\dot{r}+p_{\varphi}\dot{\varphi}-L=m\gamma\dot{r}^{2}+m\gamma r^{2}\dot{\varphi}^{2}+mc^{2}\gamma^{-1}-\frac{GM_{\odot}m}{r}=mc^{2}\gamma-\frac{GM_{\odot}m}{r}.$ (25) Using the definition of the angular momentum, we eliminate $\dot{\varphi}$ in (23), and factorize out $\gamma^{2}$, leading to $\gamma^{2}=1+\frac{|{\bm{\sigma}}|^{2}}{m^{2}c^{2}r^{4}}\Bigl{(}\frac{dr}{d\varphi}\Bigr{)}^{2}+\frac{|{\bm{\sigma}}|^{2}}{m^{2}c^{2}r^{2}}=\Bigl{(}\frac{E}{mc^{2}}+\frac{GM_{\odot}}{rc^{2}}\Bigr{)}^{2}.$ (26) Like in the Newtonian case, the change of variable $u=\frac{1}{r}$ simplifies the equation into $1+\frac{|{\bm{\sigma}}|^{2}}{m^{2}c^{2}}\Bigl{[}\Bigl{(}\frac{du}{d\varphi}\Bigr{)}^{2}+u^{2}\Bigr{]}=\Bigl{(}\frac{E}{mc^{2}}+\frac{GM_{\odot}u}{c^{2}}\Bigr{)}^{2}.$ (27) A formula analogous to (14) is obtained if we write the equation of motion which follows from the derivative of (27) w.r.t. $\varphi$ $\frac{d^{2}u}{d\varphi^{2}}+\left(1-\frac{G^{2}M_{\odot}^{2}m^{2}}{|{\bm{\sigma}}|^{2}c^{2}}\right)u=\frac{GM_{\odot}m^{2}}{|{\bm{\sigma}}|^{2}}\left(\frac{E}{mc^{2}}\right).$ (28) This equation qualitatively differs from the classical case by the value of the angular velocity, which is now $\Omega^{2}=1-\frac{G^{2}M_{\odot}^{2}m^{2}}{|{\bm{\sigma}}|^{2}c^{2}}\simeq 1+2\omega_{1}$ (29) that differs from unity, inducing a shift of the perihelion. We can estimate this shift as we did for the influence of external planets. $\Delta\varphi_{\rm SR}\simeq\frac{2\pi(1-\Omega)}{\Omega}=-2\pi\omega_{1}=2\pi\frac{1}{2}\frac{G^{2}M_{\odot}^{2}m^{2}}{|{\bm{\sigma}}|^{2}c^{2}}=\frac{\pi GM_{\odot}}{a(1-e^{2})c^{2}}.$ (30) The numerical value is estimated by inserting the classical parameters of the ellipse, $|{\bm{\sigma}}|^{2}=GM_{\odot}m^{2}p$ and $p=a(1-e^{2})$. We get $\Delta\varphi_{\rm SR}=8.649\times 10^{-8}\hbox{rad}/\hbox{rev}=7.433\ \\!\hbox{as/cy}.$ (31) To link with the quotation of McDonald’s, it appears to be 6 times smaller than the observed one of 43. So we have to conclude that Special Relativity is not enough to explain the whole effect observed. However, we can notice that in the controversy reported by McDonald [11], the pioneers Poincaré, Lorentz or de Sitter were right! ## 6 How General Relativity solves the problem Now, let us follow the same lines of reasoning in full General Relativity. General Relativity encodes gravitational energy in the metric, $ds^{2}=(1-(2GM_{\odot}/(rc^{2}))c^{2}dt^{2}-(1-(2GM_{\odot}/(rc^{2}))^{-1}dr^{2}-r^{2}d\Omega^{2},$ (32) and the zero-mass limit recovers the case of Special Relativity, therefore there will be no need to add the result (31) to the present calculation. The Lagrangian333The action of a free particle in Special Relativity $S=-mc\int ds$. $L=-mc\frac{ds}{dt}$ now reads as $L=-mc^{2}\frac{d\tau}{dt}=-mc^{2}\Bigl{[}g_{00}(r)-\frac{1}{g_{00}(r)c^{2}}\Bigl{(}\frac{dr}{dt}\Bigr{)}^{2}-\frac{r^{2}}{c^{2}}\Bigl{(}\frac{d\varphi}{dt}\Bigr{)}^{2}\Bigr{]}^{1/2}$ (33) in terms of the proper time $\tau$, and the potential term is hidden in the metric tensor component. 
The argument for planar motion still works. Hence we have already simplified the problem considering fixed $\theta=\pi/2$, and we have assumed Schwarzchild metric (32) $g_{00}(r)=1-\frac{2GM_{\odot}}{rc^{2}}=-1/g_{11}(r)$. Like in the special relativistic case, we have two constants of motion. The angular momentum is the first $|{\bm{\sigma}}|=\frac{\partial L}{\partial\dot{\varphi}}=mr^{2}\Bigl{(}\frac{d\varphi}{d\tau}\Bigr{)}.$ (34) Here, $\dot{\varphi}=d\varphi/dt$ shouldn’t be confused with $d\varphi/d\tau$. The momentum associated to the radial coordinate is equal to $\frac{\partial L}{\partial\dot{r}}=\frac{m}{g_{00}(r)}\Bigl{(}\frac{dr}{d\tau}\Bigr{)},$ (35) with the same notation $\dot{r}=dr/dt$. The energy follows $E=\frac{d\tau}{dt}\Bigl{[}\frac{m}{g_{00}(r)}\Bigl{(}\frac{dr}{d\tau}\Bigr{)}^{2}+\frac{|{\bm{\sigma}}|^{2}}{mr^{2}}+mc^{2}\Bigr{]}.$ (36) The square of the interval provides an alternative identity, $\displaystyle c^{2}$ $\displaystyle=$ $\displaystyle g_{00}(r)\Bigl{(}\frac{cdt}{d\tau}\Bigr{)}^{2}-\frac{1}{g_{00}(r)}\Bigl{(}\frac{dr}{d\tau}\Bigr{)}^{2}-r^{2}\Bigl{(}\frac{d\varphi}{d\tau}\Bigr{)}^{2},$ (37) which leads to the relation $\frac{1}{g_{00}(r)}\Bigl{(}\frac{dr}{d\tau}\Bigr{)}^{2}=g_{00}(r)c^{2}\Bigl{(}\frac{dt}{d\tau}\Bigr{)}^{2}-c^{2}-\frac{|{\bm{\sigma}}|^{2}}{m^{2}r^{2}}.$ (38) This latter expression is now inserted in (36) and one obtains the simple form $E=g_{00}(r)mc^{2}\Bigl{(}\frac{dt}{d\tau}\Bigr{)}.$ (39) Now, equations (34) and (39) inserted in (37) lead to $\displaystyle g_{00}(r)m^{2}c^{2}$ $\displaystyle=$ $\displaystyle\frac{E^{2}}{c^{2}}-\frac{|{\bm{\sigma}}|^{2}}{r^{4}}\Bigl{(}\frac{dr}{d\varphi}\Bigr{)}^{2}-g_{00}(r)\frac{|{\bm{\sigma}}|^{2}}{r^{2}}$ (40) $\displaystyle g_{00}(u)m^{2}c^{2}$ $\displaystyle=$ $\displaystyle\frac{E^{2}}{c^{2}}-|{\bm{\sigma}}|^{2}\Bigl{(}\frac{du}{d\varphi}\Bigr{)}^{2}-g_{00}(u)|{\bm{\sigma}}|^{2}u^{2}$ (41) where the second line follows from the previous one by the usual change of variable $u=\frac{1}{r}$. We next take another derivative w.r.t. $\varphi$ to get the relativistic equation of motion $\displaystyle\frac{d^{2}u}{d\varphi^{2}}+g_{00}(u)u$ $\displaystyle=$ $\displaystyle-\frac{1}{2}\frac{dg_{00}(u)}{du}\Bigl{(}\frac{m^{2}c^{2}}{|{\bm{\sigma}}|^{2}}+u^{2}\Bigr{)},$ (42) $\displaystyle\frac{d^{2}u}{d\varphi^{2}}+u$ $\displaystyle=$ $\displaystyle\frac{GM_{\odot}m^{2}}{|{\bm{\sigma}}|^{2}}+\frac{3GM_{\odot}}{c^{2}}u^{2}.$ (43) It is instructive to compare with the equation of motion in the Kepler approximation (4) or in the special relativistic case (28), both of them being linear. We rewrite these equations (with labels K for Kepler and SR for Special Relativity) here for the purpose of comparison, $\displaystyle\frac{d^{2}u_{\rm K}}{d\varphi^{2}}+u_{\rm K}$ $\displaystyle=$ $\displaystyle\frac{GM_{\odot}m^{2}}{|{\bm{\sigma}}|^{2}},$ (44) $\displaystyle\frac{d^{2}u_{\rm SR}}{d\varphi^{2}}+u_{\rm SR}$ $\displaystyle=$ $\displaystyle\frac{GM_{\odot}m^{2}}{|{\bm{\sigma}}|^{2}}+\frac{G^{2}M_{\odot}^{2}m^{2}}{|{\bm{\sigma}}|^{2}c^{2}}u_{\rm SR}$ (45) and clearly the main difference is that in General Relativity we get a non linear equation. This latter equation in (43) is solved perturbatively around the classical case, allowing the periodic solution and harmonics at multiple frequencies, together with a possible shift of the fundamental frequency of the Kepler solution. 
We thus allow

$\phi=\Omega\varphi=(1+\omega_{1}+\dots)\varphi,$ (46)
$u(\varphi)=u_{\rm K}(\phi)+u_{1}(\phi)+\dots,$ (47)

where $\omega_{1}$ and $u_{1}$ are small perturbations, with $u_{\rm K}(\phi)\sim\cos\phi$ and $u_{1}(\phi)\sim\cos 2\phi$, etc. Inserting these expansions into (43) leads to

$\Omega^{2}\frac{d^{2}u}{d\phi^{2}}+u=\frac{1}{p}+\frac{3GM_{\odot}}{c^{2}}u^{2},$ (48)

$0\hbox{th order:}\quad\frac{d^{2}u_{\rm K}}{d\phi^{2}}+u_{\rm K}=\frac{1}{p}$ (50)

$1\hbox{st order:}\quad\frac{d^{2}u_{1}}{d\phi^{2}}+u_{1}=\hbox{const}+\frac{2e}{p}\Bigl(\frac{3GM_{\odot}}{pc^{2}}+\omega_{1}\Bigr)\cos\phi+\frac{3GM_{\odot}e^{2}}{2p^{2}c^{2}}\cos 2\phi.$

The 0th-order solution is indeed the Kepler solution, and demanding that the equation for $u_{1}$ contain no resonant source term $\propto\cos\phi$ leads to the first-order correction to the angular velocity,

$\omega_{1}=-\frac{3GM_{\odot}}{pc^{2}}.$ (51)

This yields an advance of the perihelion

$\Delta\varphi_{0}=-2\pi\omega_{1}=\frac{6\pi G^{2}M_{\odot}^{2}m^{2}}{c^{2}|{\bm{\sigma}}|^{2}}=\frac{6\pi GM_{\odot}}{a(1-e^{2})c^{2}},$ (52)

6 times larger than in the special-relativistic treatment. The numerical value is

$\Delta\varphi_{0}=5.05\times 10^{-7}\hbox{ rad}/\hbox{rev}=43.2\ \hbox{as/cy}$ (53)

for Mercury. This numerical result agrees remarkably well with the observations collected, for the case of Mercury, in table 2. The results for other bodies in the solar system are given in table 3.

Celestial body | $\Delta\varphi_{0}$ measured (in as/cy) | predicted in GTR (as/cy)
---|---|---
Mercury | $43.11\pm 0.45$ | $43.03$ [3] (or 42.98 [7])
Venus | $8.4\pm 4.8$ | $8.6$
Earth | $5.0\pm 1.2$ | $3.8$
Icarus | $9.8\pm 0.8$ | $10.3$

Table 3: Advance of the perihelion in the solar system. From Ref. [12].

Resolving a disagreement between theory and observation that had remained unexplained for years was a major breakthrough that consolidated Einstein’s General Theory of Relativity. Although numerically small, resolving the discrepancy was essential on fundamental grounds, and it opened the era of high-precision astronomy and astrophysics.
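As a numerical cross-check of Eqs. (52)-(53), and of the factor-of-6 relation to the special-relativistic estimate of Eq. (30), the short sketch below evaluates the formula with the constants quoted in Section 2; the outputs land within a few percent of the values quoted in Eqs. (31) and (53), the residual differences coming only from the rounding of the input constants used here.

```python
import math

G = 6.67430e-11            # m^3 kg^-1 s^-2
M_sun = 1.9891e30          # kg
c = 2.99792458e8           # m/s
a = 57.91e9                # Mercury semi-major axis (m)
e = 0.205630               # Mercury eccentricity
period_yr = 0.240          # Mercury orbital period (yr), as used in Eq. (20)

p = a * (1 - e**2)
rad_to_as_per_cy = (180 / math.pi * 3600) * (100 / period_yr)

dphi_gr = 6 * math.pi * G * M_sun / (p * c**2)   # Eq. (52)
dphi_sr = dphi_gr / 6                            # Eqs. (30) and (52): SR gives exactly 1/6 of GR

print(f"GR : {dphi_gr:.3e} rad/rev = {dphi_gr * rad_to_as_per_cy:5.1f} as/cy")
print(f"SR : {dphi_sr:.3e} rad/rev = {dphi_sr * rad_to_as_per_cy:5.1f} as/cy")
```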
## 7 But what about the influence of external planets in GTR?

For consistency, we must now ensure that taking into account the effect of the external planets at the level of General Relativity does not ruin the formidable agreement of the previous calculation. To that end, let us recall that at the level of Newtonian gravitation theory, the equation of motion for a central potential $V(r)$ reads

$\frac{d^{2}u}{d\varphi^{2}}+u(\varphi)=-\frac{m}{|{\bm{\sigma}}|^{2}}\frac{dV}{du},$ (54)

while in GTR the corresponding equation is (42),

$\frac{d^{2}u}{d\varphi^{2}}+g_{00}(u)u=-\frac{1}{2}\frac{dg_{00}(u)}{du}\Bigl(\frac{m^{2}c^{2}}{|{\bm{\sigma}}|^{2}}+u^{2}\Bigr)$ (55)

with

$g_{00}(u)=1+\frac{2\phi(u)}{c^{2}}.$ (56)

Using the potential energy found in Section 3, limited to the leading order for the planet contribution,

$V(u)=m\phi(u)=-GM_{\odot}mu-\frac{GM_{\rm P}m}{4R_{\rm P}^{3}u^{2}},$ (57)

we get the following equation of motion in the classical case:

$\frac{d^{2}u}{d\varphi^{2}}+u(\varphi)=\underbrace{\frac{GM_{\odot}m^{2}}{|{\bm{\sigma}}|^{2}}}_{\hbox{\footnotesize Sun, Newtonian level}}-\underbrace{\frac{GM_{\rm P}m^{2}}{2R_{\rm P}^{3}|{\bm{\sigma}}|^{2}}\frac{1}{u^{3}}}_{\hbox{\footnotesize external planet, Newtonian level}}=\frac{1}{p}-\frac{M_{\rm P}}{2M_{\odot}}\frac{1}{pR_{\rm P}^{3}}\frac{1}{u^{3}}.$ (58)

This is the equation that we analyzed earlier. Within General Relativity, we have seen that the equation of motion takes the form of Eq. (42), now with

$g_{00}(u)=1-\frac{2GM_{\odot}}{c^{2}}u-\frac{GM_{\rm P}}{2R_{\rm P}^{3}c^{2}}\frac{1}{u^{2}}.$ (59)

The equation of motion then takes the form

$\frac{d^{2}u}{d\varphi^{2}}+u(\varphi)=\underbrace{\frac{GM_{\odot}m^{2}}{|{\bm{\sigma}}|^{2}}}_{\hbox{\footnotesize Sun, Newtonian level}}+\underbrace{\frac{3GM_{\odot}}{c^{2}}u^{2}}_{\hbox{\footnotesize Sun, GTR correction}}-\underbrace{\frac{M_{\rm P}}{2M_{\odot}}\frac{1}{pR_{\rm P}^{3}}\frac{1}{u^{3}}}_{\hbox{\footnotesize external planet, Newtonian level}}-\underbrace{\frac{GM_{\rm P}}{4R_{\rm P}^{5}c^{2}}\frac{1}{u^{3}}}_{\hbox{\footnotesize external planet, GTR correction}}.$ (60)

Comparing Eq. (60) with Eq. (58), we see that only the last term differs in the planet contribution. Comparing the orders of magnitude of the two planetary terms, the GTR correction is about $10^{8}$ times smaller than the classical one. This shows that no further analysis of this effect is needed: at the accuracy of the observations, it leads to the same results as Newtonian gravity. The $43$ as/cy are indeed due solely to the GTR correction to the influence of the Sun.

## 8 A problem recently revisited

C.M. Will is a specialist in the experimental verification of the General Theory of Relativity. He recently revisited the problem of the advance of Mercury’s perihelion [13] and found a new contribution arising from the interaction between Mercury’s motion and the gravitomagnetic field of the moving planets.
The numerical contribution is very small, with a few parts in a million of the GTR main contribution to precession, but considers that this should be detectable experimentally. It has to be noticed that the GTR correction to the planets’ contribution that we have estimated is still a factor 10 smaller, but it might be accessible in the future. ## References ## References * [1] L.D. Landau and E.M. Lifshitz, Mécanique, Editions Mir Moscou, 1981. * [2] H. Goldstein, Classical Mechanics, Addison-Wesley, 1950. * [3] G.M. Clemence, The relativity effect in planetary motions, Rev. Mod. Phys. 19, 361 1947. * [4] M.P. Price and W.F. Rush, Am. J. Phys. 47, 531–534 1979. * [5] B. Davies, Elementary theory of perihelion precession, Am. J. Phys. 51, 909 1983. * [6] Kin-Ho Lo, Kenneth Young, Benjamin Y. P. Lee, Am. J. Phys. 81, 695–702 2013. * [7] A.N. Nobili and C.M. Will, The real value of Mercury’s perihelion advance, Nature 320 (6), 39. * [8] J.-P. Verdet, Astronomie et astrophysique, Larousse, Paris 1993, p.736. * [9] M.A. Tonnelat, Histoire du Principe de Relativité, Flammarion, Paris 1971, pp. 336-339. * [10] N.T. Roseveare, Mercury’s Perihelion - From Le Verrier to Einstein, Oxford University Press, 1982. * [11] K.T. McDonald, Special Relativity and the Precession of the Perihelion. * [12] A.P. French, The story of General Relativity, in Einstein, a centenary volume, ed. by A.P. French, Heinemann, London 1979. * [13] C. Will, New General Relativistic Contribution to Mercury’s Perihelion Advance, Phys. Rev. Lett. 120, 191101 2018, doi:10.1103/PhysRevLett.120.191101 * [21] H. Poincaré, Limites de la loi de Newton, Bull. Astro. 17, 121 (1953). Lectures from 1906. See pp. 236-239 * [22] H. Poincaré, La Dynamique de l’électron, Rev. Gen. sci. Pure Appl. 19, 386 (1908). See pp. 389-401 * [23] H.A. Lorentz, Alte und Neue Fragen der Physik, Phys. Z. 11, 1234 (1910). See p. 1240 * [24] W. De Sitter, On the bearing of the Principle of Relativity on Gravitational Astronomy, Mon. Not. Roy. Astro. Soc. 71, 388 (1911) * [25] G. Nordström, Relativitätsprinzip und Gravitation, Phys. Z. 13, 1126 (1912) * [26] O. Lodge, Astronomical Consequences of the Electrical Theory of Matter, Phil. Mag. 34, 81 (1917) * [27] I. Newton, Philosophiæ Naturalis Principia Mathematica (1686), Prop 45, Book 1, p. 177 * [28] S.R. Valluri, C. Wilson and W.L. Harper, J. Hist. Astron. 28, 13 (1997) * [29] J. Bertrand, Théorème relatif au mouvement d’un point attiré vers un centre fixe, Compt. Rendus Acad. Sci. 77, 749 (1873) * [30] A. Hall, A Suggestion in the Theory of Mercury, Astron. J. 14, 49 (1894) * [31] A.S. Eddington, Astronomical Consequences of the Electrical Theory of Matter. A Note on Sir Oliver Lodge’s Suggestions, Phil. Mag. 34, 163 (1917) * [32] A.S. Eddington, Astronomical Consequences of the Electrical Theory of Matter. A Note on Sir Oliver Lodge’s Suggestions, II, Phil. Mag. 34, 321 (1917) * [33] O. Lodge, Astronomical Consequences of the Electrical Theory of Matter. Supplementary Note, Phil. Mag. 34, 517 (1917) * [34] O. Lodge, Continued Discussion of the Astronomical and Gravitational Bearings of the Electrical Theory of Matter, Phil. Mag. 35, 141 (1918) * [35] A.S. Eddington, Electrical Theories of Matter and their Astronomical Consequences with special reference to the Principle of Relativity, Phil. Mag. 35, 481 (1918) * [36] R.J. Kennedy, Planetary Motion in a Retarded Newtonian Potential Field, Proc. Nat. Acad. Sci. 1, 744 (1929) * [37] H. Goldstein, C.P. Poole and J. Safko, Classical Mechanics, 3rd ed. 
(Addison-Wesley, 2002), Ex. 26, Chap. 7 * [39] T.E. Phipps Jr, Mercury’s precession according to Special Relativity, Am. J. Phys. 54, 245 (1986) * [40] P.C. Peters, Comment on “Mercury’s precession according to Special Relativity”, Am. J. Phys. 55, 757 (1987) * [41] T. Biswas, Minimally relativistic Newtonian gravity, Am. J. Phys. 56, 1032 (1988) * [42] D.H. Frisch, Simple aspects of post-Newtonian gravitation, Am. J. Phys. 58, 332 (1990) * [43] P.C. Peters, Comment on “Minimally relativistic Newtonian gravity”, Am. J. Phys. 58, 188 (1990) * [44] O.D. Jefimenko, Gravitation and Cogravitation (Electret Scientific Company, Star City, 2006), sec. 20-2, p. 333 * [46] R. Wayne, Explanation of the Perihelion Motion of Mercury in Terms of a Velocity- Dependent Correction to Newton’s Law of Gravitation, Afr. Rev. Phys. 10, 26 (2015) * [47] T.J. Lemmon and A.R. Mondragon, Kepler’ s Orbits and Special Relativity in Introductory Classical Mechanics (Sept. 10, 2020) * [48] C. Corda, The Advance of Planets’ Perihelion in Newtonian Theory Plus Gravitational and Rotational Time Dilation (Sept. 10, 2020) The secret of planets’ perihelion between Newton and Einstein, Phys. Dark Universe 32, 100834 (2021) * [49] G. D’Abramo, Comment on “The secret of planets perihelion between Newton and Einstein”, Phys. Dark Universe 37, 101076 (2022)
# Re-entrant magic-angle phenomena in twisted bilayer graphene in integer magnetic fluxes

Yifei Guan, Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, CH-1015, Switzerland; Oleg V. Yazyev, Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, CH-1015, Switzerland; Alexander Kruchkov, Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, CH-1015, Switzerland; Department of Physics, Harvard University, Cambridge, MA 02138, USA; Branco Weiss Society in Science, ETH Zurich, Zurich, Switzerland

###### Abstract

In this work we address the re-entrance of magic-angle phenomena (band flatness and quantum-geometric transport) in twisted bilayer graphene (TBG) subjected to strong magnetic fluxes $\pm\Phi_{0}$, $\pm 2\Phi_{0}$, $\pm 3\Phi_{0}$… ($\Phi_{0}=h/e$ is the flux quantum, counted per moiré cell). The moiré translation invariance is restored at the integer fluxes, for which we calculate the TBG band structure using accurate atomistic models with lattice relaxations. As in the zero-flux case, the reported effect breaks down rapidly when the twist is tuned away from the magic-angle condition. We conclude that the magic-angle physics re-emerges in high magnetic fields, witnessed by the appearance of flat electronic bands distinct from Landau levels and manifesting non-trivial quantum geometry. We further discuss the possible flat-band quantum-geometric contribution to the superfluid weight in strong magnetic fields (28 T at 1.08° twist), according to the Peotta-Törmä mechanism.

In 2D systems, the electronic spectrum in a magnetic field develops a fractal structure (the ”Hofstadter butterfly”, Hofstadter 1976), which lowers the effective dimensionality and contributes to suppressing superconductivity (long-range order is generically destroyed in dimensions lower than $D$=$2$). The observation of Hofstadter physics requires strong magnetic fluxes ($\sim$$h/e$ per unit cell), which became experimentally accessible only with the advent of moiré superlattices (Dean et al. 2013; Hunt et al. 2013; Ponomarenko et al. 2013). In those experiments, magnetic fields of nearly 30 T were applied to a graphene monolayer twisted on hexagonal boron nitride (hBN), resulting in effective fluxes of $\Phi=BA\sim\Phi_{0}$ per moiré cell ($A$ is the moiré cell area, and $\Phi_{0}$$=$$h/e\approx 4.14\times 10^{-15}$ Wb is the magnetic flux quantum).
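For orientation, the field required to thread one flux quantum through a moiré cell follows directly from the cell area. The sketch below uses the standard moiré-geometry relations $L=a/(2\sin(\theta/2))$ and $A=(\sqrt{3}/2)L^{2}$ with an assumed graphene lattice constant $a=0.246$ nm (these inputs are not quoted in the paper); it reproduces the $\approx 28$ T scale mentioned below for the $1.08^{\circ}$ magic angle.

```python
import math

h = 6.62607015e-34          # Planck constant (J s)
e = 1.602176634e-19         # elementary charge (C)
a = 0.246e-9                # graphene lattice constant (m), assumed

def field_for_one_flux_quantum(theta_deg):
    """Magnetic field B0 such that B0 * A_moire = h/e at twist angle theta."""
    theta = math.radians(theta_deg)
    L = a / (2 * math.sin(theta / 2))       # moire lattice constant
    area = math.sqrt(3) / 2 * L**2          # moire (rhombic) unit-cell area
    return (h / e) / area

for theta in (1.47, 1.08, 0.81):
    print(f"theta = {theta:4.2f} deg  ->  B0 = {field_for_one_flux_quantum(theta):5.1f} T")
```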
Furthermore, the twisted graphene multilayers provide a natural platform to test the interplay between Hofstadter physics and strong correlations (Cao et al. 2018; Hao et al. 2021; Park et al. 2021a, b; Zhang et al. 2021). In twisted bilayer graphene, the smaller the twist $\theta$, the larger the effective magnetic flux ($\Phi\propto 1/\theta^{2}$) at a fixed field $\mathbf{B}$: for TBG at the magic angle of $1.08^{\circ}$, one magnetic flux quantum corresponds to $B_{0}\approx 28$ T, which is reachable in modern laboratories (Hahn et al. 2019). The magic-angle graphene heterostructures (two or more graphene sheets twisted by an angle of $\sim$$1^{\circ}$, at which a very narrow band emerges in the electronic spectrum) have re-attracted significant attention due to re-entrant superconductivity in strong magnetic fields and a reported Pauli-limit violation (Cao et al. 2021; Chaudhary et al. 2021; Shaffer et al. 2021). Re-entrant correlated (Chern) insulator phases were reported at strong magnetic fluxes (Das et al. 2021; Herzog-Arbeitman et al. 2021), close to one magnetic flux quantum $\Phi_{0}=h/e$ per moiré unit cell (see also Sheffer and Stern 2021). In this paper we show that at all integer magnetic fluxes $\Phi=\pm N\Phi_{0}$, the magic-angle physics of TBG is re-entrant and similar to the physics of TBG in zero magnetic field. Namely, we find non-trivial flat Chern bands at integer flux $h/e$ (see Fig. 1). These bands are distinct from Landau levels for three reasons: (i) outside the magic-angle range the band flatness breaks down; (ii) the magic-angle flat bands are characterized by the total Chern number $|C|=2$, while Landau levels are characterized by $|C|=1$; and (iii) the quantum geometry (Fubini-Study metric $\mathfrak{G}_{ij}$) of the magic-angle flat bands is highly nontrivial, different from the Landau-level case, and provides an enhanced basis for quantum-geometric transport (Peotta and Törmä 2015) in flat bands without dispersive contributions, since $\sum_{\mathbf{k}}\text{Tr}\,\mathfrak{G}_{ij}(\mathbf{k})\approx 3$.

Figure 1: Electronic band structure of magic-angle twisted bilayer graphene at integer magnetic flux: (a) zero flux ($\Phi=0$); (b) flux one ($\Phi=h/e$); (c) flux two ($\Phi=2h/e$). All band structures are calculated with the tight-binding model including lattice relaxation effects. At integer magnetic flux, the lower (two out of four) flat bands acquire $C=-2$.

In this paper we report that the nontrivial quantum geometry, the electronic band flatness, and the conditions for unconventional quantum transport are re-established at integer magnetic flux (in units of $h/e$) through the moiré unit cell. Similarly to the zero-flux case, the Fermi velocity and the bandwidth drop dramatically at the magic angle, and a dispersive band reappears both below and above the magic angle. Importantly, the flat bands at finite flux have non-trivial Chern numbers $|C|$=$2$ (defined at half filling, see Fig. 1) and non-trivial quantum geometry, which follows the quantum-geometric flatness criterion (Kruchkov 2021)

$\text{Tr}\,\mathcal{G}_{ij}(\mathbf{k})\simeq\mathcal{F}_{xy}(\mathbf{k}).$ (1)

We show numerically, with atomistic calculations, that in realistic TBG the ”ideal band condition” (1) is satisfied in the regions of the magnetic moiré Brillouin zone (mmBZ) where the band flatness is pronounced in terms of a vanishing Fermi velocity (e.g. around the K points of the mmBZ).
Here $\mathcal{G}_{ij}$ and $F_{xy}(\mathbf{k})$ are the real and imaginary parts of the quantum-geometric tensor $\mathfrak{G}_{ij}$(see further), determining the quantum distance between electronic states in the (projected) Hilbert space.Provost and Vallee (1980) In this paper, we investigate the effect of strong magnetic fields with integer flux in twisted bilayer graphene with the help of the accurate atomistic model including lattice relaxation effects at the magic angle, and compare the observed results with the established knowledge of the zero-flux TBG case. The details of the tight-binding Hamiltonian are provided in Supplementary Materials (SM).SI The key observation is that the magnetic translation operators commute at every integer flux $\Phi=N\Phi_{0}$, namely $\displaystyle\hat{T}_{\mathbf{a}_{1}}\hat{T}_{\mathbf{a}_{2}}=e^{i2\pi\Phi/\Phi_{0}}\hat{T}_{\mathbf{a}_{2}}\hat{T}_{\mathbf{a}_{1}},\ \ \to\ \ [\hat{T}_{\mathbf{a}_{1}},\hat{T}_{\mathbf{a}_{2}}]_{\Phi=N\Phi_{0}}=0.$ Thus the moiré unit cell is restored, and the system flows towards the electronic band structure defined on the mmBZ, which in the integer flux has the same periodicity as the moiré Brillouin zone (mBZ) in zero magnetic flux. But instead of dispersionless Landau levels, we recover the set of dispersive bands, with its band structure depending crucially on the twist angle (Fig.3). Figure 2: Hofstader spectrum of magic-angle bilayer graphene in magnetic flux (in units $h/e$). The digits in gaps indicate the in-gap Chern numbers, calculated through the edge modes counting. Additionally, the insets below shows Wannier charge center (WCC) winding at fluxes $h/e$ and $2h/e$, calculated at the half filling of the flat bands (2 of 4 subbands are occupied). The WCC winding is nontrivial, revealing $|C|$$=$$2$ at half filling in the integer flux. Re-entrant magic angle spectra at integer magnetic flux. We start from considering the electronic band structure at an integer magnetic flux $\Phi=Nh/e$. Such a flux provides the $2\pi N$ circulation of magnetic vector potential, and hence reconstructs the moire Brillouin zone (mBZ); magnetic translation operators commute at integer flux $[\hat{T}_{\mathbf{a}_{1}},\hat{T}_{\mathbf{a}_{2}}]=0$. This allows us to re- introduce momentum as a good quantum number and compute the electronic band structure starting from the tight binding model for magic-angle twisted bilayer graphene with atomic relaxations, modified with Peierls substitution, $\displaystyle t_{ij}\to t_{ij}e^{-i\frac{e}{h}\int_{\mathbf{r}_{j}}^{\mathbf{r}_{i}}\mathbf{A}(\mathbf{r}^{\prime})d\mathbf{r}^{\prime}}.$ (2) Comparing to the widely-used continuum models,Dos Santos _et al._ (2007); Bistritzer and MacDonald (2011a); Tarnopolsky _et al._ (2019) the accurate tight binding model has an important advantage of addressing the magic-angle physics under realistic conditions of atomic lattice relaxations, proved to be indispensable in the experiments due to domain formation.Carr _et al._ (2018) Worth noting, the finite magnetic flux shifts the effective Brillouin zone (see Fig. 1), thus re-defining the positions of high-symmetry points. Otherwise, the original moire BZ and reconstructed magnetic moire Brillouin zone (mmBZ) have the same orientation and periodicity, which is the main technical condition to observe the magic angle phenomena. 
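To make the role of Eq. (2) concrete, the following minimal sketch (plain Python/NumPy, not the relaxed atomistic code used for the figures) attaches a Peierls phase to each hopping of a generic tight-binding model in the Landau gauge $\mathbf{A}=(0,Bx)$. The hopping list, site positions and unit-cell area are placeholders, and the phase prefactor is written in the conventional form $2\pi/\Phi_{0}$, so that only the flux per cell in units of $h/e$ enters.

```python
import numpy as np

def peierls_phase(r_i, r_j, flux_per_cell, cell_area):
    """Phase (radians) picked up on a straight hop r_j -> r_i in the Landau gauge A = (0, B x).

    flux_per_cell is the flux through one unit cell in units of h/e, i.e.
    B * cell_area = flux_per_cell * Phi0, so the prefactor 2*pi/Phi0 cancels against Phi0.
    """
    x_mid = 0.5 * (r_i[0] + r_j[0])   # A is linear in x, so the line integral uses the midpoint
    dy = r_i[1] - r_j[1]
    return 2.0 * np.pi * flux_per_cell * x_mid * dy / cell_area

def apply_peierls(hoppings, positions, flux_per_cell, cell_area):
    """Dress a list of hoppings (i, j, t_ij) with Peierls factors, cf. Eq. (2)."""
    dressed = []
    for i, j, t in hoppings:
        phi = peierls_phase(positions[i], positions[j], flux_per_cell, cell_area)
        dressed.append((i, j, t * np.exp(-1j * phi)))
    return dressed
```

At integer flux_per_cell the dressed Hamiltonian is gauge-equivalent to a Bloch-periodic one on the original moiré cell, which is the code-level counterpart of the commuting magnetic translations discussed above; Bloch diagonalisation then proceeds exactly as at zero field.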
We report that the characteristic flat band, the hallmark of the magic-angle graphene, re-appears at every integer magnetic flux, $\pm\Phi_{0}$, $\pm 2\Phi_{0}$, $\pm 3\Phi_{0}$… Figure 1 provides the electronic band spectrum at $\Phi=0,\Phi_{0},2\Phi_{0}$, while the band structures (and other properties) in stronger magnetic fluxes are given in the SM.SI The first observation is that the flat band reappears exactly at the magic angle, while the higher bands are dispersive (see Fig. 3). We argue below that the magic-angle flat band (MAFB) at integer flux is not a consequence of Landau-level (LL) flattening since: (i) it has $|C|$=$2$ at half-filling; (ii) it demonstrates quantum geometry incompatible with LL physics (Fig. 5); and (iii) it becomes dispersive outside the magic angle. It is worth noting that the strong magnetic fields restore an asymptotic particle-hole symmetry of the low-energy states, which is moderately broken at zero flux. Figure 3: Magic angle signature in integer magnetic flux $\Phi=h/e$ at different twists. (a) Above the magic angle, $\theta=1.47^{\circ}$. (b) At the magic angle, $\theta=1.08^{\circ}$. (c) Below the magic angle, $\theta=0.81^{\circ}$. We observe behaviour similar to that of zero-flux TBG tuned away from the magic angle.Bistritzer and MacDonald (2011b); Tarnopolsky _et al._ (2019) Magnetic spectrum distinct from Landau levels. We further address the properties of the electronic band spectrum versus magnetic field. For this, the characteristic quantity to calculate is the Hofstadter diagram, which traces the energies of the electronic states allowed in a quantizing magnetic field as a function of magnetic flux through the unit cell (Fig. 2). We observe that the flat bands at integer flux do not stem from Landau-level Hofstadter physics, but rather from the magic-angle physics of the TBG. To show that they are fundamentally different from the LLs, we calculate the Chern number through a Wilson-loop computation in the form of Wannier charge center winding, see Fig. 2. We find that the flat bands at integer $\Phi$ have Chern numbers $|C|=2$, incompatible with the Chern numbers of Landau levels on the lattice ($|C|=1$). Furthermore, we fix the magnetic flux to $h/e$ and investigate the change in the electronic band spectrum versus the twist angle (see Fig. 3; cases of higher integer flux are provided in the SM.SI ). While the spectrum is strongly dispersive outside of the magic angle (the bandwidth is approximately 100 meV at the $1.47^{\circ}$ twist), at the magic angle $1.08^{\circ}$ the bandwidth is just $15$ meV, comparable to the magic-angle bandwidth in zero flux (see also Fig. 1). We conclude that the magic-angle physics is in its essence restored. Figure 4: Asymptotic particle-hole symmetry (PHS) in strong magnetic flux is emergent for $\Phi$$\geq$$2h/e$. The low-energy PH asymmetry of the flat bands is determined as the maximum energy difference between the conduction and valence bands at neutrality, in meV. The PH asymmetry is maximal in zero flux ($\Phi=0$), amounting to nearly 4 meV. The PHS violation is however strongly suppressed with applied magnetic field, dropping to 1 meV at $\Phi=h/e$ and just 0.07 meV at $\Phi=2h/e$. The suppression of PH asymmetry roughly follows $\sim\exp[-\Phi/\Phi_{*}]$, with $\Phi_{*}\approx 0.7h/e$. Emergent particle-hole symmetry. We now focus on the low-energy numerical analysis of the flat bands. 
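For concreteness, the two diagnostics used in this low-energy analysis (flat-band bandwidth and particle-hole asymmetry) can be extracted from a band-structure array in a few lines. The sketch below assumes band energies sampled on the mmBZ with neutrality between the middle bands; the pairing of the conduction and valence bands is one plausible reading of the asymmetry measure described in the caption of Fig. 4, and the actual values quoted in the text come from the atomistic calculation.

```python
import numpy as np

def flatband_diagnostics(energies):
    """energies: array (n_k, n_bands) of band energies in eV, sorted per k-point,
    with charge neutrality between bands n_bands//2 - 1 and n_bands//2.

    Returns (bandwidth, ph_asymmetry) in meV for the four bands nearest neutrality.
    """
    n = energies.shape[1] // 2
    flat = energies[:, n - 2:n + 2]        # two valence + two conduction flat bands
    e_v = flat[:, 1]                       # highest valence flat band
    e_c = flat[:, 2]                       # lowest conduction flat band
    bandwidth = flat.max() - flat.min()    # spread of the flat-band complex
    ph_asym = np.max(np.abs(e_c + e_v))    # mismatch under E -> -E, k-point by k-point
    return 1e3 * bandwidth, 1e3 * ph_asym
```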
The main observation under strong magnetic fields is that the low-energy electronic spectrum, featuring flat bands, acquires an asymptotic particle-hole symmetry (Fig. 4). To quantify the particle-hole asymmetry (PHA), we track the maximum difference in the flat-band energies between the occupied and unoccupied bands at neutrality, and investigate it as a function of magnetic field (Fig. 4). We observe that while PHS is generically violated in zero flux (PHA = 4 meV), in strong magnetic fields this symmetry re-emerges in its asymptotic form for $\Phi\geq 2h/e$ (Fig. 4). Approximately, the suppression of PH asymmetry follows $\sim\exp[-\Phi/\Phi_{*}]$, with $\Phi_{*}\approx 0.7h/e$ (corresponding to $\approx 20$ T). This can be qualitatively understood in terms of the local tight-binding hoppings, whose phases oscillate rapidly in space in strong magnetic fields; the system thus performs a self-averaging which re-defines the effective hopping parameters. In realistic TBG systems at $B=0$, with sublattice hoppings and lattice relaxations, the Chern number in zero flux vanishes since the particle-hole symmetry (PHS) is violated at the atomistic level, chiral symmetry (CS) is broken explicitly, and time-reversal symmetry (TRS) is present. At integer fluxes, we have asymptotic PHS, broken TRS and broken CS, and while strictly speaking the system belongs to the class-$A$ topological insulators, its dynamics flows towards class $C$, characterized by $2Z$ (even-valued) topological invariants in 2D systems.Altland and Zirnbauer (1997) This gives a plausible explanation for the promotion of the $|C|$=$2$ Chern numbers in the flat bands at integer flux $|\Phi|\geq\Phi_{0}$, once two of the four subbands are slightly gapped out (Fig. 1). We observe that the magic-angle TBG at zero flux $\Phi=0$ and at integer fluxes $\Phi=\pm N\Phi_{0}$ belongs to different topological classes, which should be taken into account for understanding recent TBG experiments. Non-trivial quantum geometric properties. We report that the magic-angle flat band in integer magnetic flux has non-trivial quantum geometry (Fig. 5), distinct from Landau levels. The quantum geometry is defined for the Bloch states in the projective Hilbert space; it can be separated into a real (diagonal) part, the Fubini-Study metric, and an imaginary (off-diagonal) part, whose components give the Berry curvature.Provost and Vallee (1980) Numerically, we compute the quantum-geometric tensor for the flat bands in TBG by using its spectral representationKruchkov (2022) $\displaystyle\mathfrak{G}_{ij}(\mathbf{k})=\sum_{n\neq m}\frac{\langle u_{n\mathbf{k}}|\frac{\partial\mathcal{H}_{\mathbf{k}}}{\partial k_{i}}|u_{m\mathbf{k}}\rangle_{0}\langle u_{m\mathbf{k}}|\frac{\partial\mathcal{H}_{\mathbf{k}}}{\partial k_{j}}|u_{n\mathbf{k}}\rangle_{0}}{(\varepsilon_{n\mathbf{k}}-\varepsilon_{m\mathbf{k}})^{2}}.$ (3) We further introduce $\mathcal{G}_{ij}$=$\text{Re}\mathfrak{G}_{ij}$, $\mathcal{F}_{ij}$=$-2\,\text{Im}\mathfrak{G}_{ij}$. One can find a basis in which $\mathcal{G}_{ij}$ is diagonal and $\mathcal{F}_{ij}$ is off-diagonal. The plots for $\mathcal{G}_{xx},\mathcal{G}_{yy}$ (Fubini-Study metrics) calculated at half-filling are presented in Fig. 5. We observe that the re-entrant flat band has nontrivial quantum geometry within the mmBZ, as manifested in Figs. 5(a,b,c), which is not compatible with a Landau-level quantum geometry (the LL quantum geometry is constant in the whole Brillouin zone). 
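A minimal numerical sketch of Eq. (3) and of the derived quantities is given below (plain Python/NumPy). Here h_of_k is a placeholder callable returning the (Peierls-dressed) Bloch Hamiltonian as an $(N,N)$ Hermitian matrix, and the $k$-derivatives are taken by finite differences; for a composite flat band one sums $n$ over the occupied subset and restricts $m$ to the remaining bands.

```python
import numpy as np

def qgt_band(h_of_k, k, n, dk=1e-4):
    """Quantum-geometric tensor (2x2, complex) of band n at k, via the spectral sum of Eq. (3)."""
    eps, U = np.linalg.eigh(h_of_k(k))                      # eigenvectors as columns of U
    ex, ey = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    dH = [(h_of_k(k + dk * e_) - h_of_k(k - dk * e_)) / (2 * dk) for e_ in (ex, ey)]
    v = [U.conj().T @ d @ U for d in dH]                    # matrix elements <u_a| dH_i |u_b>
    G = np.zeros((2, 2), dtype=complex)
    for m in range(len(eps)):
        if m == n:
            continue
        denom = (eps[n] - eps[m]) ** 2
        for i in range(2):
            for j in range(2):
                G[i, j] += v[i][n, m] * v[j][m, n] / denom
    return G

def metric_and_curvature(h_of_k, k, n):
    """Fubini-Study metric (real part) and Berry curvature F_xy = -2 Im G_xy of band n."""
    G = qgt_band(h_of_k, k, n)
    return G.real, -2.0 * G[0, 1].imag

def chern_number(h_of_k, n, b1, b2, nk=60):
    """Integrate F_xy over the (magnetic) moire BZ spanned by reciprocal vectors b1, b2."""
    total = 0.0
    for a in range(nk):
        for b in range(nk):
            k = (a / nk) * b1 + (b / nk) * b2
            total += metric_and_curvature(h_of_k, k, n)[1]
    dA = abs(b1[0] * b2[1] - b1[1] * b2[0]) / nk**2         # k-space area element
    return total * dA / (2.0 * np.pi)                       # C = (1/2pi) * integral of F_xy
```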
For comparison, we plot the Berry curvature $\mathcal{F}_{xy}$ together with trace of Fubini-Study’s $\mathcal{G}_{ij}$ in Fig. 3d. We observe that the flat band closely follows the quantum-geometric condition for ideal flat bandsKruchkov (2021) $\text{Tr}\mathcal{G}_{ij}$=$\mathcal{F}_{xy}$. It is certainly interesting that this condition is satisfied almost exactly in the regions of mmBZ, where the band flatness is pronounced in terms of vanishing Fermi velocity (around the K points of mmBZ). The deviation to this quantum- geometric bound $\text{Tr}\mathcal{G}_{ij}$=$\mathcal{F}_{xy}$ are observed in the regions of mmBZ with finite dispersion and significant $v_{k}=\partial{\varepsilon_{k}}/\partial{k}$ caused by broken CS of the tight binding calculations. The criterion $\text{Tr}\mathcal{G}_{ij}=F_{xy},$ tests the closeness of a realistic flat band in TBG to flat band idealization through holomorphic/meromorphic representation of the flat band wave functions.Kruchkov (2021) However, since total $|C|=2$, the relevant toy model for TBG in integer flux cannot be represented by solely a holomorphic representation of the quasi-LLL TBG (found in Ref.Tarnopolsky _et al._ (2019)); one needs to take meromorphic flat band contributions into account.Popov and Milekhin (2021) We calculate the distribution of the Berry curvature $F_{xy}$ within the mBZ (the cut is shown in Fig. 3d, the density plots are in SM), which reveals non- homogeneous structure, which is not consistent with the homogeneous Berry curvature of the generic Landau levels (LLs are ”Berry flat”). This yet again provides arguments in the support of the magic-angle physics re-entrance in integer magnetic flux. Finally, we check numerically that the Berry flux $F_{xy}$ encompassed by mmBZ sums up to $|C|$=$2.0\pm 0.07$ at the half filling (see Figs. 1,2). Quantum-geometric transport. As was first introduced by Peotta and TörmäPeotta and Törmä (2015), the nontrivial quantum-geometric tensor $\mathfrak{G}_{ij}$ leads to the the finite superfluid current $J_{i}$$=$$-D_{ij}A_{j}$ even in the limit of perfectly flat band (here $D_{ij}$ is the superfluid weight). This argument is now understood to apply directly to TBG in zero flux, where the $\text{Tr}\mathcal{G}_{ij}$ is nonzero due to hidden nontrivial topology of the flat bands. It was reported with different methodsHu _et al._ (2019); Xie _et al._ (2020); Julku _et al._ (2020) that the quantum geometric tensor (QGT) contribution to the superfluid weight $D_{S}$ in TBG is if not dominant, than at least commensurate with the conventional contributions, thus leading to the BTK transition temperature estimate $T_{\text{BKT}}\sim\sum_{\mathbf{k}}\text{Tr}\mathcal{G}_{ij}$. The QGT contribution holds for different symmetries of the order parameter, and the argument is valid beyond the mean field.Wang _et al._ (2020) The essential physics is captured by Bogoliubov-de-Gennes Hamiltonian $\displaystyle\mathcal{H}_{\text{BdG}}=\left(\begin{array}[]{cc}H_{\mathbf{k}}&\Delta_{\mathbf{k}}\\\ \Delta^{*}_{-\mathbf{k}}&H^{*}_{-\mathbf{k}}\end{array}\right)$ (6) Without loss of generality, we consider superconducting order $\Delta_{\mathbf{k}}=\Delta$. The superfluid weight is then calculated within Kubo formalism through the current-current correlators. We explicitly calculate $\mathcal{G}_{ij}$ numerically (Fig. 
5) with consequent mmBZ integration at half filling (for two of four flat bands occupied, we have $\text{Tr}\mathcal{G}_{ij}\approx{2.8}$), to obtain the superfluid weight, maximal at the middle of the composite flat band with $|C|=2$, $\displaystyle D_{xx}=\frac{2e^{2}}{\hbar^{2}}\Delta\sum_{\mathbf{k}}\text{Tr}\mathcal{G}_{ij}(\mathbf{k})\approx(5.6\pm 0.1)\frac{e^{2}}{\hbar^{2}}\Delta.$ (7) The symmetry $D_{xx}$$=$$D_{yy}$ is assumed. Here we took into account the factor $\sqrt{\nu(1-\nu)}$ (in the notation of Ref.Peotta and Törmä (2015)), where $\nu$ is the filling factor of the composite flat band (indexed by $C=-2$ in Fig. 1). We can further estimate the BKT transition temperature,Berezinskii (1971); Kosterlitz and Thouless (1973); Nelson and Kosterlitz (1977) at which the phase coherence of the superconducting order disappears, from the expression $\pi\hbar^{2}D(T_{*})/8e^{2}T_{*}=1$. Treating $D$ as approximately temperature-independent up to the transition, this gives $T_{*}\approx\pi\hbar^{2}D(0)/8e^{2}\approx 2\Delta$; the order-of-magnitude estimate is thus $T_{*}\sim{\hbar^{2}D(0)}/{e^{2}}\sim\Delta$. The remaining question is, of course, the value of $\Delta$, which should be found self-consistently by solving the Gorkov equations in a magnetic field, or inferred from indirect experimental data; this is beyond the scope of this paper. For a rough estimate, even $\Delta\sim 0.1$ meV will give a physically relevant $T_{*}\sim 1$ K. Whether or not the superfluid order is re-entrant in strong magnetic flux remains open,Chaudhary _et al._ (2021); Shaffer _et al._ (2021) however it would not be surprising in view of the reported Pauli-limit violation and re-entrant superconductivity in strong parallel magnetic fields in twisted graphene multilayers.Cao _et al._ (2021); Park _et al._ (2021b); Zhang _et al._ (2021) Note that the magnetic field may or may not change the symmetry of the superconducting order (resulting in a different $\Delta_{\mathbf{k}}$); the quantum-geometric superfluid weight will nevertheless remain finite due to the nontrivial geometry of the underlying flat bands. Figure 5: Nontrivial quantum metrics in integer magnetic flux $\Phi=h/e$, calculated at half-filling (two of four flat bands are occupied). (a-c) Components of the Fubini-Study metric; (c) shows that $\text{Tr}\mathcal{G}_{ij}$ is gauge-invariant. (d) Comparison between the trace of the Fubini-Study tensor $\text{Tr}\mathcal{G}_{ij}$ and the Berry curvature $\mathcal{F}_{xy}$, which probes how close the band flatness is to perfect flatness through Eq. (1). Conclusions. We conclude that magic-angle physics re-enters in twisted bilayer graphene at every integer number of magnetic flux quanta $\pm\Phi_{0}$, $\pm 2\Phi_{0}$, $\pm 3\Phi_{0}$, etc., through the moiré cell. Of most practical importance to date is the first flux quantum $\pm\Phi_{0}$, which at the twist angle $1.08^{\circ}$ corresponds to experimentally achievable fields of 28 Tesla. We confirm with accurate atomistic calculations, incorporating lattice relaxation effects, that at such fields the magic-angle phenomena re-emerge. This, in particular, can be seen through the re-emergence of very flat bands at the magic angle, distinct from Landau levels, while beyond the magic angle this physics breaks down, similarly to the zero-flux case.Bistritzer and MacDonald (2011b); Tarnopolsky _et al._ (2019) These flat bands at half filling carry nontrivial Chern numbers ($|C|$=$2$) and nontrivial quantum geometry (Fubini-Study metrics), and are fundamentally different from conventional Landau levels. 
We conjecture that, similar to TBG in the zero flux, there is a nonvanishing contribution to the superfluid weight coming from the re-entrant quantum geometric properties, and in the flat topological bands this contribution is significant. Due to the strong quantum geometry of the TBG flat bands in integer flux ($\sum_{\text{BZ}}\text{Tr}\mathcal{G}_{ij}\approx 3$), and the estimated BKT temperature is in order of the gap $T_{*}\approx\Delta$, which gives values significantly elevated with regard to the conventional superconductivity in geometrically-trivial dispersive bands. The behavior of superfluid order parameter beyond the conventional Pauli limit (towards integer magnetic flux) is a subject for further research. Acknowledgments. The authors thank Ady Stern, Yarden Sheffer, Luiz Santos, Gaurav Chaudhary, and Marta Brzezińska for useful discussion. The authors thank B. Andrei Bernevig, Jonah Herzog-Arbeitman, Aaron Chew for further discussion. The project was supported by the Branco Weiss Society in Science, ETH Zurich, through the research grant on flat bands, strong interactions and SYK physics, and by the Swiss National Science Foundation, grants No. 172543. Computations have been performed at the Swiss National Supercomputing Centre (CSCS) under project s1008 and the facilities of Scientific IT and Application Support Center of EPFL. ## References * Hofstadter (1976) D. R. Hofstadter, Physical Review B 14, 2239 (1976). * Dean _et al._ (2013) C. R. Dean, L. Wang, P. Maher, C. Forsythe, F. Ghahari, Y. Gao, J. Katoch, M. Ishigami, P. Moon, M. Koshino, _et al._ , Nature 497, 598 (2013). * Hunt _et al._ (2013) B. Hunt, J. D. Sanchez-Yamagishi, A. F. Young, M. Yankowitz, B. J. LeRoy, K. Watanabe, T. Taniguchi, P. Moon, M. Koshino, P. Jarillo-Herrero, _et al._ , Science 340, 1427 (2013). * Ponomarenko _et al._ (2013) L. Ponomarenko, R. Gorbachev, G. Yu, D. Elias, R. Jalil, A. Patel, A. Mishchenko, A. Mayorov, C. Woods, J. Wallbank, _et al._ , Nature 497, 594 (2013). * Cao _et al._ (2018) Y. Cao, V. Fatemi, S. Fang, K. Watanabe, T. Taniguchi, E. Kaxiras, and P. Jarillo-Herrero, Nature 556, 43 (2018). * Hao _et al._ (2021) Z. Hao, A. Zimmerman, P. Ledwith, E. Khalaf, D. H. Najafabadi, K. Watanabe, T. Taniguchi, A. Vishwanath, and P. Kim, Science 371, 1133 (2021). * Park _et al._ (2021a) J. M. Park, Y. Cao, K. Watanabe, T. Taniguchi, and P. Jarillo-Herrero, Nature 590, 249 (2021a). * Park _et al._ (2021b) J. M. Park, Y. Cao, L. Xia, S. Sun, K. Watanabe, T. Taniguchi, and P. Jarillo-Herrero, arXiv preprint arXiv:2112.10760 (2021b). * Zhang _et al._ (2021) Y. Zhang, R. Polski, C. Lewandowski, A. Thomson, Y. Peng, Y. Choi, H. Kim, K. Watanabe, T. Taniguchi, J. Alicea, _et al._ , arXiv preprint arXiv:2112.09270 (2021). * Hahn _et al._ (2019) S. Hahn, K. Kim, K. Kim, X. Hu, T. Painter, I. Dixon, S. Kim, K. R. Bhattarai, S. Noguchi, J. Jaroszynski, _et al._ , Nature 570, 496 (2019). * Cao _et al._ (2021) Y. Cao, J. M. Park, K. Watanabe, T. Taniguchi, and P. Jarillo-Herrero, Nature 595, 526 (2021). * Chaudhary _et al._ (2021) G. Chaudhary, A. MacDonald, and M. Norman, Physical Review Research 3, 033260 (2021). * Shaffer _et al._ (2021) D. Shaffer, J. Wang, and L. H. Santos, Physical Review B 104, 184501 (2021). * Das _et al._ (2021) I. Das, C. Shen, A. Jaoui, J. Herzog-Arbeitman, A. Chew, C.-W. Cho, K. Watanabe, T. Taniguchi, B. A. Piot, B. A. Bernevig, _et al._ , arXiv preprint arXiv:2111.11341 (2021). * Herzog-Arbeitman _et al._ (2021) J. Herzog-Arbeitman, A. Chew, D. K. Efetov, and B. A. 
Bernevig, arXiv preprint arXiv:2111.11434 (2021). * Sheffer and Stern (2021) Y. Sheffer and A. Stern, Physical Review B 104, L121405 (2021). * Peotta and Törmä (2015) S. Peotta and P. Törmä, Nature Communications 6, 1 (2015). * Kruchkov (2021) A. Kruchkov, arXiv preprint arXiv:2105.14672 (2021). * Provost and Vallee (1980) J. P. Provost and G. Vallee, Communications in Mathematical Physics 76, 289 (1980). * (20) Supplementary Materials (SM) are available in the published version of this manuscript . * Dos Santos _et al._ (2007) J. L. Dos Santos, N. Peres, and A. C. Neto, Physical Review Letters 99, 256802 (2007). * Bistritzer and MacDonald (2011a) R. Bistritzer and A. H. MacDonald, Proceedings of the National Academy of Sciences 108, 12233 (2011a). * Tarnopolsky _et al._ (2019) G. Tarnopolsky, A. J. Kruchkov, and A. Vishwanath, Physical Review Letters 122, 106405 (2019). * Carr _et al._ (2018) S. Carr, D. Massatt, S. B. Torrisi, P. Cazeaux, M. Luskin, and E. Kaxiras, Physical Review B 98, 224102 (2018). * Bistritzer and MacDonald (2011b) R. Bistritzer and A. H. MacDonald, Proceedings of the National Academy of Sciences 108, 12233 (2011b). * Altland and Zirnbauer (1997) A. Altland and M. R. Zirnbauer, Physical Review B 55, 1142 (1997). * Kruchkov (2022) A. Kruchkov, arXiv:2XXX.XXXXX (2022). * Popov and Milekhin (2021) F. K. Popov and A. Milekhin, Physical Review B 103, 155150 (2021). * Hu _et al._ (2019) X. Hu, T. Hyart, D. I. Pikulin, and E. Rossi, Physical Review Letters 123, 237002 (2019). * Xie _et al._ (2020) F. Xie, Z. Song, B. Lian, and B. A. Bernevig, Physical Review Letters 124, 167002 (2020). * Julku _et al._ (2020) A. Julku, T. J. Peltonen, L. Liang, T. T. Heikkilä, and P. Törmä, Physical Review B 101, 060505 (2020). * Wang _et al._ (2020) Z. Wang, G. Chaudhary, Q. Chen, and K. Levin, Physical Review B 102, 184504 (2020). * Berezinskii (1971) V. Berezinskii, Sov. Phys. JETP 34, 610 (1971). * Kosterlitz and Thouless (1973) J. M. Kosterlitz and D. J. Thouless, Journal of Physics C: Solid State Physics 6, 1181 (1973). * Nelson and Kosterlitz (1977) D. R. Nelson and J. Kosterlitz, Physical Review Letters 39, 1201 (1977).
# Confirmation of the centrality of the Huanan market among early COVID-19 cases Reply to Stoyan and Chiu (2024) F. Débarre1 & M. Worobey2 1 Institut d’Écologie et des Sciences de l’Environnement (IEES-Paris, UMR 7618), CNRS, Sorbonne Université, UPEC, IRD, INRAE, Paris, France. ORCID: 0000-0003-2497-833X. Contact<EMAIL_ADDRESS> 2 Department of Ecology and Evolutionary Biology, University of Arizona, Tucson, AZ, USA. Contact<EMAIL_ADDRESS> ## Abstract The centrality of Wuhan’s Huanan market in maps of December 2019 COVID-19 case residential locations, established by Worobey et al. (2022a), has recently been challenged by Stoyan and Chiu (2024, SC2024). SC2024 proposed a statistical test based on the premise that the measure of central tendency (hereafter, “centre”) of a sample of case locations must coincide with the exact point from which local transmission began. Here we show that this premise is erroneous. SC2024 put forward two alternative centres (centroid and mode) to the centre-point which was used by Worobey et al. for some analyses, and proposed a bootstrapping method, based on their premise, to test whether a particular location is consistent with it being the point source of transmission. We show that SC2024’s concerns about the use of centre-points are inconsequential, and that use of centroids for these data is inadvisable. The mode is an appropriate, even optimal, choice as centre; however, contrary to SC2024’s results, we demonstrate that with proper implementation of their methods, the mode falls at the entrance of a parking lot at the market itself, and the $95\%$ confidence region around the mode includes the market. Thus, the market cannot be rejected as central even by SC2024’s overly stringent statistical test. Our results directly contradict SC2024’s and – together with myriad additional lines of evidence overlooked by SC2024, including crucial epidemiological information – point to the Huanan market as the early epicentre of the COVID-19 pandemic. ## 1\. Introduction While the origin of the COVID-19 pandemic remains widely debated in the media and the wider public sphere, scientific contributions have established the central role played by Wuhan’s Huanan Seafood Wholesale Market (hereafter “Huanan market”) during the early days of the COVID-19 pandemic, given data currently available (Worobey, 2021; Worobey et al., 2022a; Jiang and Wang, 2022; Crits-Christoph et al., 2023b). Although this market almost immediately became the main focus of attention because so many of the very first identified COVID-19 cases worked or spent time there, the identification of other early cases without a known exposure risk at the market (Huang et al., 2020) introduced uncertainty about the role of the Huanan market as the potential source of the outbreak: to some, these cases appeared to be strong evidence that the epidemic in Wuhan began elsewhere and was perhaps already geographically widespread in the city in December 2019 (Cohen, 2020). An important turning point in the scientific understanding of the pandemic’s origin occurred in 2021, with the release of a report from a joint World Health Organization-China COVID-19 origins study, the “WHO-convened global study of origins of SARS-CoV-2: China part” (hereafter “WHO-China report”) (World Health Organization, 2021). This report contained maps of residential locations of early COVID-19 cases in Wuhan, those whose symptoms had begun in December 2019 (hereafter, “December case locations”; there are no earlier known onset dates). 
The study report noted that the residences of these cases were concentrated in the central districts of Wuhan, where the Huanan market was also located.111“ _There was a concentration of cases, both laboratory- confirmed and clinically diagnosed, in the central districts (which include the Huanan market). The earliest cases were mostly resident in the central districts of Wuhan, but cases began to appear in all districts of Wuhan in mid-to late December 2019_ ”; World Health Organization (2021, p. 44). Starting a few months later, Holmes et al. (2021) and Worobey et al. (2022b, a) manually extracted coordinates from the maps of the WHO-China report and conducted spatial analyses. Worobey et al. (2022a) established that the early cases were geographically centred near the Huanan market to a degree that would be extremely unlikely unless the outbreak had begun there or at another location near it. Based not just on these spatial findings but also many other lines of evidence, Worobey et al. (2022a) concluded that the Huanan market was the “ _early epicenter_ ” of the COVID-19 pandemic. Of the geographic findings in Worobey et al. (2022a), the most striking and consequential involved the early cases epidemiologically unlinked to the market (i.e., those who had not been to the market and who reported no known contact with anyone who had). (Hereafter, as in Worobey et al. (2022a), we use “linked cases” and “unlinked cases” only to refer to ascertained epidemiological linkage, regardless of whether cases were geographically associated with the market.) Analysed separately from linked cases, who worked at or were otherwise knowingly connected to the market, not only was the geographical centre of the unlinked case locations revealed to be closer to the market than expected from a null population distribution, but it was also closer to the Huanan market than the centre of linked case locations (Worobey et al., 2022a). It had previously been argued that the initial requirement of a market link for case definition could have biased the early case data towards the Huanan market. While it is important to take into account potential ascertainment bias, multiple arguments indicate that it was more limited than some claimed. First, case definitions were rapidly updated and did not include a market link (World Health Organization, 2021, Annex E3). Second, a large fraction of the cases with symptom onset in December were identified retrospectively, after case definitions had been updated. Third, the very existence of large numbers of unlinked December cases – about twice as many as linked cases – directly disproves the claim that the cases with onset in December 2019 were identified via their link to the market. Fourth, the spatial structure of these early, unlinked cases provided compelling evidence for a connection between the Huanan market and the onset of the outbreak in Wuhan (Worobey et al., 2022a). Stoyan and Chiu (2024, hereafter, SC2024) contested some of the results presented by Worobey et al. (2022a, hereafter, W2022). SC2024’s criticisms, however, stem from (i) profound misunderstandings of W2022’s study and objectives, (ii) a crude technical implementation of a key function, and (iii) a very narrow scientific perspective. ## 2\. The source of an outbreak is not expected to be at the exact centre of case locations ### 2.1 A profound misunderstanding of Worobey et al. 
(2022a)’s claims While W2022 inferred, based on both geographical and other evidence, that the market was the “ _epicenter_ ” of the pandemic, they never claimed that the market was positioned exactly at the centre of the cloud of the December case locations. Even in cases where a contagious disease is initially transmitted from a point source, there are many reasons why that source will not fall exactly at the centre of case residential locations. Geographic constraints, like the arrangement of streets or unequal population densities in different directions from the source, are obvious reasons. Long distance dispersal events as people move around an urban landscape is another. There is also not a single possible case map, and therefore not a single centre of case locations: workplaces could for instance have been mapped instead of residences – there would have been many more points right at the Huanan market. The case map would also have been different if the locations of all individuals at a precise time on a precise day had been mapped. In addition, as time goes by, the distribution of cases will change, and not necessarily isotropically. Indeed, as expected as neighbourhood transmission gives way to widespread transmission across a city, in Wuhan the distribution shifted dramatically over time from the neighbourhoods around Huanan market towards the centre of the city’s high density areas, southeast of the market (see Figs. 1D–E in Worobey et al., 2022a, and our Figure 1). This illustrates that even if an outbreak is initially clustered around its point source, time can be the biggest enemy to detecting that signal. $(a)$ December 2019 cases ($n=155$) | $(b)$ Unlinked December 2019 cases only ($n=120$) ---|--- | $(c)$ January 2020 Weibo cases ($n=440$) | $(d)$ February 2020 Weibo cases ($n=233$) | Figure 1: Positions of the modes of case locations and the associated $95\%$ confidence surfaces, inferred by computing the centres of bootstrap pseudoreplicates, using Worobey et al. (2022a)’s KDE function. $(a)$ all December case locations from W2022; $(b)$ unlinked cases only; $(c)$ Weibo cases with confirmation time in January 2020; $(d)$ Weibo cases with confirmation time in February 2020. The December case locations ($(a)$–$(b)$) were compiled by W2022 from World Health Organization (2021); the Weibo data ($(c)$–$(d)$) were compiled by Xu et al. (2020). For these reasons, the rejection of SC2024’s null hypothesis of exact centrality would not actually tell us much about the role of the Huanan market as the possible source. It is therefore preposterous to exclude the market on the basis of a too-stringent definition of “centre”. ### 2.2 Impact on the choice of statistical tests An important divergence between W2022 and SC2024 relates to the question of how best to test the hypothesis that a site – identified in advance of spatial analyses by compelling non-geographic evidence – may have been the place from which the early outbreak spread. For COVID-19, the Huanan market is the preeminent such site. W2022 asked whether the centre of the December case locations was closer to the Huanan market than other possible “epicenter” locations in the city were. SC2024 asked whether December case locations were centred on the Huanan market. As detailed already, there is no reason to expect that the centre of early case locations will necessarily be exactly centred on the initial source. SC2024’s null hypothesis is therefore not appropriate, and rejecting it is not informative. 
W2022’s tests, on the other hand, compared the distribution of December case locations to chosen null distributions, and they characterized the distributions through their proximity to the Huanan market. W2022 considered two types of null distributions. The first was Wuhan’s population, using age-adjusted data from worldpop.org. This null distribution assumes that the disease has had so much time to spread that any signal of its initial source has vanished, and the distribution of cases matches the underlying distribution of the population of susceptible hosts. However, population density data for Wuhan from worldpop.org may not be as accurate as other available sources,222Daniel A. Walker, 2023-08-23, “ _The worldpop data are highly discordant with other sources regarding Wuhan population density_ ”, https://archive.is/WVSDv. and we therefore use another source in our analysis here (Peng et al., 2020b, shown in Figure S1). The second type of null distribution considered by W2022 consisted of self- reported COVID-19 cases on a social media service (Weibo), with onsets in Jan- Feb 2020 (from Peng et al., 2020a). This distribution provided a snapshot of case distributions later in the spread of the disease, but from a point when clustering of cases may still have existed. In our analysis, we use another dataset from the same original Weibo source (Xu et al., 2020), for which onset dates were available, unlike for the Weibo dataset used in W2022 (see Figure 1c; we use the January data as null distribution). In their first set of tests, W2022 first compared the median distance to Huanan of sets of locations drawn from a null distribution, to the median distance to Huanan of the December case locations. Tests presented in W2022’s main text were based on a null distribution defined using worldpop.org population density data, and as such, did not include a notion of clustering: again, these were tests of the scenario outlined above, that the outbreak had had enough time to spread such that the spatial distribution of cases matched that of the population of hosts, and therefore lacked clustering. Although this is not a hypothesis that W2022 thought was likely to be true, in part because it is refuted by genomic evidence indicating that the Wuhan outbreak likely began only in late November (Pekar et al., 2022), the idea of a hidden outbreak that long predated cases at the Huanan market has been proposed by others (e.g., Cohen, 2020). Despite the limitations of this first test conducted by W2022, the rejection of this null hypothesis thus provided an important confirmation that this was not the case. Finally, unlike the population density data, and contrary to SC2024’s assertions, the second null distribution considered by W2022, Weibo case data, did encompass case clustering and human-to-human transmission (see Worobey et al., 2022a, Table S4, “median distance”). Case clustering was also encompassed in the second, and more important, set of tests conducted by W2022. These tests compared the distance from the centre of the December case locations to the market against the distances to the Huanan market of single points drawn from a null distribution of plausible starting points of the outbreak. On Figure 2, this amounts to comparing distance $d_{DH}$ to the distribution of distances $d_{FH}$. 
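The structure of these tests can be summarised in a short sketch (Python is used here purely for illustration; the analyses referenced in this paper rely on R, see Section 3.3). The coordinates and the null pool below are synthetic placeholders; what matters is the comparison of an observed distance-to-market statistic against the same statistic computed from draws of the chosen null distribution.

```python
import numpy as np

def dist_to(points, target):
    """Euclidean distances (same planar units as the inputs) from each row of `points` to `target`."""
    return np.linalg.norm(np.asarray(points) - np.asarray(target), axis=1)

def mc_pvalue(observed_stat, null_stats):
    """One-sided Monte Carlo p-value: how often is a null draw at least as close to the market?"""
    null_stats = np.asarray(null_stats)
    return (1 + np.sum(null_stats <= observed_stat)) / (1 + len(null_stats))

# --- illustrative usage with placeholder arrays ------------------------------------------
rng = np.random.default_rng(0)
market = np.array([0.0, 0.0])                        # projected market coordinates (placeholder)
cases = rng.normal(scale=2.0, size=(155, 2))         # stand-in for the 155 December case locations
null_pool = rng.normal(scale=6.0, size=(10_000, 2))  # stand-in for a null distribution (e.g. population)

# Test 1: median case-to-market distance vs. median distances of equally sized null samples.
obs_median = np.median(dist_to(cases, market))
null_medians = [np.median(dist_to(null_pool[rng.choice(len(null_pool), 155)], market))
                for _ in range(1999)]
print("median-distance p =", mc_pvalue(obs_median, null_medians))

# Test 2: distance of the cases' centre to the market vs. distances of single null draws
# (single draws standing in for putative first-infection locations, d_FH in Figure 2).
centre = np.median(cases, axis=0)                    # centre-point; a KDE mode could be used instead
print("centre-distance p =", mc_pvalue(np.linalg.norm(centre - market), dist_to(null_pool, market)))
```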
Implicit in this test was the idea that the single point drawn in each replicate represented the first human infection in Wuhan, which then acted as the ultimate source of all subsequent infections, and that the centre of the resulting cluster of cases would have been near the position of the first infection. (It was assumed, but not explicitly modelled, that simulating a clustered outbreak around each first infection, then inferring its centre, would have produced nearly identical null distributions, just with some jitter around the location of each first infection.) For both null distributions considered by W2022 (worldpop.org population density, and Weibo cases), the centre of the December case locations was significantly closer to the Huanan market than were the putative centres of case locations drawn from each null distribution (see Worobey et al., 2022a, Table S4, “center-point distance”). As part of the current study, we reproduced this test and its results with different data sources for the two null distributions considered by W2022, and using the mode as centre (see below for a description of the mode). The mode of the December case locations was significantly closer to the Huanan market, whether we compared it to first infection locations drawn from Wuhan’s population density ($p=8.9\times 10^{-5}$; $10^{6}$ draws), or from the January 2020 Weibo cases ($p<0.0023$). We also implemented a version of the test in which we compared the distance between the market and the centre of the December case locations ($d_{DH}$ in Figure 2) against the distances between the market and the centres of sets of the same number of cases ($n=155$) drawn from a null distribution (i.e., the distribution of distances $d_{CH}$ in Figure 2). We used the mode as centre. This test is meant to provide a more apples-to-apples comparison than the previous ones described above, by comparing the positions of putative centres inferred from the null distribution of cases with the centre of the observed cases. It is additionally meant to mitigate the clustering issue – the distribution of the centres of sets of points is much more condensed than the distribution from which these points are drawn (see Figure S2). We found that the centre of the December case locations was significantly closer to the Huanan market than were the centres of randomly sampled cases, whether we used used sets of ($n=155$) locations from Wuhan’s population density as the null distribution ($p<5\times 10^{-4}$; $1999$ replicates; Figure S2a) or from the January 2020 Weibo cases ($p<5\times 10^{-4}$; $1999$ replicates; Figure S2b). Figure 2: Schematic of the different distances compared in the various tests of market closeness and centrality. While W2022 asked whether the centre of the December case locations was closer to the market than (appropriate) random locations, SC2024 asked a different question: whether the December case locations were centred on the market. SC2024’s question did not involve the identification of a null distribution; instead, SC2024 bootstrapped December case locations. They compared the distance between the Huanan market and the centre of the December case locations ($d_{DH}$ in Figure 2), against the distances between centres of bootstrapped sets of December case locations and the centre of the December case locations ($d_{DB}$ in Figure 2). 
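A sketch of this bootstrap construction, with the mode computed from a hand-written Gaussian kernel density estimate, is given below (a Python stand-in; the analyses in the present paper use R's ks::kde with a plug-in bandwidth matrix). The bandwidth matrix H is left explicit so that the choices discussed in Section 3.3 remain visible, and the case and landmark coordinates are placeholders in projected metres.

```python
import numpy as np

def kde_mode(points, H, grid_n=120):
    """Mode (argmax) of a 2-D Gaussian KDE with bandwidth matrix H, on a coarse evaluation grid."""
    pts = np.asarray(points, dtype=float)
    Hinv = np.linalg.inv(H)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    gx, gy = np.meshgrid(np.linspace(lo[0], hi[0], grid_n), np.linspace(lo[1], hi[1], grid_n))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    diffs = grid[:, None, :] - pts[None, :, :]               # (n_grid, n_pts, 2)
    quad = np.einsum('gpi,ij,gpj->gp', diffs, Hinv, diffs)   # Mahalanobis-type exponent
    dens = np.exp(-0.5 * quad).sum(axis=1)                   # unnormalised density is enough for argmax
    return grid[np.argmax(dens)]

def bootstrap_mode_test(cases, landmark, H, n_boot=1999, seed=1):
    """SC2024-style test: is `landmark` farther from the observed mode than bootstrap modes usually are?"""
    rng = np.random.default_rng(seed)
    cases = np.asarray(cases, dtype=float)
    mode_obs = kde_mode(cases, H)
    boot_d = np.empty(n_boot)
    for b in range(n_boot):
        resample = cases[rng.integers(0, len(cases), len(cases))]   # resample cases with replacement
        boot_d[b] = np.linalg.norm(kde_mode(resample, H) - mode_obs)  # plays the role of d_DB
    d_landmark = np.linalg.norm(np.asarray(landmark) - mode_obs)      # plays the role of d_DH
    p = (1 + np.sum(boot_d >= d_landmark)) / (1 + n_boot)
    return mode_obs, d_landmark, p
```

With this definition, a landmark is "rejected" as the centre at the 5% level when fewer than 5% of the bootstrap modes lie at least as far from the observed mode as the landmark does.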
Their tests (and in particular, as we will see below, their method for computing the mode) led to the rejection of the hypothesis that the market was the centre of the December case locations, because the market was just outside of the region covered by the bootstrapped centres. Again, though, the source of the outbreak is not expected to be exactly at the centre of the December case locations; SC2024’s comparison is therefore based on a false premise. We nevertheless explore SC2024’s approach to further highlight its shortcomings. ## 3\. Defining an appropriate centre of case locations and computing its location SC2024 discussed different types of centres of a cloud of points. They asserted that a centre-point, defined by W2022 as the coordinate-wise median, is “ _a questionable choice_ ”, because of its lack of rotational invariance. SC2024 proposed two alternative centres, which are rotationally invariant: the centroid (defined as the coordinate-wise mean) and the mode (defined as the location of the maximum of a corresponding spatial density function). We will discuss the relevance of the three types of proposed centres, and will show that SC2024’s rejection of the Huanan market as centre was due, specifically, to a crude implementation of a kernel density estimation (KDE). ### 3.1 The impact of the lack of rotation-invariance of the centre-point is limited A rotation-invariant centre does not depend on the orientation of the axes, which, all else being equal, seems to be a desirable property for the centre of a cloud of points – while natural, the latitude-longitude orientation is not the only orientation of axes one could use to reference locations. We tested how the position of the centre-point changes with the orientation of the axes. We find that, while the computed position of the centre-point does indeed change slightly, it remains in a circumscribed area, in the vicinity of the Huanan market (Figure S3). The centre-point may not be strictly speaking rotation-invariant, but the overall conclusions of W2022 are not affected. ### 3.2 A centroid is affected by extreme values SC2024 proposed to use a centroid (coordinate-wise means) as the first of two rotationally invariant alternatives to a centre-point. W2022 had however chosen a centre-point because a median is more robust to extreme values than a mean. The December case locations data used by W2022 contain extreme values, and are not symmetric (Figure S4), so a centre based on medians was preferred by W2022.333“ _We used medians rather than means for our analyses so as to not give undue influence to outliers like those that can be seen in fig. S8._ ”, Worobey et al. (2022a, Supplementary text). In addition, because their focus was on Wuhan, the clear epicentre city of the pandemic, W2022’s data did not include ten additional cases mentioned in the China-WHO report (World Health Organization, 2021, Fig. 23, p.45; annex Fig. 3, p.147). These cases were in seven cities in Hubei province, but outside of Wuhan, and their features (sequenced or not, type of case confirmation) were not described. To investigate the impact of minor changes to data on centroids and centre-points, we digitised the positions of these cities, and added them to the December case locations data from W2022 (Figure S5). We then repeated SC2024’s methodology and computed the new positions of the centroid and the centre-point, and their $95\%$ confidence regions using SC2024’s bootstrap methodology (Figure S6; $1999$ bootstrap resamples). 
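A minimal sketch of this robustness check (with synthetic coordinates standing in for the restricted case data) contrasts how far the centroid and the centre-point move when a few distant points are appended, and also quantifies the small rotation dependence of the centre-point discussed in Section 3.1.

```python
import numpy as np

rng = np.random.default_rng(2)
cases = rng.normal(loc=[0.0, 0.0], scale=3.0, size=(155, 2))   # stand-in for within-Wuhan cases (km)
far = np.array([[60.0, -40.0], [150.0, 10.0], [-80.0, 90.0]])  # stand-in for distant out-of-city cases

def centroid(p):       # coordinate-wise mean
    return p.mean(axis=0)

def centre_point(p):   # coordinate-wise median (W2022's choice)
    return np.median(p, axis=0)

for name, fn in [("centroid", centroid), ("centre-point", centre_point)]:
    shift = np.linalg.norm(fn(np.vstack([cases, far])) - fn(cases))
    print(f"{name:12s} moves by {shift:6.2f} km when 3 distant points are added")

# Rotation (non-)invariance of the centre-point: rotate the axes, take the median, rotate back.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
cp_rotated_frame = R.T @ np.median(cases @ R.T, axis=0)
print("centre-point shift under a 30 deg axis rotation:",
      np.linalg.norm(cp_rotated_frame - centre_point(cases)), "km")
```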
As expected since it is a median-based centre, the centre-point and its $95\%$ confidence surface are barely affected by the inclusion of the seven Hubei cases (Figure S6a–b). However, because the new cases are distant from the centre of Wuhan (about 150 km for the furthest away), their inclusion changes the position of the centroid, but also considerably widens the $95\%$ confidence surface (Figure S6c–d). While the market was outside of the confidence region when only cases within Wuhan were considered, the market is now clearly inside when the seven Hubei locations are added ($p=0.2$), and its centrality is not rejected anymore by SC2024’s (albeit dubious) test. Worobey et al. (2022a) chose centre-points to avoid just such issues with extreme values. Indeed, it is not advisable to rely upon a measure of central tendency that is, like the centroid, so sensitive to minor changes to the data. ### 3.3 Stoyan and Chiu’s exclusion of the Huanan market is due to the use of a circular kernel with too high a bandwidth The mode, i.e. the peak of the underlying spatial density distribution, was the second alternative centre proposed by SC2024. This suggestion, described as rotation-invariant (unlike the centre-point), is (like the centre-point but unlike the centroid) relatively immune to extreme values. As such, in important ways it combines the main advantages of both the centre-point and the centroid. The determination of a mode, however, requires the choice of a kernel function and, specifically, of a bandwidth for kernel density estimation. SC2024 used a circular Gaussian kernel and a large value for their bandwidth, $3000$ m, which had been considered in the preprint version of Worobey et al. (2022b), in which kernel density estimates (KDE) were computed using ArcGIS Online. The published peer-reviewed article (Worobey et al., 2022a), however, employed different software and methods than the preprint (R, function kde from the ks package (Duong, 2024)), and, importantly, automatic bandwidth selection, yielding a matrix (via the Hpi function) instead of a scalar (Figure S7 illustrates the impacts of these choices on the shape of the spatial density function). Rather than attempting to replicate the methods in the published Science paper, SC2024 emulated (partially) the Worobey et al. (2022b) preprint methodology, using a circular kernel (with the bkde2D function from the KernSmooth package (Wand, 2023), and setting the bandwidth to $h=3000$ m). Implementing SC2024’s bootstrap test but with W2022’s more sophisticated KDE function, we find that the mode lies at the entrance of a parking area at the Huanan market, and that the market is within the mode’s $95\%$ confidence region (Figure 1a; $p=0.89$; and see Figure S8 for a detailed view). The market is thus in fact not rejected as the centre of the distribution of the December case locations, even using SC2024’s preferred but inappropriately stringent hypothesis test. A similar non-rejection is obtained with Stoyan and Chiu (2024)’s KDE function when using a bandwidth automatically determined using a principled rather than arbitrary approach ($h=866$; Figure S9b; $p=0.42$). ## 4\. Stoyan and Chiu overlooked key epidemiological data ### 4.1 A limited perspective A key feature of SC2024’s analysis is its limited perspective. Indeed, the authors overlooked almost the entirety of the scientific evidence available about the early period of the pandemic, considering only the December case locations. 
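Returning briefly to the bandwidth point of Section 3.3, its effect can be made concrete by re-running the kde_mode sketch above with different bandwidth matrices. The comparison below contrasts SC2024's fixed circular 3000 m kernel with a simple data-driven (Silverman-type) rule; this is only a Python stand-in for the R functions actually used (KernSmooth::bkde2D and ks::kde with Hpi), and the coordinates are placeholders.

```python
import numpy as np
# reuses kde_mode() from the bootstrap sketch above

def silverman_H(points):
    """Diagonal Silverman-type bandwidth matrix for a 2-D Gaussian KDE: h_i = sigma_i * n**(-1/6)."""
    pts = np.asarray(points, dtype=float)
    h = pts.std(axis=0, ddof=1) * len(pts) ** (-1.0 / 6.0)
    return np.diag(h ** 2)

rng = np.random.default_rng(3)
cases = rng.normal(scale=2000.0, size=(155, 2))   # placeholder projected case coordinates, metres
market = np.array([500.0, -300.0])                # placeholder market location, metres

for label, H in [("fixed h = 3000 m", np.diag([3000.0**2, 3000.0**2])),
                 ("data-driven", silverman_H(cases))]:
    mode = kde_mode(cases, H)
    print(f"{label:16s} -> mode {mode}, distance to market {np.linalg.norm(mode - market):7.1f} m")
```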
Other pieces of evidence are however the reason that the Huanan market has, since almost the first inklings that a new disease may have emerged, been the prime suspect as the source of Wuhan’s COVID-19 outbreak. The link to the Huanan market of early patients with pneumonias of unknown aetiology cases that would turn out to be COVID-19 helped lead to the discovery of the new disease, and the market was rapidly closed down (The-nCoV Outbreak Joint Field Epidemiology Investigation Team and Li, 2020; Yang, 2024); this single market accounted for a substantial proportion of the earliest-onset COVID-19 patients in what is a very large city (Worobey, 2021); it had been identified years before the pandemic as a site where viruses with pandemic potential might jump from animals into humans (Zhang and Holmes, 2020); and it was one of only four markets in Wuhan with sustained sales of live mammals from intermediate host species known to harbour SARS-related coronavirus (Xiao et al., 2021). Additionally, although the data were not public until after Worobey et al.’s study was conceived and published, genetic traces of these animals were found in environmental samples from the market, in stalls where SARS-CoV-2 was also detected (Liu et al., 2023; Crits- Christoph et al., 2023a, b; Débarre, 2024). ### 4.2 Linked vs. unlinked cases Notably, SC2024 also overlooked the fundamental difference between the linked and the unlinked December 2019 cases as it relates to spatial analyses. Cases epidemiologically linked to the market worked there or had shopped there, and for some were clearly involved in human-to-human transmission chains within the market (Li et al., 2020). These linked cases are thus expected to be found around the Huanan market only because people often shop close to where they live, and often live not too far from where they work. Unlinked cases, on the other hand, are only expected to be found to reside near the market if local transmission originated from the market, and only at an early point in the outbreak, before the virus had spread widely. The observed spatial pattern for unlinked cases was therefore particularly noteworthy. And yet, even though information on market links was available in the case dataset used by SC2024, they did not use it in their analyses. With Stoyan and Chiu (2024)’s methodology and W2022’s KDE function, we find that analysing only unlinked cases gives the same results as considering all cases regardless of linkage: the Huanan market cannot be rejected as being at the very centre of the unlinked cases (Figure 1b; $p=0.57$). ### 4.3 Missing the tree for the forest In their paper, SC2024 proffer multiple other landmarks, in the vicinity of the Huanan market, that could have played a role in the origin of the pandemic, based on the argument that they are close to one of the centres they computed from the December case locations. It is of course true that, when considering the December case locations data in an epistemic vacuum, it is not possible to rule out alternative sources also in the vicinity of their centre. Other data, however, help assess the plausibility of the other landmarks proposed by SC2024. This is for instance the case for a complex called “Wanda plaza”.444We note that this complex, as well as other landmarks, were improperly located by SC2024. SC2024 seem to have used Google Maps to extract coordinates, but did not take into account the offset on maps of China shown on Google Maps. We present corrected locations (see Figure S10). 
We also note that the complex does not just include a shopping centre, but also residences. The social media check-in data used by SC2024 do not seem to differentiate between the different uses of the place. SC2024’s post hoc reasoning, in inferring a centre of the December case locations and then looking for a landmark near it, completely misses the fact that human-to-human transmission did happen in relation to the Huanan market (Li et al., 2020). The market did play an epidemiological role, and is of interest not just because of its position in Wuhan: it was of interest because of its role in the early days of COVID-19, before it was known that the centre of the December case locations would be revealed to be so close to it – as already described above. Landmarks are to be chosen by taking into account actual potential sources. While the market is one such potential source, Wanda Plaza has, to our knowledge, never been mentioned as such until SC2024. Highlighting the existence of other data linking cases to the Huanan market, and the absence of such evidence for other landmarks in Wuhan, was actually the point of the “No other location except the Huanan market clearly epidemiologically linked to early COVID-19 cases” paragraph in W2022’s supplementary text, that SC2024 seem to have misunderstood. SC2024 used social media check-in data to argue that Wanda Plaza was a much more visited place than the Huanan market. Putting aside the fact that Wanda Plaza is more than just a shopping centre (see footnote 4), SC2024’s argument reveals a profound misunderstanding of the use of social media check-in data by W2022. The Huanan market being much less visited than Wanda Plaza actually emphasises the extraordinary role played by the Huanan market in the early days of COVID-19. The market was not a hotspot in Wuhan in terms of visitation check-ins – it was one of hundreds of possible early sites for case clustering, and even amongst markets across the city, there were 70 with (up to hundred times) more check-ins (Worobey et al., 2022a). And yet, the Huanan market was the hotspot for December cases and in the distribution of the December case locations. A new location of the Wuhan Centre for Disease control (WCDC; see Figure S11) was also proposed as the potential source of the Wuhan outbreak. WCDC was the focus of one of the earliest speculations about a potential non-natural origin of SARS-CoV-2 (Xiao and Xiao, 2020), and was identified because of its proximity to the market (i.e., post-hoc). However, WCDC had moved to its new location close to the market on December 2nd, 2019 (World Health Organization, 2021, p.122 and Annex D5), leaving little time or no time for its BSL2 laboratory to be operational – and this laboratory did not conduct elaborate virology experiments like the Wuhan Institute of Virology. While WCDC had been involved in wildlife sampling, no mammalian samples were moved between the old and new locations (Holmes, 2024). Finally, as noted in Worobey et al. (2022b, a), only one WCDC staff member had a positive serology result (World Health Organization, 2021, Annex D5), to be contrasted with the at least $30$ Huanan market vendors who were identified as cases. Moreover, this individual was reported to have been infected by a family member and not at WCDC (World Health Organization, 2021, Annex D5). 
Finally, it is useful to zoom out and to consider the distribution of the December case locations in Wuhan relative to the locations of the campuses of the Wuhan Institute of Virology (WIV). This institute has been at the centre of speculations around a potential lab leak since early 2020. However, Figure 3 shows the clear disconnect between the December case locations and the WIV campuses. Figure 3: Zooming out. The two campuses of the Wuhan Institute of Virology (WIV) are shown. ## 5\. Discussion Using the same methodology as Stoyan and Chiu (2024, (SC2024)) on the data extracted by Worobey et al. (2022a, (W2022)), but with more appropriate parameters and implementation, we arrive at opposite conclusions, and show that Wuhan’s Huanan market cannot, after all, be rejected as the source of the COVID-19 pandemic – even with an overly-stringent test. This result is not surprising given the amount of other evidence pointing to the Huanan market. In their article, SC2024 dismissed these other pieces of information, characterising them as “ _some non-statistical argument_ ”. The question of the origin of SARS-CoV-2, however, requires the consideration of multiple lines of evidence – and these include other statistical arguments. Epidemiological links of numerous early cases to the market were identified early on (Worobey, 2021; Yang, 2024), and our study re-emphasizes the spatial proximity between the market and December case locations, including cases for which no link to the market had been identified (Worobey et al., 2022a). A second main line of evidence involves wildlife sales: animals susceptible to SARS-CoV-2 were demonstrated to have been in the Huanan market (Xiao et al., 2021; Crits-Christoph et al., 2023a, b; Liu et al., 2023; Débarre, 2024), which was the most active of only four markets in Wuhan with consistent sales of live mammals that are the most plausible intermediate hosts of SARS-CoV-2 (Xiao et al., 2021). In addition, there was a high concentration of SARS- CoV-2-positive samples in the corner of the market where most live wildlife was sold (Wu, 2020; Worobey et al., 2022a; Crits-Christoph et al., 2023b), and genetic traces of SARS-CoV-2 and animals were detected in the same stalls and in drains directly below and downstream of these stalls (Liu et al., 2023). A third line of evidence involves SARS-CoV-2’s genetic diversity. There were two early SARS-CoV-2 lineages, A and B. Sequenced market-linked cases were of lineage B, but the earliest-known lineage A cases were close to the market (Lu et al., 2020; Worobey et al., 2022a), and both lineages A and B were detected in environmental samples at the market (Liu et al., 2023). In addition, statistical analyses of the SARS-CoV-2 molecular clock strongly reject a single introduction into humans of a bat virus-like lineage A-haplotype ancestor from which lineage B evolved within humans (Pekar et al., 2022). The inconsistency between ancestor inference with molecular clock methods (rejecting lineage A) vs. with the consideration of related sarbecoviruses (favouring lineage A), led to the suggestion that there may not have been only a single introduction of the progenitor virus into humans, but multiple ones (Pekar et al., 2022) – a scenario that is fully expected at a wildlife market, but not with a lab origin. Importantly, a market origin is not contingent on multiple spillovers, and any origin scenario needs to account for the presence of early lineage A and lineage B both in and near the market. 
Finally, temporal arguments also align with a market origin: phylogenetic and epidemiological analyses suggested that the COVID-19 outbreak in Wuhan began only shortly before the earliest known cases at the market became symptomatic (Jijón et al., 2024; Pekar et al., 2022). Consistent with this, serological testing of stored blood samples, collected from September to December 2019 from more than 32,000 individuals in Wuhan, revealed that not a single one had neutralizing antibodies against SARS-CoV-2 (Chang et al., 2023), i.e., that SARS-CoV-2 was not widely circulating in Wuhan in the last quarter of 2019. And it was not until December 27th–29th 2019 that the first trickle of reports of COVID-19 cases was recognised by local health authorities as a concern (Yang, 2024). All but one of the hospitals first reporting pneumonia of unknown aetiology cases were very near the Huanan market (Yang, 2024).

We agree with SC2024 that there are of course many ways that the spatial pattern of cases in an emerging epidemic of an infectious respiratory disease could have deviated from the Huanan-centred pattern observed here. Indeed, data of individuals self-reporting their COVID-19 infection on social media (Weibo) to get help, mostly in January and February 2020, indicate that by that time cases had spread widely across the city, with locations of these cases largely recapitulating patterns of population density, particularly for older people (Peng et al., 2020a, and our Figures 1 and S1). By then, the pattern placing the Huanan market at the centre of the earlier, December 2019-onset cases had been obscured by subsequent spread (Worobey et al., 2022a).

Even in the case of non-transmissible diseases, a source may not be located at the centre of a cloud of case locations when environmental factors come into play. For instance, the anthrax cases near the Sverdlovsk military facility in Spring 1979 were to the south-east of the building, reflecting the direction of the wind (Meselson et al., 1994). Conversely, in the classic case of ‘John Snow and the Broad Street pump’, which established the field of spatial epidemiology, the Broad Street pump was a well-identified source during the 1854 London cholera outbreak, with a quite clearly central position relative to case fatality locations (Snow, 1855; Falcone et al., 2020). As with live wildlife markets and COVID-19, water pumps were pre-identified by John Snow as potential point sources of water-borne disease. Snow did not go for a pump, of all possible features in the landscape of the city, at random; he focused on pumps because he had other reasons to suspect a role of water in spreading cholera. But even then, inappropriate criteria of the sort employed by SC2024 would exclude the pump as the source. Indeed, we applied SC2024’s methodology and their KDE function to the Snow data; the pump was outside of the $95\%$ confidence surfaces (see Figure 4a) and thus rejected as the centre of that cholera outbreak. Yet we all understand that the source of the cholera outbreak was not a cobblestone near the pump, nor a building across the street from the pump, but the contaminated water flowing from the pump itself. When we instead used W2022’s KDE on the Snow data, the pump was not rejected as central (Figure 4b).
$(a)$ Mode, SC2024’s KDE, $h=500$; $(b)$ Mode, W2022’s KDE

Figure 4: Identification (or not) of the Broad Street pump using John Snow’s data for the 1854 London cholera outbreak, using Stoyan and Chiu (2024)’s bootstrap method (the background map shows modern London). The panels show results for different centres and KDE functions. $p$ values: $(a)$ Mode with SC2024’s KDE and $h=500$, $p=0.001$; $(b)$ Mode with W2022’s KDE, $p=0.12$.

Real-life data are always finite. However, while data on the early days of COVID-19 could have been shared more quickly and in more accessible formats, and while some data are known to exist but have not been shared yet (Holmes, 2024), the amount of information available is unprecedented for the emergence of a new disease. Data from various sources and of various types all indicate that wildlife sales in the Huanan market played a key role in the onset of the COVID-19 pandemic. In this way, the Huanan market hypothesis is fundamentally distinct from lab leak hypotheses, which were important to consider (Bloom et al., 2021), but which encompass many unsupported, highly unlikely, and mutually exclusive scenarios. (For example, if the virus escaped from the Wuhan CDC, then it did not escape from the Wuhan Institute of Virology; if it came from a mine in southern China then it did not come from Laos; if this was a fieldwork accident then it was not an engineered virus, etc.) The Huanan market hypothesis, in contrast, has given rise to multiple predictions that were later confirmed as additional evidence arose. These include (i) that the SARS-CoV-2 lineage A, found to be geographically associated with the Huanan market (Lu et al., 2020; Worobey, 2021; Worobey et al., 2022a), would eventually be found to have been at the market (Liu et al., 2023), and (ii) that cases unlinked epidemiologically to the market would be found to reside around the market (and see Débarre, 2024, for other examples of such predictions).

Contrary to what SC2024 claimed to have rebutted (their article’s title reads “_Statistics did not prove that the Huanan Seafood Wholesale Market was the early epicentre of the COVID-19 pandemic_”), Worobey et al. (2022a)’s study did not claim to “prove” that the pandemic started at the market. Statistics do not offer “proof”, but rather a means to think clearly about how probable different hypotheses are. Our results show that it is highly probable that community transmission in Wuhan began in the neighbourhoods directly surrounding the Huanan market. Other evidence shows that the Huanan market was deeply unlikely to be at the centre of the Wuhan outbreak by pure chance (i.e., if the Wuhan outbreak had not begun there; Worobey et al., 2022a).

We should not lose sight of how remarkable it is that we have such unique insights into the origin of this pandemic, and we would do well to be mindful of the lesson all this evidence provides: that to prevent SARS-CoV-3 from emerging, we must reduce opportunities for pathogens with pandemic potential to be brought into the heart of large human populations via the live animals that harbour them.

## Acknowledgements

We thank Alex Crits-Christoph, Zach Hensel, Peter Jacobs, Josh Levy, Lorena Malpica, and Marc Suchard for discussions and/or comments. This project has been funded in part with federal funds from the National Institute of Allergy and Infectious Diseases, National Institutes of Health (NIH), Department of Health and Human Services (contract no. 75N93021C00015 to M.W.).

## Competing interests
M.W. has received consulting fees from GLG and compensation for expert testimony from Endurance Specialty Insurance.

## Data and methods

We based our analyses on Stoyan and Chiu (2024)’s R script (provided as supplementary information) and on Worobey et al. (2022a)’s data and code, shared on Zenodo (https://zenodo.org/records/6908012; scripts/construct_KDE_contours.R). The data we used and our R (R Core Team, 2023) scripts are available on Zenodo at https://doi.org/10.5281/zenodo.10779463. The maps were drawn thanks to the R leaflet package (Cheng et al., 2023).

### Data sources

Like Stoyan and Chiu (2024), we used case location data shared by Worobey et al. (2022a), which were manually extracted from figures in the China-WHO Joint mission report (World Health Organization, 2021) (see Worobey et al. (2022b, a) for the extraction method). Following a similar methodology, we manually extracted the seven additional locations shown in annex Fig. 3 of the China-WHO Joint mission report (World Health Organization, 2021) (there are ten additional cases, but seven locations, and we do not know which location(s) had multiple cases; we therefore only placed seven additional points). Because some maps in the WHO-China report were too blurry, the linked or unlinked status of some cases is uncertain; the number of identified linked December case locations in W2022 – $35$ out of the $155$ cases recovered from a map that contained $164$ cases within Wuhan (World Health Organization, 2021, Annex E4) – is lower than the reported number of linked cases in the report – $55$ out of $168$, including one or more Huanan-linked cases outside of Wuhan; the $168$ number excludes $6$ cases, out of the total of $174$ December 2019 cases, whose exposure history was unknown (World Health Organization, 2021, Annex E4). W2022 tested the robustness of their results to this (and other) sources of uncertainty (Worobey et al., 2022a, Supplementary information). The Snow data were from the dataset provided by Falcone et al. (2020). The Weibo case data that we used were compiled by Xu et al. (2020). Population density data were manually digitised from Peng et al. (2020b, Fig. 10) (this imperfect solution was chosen for lack of access to the underlying data). We first increased the contrast between consecutive colours using GIMP, and then extracted the positions of squares with WebPlotDigitizer (Rohatgi, 2024). We georeferenced the figure using the positions of two landmarks (tips of islands). We programmatically re-aligned all positions.

### Landmarks

The positions of Stoyan and Chiu (2024)’s additional landmarks were clearly erroneous (as revealed by a comparison of their Figures 1 and 2), so we repositioned these landmarks using Baidu Maps and OpenStreetMap.

### Kernel Density Estimation (KDE)

Stoyan and Chiu (2024) used the bkde2D function from the KernSmooth package for 2-dimensional kernel smoothing (Wand, 2023), with bandwidth $h=3000$ m and a circular Gaussian kernel, i.e. the same bandwidth value in the horizontal and vertical directions. Given that the choice of bandwidth can be somewhat arbitrary, we used the bw.diggle function for automatic bandwidth selection (spatstat package; Baddeley and Turner, 2005), and obtained a value of $866$ m (i.e., yielding less smoothing). Worobey et al. (2022a), in the peer-reviewed version of their work, used a different KDE function: the kde function from the ks package (Duong, 2024). As W2022 did, we automatically selected a bandwidth matrix using the package’s Hpi function.
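To make the two KDE variants and the rank-based bootstrap test concrete, the sketch below illustrates them in R. It is a minimal illustration, not our analysis script: the objects `cases` and `landmark` are hypothetical synthetic stand-ins for the real coordinates, and the helper functions (`mode_bkde`, `mode_ks`, `bootstrap_p`) are named here only for the example; the bootstrap step follows the procedure described in the “Bootstrap analyses” subsection below.

```r
# Minimal sketch (not the actual analysis scripts): the mode of a 2-D KDE with
# the two estimators discussed above, and the rank-based bootstrap p value
# described in the "Bootstrap analyses" subsection below.
library(KernSmooth)  # bkde2D, as used by SC2024
library(ks)          # kde and Hpi, as used by W2022

set.seed(1)
# Hypothetical projected case coordinates (metres), standing in for the real data.
cases    <- cbind(x = rnorm(155, 0, 2000), y = rnorm(155, 0, 1500))
landmark <- c(300, -200)  # hypothetical landmark position

# Mode of an SC2024-style KDE: circular Gaussian kernel with fixed bandwidth h.
mode_bkde <- function(xy, h, gridsize = c(1001, 1001)) {
  est <- bkde2D(xy, bandwidth = c(h, h), gridsize = gridsize)
  idx <- which(est$fhat == max(est$fhat), arr.ind = TRUE)[1, ]
  c(est$x1[idx[1]], est$x2[idx[2]])
}

# Mode of a W2022-style KDE: plug-in bandwidth matrix H from Hpi.
mode_ks <- function(xy) {
  fhat <- kde(x = xy, H = Hpi(xy))
  idx  <- which(fhat$estimate == max(fhat$estimate), arr.ind = TRUE)[1, ]
  c(fhat$eval.points[[1]][idx[1]], fhat$eval.points[[2]][idx[2]])
}

# Rank-based bootstrap p value: resample the cases with replacement, recompute
# the centre, and compare the landmark's distance from the original centre with
# the distances of the resampled centres from the original centre.
bootstrap_p <- function(xy, landmark, centre_fun, B = 1999) {
  c0     <- centre_fun(xy)
  d_boot <- replicate(B, {
    cb <- centre_fun(xy[sample(nrow(xy), replace = TRUE), ])
    sqrt(sum((cb - c0)^2))
  })
  d_lm <- sqrt(sum((landmark - c0)^2))
  (sum(d_boot >= d_lm) + 1) / (B + 1)  # one plausible reading of the relative rank
}

# Example (slow with B = 1999; reduce B to try it quickly):
# bootstrap_p(cases, landmark, mode_ks, B = 199)
```

With SC2024’s parameters one would call `mode_bkde(cases, 3000)`; the automatic bandwidth selection via bw.diggle (spatstat) mentioned above is omitted from this sketch for brevity.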
The matrix $H$ affects the amount and the orientation of smoothing. This matrix is not necessarily (and is not, in our case) diagonal, i.e., it is not necessarily aligned along the coordinate axes.

### Bootstrap analyses

When implementing Stoyan and Chiu (2024)’s test, we estimate $p$ values using resampled datasets of December case locations. We draw the same number of data points as in the original datasets, with replacement, $1999$ times. The $p$ value is given by the relative rank of the distance between the original centre and the landmark of interest, compared to the distances between the original centre and the centres of the resampled datasets.

## References

* Baddeley and Turner (2005) Baddeley A, Turner R. 2005. spatstat: An R package for analyzing spatial point patterns. Journal of Statistical Software. 12:1–42.
* Bloom et al. (2021) Bloom JD, Chan YA, Baric RS, Bjorkman PJ, Cobey S, Deverman BE, Fisman DN, Gupta R, Iwasaki A, Lipsitch M et al. 2021. Investigate the origins of COVID-19. Science. 372:694–694.
* Chang et al. (2023) Chang L, Zhao L, Xiao Y, Xu T, Chen L, Cai Y, Dong X, Wang C, Xiao X, Ren L et al. 2023. Serosurvey for SARS-CoV-2 among blood donors in Wuhan, China from September to December 2019. Protein & Cell. 14:28–36.
* Cheng et al. (2023) Cheng J, Schloerke B, Karambelkar B, Xie Y. 2023. leaflet: Create Interactive Web Maps with the JavaScript ’Leaflet’ Library. R package version 2.2.1.
* Cohen (2020) Cohen J. 2020. Wuhan seafood market may not be source of novel virus spreading globally. Science. https://www.science.org/content/article/wuhan-seafood-market-may-not-be-source-novel-virus-spreading-globally.
* Crits-Christoph et al. (2023a) Crits-Christoph A, Gangavarapu K, Pekar JE, Moshiri N, Singh R, Levy JI, Goldstein SA, Suchard MA, Popescu S, Robertson DL et al. 2023a. Genetic evidence of susceptible wildlife in SARS-CoV-2 positive samples at the Huanan Wholesale Seafood Market, Wuhan: Analysis and interpretation of data released by the Chinese Center for Disease Control. Technical report. Zenodo. https://zenodo.org/record/7754299.
* Crits-Christoph et al. (2023b) Crits-Christoph A, Levy JI, Pekar JE, Goldstein SA, Singh R, Hensel Z, Gangavarapu K, Rogers MB, Moshiri N, Garry RF et al. 2023b. Genetic tracing of market wildlife and viruses at the epicenter of the COVID-19 pandemic. Preprint. Genomics. https://www.biorxiv.org/content/10.1101/2023.09.13.557637v1.full.
* Débarre (2024) Débarre F. 2024. What we can and cannot learn from SARS-CoV-2 and animals in metagenomic samples from the Huanan market. Virus Evolution. 10:vead077. https://academic.oup.com/ve/article/10/1/vead077/7503693.
* Duong (2024) Duong T. 2024. ks: Kernel Smoothing. R package version 1.14.2.
* Falcone et al. (2020) Falcone M, Koschinsky J, Vinten-Johansen P, Coleman T, Anselin L. 2020. John Snow & the Cholera Epidemic in Mid-19th Century London: 8 Datasets With Documentation for Use in GeoDa. https://geodacenter.github.io/data-and-lab/data/snow_documentation.pdf.
* Holmes (2024) Holmes EC. 2024. The emergence and evolution of SARS-CoV-2. Annual Review of Virology. In press.
* Holmes et al. (2021) Holmes EC, Goldstein SA, Rasmussen AL, Robertson DL, Crits-Christoph A, Wertheim JO, Anthony SJ, Barclay WS, Boni MF, Doherty PC et al. 2021. The origins of SARS-CoV-2: A critical review. Cell. 184:4848–4856. https://www.cell.com/cell/fulltext/S0092-8674(21)00991-0.
* Huang et al. (2020) Huang C, Wang Y, Li X, Ren L, Zhao J, Hu Y, Zhang L, Fan G, Xu J, Gu X et al. 2020. Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. The Lancet. 395:497–506. https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(20)30183-5/fulltext.
* Jiang and Wang (2022) Jiang X, Wang R. 2022. Wildlife trade is likely the source of SARS-CoV-2. Science. 377:925–926. https://www.science.org/doi/abs/10.1126/science.add8384.
* Jijón et al. (2024) Jijón S, Czuppon P, Blanquart F, Débarre F. 2024. Using early detection data to estimate the date of emergence of an epidemic outbreak. PLOS Computational Biology. 20:1–19. https://doi.org/10.1371/journal.pcbi.1011934.
* Li et al. (2020) Li Q, Guan X, Wu P, Wang X, Zhou L, Tong Y, Ren R, Leung KS, Lau EH, Wong JY et al. 2020. Early Transmission Dynamics in Wuhan, China, of Novel Coronavirus–Infected Pneumonia. New England Journal of Medicine. 382:1199–1207. https://www.nejm.org/doi/10.1056/nejmoa2001316.
* Liu et al. (2023) Liu WJ, Liu P, Lei W, Jia Z, He X, Shi W, Tan Y, Zou S, Wong G, Wang J et al. 2023. Surveillance of SARS-CoV-2 at the Huanan Seafood Market. Nature. https://www.nature.com/articles/s41586-023-06043-2.
* Lu et al. (2020) Lu R, Zhao X, Li J, Niu P, Yang B, Wu H, Wang W, Song H, Huang B, Zhu N et al. 2020. Genomic characterisation and epidemiology of 2019 novel coronavirus: implications for virus origins and receptor binding. The Lancet. 395:565–574.
* Meselson et al. (1994) Meselson M, Guillemin J, Hugh-Jones M, Langmuir A, Popova I, Shelokov A, Yampolskaya O. 1994. The Sverdlovsk Anthrax Outbreak of 1979. Science. 266:1202–1208. https://www.science.org/doi/10.1126/science.7973702.
* Pekar et al. (2022) Pekar JE, Magee A, Parker E, Moshiri N, Izhikevich K, Havens JL, Gangavarapu K, Malpica Serrano LM, Crits-Christoph A, Matteson NL et al. 2022. The molecular epidemiology of multiple zoonotic origins of SARS-CoV-2. Science. 377:960–966. https://www.science.org/doi/10.1126/science.abp8337.
* Peng et al. (2020a) Peng Z, Wang R, Liu L, Wu H. 2020a. Exploring Urban Spatial Features of COVID-19 Transmission in Wuhan Based on Social Media Data. ISPRS International Journal of Geo-Information. 9:402. https://www.mdpi.com/2220-9964/9/6/402.
* Peng et al. (2020b) Peng Z, Wang R, Liu L, Wu H. 2020b. Fine-Scale Dasymetric Population Mapping with Mobile Phone and Building Use Data Based on Grid Voronoi Method. ISPRS International Journal of Geo-Information. 9:344. https://www.mdpi.com/2220-9964/9/6/344.
* R Core Team (2023) R Core Team. 2023. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing. Vienna, Austria.
* Rohatgi (2024) Rohatgi A. 2024. WebPlotDigitizer: Version 4.7.
* Snow (1855) Snow J. 1855. On the Mode of Communication of Cholera. John Churchill. London. Second edition. https://archive.org/stream/b28985266#page/n3/mode/2up.
* Stoyan and Chiu (2024) Stoyan D, Chiu SN. 2024. Statistics did not prove that the Huanan Seafood Wholesale Market was the early epicentre of the COVID-19 pandemic. Journal of the Royal Statistical Society Series A: Statistics in Society. qnad139. https://doi.org/10.1093/jrsssa/qnad139.
* The-nCoV Outbreak Joint Field Epidemiology Investigation Team and Li (2020) The-nCoV Outbreak Joint Field Epidemiology Investigation Team, Li Q. 2020. An outbreak of NCIP (2019-nCoV) infection in China–Wuhan, Hubei province, 2019–2020. China CDC Weekly. 2:79.
* Wand (2023) Wand M. 2023. KernSmooth: Functions for Kernel Smoothing Supporting Wand & Jones (1995). R package version 2.23-22.
* World Health Organization (2021) World Health Organization. 2021. WHO-convened Global Study of Origins of SARS-CoV-2: China Part: Joint WHO-China Study, 14 January–10 February 2021: Joint Report. WHO. https://www.who.int/publications/i/item/who-convened-global-study-of-origins-of-sars-cov-2-china-part.
* Worobey (2021) Worobey M. 2021. Dissecting the early COVID-19 cases in Wuhan. Science. 374:1202–1204. https://www.science.org/doi/10.1126/science.abm4454.
* Worobey et al. (2022a) Worobey M, Levy JI, Serrano LM, Crits-Christoph A, Pekar JE, Goldstein SA, Rasmussen AL, Kraemer MUG, Newman C, Koopmans MPG et al. 2022a. The Huanan Seafood Wholesale Market in Wuhan was the early epicenter of the COVID-19 pandemic. Science. 377:951–959. https://www.science.org/doi/10.1126/science.abp8715.
* Worobey et al. (2022b) Worobey M, Levy JI, Serrano LMM, Crits-Christoph A, Pekar JE, Goldstein SA, Rasmussen AL, Kraemer MUG, Newman C, Koopmans MPG et al. 2022b. The Huanan market was the epicenter of SARS-CoV-2 emergence. https://zenodo.org/record/6299600.
* Wu (2020) Wu G. 2020. Report from the Chinese Center for Disease Control and Prevention’s Office of Virus Control and Prevention, Regarding the Results of Environmental Sample Testing for the Novel Coronavirus Epidemic in Wuhan. Technical Report 53. Chinese Center for Disease Control and Prevention. https://archive.ph/xPBFD.
* Xiao and Xiao (2020) Xiao B, Xiao L. 2020. The possible origins of 2019-nCoV coronavirus. ResearchGate. 6.
* Xiao et al. (2021) Xiao X, Newman C, Buesching CD, Macdonald DW, Zhou ZM. 2021. Animal sales from Wuhan wet markets immediately prior to the COVID-19 pandemic. Scientific Reports. 11:1–7. https://www.nature.com/articles/s41598-021-91470-2.
* Xu et al. (2020) Xu Q, Shen Z, Shah N, Cuomo R, Cai M, Brown M, Li J, Mackey T et al. 2020. Characterizing Weibo social media posts from Wuhan, China during the early stages of the COVID-19 pandemic: qualitative content analysis. JMIR Public Health and Surveillance. 6:e24125.
* Yang (2024) Yang DL. 2024. Wuhan: How the COVID-19 Outbreak in Wuhan, China Spiralled out of Control. Oxford University Press. New York, NY.
* Zhang and Holmes (2020) Zhang YZ, Holmes EC. 2020. A Genomic Perspective on the Origin and Emergence of SARS-CoV-2. Cell. 181:223–227. https://www.cell.com/cell/fulltext/S0092-8674(20)30328-7.

## Appendix

Figure S1: Heatmap of population density in Wuhan, manually digitised from Peng et al. (2020b)’s Figure 10.

$(a)$ With population density data; $(b)$ with January Weibo cases

Figure S2: Positions of the modes of sets of $155$ locations, drawn from $(a)$ Wuhan’s population density or $(b)$ from January 2020 Weibo cases; $1999$ resampled sets.

Figure S3: Positions of the centre-point of December case locations when the coordinate axes are rotated. The centre-point with regular latitude-longitude axes is the centre of rotation, and we tested rotations of $101$ evenly spaced angles between $0$ and $2\pi$; the resulting centre-points are shown in lighter colour.

$(a)$ Distribution of longitudes of the December case locations; $(b)$ distribution of latitudes of the December case locations

Figure S4: Distribution of longitudes and latitudes of the December case locations in the dataset used by Worobey et al. (2022a). The vertical lines correspond to the means (dashed lines) and medians (full lines) of the distributions.

Figure S5: All case residential locations, including seven Hubei case locations outside of Wuhan (the city contour is delimited in black).
$(a)$ Centre-point, cases inside Wuhan; $(b)$ centre-point, adding the seven Hubei locations; $(c)$ centroid, cases inside Wuhan; $(d)$ centroid, adding the seven Hubei locations

Figure S6: Effect of the addition of new cases on the position of the centre-point and centroid and on their $95\%$ confidence surfaces. The full distribution of cases is shown in Figure S5. The contour delimitates the $95\%$ closest bootstrapped centre locations. $p$ values: $(a)$ centre-point, cases inside Wuhan, $p=0.0085$; $(b)$ centre-point, with Hubei cases, $p=0.012$; $(c)$ centroid, cases inside Wuhan, $p=0.022$; $(d)$ centroid, with Hubei cases, $p=0.2$; $1999$ resamples of the data.

$(a)$ SC2024’s KDE, $h=3000$ m; $(b)$ W2022’s KDE, bandwidth matrix $H$; $(c)$ SC2024’s KDE, $h=866$ m

Figure S7: The impacts of KDE functions and bandwidth choice ($h$) on kernel density estimation, Wuhan data. SC2024 used $h=3000$ but a coarser grid, $501\times 501$, than ours ($1001\times 1001$), which is why the mode is not at the exact same position (but is still very similar; compare the positions of the modes in their Figure 3a to our Figure S7a). In $(b)$, the bandwidth matrix $H$ is automatically determined by the Hpi function; in $(c)$ the bandwidth value $h$ is automatically determined by the bw.diggle function.

$(a)$ Satellite view; $(b)$ Close-up

Figure S8: Zoom on the position of the mode, computed with W2022’s KDE function, relative to the Huanan market. In panel $(a)$, the red marker between the two sides of the market is the position used throughout our study. Panel $(b)$ further zooms in on the position of the mode (green star in $(a)$); the snapshot is from a picture taken in December 2021, uploaded by user “logan logan” on Google Maps, and available as a dynamic image at https://maps.app.goo.gl/q5CmZjrTS3bkSGqW8.

$(a)$ SC2024’s KDE, $h=3000$; $(b)$ SC2024’s KDE, $h=866$

Figure S9: Position of the mode and the associated $95\%$ confidence surface, inferred by computing the centres of bootstrap pseudoreplicates, as a function of the bandwidth $h$, using SC2024’s KDE function implementation in R. Panel $(a)$: $h=3000$, which is the bandwidth value used by SC2024 ($p=0.031$). Panel $(b)$: $h=866$; the bandwidth value was chosen automatically via the bw.diggle function from the spatstat package (Baddeley and Turner, 2005) ($p=0.42$). The $95\%$ confidence surfaces were determined after $1999$ bootstrap resamples in each panel. We used a finer discretisation grid than SC2024 ($1001\times 1001$ instead of $501\times 501$).

Figure S10: Positions of landmarks in Wuhan, using latitude and longitude coordinates shared by SC2024 (“SC” on the figure) as supplementary information, and showing the corrected positions. As an example, the Hankou railway station is in reality to the North-West of the Huanan market (as shown on SC2024’s Figure 2 map). It appears to the South-West of the market on their Figure 1 and here, because their coordinates did not take into account the offset on China maps. The position of Wuhan CDC in the figure is different from SC2024’s Figure 1b, because we used their latitude and longitude coordinates, which are not consistent with the UTM coordinates in SC2024’s data (UTM coordinates for Wuhan CDC seem to have been manually changed in SC2024’s data).

Figure S11: Equivalent of Figure 1a, showing additional landmarks (with locations corrected compared to SC2024; see Figure S10).
# Spatially correlated rotational dynamics reveals strain dependence in amorphous particle packings

Dong Wang Department of Physics & Center for Non-linear and Complex Systems, Duke University, Durham, North Carolina 27708, USA; Department of Mechanical Engineering & Materials Science, Yale University, New Haven, Connecticut 06520, USA Nima Nejadsadeghi Mechanical Engineering, University of Kansas, 1530 W. 15th St. Lawrence, KS 66045-7609, USA Yan Li Computer Science & Engineering, University of Minnesota, Minneapolis, MN, USA Shashi Shekhar <EMAIL_ADDRESS> Computer Science & Engineering, University of Minnesota, Minneapolis, MN, USA Anil Misra <EMAIL_ADDRESS> Civil, Environmental & Architectural Engineering, University of Kansas, 1530 W. 15th St. Lawrence, KS 66045-7609, USA Joshua A. Dijksman <EMAIL_ADDRESS> Physical Chemistry and Soft Matter, Wageningen University & Research, Stippeneng 4, 6708 WE Wageningen, The Netherlands

###### Abstract

Microstructural dynamics in amorphous particle packings is commonly probed by quantifying particle displacements. While rigidity in particle packings emerges when the displacement of particles is hindered, it is not obvious how the typically disordered displacement metrics connect to mechanical response. Particle rotations, in contrast, are much less sensitive to confinement effects, while still being sensitive to the mechanics of the packing. So far, little attention has been paid to connecting microscopic rotational motion to the mechanics of athermal amorphous packings. We demonstrate through experimental measurements that particle packing mechanics can be directly linked to the rotational motion of even round particles in a sheared packing. Our results show that the diffusive nature of the rotational dynamics is highly strain sensitive. Additionally, there is substantial spatial correlation in the rotation dynamics that is a function of the particle friction and packing density. Analysis of our measurements reveals that particle rotation dynamics plays an essential role in amorphous material mechanics.

## I Introduction

Amorphous packings of particles occur in many contexts, ranging from glassy polymers to colloidal gels and geological sediments. These materials are well known to have complex deformation behavior. For example, their mechanical response is often strain history dependent [1, 2, 3, 4]. The amorphous nature of the microstructure of these systems makes it notoriously challenging to understand the origin of such strain history dependence and of their mechanical behavior in general [5]. Notably, these “granular” material systems are characterized by a length scale proportional to the particle size, which makes their theoretical description using classical continuum physics concepts particularly difficult. For accurate descriptions of amorphous packing mechanics, the traditional views of continuum mechanics need updating from a microscopic point of view. One route towards a more general continuum description considers material point rotations and has its origin in the work of the Cosserat brothers [6]. Indeed, it has long been recognized that the rotational motion of particles in thermally driven amorphous packings can be linked to slowdown effects and glassy dynamics [7, 8, 9]. Nevertheless, much progress is needed to include (particle) rotational degrees of freedom in continuum mechanics approaches [10, 11].
Here we show experimentally that even in a completely athermal amorphous packing, rotational degrees of freedom are directly coupled to the mechanical response, both at the particle level and via mesoscopic spatial anticorrelations in the rotation field. Our data suggest that particle rotations are an essential yet overlooked kinematical quantity in the study of dense amorphous packings. In addition, the spatial autocorrelation analysis of rotations can reveal essential features of a large variety of materials with intrinsic length scales.

To study the role of rotational degrees of freedom, one must decouple rotation from translation, which is challenging. In many circumstances the involved molecules, colloids or grains are not spherically symmetric, hence their rotation also requires _spatial displacement_ of their neighbors, particularly for high-density, jammed granular materials. To probe the role of _only_ the rotational degrees of freedom in the strain dependence of amorphous packings, athermal round-particle systems are an optimal prototypical choice. Such particles can be designed to experience contact friction, which directly couples rotational degrees of freedom to displacements. In an athermal packing of frictional disks, shear is thus directly coupled to rotations even for circular particles, without necessarily requiring particle displacements. While the rotational dynamics of spherical particles in athermal packings has been probed via wave-propagation measurements [12], particle-level experimental evidence that links rotational degrees of freedom directly to mechanics in amorphous packings has so far not been obtained.

The unique set of experimentally measured data analyzed in this work shows that the particles’ micro-rotation dynamics are linked to both the packing density and the particle surface characteristics (or “friction”), which directly mediate the tangential motion of contacting circular grain pairs. We will see that particle rotations display strain-induced diffusive behavior even at very small strain amplitudes. The diffusive behavior changes with particle packing density and friction coefficient in a manner consistent with previously observed packing properties [13, 14, 15]. Additionally, particle rotations display non-local correlations as revealed by spatial autocorrelation measures [16], indicating that the non-local mechanical effects well known to exist in sheared glassy granular media and amorphous materials in general [17, 18, 19] can be mediated by rotational dynamics. Our results show that rotational degrees of freedom are a crucial element to be considered in the quest to understand the flow behavior of amorphous materials.

Granular material systems exhibit many non-standard physical phenomena, such as negative group velocity [20, 21], frequency band gaps [22, 23, 24, 25], chirality [26] and load-path dependency. The latter is the key physical phenomenon that underlies our results: collections of discrete athermal particles have the remarkable property that they can become _rigid_ when assembled into certain arrangements. Referred to as granular media, these loose particle packings can resist shear or compression when a sufficient number of particles is present per unit volume, a feature commonly called _jamming_. Even when not spatially confined, a packing of particles can enhance its rigidity when the assembly is subjected to shear strain. This shear-deformation-induced rigidification is known as _shear jamming_.
The peculiar strain dependence of granular media is reflected in the correspondence between rotational particle motion and collective packing mechanics. Indeed, some work has already hinted at the relevance of rotational degrees of freedom of particles in loose particle assemblies for both slow [27, 10, 28, 29] and fast granular flows [30, 31, 32], and recently also for the statistical mechanics of sheared packings [33, 34].

## II Rotation Kinematics

Figure 1: (a) Examples of the UV image taken to track particle orientation. Blue bars show actual UV marks and white lines indicate tracked orientation. Red circles mark edges of disks. (b-d) Examples of photoelastic particles used for different inter-particle friction coefficients $\mu_{l,m,h}$, respectively. (e) The microrotation of grains induced by the first shear step; particle locations shown in their initial configuration. Results obtained for $\gamma=0.27$, $\mu_{m}$ with packing fraction $\phi$=0.816. (f) Evolution of the mean $\langle\theta_{i}^{m}\rangle$ and standard deviation $\sigma_{m}$ of the microrotation, and the rigid body rotation $\theta^{R}$, for five shear tests at a given density. $\theta^{R}$ grows linearly at a rate of 0.0013 rad per frame as expected from the imposed strain.

Packings of disk-shaped particles at different packing fractions $\phi$ were subjected to quasi-static stepwise shear to observe the diffusive rotational dynamics of the particles. The packings were imaged after each strain step. For details of the experiments, see the Methods section. In our experiments we have used quasi-two-dimensional packings, as three-dimensional shear experiments have a propensity for the formation of finite shear localization bands into which the particle rotations typically concentrate [35, 36, 37, 29]. In contrast, our two-dimensional geometry with an articulated base allows us to suppress shear localization entirely [13, 14]. A fluorescent bar placed on the particles allows us to track the absolute orientation of every particle in the packings; see Fig. 1a. We use three different particle friction coefficients and refer to these as $\mu_{l}$, $\mu_{m}$ and $\mu_{h}$; examples of these particle types are shown in Fig. 1b-d. We probe the dynamics of orientations obtained from image analysis of each frame. An example rotation field from a single shear step is shown in Fig. 1e for $\mu_{m}$ particles at packing fraction $\phi=0.816$. While the grain displacements tend to follow the imposed macro-scale deformation field, there exist fluctuations about the imposed linear macro-scale deformation field in the grain centroid displacements; these follow a complex process dictated not only by the nearest or contacting grains, but also by their neighbors and, by extension, an increasingly larger neighborhood of every grain [38, 39, 15].

Figure 2: Standard deviation of rotations as a function of imposed shear strain for three different packing fractions for the (a) $\mu_{l}$, (b) $\mu_{m}$, and (c) $\mu_{h}$ particles. All experiments are repeated five times. (d) and (e) show, respectively, the variation of the parameters $D$ and $n$ as a function of packing fraction for the sets $\mu_{l,m,h}$. The highlighted data in yellow are the data used in panels a-c and Fig. 3.

Similarly, the grain rotation observed in Fig. 1e can be decomposed into two parts. One part of the rotation of each grain is a result of the imposed affine macro-scale deformation field, which contributes an overall rigid body rotation.
The second rotation contribution is due to micro-scale phenomena such as individual grain spin, which we call _microrotation_. Denoting the rotation of grain $i$ by $\theta_{i}$ and the macro-scale rigid body rotation by $\theta^{R}$, the microrotation of grain $i$, $\theta_{i}^{m}$, is obtained as $\theta_{i}^{m}=\theta_{i}-\theta^{R}$. The mean of the microrotations $\langle\theta_{i}^{m}\rangle$ and their standard deviation $\sigma_{m}$ (with $\langle(\theta_{i}^{m})^{2}\rangle=\sigma_{m}^{2}$, since the mean is close to zero) change as a function of strain, as observed in Fig. 1f for all five repeats done for $\mu_{m}$ at $\phi=0.816$. The initial frame is taken as the reference configuration to obtain the evolution of the grain-spin measures as a function of imposed strain. The rigid body rotation $\theta^{R}$ grows linearly with strain as expected from the linearly increasing strain field imposed on the packing. Notably, the mean microrotation $\langle\theta_{i}^{m}\rangle$ is zero for the packing: that is, there is no preferred direction in which grain rotation _fluctuations_ occur. This null result is highly reproducible between different repeats of the experiments and consistent with earlier numerical simulations [40, 28, 41, 27, 37]. It is also noteworthy that the microrotations follow a nearly Gaussian distribution in all cases (see results in SI). However, the amplitude of the grain rotation fluctuations increases strongly with strain. Note that some such shear-induced rotational fluctuations have been observed in the experimental work of Matsushima et al. [10] on nonspherical particles.

Focusing further on the growth of $\sigma_{m}(\gamma)$, we see that its strong growth with $\gamma$ and its reproducibility among different initial configurations are also observed for different $\phi$ over the entire range of relevant densities and $\mu$, as shown in Fig. 2a-c. Up to a strain of 0.15, $\sigma_{m}$ can be well described by the empirical relation $\sigma_{m}=D\gamma^{n}+\sigma_{0}$, as shown by the good quality of the fits. It is tempting to interpret the prefactor $D$ as a diffusion constant, as done previously for rotations induced by thermal fluctuations [42, 43]. The power-law index $n$ indicates the (weakly) nonlinear strain dependence and $\sigma_{0}$ is a possible offset that is negligible for all experiments. We see that the strength of the fluctuations captured by $D(\phi)$ and $n(\phi)$ is very sensitive to the friction $\mu$, as shown in Fig. 2d,e. The friction dependence does, however, capture the mechanical performance of the packing as well: at large $\mu$, particle interactions associated with rotation are stronger even at smaller $\phi$, and this trend is observed in both $D(\phi)$ and $n(\phi)$. When considering $D$ as a diffusion constant, its thermal analogue would be given by the ratio of thermal fluctuations and viscous damping. Such competition can also be seen in the rotational diffusion: both $D$ and $n$ indicate that there are two mechanisms that play a role in the rotational diffusion, which is especially visible for $\mu_{m}$. Initially, $D(\phi)$ and $n(\phi)$ grow with $\phi$, indicating the enhanced particle interactions that give more fluctuations in the rotations. However, above a certain $\phi_{c}(\mu)$, $D$ decreases, and above a packing fraction $\phi\approx 0.80$ the parameters $D$ and $n$ tend towards a plateau, indicating that competing mechanisms emerge at high packing fractions, suppressing further growth of the rotational fluctuations.
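As an illustration of how such an empirical relation can be extracted, the short R sketch below fits $\sigma_{m}=D\gamma^{n}+\sigma_{0}$ with nonlinear least squares; the data generated here are synthetic stand-ins rather than the measurements, and the starting values are our own assumptions.

```r
# Minimal sketch: fitting sigma_m = D * gamma^n + sigma_0 with nonlinear least
# squares; the (gamma, sigma_m) values below are synthetic stand-ins.
set.seed(2)
dat <- data.frame(gamma = seq(0.005, 0.15, by = 0.005))            # imposed shear strain
dat$sigma_m <- 0.05 * dat$gamma^1.2 + rnorm(nrow(dat), 0, 2e-4)    # hypothetical data

fit <- nls(sigma_m ~ D * gamma^n + s0, data = dat,
           start = list(D = 0.1, n = 1, s0 = 0))
coef(fit)  # estimates of D, n and the (here negligible) offset sigma_0
```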
Steric hindrance does play a role for $\mu_{h}$, the gear-shaped particles, but less so for the much smoother $\mu_{l,m}$ particles. The exponent for the $\mu_{h}$ particles can be as high as 1.4, indicating superdiffusive behavior if we consider $\gamma$ as a time variable and $\sigma_{m}$ as a displacement fluctuation metric. Interestingly, the values of $n(\phi)$ become _independent_ of $\mu$ above a volume fraction of about 0.81, tending towards linear behavior at very high packing fraction. These trends are even more visible if we consider less strain; see the Supplementary Information.

Figure 3: (a) The average neighborhood variance $S_{n}(\gamma)$; different colors indicate different $\mu$; different hues represent different $\phi$; the color scheme applies to all panels. (b) $I$ as a function of $\gamma$. (c) Standard deviation $\sigma_{I}$ of $I_{i}$ as a function of $\gamma$. (d) $\mu_{l}$, $\phi=0.828$ local Moran’s I. (e) $\mu_{h}$, $\phi=0.807$ local Moran’s I.

## III Correlations in micro-rotations

The observed Gaussian nature of the particle rotation fluctuations belies the underlying correlations in particle motion that are expected to exist in dense amorphous packings where many grains are in contact. There are long-range correlations in particle rotations, as visible in our two-step approach to quantifying the spatial autocorrelation of rotations, which leads to a mathematically well-defined, system-averaged quantitative signal widely used for geographical data called Moran’s $I$ [16]. We first compute the particle-averaged neighborhood rotational variance $S_{n}$. For details, see the Methods section. A positive/negative value of $S_{n}$ means that, in general, a particle rotates in the same/opposite direction as its neighborhood. Fig. 3(a) shows the average neighborhood variance of materials with different $\mu$ and packing fraction $\phi$. The average neighborhood variance is clearly monotonically dependent on both $\mu$ and $\phi$ and grows with $\gamma$: the higher the friction or density, the lower the average neighborhood variance, which means a greater difference between a particle’s micro-rotation and its neighborhood’s. This anticorrelation makes mechanical sense: gear-like motion forces rotation in opposite directions in interlocking particles. A large absolute value of $S_{n}$ may, however, be caused either by (dis)similarity between neighborhood particle micro-rotations, or by a large variance in the micro-rotation. To focus only on the comparison of the dissimilarity among neighborhood particles’ micro-rotations across the packing, we have to normalize $S_{n}$ by the variance of the particle micro-rotation, $\sigma_{m}^{2}$. We showed the dynamics of $\sigma_{m}$ and its non-monotonic dependence on friction and packing fraction in Fig. 2. By computing $S_{n}/\sigma_{m}^{2}$, we arrive immediately at the system-wide spatial autocorrelation metric called Moran’s $I$. Generally, the micro-rotations of the grains in all the materials in these analyses are negatively autocorrelated: the grains rotate like a chain of gears to some extent. Figure 3(b) shows the trends of $I$ as the shear strain increases. The differences in behavior for $\mu_{l,m,h}$ are evident: low-friction particles have a weak spatial autocorrelation, whereas particles with a higher friction coefficient develop stronger autocorrelations, with $I$ decreasing to -0.3. The difference between packings with different $\phi$ is small but not insignificant.
In general, the anti-autocorrelations increase with larger packing fractions. Strikingly, the rotational correlations are also very strain sensitive, with 3% strain being enough to reveal significant differences between packings of different $\phi$ and $\mu$. We go one step further and use the normalized neighborhood variance to gain insight into the local mechanics of sheared amorphous packings. Spatial autocorrelations as captured by $I$ are not the same everywhere; in fact there are clusters of (anti)correlated rotations in $I_{i}$. We can quantify the local variability of these correlations by computing the standard deviation $\sigma_{I}$ of $I_{i}$. This metric captures the rotational “floppiness” of the packing: at large $\phi$, in a highly overconstrained system, interlocking grains must all have the same rotational behavior, so the variability of $I$ in the packing should be small. At smaller $\phi$, there are more ways to reach mechanical equilibrium, hence the variability among correlations should be higher. Similarly, $\sigma_{I}$ should express the phenomenon of shear jamming: at small strain, the shear jamming mechanism has not been activated yet, so $\sigma_{I}$ is small. As strain increases, the packing moves from partially to completely constrained and should thus achieve a small $\sigma_{I}$. Finally, the role of $\mu$ should also be non-linear: at small and large $\mu$, the rotational variability should be high as per the previous arguments, so $\sigma_{I}(\mu)$ should have an optimum. We indeed observe all these mechanically reasonable trends in $\sigma_{I}(\phi,\gamma,\mu)$. The standard deviation of $I_{i}$ is strain dependent, exhibiting a distinct peak in floppiness at about 3% strain. Note that at these strain levels, system-level pressure is undetectable, highlighting again the sensitive nature of rotations. The connection to the rotational diffusion is also still visible in the fluctuations of the anticorrelated micro-rotation: observe how for $\phi>0.80$, $\sigma_{I}$ is small for all $\mu$, precisely where the rotational diffusivity also becomes independent of $\mu$. Finally, we show two examples of the spatial distribution $I_{i}(x,y)$ for two situations, $\mu_{l}$, $\phi=0.828$ and $\mu_{h}$, $\phi=0.807$, in Fig. 3d,e. These examples clearly show clusters of isotropic and anisotropic shapes emerging along the boundaries and in the bulk of the packing. Spatial fluctuations can span up to ten particle diameters and can be string-like or globular, highlighting again the spatial anisotropy that can build up in the amorphous system (see Supplementary Information videos). While the complete spatial dynamics of the neighborhood rotation similarity is challenging to interpret, due to the dual and non-monotonic role of both friction and density, our results clearly show that particle rotation is an essential parameter that must be included in continuum modeling theories with non-local mechanical couplings inside sheared amorphous packings.

## IV Conclusions

We have shown that simple shear induces spatially correlated fluctuations in the rotational dynamics of round, frictional particles. Individual particle motion is diffusive, and this diffusive motion is $\mu$ and $\phi$ dependent, as one would expect based on the mechanical characteristics of the packing. The local neighborhood of particles shows, on average, anticorrelated motion, which reveals that two distinct mechanisms affect the mechanics of individual grains.
Rotational motion fluctuations indicate the state of the system early in the deformation regime, after only a few percent shear strain, even though the _average_ particle micro-rotation is zero. Our results indicate that rotational motion is a highly relevant field in the study of amorphous particulate materials, ranging from sands to frictional emulsions, colloids and even molecular glasses. Beyond materials analyses, the results have a broader relevance to spatial data science, particularly in reference to the “first law of Geography” [44], stating that nearby things are similar. The value of the widely used geographical spatial autocorrelation measure, Moran’s $I$, is negative for granular material systems, with a clear physical interpretation related to particle friction. This finding is in contrast with the large majority of spatial datasets coming from human-scale natural systems, which have positive spatial autocorrelation. Intriguingly, the role of absolute interparticle orientations has long been recognized in system mechanics: the bond angle is recognized as essential in constraint-counting approaches for glassy polymeric systems [45] and is also relevant for protein folding dynamics [46]. Not surprisingly, rotational dynamics has been measured indirectly on a system scale via dielectric spectroscopy [47], for example to probe glassy dynamics in the rotational degrees of freedom of non-spherically symmetric glass-forming molecules. Note that friction is not the only mechanism that can make the rotational degree of freedom relevant for packing dynamics. Rotations also play a role in particle packings that are composed of aspherical, adhesive or deformable particles, which covers many types of particulate materials, ranging from granular materials to colloids, proteins [48], emulsions and even metamaterials in which the node hinges are not ideal [26]. In particular, it is of interest to explore how energy is stored in sheared granular packings and how rotations and friction at contacts play a role in this. Our work thus suggests that rotational dynamics is a potentially unifying characteristic through which the often-suggested similarity among amorphous materials can be understood.

###### Acknowledgements.

We thank the organizers and participants of the Lorentz Center workshop “Granular Matter Across Scales” for fostering an environment where the seeds for this work were planted. We are grateful to the late Robert Behringer for always reminding us of the importance of grain rotations and friction. AM and NN are supported in part by the United States National Science Foundation grants CMMI-1727433 and EEC-1840432 (which also involves SS). YL is supported by the University of Minnesota Doctoral Dissertation Fellowship.

## V Methods

### V.1 Experimental Setup

We analyze a series of experiments that allow the rotation of every disk-shaped particle to be tracked in a shear cell containing $\sim$1000 particles, in which shear bands and other large-scale inhomogeneities have been completely eliminated, as reported elsewhere [13, 14]. Shear is applied quasi-statically from an isotropic, stress-free state and tracked during the initial shear transient up to a strain of 0.5. Previous experiments have described dilatancy and displacement dynamics in these packings [13, 14].
Within the scope of the experimentation in the current research, we study the effect of inter-particle friction, using granular assemblies with controlled variations of the friction coefficient, as well as the effect of different initial packing fractions on the response of the granular assembly. One set of particles was cut from photoelastic sheets as in previous experiments [13], having an inter-particle friction coefficient $\mu_{m}$ of approximately 0.7. After conducting experiments with this set, we wrapped these particles with Teflon tape. Dry Teflon-Teflon contacts have a friction coefficient of $\mu_{l}\sim$0.15 [15]. A third set of data was obtained with photoelastic disks cut with fine teeth on their circumference, so that particles interlock when they come into contact. Such a particle shape mimics an extremely large friction coefficient; we refer to these particles as $\mu_{h}$. The diameter ratio of big to small disks is 1.25:1, and the number ratio is roughly 1:3.3 (big to small) for each packing. Particles were first randomly placed in the shear cell and manually relaxed until no inter-particle contact force was visible by eye. Then, starting from either a parallelogram or a rectangle, the shear cell was deformed by strain steps of 0.0027. The system was then relaxed for 10 seconds, after which three kinds of pictures were taken: one with white light, one with polarized light, and one with UV light. These three pictures reveal particle positions, particle contact forces/pressure, and particle orientation, respectively. This process of shearing, relaxing and picture taking was repeated until a certain total shear strain was achieved. For each packing fraction and friction coefficient, we repeated the experiment five times, with the exception of the lowest-density $\mu_{l}$ runs. Note that the analysis of the images acquired during the experiment reveals that not all grains were detected in all frames, as some grains move out of or into the image boundaries from one frame to another. As a result, for the analysis performed in the current paper, only grains common to all frames were considered, and grains present in one frame but not detected in another were excluded. Moreover, the grains on the boundary were removed from the analysis.

### V.2 Rotations

The rigid body rotation between any two frames can be measured as half of the difference in the slope of straight lines fitted to the coordinates of the grain centroids in the two frames. We note that, in general, the relation between the measured change in slope and the rigid rotation is nonlinear, especially for finite deformation. However, for the shear strain range considered in the current analysis, a linear relation is a good approximation.

### V.3 Neighborhood Variance

The neighborhood variance of each particle is the product of the deviation of its micro-rotation from the mean micro-rotation and the mean deviation of the micro-rotations of its Voronoi neighborhood from the mean micro-rotation. We compute the “average neighborhood variance” by

$S_{n}=\frac{Z^{T}WZ}{N-1},\text{ and }Z=\Theta^{m}-\langle\theta_{i}^{m}\rangle,$ (1)

where $\Theta^{m}$ is a vector of the particles’ micro-rotations whose $i$th element is $\theta_{i}^{m}$, $\langle\theta_{i}^{m}\rangle$ is the mean of all particles’ micro-rotations, and $N$ is the number of particles. $W$ is the row-wise normalized spatial weight matrix.
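The sketch below, in R, is a minimal illustration of Eq. (1) and of the global and local Moran’s $I$ defined in the following two subsections. The micro-rotation vector and the adjacency matrix used here are synthetic stand-ins (in the experiments the neighborhoods come from a distance-pruned Delaunay triangulation), and the object names are hypothetical.

```r
# Minimal sketch of Eq. (1) and of global/local Moran's I (Eqs. (2)-(3) below),
# using a synthetic micro-rotation vector and a synthetic adjacency matrix.
set.seed(3)
N       <- 200
theta_m <- rnorm(N, 0, 0.01)          # hypothetical micro-rotations (rad)

# Hypothetical symmetric adjacency matrix (1 = neighbours). A ring is added so
# that every particle has at least one neighbour; in the experiments the
# neighbours come from a distance-pruned Delaunay triangulation instead.
A <- matrix(rbinom(N * N, 1, 0.03), N, N)
A[cbind(1:N, c(2:N, 1))] <- 1
A <- pmax(A, t(A)); diag(A) <- 0

W <- A / rowSums(A)                   # row-wise normalised spatial weight matrix
Z <- theta_m - mean(theta_m)          # deviations from the mean micro-rotation

S_n     <- as.numeric(t(Z) %*% W %*% Z) / (N - 1)                  # Eq. (1)
I_glob  <- as.numeric(t(Z) %*% W %*% Z) / as.numeric(t(Z) %*% Z)   # Eq. (2)
I_local <- as.numeric(Z * (W %*% Z)) / (sum(Z^2) / (N - 1))        # Eq. (3)
sigma_I <- sd(I_local)                # variability of the local autocorrelation
```

Recomputing these quantities at each strain step then yields curves of the type shown in Fig. 3(a)-(c).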
A commonly used spatial weight matrix is the adjacency matrix, whose element at the $i$th row and $j$th column indicates whether the $i$th particle is adjacent to the $j$th particle: if the particles are adjacent, the element is 1; otherwise, the element is 0. A row-wise normalized spatial weight matrix is obtained by dividing each row of a spatial weight matrix by its row sum. We conducted analyses of the materials with different surfaces and densities: $\mu_{l}$: $\phi=0.783,0.810,0.828$; $\mu_{m}$: $\phi=0.692,0.758,0.816$; $\mu_{h}$: $\phi=0.713,0.744,0.807$. The observation for each grain was its micro-rotation. We constructed Delaunay triangles to link grains with their neighbor grains, and removed any link whose length was greater than the sum of the radii of the two grains connected by the link.

### V.4 Global Moran’s $I$

Spatial autocorrelation is a measure of the correlation between spatially proximate observations. Positive spatial autocorrelation is the tendency for spatially proximate observations to be similar, while negative spatial autocorrelation means spatially proximate observations tend to be different. Global Moran’s $I$ is defined as follows:

$I=\frac{Z^{T}WZ}{Z^{T}Z}=S_{n}/\sigma_{m}^{2},\text{ and }Z=\Theta^{m}-\langle\theta_{i}^{m}\rangle,$ (2)

where $\Theta^{m}$ is a vector of the particles’ micro-rotations whose $i$th element is $\theta_{i}^{m}$, and $\langle\theta_{i}^{m}\rangle$ is the mean of all particles’ micro-rotations, which is negligible. $W$ is the row-wise normalized spatial weight matrix. This metric measures the average spatial autocorrelation of the entire dataset. The expected value of global Moran’s $I$ under the null hypothesis of no spatial autocorrelation is $E(I)=-\frac{1}{N-1}$, where $N$ is the number of observations. In other words, the more observations there are, the closer the expectation is to 0. Values of $I$ usually range from -1 to +1. Values significantly below $E(I)$ indicate negative spatial autocorrelation and values significantly above $E(I)$ indicate positive spatial autocorrelation.

### V.5 Local Moran’s $I$

There are cases where there is no global trend of spatial autocorrelation, but there are local communities where the spatial autocorrelation is strong. Local Moran’s $I$ represents the spatial autocorrelation within the local neighborhood of each observation, and is defined as follows:

$I_{i}=\frac{z_{i}W_{i:}Z}{Z^{T}Z/(N-1)},\text{ and }z_{i}=\theta_{i}^{m}-\langle\theta_{i}^{m}\rangle,$ (3)

where $\theta_{i}^{m}$ is the $i$th particle’s micro-rotation. A positive value of $I_{i}$ means that within the $i$th observation’s neighborhood the observations are similar, while a negative value means the observations are different. In order to analyze whether the local communities in a dataset are homogeneous with regard to spatial autocorrelation, we compute the standard deviation of local Moran’s $I$, denoted $\sigma_{I}$. The greater this standard deviation, the greater the differences between local communities.

## References

* Lade [1977] P. V. Lade, Elasto-plastic stress-strain theory for cohesionless soil with curved yield surfaces, International Journal of Solids and Structures 13, 1019 (1977). * Viasnoff and Lequeux [2002] V. Viasnoff and F. Lequeux, Rejuvenation and overaging in a colloidal glass under shear, Phys. Rev. Lett. 89, 065701 (2002). * Lee _et al._ [2009] H.-N. Lee, K. Paeng, S. F. Swallen, and M. D.
Ediger, Direct measurement of molecular mobility in actively deformed polymer glasses, Science 323, 231 (2009), https://science.sciencemag.org/content/323/5911/231.full.pdf . * Tighe [2014] B. P. Tighe, Shear dilatancy in marginal solids, Granular Matter 16, 203 (2014). * Berthier _et al._ [2011] L. Berthier, G. Biroli, J.-P. Bouchaud, L. Cipelletti, and W. van Saarloos, _Dynamical heterogeneities in glasses, colloids, and granular media_ , Vol. 150 (OUP Oxford, 2011). * Cosserat and Cosserat [1909] E. Cosserat and F. Cosserat, Theory of deformable bodies (translated by dh delphenich), Paris: Sorbonne: Herman and Sons (1909). * Schwartz _et al._ [1984] L. M. Schwartz, D. L. Johnson, and S. Feng, Vibrational modes in granular materials, Phys. Rev. Lett. 52, 831 (1984). * Stillinger and Hodgdon [1994] F. H. Stillinger and J. A. Hodgdon, Translation-rotation paradox for diffusion in fragile glass-forming liquids, Phys. Rev. E 50, 2064 (1994). * Edmond _et al._ [2012] K. V. Edmond, M. T. Elsesser, G. L. Hunter, D. J. Pine, and E. R. Weeks, Decoupling of rotational and translational diffusion in supercooled colloidal fluids, Proceedings of the National Academy of Sciences 109, 17891 (2012). * Matsushima _et al._ [2003] T. Matsushima, H. Saomoto, Y. Tsubokawa, and Y. Yamada, Grain rotation versus continuum rotation during shear deformation of granular assembly, Soils and Foundations 43, 95 (2003). * Poorsolhjouy and Misra [2019] P. Poorsolhjouy and A. Misra, Granular micromechanics based continuum model for grain rotations and grain rotation waves, J. Mech. Phys. Solids 129, 244 (2019). * Merkel _et al._ [2011] A. Merkel, V. Tournat, and V. Gusev, Experimental evidence of rotational elastic waves in granular phononic crystals, Phys. Rev. Lett. 107, 225502 (2011). * Ren _et al._ [2013] J. Ren, J. A. Dijksman, and R. P. Behringer, Reynolds pressure and relaxation in a sheared granular system, Phys. Rev. Lett. 110, 018302 (2013). * Wang _et al._ [2018] D. Wang, J. Ren, J. A. Dijksman, H. Zheng, and R. P. Behringer, Microscopic origins of shear jamming for 2d frictional grains, Phys. Rev. Lett. 120, 208004 (2018). * Wang _et al._ [2020] D. Wang, J. A. Dijksman, J. Barés, J. Ren, and H. Zheng, Sheared amorphous packings display two separate particle transport mechanisms, Phys. Rev. Lett. 125, 138001 (2020). * Moran [1950] P. A. Moran, Notes on continuous stochastic phenomena, Biometrika 37, 17 (1950). * Hébraud and Lequeux [1998] P. Hébraud and F. Lequeux, Mode-coupling theory for the pasty rheology of soft glassy materials, Phys. Rev. Lett. 81, 2934 (1998). * Bocquet _et al._ [2009] L. Bocquet, A. Colin, and A. Ajdari, Kinetic theory of plastic flow in soft glassy materials, Phys. Rev. Lett. 103, 036001 (2009). * Henann and Kamrin [2013] D. L. Henann and K. Kamrin, A predictive, size-dependent continuum model for dense granular flows, Proceedings of the National Academy of Sciences 110, 6730 (2013). * Nejadsadeghi and Misra [2020a] N. Nejadsadeghi and A. Misra, Role of higher-order inertia in modulating elastic wave dispersion in materials with granular microstructure, International Journal of Mechanical Sciences 185, 105867 (2020a). * Wei _et al._ [2020] L.-S. Wei, Y.-Z. Wang, and Y.-S. Wang, Nonreciprocal transmission of nonlinear elastic wave metamaterials by incremental harmonic balance method, International Journal of Mechanical Sciences 173, 105433 (2020). * Mouraille and Luding [2008] O. Mouraille and S. 
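Local Moran's $I$ from Eq. (3), and the heterogeneity measure $\sigma_{I}$, could be evaluated in the same spirit; again this is a hedged sketch with assumed variable names rather than the original analysis code.

```python
import numpy as np

def local_morans_I(theta_m, W):
    """Local Moran's I for every particle, Eq. (3)."""
    N = len(theta_m)
    z = theta_m - theta_m.mean()
    denom = (z @ z) / (N - 1)
    # Row i of W selects particle i's neighbors: I_i = z_i * (W_{i:} . Z) / denom.
    return z * (W @ z) / denom

def local_I_std(theta_m, W):
    """Standard deviation sigma_I of the local Moran's I values."""
    return np.std(local_morans_I(theta_m, W))
```

Positive entries of the returned array flag grains whose micro-rotation resembles that of their neighbors, matching the interpretation of $I_{i}$ given above, and the spread of these values across grains is summarized by $\sigma_{I}$.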